About This Site

Natural Numbers is my personal exploration into number theory. It is autodidactic, and it is iconoclastic. It is factual, and it is speculative. It is not recommended for footnotes!
- Michael M. Ross

A proximate-prime polynomial
is simply a quadratic equation - a finite polynomial of
the second degree - that is derived from four successive
(proximate, or neighboring) primes. Proximate-prime
polynomials are interesting because they exhibit much
greater prime densities than other polynomials.

When you graph
primes against an X-axis that treats the expanding
interval between successive perfect squares as a constant
unit subdivided into equal parts, you produce a
distinctive wave form for primes and prime factors.
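As a sketch of that normalization - assuming each integer n in [k^2, (k+1)^2) is mapped to k plus the fraction of the interval it covers (the function name is my own, not from the site's VBA) - a few lines of Python:

```python
import math

def quadratic_x(n):
    """Map n onto an axis where each perfect-square interval is one unit.

    n lies in [k^2, (k+1)^2) with k = isqrt(n); the interval length
    2k + 1 is treated as a constant unit subdivided into equal parts.
    """
    k = math.isqrt(n)
    return k + (n - k * k) / (2 * k + 1)
```

Plotting quadratic_x(p) for primes p gives every interval the same width on the page, which is what makes the wave form visible.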

For every odd composite number, and every
multiple of 4, that is not itself a perfect square there
exists a pair of nonconsecutive perfect squares whose
difference is equal to the composite. (Composites of the
form 4k + 2 are the exception: a difference of two squares
is never 2 modulo 4.) Even before we get to the subject of
factorization, the consequences of this observation are
fascinating and far-reaching.
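For odd composites the pair is easy to construct: factor c = d*e with both factors odd and take a = (d+e)/2, b = (e-d)/2, so that a - b = d is at least 3. This sketch is my own illustration, not code from the site:

```python
import math

def square_pair(c):
    """For an odd composite c, return (a, b) with a^2 - b^2 == c and
    a, b nonconsecutive (a - b >= 2); None if c is prime."""
    for d in range(3, math.isqrt(c) + 1, 2):
        if c % d == 0:
            e = c // d
            return (d + e) // 2, (e - d) // 2   # a - b == d >= 3
    return None
```

For example, 91 = 7*13 gives (10, 3): 100 - 9 = 91.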

It began with an
exploration of biquadratic paired primes: 2
primes separated by the equivalent of exactly 2
quadratic intervals.... Then the investigation took the
logical next step by asking the question: Are there
prime pairs that are separated by other, greater
multiples of the quadratic interval? And if there are,
what are the frequency characteristics by interval size
and perfect square offset? The results are in, with
charts, an Excel visualization, and masses of
half-digested data...!

Find examples throughout this site that demonstrate using
VBA code with worksheets and graphing - including
generating primes, perfect squares, and composites, doing
modular arithmetic, calculating GCDs, and more....

March 23, 2015

Is the Prime Count of x to 2x a Lower Bound for Prime Count x^{2} to (x+1)^{2}?

This graph plots the deviation of the half-interval prime count by projecting the true prime count for every perfect square interval as a constant value (of 1).

Nearly.

Given any even span of integers 2x, the number of primes greater than half the span (between x and 2x) is nearly equivalent to the prime count between the square numbers x^{2} and (x+1)^{2}, for which this span approximates the interval.

For the first 20,000 perfect square intervals, the average value of the "half-interval count" is 0.952 of the true prime count. The half-interval count is just under the true prime count for 97.6% of intervals, is the same as it for 0.4%, and is just above it for 2.0%. It is also converging. Download data (784KB)

Example: If 20 is the span, then 11, 13, 17, and 19 are the four primes greater than half. The span divided by 2 and squared gives the first square - (20 / 2)^{2} = 100 - for which this span is the quadratic interval. There are five primes between 100 and 121.
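A sketch of this comparison in Python (my own illustration of the two counts described above, using simple trial division):

```python
def is_prime(n):
    """Simple trial-division primality test."""
    if n < 2:
        return False
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return False
    return True

def half_interval_count(x):
    """Primes strictly between x and 2x (the upper half of the span)."""
    return sum(1 for n in range(x + 1, 2 * x) if is_prime(n))

def true_count(x):
    """Primes strictly between x^2 and (x+1)^2."""
    return sum(1 for n in range(x * x + 1, (x + 1) ** 2) if is_prime(n))
```

For the example above, half_interval_count(10) is 4 (11, 13, 17, 19) and true_count(10) is 5 (101, 103, 107, 109, 113).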

There's a small cognitive leap you need to make in order to understand the significance of this near equivalency. The primes that are greater than x and less than 2x are also prime factors within the span's corresponding interval between two perfect squares. But these primes cannot be the smallest prime factors in the composites of x^{2} to (x+1)^{2}. Smallest is highly significant; you will have to read the explanation below to understand why.

If the half-interval count is a watertight lower bound above a certain magnitude - as it appears to be - that fact leads directly to proving Legendre's Conjecture:

Given the Bertrand–Chebyshev theorem (n < p < 2n), the correlation of the prime counts for n to 2n and n^{2} to (n+1)^{2} proves "There is a prime number between n^{2} and (n+1)^{2} for every positive integer n."

I have two small programs: one illustrates how it works, the other illustrates how well it works. (You can graph the CSV data with Excel.)

An interesting property of even perfect squares minus 1 (which are always composite) is the triviality of their smallest prime factor unless they are twin-prime composites. This makes it extremely fast to factor them and easy to determine the instances of twin primes (simply by elimination). The rule is that the smallest prime factor of a non-twin-prime X^{2}-1 composite cannot be greater than the square root of its square root (that is, its fourth root) - and usually much smaller. If such a factor is not found, the composite must be the product of twin primes. Thus the largest of these non-twin-prime factors less than 10^{12} is 991 for 999836006723.

Here is the complete list of the odd X^{2}-1 composites less than 10^{12} with their smallest prime factor (including the twin primes):
Zipped-up CSV file(2.6MB).

I've written a lot about twin primes and why the conjecture can be considered already proved by the Infinitude of Primes and the Fundamental Theorem of Arithmetic. This Excel program gets its speed by exploiting the universal property of all twin primes:

If X^{2} - 1 is semiprime then X - 1 and X + 1 are its prime factors.

In other words: Don't look for primes! Look for one tiny class of semiprimes. The product of all twin primes is an even perfect square minus 1. And since we're searching for semiprimes, a primality test is not required. Any factor (prime or not) less than the square root of X^{2} - 1 eliminates the composite as a candidate. Furthermore, these factors are usually trivial - that is, they're found long before X. The complete predictability of the product location and the elimination of a primality test account for the polynomial speed of this algorithm.
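A minimal Python sketch of this search (the site's implementation is in Excel VBA; the function name is mine). For even x >= 4, if x - 1 or x + 1 were composite, one of them would have a prime factor no larger than sqrt(x+1), roughly the fourth root of x^2 - 1, so only tiny trial divisors are ever needed:

```python
import math

def twin_pair_at(x):
    """Return (x-1, x+1) if x^2 - 1 is the twin-prime semiprime
    (x-1)(x+1), else None.  Assumes even x >= 4."""
    n = x * x - 1
    for f in range(2, math.isqrt(x + 1) + 1):
        if n % f == 0:
            return None          # small factor found: not that semiprime
    return (x - 1, x + 1)        # must be (x-1)(x+1) with both prime
```

twin_pair_at(6) returns (5, 7); twin_pair_at(8) returns None because 63 = 7 * 9 has the small factor 3.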

Well, almost perfected.... My fractional-elimination algorithm produces an arrow that pierces the heart of the primes for the intervals I've graphed. (Straying to the north for intervals a few magnitudes larger is a possibility based on some limited sampling. However, I'll let the graph speak for itself.)

So what is the extra step that makes this work so beautifully?

The fractional prime-counting algorithm now takes into account the "unpredictable", or more correctly probabilistic, occurrence of prime factors greater than half the interval (which I noted in my previous post). These factors must appear once, but can occur more than once. The new step in the algorithm is to apply simple logic to this, by extending fractional elimination with fractional division.

Taking the case of the interval {81,100}, whose length is 100 - 81 = 19, half the interval is 9.5. The primes greater than half are 11, 13, 17, and 19. It's easier to think backward: Since the interval is 19, 19 can only be a prime factor once. Now consider 17: It has a 17/19th possibility of occurring more than once. Knowing nothing about the factors in this prime interval, if 17 were a factor of the 1st or 2nd composite in the interval, it would occur again in the 18th or 19th composite. With this understood, we can now express a weighted, hypothetical version of 17, which is 17 * 17/19 = 15.21. And so on....

1. Take the difference of two consecutive perfect squares.
2. Divide the difference by 2.
3. Divide this result by 3 and subtract the quotient from the result.
4. In Step 3, if the prime divisor is greater than half the interval:
a. Divide the prime divisor by the interval.
b. Multiply the prime divisor by this fraction.
c. Divide the running result by this weighted divisor and subtract the quotient (in place of the Step 3 subtraction).
5. Repeat Steps 3 and 4 for every prime less than or equal to the difference.
6. Compare the final result with the actual prime count in the interval.
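A Python sketch of Steps 1 through 6 (my own rendering of the recipe above; the Step 4 weighting follows the 17 * 17/19 example):

```python
def sieve_primes(n):
    """All primes up to and including n (sieve of Eratosthenes)."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            flags[i * i::i] = [False] * len(flags[i * i::i])
    return [i for i, f in enumerate(flags) if f]

def weighted_fractional_count(x):
    """Estimate the prime count in the interval x^2 .. (x+1)^2."""
    d = 2 * x + 1                      # Step 1: difference of the squares
    result = d / 2                     # Step 2: halve it (eliminates evens)
    for p in sieve_primes(d)[1:]:      # Steps 3 and 5: each odd prime <= d
        if p > d / 2:                  # Step 4: weight the large primes
            p = p * (p / d)            #   e.g. 17 becomes 17 * 17/19
        result -= result / p           # subtract the fractional elimination
    return result                      # Step 6: compare with the true count
```

For the interval {81,100} this gives about 2.88, which rounds to the true count of 3.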

Repeating this for every interval produces a picture-perfect primes-per-interval curve.

Unlike the "asymptotic law of the distribution of prime numbers", the curve that I have graphed using the method of fractional elimination of prime factors (see how the formula works) is generated by a calculated value for every quadratic interval. The importance of this is that, unlike the asymptotic curve, this curve is the product of a discrete function applied to every perfect square interval, and it shows the growth in the prime count is due to the rules of arithmetic that apply to every interval (independently of every other interval).

The line is almost a mirror-image of x/log x. It's actually slightly more accurate than the asymptotic curve for this data range, in that the real prime count crosses the "fractional curve" with higher frequency. Why is it not an exact representation of the true prime-counting growth rate? The explanation is that the prime factors greater than the interval's midpoint can be represented once or more than once depending on how the cards fall. Substitute composite for card, and you will understand it's a stochastic process without applying a lot of computational effort to it. (That would miss the point anyway.) For the baby case of interval {81,100}, for example, 11 occurs twice, but 13 and 17 occur only once. Hypothetically, either 13 or 17 could occur twice in the interval because they are smaller than it. Fractional elimination "assumes" that every integer appears "as a fraction" n divided by the remaining interval. Since these are fractions and not whole numbers, the result cannot be perfect.

There is a simple geometric principle that demonstrates primality and symmetry are equivalent properties in every perfect square interval. Specifically, it relates to the lines of symmetry of quadrilateral shapes.

Imagine you're making squares with counters or Lego pieces or whatever. Begin with a 2-by-2 square. You're allowed to put down only one piece at a time by adding each one to the existing shape. So your first task is to get from 2-by-2 to 3-by-3. Can you do this while keeping the shape a rectangle? You can change the shape with each new piece you put down but only to another rectangular shape. (Don't get creative: You must keep the distance between each adjacent piece the same.) Let's just try it:

It becomes clear right away that with certain odd numbers you cannot keep the shape rectangular. These are prime numbers. The rule is simple: You cannot add counters to make one perfect square into the next bigger perfect square - 2*2 to 3*3 or 10*10 to 11*11 - and maintain rectangular symmetry with every addition. Specifically, the squares have two lines of symmetry. When you add "composite counters", the rectangle has one line of symmetry. When you add "prime counters", you break the rectangular symmetry.

Exactly what does this mean for the distribution of primes in perfect square intervals?

For every perfect square interval there is one pronic number (also called a rectangular or oblong number). Pronic numbers have a couple of interesting properties, but the one we're interested in here is that they lie at the midpoint of every perfect square interval: x(x+1). The sequence begins 2, 6, 12, 20, 30, 42. They are always even (and, therefore, apart from 2, always composite).

(Just to be clear, the pronics are not the only numbers that I'm loosely calling rectangular. However, they are "special". They occur only once in every interval, and they are always the same "shape".)

Now visualize the pronics: A rectangle with dimensions determined by two consecutive integers (for example, 2*3 or 10*11). These dimensions are the square roots of the squares that they "divide". It is morphologically impossible to change a square to a pronic in increments of one without an intermediate asymmetry. Between x^{2} and x(x+1) there must be one asymmetry - or prime number. Between x(x+1) and the next x^{2} there must be one asymmetry - or prime number. Therefore, there must be at least two primes per perfect square interval.
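The claim - at least one prime on each side of the pronic - is easy to spot-check. A small Python sketch of mine, verifying it for the first couple of hundred intervals:

```python
def is_prime(n):
    if n < 2:
        return False
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return False
    return True

def has_prime_between(lo, hi):
    """True if some prime lies strictly between lo and hi."""
    return any(is_prime(n) for n in range(lo + 1, hi))

# One prime between x^2 and the pronic x(x+1), and one between
# the pronic and (x+1)^2, for every x checked:
for x in range(2, 200):
    assert has_prime_between(x * x, x * (x + 1))
    assert has_prime_between(x * (x + 1), (x + 1) ** 2)
```

(Checking to infinity is, of course, the hard part.)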

We did not have to wait 220 years for this truth, but essentially that's what the story is. It does not rest upon the Prime Number Theorem; it does not rest upon the Riemann Hypothesis. It rests upon geometry, although whether or not a geometer can prove that is another question.

The Ross Conjecture: "It is impossible to change a square of countable units to a pronic of countable units in increments of one without an intermediate asymmetry."

A Formula for Counting Primes in Quadratic Intervals

Yes. Why not?

1. Take the difference of two consecutive perfect squares.
2. Divide the difference by 2.
3. Divide this result by 3 and subtract the quotient from the result.
4. Repeat Step 3 for every prime less than or equal to the difference.
5. Compare the final result with the actual prime count in the interval.

There are three primes between 81 and 100: 83, 89, 97.
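The steps above can be sketched in a few lines of Python (my own illustration):

```python
def fractional_count(x):
    """Steps 1-5 above for the interval x^2 .. (x+1)^2."""
    d = 2 * x + 1                   # Step 1: difference of the squares
    result = d / 2                  # Step 2: divide the difference by 2
    for p in range(3, d + 1, 2):    # Steps 3-4: every odd prime <= d
        if all(p % q for q in range(3, int(p ** 0.5) + 1, 2)):
            result -= result / p    # subtract the fraction 1/p
    return result
```

fractional_count(9) comes to about 3.25 for the interval {81,100}, matching the three primes 83, 89, 97.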

Why does this work?

Simple:

a. The frequency of every prime factor can be considered a fraction.
b. The frequency of 2 is 1/2, the frequency of 3 is 1/3, the frequency of 5 is 1/5, and so on.
c. Repeated subtraction of these fractional amounts eliminates all the composites.
d. The remainder must be an estimation of the integers that are prime.

I've created a small program (for Windows) to demonstrate the algorithm.

If it can be shown that this method of elimination is valid, it can be used toward a proof of Legendre's Conjecture.

For example, it suggests a proof by contradiction as follows:

a. Apply this formula to the next interval, {100,121}, which contains five primes. The formula produces a number that rounds to 3.
b. Suppose four or five primes of this interval were to vanish, and you were to apply the formula to the interval {100,121} again.
c. Again, the formula eliminates all the composites of the interval with prime factors less than 21.
d. Thus one or two integers (4 or 5 minus 3) would remain in the interval {100,121} that are neither composite nor prime - a contradiction.

Four green bottles hanging on the wall,
Four green bottles hanging on the wall,
And if one green bottle should accidentally fall,
There'd be three green bottles hanging on the wall.

Landau's problems are four conjectures of number theory that were stated by Edmund Landau to be "unattackable at the present state of science" 102 years ago.

One thing can be said for certain about Landau's problems: What a sad state of affairs! What other scientific discipline would tolerate the humiliation of such a situation for 102 years? The working hypothesis must be that all four are true, based upon all the available evidence. Yet for number theorists, a hypothesis is no more than a hunch. These four conjectures are in the deadbeat category of unproved, and they could remain there indefinitely, employing many able mathematicians.

Why? Because academic mathematicians are concerned with infinity, and a proof must be deterministically true to infinity. Such proofs are extremely elusive, if not impossible, for any conjecture that depends on prime number distribution that is more strictly defined than asymptotically. Because academic mathematicians do not have deterministic knowledge about prime numbers, the Prime Number Theorem, also called the "asymptotic law of the distribution of prime numbers" is the best they can do: a trend that's proven but still probabilistic - and, honestly, not that interesting. (Even the holy grail of deterministically proving the Riemann Hypothesis will only refine this trend. Despite the enormous amount of attention the RH gets, it's essentially geeky. It will not provide a deterministic solution to the prediction of prime numbers. It will simply refine the PNT.)

Yet we know, from very first counting principles, that every prime number is accounted for merely by the rules of arithmetic. It's only that mathematicians will never be granted the computational power to apply those rules of arithmetic to infinity - or even close! Do you remember that one about the permutations of a standard deck of playing cards? The factorial of 52. If we consider each playing card to be a prime number, those primes would account for 52! composites. That is 80658175170943878571660636856403766975289505440883277824000000000000 different composites with distinct factors. Imagine counting them. (By comparison, the age of the universe in seconds is about 432043200000000000.) Well, you get the idea.

However, what really makes the Landau problems problematic is not the limitations of computational power, but the limitations of number theory. Number theory, or higher arithmetic, is the study of integers. It's a collection of "magic tricks" that enable mathematicians to manipulate and understand big numbers more easily. However, there appears to be one thing that's missing from that box of tricks: A means to relate the quadratic growth of square numbers with the asymptotic growth of prime numbers.

My thumbnail thesis is that starting about 150 years ago, academic mathematics began to lose touch with the bedrock principles of the Infinitude of Primes and the Fundamental Theorem of Arithmetic out of lack of things to do. About the same time, some overlooked classical problems also reemerged, such as the twin-prime observation. Because of their late arrival, the Landau problems were caught between Euler's ordinal universe and Cantor's cardinal universe. Then add to that the parallel universes of complex numbers and planes, combinatorics and topologies, probability and statistics, computer logic and algebra - the whole panoply of the mathematical sciences.

I suggest that the Landau problems are really one problem. The inability to call them proven with the current standard of academic rigor - despite their obvious, overwhelming, likelihood of being true - is because of the following:

1. They've stood for over a century, so it's assumed they must be intractable.
2. They are not particularly attractive subjects for academic mathematicians (particularly if sticking one's neck out can result in ridicule).
3. They are not accessible to the standard alchemy of number theory.

I add to these three two other reasons of the cosmic joke variety:

4. One of these is quite possibly provable by arithmetic or algebra without needing any advanced concepts or techniques. *
5. One of these does not need to be proved true. (In fact, it needs to be proved false.) *

This has created a logjam. Things that are easily observable - and for which a possibility of error approaches the infinitesimally tiny - are ignored. And yet, if you step back it's quite silly to say that you cannot admit something as true until it is proved that not a single exception to it will be found until infinity. That's an absurdist point of view if you think about it. Human beings are heuristic beings, and even mathematicians are not permitted firsthand knowledge of infinity. We have to imagine it.

It seems to me that some, not all, mathematicians are not pattern-seekers but think atomistically and linearly. That's fine - if there is authenticity (and even originality)! For me, there are more interesting ways to understand the natural numbers than the classical model in which they march, like little tin soldiers, to infinity on a line. My mental model is cyclicality around a geometric backbone.

I subscribe to the philosophy of holism (from the Greek holos). My only faith is that natural systems taken as a whole have properties that cannot be understood only in terms of their component parts. Thus, "The whole exists prior to and is more elementary than its parts."

* I refer to one and the same problem: Legendre's conjecture. It is clearly and demonstrably true - and has been since the time Legendre and Euler lived.

Wikipedia states, matter-of-factly without irony and without attribution: "a counterexample near 10^18 would require a prime gap fifty million times the size of the average gap."

Such a counterexample will never be found. If it were found, the Prime Number Theorem would, ipso facto, be proved false.

In reality, the number of primes between perfect squares increases indefinitely and absolutely with minor perturbations that never exceed a known error limit because the quadratic growth of square numbers "more than negates" the decline in the density of primes. And I will explain why the PNT may be admitted as proof of this.

Because the three other Landau problems are closely interconnected with this one, if this one falls - that is, is stamped with a credible seal of a proof - the other, harder ones, become less intimidating and more vulnerable to attack (academic math lingo for being provable).

A common response a mathematician will make when presented with the facts of Legendre's conjecture is something like this: The PNT cannot be used to prove Legendre's conjecture because the PNT is an asymptotic law that's a limit applied to all the numbers less than a particular given value of N and not to any specific N^{2} and (N+1)^{2}.

I'm not buying this, and I think it's fairly easy to show that this reflexive notion is false. However, regardless, I will not stay up at night worrying about the theoretical possibility that the vast sea of the asymptotic law of prime number distribution will be divided, in some miraculous fashion, to permit a square gap of a vast magnitude to be prime-free. ("It just ain't gonna happen.")

Consider the possibility - unlikely as it is - that permitting a proof without perfect precision might actually matter to some other branch of science.

This is, most likely, the weakest argument you will find on this whole page because we all know that science will use math that cannot be proved, when it needs to, with a working assumption that it is true. In fact, the natural sciences and even applied mathematics thrive and prosper on the back of uncertainty. Consider quantum physics! A much stronger argument is that no mathematical proof can be a philosophical proof because of at least three reasons: the theory of supersets, the theory of incompleteness, and the theory of complexity. Since philosophy is not the subject of this article, I will not explain why I choose these three in particular, but I'm not being very original.

I recognize the joy of an intellectual pursuit. However, the arcaneness - even alchemy - of the science that is being employed to solve the Landau problems - in particular, the daisy-chaining of obscure dependent theorems, theories, and other conjectures - can never produce an elegant proof that can be appreciated by the rest of us. And that's a sad state of affairs because even the most esoteric physics is explicable conceptually to the averagely intelligent and curious. (I'm including general and special relativity, quantum mechanics, the big bang and inflation - but maybe not the Higgs boson :( - in this category.) Even that's not to say that obscurity is necessarily an impediment to popularity: Consider the proof of Fermat's Last Theorem.

"There is a prime number between n^{2} and (n+1)^{2} for every positive integer n."

I will double up on this conjecture with my own:

"Of the list of composite numbers between any interval n^{2} and (n+1)^{2}, there is at least one semiprime 2y and one semiprime 3z."

(I could go further and say the ratio of the real prime count to the quadratic interval is nearly linear, but I'm getting ahead of myself.)

Let's step back...

The Prime Number Theorem, without any of its elaborations, states: "The number of primes not exceeding x is asymptotic to x/log x." This answers the question: How many primes are there less than the number x? The answer is: Π(x) ~ x/log x, approximately. There is a reason that it's called the prime-counting function - that's how it was originally conceived (see the historical note).

What we care about with respect to Legendre's conjecture is not this overall trend, but the prime count, "projected" and real, between every two consecutive perfect squares - and, all-importantly, the proportional deviation between the projected and real counts as the number line progresses.

This question is not secondary to the PNT: It is actually, to my mind, the more important question. This is where I can happily state - without having the faintest knowledge of the existence of the conjecture we are discussing - that I asked and answered this question to my own satisfaction six or seven years ago, writing about it here in complete ignorance of academic number theory - like many other topics you will find on these pages...! A program and results are here: Counting primes by quadratic interval

Let's break the monotony of this monologue with some light refreshment:

Contrast the real prime count with the "PNT-predicted" count. * This is not the graph you usually see about the PNT, which shows a step-like ascending pattern. These are plots of the PNT value for each interval (which creates a trend line) and the real count (which produces a not-unexpected "wave" pattern due to noise in the error term). The real prime count exceeds the PNT limit in "most" intervals, and where it does not it falls short by a "marginal amount". This is the PNT error for each interval.

Left: The real prime count per quadratic interval versus the "asymptotic line".
Right: The deviation between the real and log-derived count for each interval.

* I recognize that the PNT does not actually predict a count for discrete intervals because it only defines a limit (a log-derived rate of growth). Nevertheless there is a formula that applies it to quadratic intervals: ((n+1)^{2}/log((n+1)^{2})). Furthermore, it is graph-able as a line that passes through those intervals, and, empirically, it gets within shooting distance of the discrete real data.

The PNT is a blunt instrument - and I don't like it very much - but it has its uses, whether intended for this purpose or not.

I pause to note here the historical and ironical fact that Legendre, himself, was the first person to conjecture the size of the prime-counting function, in 1798 to be exact. (There was a revolution on, but he found time.) To quote from the Prime Pages: "The constant 1.08366 was based on his limited table for values of Π(x) (which only went to x = 400,000). In the long run 1 is a better choice..." (This quote is suspiciously inauthentic though.)

(FYI I know that you know that the "pi symbol" in the prime-counting function, Π(x), has nothing to do with the other pi.)

This is, in fact, the conjecture that Legendre should be remembered for. And the limit he came up with increases my respect even more. Could the same individual who figured out something that precociously and with such accuracy actually have stated the trite conjecture that became one of Landau's problems?

It's not necessary to embark on the journey to infinity, because the arithmetic does not change with magnitude. The growth of the prime count continues to increase to infinity if the PNT continues to infinity. If one is true, the other is also. The delta between the unrefined prime-counting function and the actual prime-counting data is so tiny that it negates the possibility that its deviation could ever come close to challenging Legendre's conjecture. In fact, even entertaining the theoretical possibility of this occurring requires the repudiation of not one but two bedrock principles of mathematical science:

- The evidence: that is the known and observable data.
- The theory: that is the prime number theorem.

Logically, therefore, one could contend that Legendre's Conjecture does not require its own proof.

However, to be as clear as possible, there is the reality of an informal "statistical proof" and the high-probability of an arithmetic proof.

Legendre's Conjecture is statistically true

First, scientists do not use statistical proof as a means to attain certainty, but to falsify claims and explain theory.
Second, it conforms to the hypothetico-deductive model of the scientific method. Third, let me note that nothing I'm saying here is in the least bit original (or well-said for that matter). "It's all out there. Connect the dots."

Considering the gap, or interval, between two square numbers, X^{2} and (X+1)^{2}, strictly as excluding the end-points (that is, the square numbers themselves), it's proven empirically (for "small" numbers) and statistically (for "large" numbers) that there can be no cases where the unique prime count is less than two.

The prime number theorem states that X / log(X) is a good approximation of the prime-counting function. This can be refined by various methods. However, even in its most unrefined state, for numbers greater than 10^{7} the error terms are tiny in comparison to the ratio of primes to all integers.

Given the truth of the prime number theorem, the following formula shows that the prime count, within an error term, increases over successive quadratic intervals to infinity.

((n+1)^{2}/log((n+1)^{2})) - (n^{2}/log(n^{2}))

This is not a matter of dispute. The absolute prime count growth will continue if you plug in any number for which it is computationally possible to use the natural logarithm.
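The formula is trivial to tabulate; a Python sketch (function name mine, natural log assumed as in the PNT):

```python
import math

def pnt_interval_count(n):
    """PNT-projected prime count for the interval n^2 .. (n+1)^2:
    ((n+1)^2 / log((n+1)^2)) - (n^2 / log(n^2))."""
    hi, lo = (n + 1) ** 2, n * n
    return hi / math.log(hi) - lo / math.log(lo)
```

Plugging in any n confirms the monotone growth claimed above.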

It is not necessary to invoke the Riemann Hypothesis to refine the error term. because the growth of the prime number count dwarfs it. For "small" numbers the empirical computed data is a piece of cake (my generated data < 10^{8}). For "large" numbers, we can use data that uses raw computational horsepower up to 10^{25} at present. Beyond this, we have rigorous proofs that use analytical continuation without the RH.

The real error term is far smaller than any deviation that could negate the effect of quadratic growth. For any computed magnitude greater than, say, 10^{10} the largest negative deviations are a fraction of one percentage point of the asymptotic line.

That's it: A statistical proof that "There is a prime number - in fact, at least two - between n^{2} and (n+1)^{2} for every positive integer n" without a single statistic.

In the beginning, the real prime count expansion establishes a near-linear-ratio growth pattern with the square interval count (as projected by the PNT-derived interval count formula).

Left: A tiny plot of the early prime expansion.
Right: The linear growth pattern for my generated data < 10^{8}.

This one shows all the primes and all the 2*y and 3*z prime factors for each square interval:
- Column 1 and 2: The pair of squares
- Column 3: The primes in the pair
- Column 4: The 2* prime factors
- Column 5: The 3* prime factors

This particular Landau problem is true until proven false (which it simply cannot be).

No other mathematical puzzle
has produced more heated arguments and more
misunderstanding than the Monty Hall Problem.
It's tempting to call it deceptively simple, but
the truth is it's deceptively complex. In fact,
I can simplify its essence to one question: If
you were handed two decks of cards, one with
2/3 aces and the other with 2/3 jokers, and
were asked to pick an ace, from which pack
would you draw the card? If
that is not a sufficient clue, you need to
read this.
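For the skeptical, the standard game is easy to simulate (a Python sketch of mine, not from the site):

```python
import random

def monty_hall_win_rate(trials, switch, seed=1):
    """Win rate over many games, staying or switching after the host
    opens a goat door."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Host opens a door that hides a goat and was not picked
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Switching wins about 2/3 of the time - the same 2/3 as drawing from the ace-rich deck.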

A factor family is a set of
composites that share common factors. The family
begins with a semiprime, and all members of the
set share the two prime factors of the
semiprime. The family can be extended
indefinitely. For example, for the first 100,000
odd composites the factor family of 4619
(149*31) looks like this:

There are a few things to note
about this representation: The prime factors (in
red) are distinct, so, for example, 3 can be 3*3
or 3*3*3, and so on. The complete factors
(nonprime and prime in blue) of each composite
all have three common factors: the semiprime and
its prime factors. By definition, a semiprime
cannot have nonprime factors.
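Generating a factor family is a one-liner once you notice that every member is an odd multiple of the seed semiprime (my own sketch; the site's tables were built in Excel):

```python
def factor_family(p, q, limit):
    """Odd composites below limit sharing the prime factors p and q,
    beginning with the semiprime p*q (p and q odd primes)."""
    base = p * q
    return list(range(base, limit, 2 * base))   # odd multiples of base
```

factor_family(149, 31, 100000) begins 4619, 13857, 23095, ...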

Modal Distribution of Distinct Prime Factors: It’s Not Necessarily What You Might Think

Yes, you can find the answers
to life's most persistent questions here! What
is the cumulative distribution of distinct prime
factors? The answer is both what you were
expecting and more than you were expecting. As
expected, the cumulative frequency of unique
prime factors diminishes with size, so that 2 is
the most frequent factor, 3 is the second-most
frequent factor, 5 the third, and so on for all
prime factors. However, it turns out that the
cumulative frequency of the largest
prime factors - the biggest factor of each
composite - has a modal distribution that is increasing
at a slow rate approximating the cube root of N.
Up to 10,000, this mode is 19. Up to 1,000,000,
this mode is 73. It's fairly safe to assume that
if we graph any such set of numbers, there will
be a distinctive peak near the prime P for which
P^3 ≈ N - that is, near the cube root of N.

Distribution of the
largest prime factor for each composite less
than 10,000. (The difference
between the even and odd curves is accounted
for entirely by the exclusion of prime
numbers.)
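The count behind the graph is easy to reproduce. A sketch in Python (the site's examples use VBA; the helper names are mine), which can be used to check the reported mode of 19 below 10,000:

```python
from collections import Counter

def largest_prime_factor(n: int) -> int:
    """Largest prime factor of n, by trial division."""
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f

def lpf_mode(limit: int) -> int:
    """Most frequent largest prime factor among composites below `limit`."""
    counts = Counter()
    for n in range(4, limit):
        p = largest_prime_factor(n)
        if p != n:          # lpf(n) == n exactly when n is prime; skip primes
            counts[p] += 1
    return counts.most_common(1)[0][0]
```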

If we pick a square or a cube
and count up to the nearest prime number, what
are the odds that a prime number will also exist
if we count up by the same amount from its root?
The answer, it turns out, is surprisingly high -
and even higher for cubes than for squares. If
the question sounds confusing, it's easy to
illustrate:

Squared example:

Square   Prime   Offset   Root   Prime   Offset
3025     3037    +12      55     67      +12

Cubed example:

Cube     Prime   Offset   Root   Prime   Offset
79507    79531   +24      43     67      +24

The results are:

- For squared numbers up to 1 million, this relationship holds true 41.0% of the time (410 out of 1,000).
- For cubed numbers up to 1 billion, this relationship holds true 59.8% of the time (598 out of 1,000).
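The worked examples can be checked mechanically: find the offset from the power up to the nearest prime, then test whether adding that same offset to the root also lands on a prime. A sketch in Python (the helper names are mine):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_offset(n: int) -> int:
    """Distance from n up to the nearest prime >= n."""
    k = 0
    while not is_prime(n + k):
        k += 1
    return k

def root_shares_offset(root: int, power: int) -> bool:
    """Does root + offset(root**power) also land on a prime?"""
    return is_prime(root + next_prime_offset(root ** power))

# Example from the table: 55**2 = 3025, nearest prime 3037 (+12),
# and 55 + 12 = 67 is prime.
```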

Did you know?
The difference between the first
and fourth term of a quadratic polynomial - including
proximate-prime polynomials - is always a multiple of 6.
(For t(n) = an^2 + bn + c, the difference t(n+3) - t(n) =
3(2an + 3a + b), which is a multiple of 6 whenever a + b
is even - as it is for every polynomial below.)

This divisibility
by 6 does not stop with the fourth term. It
recurs with the polynomial's 7th term, 10th
term, 13th term, and so on ad infinitum.

For example:

Prox. Prime Poly.    1st Term   4th Term   4t-1t   7th Term   7t-1t   10th Term   10t-1t   13th Term   13t-1t
n^2 + n + 10157      10159      10177      18      10213      54      10267       108      10339       180
n^2 - n + 10331      10331      10343      12      10373      42      10421       90       10487       156
2n^2 - 2n + 10627    10627      10651      24      10711      84      10807       180      10939       312
n^2 - n + 11777      11777      11789      12      11819      42      11867       90       11933       156
n^2 - n + 12107      12107      12119      12      12149      42      12197       90       12263       156
2n^2 - 2n + 12277    12277      12301      24      12361      84      12457       180      12589       312
2n^2 - 2n + 12409    12409      12433      24      12493      84      12589       180      12721       312
3n^2 - 3n + 12653    12653      12689      36      12779      126     12923       270      13121       468
n^2 + 5n + 12785     12791      12821      30      12869      78      12935       144      13019       228
n^2 + n + 12887      12889      12907      18      12943      54      12997       108      13069       180
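Any row of the table can be reproduced with a few lines. A sketch in Python (the helper names are mine):

```python
def poly(a: int, b: int, c: int):
    """The quadratic a*n^2 + b*n + c as a function of the term index n."""
    return lambda n: a * n * n + b * n + c

def term_gaps(a: int, b: int, c: int, indices=(4, 7, 10, 13)) -> list[int]:
    """Differences t(i) - t(1) for the listed term indices."""
    t = poly(a, b, c)
    return [t(i) - t(1) for i in indices]

# n^2 + n + 10157 gives gaps 18, 54, 108, 180 -- all multiples of 6.
```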

A whole new meaning for "sexy primes"...?

Lots more data available: PPPs
< 50000, t1 - t15 ("T" denotes each t
value divisible by 6) Download (115KB)

Factoring in Polynomial
Time: A Pronic Solution...

Sometimes the best things
in life are free - well, almost free... and
very simple... and blindingly obvious. A single
GCD
calculation using the closest pronic
number to N will produce a factor for
one-third of composites not divisible by 2 or
5 up to any size. For example, the
nearest pronic to 898097881 is 898110992, and
these numbers share a GCD of 1873 - a prime
factor of both numbers.
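The nearest-pronic test is a one-line GCD once the pronic is found. A sketch in Python using the standard library (function names are mine):

```python
from math import gcd, isqrt

def nearest_pronic(n: int) -> int:
    """The pronic number k*(k+1) closest to n."""
    k = isqrt(n)
    candidates = (k * (k - 1), k * (k + 1), (k + 1) * (k + 2))
    return min(candidates, key=lambda p: abs(p - n))

def pronic_gcd_factor(n: int) -> int:
    """GCD of n with its nearest pronic; a result > 1 is a factor of n."""
    return gcd(n, nearest_pronic(n))

# Example from the text: the nearest pronic to 898097881 is
# 898110992 = 29968 * 29969, and the shared GCD is 1873.
```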

An analysis of N <
10,000,000 shows that 35.8% of the nonobvious
composites are factorable with a single GCD
calculation. What is the common characteristic
of this huge class of numbers? They appear to
conform to rational
angles in the Sacks number spiral. Such
numbers can be generated with RadiusTest
using the Lines option and produce polynomials
with a third coefficient of 0.

Expanding the GCD calculation to
pronic numbers within 6 quadratic intervals of N
provides a nearly instantaneous factorization
test for more than two-thirds of composites
ending in 1, 3, 7, or 9 regardless of magnitude.
Here is an analysis of composites less than 10^{7}
with GCDs calculated for pronics from 6
quadratic intervals less than N through 6
quadratic intervals greater than N. It shows a
74.4% success rate.
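The widened test just sweeps the pronics k*(k+1) for k within six of the square root of N, taking a GCD at each step. A sketch in Python (names are mine; `radius` counts quadratic intervals on either side of N):

```python
from math import gcd, isqrt

def pronic_sweep_factor(n: int, radius: int = 6) -> int:
    """Try GCDs of n with each pronic k*(k+1) for k within `radius`
    of isqrt(n). Returns a nontrivial factor if one is shared, else 1."""
    r = isqrt(n)
    for k in range(r - radius, r + radius + 1):
        g = gcd(n, k * (k + 1))
        if 1 < g < n:
            return g
    return 1

# Small example: 91 = 7 * 13 is caught by the pronic 42 = 6 * 7.
```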

If there is nothing, there exists the possibility of something. With 0 and 1 there exist the constructs with which the number line and all the abstractions of mathematics are possible. Numbers are a perfect expression of the Platonic ideal. It would seem that any imagined universe would necessarily receive this preexistent knowledge just by virtue of its existence.

Undesigned Intelligence proposes that the universe is the way it is
neither by accident nor by design. The axioms
of mathematics and the laws of physics are what
they are innately and without reference to
perceived existent reality. The intelligence
inheres in and is nonreferentially existent
without regard to the observer. Undesigned
intelligence is uncreated.