Fascinating New Results in the Theory of Randomness

Author: Vincent Granville

I present here some innovative results from my most recent research on stochastic processes, chaos modeling, and dynamical systems, with applications to Fintech, cryptography, number theory, and random number generators. While covering advanced topics, this article is accessible to professionals with limited knowledge of statistical or mathematical theory. It introduces new material not covered in my recent book (available here) on applied stochastic processes. You don’t need to read the book to understand this article, but the book is a nice complement and introduction to the concepts discussed here.

None of the material presented here is covered in standard textbooks on stochastic processes or dynamical systems. In particular, it has nothing to do with the classical logistic map or Brownian motion, though the systems investigated here exhibit very similar behaviors and are related to those classical models. This cross-disciplinary article is targeted at professionals with interests in statistics, probability, mathematics, machine learning, simulations, signal processing, operations research, computer science, pattern recognition, and physics. Because of its tutorial style, it should also appeal to beginners learning about Markov processes, time series, and data science techniques in general. It offers fresh, off-the-beaten-path content not found anywhere else, contrasting with the material covered again and again in countless, nearly identical books, websites, and classes catering to students and researchers alike.

Some problems discussed here could be used by college professors in the classroom or as original exam questions, while others are extremely challenging questions that could be the subject of a PhD thesis, or even well beyond that level. This article constitutes (along with my book) a stepping stone in my endeavor to solve one of the biggest mysteries in the universe: are the digits of mathematical constants such as Pi evenly distributed? To this day, no one knows if these digits even have a distribution to start with, let alone whether that distribution is uniform or not. Part of the discussion is about statistical properties of numeration systems in a non-integer base (such as the golden ratio base) and their applications. All systems investigated here, whether deterministic or not, are treated as stochastic processes, including the digits in question. They all exhibit strong chaos, albeit easily manageable due to their ergodicity.

Interesting connections with the golden ratio, special polynomials, and other special mathematical constants are discussed in section 2. Finally, all the analyses performed during this work were done in Excel. I share my spreadsheets in this article, as well as many illustrations, and all the results are replicable.

1. General framework, notation, and terminology

We are dealing here with sequences { x(n) }, starting with n = 1, recursively defined by an iterative formula x(n + 1) = g(x(n)). We will explore various functions g in the next sections. Typically, x(n) is a real number in [0, 1], and g is a mapping such that g(x(n)) is also in [0, 1]. The first value, x(1), is called the seed. In short, x(n) is a time series or stochastic process, and the index n denotes the (discrete) time or iteration.
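To make this concrete, here is a minimal Python sketch of such an iteration (the map g below, the doubling map, is just a placeholder; the maps actually studied in this article are introduced in section 1.5):

```python
import math

def iterate(g, seed, n):
    """Return the first n terms x(1), ..., x(n) of the sequence x(k + 1) = g(x(k))."""
    values = [seed]
    for _ in range(n - 1):
        values.append(g(values[-1]))
    return values

# Placeholder map: g(x) = { 2x }, the fractional part of 2x.
g = lambda x: math.modf(2 * x)[0]
print(iterate(g, math.sqrt(2) / 2, 10))
```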

Typically, the values x(n) appear to be distributed somewhat randomly, according to some statistical distribution called the equilibrium distribution, and generally, the x(n)’s are auto-correlated. So x(n) can be seen as a realization or observation of a random variable X, whose distribution is the equilibrium distribution. That is, the empirical distribution of the x(n)’s, when computed on a large number of terms, tends to the theoretical equilibrium distribution in question. Also, in practice, the vast majority of seeds yield the same exact equilibrium distribution. Such seeds are known as good seeds; the other ones are called bad seeds.

1.1. Finding the equilibrium distribution

The equilibrium distribution can be obtained by solving the equation P(X < y) = P(g(X) < y), with y in [0, 1]. This is actually a stochastic integral equation: the probability distribution P is the solution, and corresponds to the distribution of X. This distribution is sometimes denoted as F. Whether the equilibrium distribution exists or not, and whether it is unique or not (for good seeds), is not discussed here. However, we will provide several examples with a unique equilibrium distribution throughout this article, including how to solve the stochastic integral equation. The density attached to the equilibrium distribution is called the equilibrium density and is denoted as f.

1.2. Auto-correlation and spectral analysis

The theoretical auto-correlation between successive values of x(n) can be computed as follows:
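correlation( x(n), x(n+1) ) = ( E[X g(X)] – (E[X])^2 ) / Var(X),

since at equilibrium X and g(X) have the same distribution, and thus the same mean and variance.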

Once computed, it is interesting to compare its value with the observed auto-correlation measured on the first few thousand terms of the sequence { x(n) }. Longer-term auto-correlations (lag-2, lag-3 and so on) can be computed using the same principle, either theoretically or empirically (on data). The entire auto-correlation structure, given the equilibrium distribution, uniquely characterizes the stochastic process. The study of these auto-correlations is called spectral analysis.

1.3. Ergodicity, convergence, and attractors

So far we have looked at one instance or realization { x(n) } of the underlying process, characterized by a mapping g and a seed x(1). This provides enough information to determine the auto-correlation structure and the equilibrium distribution, which are the same for all good seeds.

There is another way to look at things. You can simulate m deviates of a random variable Z(1) with any pre-specified distribution, say uniform on [0, 1]. Then apply the mapping g to each of these deviates, to obtain another set of m values. These new values are m deviates of a random variable denoted as Z(2), also with known statistical distribution. Repeat this step over and over, to obtain Z(3), Z(4), and so on. For large n, Z(n) converges in distribution to the equilibrium distribution, regardless of the initial distribution chosen for Z(1). We illustrate how it works on an example, later in this article. Because convergence to the same equilibrium distribution occurs regardless of the initial distribution, the equilibrium distribution, in the language of dynamical systems, is called an attractor distribution. The method described here can be used to identify these attractors. 

A stochastic process where the equilibrium distribution depends neither on Z(1) nor on the good seed is called ergodic. All the processes studied here are ergodic. An example of a non-ergodic process can be found here.

1.4. State space, time space, and Markov chain approximations

The state space is the space where { x(n) } takes its values; here it is [0, 1] and is thus continuous. The time space is attached to the index n, and it is discrete here. However, in some of our examples, x(n) can be written explicitly as a function of n, thus it can easily be extended to real values of n, making it a time-continuous process.

We can also divide the continuous state space [0, 1] into a finite number of disjoint intervals S(1), S(2), … , S(k). Rather than studying the full equilibrium distribution, we could compute the probabilities
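p(ij) = P( x(n+1) in S(j) | x(n) in S(i) ),   for i, j = 1, … , k.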

Then we are dealing with a discrete-state Markov chain with k states, and p(ij) represents the transition probability for moving from state i to state j, estimated on n observations x(1), …, x(n). One can compute the steady-state probability vector by solving a linear system of k equations; it represents the stable state of the system. As k and n tend to infinity and the intervals S(1), …, S(k) become infinitesimal, the steady-state probability vector converges to the equilibrium density. This is another way to approximate the equilibrium distribution.
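Here is a short Python sketch of this approximation, in which the map (the base-b process of section 1.5 with b = 1.8), the seed, and the number of states k = 20 are illustrative choices: it estimates the transition matrix from a simulated trajectory, then extracts the steady-state vector.

```python
import numpy as np

def steady_state(g, seed, n=100_000, k=20):
    """Bin [0, 1] into k intervals, estimate the k x k transition matrix from a
    simulated trajectory, and return the steady-state probability vector."""
    x, states = seed, []
    for _ in range(n):
        states.append(min(int(x * k), k - 1))   # index of the interval containing x
        x = g(x)
    counts = np.zeros((k, k))
    for i, j in zip(states[:-1], states[1:]):
        counts[i, j] += 1
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    vals, vecs = np.linalg.eig(P.T)              # steady state: left eigenvector of P
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

g = lambda x: (1.8 * x) % 1.0                    # base-b process with b = 1.8
print(steady_state(g, np.sqrt(2) / 2))
```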

1.5. Examples

The most well-known examples of { x(n) } are random walks (n discrete) and Brownian motions (n continuous). However, since the state space considered in this article is [0, 1], a better-suited example would be a random walk constrained to stay within [0, 1]. Such processes are discussed in chapter 3 of my book.

Let us now introduce the example that we will discuss in detail throughout the remainder of this article. It is defined by the following mapping:
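x(n + 1) = { b x(n) } = b x(n) – [ b x(n) ]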

Here the curly brackets represent the fractional part function, also denoted as FRAC. The straight brackets on the right-hand side represent the integer part or floor function, also denoted as INT. Since the seed is between 0 and 1, we also have this interesting property: INT(b x(n)) is the n-th digit of the seed x(1) in base b. Thus the parameter b is called the base, and it can be any real number greater than 1. In the next section, we consider the case where b is in ]1, 2]. Non-integer bases are also discussed here and here, and also extensively in my book. While this process looks very peculiar, there is a mapping between the base-2 system and the very popular logistic map system: see chapter 10 in my book for details, or here for a summary.

It makes sense to call this process the base-b process. If b is an integer, its equilibrium distribution is uniform on [0, 1], assuming you use a good seed. Also, if b is an integer, the auto-correlation between successive values of x(n) is 1 / b. This fact was probably mentioned for the first time in my book. It had never been proved, only assumed based on simulations and a lot of data crunching. The proof is actually quite simple and worth reading; it shows how to compute such auto-correlations, and constitutes an interesting classroom problem or exam question. You will find it in the appendix of this article.

Pretty much all numbers in [0, 1] are good seeds for the b-process. However, there are infinitely many exceptions: in particular, none of the rational numbers is a good seed. Identifying the class of good seeds is an incredibly complicated problem, still unsolved today. If we knew which numbers are good seeds, we would know whether or not the digits of Pi, or of any popular mathematical constant, are evenly distributed. Another question is whether good seeds coincide with normal numbers: the two concepts are closely related, and possibly identical. Later in this article, we will discuss a stochastic process where all seeds are good seeds.

Finally, the most interesting values of b are those that are less than two. In some ways, the associated stochastic processes are also much easier to study. But most interestingly, the similarities between these b-processes and stochastic dynamical systems are easier to grasp, for instance regarding branching behavior and attractors. This is the subject of the next section. The second fundamental theorem in the next section is one of the fascinating results published here for the first time, and still a work in progress.

Note that if b is an integer, it is easy to turn the time-discrete b-process into a time-continuous one. We have
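x(n) = { b^(n-1) x(1) }.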

Thus the formula can be extended to values of n that are not integers. 

2. Case study

In this section, we consider the b-process introduced as an example at the bottom of the first section, with b in ]1, 2]. We jump right away to the two fundamental theorems; cool applications follow afterwards. The function g is the mapping g(x) = { bx } introduced in the first section.

2.1. First fundamental theorem

Let Z be a random variable with an arbitrary distribution F, admitting a density function f on [0, 1]. Let Y = g(Z) be the fractional part of bZ, with b in ]1, 2]. Then we have:
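the density of Y is given by

fY(y) = ( f(y / b) + f((1 + y) / b) ) / b,   for y in [0, 1],

with the convention that f vanishes outside [0, 1].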

This result is easy to obtain and constitutes an interesting classroom problem, or exam question. The proof is in the appendix. This theorem allows you to design a simple iterative algorithm to approximate the equilibrium distribution, and to assess how fast it converges to the solution. The result is valid even if the density function of Z has an infinite but countable number of discontinuities. This will be the case in the examples discussed here, in which a uniform distribution on [0, 1] is chosen for Z.

The algorithm, already discussed in the first section (see the subsection on ergodicity, convergence, and attractors), consists in iteratively computing the distribution of g(Z), g(g(Z)), g(g(g(Z))), and so on, until the difference between two successive iterates is small enough. Here, the difference is measured as the distance between two distributions, using one of the many distance metrics discussed in the literature (see here).
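Below is a minimal Python sketch of this algorithm; the grid representation of the density, the update rule taken from the first fundamental theorem, and the mean absolute difference between successive iterates as the stopping metric are all illustrative choices.

```python
import numpy as np

def equilibrium_density(b, grid_size=10_000, tol=1e-9, max_iter=200):
    """Iterate f_{n+1}(y) = ( f_n(y/b) + f_n((1+y)/b) ) / b, starting from the
    uniform density on [0, 1], until two successive iterates are close."""
    y = (np.arange(grid_size) + 0.5) / grid_size   # grid midpoints in [0, 1]
    f = np.ones(grid_size)                          # f_1: uniform density

    def lookup(f, t):
        # density f evaluated at the points t, with f = 0 outside [0, 1]
        inside = (t >= 0) & (t < 1)
        out = np.zeros_like(t)
        out[inside] = f[(t[inside] * grid_size).astype(int)]
        return out

    for _ in range(max_iter):
        f_new = (lookup(f, y / b) + lookup(f, (1 + y) / b)) / b
        if np.abs(f_new - f).mean() < tol:          # distance between iterates
            return y, f_new
        f = f_new
    return y, f

b = (1 + np.sqrt(5)) / 2                            # golden ratio, as in section 3.2
y, f = equilibrium_density(b)
print(f[:5], f[-5:])  # close to d(1) = 1.1708... and d(2) = 0.7236...
```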

The next theorem tells you in more detail what happens if you choose a uniform distribution on [0, 1] for Z. This was our favorite choice in most of our simulations.

2.2. Second fundamental theorem

We use the same assumptions as in the first theorem, but here Z has a uniform distribution on [0, 1]. The following theorem can be used to find the equilibrium density, as illustrated in the appendix on the supergolden ratio constant. 

Let Z(1) = Z, and Z(n+1) = g(Z(n)). Then Z(n+1) has a piece-wise uniform distribution, more precisely, a mixture of n+1 uniform distributions on n+1 intervals. These intervals are denoted as 

[0, c(1)[,     [c(1), c(2)[,     [c(2), c(3)[, … , [c(n), 1],

and the constant value of the density of Z(n+1) on the k-th interval (k = 1, … , n + 1) is denoted as d(k). The distribution of Z(n+1) has the following features:

  • Sometimes c(k-1) = c(k), depending on b, k, and n
  • b^n d(k) is an integer and { d(k) } is a decreasing sequence
  • c(k) is a polynomial of degree n in b, with coefficients equal to 0, 1, or -1 
  • Only the dominant coefficient of this polynomial is equal to 1

It is convenient to use the notation c(0) = 0 and c(n+1) = 1. The c(k)’s, for k = 1, … , n, are called the change points. A change point is thus a discontinuity in the density function. One of these change points is always equal to b – 1.

I haven’t completed the proof of the second theorem yet, and the theorem itself can probably be further refined. However, using the first fundamental theorem, it is easy to see that when moving from iteration n to n+1, we observe the following:

  • Because b is smaller than 2 and Z(n+1) takes on values between 0 and 1, it is clear that Z(n+1), the fractional part of bZ(n), more frequently takes on smaller values (closer to 0) than larger ones (closer to 1). Thus the interval densities d(k) are highest next to zero, lowest next to 1, and decreasing in between. This explains why { d(k) } is a decreasing sequence.
  • The densities are also constant on each interval, as we are only dealing with intervals with uniform densities throughout the iterations. Also, b^n d(k) must be an integer, as the formula in the first fundamental theorem only involves adding integers (divided by b). This is easy to prove by induction.
  • Finally, at iteration n = 2, we have a single change point c(1) = b – 1, and two intervals. Each new iteration, because of the formula in the first fundamental theorem, creates a whole new set of change points, each one equal to either cb or cb – 1, where c is any change point from the current iteration. This explains the special polynomial expression for c(k).

For any value of n, the exact distribution of Z(n+1) can be computed explicitly. The computation is elementary, but becomes incredibly tedious (and should be automated using some software) as soon as n is larger than 5: this is a combinatorial problem. But particular results are easier to obtain. The simplest case is as follows:
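For Z(2), that is, after a single iteration, there is one change point c(1) = b – 1, and the density is

f(y) = 2 / b if 0 ≤ y < b – 1,   and   f(y) = 1 / b if b – 1 ≤ y ≤ 1.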

Exercise: prove that this is actually a density, that is,
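(2 / b)(b – 1) + (1 / b)(2 – b) = 1.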

Other easy cases include the full solution (for any value of b between 1 and 2) when n = 2 or n = 3. This is left as an exercise. Note that if for some finite value of n, Z(n) has the equilibrium density, then Z(n+1), Z(n+2), and so on also have that exact same density. This is why the equilibrium distribution is called an attractor.

2.3. Convergence to equilibrium: illustration

The first picture below illustrates the convergence of the empirical equilibrium densities to the theoretical solution, starting with a simulated uniform density on [0, 1] for Z(1), and computing the empirical densities for Z(2), Z(3), and so on, up to Z(7). You can check out the computations in this spreadsheet. The parameter b used here is the supergolden ratio constant (see next section) and we used 100,000 observations to estimate each density. 

Below are a few equilibrium densities (approximated using the empirical density) for various values of b.

The spreadsheet used to produce the four charts above, with detailed computations, is available here. Some exact solutions (where the theoretical equilibrium density is easy to compute) are provided in the next section and in the appendix, with a short tutorial on how to discover these solutions and apply the methodology to the general case (see appendix).

3. Applications

In this section, we discuss applications, still focusing for the most part on b-processes with b smaller than 2. But we also discuss other stochastic processes.

3.1. Potential application domains

Stochastic processes and time series models are routinely used in Fintech to model the price of commodities. Thus, b-processes offer a new tool for quants, behaving quite differently from traditional processes and time series. The variety in these b-processes is such, and the behavior so unique depending on b, that it allows the data scientist to attach a unique number to an observed time series: its estimated parameter b. Two different values of b provide two wildly different types of patterns, as is usually the case in chaotic dynamical systems, for instance with the logistic map. Whether in finance or other fields, these processes model situations in which an auto-correlated system evolves smoothly over time (flat or with a downward trend, either linear or non-linear), experiencing sharp drops every now and then, occurring at what we defined earlier as change points. Depending on b, the number of change points in the scaled time interval [0, 1] can be 2, 3, 4, and so on, up to values so large that the process looks perfectly smooth (this is the case, for instance, if b is very close to 1). Thus the parameter b can be chosen to fit a wide array of change point locations, as well as various downward trends and drop intensities, observed in your data set. As discussed in section 2, the b-process can be seen as a smooth but infinite mixture of uniform distributions on infinitesimal intervals, or a finite mixture on larger intervals, depending on b.

Other specific applications include:

  • Generation of non-periodic, continuous, replicable pseudo-random numbers. By far, the largest class of pseudo-random number generators currently in use consists of periodic, discrete generators, though the period is extremely large in modern generators. And random numbers produced with physical devices are typically not replicable. To get a good generator, one would have to use a value of b resulting in a known equilibrium distribution, start with a good seed, and map the sequence { x(n) } so that the distribution of the mapped x(n)’s (each one representing a random number) becomes uniform, with no auto-correlation. How to do this is described in the next subsection, about the golden ratio process.
  • Thus, with a careful choice of b and a proper mapping, b-processes can be used in cryptographic systems and Blockchain. Digits produced by b-processes, defined as a(n) = INT(b x(n)), have the following particular property. The digits are binary (equal to 0 or 1), so each digit can be called a bit, using the language of information theory. Indeed, when b = 2, this is just the standard base-2 numeration system that we are all familiar with. However, when b is smaller than 2, each digit carries an amount of information smaller than the standard bit. Indeed, that amount is equal to the logarithm of b in base 2 (and of course, equal to 0 if b = 1). So not only have we invented a unit of information that is smaller than the smallest unit currently in use, but it also allows you to create encryption systems filled with some amount of natural blurring, which may or may not be useful depending on the purpose.
  • Another application is to benchmark computer systems, testing for accuracy when performing heavy computations that require a large number of decimals. If you compute the successive values of x(1), x(2), and so on up to x(n), all your numbers will be completely wrong once n is larger than 45. You may not notice it initially, but try in Excel with a base b that is an even integer: it will become very obvious! Sometimes it does not matter (for instance when studying asymptotic properties such as auto-correlations or the equilibrium distribution) because b-processes are ergodic, and sometimes it matters. This is discussed in detail in my book, available here: see the chapters about high-precision computing, or read this article.
  • Finally, b-processes can be used to benchmark statistical tests, assess the power of these tests and the sample size needed, and evaluate other statistical procedures. Since the “observations” { x(n) } have a known statistical distribution (depending on b; see the next subsection about the golden ratio process) — a property never found in actual, real-life data sets — you can test a number of hypotheses (for instance, about the value of some auto-correlation), and check when your statistical tests provide the right or wrong answer. Here, the right answer is known in advance!

3.2. Example: the golden ratio process

The golden ratio process, as its name indicates, corresponds to b = (1 + SQRT(5)) / 2. Its associated numeration system, with INT(b x(n)) representing the n-th digit of the seed x(1) in base b, has been studied in some detail, see here. This b-process is the best behaved one, and the easiest to study, besides b = 2. It is probably the only b-process with exactly one change point, and its equilibrium distribution is known (probably published here for the first time). Thus, it is a good candidate for applications such as encryption, random number generation, model fitting, testing statistical tests, or Blockchain.

Using the notations introduced in section 2, this process has the following features:

  • The unique change point is c(1) = b – 1
  • The equilibrium distribution is a mixture of two uniform distributions: one on [0, c(1)[, and one on [c(1), 1[
  • At equilibrium, the two respective densities are d(1) = (5 + 3*SQRT(5)) / 10 and d(2) = (5 + SQRT(5)) / 10. 

Below is a picture of the equilibrium density associated with this process:

In order to make this process suitable for use in cryptography, one has to map { x(n) } onto a new sequence { y(n) }, so that the new equilibrium density becomes uniform on [0, 1]. This is achieved as follows:

If x(n) < b – 1, then y(n) = x(n) / (b – 1), else y(n) = (x(n) – (b – 1)) / (2 – b).

Now the { y(n) } sequence has a uniform equilibrium distribution on [0, 1]. However, this new sequence has a major problem: high auto-correlations, and frequently, two or three successive values that are identical (this would not happen with a random b, but here b is the golden ratio — a very special value — and this is what is causing the problem.)

A workaround is to ignore all values of x(n) that are larger than b – 1, that is, discarding y(n) if x(n) is larger than b – 1. This is really a magic trick. Now, not only is the lag-1 auto-correlation in the remaining { y(n) } sequence equal to 1/2, the same value as for the full { x(n) } sequence with b = 2, but the lag-1 auto-correlation in the remaining sequence of binary digits (digits are defined as INT(b y(n))) is also equal to zero, just like for ordinary digits in base 2! The proof of these facts is left as an exercise.
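Here is a minimal Python sketch of this construction; the seed and the sample size are illustrative, and floating-point errors accumulate over iterations (see section 3.1), so this is a demonstration rather than production code.

```python
import math

def golden_ratio_uniform(seed, n):
    """Generate pseudo-random numbers from the golden ratio process, mapped to a
    uniform distribution on [0, 1], keeping only the terms with x(n) < b - 1
    (the workaround described above)."""
    b = (1 + math.sqrt(5)) / 2
    c = b - 1                      # the unique change point
    x, out = seed, []
    for _ in range(n):
        if x < c:
            out.append(x / c)      # map [0, c[ onto [0, 1[
        x = (b * x) % 1.0          # x(n+1) = { b x(n) }
    return out

print(golden_ratio_uniform(math.sqrt(2) / 2, 10))
```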

3.3. Finding other useful b-processes 

There are different ways to compute the equilibrium distribution when you have 3 change points or less. Finding the change points is easy: one of them is always b – 1, and the other ones can be any of these:

You can identify them by visual inspection of the empirical equilibrium density. And among the 8 potential change points listed above, you must ignore those below 0 or above 1. Note that the golden ratio process actually has two change points: b^2 – b and b – 1. But b^2 – b = 1 in this case, thus the first one is not a real change point. If you try b = 1.61 (very close to the golden ratio), this ghost change point becomes visible, and it is very close to 1. If you try b = 1.60, you now have 3 change points. And with b = 1.59, the empirical equilibrium density looks entirely different, possibly with a lot of change points and no visible drop (just a linear, bumpy downward trend), though it is hard to tell. In some cases, a change point can be double (or triple), for instance if b^3 – b^2 – b = b – 1. This typically results in an equilibrium density with very few change points. Once the change points are known, the densities can be computed using the first fundamental theorem (see section 2 in this article) and solving a linear system of equations. This is illustrated in the appendix.

Interestingly, the b-processes most likely to have a simple equilibrium density with very few change points correspond to b's for which two of the above polynomials have the exact same value, or one is equal to 0 or 1, when evaluated at the b in question. This was the case for the golden ratio process. Below are other examples:

  • b^3 – b^2 = 1 yields b = 1.4655712318767… (3 change points, b is the supergolden ratio)
  • b^3 – b^2 – b = b – 1 yields b = 1.8019377358048… (3 change points)
  • b^5 – b^4 = 1 or b^3 – b = 1 yields b = 1.32471795724475… (4 change points, b is the plastic number)

You can find more about these three special constants in this article. The exact values, respectively for the supergolden ratio and the plastic number, are
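b = (1 + ((29 + 3 SQRT(93)) / 2)^(1/3) + ((29 – 3 SQRT(93)) / 2)^(1/3)) / 3,

b = ((9 + SQRT(69)) / 18)^(1/3) + ((9 – SQRT(69)) / 18)^(1/3).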

To get an approximation of the equilibrium distribution and see the change points, start with a good seed x(1). For whatever reason, x(1) = SQRT(2)/2 works better than Pi/4. Then compute x(n) up to n = 1,000,000, and plot the empirical equilibrium density computed on the x(n)’s. This is illustrated in my spreadsheet, available here. See also the picture below, based on values of x(n) for n = 1, …, 300,000, with b being the plastic number.

As a general rule, the lower the value of b, the more change points, on average. Also, most values of b (whether special or not) produce a few major change points (and frequently a large number of minor ones), with big drops in the density function occurring at the major change points. Analyzing the polynomials discussed in the second fundamental theorem can help you identify these major change points.

In the appendix, we completely solve the case where b is the supergolden ratio. 

4. Additional research topics

Here we discuss three potential topics for future research: stochastic processes free of bad seeds, the asymptotic properties of attractors and the construction of a table of attractors summarizing their features, and finally, some applications of b-processes to probabilistic and experimental number theory, including the discussion of some special integrals. 

4.1. Perfect stochastic processes

The b-process, defined by g(x) = bx – INT(bx), has bad seeds, as discussed earlier. For a b-process, the vast majority of seeds are good seeds (the set of bad seeds actually has Lebesgue measure zero), but nobody knows if mathematical constants such as Pi or SQRT(2) are good or bad seeds. Are there any stochastic processes free of bad seeds? Such processes can have some benefits (but mostly drawbacks!) and are called perfect processes, until someone comes up with a better word. The term universally good averaging sequence is sometimes used. One example is the following.

The process defined by g(x) = x + b – INT(x + b), where b is a positive irrational number, fits the bill. Since by definition, x(n+1) = g(x(n)), it is easy to see that

x(n) = (n-1)b + x(1) – INT((n-1)b + x(1)).

The fact that there is no bad seed is guaranteed by the equidistribution theorem. Even x(1) = 0 is a good seed. 

This process is investigated in chapter 11 of my book, available here (see page 70). The n-th binary digit is defined as INT(2 x(n)), and these digits carry even less information than those generated by b-processes with b between 1 and 2. If b = log(2), the first few digits of the seed x(1) = 0 are as follows:

0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0
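The following minimal Python sketch reproduces this digit sequence:

```python
import math

def perfect_process_digits(b, seed, n):
    """Digits INT(2 x(k)) of the process x(k+1) = { x(k) + b }, for k = 1, ..., n."""
    digits, x = [], seed
    for _ in range(n):
        digits.append(int(2 * x))
        x = (x + b) % 1.0          # g(x) = x + b - INT(x + b)
    return digits

print(perfect_process_digits(math.log(2), 0.0, 32))
# [0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, ...]
```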

In contrast to b-processes, all seeds (regardless of b) have 50% of digits equal to 0, and 50% equal to 1. This process is related to integral C defined later in this article, in the sub-section Probabilistic calculus and number theory, special integrals.

The equilibrium distribution is always uniform if b is irrational, thus it is possible to compute the theoretical lag-1 auto-correlation of the sequence { x(n) } (using the first formula in section 1) and search for the few b's that minimize, in absolute value, that auto-correlation. The empirical equilibrium distribution converges much faster to the theoretical one than with b-processes. However, I’ve found a striking, unusual pattern for b = Pi and b = exp(Pi).

The empirical density, computed on x(1), …, x(n) and binned into N intervals, shows strong periodic bumps that other irrational b do not produce, not even b = Pi – 0.00001. It occurs even with the seed x(1) = 0, with specific values of n and N, for instance n = 10,000 and N = 100, or n = 1,000,000 and N = 100, but not with n = 100,000 and N = 100. These bumps decrease as n increases, and convergence to the uniform distribution on [0, 1] still occurs for b = Pi and b = exp(Pi). Initially, I thought it was an issue with my machine's internal arithmetic, but both my Perl and Excel implementations reproduce the same patterns. The Perl code is available here. The pattern is illustrated in the figure below.

The above picture shows the empirical density, with n = 56,000 observations and N = 100 bins, for four values of b. It is extremely close to the theoretical uniform equilibrium on [0, 1]. I truncated the Y-axis to visually amplify the pattern described. The spreadsheet is available here.

4.2. Characterization of equilibrium distributions (the attractors)

Another interesting research topic is about characterizing the class of attractors. That is, what kind of distribution is an equilibrium distribution? What makes them peculiar, compared to other distributions? Another question is about how the number of attractors grows as the number of change points increases. Is there an asymptotic relationship between the number of change points (say N), and the number of attractors that have N change points?

It is not even known whether the number of attractors with a finite number of change points is finite or infinite. Surely, there are more than two attractors with two change points, and many more than one attractor with three change points. The ones listed in the table above are only those that I have studied, so the table is a work in progress.

4.3. Probabilistic calculus and number theory, special integrals

When I first started this research project a while back, the initial purpose was to study the behavior of the digits of numbers such as Pi. In fact, in this article, INT(b x(n)) represents the n-th digit of the seed x(1) in base b, whether b is an integer, a real number between 1 and 2, or any real number above 1. My book Applied Stochastic Processes, Chaos Modeling, and Probabilistic Properties of Numeration Systems, published in June 2018 (see here), was the first milestone: developing a general framework to study this kind of problem. Since then, I have had new ideas. Here, I present some of them, related to this article, that I am still pursuing today.

In this subsection, the notation { x } represents the fractional part of the number x, in contrast to the remainder of the article, where { x(n) } represents the entire sequence x(1), x(2), and so on. Here we will only consider the case b = 2. Also, the seed x(1) is denoted as x.

If b is an integer and if the seed x = x(1) is in [0, 1[, we have, for k larger or equal to 0:
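x(k + 1) = { b^k x }.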

One of the promising starting points is the following result. The proportion of digits of x equal to 0 in base 2 is 50% if and only if the series below, with b = 2, converges:

In particular, it always converges if x is a good seed in base 2. It would be interesting to study the wildly erratic behavior of this function, which is not only discontinuous everywhere, but admits a dense set of singularities (where it does not converge). Note that if we replace b^k by k in the above series, it always converges whenever x is an irrational number, a consequence of the equidistribution theorem. What happens if we replace b^k by k^2 or k^b?

A related quantity is the following:

If b is an integer, M(b) = (log 2) / 2, and it does not depend on x, assuming x is a good seed. If b is not an integer, M(b) is smaller than (log 2) / 2. More precisely, M(b) = E(b) log(2), where E(b) is the expected value of the equilibrium distribution of a b-process. For instance,

Finally, let us consider the following integrals:

The first integral is related to the limit M(b). It is a type of Frullani integral. Using the techniques presented in this article, one would think that both the limit M(b) and the integral A have the same value. The second integral is identical to the first one, after a change of variable that makes the parameter b disappear. But B = 1/4, not (log 2) / 2. See here for details. The assumptions in the Frullani theorem (see here) must be violated in this case. What about C? That one is equal to (log 2) / 2, as one would expect. Other similar integrals can be found here.

Integral A is associated with b-processes, which have bad seeds, and are sometimes called universally bad averaging sequences for that reason. Integral C is associated with a process with no bad seed, defined at the beginning of section 4, see Perfect stochastic processes in this article.

5. Appendix

Here we dive into more technical details, regarding three problems discussed in the article.

5.1. Computing the auto-correlation at equilibrium

Here we consider the case where b is an integer, so the equilibrium distribution is “known” to be uniform on [0, 1]. This fact has been taken for granted probably for more than a thousand years (and that’s why people believe that the digits of Pi and other mathematical constants are uniformly distributed), but it would be nice (and easy) to prove it when the seed is a good seed. This is left as an exercise. It is usually not true if the seed is a bad seed. Pi is believed to be a good seed, but no one has ever managed to prove it: it is one of the biggest mathematical challenges of all time, and I once offered a $500,000 award for either a solid proof or a rebuttal of this fact.

Note that at equilibrium, both X and g(X) have the same distribution, so their mean and variance are identical. So if b is an integer and the seed x(1) is a good seed, the only challenge in the auto-correlation formula mentioned in the first section is the computation of E[X g(X)].

We have:

By definition, g(X) = bX – INT(bX). Thus,
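E[X g(X)] = b E[X^2] – E[X INT(bX)].

Since X is uniform on [0, 1] and INT(bX) = j when X is in [j/b, (j+1)/b[, for j = 0, 1, …, b – 1, we get

E[X INT(bX)] = (b – 1)(4b + 1) / (12b),   and thus   E[X g(X)] = b/3 – (b – 1)(4b + 1) / (12b) = 1/4 + 1/(12b).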

Combined with the fact that E(X) = E(g(X)) = 1/2 and Var(X) = Var(g(X)) = 1/12, as the equilibrium distribution is uniform on [0, 1], we obtain the final result: the correlation between X and g(X), that is, the theoretical auto-correlation between two successive values of x(n), is equal to 1 / b. It is easy to check this result by computing the estimated value of the lag-1 auto-correlation on x(1), …, x(n), with n = 1,000,000: this test provides a very good approximation.
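Here is a quick numerical check in Python; the base b = 3 and the seed are illustrative (an odd base avoids the floating-point collapse that plagues even integer bases, as noted in section 3.1):

```python
import numpy as np

def lag1_autocorrelation(b, seed, n=1_000_000):
    """Empirical lag-1 auto-correlation of the base-b process x(k+1) = { b x(k) }."""
    x = np.empty(n)
    x[0] = seed
    for k in range(1, n):
        x[k] = (b * x[k - 1]) % 1.0
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Expected value: 1/b = 0.3333...
print(lag1_autocorrelation(3, np.sqrt(2) / 2))
```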

5.2. Proof of the first fundamental theorem

Here b is in ]1, 2]. If Y = g(X), we have, for y in [0, 1]:
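P(Y < y) = P( { bX } < y ) = P( bX < y ) + P( 1 ≤ bX < 1 + y ),

since bX lies in [0, 2].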

Thus, using the notation F for the probability distribution function (and f for the density) we have:
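P(Y < y) = F(y / b) + F((1 + y) / b) – F(1 / b).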

Taking the derivative with respect to y on both sides of the equation, we obtain the final result:
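fY(y) = ( f(y / b) + f((1 + y) / b) ) / b,

with f equal to zero outside [0, 1].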

5.3. How to find the exact equilibrium distribution

This is accomplished in two steps.

First, compute P(b) for all the polynomials P mentioned in the second fundamental theorem. Any value of P(b) between 0 and 1 corresponds to a potential change point. By looking at the empirical equilibrium density, computed on 100,000 values of { x(n) }, you can find the approximate values of the major change points: they correspond to points of discontinuity in the density function. For instance, if b = 1.4656… (the supergolden ratio), there is clearly a change point around 0.46; see the first picture in section 2. There is another one around 0.68. The exact values c(1) and c(2) of the two change points are derived from two of these polynomials:

c(1) = b – 1, and c(2) = b^2 – b

because no other polynomial (in the small list that you have to check) gets so close to 0.46 and 0.68, respectively, when evaluated at b.

Then, once the change points are identified, take three different values — one in each interval — for instance 

0.25 in S(1) = [0, c(1)[,  0.50 in S(2) = [c(1), c(2)[,  and 0.75 in S(3) = [c(2), 1].

This assumes that you have three intervals, but you can easily generalize if you have more. Now apply the first fundamental theorem with y = 0.25, y = 0.50, and y = 0.75 respectively. You get:
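f(0.25) = ( f(0.171…) + f(0.853…) ) / b,
f(0.50) = f(0.341…) / b,
f(0.75) = f(0.512…) / b.

The terms f(1.50 / b) and f(1.75 / b) vanish, because 1.50 / b and 1.75 / b are larger than 1.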

Note that

  • 0.853… = (1 + 0.25) / b, and it is in S(3), and 0.171… = 0.25 / b, and it is in S(1)
  • 0.341… = 0.50 / b, and it is in S(1)
  • 0.512… = 0.75 / b, and it is in S(2)

At equilibrium, the density functions of X and Y are identical. Thus, if d(1), d(2) and d(3) denote the density values in each of the 3 intervals, we end up with the following linear system, where d(1), d(2) and d(3) are the unknowns:

d(1) = (d(3) + d(1)) / b

d(2) = d(1) / b

d(3) = d(2) / b 

It has an infinite number of solutions, and you need to add one constraint, the fact that the total probability mass is equal to 1, to be able to solve it. That constraint is

d(1)c(1) + d(2)(c(2) – c(1)) + d(3)(1 – c(2)) = 1,

that is, 

d(1)(b-1) + d(2)(b^2 – 2b +1) + d(3)(1 – b^2 + b) = 1. 

Finally, the solution, in this case, is

d(1) = b^2 / (2b^3 – 4b^2 + 2b +1), d(2) = d(1) / b, and d(3) = d(1) / b^2.

If you cannot easily determine which polynomials yield the change points, or if you want to automate the method, you may as well try any two combinations of the potential polynomials (assuming you have two change points), and for each pair of polynomials (that is, for each pair of change point candidates) solve a similar linear system. You then plug the tentative equilibrium densities obtained for each pair of polynomials into the formula in the first fundamental theorem. Only one of them will satisfy the requirement that X and Y have the same density everywhere on [0, 1], and that is the solution.
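As a sanity check, here is a small Python sketch that solves this system numerically for the supergolden ratio; the way the equations are packed into a matrix is just one possible encoding.

```python
import numpy as np

# Supergolden ratio: the real root of b^3 - b^2 - 1 = 0.
roots = np.roots([1, -1, 0, -1])
b = roots[np.abs(roots.imag) < 1e-9].real[0]
c1, c2 = b - 1, b**2 - b                          # the two change points

# Equations d(1) = (d(3) + d(1))/b and d(2) = d(1)/b, plus the total-mass
# constraint; the third equation d(3) = d(2)/b is implied for this value of b.
A = np.array([[1 - 1/b, 0.0, -1/b],
              [-1/b, 1.0, 0.0],
              [c1, c2 - c1, 1 - c2]])
d = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(d)                                          # [1.3134..., 0.8962..., 0.6115...]
print(b**2 / (2*b**3 - 4*b**2 + 2*b + 1))         # closed form for d(1), as above
```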
