
Subsection 2.7.1 The doubling map

Let \(H\) denote the half-open, half-closed unit interval:

\begin{equation*} H = [0,1) = \{x\in\mathbb R: 0\leq x < 1\}. \end{equation*}

The doubling map \(d\) is the function \(d:H\to H\) defined by

\begin{equation*} d(x) = 2x \bmod 1. \end{equation*}
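Before analyzing the map, it can help to iterate it on a computer. Here is a minimal sketch in Python, using exact rational arithmetic to avoid the round-off that repeated doubling inflicts on floating point numbers; the names double and orbit are our own, not from the text.

from fractions import Fraction

def double(x):
    """The doubling map d(x) = 2x mod 1 on H = [0, 1)."""
    return (2 * x) % 1

def orbit(x, n):
    """The first n points of the orbit of x under d."""
    points = [x]
    for _ in range(n - 1):
        points.append(double(points[-1]))
    return points

print(orbit(Fraction(1, 5), 5))
# [Fraction(1, 5), Fraction(2, 5), Fraction(4, 5), Fraction(3, 5), Fraction(1, 5)]

Note that \(1/5\) returns to itself after four steps, so it lies on an orbit of period 4.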

A graph of the doubling map together with a typical cobweb plot starting at an irrational number is shown in Figure 2.7.1.

[Figure 2.7.1: The doubling map (image unavailable)]

As it turns out, the doubling map is particularly easy to analyze if we consider its effect on the binary representation of a number. Suppose that \(x\in H\) has binary representation

\begin{equation*} x = 0._{2}b_1 b_2 b_3 b_4 b_5 \cdots, \end{equation*}

where each \(b_i\) is a zero or a one. (In computer parlance, the \(b_i\)s are called the bits of the number.)

Now, the effect of \(d\) on the binary representation of \(x\) is simple:

\begin{equation*} d(x) = 0._{2}b_2 b_3 b_4 b_5 \cdots. \end{equation*}

That is, the effect of \(d\) is simply to shift the bits of \(x\) one place to the left, discarding the bit that shifts into the ones place.
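We can check this shift property numerically. The following sketch (our own code; the helper name bits is invented) extracts the first \(n\) bits of \(x\) by repeated doubling and confirms that the bits of \(d(x)\) are the bits of \(x\) shifted one place to the left.

from fractions import Fraction

def bits(x, n):
    """First n binary digits of x in [0, 1): double, read off the ones place."""
    digits = []
    for _ in range(n):
        x = 2 * x
        digits.append('1' if x >= 1 else '0')
        x = x % 1
    return ''.join(digits)

x = Fraction(1, 5)              # 1/5 = 0.00110011... in binary
print(bits(x, 8))               # 00110011
print(bits((2 * x) % 1, 7))     # 0110011 -- the same bits, shifted left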

Of course, some numbers have multiple binary representations. For example,

\begin{equation*} 0._{2}1 = 0._{2}0\overline{1} = \frac{1}{2}. \end{equation*}

The doubling map, though, is defined independently of the binary representation of the number. Thus, we would expect that the shift operation, when applied to different representations of the same number, should lead to the same result. For example, the shift operation applied to the two representations of \(1/2\) above yields

\begin{equation*} 0._{2}\overline{0} = 0 \quad \text{and} \quad 0._{2}\overline{1} = 1 \bmod 1 = 0. \end{equation*}

This observation makes it very easy to find orbits with specific properties. Suppose, for example, we want an orbit of period 3. Simply pick (almost) any number of the form

\begin{equation*} x = 0._{2}\overline{b_1 b_2 b_3}. \end{equation*}

The only caveat is that we can't have all the \(b_i\)s the same, for that would lead to either zero (which is fixed) or one (which is not in \(H\)). As a concrete example,

\begin{equation*} x = 0._{2}\overline{001} = \sum_{k=1}^{\infty}\frac{1}{8^k}=\frac{1}{7} \end{equation*}

has period 3. In fact, it's easy to verify that

\begin{equation*} \frac{1}{7} \to \frac{2}{7} \to \frac{4}{7} \to \frac{8}{7} \bmod 1 = \frac{1}{7} \end{equation*}

under the doubling map.
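More generally, a repeating block \(b_1 \cdots b_n\) corresponds to the rational \(B/(2^n-1)\text{,}\) where \(B\) is the integer with binary digits \(b_1 \cdots b_n\); this follows from the same geometric series computation as for \(1/7\text{.}\) A small sketch (the helper name periodic_point is ours):

from fractions import Fraction

def periodic_point(pattern):
    """The point 0.(pattern)(pattern)... in base 2, as an exact rational."""
    n = len(pattern)
    return Fraction(int(pattern, 2), 2**n - 1)

print(periodic_point('001'))   # 1/7, the period-3 point above
print(periodic_point('011'))   # 3/7, another period-3 point
print(periodic_point('01'))    # 1/3, which has period 2: 1/3 -> 2/3 -> 1/3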

Another nice feature of this representation is that there is a simple correspondence between the binary expansion of a number and its position in the unit interval. Every number with a binary expansion starting with a zero lies in the left half of the unit interval, while every number starting with a one lies in the right half. The first two bits of a number specify in which quarter of the interval the number lies; the first three bits specify in which eighth of the unit interval the number lies, as shown in Figure 2.7.2.

[Figure 2.7.2: Dyadic intervals (image unavailable)]

More generally, given \(n\in\mathbb N\text{,}\) we can break the unit interval up into \(2^n\) pieces of length \(1/2^n\) with endpoints \(i/2^n\) for \(i=0,1,\ldots,2^n\text{.}\) These are called dyadic intervals and their endpoints (numbers of the form \(i/2^n\)) are called dyadic rationals. The first \(n\) bits of a number specify in which \(n^{\text{th}}\) level dyadic interval that number lies. In fact, the left-hand endpoint of a dyadic interval has a terminating binary expansion which tells you exactly the first \(n\) bits of all the points in that interval.
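As a sketch (again with names of our own choosing), the level-\(n\) dyadic interval containing \(x\) is easy to compute: its index is \(i = \lfloor 2^n x\rfloor\text{,}\) whose binary digits are exactly the first \(n\) bits of \(x\).

from fractions import Fraction

def dyadic_interval(x, n):
    """Endpoints of the level-n dyadic interval [i/2^n, (i+1)/2^n) containing x."""
    i = int(x * 2**n)          # floor of 2^n x, since x >= 0
    return Fraction(i, 2**n), Fraction(i + 1, 2**n)

print(dyadic_interval(Fraction(1, 7), 3))
# (Fraction(1, 8), Fraction(1, 4)) -- the first three bits of 1/7 are 001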

Now, suppose that

\begin{equation*} x_1 = 0._{2}b_1 b_2 \cdots b_n b_{n+1} b_{n+2}\cdots \: \text{ and } \: x_2 = 0._{2}b_1 b_2 \cdots b_n b_{n+1}' b_{n+2}'\cdots. \end{equation*}

That is, the binary expansions of \(x_1\) and \(x_2\) agree through at least the \(n^{\text{th}}\) bit but potentially disagree after that. Then our geometric understanding of dyadic intervals allows us to see easily that

\begin{equation*} |x_1-x_2|\leq\frac{1}{2^n}. \end{equation*}

Of course, there's also a simple algebraic proof of this fact, based on the observation that the bits cancel for \(k\leq n\text{:}\)

\begin{align*} |x_1 - x_2| &= \left|\sum_{k=n+1}^{\infty}\frac{b_k-b_k'}{2^k}\right| \\ &\leq \sum_{k=n+1}^{\infty}\frac{|b_k-b_k'|}{2^k} \leq \sum_{k=n+1}^{\infty}\frac{1}{2^k} = \frac{1}{2^n}. \end{align*}
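As a quick sanity check of this estimate (just a sketch, not part of the argument), consider the most extreme pair sharing a given prefix: one tail of all zeros and one of all ones.

from fractions import Fraction

# Both points begin 0.101... in binary; the tails are all zeros and all ones.
x1 = Fraction(0b101, 8)                    # 0.101000... = 5/8
x2 = Fraction(0b101, 8) + Fraction(1, 8)   # 0.101111... = 3/4
print(abs(x1 - x2), Fraction(1, 2**3))     # 1/8 1/8 -- the bound is attained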