Posts Tagged ‘Furstenberg’

Coding Fractals by trees

November 15, 2010

Recently I’ve been editing a set of notes taken by Professor Kra during Furstenberg’s course last year. (Well…I should have done this last year >.<) One of the main ideas in the course was to build a correspondence between trees and fractal sets – which enables one to prove statements about dimensions and projections of fractals by looking at the corresponding trees. I want to sketch some highlights of the basic idea here.

Let Q=Q^{(n)} denote the unit cube in \mathbb{R}^{n}.

Let A\subset\mathbb{R}^{n} and x\in A. For t\in\mathbb{R}, consider the family
t(A-x)\cap Q.

Question: Does the limit exist as t\to\infty? (Here we consider the Hausdorff limit on compact subsets of Q.)

i.e. we ‘zoom in’ on the set around the point x, always cropping the set to the cube Q, and consider what the set ‘looks like’ as the strength of the magnifying glass approaches infinity.

If A is smooth (a curve or surface), then this limit exists and is a subspace intersected with Q.

Generally one should not expect the above limit to exist for fractal sets; however, if we weaken the question and ask for the limit along some sequence t_k\rightarrow\infty, then it’s not hard to see that self-similar sets have non-trivial limits. Hence in some sense fractal behavior is characterized by having non-trivial limit sets.

Definition: Let A\subset Q^{(n)},
A' is a mini-set of A if for some \lambda\geq 1 and u\in\mathbb{R}^{n}, A' =(\lambda A+u)\cap Q

A'' is a micro-set of A if there is sequence of minisets A'_{n} of A and A'' = \lim_{n\to\infty}A'_{n}

A is homogeneous if all of its microsets are minisets. As we shall see below, homogeneous fractals are ‘well-behaved’ in terms of dimensions.

Here comes a version of our familiar definition:

Definition: A set A\subset X has Hausdorff \alpha-measure 0 if for every \varepsilon > 0, there exists a covering of A\subset \bigcup_{n=1}^{\infty}B_{\rho_{n}} with \sum_{n}\rho_{n}^{\alpha} < \varepsilon. Note that if A has Hausdorff \alpha-measure 0, then the same holds for any exponent > \alpha.

Thus it makes sense to define:

Definition: The Hausdorff dimension of A is

\dim(A) = \inf\{\alpha>0\colon \text{Hausdorff } \alpha \text{ measure is } 0 \}.

Now come the trees. For A\subset[0,1] closed, consider the expansion in base 3 of numbers in A. In the first position, only certain digits appear among the numbers in A; following each such digit, only certain digits can appear in the next position, and so on. This defines a tree: the tree of shrinking triadic intervals that intersect the set A.

Definition: Let \Lambda be a finite set of symbols (which we will refer to as the alphabet and elements of \Lambda are letters of the alphabet).

A word w is any concatenation of finitely many symbols in \Lambda; the number of symbols in w is called the length of w, denoted by \ell(w).

A tree \tau in the alphabet \Lambda is a set of words satisfying
1. If uv\in\tau then u\in\tau.
2. If u\in\tau, then there exists a letter a\in\Lambda such that ua\in\tau.

Notation: if w\in\tau, denote \tau^{w} = \{v\colon wv\in\tau\}.
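To make the coding concrete, here is a small Python sketch (function names are mine, not from the notes): for the middle-thirds Cantor set, base-3 expansions avoid the digit 1, so the associated tree is exactly the set of finite words over {0, 2}, and the two tree axioms can be verified on a finite truncation.

```python
from itertools import product

def triadic_tree(depth, allowed={0, 2}):
    """Words (up to `depth`) of the base-3 tree for a set whose expansions
    use only `allowed` digits ({0, 2} codes the middle-thirds Cantor set)."""
    tree = [()]  # the empty word is the root
    for d in range(1, depth + 1):
        tree.extend(product(allowed, repeat=d))
    return tree

def is_tree(tau, alphabet={0, 1, 2}):
    """Check the two tree axioms on a finite truncation: every prefix of a
    word is a word, and every non-maximal word extends by some letter."""
    tau = set(tau)
    for w in tau:
        if w and w[:-1] not in tau:   # axiom 1 fails
            return False
    max_len = max(len(w) for w in tau)
    # axiom 2, checked below the truncation depth
    return all(any(w + (a,) in tau for a in alphabet)
               for w in tau if len(w) < max_len)

tau = triadic_tree(5)
assert is_tree(tau)
assert sum(1 for w in tau if len(w) == 5) == 2 ** 5  # 2 branches per level
```

The Cantor tree is 2-regular: every vertex has exactly the two children 0 and 2.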

Definition: A section \Sigma of \tau is a finite set of words for which there exists N such that if s\in\tau and \ell(s) \geq N, then there exists r\in\Sigma and a word w such that s = rw.

Definition: The Hausdorff dimension of a tree is defined by \dim(\tau)=\inf\{ \beta \ | \ \inf_{\Sigma}\{\sum_{r\in\Sigma} e^{-\beta\ell(r)}\} = 0 \} where \Sigma is any section of \tau.
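For a regular tree in which every word has exactly b extensions, the constant-depth-N sections give \sum_{r\in\Sigma} e^{-\beta\ell(r)} = b^N e^{-\beta N}, which tends to 0 exactly when \beta > \log b, so these sections locate the dimension at \log b. A bisection sketch with my own function names (I hedge on the normalization: matching this to the Hausdorff dimension of a base-p coded set involves measuring word lengths in units of \log p):

```python
import math

def section_ratio(branches, beta):
    """Ratio of consecutive level-section sums sum_r e^{-beta * l(r)} for a
    `branches`-regular tree: each extra level multiplies the number of
    words by `branches` and each weight by e^{-beta}."""
    return branches * math.exp(-beta)

def tree_dim(branches, tol=1e-9):
    """inf{beta : the level-section sums tend to 0}, found by bisection."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if section_ratio(branches, mid) < 1:  # sums decay geometrically to 0
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

assert abs(tree_dim(2) - math.log(2)) < 1e-6    # Cantor tree: log 2
assert abs(tree_dim(10) - math.log(10)) < 1e-6  # full decimal tree: log 10
```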

Theorem: The Hausdorff dimension of a tree equals the Hausdorff dimension of the set it corresponds to.

The proof merely goes through the definitions, hence we won’t present it here. It suffices to check that the dimension is unchanged if we only consider covers by balls of radius p^{-n} for any given p \in \mathbb{N}, which is indeed true.

Now let’s give a simple example to illustrate how to make use of this correspondence:

It is easy to prove \dim(X \times Y) \geq \dim(X) + \dim(Y); here we produce an example where the inequality is strict.

Let A \subseteq [0,1] be the set of numbers whose decimal expansion vanishes from the k_{2n}-th to the k_{2n+1}-th digits to the right of the decimal point, for all n.

To compute \dim(A), look at scales at which a small number of intervals will cover the set A. The number of intervals of size 10^{-k_{2n+1}} needed is less than 10^{k_{2n}}, so the covering sum with exponent \alpha is at most 10^{k_{2n}} \cdot 10^{-\alpha k_{2n+1}}, which goes to 0 for every \alpha > 0 as long as \frac{k_{2n}}{k_{2n+1}} goes to 0. So we just need k_{2n}/k_{2n+1}\to 0 to conclude that the Hausdorff dimension is 0.

Let B be the set of numbers whose decimal expansions vanish from the k_{2n-1}-th to the k_{2n}-th digits. By the same computation, \dim(B) = 0.

What about A\times B? The tree that corresponds to it is a regular tree in which every vertex has 10 branches: at each digit position, exactly one of the two coordinates has a free digit. So the Hausdorff dimension is 1.

Note: Here the structure of the tree gives you easy information on the set, but it is hard to look directly at the set and see this information.
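The branch count can be checked mechanically. As an illustration (my own concrete choices: k_n = (n+2)!, and half-open digit blocks as a convention), every decimal position beyond the first block forces exactly one of the two factors, so the product tree has 10 branches per level:

```python
import math

def k(n):
    # any rapidly growing sequence with k_{2n}/k_{2n+1} -> 0 works;
    # factorials are an arbitrary concrete choice for this sketch
    return math.factorial(n + 2)

def block_index(d):
    """Index m with k(m) <= d < k(m+1); positions below k(0) get -1."""
    if d < k(0):
        return -1
    m = 0
    while not (k(m) <= d < k(m + 1)):
        m += 1
    return m

def branches(d):
    """Children at depth d of the tree coding A x B: 10 choices for each
    factor whose digit is free, 1 for a forced digit.  A's digits are
    forced in even blocks, B's in odd blocks."""
    m = block_index(d)
    free_A = not (m >= 0 and m % 2 == 0)
    free_B = not (m >= 0 and m % 2 == 1)
    return (10 if free_A else 1) * (10 if free_B else 1)

# beyond the first block, every level of the product tree has 10 branches
assert all(branches(d) == 10 for d in range(k(0), 5000))
```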

Many theorems regarding dimensions of projections and intersections of fractal sets turn out to be simple to prove after we code the sets by trees.

On compact extensions

May 10, 2010

This is again a note on my talk in the Szemerédi’s theorem seminar, going through Furstenberg’s book. In this round, my part is to introduce compact extension.
Let \Gamma be an abelian group of measure preserving transformations on (X, \mathcal{B}, \mu), \alpha: (X, \mathcal{B}, \mu, \Gamma) \rightarrow ( Y, \mathcal{D}, \nu, \Gamma') be an extension map.
i.e. \alpha: X \rightarrow Y s.t. \alpha^{-1} sends \nu-null sets to \mu-null sets, and for each \gamma \in \Gamma (with corresponding \gamma' \in \Gamma'),

\gamma'\circ \alpha (x) = \alpha \circ \gamma (x)

Definition: A sequence of subsets (I_k) of \Gamma is a Folner sequence if |I_k| \rightarrow \infty and for any \gamma \in \Gamma,

\frac{| \gamma I_k \Delta I_k|}{|I_k|} \rightarrow 0
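For example, the intervals I_k = \{0, 1, \ldots, k-1\} form a Folner sequence in \mathbb{Z}: the symmetric difference \gamma I_k \Delta I_k consists of two blocks of size |\gamma|, so the ratio is 2|\gamma|/k \rightarrow 0. A quick check (function name is mine):

```python
def folner_ratio(gamma, k):
    """|gamma + I_k  Delta  I_k| / |I_k| for I_k = {0, 1, ..., k-1} in Z."""
    I = set(range(k))
    shifted = {gamma + n for n in I}
    return len(shifted ^ I) / len(I)  # ^ is symmetric difference

# the symmetric difference has 2|gamma| elements, so the ratio is 2|gamma|/k
assert folner_ratio(3, 1000) == 2 * 3 / 1000
assert folner_ratio(-5, 10**5) == 2 * 5 / 10**5
```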

Proposition: For any Folner sequence I = (I_k) of \Gamma and any f \in L^1(X), \displaystyle \frac{1}{|I_k|} \sum_{\gamma \in I_k} \gamma f converges weakly to the orthogonal projection of f onto the subspace of \Gamma-invariant functions. (Denoted P(f), where P: L^2(X) \rightarrow L^2_{inv}(X).)

Proof: Let \mathcal{H}_0 = P^{-1}(\bar{0}) = (L^2_{inv}(X))^\bot
For all \gamma \in \Gamma,

\gamma (L^2_{inv}(X)) \subseteq L^2_{inv}(X)

Since \Gamma is \mu-preserving, \gamma is unitary on L^2(X). Therefore we also have \gamma( \mathcal{H}_0) \subseteq \mathcal{H}_0.

For f \in \mathcal{H}_0, suppose there is subsequence (n_k) where \displaystyle \frac{1}{|I_{n_k}|} \sum_{\gamma \in I_{n_k}} \gamma (f) converges weakly to some g \in L^2(X).

By the property that \frac{| \gamma I_k \Delta I_k|}{|I_k|} \rightarrow 0, we have \gamma(g) = g for each \gamma \in \Gamma, so g is \Gamma-invariant, i.e. g \in (\mathcal{H}_0)^\bot.

However, since f \in \mathcal{H}_0, all \gamma(f) are in \mathcal{H}_0, hence g \in \mathcal{H}_0. Therefore g \in \mathcal{H}_0 \cap (\mathcal{H}_0)^\bot, so g=\bar{0}.

Recall: 1) X \times_Y X := \{ (x_1, x_2) \ | \ \alpha(x_1) = \alpha(x_2) \}.

i.e. fibred product w.r.t. the extension map \alpha: X \rightarrow Y.

2) For H \in L^2(X \times_Y X), \ f \in L^2(X),

(H \ast f)(x) = \int H(x_1, x_2) f(x_2) d  \mu_{\alpha(x_1)}(x_2)

Definition: A function f \in L^2(X) is said to be almost periodic if for all \varepsilon > 0, there exist g_1, \cdots, g_k \in L^2(X) s.t. for all \gamma \in \Gamma and almost every y \in Y,

\displaystyle \min_{1 \leq i \leq k} || \gamma (f) - g_i||_y < \varepsilon

Proposition: Linear combinations of almost periodic functions are almost periodic.

Proof: Immediate by taking all possible tuples of the g_i for each almost periodic function in the linear combination, corresponding to a smaller \varepsilon.

Definition: \alpha: (X, \mathcal{B}, \mu, \Gamma) \rightarrow ( Y, \mathcal{D}, \nu, \Gamma') is a compact extension if:

C1: \{ H \ast f \ | \ H \in L^\infty (X \times_Y X) \cap \Gamma_{inv} (X \times_Y X), f \in L^2(X) \} contains a basis of L^2(X).

C2: The set of almost periodic functions is dense in L^2(X)

C3: For all f \in L^2(X), \ \varepsilon, \delta > 0, there exists D \subseteq Y, \ \nu(D) > 1- \delta, \  g_1, \cdots, g_k \in L^2(X) s.t. for any \gamma \in \Gamma and almost every y \in Y, we have

\displaystyle \min_{1 \leq i \leq k} || \gamma (f)|_{\alpha^{-1}(D)} - g_i||_y < \varepsilon

C4: For all f \in L^2(X), \ \varepsilon, \delta > 0, there exists g_1, \cdots, g_k \in L^2(X) s.t. for any \gamma \in \Gamma, there is a set D \subseteq Y, \ \nu(D) > 1- \delta, for all y \in D

\displaystyle \min_{1 \leq i \leq k} || \gamma (f) - g_i||_y < \varepsilon

C5: For all f \in L^2(X), let \bar{f} \in L^1(X \times_Y X) where

\bar{f}: (x_1, x_2) \mapsto f(x_1) \cdot f(x_2)

Let I=(I_k) be a Folner sequence; then f=\bar{0} iff P \bar{f} = \bar{0}.

Theorem: All five definitions are equivalent.

Proof: “C1 \Rightarrow C2”

Since almost periodic functions are closed under linear combinations, it suffices to show that each element of the basis can be approximated arbitrarily well by almost periodic functions.

Let our basis be as given in C1.

For all H \in L^\infty (X \times_Y X) \cap \Gamma_{inv} (X \times_Y X), the associated linear operator \varphi_H: L^2(X) \rightarrow L^2(X), \ \varphi_H: f \mapsto H \ast f, is bounded. Hence it suffices to check H \ast f for a dense set of f \in L^2(X). We consider the set of all fiberwise bounded f, i.e. for all y \in Y, ||f||_y \leq M_y.

For all \delta > 0, we perturb H \ast f by multiplying it by the characteristic function of a set of measure at least 1- \delta to get an almost periodic function.

“C2 \Rightarrow C3”:

For any f \in L^2(X), there exists f' almost periodic with ||f-f'||< \frac{\varepsilon \sqrt{\delta}}{2}. Let \{ g_1, g_2, \cdots, g_{k-1} \} be the functions obtained from the almost periodicity of f' with constant \varepsilon/2, and set g_k = \bar{0}.

Let D = \{ y \ | \ ||f-f'||_y < \varepsilon/2 \}, since

|| f - f'||^2 = \int ||f-f'||_y^2 d \nu(y)

Hence ||f-f'||< \frac{\varepsilon \sqrt{\delta}}{2} \ \Rightarrow \ ||f-f'||^2 < \frac{\varepsilon^2 \delta}{4}, so \{ y \ | \ ||f-f'||_y \geq \varepsilon/2 \} has measure less than \delta, and therefore \nu(D) > 1- \delta.

For all \gamma \in \Gamma, if y \in \gamma^{-1}(D) then

|| \gamma f|_{\alpha^{-1}(D)} - \gamma f'||_y  = ||f|_{\alpha^{-1}(D)} - f'||_{\gamma(y)} < \varepsilon /2

Hence \displaystyle \min_{1 \leq i \leq k-1} ||\gamma f|_{\alpha^{-1}(D)} - g_i||_y < \varepsilon /2 + \varepsilon /2 = \varepsilon

If y \notin \gamma^{-1}(D) then f|_{\alpha^{-1}(D)} vanishes on \alpha^{-1}(\gamma y), so that || \gamma f|_{\alpha^{-1}(D)} - g_k||_y = 0 < \varepsilon.

Hence \alpha satisfies C3.

“C3 \Rightarrow C4”:

This is immediate since for all y \in \gamma^{-1}(D), we have \gamma f = \gamma f|_{\alpha^{-1}(D)} on \alpha^{-1}(y) hence

\displaystyle \min_{1 \leq i \leq k} ||\gamma f - g_i||_y \leq \min_{1 \leq i \leq k-1} ||\gamma f|_{\alpha^{-1}(D)} - g_i||_y < \varepsilon

\nu(\gamma^{-1}(D)) = \nu(D) > 1-\delta. Hence \alpha satisfies C4.

“C4 \Rightarrow C5”:

For all f \in L^2(X), \ \varepsilon, \delta > 0, by C4, there exists g_1, \cdots, g_k \in L^2(X) s.t. for any \gamma \in \Gamma, there is a set D \subseteq Y, \ \nu(D) > 1- \delta, for all y \in D

\displaystyle \min_{1 \leq i \leq k} || \gamma (f) - g_i||_y < \varepsilon

W.L.O.G. we may suppose all g_i are bounded, since by making \delta slightly larger we can modify the unbounded parts to be bounded.

Note that \bar{g_j} \otimes g_j \in L^\infty(X \times_Y X). Suppose P(\bar{f}) = 0.

Recall in C5 we have \bar{f}: (x_1, x_2) \mapsto f(x_1) \cdot f(x_2), and \displaystyle P \bar{f}(x_1, x_2) = \lim_{k \rightarrow \infty} \frac{1}{|I_k|} \sum_{\gamma \in I_k} f(\gamma x_1) \bar{ f(\gamma x_2)}.

For each 1 \leq j \leq k, we have \int (\bar{g_j} \otimes g_j) \cdot P \bar{f} d(\mu \times_Y \mu) = 0

Hence we have \displaystyle \lim_{i \rightarrow \infty} \frac{1}{|I_i|} \sum_{\gamma \in I_i} \int (\bar{g_j(x_1)} g_j(x_2)) \cdot \gamma f(x_1) \bar{\gamma f(x_2)} d\mu \times_Y \mu(x_1, x_2) = 0

\Rightarrow \displaystyle \lim_{i \rightarrow \infty} \frac{1}{|I_i|} \sum_{\gamma \in I_i} \int | \int \bar{g_j(x)} \gamma f(x) d\mu_y(x)|^2 d \nu(y) = 0

\Rightarrow \displaystyle \lim_{i \rightarrow \infty} \frac{1}{|I_i|} \sum_{\gamma \in I_i} \{ \sum_{j=1}^k \int | \int \bar{g_j(x)} \gamma f(x) d\mu_y(x)|^2 d \nu(y) \} = 0

Hence for large enough i, there exists \gamma \in I_i s.t. \sum_{j=1}^k \int | \int \bar{g_j(x)} \gamma f(x) d\mu_y(x)|^2 d \nu(y) is as small as we want.

We may find D' \subseteq Y with \nu(D') > 1-\delta s.t. for all y \in D' and for all j, we have

| \int \bar{g_j(x)} \gamma f(x) d\mu_y(x)|^2 < \varepsilon^2

On the other hand, by construction there is j with || \gamma f - g_j||^2_y < \varepsilon^2 for all y \in D, with \nu(D) > 1-\delta .

Hence for y \in D \cap D', \ ||f||_{\gamma'^{-1}(y)}^2 = || \gamma f||_y^2 < 3 \varepsilon^2.

Letting \varepsilon \rightarrow 0 and \delta \rightarrow 0, we get f = \bar{0}. Hence C5 holds.

“C5 \Rightarrow C1”

Let f \in L^2(X) be orthogonal to all such functions; we will show f = \bar{0}. Let (I_k) be a Folner sequence.

Define \displaystyle H(x_1, x_2) := \lim_{i \rightarrow \infty} \frac{1}{|I_i|}\sum_{\gamma \in I_i} \gamma f(x_1) \cdot \gamma f(x_2) = P \bar{f}(x_1, x_2)

Let H_M(x_1, x_2) be equal to H whenever |H(x_1, x_2)| \leq M and 0 otherwise.

H is \Gamma-invariant \Rightarrow \ H_M is \Gamma-invariant and bounded.

Therefore f \bot H_M \ast f, i.e.

\int \bar{f(x_1)} \{ \int H_M(x_1, x_2) f(x_2) \ d \mu_{\alpha(x_1)}(x_2) \} d \mu(x_1) = 0

Since \mu = \int \mu_y d \nu(y), we get

\int \bar{f} \otimes f \cdot H_M \ d \mu \times_Y \mu = 0

Hence H_M \bot (\bar{f} \otimes f). For all \gamma \in \Gamma, \ \gamma (\bar{f} \otimes f) \bot \gamma H_M = H_M.

Since H = P \bar{f} is an average of \gamma (\bar{f} \otimes f), we get H \bot H_M. Then
0 = \int \bar{H} \cdot H_M = \int |H_M|^2 \ \Rightarrow \ H_M = \bar{0} for all M.

Hence H = \bar{0}. By C5, we obtain f = \bar{0}. Hence \{ H \ast f \ | \ H \in L^\infty (X \times_Y X) \cap \Gamma_{inv} (X \times_Y X), f \in L^2(X) \} contains a basis for L^2(X).

Definition: Let H be a subgroup of \Gamma, \alpha: (X, \mathcal{B}, \mu, \Gamma) \rightarrow ( Y, \mathcal{D}, \nu, \Gamma') is said to be compact relative to H if the extension \alpha: (X, \mathcal{B}, \mu, H) \rightarrow ( Y, \mathcal{D}, \nu, H') is compact.

Notes for my lecture on multiple recurrence theorem for weakly mixing systems – Part 2

February 19, 2010

Now we can start to prove the multiple recurrence theorem in the weak mixing case. Again the material is from Furstenberg’s book ‘Recurrence in Ergodic Theory and Combinatorial Number Theory’ which, very unfortunately, is hard to find a copy of.

Definition: A sequence (x_i) \subseteq X converges in density to x if there exists Z \subseteq \mathbb{N} of density 0 s.t. for every neighborhood U of x, \exists N \in \mathbb{N} with \{ x_n \ | \ n \geq N, \ n \notin Z \} \subseteq U.

We denote (x_n) \rightarrow_D \ x.
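A toy example (mine, not from the book): the sequence that is 1 on the perfect squares and 0 elsewhere converges in density to 0, since the squares have density 0, though it does not converge in the usual sense. A crude numerical check:

```python
import math

def exceptional_density(x, limit, eps, N=10**5):
    """Density up to N of the exceptional set {n <= N : |x_n - limit| >= eps};
    convergence in density means this tends to 0 as eps is fixed, N grows."""
    bad = [n for n in range(1, N + 1) if abs(x(n) - limit) >= eps]
    return len(bad) / N

# x_n = 1 on the perfect squares (a density-0 set), 0 elsewhere
x = lambda n: 1.0 if math.isqrt(n) ** 2 == n else 0.0

# roughly sqrt(N)/N of the first N terms are exceptional
assert exceptional_density(x, 0.0, 0.5) < 0.01
```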

Theorem: For (X, \mathcal{B}, \mu, T) weakly mixing and any f_0, f_1, \cdots, f_k \in L^\infty(X), we have

\int f_0(x) f_1(T^n(x)) f_2(T^{2n}(x)) \cdots f_k(T^{kn}(x)) \ d \mu

\rightarrow_D \int f_0 \ d \mu \int f_1 \ d \mu \cdots \int f_k \ d \mu as n \rightarrow \infty

In particular, this implies for any A \in \mathcal{B} with \mu(A)>0, by taking f_0 = f_1 = \cdots = f_k = \chi_A we have
\mu(A \cap T^{-n}(A) \cap \cdots \cap T^{-kn}(A)) \rightarrow_D \mu(A)^{k+1} > 0.
Hence we may pick N \in \mathbb{N} for which
\mu(A \cap T^{-N}(A) \cap \cdots \cap T^{-kN}(A)) > 0.

This establishes the multiple recurrence theorem.

To prove the theorem we need the following:

Lemma 1: Let (f_n) be a bounded sequence in a Hilbert space \mathcal{H}. If \langle f_{n+m}, f_n \rangle \rightarrow_D a_m as n \rightarrow \infty and a_m \rightarrow_D 0 as m \rightarrow \infty, then (f_n) converges weakly in density to \overline{0}.

In order to prove lemma 1, we need:

Lemma 2: Given \{ R_q \ | \ q \in Q \} a family of density 1 subsets of \mathbb{N}, indexed by a density 1 set Q \subseteq \mathbb{N}. Then for all S \subseteq \mathbb{N} of positive upper density and all k \geq 1, there exists \{ n_1, n_2, \cdots, n_k \} \subseteq S, n_1 < n_2 < \cdots < n_k, such that whenever i < j, we have n_j - n_i \in Q and n_i \in R_{n_j-n_i}.

Proof of lemma 2: We’ll show this by induction. For k=1, since there are no pairs i < j, the statement is vacuous.

We’ll proceed by induction: Suppose for k>1, there exists S_k \subseteq S of positive upper density and integers m_1< \cdots < m_k such that (S_k+m_1) \cup (S_k+m_2) \cup \cdots \cup (S_k+m_k) \subseteq S and for all j>i, m_j - m_i \in Q and S_k+m_i \subseteq R_{m_j-m_i}.

For k+1, we shall find S_{k+1} \subseteq S_k with positive upper density and m_{k+1}>m_k where S_{k+1}+m_{k+1} \subseteq S and for all 1 \leq i \leq k, m_{k+1} - m_i \in Q and S_{k+1}+m_i \subseteq R_{m_{k+1}-m_i}.

Let S_k^* = \{n \ | \ (S_k+n) \cap S_k has positive upper density \}.

Claim: S_k^* has positive upper density.

Since \overline{D}(S_k) = \epsilon >0, let N = \lceil 1/ \epsilon \rceil.

Hence there are at most N-1 translates of S_k that pairwise intersect in sets of density 0.

Let M < N be the largest number of such translates, and let p_1, \cdots, p_M be numbers where (S_k+p_i) \cap (S_k+p_j) has density 0,
i.e. (S_k+(p_j-p_i)) \cap S_k has density 0.

Therefore for any p>p_M, (S_k+p-p_i) \cap S_k has positive upper density for some i. Hence p-p_i \in S_k^*. S_k^* is syndetic with gaps bounded by 2 p_M, hence has positive upper density.

Pick \displaystyle m_{k+1} \in S_k^* \cap \bigcap_{i=1}^k(Q+m_i).

(Hence m_{k+1}-m_i \in Q for each 1 \leq i \leq k)

Let \displaystyle S_{k+1} = (S_k - m_{k+1})  \cap S_k \cap \bigcap_{i=1}^k (R_{m_{k+1} - m_i}-m_i).

(S_k - m_{k+1}) \cap S_k has positive upper density and \bigcap_{i=1}^k (R_{m_{k+1} - m_i}-m_i) has density 1, so S_{k+1} has positive upper density.

S_{k+1} and m_{k+1} satisfy the desired properties by construction. Hence we have finished the induction.

Proof of lemma 1:

Suppose not. Then there exist \epsilon > 0 and f \neq \overline{0} such that

S = \{ n \ | \ \langle f_n, f \rangle > \epsilon \} has positive upper density.

Let \delta = \frac{\epsilon^2}{2||f||^2} and let Q = \{m \ | \ a_m < \delta/2 \}, which has density 1.

\forall q \in Q, let R_q = \{ n \ | \ \langle f_{n+q}, f_n \rangle < \delta \}, which has density 1.

Applying lemma 2 to Q, \{R_q \}, and S, we get:

For all k \geq 1, there exists \{ n_1, n_2, \cdots, n_k \} \subseteq S, n_1 < n_2 < \cdots < n_k, such that whenever i<j, n_j - n_i \in Q and n_i \in R_{n_j-n_i}.

i) n_i \in S \ \Leftrightarrow \ \langle f_{n_i}, f \rangle > \epsilon

ii) n_i \in R_{n_j-n_i} \ \Leftrightarrow \ \langle f_{n_i}, f_{n_j} \rangle < \delta

Set g_i = f_{n_i} - \epsilon \cdot \frac{f}{||f||^2}. Hence

\forall \ 1 \leq i < j \leq k, \ \langle g_i, g_j \rangle = \langle f_{n_i} - \epsilon \frac{f}{||f||^2}, f_{n_j} - \epsilon \frac{f}{||f||^2}\rangle < \delta - 2\cdot \frac{\epsilon^2}{||f||^2} + \frac{\epsilon^2}{||f||^2} =  \delta - \frac{\epsilon^2}{||f||^2}= -\delta.

On the other hand, since (f_n) is bounded in \mathcal{H}, \ (g_i) is also bounded (independently of k). Suppose ||g_i||^2 < M for all i;
then we have
\displaystyle 0 \leq || \sum_{i=1}^k g_i ||^2 = \sum_{i=1}^k ||g_i ||^2 + 2 \cdot \sum_{i < j} \langle g_i, g_j \rangle \leq kM - k(k-1) \delta

For large k, \ kM - k(k-1) \delta<0, a contradiction.
Hence S must have density 0.
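The final step can be sanity-checked numerically: with ||g_i||^2 bounded by M and pairwise inner products below -\delta, the upper bound kM - k(k-1)\delta on ||\sum g_i||^2 eventually goes negative, contradicting non-negativity of the norm (the values of M and \delta below are illustrative, not from the proof):

```python
def norm_bound(k, M, delta):
    """Upper bound k*M - k*(k-1)*delta on ||g_1 + ... + g_k||^2 when
    ||g_i||^2 <= M and <g_i, g_j> <= -delta for all i < j."""
    return k * M - k * (k - 1) * delta

M, delta = 1.0, 0.01
assert norm_bound(10, M, delta) > 0    # small k: no contradiction yet
assert norm_bound(200, M, delta) < 0   # large k: the bound goes negative,
                                       # impossible for a squared norm
```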

Proof of the theorem:
By corollary 2 of the theorem in part 1, since T is weakly mixing, T^m is weakly mixing for all m \neq 0.
We proceed by induction on the number of functions l. For l=1, the statement is implied by lemma 2 in part 1.

Suppose the theorem holds for l \in \mathbb{N}, let f_0, f_1, \cdots, f_{l+1} \in L^\infty(X),

Let C = \int f_{l+1} \ d \mu, \ f'_{l+1}(x) = f_{l+1}(x) - C.

By induction hypothesis, \int f_0(x) f_1(T^n(x)) f_2(T^{2n}(x)) \cdots f_l(T^{ln}(x)) \cdot C \ d \mu

\rightarrow_D \int f_0 \ d \mu \int f_1 \ d \mu \cdots \int f_l \ d \mu \cdot C as n \rightarrow \infty

Hence it suffices to show \int f_0(x) f_1(T^n(x)) f_2(T^{2n}(x)) \cdots f_l(T^{ln}(x)) \cdot
f'_{l+1}(T^{(l+1)n}(x)) \ d \mu \rightarrow_D 0

So we may assume \int f_{l+1} \ d\mu = 0 (replacing f_{l+1} by f'_{l+1}).

For all n \in \mathbb{N}, set g_n (x)= f_1 \circ T^n(x) \cdot f_2 \circ T^{2n}(x) \cdots f_{l+1} \circ T^{(l+1)n}(x)

For each m \in \mathbb{N} and each 1 \leq i \leq l+1, let F^{(m)}_i (x)= f_i(x) \cdot f_i(T^{im}(x))

\langle g_{n+m}, g_n \rangle = \int (f_1(T^{n+m}(x)) \cdots f_{l+1}(T^{(l+1)(n+m)}(x))) \cdot (f_1(T^n(x)) \cdots f_{l+1}(T^{(l+1)n}(x))) \ d\mu
= \int F^{(m)}_1(T^n(x)) \cdots F^{(m)}_{l+1}(T^{(l+1)n}(x)) \ d \mu

Since T^{l+1} is measure preserving, we substitute y = T^{(l+1)n}(x),

= \int F^{(m)}_{l+1}(y) \cdot F^{(m)}_1(T^{-ln}(y)) \cdots F^{(m)}_l(T^{-n}(y)) \ d \mu

Applying the induction hypothesis to the weakly mixing transformation T^{-1} and re-enumerating the F^{(m)}_i, we get

\langle g_n, g_{n+m} \rangle \rightarrow_D ( \int F^{(m)}_1 \ d\mu) \cdots (\int F^{(m)}_{l+1} \ d\mu) as n \rightarrow \infty.

\int F^{(m)}_{l+1} \ d\mu = \int f_{l+1} \cdot f_{l+1} \circ T^{(l+1)m} \ d\mu

By lemma 2 in part 1, we have \int F^{(m)}_{l+1} \ d\mu \rightarrow_D 0 as m \rightarrow \infty.

We are now able to apply lemma 1 to (g_n), which gives (g_n) \rightarrow_D \overline{0} in the weak topology.

i.e. for any f_0, we have \int f_0(x) g_n(x) \ d \mu \rightarrow_D 0.

This establishes the induction step.

Remark: This works for any group of commuting weakly mixing transformations, i.e. if G is a group of measure preserving transformations, all non-identity elements of G are weakly mixing, and T_1, \cdots, T_k are distinct elements of G, then \int f_0(x) f_1(T_1^n(x)) f_2(T_2^n(x)) \cdots f_k(T_k^n(x)) \ d \mu \rightarrow_D \int f_0 \ d \mu \int f_1 \ d \mu \cdots \int f_k \ d \mu as n \rightarrow \infty.

Notes for my lecture on multiple recurrence theorem for weakly mixing systems – Part 1

February 15, 2010

So…it’s finally my turn to lecture in the ergodic theory seminar! (Background: our goal is to go through the ergodic proof of Szemerédi’s theorem as in Furstenberg’s book.) My part is the beginning of the discussion on weak mixing and the proof of the multiple recurrence theorem in the weak mixing case; the weak mixing assumption shall later be removed (hence the theorem is in fact true for any ergodic system), which proves Szemerédi’s theorem via the correspondence principle discussed in the last lecture.

Given two measure preserving systems (X_1, \mathcal{B}_1, \mu_1, T_1) and (X_2, \mathcal{B}_2, \mu_2, T_2), we denote the product system by (X_1 \times X_2, \mathcal{B}_1 \times \mathcal{B}_2, \mu_1 \times \mu_2, T_1 \times T_2) where \mathcal{B}_1 \times \mathcal{B}_2 is the smallest \sigma-algebra on X_1 \times X_2 including all products of measurable sets.

Definition: A m.p.s. (X, \mathcal{B}, \mu, T) is weakly mixing if (X \times X, \mathcal{B} \times \mathcal{B}, \mu \times \mu, T \times T) is ergodic.

Note that weak mixing \Rightarrow ergodic, as for a non-ergodic system we may take (an invariant set of intermediate measure) \times (the whole space) to produce an invariant set of intermediate measure in the product system.

For any A, B \in \mathcal{B}, let N(A, B) = \{ n \in \mathbb{N} \ | \ \mu(A \cap T^{-n}(B)) > 0 \}.

Ergodic \Leftrightarrow for all A,B with positive measure, N(A,B) \neq \phi

Weakly mixing \Rightarrow for all A,B,C,D with positive measure, N(A \times C, B \times D) \neq \phi.

Since n \in N(A \times C, B \times D)

\Leftrightarrow \ \mu \times \mu(A \times C \cap T^{-n}(B \times D)) > 0

\Leftrightarrow \ \mu \times \mu(A \cap T^{-n}(B) \times C \cap T^{-n}(D)) > 0

\Leftrightarrow \ \mu(A \cap T^{-n}(B)) > 0 and \mu(C \cap T^{-n}(D)) > 0

\Leftrightarrow \ n \in N(A,B) and n \in N(C,D)

Hence T is weakly mixing \Rightarrow for all A,B,C,D with positive measure, N(A,B) \cap N(C,D) \neq \phi. We’ll see later that this is in fact \Leftrightarrow but let’s say \Rightarrow for now.

As a toy model for the later results, let’s look at the proof of following weak version of ergodic theorem:

Theorem 1: Let (X, \mathcal{B}, \mu, T) be an ergodic m.p.s. and f, g \in L^2(X); then

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int f \cdot g \circ T^n \ d\mu = \int f \ d\mu \cdot \int g \ d\mu.

Proof: Let \mathcal{U}: f \mapsto f \circ T, \ \mathcal{U} is unitary on L^2(X).

Hence \{ \frac{1}{N+1} \sum_{n=0}^N g \circ T^n \ | \ N \in \mathbb{N} \} \subseteq \overline{B( \overline{0}, ||g||)}

Any weak limit point of the above set is T-invariant, hence ergodicity implies they must all be constant functions.

Suppose \lim_{i \rightarrow \infty} \frac{1}{N_i+1} \sum_{n=0}^{N_i} g \circ T^n \equiv c

then we have c = \int c \ d\mu = \lim_{i \rightarrow \infty} \frac{1}{N_i+1} \sum_{n=0}^{N_i} \int g \circ T^n \ d\mu = \int g \ d \mu

Therefore the set has only one limit point under the weak topology.

Since the closed unit ball in a Hilbert space is weakly compact, \frac{1}{N+1} \sum_{n=0}^N g \circ T^n converges weakly to the constant function \int g \ d \mu.

Therefore \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int f \cdot g \circ T^n \ d\mu

= \int f \cdot ( \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N g \circ T^n) \ d\mu

= \int f \cdot (\int g \ d\mu) d\mu = \int f \ d\mu \cdot \int g \ d\mu.
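A toy numerical illustration of theorem 1 (my own example, not from the book): for the irrational rotation Tx = x + \varphi \mod 1 with f = g = \cos 2\pi x, a direct computation gives \int f \cdot g \circ T^n \ d\mu = \frac{1}{2}\cos(2\pi n \varphi), and the Cesàro averages of these correlations tend to \int f \ d\mu \cdot \int g \ d\mu = 0:

```python
import math

phi = (math.sqrt(5) - 1) / 2   # golden-ratio rotation angle, irrational

def correlation(n):
    """Exact value of ∫ cos(2πx) cos(2π(x + nφ)) dx = cos(2πnφ)/2,
    via the product-to-sum formula."""
    return math.cos(2 * math.pi * n * phi) / 2

def cesaro(N):
    """Average (1/(N+1)) Σ_{n=0}^N ∫ f · g∘T^n dμ."""
    return sum(correlation(n) for n in range(N + 1)) / (N + 1)

# the averages converge to ∫f dμ · ∫g dμ = 0 · 0 = 0
assert abs(cesaro(10**5)) < 1e-2
```

The individual correlations do not tend to 0 (the rotation is not mixing); only their averages do, which is exactly what theorem 1 asserts.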

Next, we apply the above theorem on the product system and prove the following:

Theorem 2: For (X, \mathcal{B}, \mu, T) weakly mixing,

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N (\int f \cdot (g \circ T^n) \ d \mu - \int f \ d \mu \int g \ d \mu)^2 = 0

Proof: For f_1, f_2: X \rightarrow \mathbb{R}, let f_1 \otimes f_2: X \times X \rightarrow \mathbb{R} where f_1 \otimes f_2 (x_1, x_2) = f_1(x_1) f_2(x_2)

By theorem 1, we have

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N(\int f \cdot g \circ T^n \ d \mu)^2

= \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int (f \otimes f) \cdot ((g \otimes g) \circ (T \times T)^n) \ d \mu \times \mu

= (\int f \otimes f \ d\mu \times \mu) \cdot (\int g \otimes g \ d\mu \times \mu)

= (\int f \ d\mu)^2 (\int g \ d\mu)^2 \ \ \ \ ( \star )

Set a_n = \int f \cdot (g \circ T^n) \ d \mu, \ a = \int f \ d \mu \int g \ d \mu hence by theorem 1, we have

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N a_n = a;

By ( \star ), we have

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N a_n^2 = a^2

Hence \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N (a_n-a)^2

= \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N (a_n^2 - 2a \cdot a_n + a^2)

= \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N a_n^2 - 2a \cdot  \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N a_n + a^2

= a^2 - 2 a \cdot a + a^2 = 0

This establishes theorem 2.

We now prove that the following definition of weak mixing is equivalent to the original definition.

Theorem 3: (X, \mathcal{B}, \mu, T) weakly mixing iff for all (Y, \mathcal{D}, \nu, S) ergodic, (X \times Y, \mathcal{B} \times \mathcal{D}, \mu \times \nu, T \times S) is ergodic.

Proof: “\Leftarrow” is obvious: if (X, \mathcal{B}, \mu, T) has the property that its product with any ergodic system is ergodic, then (X, \mathcal{B}, \mu, T) is itself ergodic, since we can take its product with the one point system.
This in turn implies that the product of the system with itself, (X \times X, \mathcal{B} \times \mathcal{B}, \mu \times \mu, T \times T), is ergodic, which is the definition of weak mixing.

“\Rightarrow” Suppose (X, \mathcal{B}, \mu, T) is weakly mixing.

T \times S is ergodic iff all invariant functions are constant a.e.

For any g_1, g_2 \in L^2(X), \ h_1, h_2 \in L^2(Y), let C = \int g_1 \ d \mu and g_1' = g_1-C; hence \int g_1' \ d \mu = 0.

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int g_1 \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu

= \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int C \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu

+ \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int g_1' \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu

Since \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int C \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu

= C \cdot \int g_2 \ d \mu \cdot \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int h_1 \cdot (h_2 \circ S^n) \ d \nu

By theorem 1, since S is ergodic on Y, this equals

=\int g_1 \ d \mu \cdot \int g_2 \ d \mu \cdot \int h_1 \ d \nu \cdot \int h_2 \ d \nu

On the other hand, let a_n = \int g_1' \cdot (g_2 \circ T^n) \ d \mu, \ b_n = \int h_1 \cdot (h_2 \circ S^n) \ d \nu

By theorem 2, since T is weakly mixing, \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N (a_n - \int g_1' \ d\mu \cdot \int g_2 \ d \mu)^2 = 0; since \int g_1' \ d\mu = 0, this gives \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N a_n^2 = 0 \ \ \ (\ast)

(\sum_{n=0}^N a_n \cdot b_n)^2 \leq ( \sum_{n=0}^N a_n^2) \cdot ( \sum_{n=0}^N b_n^2) by the Cauchy–Schwarz inequality (or by direct computation: subtracting the left side from the right leaves a sum of squares).

Therefore (\frac{1}{N+1} \sum_{n=0}^N a_n \cdot b_n)^2 \leq (\frac{1}{N+1} \sum_{n=0}^N a_n^2) \cdot (\frac{1}{N+1} \sum_{n=0}^N b_n^2), which approaches 0 as N \rightarrow \infty by (\ast) and the boundedness of (b_n).

Therefore, \lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int g_1' \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu = 0

Combining the two parts we get

\lim_{N \rightarrow \infty} \frac{1}{N+1} \sum_{n=0}^N \int g_1 \cdot (g_2 \circ T^n) \ d \mu \cdot \int h_1 \cdot (h_2 \circ S^n) \ d \nu

= \int g_1 \ d \mu \cdot \int g_2 \ d \mu \cdot \int h_1 \ d \nu \cdot \int h_2 \ d \nu.

Since linear combinations of functions of the form f(x, y) = g(x)h(y) are dense in L^2(X \times Y) (in particular this set includes all characteristic functions of product sets, and hence all simple functions whose basic sets are product sets),

we have shown that for a dense subset of f \in L^2(X \times Y), the sequence of functions \frac{1}{N+1}\sum_{n=0}^N f(T^n(x), S^n(y)) converges weakly to the constant function \int f \ d \mu \times \nu (since it suffices to check convergence against a dense set of functionals in the dual space).

Hence for any f \in L^2(X \times Y), the average weakly converges to the constant function \int f \ d \mu \times \nu.

For a T \times S-invariant function, the averages all equal the function itself, hence the function is constant a.e.; thus all invariant functions are constant a.e., and we obtain ergodicity of the product system.

This establishes the theorem.

Kaufman’s construction

October 23, 2009

This is a note on R. Kaufman’s paper ‘An exceptional set for Hausdorff dimension’.

We construct a set D \subseteq \mathbb{R}^2 with \dim(D) = d < 1 and E \subseteq [0, \pi) with \dim(E) > 0 s.t. for all directions \theta \in E, \dim(\pi_\theta(D)) < d-\epsilon (i.e. the dimension of the projection of D in direction \theta is less than d-\epsilon).

\forall \alpha >1, let (n_j)_{j=1}^\infty be a rapidly increasing sequence of integers.

Define D_j = \{ (a, b)/n_j + \xi \ | \ a, b \in \mathbb{Z}, \ ||(a, b)|| \leq n_j; \ | \xi | \leq n_j^{- \alpha} \}

i.e. D_j = \bigcup \{ B((a,b)/n_j, 1/n_j^\alpha) \ | \ (a, b) \in \mathbb{Z}^2 \cap B( \overline{0}, n_j) \}

Let D = \bigcap_{j=1}^\infty D_j

Since \alpha > 1 and (n_j) is rapidly increasing, \dim(D) = 2 / \alpha.

Let c \in (0, 1) be fixed, define E' = \{ t \in \mathbb{R} \ | \ \exists positive integer sequence (m_{j_i})_{i=1}^\infty s.t. m_{j_i} < C_1 n_{j_i}^c, \ || m_{j_i} t || < C_2 m_{j_i} / n_{j_i}^\alpha \}

\forall t \in E', \ \forall i \in \mathbb{N}, \ \forall p =  (a, b)/n_{j_i} + \xi \in D_{j_i}, we have:

| \langle p, (1, t) \rangle - a/n_{j_i} - bt/n_{j_i} | \leq (1+|t|)/n_{j_i}^\alpha

Let b = q m_{j_i} + r where 0 \leq r < m_{j_i}, |q m_{j_i}| < C n_{j_i}

\exists z_{j_i} \in \mathbb{Z}, \ | z_{j_i}  | < C | n_{j_i} |, \ | \theta |<1

bt = qm_{j_i}t +rt = X + rt + q \theta ||m_{j_i}t||

A question by Furstenberg

September 25, 2009

Yesterday I was talking about some properties of different dimensions with Furstenberg. Somehow I mentioned Kakeya, and he told me about the following question he has been longing to solve (which amazingly has many similarities to Kakeya):

For a set S \subseteq \mathbb{R}^2, suppose \exists \delta>0 s.t. for every direction \theta, \exists a line l with direction \theta s.t. \dim (l \cap S) > \delta. Does it follow that \dim(S) \geq 1?

Note that a stronger conjecture would be that \dim(S) is at least 1+\delta, which when taking \delta = 1 would give a generalization of the 2-dimensional Kakeya problem (i.e. instead of requiring a line segment, we only require a 1-dimensional set in each direction).

Reviewing the proofs of the 2-dimensional Kakeya, I found they all rely on the fact that the line segment is connected… Hence it might be interesting to even find an answer to the following question:

If A \subseteq \mathbb{R}^2 contains a set of (1-dimensional Lebesgue) measure 1 in every direction, does it follow that \dim(A)=2?