%
\input amstex \documentstyle{amsppt} \magnification\magstep1 \redefine\gg{\gamma} \define\hh{\theta} \predefine\abscont{\ll} \redefine\ll{\lambda} \redefine\ss{\sigma} \define\ve{\varepsilon} \define\DD{\Delta} \define\GG{\Gamma} \define\LL{\Lambda} \define\a{\alpha} \predefine\barunder{\b} \redefine\b{\beta} \define\F{{\Cal F}} \define\Ft{{\Cal F}_t} \define\Int{\DOTSI\int_0^{\infty}} \define\PR{{\bold P}} \define\Px{{\bold P}^x} \define\Py{{\bold P}^y} \define\R{{\bold R}} \define\Rp{{\bold R}^+} \define\W{\Omega} \define\w{\omega} \define\({\left(} \define\){\DOTSX\right)} \define\[{\left[} \define\]{\right]} \def\litem"#1"{\par\leftskip3\parindent\noindent\hbox to0pt{\kern-3\parindent{\rm #1}\hss}\ignorespaces}%addition to roster_item
\def\conv#1{\overset{d^{#1}}\to\implies} \def\eqd#1{\overset{d^{#1}}\to=} \def\em#1{e^{-#1}} \def\intt{\int_0^t } \def\Rpp{\,]0,\infty[\,} \def\vf{\varphi} \def\Pz{\PR^0} \def\notequiv{\not\equiv} \def\sgn{\operatorname{sgn}} \def\var{\operatorname{Var}} \hfuzz2pt
\topmatter \affil University of California, San Diego\endaffil \address Department of Mathematics---0112, 9500 Gilman Drive, La Jolla CA 92093-0112\endaddress \email rgetoor\@ucsd.edu, mjsharpe\@ucsd.edu\endemail \thanks Research supported in part by NSF Grant DMS 91-01675\endthanks \author R\. K\. Getoor and M\. J\. Sharpe\endauthor \title Arc-sine laws for L\'evy processes\endtitle \subjclass 60J30\endsubjclass \keywords arc-sine law, L\'evy process, stable process\endkeywords \abstract Let $X$ be a L\'evy process on the real line, and let $F_c$ denote the generalized arc-sine law on $[0,1]$ with parameter $c$. Then $t^{-1}\int_0^t \Pz(X_s>0)\,ds\to c$ as $t\to\infty$ is a necessary and sufficient condition for $t^{-1}\int_0^t 1_{\{X_s>0\}}\,ds$ to converge in $\Pz$ law to $F_c$.
Moreover, if $X_t$ has a diffuse distribution for all $t>0$, $\Pz(X_t>0)=c$ for all $t>0$ is a necessary and sufficient condition for $t^{-1}\int_0^t 1_{\{X_s>0\}}\,ds$ under $\Pz$ to have law $F_c$ for all $t>0$. We show how to derive Spitzer's theorem for random walks in a simple way from the L\'evy process version. We also use the main theorem to derive distributional limit theorems for functionals of the form $t^{-1}\int_0^t f(X_s)\,ds$. \endabstract \endtopmatter \document
\outer\def\beginsection#1\par{\vskip0pt plus.2\vsize\penalty-250\vskip0pt plus-.2\vsize\bigskip\vskip\parskip \message{#1}\leftline{\bf#1}\nobreak\smallskip\noindent}
\beginsection 1. Introduction

In 1939, L\'evy \cite{L1},\cite{L2} proved his celebrated arc-sine laws for occupation times of Brownian motion and coin tossing. More explicitly, he showed that if $X=(X_t)$ is standard Brownian motion on $\R$ with $X_0=0$, then for each $t>0$, $\frac1t\intt 1_{\{X_s>0\}}\,ds$ has an arc-sine distribution, and if $(S_n)$ is the random walk with $S_0=0$ and $\PR(S_1=1)=\PR(S_1=-1)=1/2$, the distribution of $\frac1n\sum_{k=1}^n1_{\{S_k>0\}}$ tends to an arc-sine law as $n\to\infty$. These results have been the starting point for much research during the ensuing years. In the case of random walks, the theory culminated with Spitzer's theorem \cite{S, 7.1} which showed that if $(S_n)$ is a random walk starting at 0, then $n^{-1}\sum_{k=1}^n\PR(S_k>0)\to c$ as $n\to\infty$ is necessary and sufficient in order that the distribution of $n^{-1}\sum_{k=1}^n1_{\{S_k>0\}}$ converge to the {\it generalized arc-sine law with parameter\/} $c$; {\it viz.\/}, the distribution $F_c$ on $[0,1]$ defined as follows. For $0<c<1$, $F_c$ has the density $\pi^{-1}\sin(\pi c)\,x^{c-1}(1-x)^{-c}$ on $\,]0,1[\,$, while $F_0:=\ve_0$ and $F_1:=\ve_1$ are the unit masses at $0$ and $1$ respectively. For a symmetric stable process, it has long been known that for each $t>0$, $\frac1t\intt 1_{\{X_s>0\}}\,ds$ has distribution $F_{1/2}$, as in the Brownian case. The proof works just as well for any symmetric L\'evy process $X$ such that $X_1$ has a bounded (necessarily continuous) density.
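Both of L\'evy's classical statements are easy to observe numerically. The sketch below is purely illustrative (the Gaussian step law, the seed and all sample sizes are our own choices): it samples the fraction of strictly positive partial sums of a symmetric random walk and compares its empirical distribution function with $F_{1/2}(x)=(2/\pi)\arcsin\sqrt x$.

```python
import math
import random

def occupation_fraction(n_steps, rng):
    """Fraction of indices k = 1..n_steps with S_k > 0 for a Gaussian walk."""
    s, positive = 0.0, 0
    for _ in range(n_steps):
        s += rng.gauss(0.0, 1.0)
        if s > 0:
            positive += 1
    return positive / n_steps

def arcsine_cdf(x):
    """F_{1/2}(x) = (2/pi) * arcsin(sqrt(x)), the classical arc-sine law."""
    return 2.0 / math.pi * math.asin(math.sqrt(x))

rng = random.Random(1)
samples = sorted(occupation_fraction(200, rng) for _ in range(4000))

# Empirical CDF of the occupation fraction versus the arc-sine CDF.
for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    empirical = sum(1 for s in samples if s <= x) / len(samples)
    print(f"x={x:.2f}  empirical={empirical:.3f}  F_half={arcsine_cdf(x):.3f}")
```

The U-shape of $F_{1/2}$ is visible already at this sample size: occupation fractions near $0$ and $1$ are far more likely than fractions near $1/2$.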
The result for symmetric L\'evy processes such that $\Pz(X_1=0)=0$ is well-known, and may be obtained by observing that $s\to1_{\{X_s>0\}}$ has a discontinuity set of measure zero, and so it is Riemann integrable, hence the ``discrete'' arc-sine law for random walks \cite{F, XII--8} can be used to provide approximating Riemann sums which converge in distribution to $F_{1/2}$. Moreover, using (8.12) and (8.13) in \cite{F, Ch\. XII}, it follows similarly that if $X=(X_t)$ is strictly stable then $\frac1t\intt 1_{\{X_s>0\}}\,ds$ has distribution $F_c$ with $c:=\PR(X_t>0)$, which does not depend on $t$. See also \cite{D}. The main result of this paper is the exact analog for L\'evy processes of Spitzer's result. If $X=(X_t)$ is a L\'evy process on $\R$, the condition $$\lim_{T\to\infty}\frac1T\int_0^T\Pz(X_t>0)\,dt= c\tag1.3$$ is necessary and sufficient in order that the law, under~$\Pz$, of $\frac1T\int_0^T1_{\{X_t>0\}}\,dt$ converge to $F_c$ as $T\to\infty$. Aside from the cases mentioned above, the only result of this nature of which we are aware is for a spectrally negative, mean zero, L\'evy process in the domain of attraction of a spectrally negative stable process of index $\a$, $1<\a<2$, due to Bingham and Hawkes \cite{BH}. The main theorem is proved in \S2, and at the same time, we show that if $X_t$ has a diffuse distribution for all $t>0$, the stronger condition $$\frac1T\int_0^T\Pz(X_t>0)\,dt= c\qquad\text{for all $T>0$}\tag1.4$$ is necessary and sufficient in order that $\frac1T\int_0^T1_{\{X_t>0\}}\,dt$ have law $F_c$ for all $T>0$ under~$\Pz$. In particular, this is the case when $X$ is a strictly stable process, and in \S4 we evaluate the constant $c$ in terms of the parameters defining the stable process. In \S3 we show that Spitzer's theorem for random walks can be deduced rapidly from the compound Poisson case of the main theorem, thus giving a unified treatment of arc-sine laws for occupation times. 
In \S4 we show that (1.3) implies that for every starting point~$x$, the law of $\frac1T\int_0^T1_{\{X_t>0\}}\,dt$ under $\Px$ converges to $F_c$ as $T\to\infty$. We also evaluate the constant $c$ in (1.3) and (1.4) in a number of special cases. Finally in \S5 we consider some limit theorems for functionals of the form $\frac1T\int_0^T f(X_t)\,dt$ for a bounded Borel function $f$ which is not integrable. Under various conditions on $f$, we obtain results which generalize results of Davydov \cite{D} for stable processes. Our proof of the main result is very simple and direct. Other than standard facts about L\'evy processes, the only thing we use is Sparre Andersen's formula \cite{A} relating the generating functions of $p_n:=\PR(S_1>0,\dots,S_n>0)$ and $r_n:=\PR(S_n>0)$ for a random walk. See \cite{F, XII--7, Th\. 4}. The observation by P\. J\. Fitzsimmons that using this formula for the random walk with $\PR(S_1\in\,\cdot\,)$ given by $q\Int\em{qt}\Pz(X_t\in\,\cdot\,)\,dt$ leads quickly to the arc-sine law for a symmetric process was the starting point of our investigation. We wish to thank him for sharing his insights with us and also for numerous enlightening discussions during the course of this work.
\beginsection 2. Main theorem for L\'evy processes

Throughout this paper, $X=(\W,\F,\Ft,X_t,\hh_t,\Px)$ denotes the canonical realization of a L\'evy process. Thus $X$ is a Hunt process with stationary independent increments specified by $$\gather \Pz\(e^{i\xi X_t}\)=\em{t\psi(\xi)},\tag2.1\\ \psi(\xi)=ia\xi+\ss^2\xi^2/2+\int[1-e^{i\xi x}+i\xi x1_{\{|x|<1\}}]\, \nu(dx).\tag2.2 \endgather$$ Here $a\in\R$, $\ss^2\ge0$, and $\nu$ is a measure on $\R$, not charging $\{0\}$, such that $\int x^2(1+x^2)^{-1}\,\nu(dx)<\infty$. The measure $\nu$ is called the L\'evy measure of $X$ and $\psi$ is called the L\'evy exponent of $X$.
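Formulas (2.1)--(2.2) can be checked directly in the compound Poisson case. In the sketch below the jump distribution is taken to be standard normal purely for illustration; then, with $a=\ss^2=0$, the compensating term in (2.2) vanishes by symmetry and $\psi(\xi)=\ll(1-e^{-\xi^2/2})$, while conditioning on the number of jumps by time $t$ expresses the characteristic function as a Poisson mixture.

```python
import math

# Compound Poisson process with rate lam and standard normal jumps (an
# illustrative choice).  With a = sigma = 0 and nu = lam * N(0,1), (2.2) gives
#     psi(xi) = lam * (1 - exp(-xi**2 / 2)),
# and conditioning on the number of jumps by time t gives
#     E exp(i xi X_t) = sum_k e^{-lam t} (lam t)^k / k! * exp(-k xi^2 / 2).

def char_fn_by_conditioning(lam, t, xi, terms=80):
    """Characteristic function of X_t via the Poisson mixture.

    Real-valued, since the jump distribution is symmetric."""
    total = 0.0
    for k in range(terms):
        log_pois = -lam * t + k * math.log(lam * t) - math.lgamma(k + 1)
        total += math.exp(log_pois - k * xi * xi / 2.0)
    return total

def char_fn_by_exponent(lam, t, xi):
    """The right-hand side of (2.1), exp(-t * psi(xi))."""
    psi = lam * (1.0 - math.exp(-xi * xi / 2.0))
    return math.exp(-t * psi)

for xi in (0.5, 1.0, 2.0):
    a = char_fn_by_conditioning(2.0, 1.5, xi)
    b = char_fn_by_exponent(2.0, 1.5, xi)
    print(f"xi={xi}:  mixture = {a:.10f}   exp(-t psi) = {b:.10f}")
```

The agreement is exact up to truncation of the Poisson series; the same conditioning device reappears in \S3, where a random walk is embedded in a compound Poisson process.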
We write $(\mu_t)$ for the associated convolution semigroup of probability measures on $\R$ so that the transition function of $X$ is given by $$P_t(x,dy)=\mu_t(dy-x),$$ and we write $U^q(dx)$ for the resolvent measure given by $$U^q(B):=\Int\em{qt}\mu_t(B)\,dt,\qquad q\ge0.\tag2.3$$ The main theorem, (2.7) below, is trivially true when $X_t$ is a\.s\. constant (equivalently, $\psi\equiv0$, or $\mu_t\equiv\varepsilon_0$ for all $t\ge0$), and we rule out this case in all subsequent arguments. {\it Thus in the remainder of the paper, when we say\/} $X$ {\it is a L\'evy process, we mean a non-constant L\'evy process.\/} For future reference, we record the well-known fact that $\mu_t$ is diffuse for some (equivalently, all) $t>0$ if and only if at least one of the following two conditions holds: \roster \litem"(2.4)" (i) $\ss^2>0$; \litem"" (ii) $\nu(\R)=\infty$. \endroster (If $X$ is symmetric, this is the case if and only if $\mu_t(\{0\})=0$ for some $t>0$.) We begin with the following lemmas which are undoubtedly well-known. \proclaim{(2.5) Lemma} Let $X$ be a L\'evy process. If $K\subset\R$ is compact, then $\mu_t(K)\to0$ as $t\to\infty$. \endproclaim \demo{Proof} For every $t>0$, $$\int_{-\infty}^\infty e^{-x^2/2}\, \mu_t(dx)=\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty \int_{-\infty}^\infty e^{i\xi x}e^{-\xi^2/2}\,d\xi\,\mu_t(dx)=\frac1{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-t\psi(\xi)}e^{-\xi^2/2}\,d\xi.$$ Suppose first that $|e^{-\psi(\xi)}|\equiv1$, which happens only when $\mu_t=\varepsilon_{kt}$ for some constant $k$, non-zero by the hypothesis $\psi\notequiv0$. In this case, the result is clearly true. If $|e^{- \psi(\xi)}|\notequiv1$, then $\{\xi:|e^{-\psi(\xi)}|=1\}$ has the form $\{nh:n=0,\pm1,\pm2,\dots\}$ for some $h\ge0$, and so in this case $e^{-t\psi(\xi)}\to0$ for a.a. $\xi$ as $t\to\infty$. Consequently $\int_{-\infty}^\infty e^{-x^2/2}\,\mu_t(dx)\to0$ as $t\to\infty$, clearly yielding the conclusion of the lemma.
\enddemo \proclaim{(2.6) Lemma} $t\to \mu_t([0,\infty[\,)$ is upper semicontinuous in all cases, and there is a continuous function $t\to \rho(t)\le\mu_t(\,]0,\infty[\,)\le \mu_t(\,[0,\infty[\,)$ such that $\mu_t([0,\infty[\,)-\rho(t)\to0$ as $t\to\infty$. (Continuity of $t\to \mu_t([0,\infty[\,)$ may fail only if $(X_t)$ is a compound Poisson process with non-zero drift.)\endproclaim \demo{Proof} If $f$ is bounded and continuous, then $t\to\mu_t(f)=\PR^0 f(X_t)$ is continuous since $X$ is a Hunt process. It follows that $t\to\mu_t(\LL)$ is upper semicontinuous if $\LL$ is closed, proving the first assertion. If $(X_t)$ is not a compound Poisson process (with or without drift), then by (2.4), $\mu_t$ is diffuse for all $t>0$, and as $t\to\mu_t(\Rpp)$ is lower semicontinuous, continuity of $t\to \mu_t([0,\infty[\,)$ follows. If $X$ is compound Poisson without drift, say $X_t=S_{N(t)}$ where $(S_n)$ is a random walk generated by $\nu$ and $N(t)$ is an independent Poisson process with rate $\ll$, then $\Pz(X_t=0)=\em{\ll t}\sum_{k=0}^\infty (\ll t)^k/k!\PR(S_k=0)$ is clearly continuous in $t$, so that $t\to\mu_t([0,\infty[\,)=1-\mu_t(\,]-\infty,0]\,)+\mu_t(\{0\})$ is both upper and lower semicontinuous, hence continuous. Finally, let $\rho(t):=\Pz \phi(X_t)$, where $\phi(x):=1\land x1_{\{x\ge 0\}}$, a continuous function bounded above by $1_{\,]0,\infty[}(x)$. Then $\rho(t)$ is continuous and $\rho(t)\le\Pz(X_t>0)$, while $\Pz(X_t\ge0)-\rho(t)\le \Pz(0\le X_t\le 1)\to0$ by (2.5). \enddemo Let $Z_c$ denote a random variable with distribution $F_c$. In the sequel, the symbol $\conv x$ denotes convergence in law relative to $\Px$, and $\eqd x$ denotes equality in law relative to $\Px$. \proclaim{(2.7) Theorem} Let $X$ be a L\'evy process and let $A_t:=\int_0^t1_{\{X_s>0\}}\,ds$. Then the following are equivalent. \roster \item"(i)"$\lim_{t\to\infty}\frac1t\intt\mu_s(\Rpp)\,ds=c$. \vskip6pt \item"(ii)"$\lim_{q\to0}qU^q(\Rpp)=c$. 
\vskip6pt \item"(iii)"$\frac{A_{\b t}}{\b}\conv0 tZ_c$ as $\b\to\infty$ for each $t>0$. \endroster In addition, if $\mu_t$ is diffuse for all $t>0$, the following are equivalent to one another, but not to (i)--(iii). \roster \item"(iv)"$\frac1t\intt\mu_s(\Rpp)\,ds=c$ for each $t>0$. \vskip6pt \item"(v)"$qU^q(\Rpp)=c$ for each $q>0$. \vskip6pt \item"(vi)"${A_{t}}\overset{d^0}\to{=} tZ_c$ for each $t>0$. \endroster \endproclaim \noindent{\bf (2.8) Remarks:} (a) If $X$ is symmetric and $\mu_t(\{0\})=0$ for some (and hence all) $t>0$, then (iv) holds with $c=1/2$, and so for each fixed $t>0$, the law of $A_{t}$ under $\PR^0$ is the same as that of $tZ_{1/2}$. (b) If $X$ is symmetric but $\mu_t(\{0\})>0$, then by (2.5), (i) holds with $c=1/2$, and so the law of $A_{t}/t$ under $\PR^0$ converges to that of $Z_{1/2}$. (c) If $X$ is strictly stable with index $\a$, $0<\a\le2$, there is a constant $c$ which may be explicitly determined in terms of the L\'evy exponent function such that (iv) holds (see (4.8) and (4.9)), and then $A_t\overset{d^0}\to{=} tZ_c$ for each $t>0$. (d) Condition (iii) for one fixed $t>0$ is enough to imply (i) and (ii), and hence (iii) for all $t>0$. (e) If any of (i), (ii), (iii) is satisfied, then in fact convergence in (iii) takes place in $\Px$ distribution for all $x$. See \S4. (f) The assertion ${A_{\b t}}/{\b}\conv0 tZ_c$ as $\b\to\infty$ for each $t>0$ is equivalent to the assertion ${A_{t}}/{t}\conv0 Z_c$ as $t\to\infty$, as one may see by writing the second assertion in the form $\Pz\exp(-\ll A_t/t)\to\PR\exp(-\ll Z_c)$ as $t\to\infty$, and making a simple change of variables. (g) If $t_n^{-1}\int_0^{t_n}1_{\{X_s>0\}}\,ds$ has a limiting distribution $F$ as $n\to\infty$ for some sequence $t_n$ such that $t_{n+1}/t_n\to1$, then (i) holds with $c:=\int_0^1 x\,F(dx)$ and so $F=F_c$. (h) By (2.6), if $\mu_t$ is diffuse, $t\to\mu_t(\Rpp)$ is continuous, and so (iv) is equivalent to the condition $\mu_t(\Rpp)=c$ for each $t>0$.
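The Abelian half of the equivalence of (i) and (ii) can be seen numerically. In the sketch below, a toy profile $h(t)=0.4+0.3\sin t$ stands in for $t\mapsto\mu_t(\Rpp)$; it does not come from an actual L\'evy process, but it oscillates forever while its Ces\`aro mean is $c=0.4$, so the analogue of (i) holds without $h$ itself having a limit. In closed form $q\Int\em{qt}h(t)\,dt=c+0.3\,q/(1+q^2)\to c$ as $q\to0$, which the quadrature reproduces.

```python
import math

# Toy stand-in for t -> mu_t(]0,oo[): Cesaro mean c = 0.4, but no limit
# at infinity.  The values of h stay inside [0.1, 0.7], hence in [0, 1].
c = 0.4

def h(t):
    return c + 0.3 * math.sin(t)

def cesaro_mean(T, n=100_000):
    """Midpoint-rule approximation of (1/T) * int_0^T h(s) ds."""
    dt = T / n
    return sum(h((k + 0.5) * dt) for k in range(n)) * dt / T

def q_laplace(q, n=200_000):
    """Midpoint-rule approximation of q * int_0^oo e^{-qt} h(t) dt."""
    T = 40.0 / q                       # e^{-qT} ~ 4e-18: safe truncation
    dt = T / n
    return q * sum(math.exp(-q * (k + 0.5) * dt) * h((k + 0.5) * dt)
                   for k in range(n)) * dt

for q in (0.5, 0.1, 0.02):
    print(f"q={q:<5}  q-Laplace value = {q_laplace(q):.4f}   (c = {c})")
print(f"Cesaro mean over [0, 500] = {cesaro_mean(500.0):.4f}")
```

The converse implication (ii) $\Rightarrow$ (i) is the genuinely Tauberian direction and is where the Karamata theorem is needed in the proof.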
\demo{Proof} Since $t\to H(t):=\intt \mu_s(\Rpp)\,ds$ is a continuous increasing function with $H(0)=0$ and $$U^q(\Rpp)=\Int\em{qt}\,dH(t),$$ the equivalence of (i) and (ii) is an immediate consequence of the Karamata Tauberian Theorem \cite{BGT, Th\. 1.7.1}. The equivalence of (iv) and (v) amounts to the uniqueness of Laplace transforms. Now suppose (iii) holds for a fixed $t>0$. Since $\b^{-1}A_{\b t}\le t$, it follows that as $\b\to\infty$, $$(\b t)^{-1}\PR^0(A_{\b t})\to t^{-1}t\int_0^1 xF_c(dx)=c,$$ so (i) holds. Likewise, if (vi) holds, $\Pz A_t=t\PR Z_c=tc$ shows that (iv) holds. It remains only to prove that (i) implies (iii) and (iv) implies (vi), the most significant assertions of the theorem. We assume condition (i), and note parenthetically the modifications required if (iv) is assumed instead. Before proceeding with the main line of the argument, we shall eliminate some trivial cases. By (2.6), $t\to\mu_t([0,\infty[\,)$ is upper semicontinuous. Let $\LL_t$ denote the closed support of $\mu_t$, so that $\LL_{s+t}$ is the closure of $\LL_s+\LL_t$. If $\mu_{t_0}([0,\infty[\,)=1$ for some $t_0>0$, then $\LL_{t_0}\subset[0,\infty[\,$, and it follows that $\LL_t\subset[0,\infty[$ for all $t$ of the form $k2^{-n}t_0$, and hence $\mu_t([0,\infty[\,)=1$ for all $t>0$ by upper semicontinuity. Since $\mu_t(\{0\})\to 0$, this shows $\Pz|A_{\b t}/\b-t|=\frac1{\b}\int_0^{\b t} \mu_s(\{0\})\,ds\to 0$ as $\b\to\infty$, so (iii) holds with $c=1$. (If, in addition, $\mu_t(\{0\})=0$ for all $t>0$, we could conclude that $A_t=t$ a.s. $\PR^0$ for all $t>0$, and so (vi) holds with $c=1$.) The case $\mu_t(\,]-\infty,0])=1$ for some (and hence all) $t>0$ is similar, but with $c=0$. We may therefore assume from now on that $0<\mu_t(]0,\infty[)\le\mu_t([0,\infty[)<1$ for all $t>0$. Hence for each $q>0$, $qU^q$ is a probability measure on $\R$ with $0<qU^q(\Rpp)<1$. Fix $q>0$ for the moment and let $(S_n)$ denote a random walk on $\R$ with one-step transition probability $qU^q$ and $S_0=0$.
Let $p_0(q):=1$, and define for $n\ge1$ $$\align p_n(q):\,&=\PR\(S_1>0,S_2>0,\dots,S_n>0\),\tag2.9\\ &= \underset{x_1>0,\dots,x_n>0}\to{\int\dots\int} qU^q(dx_1)qU^q(dx_2-x_1)\dots qU^q(dx_n-x_{n-1})\\ r_n(q):\,&=\PR\(S_n>0\)=(qU^q)^{n*}(\Rpp),\tag2.10\endalign$$ where, for a finite measure $m$ on $\R$, $m^{n*}$ denotes the $n^{\text{th}}$ convolution power of $m$. Then (7.3), (7.18) and (7.20) of \cite{F, XII} combine to give $$\sum_{n=0}^\infty p_n(q)s^n=\exp\(\sum_{n=1}^\infty r_n(q)\frac{s^n}{n}\),\qquad|s|<1.\tag2.11$$ Note that by a simple induction argument, for every Borel set $J\subset\R$, one has $$(qU^q)^{n*}(J)=q^n\Int\!\dots\!\Int\em{q(t_1+\dots+t_n)}\mu_{t_1+\dots+t_n}(J)\,dt_1\dots dt_n.$$ Hence if $h(t):=\mu_t(\Rpp)$ and $s_k:=t_1+\dots+t_k$, $$\align r_n(q)&=q^n\Int\!\dots\!\Int\em{q(t_1+\dots+t_n)} h({t_1+\dots+t_n})\,dt_1\dots dt_n\\ &=q^n\Int ds_1\int_{s_1}^\infty\!\dots\!ds_{n-1}\int_{s_{n-1}}^\infty ds_n\em{qs_n}h(s_n)\\ &=q^n\Int ds_n\em{qs_n}h(s_n)\int_0^{s_n}ds_{n-1}\dots \int_0^{s_2}ds_1\\ &=\frac{q^n}{(n-1)!}\Int t^{n-1}\em{qt} h(t)\,dt.\endalign$$ As before, let $H(t):=\intt h(s)\,ds$. Then, integrating by parts and using $H(0)=0$, $H(t)\le t$, one finds $$\align r_n(q)&=\frac{q^n}{(n-1)!}\(q\Int t^{n-1}\em{qt}H(t)\,dt-(n-1)\Int t^{n-2}\em{qt}H(t)\,dt\)\\ &=\frac{1}{(n-1)!}\Int \frac qs H(\frac sq)\em s\(s^{n}-(n-1)s^{n-1}\)\,ds. \endalign $$ By (i), $\frac qs H(\frac sq)\to c$ as $q\to 0$, boundedly. Therefore $$\lim_{q\to 0} r_n(q)=\frac{c}{(n-1)!}(\GG(n+1)-(n-1)\GG(n))=c.\tag2.12$$ (Under (iv), we would have $H(t)=ct$ for all $t>0$, and so $r_n(q)=c$ for all $q>0$.) In view of (2.11), (2.12) yields $$\lim_{q\to 0}\sum_{n=0}^\infty p_n(q)s^n=\exp\(-c\,\log(1-s)\)=(1-s)^{-c},\quad |s|<1, \quad0\le c\le 1.\tag2.13$$ (Under (iv), (2.13) holds with the leading $\lim_{q\to0}$ excised.) We are now ready to prove (iii).
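Parenthetically, the limit (2.13) can be checked at the level of power-series coefficients. Differentiating (2.11) in $s$ gives the recurrence $np_n=\sum_{k=1}^n r_k\,p_{n-k}$; taking $r_n\equiv c$, as in the limit (2.12), the $p_n$ so produced must be the Taylor coefficients of $(1-s)^{-c}$, namely $c(c+1)\cdots(c+n-1)/n!$. A short exact-arithmetic sketch (the value $c=1/3$ is an arbitrary choice):

```python
from fractions import Fraction

# With r_n = c for every n, differentiating the exponential identity (2.11)
# gives n * p_n = c * (p_0 + ... + p_{n-1}), and (2.13) says the p_n are the
# Taylor coefficients of (1-s)^{-c}.  Exact rational arithmetic, c = 1/3:
c = Fraction(1, 3)

p = [Fraction(1)]                      # p_0 = 1
for n in range(1, 12):
    p.append(c * sum(p) / n)           # the recurrence above

binom = [Fraction(1)]                  # Taylor coefficients of (1-s)^(-c):
for n in range(1, 12):                 # b_n = b_{n-1} * (c + n - 1) / n
    binom.append(binom[-1] * (c + n - 1) / n)

print(p == binom)                      # -> True
```

The same computation with any $c\in[0,1]$ gives exact agreement, which is the coefficient-by-coefficient content of (2.13).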
A straightforward calculation using the Markov property leads for $n\ge 1$ to $$\align \Int\em{qt}\PR^0(A_t^n)\,dt&=\frac{n!}{q} \underset{x_1>0,\dots,x_n>0}\to{\int\dots\int} U^q(dx_1)U^q(dx_2-x_1)\dots U^q(dx_n-x_{n-1})\\ &=\frac{n!}{q^{n+1}} p_n(q).\tag2.14\endalign$$ Since $p_0(q)=1$, (2.14) is also valid for $n=0$. Therefore, if $\ll>0$, $$\align \PR^0\Int\em{qt}\exp(-\ll\b^{-1}A_{\b t})\,dt &=\b^{-1}\PR^0\Int\em{qt/\b}\exp(-\ll\b^{-1}A_t)\,dt\tag2.15\\ &=\b^{-1}\sum_{n=0}^\infty\frac{(-1)^n}{n!}\(\frac{\ll}{\b}\)^n \Int\em{qt/\b}\PR^0(A_t^n)\,dt\\ &=\b^{-1}\sum_{n=0}^\infty(-1)^n\(\frac{\ll}{\b}\)^n\(\frac{\b}{q}\)^{n+1}p_n(q/\b)\\ &=q^{-1}\sum_{n=0}^\infty p_n\({q}/{\b}\)\(-\frac{\ll}{q}\)^n.\endalign$$ Letting $\b\to\infty$ and using (2.13) yields, for $0<\ll<q$, $$\lim_{\b\to\infty}\PR^0\Int\em{qt}\exp(-\ll\b^{-1}A_{\b t})\,dt=q^{-1}\(1+\frac{\ll}{q}\)^{-c}=\Int\em{qt}\PR\exp(-\ll tZ_c)\,dt.\tag2.16$$ Condition (iii) will follow once it is shown that for all $\ll>0$ and $t>0$, $\Pz \exp(-\ll\b^{-1}A_{\b t}) \to\PR\exp(-\ll tZ_c)$ as $\b\to\infty$. Fix $\ll>0$, and define increasing functions $G_{\b}(t)$, $K(t)$ by $G_{\b}(t):=1-\Pz \exp(-\ll\b^{-1}A_{\b t})$, $K(t):=1-\PR\exp(-\ll tZ_c)$. Clearly $G_{\b}(0)=K(0)=0$. By (2.16), for $q>\ll$, $$\lim_{\b\to\infty}\Int\em{qt}G_{\b}(t)\,dt =\Int\em{qt} K(t)\,dt.\tag2.17$$ Integration by parts shows $$\lim_{\b\to\infty}q^{-1}\Int\em{qt}\,d_tG_{\b}(t) =q^{-1}\Int\em{qt} \,d_tK(t).\tag2.18$$ We see from the extended continuity theorem for Laplace transforms \cite{F, Th\. 2a, p\. 433} that the measures $d_tG_{\b}(t)$ converge in the weak${}^*$ sense to $d_tK(t)$. As the limit measure is continuous, it follows that $G_{\b}(t) \to K(t)$ as $\b\to\infty$, which proves (iii), and completes the proof of the first set of equivalences. (Under (iv), note that (2.16), (2.17) and (2.18) hold with the leading $\lim_{\b\to\infty}$ erased, and then, in the part of the argument following (2.16), each ``$\to$'' may be replaced with ``$=$''. The extended continuity theorem is not used---uniqueness of Laplace transform suffices.) \enddemo \beginsection 3.
Applications to random walk

We show how Spitzer's result may be derived in a very simple manner from (2.7). Let $S_0:=0$, $S_n:=X_1+\dots+ X_n$, where $(X_k)$ is an i\.i\.d\. sequence with $\PR(X_1\neq0)>0$. We assume $$\frac 1n\sum_{k=1}^n\PR(S_k>0)\to c\qquad\text{ as $n\to\infty$}.\tag3.1$$ Let $N(t)$ denote a Poisson process of rate 1 independent of $(S_n)$ so that $Y_t:=S_{N(t)}$ is a L\'evy process---more precisely, a compound Poisson process. Let $T_0:=0$ and $T_1$, $T_2$,\dots denote the successive arrival times for $N(t)$, and for $k\ge0$, let $U_k:=T_{k+1}-T_k$ denote the interarrival times, so that the $U_k$ are i\.i\.d\. with $\PR U_k=1$ and $\PR(U_k-1)^2=1$ and independent of the sequence $(S_k)$. Then $$\frac1n\sum_{k=1}^n1_{\{S_k>0\}}U_k=\frac1n\int_0^{T_{n+1}} 1_{\{Y_t>0\}}\,dt=\frac1n\int_0^n 1_{\{Y_t>0\}}\,dt+V_n,\tag3.2$$ where $|V_n|\le \frac1n|n-T_{n+1}|$. By the strong law of large numbers, $T_{n+1}/n\to1$ a\.s\. and in $L^1$ as $n\to\infty$, so $V_n\to0$ a\.s\. and in $L^1$. Therefore, taking expectations in (3.2) and letting $n\to\infty$, we see that (3.1) yields $\lim_{n\to\infty} n^{-1}\int_0^n\PR(Y_t>0)\,dt=c$. This clearly implies that (2.7--i) holds for $Y$, and so Theorem (2.7) asserts that $$\frac1n \int_0^n 1_{\{Y_t>0\}}\,dt\conv0 Z_c.\tag3.3$$ Combining this with (3.2) gives $$\frac1n\sum_{k=1}^n 1_{\{S_k>0\}} U_k\conv0 Z_c\quad\text{ as $n\to\infty$}.\tag3.4$$ But a trivial calculation using independence gives $$\PR(\frac1n\sum_{k=1}^n 1_{\{S_k>0\}} (U_k-1))^2=n^{-2}\sum_{k=1}^n \PR1_{\{S_k>0\}} \PR(U_k-1)^2\le n^{-1},$$ and so $\frac1n\sum_{k=1}^n 1_{\{S_k>0\}} (U_k-1)\to0$ in probability as $n\to\infty$. Hence, by (3.4), $$\frac1n\sum_{k=1}^n 1_{\{S_k>0\}}\conv0 Z_c\quad\text{ as $n\to\infty$},\tag3.5$$ proving the arc-sine law for the random walk $(S_n)$. That (3.5) implies (3.1) is clear. \beginsection 4.
Complements to the main theorem

We shall show that (2.7--iii) implies that for every $x$, $\b^{-1}A_{\b t}\conv x tZ_c$ as $\b\to\infty$. In fact, the convergence will turn out to be uniform in $(x,t)$ on compacts. The next lemma is almost surely well-known. \proclaim{(4.1) Lemma} For each compact $K\subset\R$, $\frac1{\b}\int_0^{\b t}1_K(X_s)\,ds\to0$ in $L^2(\Px)$ as $\b\to\infty$, for every $x\in\R$.\endproclaim \demo{Proof}Let $C_t:=\intt 1_K(X_s)\,ds$. As in (2.14), $$\Int\em{qt}\Px(C^2_t)\,dt=\frac{2}{q^{3}}b_2(q),\quad\text{where } b_2(q):=\int_K\!\int_K qU^q(dx_1-x) qU^q(dx_2-x_1).$$ If $L\subset\R$ is compact, $\mu_t(L)\to0$ as $t\to\infty$ implies that $qU^q(L)\to0$ as $q\to0$. Therefore, if $y\in K$, $qU^q(K-y) \le qU^q(K-K)\to0$, and it follows readily from this that $b_2(q)\to0$ as $q\to0$. Therefore Karamata's Tauberian theorem and the monotone density theorem \cite{BGT; 1.7.1, 1.7.2} imply that $t^{-2}\Px C_t^2\to0$ as $t\to\infty$. Consequently $t^{-1}C_t\to0$ in $L^2(\Px)$, and this yields (4.1).\enddemo \noindent{\bf (4.2) Remark:} Actually, using the ergodic theorem it is not difficult to see that the convergence in (4.1) takes place a\.s\. $\PR^x$ for every $x$, at least if $\mu_t$ is absolutely continuous for some $t>0$. \proclaim{(4.3) Theorem} Each of the conditions (i)--(iii) in (2.7) is equivalent to $\b^{-1} A_{\b t}\conv x tZ_c$ as $\b\to\infty$, and the convergence is uniform on compacts in the sense that if $g$ is a bounded continuous function, then $\Px g(\b^{-1} A_{\b t})\to\PR g(tZ_c)$ as $\b\to\infty$, uniformly for $(x,t)\in K\times L$, where $K\subset\R$ and $L\subset[0,\infty[$ are compact.\endproclaim \demo{Proof} Let $A^x_t:=\intt 1_{\{X_s>x\}}\,ds$ so $A_t=A^0_t$. Clearly $A=(A_t)_{t\ge 0}$ under $\Px$ has the same law as $A^{-x}=(A^{-x}_t)_{t\ge0}$ under $\PR^0$. Suppose $x>0$ for definiteness.
Then $$A^{-x}_t=\intt 1_{]-x,0]}(X_s)\,ds+A_t.$$ Now, for $x\le a$ and $t\le T$, $$\b^{-1}\int_0^{\b t} 1_{]-x,0]}(X_s)\,ds\le \b^{-1}\int_0^{\b T}1_{]-a,0]}(X_s)\,ds\to0\text{ as } \b\to\infty \tag4.4$$ in $\Pz$ probability by (4.1). Let $g$ be a bounded continuous function. Then $$\Px g(\b^{-1}A_{\b t})=\PR^0 g(\b^{-1}A_{\b t}) +\PR^0[ g(\b^{-1}A_{\b t}+\b^{-1}\int_0^{\b t}1_{]-x,0]}(X_s)\,ds)- g(\b^{-1}A_{\b t})].$$ Since the arguments of $g$ in the formula above lie in the interval $[0,T]$ for $t\le T$, we may suppose that $g$ is uniformly continuous. Combining this with (4.4) it is routine to see that (2.7--iii) implies that $$\lim_{\b\to\infty} \Px g(\b^{-1}A_{\b t})=\PR g(tZ_c)$$ uniformly for $0\le x\le a$ and $t\le T$. The case $x<0$ is handled similarly, proving (4.3). \enddemo Next we are going to evaluate the constant $c$ appearing in (2.7) under additional hypotheses on $X$. Of course, if $X$ is symmetric, then $c=1/2$. We shall need the following lemma, which is due to Croft. See \cite{BGT, Th\. 1.9.1}. \proclaim{(4.5) Lemma} Let $f\!\!:\,]0,\infty[\to\R$ be continuous and suppose there exists $c$ such that $\lim_{n\to\infty} f(nh)=c$ for each $h>0$. Then $\lim_{x\to\infty} f(x)=c$.\endproclaim It is trivial to see that (4.5) also applies when it is known that $f$ is the sum of a continuous function and one with limit 0 at infinity. \proclaim{(4.6) Proposition} Suppose $X_1$ has finite variance $\ss^2$ and mean $m$ (under $\PR^0$). Then $c=0$, $1/2$ or $1$ according as $m<0$, $m=0$ or $m>0$.\endproclaim \demo{Proof} If $h>0$, we have $\PR^0(X_h)=mh$ and $\var(X_h)=\ss^2 h$. Since $X_{nh}=\sum_{k=1}^n (X_{kh}-X_{(k-1)h})$ under $\PR^0$, one has $$\frac{X_{nh}-nmh}{\ss\sqrt{nh}}\conv0 N(0,1).$$ But $$\PR^0(X_{nh}>0)= \PR^0\(\frac{X_{nh}-nmh}{\ss\sqrt{nh}}>-\sqrt{n}\frac{m\sqrt{h}}{\ss}\)$$ and so $\mu_{nh}(\Rpp)\to0$, $1/2$ or $1$ according as $m<0$, $m=0$ or $m>0$. 
By (2.6), $t\to \mu_t(\Rpp)$ is the sum of a continuous function and a function having zero limit at infinity, and so the desired conclusion follows from (4.5). \enddemo Suppose next that $X=(X_t)$ is a strictly stable process of index $\a$, $0<\a\le 2$, $\a\neq1$. Then the exponent function $\psi$ may be written in the form $$\psi(\xi)= k|\xi|^{\a}\exp(i\vf\sgn \xi)\tag4.7$$ where $k>0$, $|\vf|\le\pi\a/2$ if $0<\a<1$ and $|\vf|\le (2-\a)\pi/2$ if $1<\a\le 2$. See \cite{F, p\. 581}. The density $p(t,x)$ of $X_t$ scales according to the rule $p(t,x)=t^{-1/\a}p(1,t^{-1/\a}x)$, and so $$\mu_t(\Rpp)=\Int p(t,x)\,dx=\Int p(1,x)\,dx.\tag4.8$$ Thus $c=\Int p(1,x)\,dx$, and this integral is clearly independent of the value of $k$ in (4.7). This integral has been calculated \cite{Z, \S2.6} in the form of a Mellin transform of a strictly stable density, and more directly in \cite{DI}, to the effect that $$c=c(\a,\vf):=\Int p(1,x)\,dx=1/2-\frac{\vf}{\pi\a}.\tag4.9$$ Note that if $0<\a<1$, then $c(\a,\vf)$ ranges over $[0,1]$, while if $1<\a<2$, $c(\a,\vf)$ ranges over $[1-\a^{-1},\a^{-1}]$. Now suppose that $X=(X_t)$ is a L\'evy process with $X_1$ in the domain of attraction of a strictly stable law with characteristic function $e^{-\psi}$, where $\psi$ is given by (4.7). We suppose in addition that $\PR^0(X_1)=0$ if $\a>1$. Then there exist positive constants $a_n\to\infty$ such that $X_n/a_n\conv0 Y_1$, where $Y=(Y_t)$ is the stable process with L\'evy exponent $\psi$. See \cite{F, Th\. 3, p\. 580}. It follows that $X_{nh}/a_n\conv0 Y_h$ for $h>0$. Therefore, using the notation of (4.8) and (4.9), $$\PR^0(X_{nh}>0)=\Pz(X_{nh}/a_n>0)\to\Int p(h,x)\,dx=c(\a,\vf).$$ Another appeal to (2.6) and (4.5) establishes the following result. \proclaim{(4.10) Proposition} For a process $X$ satisfying the conditions of the preceding paragraph, one has $\mu_t(\Rpp)\to1/2-\frac{\vf}{\pi\a}$ as $t\to\infty$.\endproclaim Next suppose $X=(X_t)$ is a stable process of index $\a=1$. 
Then $\psi$ has the form $$\psi(\xi)=-i\gg \xi+k|\xi|[1+i\frac{2\b}{\pi}\sgn\xi\log|\xi|]$$ where $\gg\in\R$, $k>0$ and $-1\le\b\le1$.
%Again, we may set $k=1$ without loss of generality.
Since $t\psi(\xi)=\psi(t\xi)-i \frac{2\b}{\pi} k\xi t\log t$ for $t>0$, $X_t$ has the same law as $tX_1+ \frac{2\b}{\pi} kt\log t$ under $\Pz$. Therefore $\Pz(X_t>0)=\Pz(X_1>-\frac{2\b}{\pi} k\log t)$, and if $\b\neq0$, this approaches $1$ or $0$ according as $\b>0$ or $\b<0$. If $\b=0$ and $\gg=0$, $X$ is a symmetric Cauchy process, and this is covered by (2.8a). If, however, $\b=0$ and $\gg\neq0$, then $X_1-\gg$ is a symmetric Cauchy variable with scale parameter $k$, and so $\Pz(X_t>0)=\Pz(X_1>0)=\Pz(X_1-\gg>-\gg)=1/2+\pi^{-1}\arctan(\gg/k)$. Thus the Cauchy process with drift can lead to an arc-sine law with any preassigned parameter $c\in]0,1[$ by suitable choice of the drift parameter $\gg$. If $X_1$ is in the domain of {\it normal\/} attraction of a stable law $Y$ of index $\a>1$, then the norming constants $a_n$ satisfy $a_n\sim bn^{1/\a}$ as $n\to\infty$, where $b>0$. In this situation, as in the finite variance case (4.6), it follows that if $\PR(Y)\neq0$, then $\mu_t(\Rpp)\to0$ or $1$ as $t\to\infty$ according as $\PR(Y)<0$ or $\PR(Y)>0$. Perhaps it is worthwhile to re-write (4.9) when $\psi$ is written in a more familiar form ($\a\neq 1$) $$\psi(\xi)=k|\xi|^{\a}(1+i\b\sgn(\xi)\tan\frac{\pi\a}{2})$$ where $|\b|\le 1$. Then $\vf=\arctan(\b\tan(\pi\a/2))$ if $0<\a<1$, and $\vf=\arctan(-\b\tan(\pi-\pi\a/2))$ if $1<\a\le 2$. In particular, letting $c^*(\a,\b):=c(\a,\vf)$, one has $c^*(\a,1)=0$, $c^*(\a,-1)=1$ when $0<\a<1$, and $c^*(\a,1)=1/\a$, $c^*(\a,-1)=1-1/\a$ when $1<\a<2$. Since $\b=1$ corresponds to a spectrally negative process, this is in accordance with the results in \cite{BH}. In all of the above situations, $\mu_t(\Rpp)$ itself approaches a limit.
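The drifted Cauchy computation is easy to check by simulation. The sketch below is illustrative only ($\gg$, $k$, the seed and the sample size are arbitrary choices): with $\b=0$, $X_1-\gg$ is a symmetric Cauchy variable of scale $k$, so $\Pz(X_1>0)=1/2+\pi^{-1}\arctan(\gg/k)$, and standard Cauchy variables can be sampled by inverse transform.

```python
import math
import random

# Monte Carlo check of the drifted Cauchy case (beta = 0).  With
# psi(xi) = -i*gamma*xi + k*|xi|, X_1 - gamma is symmetric Cauchy of
# scale k, so P(X_t > 0) = P(X_1 > 0) = 1/2 + arctan(gamma/k)/pi for all t.
gamma, k = 1.0, 2.0
target = 0.5 + math.atan(gamma / k) / math.pi

rng = random.Random(7)
n = 200_000
# Standard Cauchy samples via the inverse CDF: tan(pi * (U - 1/2)).
hits = sum(1 for _ in range(n)
           if gamma + k * math.tan(math.pi * (rng.random() - 0.5)) > 0)
print(f"empirical = {hits / n:.4f}   1/2 + arctan(gamma/k)/pi = {target:.4f}")
```

By Theorem (2.7), this constant is then the arc-sine parameter $c$ for the occupation time of $\Rpp$.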
On the other hand, examples (g) and (h) of \cite{F, XVII.9} are easily modified to provide an example of a L\'evy process for which the set of accumulation points of $t\to\mu_t(\Rpp)$ at infinity is the entire unit interval. Although we do not have an example for which (2.7--i) fails, it appears extremely unlikely that it is always true. There is another interesting class of examples in which $c$ may be evaluated. Let $X$ be a L\'evy process for which $\mu_t(\Rpp)\to c$ as $t\to\infty$. Let $T(t)$ be a subordinator (i.e., a L\'evy process with non-decreasing sample paths) which is independent of $X$. Then $Y_t:=X_{T(t)}$ is a L\'evy process. If $(\eta_t)$ denotes the semigroup of $Y$ and $(\pi_t)$ the semigroup of $T$, then $\eta_t=\Int \mu_s\,\pi_t(ds)$, and in particular, $$\eta_t(\Rpp)=\Int\mu_s(\Rpp)\,\pi_t(ds).$$ If $\pi_t\notequiv\varepsilon_0$, then by (2.5), $\pi_t([0,a])\to 0$ as $t\to\infty$ for each $a<\infty$. Since each $\pi_t$ is a probability on $[0,\infty[$, it follows readily that $\eta_t(\Rpp)\to c$ as $t\to\infty$. Note the agreement with (4.10) if $T$ is a stable subordinator of index $\gg$, $0<\gg<1$, and $X_1$ is in the domain of attraction of a stable law of index $\a$.
\beginsection 5. Related limit distributions

Let $X$ be a L\'evy process satisfying condition (2.7--i) so that the results of the preceding sections apply. We study in this section the limits in distribution, as $t\to\infty$, of functionals having the form $t^{-1}\intt f(X_s)\,ds$, with $f$ a bounded, not necessarily integrable, function on~$\R$. The special case in which $X$ is a stable process has been treated in \cite{D}. (In the case of integrable $f$, one could expect the Darling-Kac theory to apply, but with normalizations other than $t^{-1}$. See \cite{DK} or \cite{BGT, 8.11.1}.) To begin with, let $f$ be a bounded Borel function on $\R$ having finite limits $f(\pm\infty):=\lim_{x\to\pm\infty} f(x)$ at $+\infty$ and $-\infty$.
Define $$g(x):=f(x)-f(+\infty)1_{\Rpp}(x)-f(-\infty)1_{\,]-\infty,0[\,}(x).$$ Then $g$ is a bounded Borel function with $g(\pm\infty)=0$. Hence $g_n:=1_{[-n,n]}g$ is bounded with compact support and $g_n\to g$ uniformly as $n\to\infty$. Let $M:=\|g\|_{\infty}$, so that (4.1) implies that for each $x$, $$\align | \b^{-1}\int_0^{\b t} g_n(X_s)\,ds| &\le M\b^{-1}\int_0^{\b t}1_{[-n,n]}(X_s)\,ds\\ &\to0\quad\text{ in $\Px$ probability as $\b\to\infty$.}\endalign $$ But $g_n\to g$ uniformly as $n\to\infty$, and consequently $\b^{-1}\int_0^{\b t} g(X_s)\,ds\to 0$ in $\Px$ probability as $\b\to\infty$. Now, $$\int_0^{\b t}f(X_s)\,ds=\int_0^{\b t} g(X_s)\,ds +[f(+\infty)-f(-\infty)]\int_0^{\b t}1_{\{X_s>0\}}\,ds+f(-\infty)\b t.$$ Divide by $\b$ and let $\b\to\infty$. Then, combining the above remarks and (4.3) establishes the following result, in which $Z_c$ denotes, as in the preceding sections, a generalized arc-sine variable. \proclaim{(5.1) Theorem} Let $X$ be a L\'evy process satisfying (2.7--i). Let $f$ be a bounded Borel function such that $f(\pm\infty)$ exists. Then, for each $x$, $$\b^{-1}\int_0^{\b t}f(X_s)\,ds\conv x[f(+\infty)-f(-\infty)]tZ_c+f(-\infty)t=f(+\infty)tZ_c+ f(-\infty)t(1-Z_c).\tag5.2$$ \endproclaim When $X$ is a strictly stable process of index $\a$, $0<\a\le 2$, one may weaken the hypothesis on $f$. This will depend on the following simple lemma. \proclaim{(5.3) Lemma} Let $f$ be a bounded Borel function on $\Rp:=[0,\infty[$ such that as $T\to\infty$, $\frac1T\int_0^T f(x)\,dx\to c$. If $g\in L^1(\Rp)$, then $\Int f(Tx)g(x)\,dx\to c\Int g(x)\,dx$ as $T\to\infty$.\endproclaim \demo{Proof} If $0\le a<b<\infty$, then $$\Int f(Tx)1_{[a,b]}(x)\,dx=\frac1T\int_{Ta}^{Tb}f(y)\,dy\to c(b-a)\qquad\text{as $T\to\infty$}.$$ The assertion therefore holds whenever $g$ is a step function with compact support in $\Rp$, and since $f$ is bounded, the general case follows upon approximating $g$ in $L^1(\Rp)$ by such step functions. \enddemo Suppose now that $X$ is a strictly stable process of index $\a$ with $\a>1$. Then $X$ has a jointly continuous local time $(L^x_t)$ which we normalize so that for every positive Borel function $f$ on $\R$, $$\intt f(X_s)\,ds=\int f(x)L^x_t\,dx.\tag5.4$$ Moreover, $L$ has the following scaling property.
For each $\b>0$, the following equality in law holds between two-parameter processes: $$(\b^{1-1/\a}L_t^{x\b^{-1/\a}};x\in\R;t\ge0:\PR^y)\eqd{} (L^x_{\b t};x\in\R;t\ge0:\PR^{y\b^{1/\a}}).\tag5.5$$ See for example \cite{FG}. \proclaim{(5.6) Theorem} Let $f$ be a bounded Borel function such that $f^+:=\lim_{T\to\infty}\frac1T\int_0^T f(x)\,dx$ and $f^-:=\lim_{T\to\infty}\frac1T\int_{-T}^0 f(x)\,dx$ both exist. Let $X$ be a strictly stable process of index $\a$, $1<\a\le 2$. Then for each $x$, as $\b\to\infty$, $$\align \b^{-1}\int_0^{\b t} f(X_s)\,ds&\conv x f^+\intt 1_{\{X_s>0\}}\,ds+f^-\intt1_{\{X_s<0\}}\,ds\tag5.7\\ &=(f^+-f^-)\intt1_{\{X_s>0\}}\,ds+f^-t\\ &\eqd{}(f^+-f^-)tZ_c+f^-t.\endalign$$ \endproclaim \demo{Proof} Using (5.4), (5.5) and (5.3), $$\align \b^{-1}\int_0^{\b t}f(X_s)\,ds&=\b^{-1}\int_{-\infty}^\infty f(x)L^x_{\b t}\,dx\tag5.8\\ &\eqd0\b^{-1/\a}\int_{-\infty}^\infty f(x)L^{x\b^{-1/\a}}_t\,dx\\ &=\Int f(\b^{1/\a}x)L^x_t\,dx+\int_{-\infty}^0 f(\b^{1/\a}x)L^x_t\,dx\\ &\to f^+\Int L^x_t\,dx+f^-\int_{-\infty}^0 L^x_t\,dx\\ &=f^+\intt 1_{\{X_s>0\}}\,ds+f^-\intt 1_{\{X_s<0\}}\,ds, \endalign$$ almost surely $\Pz$, since a\.s\., for each $t$, $x\to L^x_t$ is a continuous function with compact support; in particular, it is in $L^1$ (this is clear from (5.4)). This establishes (5.7) when $x=0$. The law of $(X_t)_{t\ge 0}$ under $\Px$ is the same as the law of $(x+X_t)_{t\ge 0}$ under $\Pz$. If $f_x(y):=f(x+y)$, then $f^{\pm}_x=f^{\pm}$, and so (5.7) for all $x$ follows from the case $x=0$.\enddemo \noindent{\bf Remarks:} If $X$ is strictly stable of index $\a$, $0<\a\le 1$, then no local time exists. However, one may still use the scaling and a straightforward moment calculation to see that Theorem 5.6 is valid for any strictly stable process. See, for example, \cite{D}. We conjecture that the conclusion of Theorem 5.1, with $f^\pm$ in place of $f(\pm\infty)$, remains valid for any L\'evy process satisfying (2.7--i) whenever the Ces\`aro limits $f^\pm$ exist, provided $\mu_t$ is absolutely continuous for all sufficiently large $t$.
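The following concrete example, included here purely for illustration (the particular $f$ is our choice, not taken from the references), shows that Theorem (5.6) genuinely extends Theorem (5.1): take $f(x):=(2+\cos x)1_{\Rpp}(x)$, for which $f(+\infty)$ fails to exist while the Ces\`aro limits do.

```latex
% Illustrative example (our choice): f(x) = (2 + cos x) 1_{]0,oo[}(x).
% The limit f(+infty) does not exist, but the Cesaro limits are f^+ = 2, f^- = 0:
$$\frac1T\int_0^T(2+\cos x)\,dx=2+\frac{\sin T}{T}\to 2\quad\text{as }T\to\infty.$$
% Lemma (5.3) may be checked directly for this f with g(x) = e^{-x}:
$$\Int(2+\cos Tx)\,e^{-x}\,dx=2+\frac1{1+T^2}\to 2=2\Int e^{-x}\,dx.$$
% Theorem (5.6) then gives, for a strictly stable X of index a, 1 < a <= 2,
$$\b^{-1}\int_0^{\b t}f(X_s)\,ds\conv x(f^+-f^-)tZ_c+f^-t=2tZ_c.$$
```

Since $f^+=2$ and $f^-=0$, the limit law is that of $2tZ_c$, whereas Theorem (5.1) is inapplicable because $f$ oscillates at $+\infty$.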
\def\wk#1{\overset{{\Cal L}^{#1}}\to\longrightarrow} For strictly stable processes $X$ with index $\a>1$, Theorem 5.1 may be strengthened to a weak convergence statement. Let $f$ satisfy the conditions of Theorem 5.1 and let $X$ be as in Theorem 5.6. Then the calculation (5.8) is valid with $f^\pm$ replaced by $f(\pm\infty)$. But, if $0\le t\le T$, $$\Int|f(\b^{1/\a}x)-f(+\infty)|L^x_t\,dx\le \Int|f(\b^{1/\a}x)-f(+\infty)|L^x_T\,dx\to0$$ as $\b\to\infty$, by dominated convergence. It follows that the convergence in (5.8) is uniform on compact $t$-sets, and consequently $$\b^{-1}\int_0^{\b t}f(X_s)\,ds\wk0 f(+\infty)\intt 1_{\{X_s>0\}}\,ds+f(-\infty)\intt 1_{\{X_s<0\}}\,ds.\tag5.9$$ Here, $\wk0$ denotes weak convergence of the laws induced by the indicated processes under $\Pz$ on the space $C([0,\infty[,\R)$ equipped with the topology of uniform convergence on compacts. As before, the identity in law of $(X_t)_{t\ge 0}$ under $\Px$ and $(x+X_t)_{t\ge 0}$ under $\Pz$ shows that $\wk0$ may be replaced by $\wk x$ in (5.9). \Refs \define\ky#1 {\key {\bf #1}\ } \define\TPA{\jour Theory Probab\. Appl\.} \widestnumber\key{\bf BGT} \ref \ky A \by E. Sparre Andersen \paper On the fluctuation of sums of independent random variables II\jour Math. Scand. \yr 1954\pages195--223\vol 2\endref \ref \ky BGT \by N. H. Bingham, C. M. Goldie and J. L. Teugels \book Regular Variation \bookinfo Encyclopedia of Mathematics and its Applications; Vol. 27 \yr1987 \publ Cambridge University Press \publaddr Cambridge \endref \ref \ky BH \by N. H. Bingham and J. Hawkes \paper On limit theorems for occupation times\inbook Probability, Statistics and Analysis\eds J. F. C. Kingman and G. E. H. Reuter\bookinfo London Math. Soc. Lecture Notes, vol. 79\yr 1983\pages46--62\publ Cambridge University Press \publaddr Cambridge\endref \ref \ky DK \by D. A. Darling and M. Kac\paper On occupation times for Markov processes\jour Trans. Amer. Math. Soc.\yr 1957\pages95--107\vol 73\endref \ref \ky D \by Yu. A.
Davydov \paper Limit theorems for functionals of processes with independent increments\jour \TPA\yr 1973\pages431--441\vol XVIII\endref \ref \ky DI \by Yu. A. Davydov and I. A. Ibragimov\paper On the asymptotic behavior of some functionals of processes with independent increments\jour \TPA\yr 1971\pages162--167\vol XVI\endref \ref \ky F \by W. Feller \book An Introduction to Probability Theory and Its Applications, Vol\. II (2${}^{\text{nd}}$ edition) \yr1971 \publ Wiley \publaddr New York \endref \ref \ky FG \by P. J. Fitzsimmons and R. K. Getoor\paper Limit theorems and variation properties for fractional derivatives of the local time of a stable process\jour Ann. Inst. Henri Poincar\'e\yr 1992\pages311--333\vol 28\endref \ref \ky K \by M. Kac\paper On some connections between probability theory and differential and integral equations\inbook Proc. 2${}^{\text{nd}}$ Berkeley Symp. on Math. Stat. and Probability\yr 1951\pages189--215\publ University of California Press\endref \ref \ky L1 \by P. L\'evy\paper Sur un probl\`eme de M. Marcinkiewicz\jour C.R.A.S.\yr 1939\pages318--321\vol 208\nofrills\finalinfo Errata p\. 776\endref \ref \ky L2 \bysame\paper Sur certains processus stochastiques homog\`enes\jour Compositio Math.\yr 1939\pages283--339\vol 7\endref \ref \ky S \by F. Spitzer\paper A combinatorial lemma and its application to probability theory\jour Trans. Amer. Math. Soc.\yr 1956\pages323--339\vol 82\endref \ref \ky Z \by V. M. Zolotarev\book One-dimensional Stable Distributions \yr1983 \publ ``Nauka''\publaddr Moscow\lang Russian\transl English transl.\publ Amer. Math. Soc.\publaddr Providence\yr 1986 \endref \endRefs \enddocument