\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\it}}
\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bf}}
\def\revise#1 {\raisebox{-0em}{\rule{3pt}{1em}}%
\marginpar{\raisebox{.5em}{\vrule width3pt\
\vrule width0pt height 0pt depth0.5em
\hbox to 0cm{\hspace{0cm}{%
\parbox[t]{4em}{\raggedright\footnotesize{#1}}}\hss}}}}
\newcommand\fnxt[1] {\raisebox{.12em}{\rule{.35em}{.35em}}\mbox{\hspace{0.6em}}#1}
\newcommand\nxt[1] {\\\fnxt#1}
\def\de#1#2{{\rm d}^{#1}\!#2\,}
\def\De#1{{{\cal D}}#1\,}
\newcommand\topa[2]{\genfrac{}{}{0pt}{2}{\scriptstyle #1}{\scriptstyle #2}}
\def\undertilde#1{{\vphantom#1\smash{\underset{\widetilde{\hphantom{\displaystyle#1}}}{#1}}}}
\def\gsq#1#2{%
{\scriptstyle #1}\square\limits_{\scriptstyle #2}{\,}}
\def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt
\hbox{\vrule width.#2pt height#1pt \kern#1pt
\vrule width.#2pt}\hrule height.#2pt}}}}
\def\square{%
\mathop{\mathchoice{\sqr{12}{15}}{\sqr{9}{12}}{\sqr{6.3}{9}}{\sqr{4.5}{9}}}}
\newcommand{\fft}[2]{{\frac{#1}{#2}}}
\newcommand{\ft}[2]{{\textstyle{\frac{#1}{#2}}}}
\catcode`\@=12
\usepackage{color}
\definecolor{darkgreen}{rgb}{0,0.55,0}
\newcommand{\Red}[1]{{\color{red} #1}}
\newcommand{\Green}[1]{{\color{darkgreen} #1}}
\newcommand{\Purple}[1]{{\color{magenta} #1}}
\newcommand{\Blue}[1]{{\color{blue} #1}}
\def\vev#1{\left\langle #1 \right\rangle}
\def\HLINE{\noalign{\vskip1\jot}\hline\noalign{\vskip1\jot}}
\def\seqalign#1#2{\vcenter{\openup1\jot
\halign{\strut #1\cr #2 \cr}}}
\def\lbldef#1#2{\expandafter\gdef\csname #1\endcsname {#2}}
\def\eqn#1#2{\lbldef{#1}{(\ref{#1})}%
\begin{equation} #2 \label{#1} \end{equation}}
\def\eqalign#1{\vcenter{\openup1\jot \halign{\strut\hfil$\displaystyle{##}$&$\displaystyle{{}##}$\hfil\cr #1\crcr}}}
\def\eno#1{(\ref{#1})}
\def\href#1#2{#2}
\newcommand{\dsl}
{\kern.06em\hbox{\raise.15ex\hbox{$/$}\kern-.56em\hbox{$\partial$}}}
\makeatletter \@addtoreset{equation}{section} \makeatother
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand\belabel[1]{\begin{equation}\label{#1}}
\newcommand\bearlabel[1]{\begin{eqnarray}\label{#1}}
\newcommand\bra[1]{{\langle {#1}|}}
\newcommand\ket[1]{{|{#1}\rangle}}
\newcommand{\braket}[2]{\langle #1|#2\rangle}
\begin{document}
\begin{titlepage}
\vskip -.8cm
\rightline{\small{\tt MCTP-07-30}}
\vskip 1.7 cm
\centerline{\bf \Large Holographic Entanglement Entropy at Finite Temperature }
\vskip .4cm
\vskip .7 cm
\vskip .2cm \vskip 1cm
\centerline{\large Ibrahima Bah$^1$, Alberto Faraggi$^{1}$,}
\vskip .5cm
\centerline{\large Leopoldo A. Pando Zayas$^{1}$ and C\'esar A. Terrero-Escalante$^{2}$ }
\vskip 1cm
\vskip .5cm
\centerline{\it ${}^1$ Michigan Center for Theoretical
Physics}
\centerline{ \it Randall Laboratory of Physics, The University of
Michigan}
\centerline{\it Ann Arbor, MI 48109-1040}
\vspace{1cm}
\centerline{\it ${}^2$ Departamento de F\'{\i}sica, Centro de Investigaci\'on y Estudios Avanzados del IPN,}
\centerline{ \it Apdo. Postal 14-740, 07000 M\'exico D.F., M\'exico}
\vspace{1cm}
\begin{abstract}
Using a holographic proposal for the entanglement entropy we study its behavior in various supergravity backgrounds. We are particularly interested in the possibility of using the entanglement entropy as a way to detect transitions induced by the presence of horizons. We consider several geometries with horizons: the black hole in $AdS_3$, nonextremal Dp-branes, dyonic black holes asymptotic to $AdS_4$, and also Schwarzschild black holes in global $AdS_p$ coordinates. Generically, we find that the entanglement entropy does not exhibit a transition; that is, one of the two possible configurations always dominates.
\end{abstract}
\end{titlepage}
\section{Introduction}
Given a system in a pure quantum state $|\Psi\rangle$ with density matrix $\rho= |\Psi\rangle\langle\Psi|$, if
we split the system into two subsystems $A$ and $B$,
the reduced density matrix is obtained by
tracing over the degrees of freedom in the complementary subsystem, say, $\rho_A= {\rm Tr}_B \rho$.
The entanglement entropy is defined as the von Neumann entropy
\begin{equation}
S_A=-{\rm Tr}\rho_A\log \rho_A.
\end{equation}
This provides a measure of how entangled or ``quantum'' a system is. Entanglement plays a central role
in quantum information theory as it determines the ability to send
quantum information \cite{book1}. The entanglement entropy also plays an important role in the study of
strongly correlated quantum systems \cite{osbornenielsen}.
The above definition is purely field theoretical. Interestingly, when such field theories have supergravity duals, a prescription for the holographic computation of the entanglement entropy has been provided \cite{Ryu:2006bv}, that is, a prescription for computing $S_A$ entirely within the holographic gravity dual.
Inspired by the Bekenstein-Hawking entropy, it suggests calculating the entanglement entropy as the
area of a minimal surface $\gamma_A$ in the bulk whose boundary coincides with the boundary of the region $A$ of the field theory living at the
boundary:
\begin{equation}
S_A=\frac{{\rm Area}\, {\rm of}\, \gamma_A}{4G_N^{d+2}}.
\end{equation}
This recipe has been successfully applied to various systems and extended in different directions \cite{Ryu:2006ef} including
its covariant formulation \cite{Hubeny:2007xt}.
A slightly modified version of the entanglement entropy is
\begin{equation}
S_A=\frac{1}{4G_N^{(10)}}\int d^8\sigma e^{-2\phi}\sqrt{G_{ind}^{(8)}}.
\end{equation}
The entropy is obtained by minimizing the above action over all surfaces that approach the boundary of the subsystem $A$.
In a very interesting paper \cite{Klebanov:2007ws}, it was suggested that in the presence of regions with
collapsing cycles, which are
typical for supergravity duals of confining field theories, alternative surfaces arise
(see also \cite{Nishioka:2006gr}).
By comparing the entropy due to two
different configurations, it was shown that the entanglement entropy could be an
order parameter for the confinement/deconfinement transition. The motivation comes from the fact that
the entanglement entropy jumps from a
configuration with leading term of order $N^2$ to a configuration with leading term of order $N^0$ for supergravity backgrounds describing theories with $SU(N)$ gauge
groups in the limit of large $N$ and fixed 't Hooft coupling.
One intriguing fact about the results of \cite{Klebanov:2007ws} is that, by analyzing a surface in the supergravity dual
to the confined phase of the field theory, one is able to anticipate the existence of a deconfined phase. This
prompts the natural question
of whether the exploration of the deconfined phase can equally well give information about the existence of a
confined phase.
According to the AdS/CFT correspondence \cite{Maldacena:1997re}, the dual of field theories at finite temperature involves
black hole horizons on the supergravity side \cite{Witten:1998zw}.
The horizon provides a natural end of the space similar to the situation discussed in \cite{Klebanov:2007ws}.
In fact, in the context of the AdS/CFT correspondence it has already been established in
various situations that there are phase transitions associated with different behaviors of surfaces describing branes in the presence
of a horizon \cite{Aharony:2006da,Parnachev:2006dn,Babington:2003vm,Mateos:2006nu,Albash:2006ew,Kobayashi:2006sb}.
In this paper we explore the entanglement entropy of various field theories at finite temperature using its holographic definition. Namely, we study the entanglement entropy in various black hole geometries. We define our subsystem $A$ to be roughly determined by an interval of length $l$ on a curve generated by a spacelike Killing vector in the conformal boundary of the geometry. Generically, there are two surfaces that satisfy the boundary conditions: a smooth one and a piece-wise smooth one (see figure \ref{fig:discon}). We study the behavior of these two surfaces as a function of the distance $l$.
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\linewidth]{disconn.eps}\\
\caption{Two competing configurations for the entanglement entropy in the presence of a black hole horizon. The green
surface represents a continuous configuration while the red surface goes straight down from the
boundary to the horizon. The subsystem $A$ is given by the blue section. Its characteristic length is $l$.}\label{fig:discon}
\end{figure}
In section \ref{2d} we discuss the gravity dual of a 2D CFT at finite temperature -- the BTZ black hole -- where the smooth surface is just the geodesic anchored on $A$. The piece-wise smooth surface is the curve composed of the lines stretching from the conformal boundary to the horizon, connected by the segment of length $l$ on the horizon. We show that the smooth configuration is always dominant from the point of view of the thermodynamical comparison. Section \ref{general} presents a general setup for the computations at hand and discusses explicitly the case of the supergravity backgrounds describing the ${\cal N}=4$ plasma and other field theories at finite temperature.
Also in section \ref{general} we extend the analysis to black $p$-branes corresponding to various gauge theories on the world volume of $p$-branes at
finite temperature. Using the supergravity backgrounds for nonextremal Dp-branes for all values of $p$, we find that the entanglement entropy is given by the smooth surface for all $l$, except for $p=6$. In the $p=6$ case we observe that the entanglement entropy is given by the smooth surface at large $l$ and by the piece-wise smooth surface at small $l$. Furthermore, in these geometries we observe that the radius of the black hole enters only as an overall scaling factor. This scaling implies that the entanglement entropy has the same $N$ (number of colors) dependence at all temperatures. In section \ref{globalads} we study the entanglement entropy in black hole geometries in global $AdS_5$ and $AdS_4$. The main motivation for our analysis comes from the fact that the Hawking-Page phase transition takes place only in global coordinates \cite{Hawking:1982dh}; that is, for the nonextremal Dp-brane geometries there is no Hawking-Page phase transition. In the context of the AdS/CFT correspondence this is interpreted as a transition of the dual field theory on a sphere \cite{Witten:1998zw}. Since this feature is absent in the Poincar\'e patch, we are motivated to study the entanglement entropy in global AdS backgrounds. However, we find that in global coordinates the entanglement entropy is also given by the smooth surface.
Our work suggests
that, starting from the deconfined phase, changing the length of the subsystem does not allow for a transition into confinement, as opposed to what was observed in \cite{Klebanov:2007ws,Nishioka:2006gr}.
According to their results, at zero temperature the length of the subsystem seems to play the role of temperature in the confined phase, while in our case this length possibly corresponds to some other thermodynamical quantity.
It is probably worth mentioning a recent study of the topological entanglement entropy \cite{Pakman:2008ui} where no change in the dependence on $N$ was observed. However there is mounting evidence that the entanglement entropy is a {\it bona fide} thermodynamical quantity and its precise meaning should be illuminated through further work. In section \ref{conclusions} we discuss some of the directions suggested by our exploration of the entanglement entropy in the context of finite temperature.
\section{2D CFT at finite temperature from the BTZ black hole }\label{2d}
We begin this section with a discussion of the holographic entanglement entropy using the gravitational background dual to a 2D CFT at finite temperature. Most of the calculation was explicitly done in \cite{Ryu:2006ef}; however, we reproduce it here with emphasis on the thermodynamical competition between the two configurations, something that was not considered in \cite{Ryu:2006ef}.
The relevant geometry holographically describing the field theory at finite temperature is the BTZ black hole
\cite{Banados:1992wn}:
\begin{equation}
ds^2=-(r^2-r_+^2)dt^2 + \frac{R^2}{r^2-r_+^2}dr^2 +r^2 d\phi^2.
\end{equation}
The smooth surface is parametrized by $t=$constant and $r=r(\phi)$. Subsystem A is defined as the region given by $0\leq \phi \leq 2 \pi l/L$ where $L$ is the circumference of the boundary. Thus, $l$ is the characteristic size of A. For the BTZ black hole, the smooth surface is just the geodesic in the bulk that connects the two boundary points of A. The action of this curve is given by
\begin{equation}
A_c=\int d\phi \sqrt{r^2+\frac{R^2}{r^2-r_+^2}r'^2}.
\end{equation}
The equation of motion can be integrated to give
\begin{equation}
\frac{dr}{d\phi}=\frac{r}{R r_*}\sqrt{(r^2-r_*^2)(r^2-r_+^2)}.
\end{equation}
This allows us to relate the length in the $\phi$ direction with the minimum of the curve, $r_*$:
\begin{equation}
\frac{2\pi l}{L}=\frac{R}{r_+}\ln \frac{r_*+r_+}{r_*-r_+}.
\end{equation}
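As a cross-check, the relation above can be verified by integrating the equation of motion directly. The following Python sketch is ours, not part of the original computation; the values $R=r_+=1$ and the turning point $r_*=2$ are arbitrary sample choices:

```python
import math

# Numerical cross-check (ours) of 2*pi*l/L = (R/r_+) ln[(r_* + r_+)/(r_* - r_+)].
# Sample values: R = r_+ = 1 and turning point r_* = 2 are arbitrary choices.
R, rp, rs = 1.0, 1.0, 2.0

def dphi_dr(r):
    # inverse of the integrated equation of motion dr/dphi
    return R * rs / (r * math.sqrt((r**2 - rs**2) * (r**2 - rp**2)))

# Substitute r = r_* + t^2 to tame the 1/sqrt(r - r_*) endpoint singularity,
# then integrate with the midpoint rule up to r = r_* + T^2 (tail is negligible).
T, n = 100.0, 200000
h = T / n
half = sum(dphi_dr(rs + ((i + 0.5) * h)**2) * 2.0 * ((i + 0.5) * h) * h
           for i in range(n))

numeric = 2.0 * half                       # both halves of the geodesic
closed_form = (R / rp) * math.log((rs + rp) / (rs - rp))
print(numeric, closed_form)                # both close to ln 3 = 1.0986...
```

The substitution that removes the square-root singularity at the turning point is reused below for the higher-dimensional integrals.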
The inverse temperature is $\beta=RL/r_+$. The gravitational theory on $AdS_3$ with radius $R$
is dual to a 2D CFT living on its boundary with central charge $c=3R/2G_N^{(3)}$ \cite{Brown:1986nw}.
The length of the geodesic is then
\begin{equation}
{\rm Area}_c=2R\ln \left[\frac{r_\infty}{r_+}\sinh\left(\frac{\pi r_+ l}{RL} \right)\right],
\end{equation}
where $r_\infty$ is a UV cutoff.
The entanglement entropy given by the continuous configuration is:
\begin{equation}
S_{c}=\frac{1}{4G_N^{(3)}}{\rm Area}_{c}=\frac{2R}{4G_N^{(3)}}\ln \left[\frac{r_\infty}{r_+}\sinh\left(\frac{\pi r_+ l}{RL} \right)\right].
\end{equation}
As noted in \cite{Ryu:2006bv}, this result is in agreement with the field theory result for a
2D CFT at finite temperature \cite{Calabrese:2004eu}:
\begin{equation}
S_A=\frac{c}{3}\log\left(\frac{\beta}{\pi a}\sinh \frac{\pi l}{\beta}\right),
\end{equation}
where $a$ is an ultraviolet cutoff that can be thought of as a lattice spacing.
The piece-wise smooth surface is obtained by gluing the three curves described by $\phi = 0$, $r=r_+$ and $\phi = 2 \pi l/L$. The area of this configuration is then
\begin{equation}
Area_d=2 \int\limits_{r_+}^{r_\infty} \frac{R}{\sqrt{r^2-r_+^2}}dr + \frac{2r_+ \pi l}{L}
=2 R \ln \frac{r_\infty}{r_+} + \frac{2r_+ \pi l}{L}.
\end{equation}
The first term of the piece-wise smooth configuration comes from the two lines that go from the boundary to the horizon; their contribution to the entropy is:
\begin{equation}
S_{d(1)}=\frac{1}{4G_N^{(3)}}2\int\limits_{r_+}^{r_\infty} \frac{R}{\sqrt{r^2-r_+^2}}dr
= \frac{R}{2G_N^{(3)}}\ln \frac{r_\infty}{r_+}.
\end{equation}
By construction, this entropy is independent of the length $l$ of the interval that defines the subsystem $A$. In intrinsic
field theoretic terms we could rewrite it as:
\begin{equation}
\label{discontinuous}
S_{d(1)}=\frac{c}{3}\ln \frac{\beta}{a}.
\end{equation}
The study of entanglement entropy in 1+1 dimensional systems is well developed. In \cite{Holzhey:1994we} a formula for the
entanglement entropy of a system of length $l$ was obtained for conformal field theories: $(c/3)\ln(l/a)$. Some extensions of
this result have been discussed recently, including systems away from criticality
\cite{Calabrese:2004eu,Vidal:2002rm,Calabrese:2005zw}.
Systems close to a phase transition (large but finite correlation length) are described
by massive quantum field theories with mass inversely proportional to the correlation length \cite{Calabrese:2004eu}:
\begin{equation}
S_A\sim \frac{c}{3}\ln\left(\frac{\xi}{a}\right).
\end{equation}
The correlation length that appears above is considered to be the inverse of the mass $m=\xi^{-1}$.
The correlation length $\xi$ and the inverse temperature $\beta$ can be naturally identified in our setup,
completing the matching of the gravity result (\ref{discontinuous}) with the field theoretic one. Interpreting the second term in the piece-wise smooth configuration is more challenging; we simply note that in field theoretic terms the contribution coming directly from the horizon takes the form $S_{d(2)}=(c/3) (\pi l/\beta)$.
Now consider the difference in area:
\begin{equation}
\Delta A = A_c - A_d = 2 R \ln \left[\sinh\left(\frac{r_+ \pi l}{R L}\right)\right] - \frac{2r_+ \pi l}{L}.
\end{equation}
As expected, the difference is UV finite. Setting it to zero we obtain:
\begin{equation}
\sinh(x) = e^x \;\;\mbox{where} \;\; x =\frac{r_+ \pi l}{R L}.
\end{equation}
This equation has no solutions, since $\sinh x < e^x$ for all $x$, and it follows that
\begin{equation}
A_c < A_d .
\end{equation}
This implies that the entropy is always given by the smooth surface. As noted in \cite{Ryu:2006ef} the answer matches precisely the field theory calculation \cite{Calabrese:2005zw}. More impressive agreement has also been found in the context of disconnected segments in the field theory \cite{Hubeny:2007re}.
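The inequality can also be sampled numerically. This short Python sketch is ours; $R=1$ and the sampled values of $x$ are arbitrary. It evaluates $\Delta A = 2R[\ln\sinh x - x]$ and confirms it is negative, approaching $-2R\ln 2$ at large $x$:

```python
import math

# Quick check (ours) that Delta A = 2R[ln(sinh x) - x] is negative for all x > 0,
# i.e. sinh(x) < e^x, so the smooth surface always dominates.  R = 1 here.
R = 1.0
samples = [0.1, 1.0, 5.0, 20.0]
deltas = [2.0 * R * (math.log(math.sinh(x)) - x) for x in samples]
# ln(sinh x) - x = ln((1 - e^{-2x})/2) -> -ln 2 as x -> infinity
print(deltas)
```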
\section{Entanglement entropy for nonextremal Dp branes}\label{general}
The next geometries that we study are static geometries with a spacelike Killing vector $X^\mu$ that commutes with all the other Killing vectors. Furthermore, we assume that these geometries have conformal radius $r$. Thus there exists a frame where the metric is given as
\begin{equation}
ds^2 = -\tau(r) dt^2 + f(r) dr^2 + g(r) dx^2 + h_{ij}dy^i dy^j
\end{equation} where $x$ is the affine parameter along $X^\mu$ and $y^i$ are coordinates of the internal submanifold. In general, this submanifold can have compact and non-compact directions and $h_{ij}$ can have non-trivial dependence on the coordinates. However, the geometries considered here satisfy:
\begin{equation}
h_{ij}dy^i dy^j = h^1_{ab}(r)du^a du^b + h^2_{mn}(r,\theta) d\theta^m d\theta^n
\end{equation} where the $u^a$'s are non-compact coordinates and the $\theta^m$'s are compact.
The conformal boundary is obtained by taking the large $r$ limit. The region $A$ that we will consider is parametrized by:
\begin{equation}
t = \mbox{constant} \;\;\; \mbox{and} \;\;\; -\frac{l}{2} < x < \frac {l}{2} \nonumber
\end{equation} where $l$ is the size of the subsystem $A$.
As discussed above, there are two surfaces that extremize the area subject to these boundary conditions. The smooth surface is given by,
\begin{equation}
t = \mbox{constant}, \;\;\; \mbox{and} \;\;\; r = r(x) \;\;\; \mbox{with b.c.} \;\; r(\pm l/2) = r_\infty,
\end{equation} where $r_\infty$ is the location of the holographic boundary. The second surface is piece-wise smooth and parametrized as:
\begin{equation}
t= \mbox{constant}, \;\;\; \mbox{and} \;\;\; \begin{cases} x= -\frac{l}{2} \\ r= r_0 \\ x= \frac{l}{2} \end{cases}
\end{equation} where $r_0$ is the lower bound of $r$. Now we can proceed to study the difference in area of these surfaces.
\subsection{Two branches of the holographic entropy}
We start by computing the area of the smooth surface. The induced metric is:
\begin{equation}
ds^2 = g(r)(1+\frac{f(r)}{g(r)} r'^2(x)) dx^2 + h_{ij}dy^i dy^j
\end{equation} where prime denotes derivative with respect to $x$. The volume element is then
\begin{equation}
\sqrt{G_{ind}} = \sqrt{g(r)(1 + \frac{f(r)}{g(r)} r'^2(x)) h^1(r) h^2(r,\theta)}
\end{equation} where $h^1$ and $h^2$ are the determinants of the metrics on the non-compact and compact submanifolds, respectively. Since the non-compact submanifold contributes an infinite volume, we must work with the area density. It is given by
\begin{equation}
A = \int dx e^{-2\phi} \sqrt{g(r)(1 + \frac{f(r)}{g(r)} r'^2(x)) h^1(r)} \int d^n \theta \sqrt{h^2(r,\theta)}
\end{equation} where $\phi$ is the dilaton. Before we proceed, we define the following quantities:
\begin{equation} \sqrt{H(r)} = e^{-2\phi} \sqrt{g(r) h^1(r)} \int d^n \theta \sqrt{h^2(r,\theta)} \;\;\; \mbox{and} \;\;\; \beta^2 = \frac{f(r)}{g(r)}.
\end{equation} The area is:
\begin{equation}
A = \int dx \sqrt{H(r)} \sqrt{1 + \beta^2 r'^2(x)}.
\end{equation} Now we want to find the $r(x)$ that minimizes the area. Since the Lagrangian does not depend explicitly on $x$, we can use the conserved Hamiltonian to integrate the equation of motion:
\begin{equation} E = L - \frac{\partial L}{\partial ( r')}( r') = \frac{\sqrt{H(r)}}{\sqrt{1+ \beta^2 r'^2}}.
\end{equation} The constant $E$ can be obtained by considering the turning point $r_*$ where $r'=0$. We thus obtain
\begin{equation}
E = \sqrt{H(r_*)}.
\end{equation} Physically, $r_*$ is the minimum of the surface. It corresponds to $x=0$ since the surface must be symmetric under $x \to -x$. The quantity $r_*$ can be used to label the surfaces corresponding to different values of $l$. This relationship is obtained by integrating the equation of motion one more time,
\begin{equation}
l(r_*) = \int dx = \int \frac{dr}{r'} = 2 \sqrt{H(r_*)} \int_{r_*}^{r_\infty} \frac{\beta(r)}{\sqrt{H(r) - H(r_*)}} dr . \label{l1:1}
\end{equation}
Similarly, we can obtain the area as an integral over $r$,
\begin{eqnarray}
A_c (r_*) &=& \int dx \sqrt{H(r)} \sqrt{1 + \beta^2 r'^2(x)} = \int \frac{dr}{r'}\, \frac{H(r)}{\sqrt{H(r_*)}} \nonumber \\
&=& 2 \int_{r_*}^{r_\infty} \frac{\beta(r) H(r)}{\sqrt{H(r) - H(r_*)}}\, dr.
\end{eqnarray}
We also compute the area of the piece-wise smooth surface. The induced line elements for different segments are
\begin{eqnarray}
ds^2 &=& f(r) dr^2 + h_{ij}dy^i dy^j \;\;\; \mbox{for} \;\;\; x = \pm \frac{l}{2}\\
ds^2 &=& g(r_0) dx^2 + h_{ij}dy^i dy^j \;\;\; \mbox{for} \;\;\; r = r_0.
\end{eqnarray} The area of the piece-wise smooth surface is then
\begin{equation}
A_d = 2 \int_{r_0}^{r_\infty} \beta(r) \sqrt{H(r)} dr + l \sqrt{H(r_0)}.
\end{equation}
In what follows it suffices to identify $H$ and $\beta$ for each geometry; the difference in area and the length $l$ are given by:
\begin{eqnarray}
\Delta A (r_*) &=& 2 \int_{r_*}^{r_\infty} \frac{\beta(r) H(r)}{\sqrt{H(r) - H(r_*)}} \left[1 -\left(\frac{H(r_0)}{H(r_*)}\right)^{1/2} \frac{H(r_*)}{H(r)}- \left(1 - \frac{H(r_*)}{H(r)}\right)^{1/2}\right] dr \nonumber \\ &-& 2\int_{r_0}^{r_*} \beta(r)\sqrt{H(r)}\, dr, \\
l(r_*) &=& 2 \sqrt{H(r_*)} \int_{r_*}^{r_\infty} \frac{\beta(r)}{\sqrt{H(r) - H(r_*)}}\, dr .
\end{eqnarray} It should be noted here that when $r_* = r_0$ the difference in area is negative, since $1-z < \sqrt{1-z}$ for $0<z<1$ with $z=H(r_0)/H(r)$:
\begin{eqnarray}
\Delta A(r_0) &=& 2 \int_{r_0}^{r_\infty} \frac{\beta(r) H(r)}{\sqrt{H(r) - H(r_0)}} \left[1 - \frac{H(r_0)}{H(r)}- \left(1 - \frac{H(r_0)}{H(r)}\right)^{1/2}\right] dr<0.
\end{eqnarray}
Similarly, if $\beta \to 0$ fast enough as $r \to r_\infty$, then
\begin{equation}
\Delta A(r_* \to r_\infty) \approx - 2\int_{r_0}^{r_\infty} \beta(r)\sqrt{H(r)}\, dr <0.
\end{equation} This statement is almost always true; we will encounter an exception when $\beta \propto \frac{1}{\sqrt{r}}$. Thus we obtain two important results before even looking at specific geometries:
\begin{equation}
\Delta A(r_*=r_0) < 0, \;\;\; \mbox{and} \;\;\; \Delta A(r_* \rightarrow r_\infty) < 0. \label{alim}
\end{equation} Thus, if the difference in area is monotonic, the smooth surface gives the entanglement entropy. We now proceed to study geometries corresponding to holographic duals of the ${\cal N}=4$ plasma at finite temperature, of other field theories living on the world volumes of Dp-branes,
and of a dyonic black hole.
\subsection{Entanglement entropy in the ${\cal N}=4$ plasma}
The holographic background dual to the ${\cal N}=4$ plasma is a stack of nonextremal D3 branes. This background has proved to be an interesting playground for finite temperature field theories; in particular, it seems to capture some features displayed by experiments at the Relativistic Heavy Ion Collider. The corresponding background metric is:
\begin{equation}
ds^2 =R^2\bigg[\frac{du^2}{hu^2}+ u^2 \left(-hdt^2 +dx_idx^i\right) + d\Omega_5^2\bigg],
\end{equation}
with
\begin{equation}
h=1-\frac{u_0^4}{u^4},
\end{equation}
where the parameter $u_0$ completely characterizes the temperature of the background: $u_0=\pi R\,T$. The coordinate $u$ is related to the holographic coordinate $r$ by $r=R u$ where $R$ is the AdS radius.
The quantities $H$ and $\beta$ are:
\begin{equation}
H(u) = (R^8 \Omega_5)^2 u^6, \;\;\; \beta^2(u) = \frac{1}{u^4 - u_0^4}.
\end{equation}
The difference in area and $l$ are:
\begin{eqnarray}
\Delta A &=& 2I u_0^2 \left[\int_{y_*}^{y_{\infty}}\frac{y^{6} -y^3 \sqrt{y^{6}-y_*^{6}} - y_*^3}{\displaystyle{\sqrt{\left(y^{6}-y_*^{6}\right)\left(y^4-1\right)}}}dy -\frac{\sqrt{y_*^4-1}}{2} \right] \\
l &=& \frac{2}{u_0} \int_{y_*}^{y_\infty} \frac{y_*^3 dy}{\sqrt{(y^4-1)(y^6-y_*^6)}} \label{eq:n4l}
\end{eqnarray} where $y=u/u_0$ and $I=\Omega_5 R^8$. The $y$ coordinate makes it clear that $\Delta A$ simply scales with the temperature. We numerically plot $\Delta A$ as a function of $lu_0$ in units of $2Iu_0^2$.
\begin{figure}[!h]
\subfigure[]{\label{fig:1-a}\includegraphics[scale=.70]{n4l.eps}}
\subfigure[]{\label{fig:1-b}\includegraphics[scale=.70]{n4s.eps}}
\caption{Figure \ref{fig:1-a} shows the behavior of $l$ as a function of $y_*$ for constant $u_0$. Figure \ref{fig:1-b} shows the difference in area as a function of $l$. These plots show no phase transition for the entropy in the nonextremal D3 brane background.}
\label{fig:n4}
\end{figure}
From figure \ref{fig:n4} we see that the difference in entropy does not change sign; instead it approaches 0. $\Delta A$ is also monotonic with respect to $l$, as observed in figure \ref{fig:n4}. The length $l$ is monotonic with respect to $y_*$; in fact, it diverges logarithmically as $y_* \to 1$, as can be seen from (\ref{eq:n4l}). Thus, by (\ref{alim}), $\Delta A$ is always negative.
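The integrals above are straightforward to evaluate numerically. The following Python sketch is ours; the sampled turning points and the cutoff $Y$ are arbitrary numerical choices. It evaluates $\Delta A/(2Iu_0^2)$ for the nonextremal D3 brane and confirms the sign:

```python
import math

# Numerical evaluation (ours) of Delta A/(2 I u_0^2) for the nonextremal D3 brane.
# Cutoff Y and the sampled turning points y_* are arbitrary numerical choices.
def delta_area(ys, Y=100.0, n=200000):
    def f(y):
        num = y**6 - y**3 * math.sqrt(y**6 - ys**6) - ys**3
        den = math.sqrt((y**6 - ys**6) * (y**4 - 1.0))
        return num / den
    # substitute y = y_* + t^2 to handle the 1/sqrt(y - y_*) singularity
    T = math.sqrt(Y - ys)
    h = T / n
    I = sum(f(ys + ((i + 0.5) * h)**2) * 2.0 * ((i + 0.5) * h) * h
            for i in range(n))
    return I - math.sqrt(ys**4 - 1.0) / 2.0

results = {ys: delta_area(ys) for ys in (1.2, 2.0, 3.0)}
print(results)   # all values negative: no transition for p = 3
```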
\subsection{Entanglement entropy for gauge theories on the world volumes of Dp branes }\label{higher}
The backgrounds dual to finite temperature field theories are given in the string frame as \cite{Itzhaki:1998dd}:
\begin{equation}
ds^2=\frac{u^{a/2}}{\sqrt{\lambda}}\Bigg[-\Bigg(1-\left(\frac{u_0}{u}\right)^a\Bigg)dt^2+dx_1^2+\cdots+dx_p^2\Bigg]+\frac{\sqrt{\lambda}}{u^{a/2}}\Bigg(1-\left(\frac{u_0}{u}\right)^a\Bigg)^{-1}du^2+\sqrt{\lambda}u^{b/2}d\Omega_{8-p}^2
\end{equation} where $a=7-p$ and $b=p-3$. The dilaton is also given as:
\begin{equation}
e^{\phi}=(2\pi)^{2-p}g_{YM}^2\left(\frac{\sqrt{\lambda}}{u^{a/2}}\right)^{-b/2}
\end{equation} It is important to note that for $p<3$, the radius of the $(8-p)$-sphere diverges in the UV limit and shrinks in the IR limit. However, for $p>3$, it shrinks to zero in the UV limit and expands in the IR limit. At $p=3$, it decouples from the AdS sector. A similar feature takes place for the dilaton. The quantities $H$ and $\beta$ are
\begin{equation}
H(u) = \frac{C^2}{\lambda} u^{2\alpha}, \;\;\; \beta^2(u) = \frac{\lambda}{u^a - u_0^a}
\end{equation} where
\begin{equation}
C=\frac{(2\pi)^{2(p-2)}}{g_{YM}^4} \lambda \Omega_{8-p}, \;\;\; \alpha=\frac{1}{2}(9-p).
\end{equation}
The difference in areas $\Delta A$ and $l$ as integrals over $y$ are
\begin{eqnarray}
\Delta A = A_c - A_d &=& 2 C u_0^2 \left[ \int_{y_*}^{y_{\infty}}\frac{y^{2\alpha} -y^\alpha \sqrt{y^{2\alpha}-y_*^{2\alpha}} - y_*^\alpha}{\displaystyle{\sqrt{\left(y^{2\alpha}-y_*^{2\alpha}\right)\left(y^a-1\right)}}}dy -\int_{1}^{y_*}\frac{y^{\alpha}\,dy}{\sqrt{y^a-1}} \right] \nonumber \\
l&=& \frac{2\sqrt{\lambda}}{u_0^{\alpha-2}}\int_{y_*}^{y_{\infty}}\frac{y_*^{\alpha}dy}{\displaystyle{\sqrt{\left(y^{2\alpha}-y_*^{2\alpha}\right)\left(y^a-1\right)}}}.
\end{eqnarray}
We observe something interesting here. The difference in areas $\Delta A$ scales as $u_0^2$ for all $p$, while $l$ scales as $u_0^{\frac{p-5}{2}}$; for $p=6$ this means $l$ is proportional to $\sqrt{u_0}$. In figure \ref{fig:finiteT} we plot the difference in areas in units of $C u_0^2$ against $l$ in units of $\sqrt{\lambda} u_0^{\frac{p-5}{2}}$ for $p = 3,4,5$. In figure \ref{fig:finiteT6} we show the plot for $p=6$.
\begin{figure}[!h]
\subfigure[]{\label{fig:2-a}\includegraphics[scale=.70]{finiteTl.eps}}
\subfigure[]{\label{fig:2-b}\includegraphics[scale=.70]{finiteTs.eps}}
\caption{Figure \ref{fig:2-a} shows the behavior of $lu_0^{(p-5)/2}$ as a function of $y_*$ at $p=3,4,5$ (Blue, Green, Orange respectively). Figure \ref{fig:2-b} shows the difference in area as a function of $l$. These plots show no phase transition for the entanglement entropy in the backgrounds of nonextremal D3, D4 and D5 branes.}
\label{fig:finiteT}
\end{figure}
\begin{figure}[!h]
\subfigure[]{\label{fig:3-a}\includegraphics[scale=.70]{finiteTl6.eps}}
\subfigure[]{\label{fig:3-b}\includegraphics[scale=.70]{finiteTs6.eps}}
\caption{Figure \ref{fig:finiteT6}(a) shows the behavior of $l/\sqrt{u_0}$ as a function of $y_*$ for constant $u_0$ at $p=6$. Figure \ref{fig:finiteT6}(b) shows the difference in area as a function of $l$. These plots show a transition for the entanglement entropy in the nonextremal D6 background.}
\label{fig:finiteT6}
\end{figure}
Here we observe a transition for $p=6$ and no transition for $p<6$. This result is also in agreement with (\ref{alim}).
\section{Dyonic Black Hole} \label{dyonic}
This is a very important system from the holographic point of view. It has the potential to describe a few systems that are certainly of interest from the condensed matter point of view. A partial list of interesting applications can be found in \cite{Hartnoll:2007ih}.
\subsection{The solution }
The dyonic black hole is a solution of Einstein-Maxwell theory on $AdS_4$. The solution is a consistent truncation of 11-dimensional supergravity on $AdS_4\times S^7$. Some interesting properties of this solution and its potential applications to condensed matter have been recently discussed in various papers including \cite{Herzog:2007ij,Hartnoll:2007ai,Hartnoll:2007ih}.
The relevant metric is:
\begin{equation} ds^2 = R^2 u^2 (-h(u) dt^2 + dz^2 + dy^2) + \frac{R^2}{u^2}\frac{du^2}{h(u)}, \end{equation} with
\begin{equation} h(u) = 1 + (h^2 + q^2) \frac{u_0^4}{u^4} - (1 + h^2 + q^2) \frac{u_0^3}{u^3}, \end{equation} and electromagnetic field tensor \begin{equation} F = hu_0^2\, dx\wedge dy - q u_0^2\, dt\wedge d(u^{-1}). \end{equation} The three parameters $u_0$, $h$ and $q$ are related to the physical quantities mass, electric charge and magnetic field of the black hole in the following way,
\begin{eqnarray} M= \alpha (1 + h^2 + q^2) u_0^3, \;\; C = q u_0^2, \;\; \mbox{and} \;\; B = h u_0^2 \end{eqnarray} respectively. The quantity $\alpha$ is a constant independent of $h,q$ and $u_0$. It is useful to also define the parameters $\rho^2 = h^2 + q^2$ and $Q^2 = \rho^2 u_0^4$ to characterize the effect of the electromagnetic field. In terms of the physical parameters, $u_0$ and $\rho^2$ are given by \begin{eqnarray} \rho^2 = \frac{Q^2}{u_0^4},\;\;\; u_0^4 + Q^2 - m u_0 = 0 \label{u:1} \end{eqnarray} where $m=M/\alpha$. A computation of the mass using holographic renormalization can be found in \cite{Hartnoll:2007ai}. Equation (\ref{u:1}) has a positive real root for $u_0$ when
\begin{equation} \left(\frac{Q^2}{3}\right)^3 \leq \left(\frac{m}{4}\right)^4 \end{equation} which implies
\begin{equation} \left(\frac{\rho^2}{3}\right)^3 \leq \left(\frac{1+\rho^2}{4}\right)^4. \end{equation} From these relationships, we can expect extremality at $\rho^2 =3$. Furthermore, we observe that even though $Q$ is bounded by the mass, $\rho^2$ is allowed to take any value. Thus, if there is extremality at $\rho^2=3$, we can expect a different thermodynamic description for $\rho^2\leq 3$ than for $\rho^2 > 3$. This is discussed in the next section.
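As a quick consistency check (our own illustration, not in the original text): at the boundary of the inequality, $Q^2 = 3(m/4)^{4/3}$, the quartic $u_0^4 + Q^2 - m u_0 = 0$ of equation (\ref{u:1}) acquires its positive (double) root exactly at $u_0 = (m/4)^{1/3}$, since $u_0^4 + Q^2 - m u_0 = (m/4)^{1/3}\left(4\cdot\frac{m}{4} - m\right) = 0$ there. The helper names below are our own.

```python
def quartic(u0, m, Q2):
    """Left-hand side of u0**4 + Q2 - m*u0 = 0 (with m = M/alpha)."""
    return u0 ** 4 + Q2 - m * u0

def boundary_root(m):
    """Positive double root at the boundary (Q^2/3)^3 = (m/4)^4."""
    return (m / 4.0) ** (1.0 / 3.0)
```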
\subsection{Comments on thermodynamics}
With the coordinate change $y=u/u_0$, $h(u)$ can be rewritten
\begin{eqnarray} h(y) &=& \frac{1}{y^4} \left( y^4 + \rho^2 - (1+\rho^2)y \right) = \frac{1}{y^4} \left( y^4-y + \rho^2(1 - y) \right) \nonumber \\
&=& \frac{(y-1)}{y^4} \left( y^3 +y^2 +y -\rho^2 \right) \equiv \frac{1}{y^4}f(y)\nonumber \\ &=& \frac{(y-1)(y-\rho_0)}{y^4} \left( y^2 +(1+\rho_0)y + \frac{\rho^2}{\rho_0} \right) \end{eqnarray} where $\rho_0$ satisfies \begin{equation} \rho^3_0 + \rho^2_0 + \rho_0 - \rho^2 =0. \label{r:1} \end{equation} We observe that $h(y)$ has at least one positive zero at $y=1$ corresponding to $u=u_0$; and at most two positive zeros with $y=\rho_0$ corresponding to $u = \rho_0 u_0$. We can understand this second zero by studying equation (\ref{r:1}). We can rewrite it as
\begin{equation} \rho^2(\rho_0) = \rho^3_0 + \rho^2_0 +\rho_0. \end{equation} First, we observe that $\rho_0$ vanishes when $\rho^2=0$, and $\rho_0=1$ when $\rho^2=3$. In addition, the function $\rho^2(\rho_0)$ is monotonic, since its derivative has no real zeros. Thus $\rho^2$ has a one-to-one relationship with $\rho_0$. This justifies the above statement that $h(u)$ has at most two real zeros. Since $\rho^2$ is not bounded, $\rho_0$ is also unbounded. The inverse function $\rho_0(\rho^2)$ is then:
\begin{equation} \rho_0 = \frac{b^2 - b -2}{3b} \;\;\; \mbox{where} \;\;\; b = \left[\frac{7+27\rho^2 + 3\sqrt{3}\sqrt{3+14\rho^2+27\rho^4}}{2}\right]^{1/3}. \end{equation}
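The closed form above can be verified directly (a small numerical sketch of our own): $\rho_0(\rho^2)$ should satisfy the cubic (\ref{r:1}), with $\rho_0(0)=0$ and $\rho_0(3)=1$.

```python
def rho0(rho2):
    """Closed-form positive root of rho0**3 + rho0**2 + rho0 - rho2 = 0,
    transcribing the expression for b quoted in the text."""
    b = ((7.0 + 27.0 * rho2
          + 3.0 * 3.0 ** 0.5 * (3.0 + 14.0 * rho2 + 27.0 * rho2 ** 2) ** 0.5)
         / 2.0) ** (1.0 / 3.0)
    return (b * b - b - 2.0) / (3.0 * b)
```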
The following picture emerges for the position of the horizon $\rho_* u_0$.
\begin{equation} \rho_* = \begin{cases} 1 & \mbox{when} \;\;\; 0\leq \rho^2 \leq 3 \\ \rho_0 & \mbox{when} \;\;\; 3\leq \rho^2 < \infty \end{cases}. \end{equation} The temperature of the horizon is given by the surface gravity,
\begin{equation} T = \frac{\kappa}{2\pi} = \frac{1}{2\pi}\sqrt{\frac{-(\nabla^\alpha\xi^\beta)(\nabla_\alpha\xi_\beta)|_{u=\rho_* u_0}}{2}} \end{equation} where $\xi$ is a timelike Killing vector. With $\xi^\alpha = \left(\frac{\partial}{\partial t}\right)^\alpha$, the temperature is,
\begin{equation} T = \frac{u_0}{4\pi} \begin{cases} 3 - \rho^2 & \mbox{for} \;\;\; 0\leq \rho^2 \leq 3 \\ \frac{2 \rho_0^4 +\rho_0 + \rho^2 (\rho_0 -2)}{\rho_0^3} & \mbox{for} \;\;\; 3 \leq \rho^2 < \infty \end{cases}. \end{equation} We observe extremality at $\rho^2 =3$ as expected.
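As a consistency check (our own numerical sketch; here the root of the cubic is found by bisection rather than the closed form), the two branches join continuously and the temperature vanishes at extremality, $\rho^2 = 3$:

```python
import math

def rho0_of(rho2):
    """Positive root of r**3 + r**2 + r - rho2 = 0 via bisection."""
    lo, hi = 0.0, max(1.0, rho2)   # cubic is negative at 0, positive at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid ** 3 + mid ** 2 + mid - rho2 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def temperature(rho2):
    """T in units of u0, from the two branches quoted above."""
    if rho2 <= 3.0:
        return (3.0 - rho2) / (4.0 * math.pi)
    r0 = rho0_of(rho2)
    return (2.0 * r0 ** 4 + r0 + rho2 * (r0 - 2.0)) / (4.0 * math.pi * r0 ** 3)
```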
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\linewidth]{tem.eps}
\caption{This plot shows the behavior of the temperature as a function of $\rho^2$.}
\label{fig:dyotem}
\end{figure}
In figure \ref{fig:dyotem}, the temperature is plotted against $\rho^2$.
\subsection{Entanglement Entropy}
The quantities $H$ and $\beta$ are:
\begin{equation}
H = R^2 u_0^2 y^2, \;\;\; \beta^2 = \frac{1}{u_0^2 f(y)}.
\end{equation} The difference in area and $l$ are given as
\begin{eqnarray}
\Delta A &=& 2 u_0 R^2 \left[ \int_{y_*}^{\infty}\frac{dy}{\sqrt{f(y)}}\left(\frac{y^4-\rho_*^2y_*^2}{\sqrt{y^4 - y^4_*}}-y^2\right) - \int_{\rho_*}^{y_*} \frac{y^2}{\sqrt{f(y)}} dy \right] \\
l &=& \frac{2 y_*^2}{u_0} \int_{y_*}^{\infty} \frac{1}{\sqrt{f(y)}}\frac{dy}{\sqrt{y^4 - y^4_*}}.
\end{eqnarray}
In the following graphs, $\Delta A$ is plotted in units of $u_0 R^2$ against $lu_0$ and $\rho^2$.
\begin{figure}[!h]
\centering
\includegraphics{ss.eps}
\caption{The difference in entropy is plotted against $\rho^2$ at $y_* = \rho_*, 2\rho_*, 3\rho_*, 4\rho_*$ (Blue, Green, Orange and Red). We observe that $\Delta A$ is always negative.}
\label{fig:dyonicT}
\end{figure}
\begin{figure}[!h]
\subfigure[]{\label{fig:5-a}\includegraphics[scale=.70]{l00.eps}}
\subfigure[]{\label{fig:5-b}\includegraphics[scale=.70]{s00.eps}}
\caption{Figure \ref{fig:dyonic*}(a) shows the behavior of the order parameter, $l$, as a function of $y_*$ for $\rho^2 = 0,1,2,2.99$ (Blue, Green, Orange, Brown). Figure \ref{fig:dyonic*}(b) shows the difference in entropy as a function of the order parameter. These plots show that there is no phase transition for $\rho^2<3$ in the dyonic black hole asymptotic to $AdS_4$.}
\label{fig:dyonic*}
\end{figure}
\begin{figure}[!h]
\subfigure[]{\label{fig:6-a}\includegraphics[scale=.70]{l11.eps}}
\subfigure[]{\label{fig:6-b}\includegraphics[scale=.70]{s111.eps}}
\caption{Figure \ref{fig:dyonic*2}(a) shows the behavior of $l$ as a function of $y_*$ for $\rho^2 = 3.1,10,100,500,1000$ (Brown, Blue, Green, Orange, Red). Figure \ref{fig:dyonic*2}(b) shows the difference in entropy as a function of the order parameter. These plots show that there is no phase transition for $\rho^2>3$ in the dyonic black hole asymptotic to $AdS_4$.}
\label{fig:dyonic*2}
\end{figure}
In figure \ref{fig:dyonicT}, $\Delta A$ is plotted against $\rho^2$ for different values of $y_*$; here we observe that the difference in entropy is strictly bounded above by zero. In figure \ref{fig:dyonic*}(b), $\Delta A$ is plotted against $lu_0$ for values of $\rho^2$ less than 3. From this graph it is apparent that the difference in entropy approaches a constant negative value for large values of $lu_0$. We also note that an increase in $\rho^2$ translates into a decrease in $\Delta A$. This feature is apparent in figure \ref{fig:dyonicT} and in figure \ref{fig:dyonic*}. Furthermore, we also see that increasing $y_*$ in figure \ref{fig:dyonicT} shifts $\Delta A$ downward. This allows us to also see that the difference in area is negative for large values of $l$, since small values of $y_*$ coincide with large values of $l$, as shown in figure \ref{fig:dyonic*}(a). From these observations, we conclude that the difference in entropy is always negative for $\rho^2<3$. Thus there is no transition.
For $\rho^2>3$, we also do not observe any transition. This can be seen in figure \ref{fig:dyonicT}, where the difference in entropy is strictly negative. In figure \ref{fig:dyonic*2}(b), the difference in entropy is plotted against $lu_0$. Here we observe an increase in entropy for small $\rho^2$ and then a monotonic decrease, consistent with figure \ref{fig:dyonicT}.
\section{Global $AdS_p$}\label{globalads}
\subsection{Global $AdS_5$}
Now we explore the entanglement entropy of black holes in $AdS_5$ in global coordinates. This geometry does not satisfy the conditions of section \ref{general}, since there is no spacelike Killing vector that commutes with all the other Killing vectors. We reformulate the problem explicitly in this case. The Schwarzschild black hole in global $AdS_5$ is given by:
\begin{equation}
ds^2 = -f(r) dt^2 + \frac{1}{f(r)}dr^2 + r^2(d\theta^2 + \cos^2(\theta)d\phi^2 + \sin^2(\theta)d\varphi^2)
\end{equation}
where the 3-sphere is written in Hopf coordinates and $f(r)$ given by:
\begin{equation}
f(r) = 1-\frac{M}{r^2} + \frac{r^2}{L^2}.
\end{equation}
The conformal boundary is obtained, in these coordinates, by taking the $r\rightarrow \infty$ limit. The region $A$ is given by the hypersurface on the boundary parametrized as:
\begin{equation}
t = \mbox{constant} \;\;\; \mbox{and} \;\;\; 0 \leq \phi \leq 2\pi l.
\end{equation} So, $l$ measures the size of $A$. This quantity is bounded by 1 since the period of $\phi$ is $2\pi$.
\subsubsection{Entanglement}
The surfaces in $M$ that have the same boundary as $A$ can be parametrized as:
\begin{equation}
t=\mbox{constant} \;\;\; \mbox{and} \;\;\; r = r(\phi),
\end{equation}
with boundary conditions as:
\begin{equation}
r(0) = r(2\pi l) = r_\infty,
\end{equation} where $r_\infty$ is a cutoff parameter which will be taken to $\infty$. The induced metric for this surface is given by:
\begin{equation}
ds^2 = (\frac{r'^2(\phi)}{f(r)}+ r^2\cos^2(\theta))d\phi^2 + r^2(d\theta^2 + \sin^2(\theta)d\varphi^2).
\end{equation}
Its area is then
\begin{eqnarray}
A_c &=& 2\pi \int_0^{2\pi l}\int_0^{\pi/2} r^3 \sqrt{\cos^2(\theta) + b^2}\,\sin(\theta)\, d\theta \, d\phi \;\;\; \mbox{where} \;\;\; b=\frac{r'(\phi)}{r\sqrt{f(r)}} \\
&=& 2\pi \int_0^{2\pi l}\int_0^{1} r^3 \sqrt{a^2 + b^2} da \, d\phi \;\; \mbox{where} \;\; a =\cos(\theta).
\end{eqnarray}
The minimal surface is obtained by solving for the function $r(\phi)$ that minimizes the action $A_c$.
$A_c$ is an action for a point particle with Lagrangian:
\begin{equation}
L = 2 \pi \int_0^{1} r^3 \sqrt{a^2 + b^2} \, da.
\end{equation}
From the Euler-Lagrange equation (H=Hamiltonian):
\begin{equation}
\frac{d}{d\phi} H = - \frac{\partial}{\partial \phi} L
\end{equation}
We obtain the equation of motion:
\begin{equation}
0 = \frac{d}{d\phi} \left[ (r'\frac{\partial}{\partial r'} -1)L \right] \, \Rightarrow \, (r'\frac{\partial}{\partial r'} -1)L = c
\end{equation} where $c$ is some constant.
Now we have
\begin{eqnarray}
r' \frac{\partial}{\partial r'}L -L &=& r^3 \int_0^1 \left[\frac{b^2}{\sqrt{a^2+b^2}}-\sqrt{a^2+b^2}\right] da \\
&=& - r^3 \int_0^1 \frac{a^2}{\sqrt{a^2+b^2}} da = -r^3 \int_{0}^{1} a \frac{d}{d a}\sqrt{a^2 + b^2} da \nonumber \\
&=& -r^3 \left[\sqrt{1+b^2} - \int_0^1\sqrt{a^2 + b^2} da \right]. \label{eq:a1}
\end{eqnarray} We can determine $c$ by considering the point $r_*$ where $r'=0$; this corresponds to $b=0$. It is also important to note that $r_*$ must be bounded below by $r_0$. We thus have:
\begin{equation}
c= -\frac{r_*^3}{2}.
\end{equation} After integrating, we obtain the equation of motion of $r(\phi)$,
\begin{equation}
\frac{r_*^3}{r^3} = \sqrt{1+b^2} - b^2 \ln\left[\frac{1+\sqrt{1+b^2}}{b}\right].
\end{equation} We note that this is a transcendental equation for $b$. Thus we cannot evaluate the area explicitly. However we will be able to obtain some information when we substitute the equation of motion into the area formula.
Before we proceed, we first evaluate the area of the piece-wise smooth surface. This surface is the union of three surfaces, given by $\phi = 0$, $r= r_0$ and $\phi = 2\pi l$ at constant $t$. The line element for the $\phi=\mbox{constant}$ pieces is,
\begin{equation}
ds^2 = \frac{1}{f(r)}dr^2 + r^2(d\theta^2 +\sin^2(\theta)d\varphi^2)
\end{equation}
with area
\begin{equation}
A = 2\pi \int_{r_0}^{\infty} \int_0^{\pi/2} \frac{r^2}{\sqrt{f(r)}} \sin(\theta)\,d\theta dr = 2 \pi \int_{r_0}^{\infty} \frac{r^2}{\sqrt{f(r)}} dr.
\end{equation}
The $r=r_0$ piece has area
\begin{equation}
A = r_0^3 (2\pi)^2 l \int_0^{\pi/2} \sin{\theta}\cos{\theta} d\theta = 2\pi^2 r_0^3 l .
\end{equation}
The area of the piece-wise smooth surface is then
\begin{equation}
A_d= 4 \pi \int_{r_0}^{\infty} \frac{r^2}{\sqrt{f(r)}} dr + 2\pi^2 r_0^3 l.
\end{equation}
\subsubsection{Comparison}
Before we can compare the areas, we rewrite the area of the smooth surface as an integral over $r$. This is done by integrating out $a$, substituting $\frac{dr}{r'(\phi)}$ for $d\phi$, and integrating from $r_*$ to $r_\infty$. One then obtains,
\begin{eqnarray}
A_c &=& \pi \int_{0}^{2 \pi l} r^3 \left[\sqrt{1+b^2} + b^2 \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right)\right] d\phi \\
&=& 2 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[\sqrt{1+b^2} + b^2 \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right)\right] dr.
\end{eqnarray} We can also write $l$ as a function of $r_*$. This is done by integrating $r'(\phi)$.
\begin{equation}
2 \pi l = \int_{0}^{2\pi l} d\phi = 2 \int_{r_*}^{r_\infty} \frac{dr}{r'} = 2 \int_{r_*}^{r_\infty} \frac{dr}{br\sqrt{f(r)}}.
\end{equation} Now we can evaluate the difference in area $\Delta A =A_c -A_d$. It is given as:
\begin{eqnarray}
\Delta A &=& 2 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[\sqrt{1+b^2} + b^2 \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right)\right] dr \nonumber \\
&-& 4 \pi \int_{r_0}^{\infty} \frac{r^2}{\sqrt{f(r)}} dr - 2\pi r_0^3 \int_{r_*}^{r_\infty} \frac{dr}{br\sqrt{f(r)}} \label{eq:a1p}\\
&=& 2 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[\sqrt{1+b^2} + b^2 \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right) -\frac{r_0^3}{r^3} -2 b\right] dr \nonumber \\ &-& 4 \pi \int_{r_0}^{r_*} \frac{r^2}{\sqrt{f(r)}} dr.
\end{eqnarray}
Notice that we have split the second integral in equation (\ref{eq:a1p}) into two integrals with ranges $r_0 \to r_*$ and $r_* \to r_\infty$. Now we can use the equation of motion:
\begin{equation}
\frac{r_*^3}{r^3} = \sqrt{1+b^2} - b^2 \ln\left[\frac{1+\sqrt{1+b^2}}{b}\right]
\end{equation} to obtain
\begin{equation}
\Delta A = 2 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[2b^2 \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right) -2b + \frac{r_*^3-r_0^3}{r^3}\right] dr - 4 \pi \int_{r_0}^{r_*} \frac{r^2}{\sqrt{f(r)}} dr.
\end{equation}
Now we can take the $r_*=r_0$ limit and obtain:
\begin{equation}
\Delta A = 4 \pi \int_{r_0}^{r_\infty} \frac{r^2}{\sqrt{f}} \left[b \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right) -1\right] dr < 0
\end{equation} since
\begin{equation}
b \ln\left(\frac{1+\sqrt{1+b^2}}{b}\right) -1 < 0
\end{equation}
for nonzero values of $b$. Thus we obtain
\begin{equation}
\Delta A(r_* \to r_0) < 0.
\end{equation}
We can also evaluate the large $r_*$ limit by eliminating the $\ln$ term in the difference equation:
\begin{equation}
\Delta A = 4 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[\sqrt{1+b^2}- b -\frac{r_*^3 + r_0^3}{2r^3} \right] dr - 4 \pi \int_{r_0}^{r_*} \frac{r^2}{\sqrt{f(r)}} dr. \label{dA:3}
\end{equation} As $r_* \rightarrow r_\infty$, we have
\begin{equation}
\Delta A = -4 \pi \int_{r_*}^{r_\infty} \frac{r_*^3}{2 r b\sqrt{f}} dr - 4 \pi \int_{r_0}^{r_*} \frac{r^2}{\sqrt{f(r)}} dr.
\end{equation} Thus we obtain
\begin{equation}
\Delta A(r_* \rightarrow r_\infty) < 0.
\end{equation} Now we see that the difference in entropy is negative for both large values of $r_*$ and for $r_* = r_0$. We collect all this information in:
\begin{equation}
\Delta A(r_*=r_0) < 0, \;\;\; \mbox{and} \;\;\; \Delta A(r_* \rightarrow r_\infty) < 0.
\end{equation} From this argument, it is clear that if a transition exists, it must occur at some intermediate value of $r_*$. Furthermore, if there is a transition, there exists an $r_*$ such that $\frac{d \Delta A (r_*)}{dr_*} = 0$. This may not be straightforward to check, since $\Delta A$ is an integral over $b(r,r_*)$, which satisfies a transcendental equation. So one should integrate over $b$ instead of $r$, since the equation of motion can be read as:
\begin{equation}
r(b,r_*) = r_* \left( \sqrt{1+b^2} - b^2 \ln\left[\frac{1+\sqrt{1+b^2}}{b}\right] \right)^{-1/3}.
\end{equation}
One should also check that $r$ has a one-to-one relation with $b$, which can be done by showing that the first derivative has no zeros. The derivative is:
\begin{equation}
\frac{dr}{db} = r_* \frac{ 2b(-1-\sqrt{1+b^2} +(1+b^2 +\sqrt{1+b^2})\ln(\frac{1+\sqrt{1+b^2}}{b})) }{3\sqrt{1+b^2} (1+\sqrt{1+b^2})(\sqrt{1+b^2} -b^2 \ln(\frac{1+\sqrt{1+b^2}}{b}))^{4/3} }.
\end{equation}
Now, $db/dr$ vanishes only where the above expression diverges. However, this expression is always finite, making it safe to integrate over $b$ instead of $r$. The integration bounds for $b$ run from $0$ to $\infty$. Notice that we have written $\Delta A$ in three different ways by using the equation of motion. In what follows, we use the form given in equation (\ref{dA:3}). It is
\begin{equation}
\Delta A = 4 \pi \int_{r_*}^{r_\infty} \frac{r^2}{b\sqrt{f}} \left[\sqrt{1+b^2}- b -\frac{r_*^3 + r_0^3}{2r^3} \right] dr - 4 \pi \int_{r_0}^{r_*} \frac{r^2}{\sqrt{f(r)}} dr.
\end{equation}
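The one-to-one property of $r(b)$ claimed above can also be checked numerically (an illustrative sketch; the function name is ours). Writing $r = r_* g(b)$, we verify that $g$ is strictly increasing, with $g(b)\to 1$ as $b\to 0$ (i.e. $r\to r_*$) and $g(b)\to\infty$ for large $b$:

```python
import math

def g(b):
    """Profile function in r = r_* g(b), read off from the equation of
    motion: g(b) = (sqrt(1+b^2) - b^2 ln((1+sqrt(1+b^2))/b))^(-1/3)."""
    s = math.sqrt(1.0 + b * b)
    return (s - b * b * math.log((1.0 + s) / b)) ** (-1.0 / 3.0)
```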
It will also be useful to factor out the AdS radius $L$ by making the coordinate change $x=r/L$ and to introduce the parameter $m=M/L^2$. We then have:
\begin{eqnarray}
f(r)&=& 1 - \frac{M}{r^2} + \frac{r^2}{L^2} = \frac{1}{x^2} (x^2 -x_0^2)(x^2 +x_0^2 +1) \equiv \frac{1}{x^2} h(x,x_0) \label{h:1}\\
x_0^2 &=& \frac{\sqrt{1+4m} -1}{2} \\
x(b,x_*) &=& x_* \left( \sqrt{1+b^2} - b^2 \ln\left[\frac{1+\sqrt{1+b^2}}{b}\right] \right)^{-1/3} \\
\Delta A (x_*,x_0) &=& 4\pi L^3 \int_{x_*}^{x_\infty} \frac{x^3}{b\sqrt{h(x,x_0)}} \left[\sqrt{1+b^2}- b -\frac{x_*^3 + x_0^3}{2x^3} \right] dx \nonumber \\ &-& 4 \pi L^3 \int_{x_0}^{x_*} \frac{x^3}{\sqrt{h(x,x_0)}} dx.
\end{eqnarray} Notice that we have introduced the polynomial function $h(x,x_0)$ in (\ref{h:1}). As an integral over $b$, the difference in area is given by:
\begin{eqnarray}
\frac{\Delta A}{4 \pi L^3} &=& \int_{0}^{\infty} \frac{x_*^4 g^3(b) g'(b)}{b\sqrt{h(x_* g(b),x_0)}}[\sqrt{1+b^2}-b] db - \int_{x_0}^{x_*} \frac{x^3}{\sqrt{h(x,x_0)}} dx \nonumber \\ &-& \int_{0}^{\infty} \frac{x_*(x_*^3 + x_0^3) g'(b)}{2b\sqrt{h(x_* g(b),x_0)}} db \;\;\; \mbox{where} \\
g(b) &=& \left( \sqrt{1+b^2} - b^2 \ln\left[\frac{1+\sqrt{1+b^2}}{b}\right] \right)^{-1/3}.
\end{eqnarray} We can also write $l$ as an integral over $b$:
\begin{equation}
l = \frac{x_*}{\pi} \int_{0}^{\infty} \frac{g'(b)}{b \sqrt{h(x_* g(b),x_0)}} db
\end{equation}
\begin{figure}[!h]
\subfigure[]{\label{fig:global-a}\includegraphics[scale=.70]{globall1.eps}}
\subfigure[]{\label{fig:global-b}\includegraphics[scale=.70]{globalS1.eps}}
\caption{Figure \ref{fig:global}(a) shows the behavior of the order parameter, $l$, as a function of $x_*$ for $x_0 = .5,1,1.5,2,2.5$ (Brown, Red, Orange, Green, Blue). Figure \ref{fig:global}(b) shows the difference in entropy as a function of the order parameter. These plots show no phase transition for the entanglement entropy in the Schwarzschild black hole in $AdS_5$.}
\label{fig:global}
\end{figure}
In figure \ref{fig:global} we have plotted the difference in area against $l$ at various horizon positions. The features observed here are similar to those in the planar systems. However, we see that the difference in area shifts downward as we increase the temperature, instead of simply scaling. Furthermore, we notice that there is a minimum $r_*$ for which the entanglement entropy can be defined. This minimum is determined by the equation $l(r_*)=1$.
\subsection{Global $AdS_4$} \label{global4}
The relevant geometry is
\begin{equation}
ds^2 = -f(r) dt^2 + \frac{1}{f(r)}dr^2 + r^2(d\theta^2 + \sin^2(\theta)d\phi^2)
\end{equation}
where
\begin{equation}
f(r) = 1-\frac{M}{r} + \frac{r^2}{L^2}.
\end{equation}
Introducing the coordinate $x=r/L$, we find that the difference in area and the expression for $l$ are
\begin{eqnarray}
\Delta A &=& 2 L^2 \int_{x_*}^{x_\infty} \frac{dx}{b\sqrt{x}\sqrt{h(x,x_0)}} \left[x^2 (q(b)-\pi b) -2x_0^2\right] -2\pi L^2 \int_{x_0}^{x_*} \frac{dx x^{3/2}}{\sqrt{h(x,x_0)}} \\
l &=& \frac{1}{\pi} \int_{x_*}^{x_\infty} \frac{dx}{b\sqrt{x}\sqrt{h(x,x_0)}}.
\end{eqnarray} The equation of motion and $h(x,x_0)$ are given as
\begin{eqnarray}
x &=& x_* \frac{\sqrt{2}}{\sqrt{p(b)}} \;\;\; \mbox{where} \;\;\; b^2 = \frac{r'^2(\phi)}{r^2 f(r)}=\frac{x'^2(\phi)}{x^2 f(x,x_0)} \\
h(x,x_0) &=& x f(x,x_0) = (x-x_0)(x^2 + x x_0 +x_0^2 +1).
\end{eqnarray} The position of the horizon is given by $M$ as,
\begin{equation}
x_0^2 + x_0 -\frac{M}{L} = 0.
\end{equation} The functions $p(b)$ and $q(b)$ are:
\begin{eqnarray}
q(b) &=& \int_0^\pi \sqrt{b^2 + \sin^2(\theta)} d\theta \\
p(b) &=& \int_0^\pi \frac{\sin^2(\theta)}{\sqrt{b^2 + \sin^2(\theta)}} d\theta.
\end{eqnarray}
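The functions $p(b)$ and $q(b)$ are complete-elliptic-type integrals. A simple midpoint-rule evaluation (our own sketch, for illustration) reproduces their basic properties: $p(0)=q(0)=2$ (since both reduce to $\int_0^\pi \sin\theta\, d\theta$), $q(b) > p(b)$ for $b>0$ (their difference is $b^2\int_0^\pi d\theta/\sqrt{b^2+\sin^2\theta}$), and $p$ decreases with $b$:

```python
import math

def q_of(b, n=2000):
    """q(b) = integral_0^pi sqrt(b^2 + sin^2(theta)) dtheta, midpoint rule."""
    h = math.pi / n
    return sum(math.sqrt(b * b + math.sin((i + 0.5) * h) ** 2)
               for i in range(n)) * h

def p_of(b, n=2000):
    """p(b) = integral_0^pi sin^2(theta)/sqrt(b^2 + sin^2(theta)) dtheta."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        s2 = math.sin((i + 0.5) * h) ** 2
        total += s2 / math.sqrt(b * b + s2)
    return total * h
```

The midpoints avoid $\theta=0,\pi$, so $p_of(0)$ is well defined despite the $1/|\sin\theta|$ factor there.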
By substituting the equation of motion in the difference of area integral, one can see that
\begin{equation}
\Delta A (x_* \to x_0) < 0
\end{equation} since the integrand becomes negative for all $b$. Now we can plot the above function by integrating over $b$. In figure \ref{fig:global4} we show the behavior of the difference in area with respect to $l$. Again, we do not observe a transition.
\begin{figure}[!h]
\subfigure[]{\label{fig:global4-a}\includegraphics[scale=.70]{global4l1.eps}}
\subfigure[]{\label{fig:global4-b}\includegraphics[scale=.70]{global4S1.eps}}
\caption{Figure \ref{fig:global4}(a) shows the behavior of the order parameter, $l$, as a function of $x_*$ for $x_0 = .5,1,1.5,2,2.5$ (Brown, Red, Orange, Green, Blue). Figure \ref{fig:global4}(b) shows the difference in entropy as a function of the order parameter. These plots show no phase transition for the entanglement entropy in the Schwarzschild black hole in $AdS_4$.}
\label{fig:global4}
\end{figure}
\section{Conclusions}\label{conclusions}
In lower dimensional systems the entanglement entropy has served as a way to detect exotic phases, see for example \cite{exotic}. In the context of higher dimensional field theories, most interestingly, four-dimensional field theories, it was recently suggested that the entanglement entropy could be used as an order parameter for confinement/deconfinement \cite{Klebanov:2007ws}. Motivated largely by these developments we launched a systematic study of the entanglement entropy in what should be understood as field theories at finite temperature. The hope was that by studying the entanglement entropy we would be able to identify a transition, just as a study of the entanglement entropy in the supergravity dual of the confined phase suggested the possibility of a transition to the deconfined phase. The premier examples of supergravity backgrounds dual to field theories at finite temperatures are the nonextremal Dp-brane solutions. We presented a detailed study in those geometries and generically detected no transition, except for the case of $p=6$ which is of limited interest from the field theory point of view.
We then analyzed the entanglement entropy in global coordinates. The motivation for the analysis comes from the fact that the Hawking-Page phase transition in asymptotically AdS spaces takes place only in global coordinates. This fact was given the interpretation, in the context of the AdS/CFT correspondence, of being dual to the confinement/deconfinement transition for a field theory on a sphere. We did not observe a transition for most of the supergravity backgrounds. Since, as shown in \cite{Hawking:1982dh}, the free energy undergoes a transition in this context and the entanglement entropy does not, we can safely conclude that the entanglement entropy is not the free energy. This is an important observation, since the free energy is considered the standard order parameter for the confinement/deconfinement transition. The main reason is the natural expectation that, in the confined phase, the number of degrees of freedom comes from hadronic states and is therefore of order $N^0$ for an $SU(N)$ theory, while in the deconfined phase one expects $N^2$ degrees of freedom.
An interesting system where a Hawking-Page phase transition was established in \cite{Mahato:2007zm} is the black hole with cascading asymptotics constructed in \cite{PandoZayas:2006sa}. This system describes the high-temperature regime to which the Klebanov-Strassler background \cite{Klebanov:2000hb} flows. Preliminarily, with an analysis based on a few values of the temperature, we have been unable to observe a transition in the entanglement entropy for the background of \cite{PandoZayas:2006sa}. We defer a more detailed analysis to a separate publication.
The relationship between the entanglement entropy and the black hole entropy has been considered in various works in the literature, for example \cite{Brustein:2005vx,Emparan:2006ni,Kabat:1995eq,Kabat:1994vj}. In \cite{Kabat:1995eq} the one-loop corrections due to matter fields to the black hole entropy (computed through the free energy of the system) and to the entropy of entanglement were computed and found to agree for spin zero and spin one-half fields. A discussion more tuned to our main interest was presented in \cite{Brustein:2005vx}, where it was suggested that the microstates responsible for the black hole entropy are those due to the entanglement of the vacuum of the black hole. More bluntly, it was suggested in \cite{Brustein:2005vx} that the leading contribution to the black hole entropy is due to entanglement. A similar proposal, albeit with a major modification about the position of the black hole, was argued in \cite{Emparan:2006ni}. The idea that black hole entropy is due to the entanglement entropy in fact finds a place in our analysis, and was also explicitly realized already in \cite{Ryu:2006ef} in the context of the holographic dual of the ${\cal N}=4$ plasma. We believe this is, ultimately, the right track but, as we have shown in various examples, the story is more complicated, due to the fact that the entanglement entropy does not behave as an entropy obtained directly from the free energy. It is worth stressing the significant role of the horizon: if one does not include the contribution to the piece-wise smooth configuration coming directly from the horizon, there is generically a transition. So, we conclude that the horizon is crucial in the thermodynamical competition.
From a slightly more technical point of view, we can arrive at an important observation regarding collapsing cycles. In all these geometries, as in the cases discussed in \cite{Klebanov:2007ws}, there is a collapsing cycle coming from the temporal direction in the Euclidean frame. However, this cycle does not contribute in a significant way to the computations; in \cite{Klebanov:2007ws} the collapsing cycle was critical in obtaining transitions. An interesting alternative to the entanglement entropy has been recently suggested in \cite{Fujita:2008zv}. The geometric entropy defined there uses a double Wick rotation and therefore includes the full effect of the collapsing Euclidean cycle that was, in the original geometry, the temporal direction. We predict that this modification will lead to transitions in most of the cases presented in this paper.
Finally, we end with a remark about the place of the entanglement entropy. We believe it is a thermodynamical potential rather than an entropy. A large part of this intuition has been developed while collaborating with Carlos N\'u\~nez on similar issues. Namely, while studying the entanglement entropy, we observe that $l$ and $r_*$ (the minimum of the smooth surface) behave like thermodynamically conjugate variables. This statement is motivated by the fact that when the function $l(r_*)$ is double valued, the entropy exhibits branches, as observed in the geometries in \cite{Klebanov:2007ws} and for $p=6$. This would suggest that the entropy is a new thermodynamical potential; this is subject to further study. Since we have observed, in a wide variety of models, no transition in the entanglement entropy, we conclude that the entanglement entropy is, possibly, a thermodynamical potential that is different from the free energy. We are currently pursuing this line of investigation in collaboration with C. N\'u\~nez and expect to report on our findings soon.
\section*{Acknowledgments}
C. Terrero-Escalante is thankful to MCTP and CEFIMAS for hospitality. This work is partially supported by Department of Energy under
grant DE-FG02-95ER40899 to the University of Michigan.
\section{Introduction}
Topological insulators have attracted a lot of interest due to their novel properties and potential applications~\cite{Hasan2010, Qi2011}. One of the prototypes of topological insulators is the Haldane model of Chern insulators~\cite{Haldane1988}, which describes spinless fermions hopping on a 2D honeycomb lattice in a staggered magnetic field. With the shaking-lattice technique~\cite{Struck_Sengstock2011, Hauke_Lewenstein2012, Struck_Sengstock2012, Struck2013, Zheng2014}, the Haldane model has been realized directly in a recent cold atom experiment~\cite{Jotzu2014}, and the topological phase diagram has been investigated using the Bloch oscillation method. It is worth mentioning that, although the Haldane model is realized in the cold atom setup, the topological ground state of this system is generally hard, if not impossible, to reach. As the system is generally prepared in a topologically trivial state before the shaking is applied, driving the system across the phase boundary into the topological phase inevitably creates excitations, since the band gap closes at the phase boundary and adiabaticity cannot be maintained.
The idea of a quantum quench, which starts with the ground state of a topologically trivial Hamiltonian $H_i$ followed by evolution under a topologically trivial/nontrivial Hamiltonian $H_f$, provides an alternative way of studying the properties of the Haldane model (and Haldane-like topological models)~\cite{Caio-Cooper2015, Hu-Zoller2016, Mueller2016, Wilson2016, Pengfei2016}. Although the Chern number~\cite{TKNN1982} of the time-dependent state remains unchanged, as the state evolution governed by $H_f$ is unitary~\cite{Rigol2015, Caio-Cooper2015}, the edge states~\cite{Caio-Cooper2015} as well as a nontrivial Hall response~\cite{Hu-Zoller2016, Mueller2016, Wilson2016} can be built up dynamically.
Interestingly, quantum quenches can also be used to perform Bloch-state tomography, as proposed in Ref.~\cite{Hauke_Lewenstein2014} and then realized experimentally for a Haldane-like model on the honeycomb lattice by the Hamburg group~\cite{Sengstock2016a}. In this experiment, they prepared the initial (topologically trivial) state adiabatically, and then measured the Bloch state for each quasimomentum $\bf{k}$ parameterized on the Bloch sphere as ${\left( {\sin \left( {{\theta _{\mathbf{k}}}/2} \right),\; - \cos \left( {{\theta _{\mathbf{k}}}/2} \right){e^{i{\phi _{\mathbf{k}}}}}} \right)^T}$. The Berry curvature of the corresponding topologically trivial state was also mapped out. More recently, generalizing their previous scheme in Ref.~\cite{Sengstock2016a}, the Hamburg group performed a double-quench experiment and observed emergent phase vortices in the azimuthal phase $\phi_{\bf k}$~\cite{Sengstock2016}. To be more specific, they prepare the initial state with all the ${\bf k}$-dependent Bloch vectors pointing to the north pole, $\left| {{\psi _{\mathbf{k}}}\left( {t = 0} \right)} \right\rangle = {\left( {0,\;1} \right)^T}$ (i.e., $\theta_{\bf k}\to0$), then quench the Hamiltonian to $H_f$, giving rise to the time-dependent Bloch state $\left| {{\psi _{\mathbf{k}}}\left( t \right)} \right\rangle = {e^{ - i{H_f}t}}\left| {{\psi _{\mathbf{k}}}\left( {t = 0} \right)} \right\rangle $. A second quench to a flat-band Hamiltonian is performed to measure the time- and quasimomentum-dependent state parameterized by ${\theta _{\mathbf{k}}}(t)$ and ${\phi _{\mathbf{k}}}(t)$. They find that two different types of vortices emerge in the phase profile of ${\phi _{\mathbf{k}}}(t)$. The first type is static vortices, which indicate the Dirac points of $H_f$. The locations of these vortices do not move, as the name \emph{static} implies. The second type is dynamical vortices.
These vortices emerge at particular quasimomentum points at particular times for certain parameters of $H_f$. They are created pairwise at some $\bf k$ points, then move along certain trajectories, and finally annihilate pairwise, making the trajectories closed. The appearance of dynamical vortices is identified as the dynamical order parameter of a dynamical phase transition~\cite{Heyl_DQPT_2013}, which describes the singularities of the Loschmidt amplitude of the time-dependent state at a critical time. In addition to the identification of phase vortices, the dynamical phase diagram, defined as the parameter regime of the final Hamiltonian $H_f$ that supports the appearance of dynamical vortices, is also explored. They find that the region of the dynamical phase diagram is larger than that of the equilibrium phase diagram of $H_f$. The dynamics of the phase vortices and the phenomenon of the enlarged dynamical phase diagram were not quantitatively discussed in Ref.~\cite{Sengstock2016}.
It is the purpose of the current work to explore the dynamics of the phase vortices for a quenched topological model. As the effective Hamiltonian (of a Haldane-like model) in Ref.~\cite{Sengstock2016} does not admit a simple analytical expression, we study instead the phase vortices of the quenched Haldane model. We give the explicit expression for the trajectories of the dynamical vortices, and give a possible reason for the enlarged dynamical phase diagram. We will show that the trajectories of the dynamical vortices reveal exactly the Chern number of the Hamiltonian $H_f$: when the dynamical vortices wind around static vortices, the lower-band Chern number of $H_f$ is $C_1 = 1$, while when the dynamical vortices do not wind around static vortices, we have $C_1 = 0$. More generally, as is proved in detail in Ref.~\cite{Pengfei2016}, the linking number between these two kinds of trajectories in the $(k_x, k_y, t)$ space equals the Chern number of the lower band. Thus the quench protocol in Ref.~\cite{Sengstock2016} can be used to directly identify the topological phase boundaries of the equilibrium Hamiltonian.
The paper is organized as follows. In Sec.~\ref{Sec:2}, we describe the quench protocol of the Haldane model, and study the state evolution of this system for the case of an infinitely large initial energy offset $M_i\to\infty$ in the initial Hamiltonian $H_i$ (i.e., $\theta_{\bf k} = 0$). In Sec.~\ref{Sec:3}, the evolution of the time-dependent azimuthal phase ${\phi _{\mathbf{k}}}(t)$ is explored. We show that, once the south pole ($\theta_{\bf k} = \pi$) is reached, the dynamical vortices are created pairwise. The trajectories of the dynamical vortices as well as the dynamical phase diagram are identified. In Sec.~\ref{Sec:Finite_Mi}, we study the case of a finite initial energy offset $M_i$, and the enlarged dynamical phase diagram is obtained. We conclude in Sec.~\ref{Sec:Conclusion}.
\section{The state evolution of the quenched Haldane model}\label{Sec:2}
We consider the Haldane model~\cite{Haldane1988} on the honeycomb lattice [Fig.~\ref{Fig1}(a)] with the following Hamiltonian:
\begin{equation}
\begin{aligned}
H =& - {J_0}\sum\limits_{j,l} {\left( {a_{{{\mathbf{r}}_j}}^\dag {b_{{{\mathbf{r}}_j} + {{\bm{\delta }}_l}}} + {\text{h}}{\text{.c}}{\text{.}}} \right)} + M\sum\limits_j {\left( {a_{{{\mathbf{r}}_j}}^\dag {a_{{{\mathbf{r}}_j}}} - b_{{{\mathbf{r}}_j}}^\dag {b_{{{\mathbf{r}}_j}}}} \right)} \hfill \\
&- {J_1}\sum\limits_{j,l} {\left( {{e^{i{\phi _{jl}}}}a_{{{\mathbf{r}}_j}}^\dag {a_{{{\mathbf{r}}_j} + {{\mathbf{a}}_l}}} + {e^{ - i{\phi _{jl}}}}b_{{{\mathbf{r}}_j}}^\dag {b_{{{\mathbf{r}}_j} + {{\mathbf{a}}_l}}}} \right)} , \hfill \\
\end{aligned}
\end{equation}
where $a_{{{\mathbf{r}}_j}}$ and $b_{{{\mathbf{r}}_j}}$ are the annihilation operators for the $A$ and $B$ sublattices with coordinates ${{\mathbf{r}}_j}$, respectively, $J_0$ and $J_1$ are the nearest-neighbor (NN) and next-nearest-neighbor (NNN) hopping amplitudes, respectively, $\bm{\delta}_l$ ($\mathbf{a}_l$) with $l=1,2,3$ are the vectors pointing to the NN (NNN) sites, and $M$ is the sublattice energy offset. Here $\phi_{jl}=\pm\phi$ are the NNN hopping phases, and the positive sign is taken for NNN hopping along the arrows as shown in Fig.~\ref{Fig1}(a). We set $|\bm{\delta}_l| = 1$ as the length unit.
The Fourier transformed Hamiltonian in the quasimomentum space
gives the following compact form
\begin{equation}
H = \sum\limits_{\mathbf{k}} {\left( {a_{\mathbf{k}}^\dag \quad b_{\mathbf{k}}^\dag } \right)} \mathcal{H}\left( {\mathbf{k}} \right)\left( \begin{gathered}
{a_{\mathbf{k}}} \hfill \\
{b_{\mathbf{k}}} \hfill \\
\end{gathered} \right),
\end{equation}
where ${\mathbf{k}}$ is the quasimomentum, $a_{\mathbf{k}}$ and $b_{\mathbf{k}}$ are the sublattice annihilation operators.
In the above expression, $\mathcal{H}\left( {\mathbf{k}} \right)$ is a $2\times2$ matrix with the form
\begin{equation}\label{H_eff}
\mathcal{H}\left( {\mathbf{k}} \right) = {h_0}\left( {\mathbf{k}} \right){I_2} + {\mathbf{h}}\left( {\mathbf{k}} \right) \cdot {\bm{\sigma}},
\end{equation}
where $I_2$ is the $2\times2$ unit matrix, ${\bm{\sigma}}$ is the vector of Pauli matrices, ${h_0} = - 2{J_1}\cos \phi \sum\nolimits_l {\cos \left( {{\mathbf{k}} \cdot {{\mathbf{a}}_l}} \right)}$, and ${\mathbf{h}}\left( {\mathbf{k}} \right) = (h_x, h_y, h_z)$ with
\begin{equation}\label{h0_hxyz_haldane}
\begin{gathered}
{h_x} = - {J_0}\sum\limits_l {\cos \left( {{\mathbf{k}} \cdot {{\bm{\delta }}_l}} \right)} ,\quad {h_y} = - {J_0}\sum\limits_l {\sin \left( {{\mathbf{k}} \cdot {{\bm{\delta }}_l}} \right)} , \hfill \\
{h_z} = M - 2{J_1}\sin \phi \sum\limits_l {\sin \left( {{\mathbf{k}} \cdot {{\mathbf{a}}_l}} \right)} . \hfill \\
\end{gathered}
\end{equation}
The spectrum of the system is ${E_{\mathbf{k}}} = {h_0} \pm h$, where $h=\sqrt {h_x^2 + h_y^2 + h_z^2} $. The topology of the system can be characterized by the (first) Chern number~\cite{TKNN1982} of the lowest band as follows~\cite{Qi2006}
\begin{equation}
{C_1} = \iint_{{\text{BZ}}} {\frac{{{d^2}k}}{{4\pi }}}{\mathbf{\hat h}} \cdot \left( {\frac{{\partial {\mathbf{\hat h}}}}{{\partial {k_x}}} \times \frac{{\partial {\mathbf{\hat h}}}}{{\partial {k_y}}}} \right),
\end{equation}
where ${\mathbf{\hat h}} = {\mathbf{h}}/h$. The nontrivial winding of ${\mathbf{\hat h}}$ [see Fig.~\ref{Fig1}(b)] exists in the parameter regime $|M| < 3\sqrt 3 |{J_1}\sin \phi |$, which gives the phase diagram of the system [Fig.~\ref{Fig1}(c)].
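The Chern-number integral above can be checked numerically. The sketch below (an illustration, not part of the original analysis) integrates the Berry curvature of ${\mathbf{\hat h}}$ over one reciprocal-space unit cell using analytic derivatives of ${\mathbf{h}}({\mathbf{k}})$; explicit components of $\bm{\delta}_l$ are assumed, since the text fixes only $|\bm{\delta}_l|=1$, so only $|C_1|$ is convention-independent.

```python
import numpy as np

# Assumed NN vectors with |delta_l| = 1 (the text fixes only the length):
# delta_1 = (0, 1), delta_{2,3} = (+-sqrt(3)/2, -1/2); a_l are the NNN vectors.
deltas = np.array([[0.0, 1.0],
                   [np.sqrt(3)/2, -0.5],
                   [-np.sqrt(3)/2, -0.5]])
avecs = np.array([deltas[1] - deltas[2],
                  deltas[2] - deltas[0],
                  deltas[0] - deltas[1]])

def chern_number(J0, J1, M, phi, N=150):
    """Midpoint Riemann sum of (1/4pi) h_hat . (d_kx h_hat x d_ky h_hat)
    over one reciprocal unit cell, with analytic derivatives of h(k)."""
    A = np.array([deltas[1] - deltas[2], deltas[0] - deltas[2]])  # Bravais vectors
    B = 2*np.pi*np.linalg.inv(A).T          # rows: reciprocal vectors b_1, b_2
    u, v = np.meshgrid((np.arange(N) + 0.5)/N, (np.arange(N) + 0.5)/N)
    k = u[..., None]*B[0] + v[..., None]*B[1]           # shape (N, N, 2)
    kd, ka = k @ deltas.T, k @ avecs.T                  # k.delta_l and k.a_l
    h = np.stack([-J0*np.cos(kd).sum(-1),
                  -J0*np.sin(kd).sum(-1),
                  M - 2*J1*np.sin(phi)*np.sin(ka).sum(-1)], axis=-1)
    dh = np.empty(k.shape[:2] + (2, 3))                 # dh[..., alpha, i]
    for al in range(2):
        dh[..., al, 0] = J0*(np.sin(kd)*deltas[:, al]).sum(-1)
        dh[..., al, 1] = -J0*(np.cos(kd)*deltas[:, al]).sum(-1)
        dh[..., al, 2] = -2*J1*np.sin(phi)*(np.cos(ka)*avecs[:, al]).sum(-1)
    # h_hat . (dx h_hat x dy h_hat) = h . (dx h x dy h) / |h|^3
    triple = (h * np.cross(dh[..., 0, :], dh[..., 1, :])).sum(-1)
    integrand = triple / np.linalg.norm(h, axis=-1)**3
    dA = abs(np.linalg.det(B)) / N**2                   # area element
    return integrand.sum() * dA / (4*np.pi)

c_topo = chern_number(1.0, 0.1, 0.1, np.pi/2)      # |M| < 3*sqrt(3)*J1*sin(phi)
c_trivial = chern_number(1.0, 0.1, 1.0, np.pi/2)   # |M| > 3*sqrt(3)*J1*sin(phi)
```

With these (assumed) conventions the sum converges rapidly to an integer: $|C_1| = 1$ inside the regime $|M| < 3\sqrt 3 |J_1\sin\phi|$ and $C_1 = 0$ outside it.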
\begin{figure}[b]
\includegraphics[width=\columnwidth]{Fig1}\\
\caption{Quench dynamics of the Haldane model. (a) The Haldane model on a honeycomb lattice. The two sublattices $A$ and $B$ take an energy offset $M$. The arrows indicate the tunneling coefficients with positive hopping phases. (b) The normalized vector ${\mathbf{\hat h}}(\mathbf{k}) = {\mathbf{h}}/|{\mathbf{h}}|$ in the quasimomentum space for the topologically nontrivial case. The hexagon shows the first Brillouin zone. (c) The topological phase diagram of the Haldane model and the quench protocol. Sublattice energy offset $M$ is quenched from $M_i$ to $M_f$ for a particular $\phi$. (d) The state evolution visualized on the Bloch sphere for a particular quasimomentum point $\bf{k}$. (e) The state evolution for quasimomentum points in the first Brillouin zone, viewed from the north pole. The south pole region is reached for time $t\sim0.6\tau$. Here we take $\phi = \pi/2$, $M_i \to \infty$, $M_f = 0.1 J_0$, $J_1 = 0.1 J_0$, and $\tau=1/J_0$. }\label{Fig1}
\end{figure}
We now consider the case that the sublattice energy offset $M$ is time dependent:
\begin{equation}
M(t) = \left\{ \begin{gathered}
{M_i},\quad t \leqslant 0; \hfill \\
{M_f},\quad t > 0. \hfill \\
\end{gathered} \right.
\end{equation}
Suppose the system is initially prepared in the ground state of the Haldane Hamiltonian $H_i=h_0{I_2}+{\bf{h}}_i\cdot\bm{\sigma}$ with offset $M_i$; after the quench at $t=0$ the state evolves under a new Hamiltonian $H_f=h_0{I_2}+{\bf{h}}_f\cdot\bm{\sigma}$ with offset $M_f$ [Fig.~\ref{Fig1}(d)]. We then want to ask the following question: could the time-dependent state give any insight into the topological properties of the system? The answer is \emph{Yes}. The time-dependent state, albeit always topologically trivial, can indeed be used to identify the topological phase diagram of the equilibrium Hamiltonian~\cite{Pengfei2016}.
For conceptual simplicity, we first consider $M_i\to+\infty$, and defer the more complex case of finite $M_i$ to Sec.~\ref{Sec:Finite_Mi}. For the $M_i\to+\infty$ case, the initial single-particle state for each quasimomentum point reads
\begin{equation}
\left| {{\psi _{\mathbf{k}}}\left( {t = 0} \right)} \right\rangle = \left( \begin{gathered}
0 \hfill \\
1 \hfill \\
\end{gathered} \right).
\end{equation}
Under the Hamiltonian ${H}_f\left( {\mathbf{k}} \right) = {h_0}\left( {\mathbf{k}} \right){I_2} + {\mathbf{h}_f}\left( {\mathbf{k}} \right) \cdot {\bm{\sigma}}$, we get (we always take the reduced Planck constant $\hbar=1$)
\begin{equation}\label{Eq:psi_t}
\begin{aligned}
\left| {{\psi _{\mathbf{k}}}\left( t \right)} \right\rangle &= {e^{ - iH_f\left( {\mathbf{k}} \right)t}}\left| {{\psi _{\mathbf{k}}}\left( {t = 0} \right)} \right\rangle \hfill \\
&= {e^{ - i{h_0}t}}\left( \begin{gathered}
- i\sin \left( {ht} \right)\left( {{{\hat h}_x} - i{{\hat h}_y}} \right) \hfill \\
\cos \left( {ht} \right) + i\sin \left( {ht} \right){ {{\hat h}_z}} \hfill \\
\end{gathered} \right). \hfill \\
\end{aligned}
\end{equation}
In the above, as the initial state is $\mathbf{k}$-independent, the subscript $f$ is omitted to keep the notation concise.
This state can be parameterized on a Bloch sphere as
\begin{equation}\label{Eq:psi_theta_phi}
\left| {{\psi _{\mathbf{k}}}\left( t \right)} \right\rangle : = \left( \begin{gathered}
\sin \left( {{\theta _{\mathbf{k}}}/2} \right) \hfill \\
- \cos \left( {{\theta _{\mathbf{k}}}/2} \right){e^{i{\phi _{\mathbf{k}}}}} \hfill \\
\end{gathered} \right),
\end{equation}
then we have
\begin{equation}\label{Eq:phi_k}
\left\{ \begin{gathered}
{\theta _{\mathbf{k}}} = \arccos \left[ {{{\cos }^2}\left( {ht} \right) + {{\sin }^2}\left( {ht} \right)\left( {\hat h_z^2 - \hat h_x^2 - \hat h_y^2} \right)} \right], \hfill \\
{\phi _{\mathbf{k}}} = \arg \left[ {\frac{{\cos \left( {ht} \right) + i\sin \left( {ht} \right){{\hat h}_z}}}{{i\sin \left( {ht} \right)\left( {{{\hat h}_x} - i{{\hat h}_y}} \right)}}} \right]. \hfill \\
\end{gathered} \right.
\end{equation}
The time- and quasimomentum-dependent Bloch vectors are then given by
\begin{equation}
{{\mathbf{n}}_{\mathbf{k}}}\left( t \right) = \left( {\sin {\theta _{\mathbf{k}}}\cos {\phi _{\mathbf{k}}},\sin {\theta _{\mathbf{k}}}\sin {\phi _{\mathbf{k}}},\cos {\theta _{\mathbf{k}}}} \right).
\end{equation}
The typical results for the state evolution visualized by the Bloch vectors are given in Fig.~\ref{Fig1}(e).
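The closed form in Eq.~(\ref{Eq:psi_t}) and the polar angle in Eq.~(\ref{Eq:phi_k}) can be cross-checked numerically for a single quasimomentum point; the sketch below (an illustrative check, not from the original text) compares the analytic spinor against evolution by exact diagonalization.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# a generic Bloch vector h(k) standing in for one quasimomentum point
rng = np.random.default_rng(7)
hvec = rng.normal(size=3)
h = np.linalg.norm(hvec)
hhat = hvec / h
t = 0.7

# exact evolution of (0, 1)^T under h.sigma (the k-dependent part of H_f)
H = hvec[0]*sx + hvec[1]*sy + hvec[2]*sz
w, v = np.linalg.eigh(H)
psi_num = v @ np.diag(np.exp(-1j*w*t)) @ v.conj().T @ np.array([0, 1], dtype=complex)

# the closed form of Eq. (7), without the overall phase e^{-i h_0 t}
psi_ana = np.array([-1j*np.sin(h*t)*(hhat[0] - 1j*hhat[1]),
                    np.cos(h*t) + 1j*np.sin(h*t)*hhat[2]])
assert np.allclose(psi_num, psi_ana)

# polar angle of Eq. (8); in the parameterization of Eq. (9) the
# north pole is the lower component, so cos(theta_k) = -<sigma_z>
cos_theta = np.cos(h*t)**2 + np.sin(h*t)**2*(hhat[2]**2 - hhat[0]**2 - hhat[1]**2)
assert np.isclose(cos_theta, -np.real(psi_ana.conj() @ sz @ psi_ana))
```

Note that in the parameterization of Eq.~(\ref{Eq:psi_theta_phi}) the north pole corresponds to the lower component, so $\cos\theta_{\mathbf k} = -\langle\sigma_z\rangle$.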
As indicated in Fig.~\ref{Fig1}(e), some of the Bloch vectors can reach the south pole region as time goes by. A Bloch vector starting at the north pole precesses about the effective magnetic field [${\mathbf{h}}\left( {\mathbf{k}} \right)$ in Eq.~(\ref{H_eff})], so it can reach the south pole only if this field is orthogonal to the initial vector, i.e., only if the field has a vanishing $z$ component, $h_z({\bf k}) = 0$. According to Eq.~(\ref{h0_hxyz_haldane}), we rewrite $h_z$ as
${h_z({\bf k})} = M_f - 2{J_1}\sin(\phi) g({\bf k})$,
where $g({\bf k})$ is given by
\begin{equation}\label{g_k_function}
g({\bf k}) = {\sin \left( {{\mathbf{k}} \cdot {{\mathbf{a}}_1}} \right) + \sin \left( {{\mathbf{k}} \cdot {{\mathbf{a}}_2}} \right) + \sin \left( {{\mathbf{k}} \cdot {{\mathbf{a}}_3}} \right)}.
\end{equation}
We plot $g({\bf k})$ in Fig.~\ref{Fig2}(b). We see that $g({\bf k})$ peaks at the $\pm K$ points, and is thus bounded as $|g({\bf k})|\le |g(k_x = \frac{4\pi}{3\sqrt{3}}, k_y = 0)| = \frac{3\sqrt{3}}{2}$. Consequently, only when the Hamiltonian is in the topological phase (i.e., $|M_f| < 3\sqrt 3 |{J_1}\sin \phi |$) does the function $h_z({\bf k})$ feature zeros.
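The bound on $g({\bf k})$ quoted above is easy to verify numerically (assuming explicit NN components $\bm{\delta}_1=(0,1)$, $\bm{\delta}_{2,3}=(\pm\sqrt{3}/2,-1/2)$, which reproduce the quoted $K$-point location):

```python
import numpy as np

# NNN vectors a_l built from assumed NN components (the text fixes
# only |delta_l| = 1): delta_1 = (0, 1), delta_{2,3} = (+-sqrt(3)/2, -1/2)
d = np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
a = np.array([d[1] - d[2], d[2] - d[0], d[0] - d[1]])

# dense scan of g(k) over a region covering more than one Brillouin zone
kx, ky = np.meshgrid(np.linspace(-3, 3, 1201), np.linspace(-3, 3, 1201))
g = sum(np.sin(kx*al[0] + ky*al[1]) for al in a)
gmax = g.max()                         # ~ 3*sqrt(3)/2 ~ 2.598

# |g| attains the bound exactly at the K point (4*pi/(3*sqrt(3)), 0)
K = np.array([4*np.pi/(3*np.sqrt(3)), 0.0])
gK = np.sin(a @ K).sum()
```

Hence zeros of $h_z = M_f - 2J_1\sin(\phi)\,g({\bf k})$ exist only for $|M_f| \le 3\sqrt{3}\,J_1|\sin\phi|$.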
\begin{figure*}
\includegraphics[width=\textwidth]{Fig2}\\
\caption{Dynamics of the azimuthal phase $\phi_{\bf{k}}(t)$, with parameters the same as in Fig.~\ref{Fig1}(e). (a) Stroboscopic observation of $\phi_{\bf k}(t)$, for $t = 0.2\tau$ (I), $0.4\tau$ (II), $...$, $2\tau$ (X). The hexagon shows the first Brillouin zone. The closed loops show the contours with $h_z({\bf k}) = 0$. Dynamical vortices, as indicated by the red arrows in (IV, V, VI, VII, and VIII), live on such contours within particular time slots. (b) The $g({\bf{k}})$ function [Eq.~(\ref{g_k_function})] in quasimomentum space. $g({\bf{k}})=0.5=M_f/(2J_1 \sin\phi)$ gives the nodal lines of $h_z({\bf k})$ shown in (a). (c) The time-dependent dynamical vortex number. (d) The vortex trajectories in the $(k_x,k_y,t)$ space. The linking number between the loop of dynamical vortices and the loop of static vortices equals the Chern number of the final Hamiltonian, $C_1=1$ for this case. }\label{Fig2}
\end{figure*}
\section{The evolution of the azimuthal phase: static and dynamical vortices}\label{Sec:3}
We now focus on the evolution of the time- and quasimomentum-dependent azimuthal phase $\phi_{\bf k}(t)$, as shown in Fig.~\ref{Fig2}(a). The main features of the azimuthal phase dynamics are as follows:
(i) There are always static vortices in the phase pattern at the corners of the Brillouin zone (K and K' points) in Fig.~\ref{Fig2}(a).
(ii) There are dynamical vortices that emerge [Fig.~\ref{Fig2}(a)IV], propagate [Fig.~\ref{Fig2}(a)(V-VI)], and annihilate [Fig.~\ref{Fig2}(a)VII] in the quasimomentum space.
(iii) The trajectories of the dynamical vortices are just the zeros of $h_z({\bf k})$.
The locations of the static vortices correspond to the Dirac points of $h_x({\bf k})\sigma_x + h_y({\bf k})\sigma_y$, i.e., the $K$ and $K'$ points. Since $h_x({\bf k}=K) = h_y({\bf k}=K) = 0$, the Bloch vectors at these points always point to the north pole.
We have shown that only for the points satisfying $h_z({\bf k})=0$ is the south pole reachable at a suitable time. An accompanying question is:
\begin{quote}
\emph{For a particular quasimomentum point $\bf k$, must there be a dynamical vortex once the south pole is reached?}
\end{quote}
This question can also be reformulated quantitatively as follows:
Consider a particular quasimomentum point $\left( {k_x^0,k_y^0} \right)$ at a particular time $t_v$ that satisfy
\begin{equation}\label{Eq:condition_for_south_pole}
\left\{ \begin{gathered}
{h_z}\left( {k_x^0,k_y^0} \right) = 0, \hfill \\
\cos \left( {h(k_x^0,k_y^0){t_v}} \right) = 0, \hfill \\
\end{gathered} \right.
\end{equation}
what is the vorticity $\nu$ for this point? The answer is (see Appendix~\ref{App:South-pole} for details)
\begin{equation}\label{Eq:vorticity}
\nu = {\text{sign}}\left[ {\frac{{\partial \left( { h,{{\hat h}_z}} \right)}}{{\partial \left( {{k_x},{k_y}} \right)}}{|_{(k_x^0,k_y^0)}}} \right].
\end{equation}
Thus we see that, as long as the Jacobian ${\frac{{\partial \left( { h,{{\hat h}_z}} \right)}}{{\partial \left( {{k_x},{k_y}} \right)}}{|_{(k_x^0,k_y^0)}}}$ is nonvanishing, there is always a vortex at $(k_x^0,k_y^0)$ at time $t_v$. We can also show (see Appendix~\ref{App:South-pole}) that the vortex dynamics features a time periodicity ${t_{v,n}} = \left( {1 + 2n} \right){t_{v,0}}$, where ${t_{v,0}} = \frac{1}{{h(k_x^0,k_y^0)}}\frac{\pi }{2}$ is the first time that the dynamical vortex emerges at the point $(k_x^0,k_y^0)$.
We have also checked that, at the particular points where the vortices emerge or annihilate, the corresponding Jacobians do vanish. For instance, we consider the system shown in Fig.~\ref{Fig2} [i.e., in Fig.~\ref{Fig1}(e)]. At the point $(k_x^0 \simeq - 0.97,k_y^0 = 0)$, we have $h_z(k_x^0 \simeq - 0.97,k_y^0 = 0)=0$ and $\frac{{\partial h}}{{\partial {k_y}}}{|_{(k_x^0,k_y^0)}} = 0 = \frac{{\partial {{\hat h}_z}}}{{\partial {k_y}}}{|_{(k_x^0,k_y^0)}}$. This is also the point at which the vortices are created, at the time ${t_c} = \frac{1}{{h(k_x^0\simeq - 0.97,k_y^0=0)}}\frac{\pi }{2} \simeq 0.67\tau$. Similarly, at the point $(k_x^0 \simeq 0.15,k_y^0 = 2\pi /3)$ (located at the edge of the Brillouin zone), Eq.~(\ref{Eq:condition_for_south_pole}) is also satisfied with zero vorticity at the vortex annihilation time ${t_a} \simeq 1.60\tau$. Between the times $t_c$ and $t_a$, there are three pairs of dynamical vortices, as shown in Fig.~\ref{Fig2}(c).
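These numbers can be reproduced with a short numerical check (a sketch assuming the NN components $\bm{\delta}_1=(0,1)$, $\bm{\delta}_{2,3}=(\pm\sqrt{3}/2,-1/2)$): locate the $h_z=0$ point on the $k_y=0$ axis by bisection and evaluate $t_c=\pi/(2h)$ there.

```python
import numpy as np

# Fig. 2 parameters (J0 = 1, so tau = 1/J0) and assumed NN components
J0, J1, Mf, phi = 1.0, 0.1, 0.1, np.pi/2
d = np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
a = np.array([d[1] - d[2], d[2] - d[0], d[0] - d[1]])

def hz(kx, ky=0.0):
    return Mf - 2*J1*np.sin(phi)*np.sin(a @ np.array([kx, ky])).sum()

# bisection for the h_z = 0 point on the k_y = 0 axis near k_x = -1
lo, hi = -1.2, -0.8                    # hz(lo) < 0 < hz(hi)
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if hz(mid) < 0 else (lo, mid)
kx0 = 0.5*(lo + hi)

# |h| there (h_z vanishes at this point) and the first south-pole time
k = np.array([kx0, 0.0])
hx = -J0*np.cos(d @ k).sum()
hy = -J0*np.sin(d @ k).sum()
tc = np.pi/(2*np.hypot(hx, hy))
print(kx0, tc)                          # kx0 ~ -0.97, tc ~ 0.67 tau
```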
It is also interesting to view the vortex trajectories in the $(k_x,k_y,t)$ space [Fig.~\ref{Fig2}(d)], which are related to the Hopf mapping and linking numbers~\cite{Pengfei2016}: the trajectories of the dynamical vortices always wind around the Dirac points where the static vortices are located.
The arguments on the vortex creation and annihilation times for this example can be extended to the whole phase diagram of the system. For instance, fixing $\phi = \pi/2$ and $J_1 = 0.1J_0$ and varying the sublattice imbalance $M_f$, we obtain the vortex creation time $t_c$ and annihilation time $t_a$ as functions of $M_f/J_1$, as shown in Fig.~\ref{Fig3}(a). The primary information in Fig.~\ref{Fig3}(a) is that the dynamical vortices only appear when the evolution Hamiltonian is topologically nontrivial, as $h_z(\mathbf{k})$ does not possess nodal lines otherwise.
It is seen that $t_c$ (and hence $t_a$) diverges as the parameters approach the phase boundary of the static phase diagram [$M/\left( {3\sqrt 3 {J_1}\sin \phi } \right) = \pm 1$]. This statement also holds for general $\phi$, and we thus obtain the {\lq\lq}dynamical phase diagram{\rq\rq} (defined as the parameter region where dynamical vortices can appear~\cite{Sengstock2016}) shown in Fig.~\ref{Fig3}(b). This dynamical phase diagram coincides exactly with the phase diagram of the equilibrium Hamiltonian [Fig.~\ref{Fig1}(c)].
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3}\\
\caption{(a) The vortex creation (annihilation) time $t_c$ ($t_a$) as a function of $M_f$, for $\phi = \pi/2$, $M_i \to \infty$, and $J_1 = 0.1J_0$. They diverge at $M_f / (3\sqrt{3}J_1)=\pm1$, which indicates that for $|M_f / (3\sqrt{3}J_1)|\ge1$, no dynamical vortex can be created. (b) The dynamical phase diagram, with the shaded regions hosting dynamical vortices for the $M_i \to \infty$ case. }\label{Fig3}
\end{figure}
\section{Finite initial offset $M_i$: enlarged dynamical phase diagram}\label{Sec:Finite_Mi}
We now briefly discuss the case of preparing the initial state with a Hamiltonian with a finite (positive) $M_i$ in the topologically trivial phase. In the Bloch sphere representation, the initial Bloch vectors now point into a finite region lying in the northern hemisphere. We can use ${\Theta _{\mathbf{k}}}$ and ${\Phi_{\mathbf{k}}}$ to parameterize the Hamiltonian vector ${\mathbf{\hat h}}\left( {\mathbf{k}} \right)$ as
\begin{equation}
{\mathbf{\hat h}}\left( {\mathbf{k}} \right) = (\sin {\Theta _{\mathbf{k}}}\cos {\Phi _{\mathbf{k}}},\sin {\Theta _{\mathbf{k}}}\sin {\Phi _{\mathbf{k}}},\cos {\Theta _{\mathbf{k}}}).
\end{equation}
Then it can be shown (see Appendix~\ref{App:Finite_Mi}) that the above discussion of vortex dynamics with $M_i\to\infty$ in Sec.~\ref{Sec:3} and Appendix~\ref{App:South-pole} is still valid as long as we replace the vortex trajectory ${h_z}\left( {\mathbf{k}} \right) = 0$ by
\begin{equation}\label{Eq:hz_tilde}
{\hat{\tilde {h}}_z}\left( {\mathbf{k}} \right) = \cos \left( {\Theta _{\mathbf{k}}^f - \frac{{\Theta _{\mathbf{k}}^i}}{2}} \right) = 0.
\end{equation}
This recovers the previous results for $M_i\to\infty$, as ${\Theta _{\mathbf{k}}^i}=0$ and ${\hat{\tilde h}_z}\to{{\hat h}_z}$ in this case.
\begin{figure}
\includegraphics[width=\columnwidth]{Fig4}\\
\caption{The vortex dynamics with a finite initial energy offset $M_i$. (a) Enlarged dynamical phase diagram for the $M_i=3J_0$ case. Apart from the colored parameter region shown in Fig.~\ref{Fig3}(b), there exists a new gray region which is bounded by a $\phi$-independent critical final energy offset $M_f^c$. For the $M_f^c<M_f<0$ case, a second type of dynamical vortices emerges. (b) The critical final energy offset $M_f^c$ for different initial offset $M_i$. (c, d) The vortex trajectories in the $(k_x,k_y,t)$ space with parameters marked by the red crosses in (a). They give rise to linking numbers $1$ and $0$ respectively. }\label{Fig4}
\end{figure}
For this finite-$M_i$ case, the vortex dynamics is summarized in Fig.~\ref{Fig4}. The dynamical phase diagram is enlarged by an additional parameter region bounded by a $\phi$-independent critical $M_f^c$ [Fig.~\ref{Fig4}(a)]. Two types of vortex trajectories are possible, as shown in Figs.~\ref{Fig4}(c) and (d), with different linking numbers. The trajectory of the dynamical vortices shown in Fig.~\ref{Fig4}(d) shrinks to a point at the $\Gamma$ point at the critical $M_f^c$, and the relation between $M_i$ and $M_f^c$ is given by
\begin{equation}
\Theta _{{\mathbf{k}} = \Gamma }^f - \frac{{\Theta _{{\mathbf{k}} = \Gamma }^i}}{2} = \frac{\pi }{2},
\end{equation}
or, written explicitly,
\begin{equation}
M_f^c = - 3J_0\tan \left[ {\frac{1}{2}\arccos \left( {\frac{{{M_i}}}{{\sqrt {9J_0^2 + M_i^2} }}} \right)} \right],
\end{equation}
as shown in Fig.~\ref{Fig4}(b). We see in Fig.~\ref{Fig4}(b) that the critical value $M_f^c$ can be larger than $-3\sqrt{3}J_1$, as indicated by the dashed line, and that it tends to zero for an extremely large initial offset $M_i\to\infty$, recovering the situation in Fig.~\ref{Fig3}(b). Indeed, the parameter regime $ M_f^c\le M_f \le 0$ simply indicates that the dynamical vortices around the $\Gamma$ point can emerge for geometric reasons. This type of dynamical vortex can also coexist with the ones winding around static vortices in the topological phase of the equilibrium Hamiltonian, as the former always gives a null contribution to the linking numbers.
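The critical offset can also be obtained directly from the $\Gamma$-point condition, without the closed form: with the conventions assumed here, $h_x(\Gamma)=-3J_0$ and $h_y(\Gamma)=h_z(\Gamma)-M=0$, so $\cos\Theta_\Gamma = M/\sqrt{9J_0^2+M^2}$, and the shrinking condition $\Theta^f_{\Gamma} - \Theta^i_{\Gamma}/2 = \pi/2$ can be solved for $M_f^c$ by bisection (a sketch, under those assumptions):

```python
import numpy as np

J0 = 1.0

def theta_gamma(M):
    # polar angle of h_hat at the Gamma point: cos(Theta) = M/sqrt(9 J0^2 + M^2)
    return np.arccos(M / np.hypot(3*J0, M))

def mf_critical(Mi):
    """Solve Theta^f_Gamma - Theta^i_Gamma/2 = pi/2 for M_f by bisection."""
    target = np.pi/2 + theta_gamma(Mi)/2
    lo, hi = -50.0*J0, 0.0             # theta_gamma decreases with M
    for _ in range(80):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if theta_gamma(mid) > target else (lo, mid)
    return 0.5*(lo + hi)

print(mf_critical(3.0*J0))             # finite for M_i = 3 J0
print(mf_critical(1e6*J0))             # -> 0 as M_i -> infinity
```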
\section{Conclusion}\label{Sec:Conclusion}
Inspired by the recent Hamburg group experiment on the creation of dynamical vortices in a quenched Haldane-like model on a driven honeycomb lattice~\cite{Sengstock2016}, we have studied the dynamics of the phase vortices of the quenched Haldane model in detail. We have shown that two types of vortices can appear in the azimuthal phase profile: the static vortices correspond to Bloch states pointing to the north pole, while the dynamical vortices correspond to Bloch states pointing to the south pole. The trajectories of the dynamical vortices also come in two types: one winds around a Dirac point ($K$ or $K'$); the other winds around the $\Gamma$ point, which gives rise to the enlarged dynamical phase diagram. The former gives linking number one, while the latter gives linking number zero. As the linking number exactly equals the Chern number, as proved in Ref.~\cite{Pengfei2016}, the trajectories of the dynamical vortices provide an alternative method to determine the topological phase diagram of the Haldane model. The discussion in this paper can also be applied to the topological square-lattice model realized recently by the USTC group~\cite{USTC_SOC2016}.
\\
\begin{acknowledgments}
The author would like to thank Xin Chen, Ce Wang, Hui Zhai, and Pengfei Zhang for valuable discussions.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
Molecular gas ($\rm H_2$) %
is one of the fundamental physical quantities
of galaxies because it is the fuel for star formation.
It is well known that the gas surface density is correlated
with the star-formation rate (SFR) surface density
(the Schmidt--Kennicutt relation; \citealt{schmidt1959,kennicutt98}).
The total gas mass is also connected with the total star-forming activity \citep[e.g.,][]{daddi10,genzel10}.
The typical SFR of star-forming galaxies at a given stellar mass
appears to increase monotonically with increasing redshift \citep[e.g.,][]{whitaker12,sobral14,tomczak16}.
More active star formation in galaxies at higher redshifts
is considered to be supported by a larger amount of gas
\citep[e.g.,][]{daddi10,genzel10,geach11,bothwell13_mnras429,tacconi13,birkin20}.
Investigating the gas contents in galaxies at high redshifts
is crucial to understand formation and evolution of
galaxies in the Universe \citep[e.g.,][]{walter16,riechers19}.
Observational studies over the past decade
revealed the gas properties
not only for dusty starburst galaxies, such as sub-millimeter bright galaxies (SMGs),
but also for ultraviolet (UV)/optical--selected star-forming galaxies
on the stellar mass--SFR relation,
the so-called ``main sequence'' of star-forming galaxies,
at $z\gtrsim1$
\citep[e.g.,][]{daddi10,genzel10,tacconi10,tacconi13,magdis17}.
The increasing number of galaxies with individual measurements of the gas mass
in a wide redshift range
makes it possible to establish the scaling relations for gas mass fraction
and gas depletion timescale ($\rm =M_{\rm gas}/{\rm SFR}$)
as a function of redshift, stellar mass, and SFR \citep[e.g.,][]{scoville17,tacconi18,freundlich19,liu19_II}.
At $z>3$, however,
the number of UV/optical-selected star-forming galaxies
with the individual measurements of gas content
is still small
(with CO emission lines: \citealt{magdis17,cassata20},
and with dust continuum: \citealt{schinnerer16,wiklind19,aravena20}).
How the gas properties of UV/optical-selected galaxies evolve at $z>3$
is not yet conclusively established.
The atomic and/or molecular hydrogen gas mass
is also said to be correlated with the gas-phase metallicity
from both observations \citep[e.g.,][]{bothwell13,hunt15,bothwell16_mnras,seko16_alma,brown18}
and cosmological numerical simulations \citep[e.g.,][]{lagos16,torrey19}.
It has been suggested that the gas mass is more fundamental than the SFR
to explain the scatter of the mass--metallicity relation
of star-forming galaxies \citep[e.g.,][]{bothwell13,zahid_apj791,brown18}.
Indeed,
more gas-rich star-forming galaxies tend to be more metal-poor
and more actively forming stars.
\added{At high redshifts,
a direct comparison between gas mass
and gas-phase metallicity is
limited to a handful of galaxies at $z\sim$\ 1--3 \citep{saintonge13,seko16_alma,shapley20}.
\citet{seko16_alma} found a trend that the gas mass fraction decreases
with increasing metallicities at a fixed stellar mass
for star-forming galaxies at $z\sim1.4$.
Such a direct comparison between the two quantities
has not been done at $z>3$.}
Galaxies evolve by interacting with the intergalactic medium (IGM).
Gas accretes onto galaxies from the outside,
chemical enrichment proceeds as stars form,
and gas and metals are ejected from galaxies by outflow \citep[e.g.,][]{bouche10,dave11_1,lilly13,pengmaiolino14,tacchella20}.
Gas mass fraction and gas-phase metallicity are often used to
investigate the relative contributions between
star formation, gas outflow, and inflow \citep{erb08_apj674,cresci10,troncoso14,yabe15_apj,seko16,sanders20}.
\added{Most of these studies estimated gas mass fractions
by converting the SFR surface density to gas surface density
with the Schmidt-Kennicutt relation \citep{erb08_apj674,cresci10,troncoso14,yabe15_apj,sanders20}.}
Given that galaxies are more actively forming stars at higher redshifts,
they are expected to be more actively interacting with the surrounding IGM
via outflows and inflows \citep[e.g.,][]{yabe15_apj}.
At $z>3$,
it has been suggested that star-forming galaxies are no longer
in equilibrium \citep[e.g.,][]{mannucci10},
where the gas consumption due to star formation and outflows
is balanced by the gas acquisition through inflows (inflow $=$ star formation $+$ outflow),
owing to the intense gas inflows onto galaxies in the early Universe.
Estimating both the gas mass and gas-phase metallicity
for star-forming galaxies at $z>3$
allows tests of whether
galaxies at $z>3$ are out of equilibrium or not.
Several methods are commonly used to estimate gas masses.
The first one is using CO emission line fluxes \citep[e.g.,][]{daddi10,genzel10,tacconi10,tacconi13}.
This method has uncertainties on
the CO-to-$\rm H_{2}$ conversion factor, which changes depending on metallicity \citep{genzel12}
and on the CO excitation states when using higher-$J$ CO lines \citep[e.g.,][]{daddi15,riechers20}.
Furthermore, observations of CO lines for galaxies at high redshifts
are expensive.
The second one is converting
a dust mass to a gas mass with an assumed gas-to-dust mass ratio \citep[e.g.,][]{santini14,bethermin15}.
Because the gas-to-dust mass ratio depends on the metallicity \citep{leroy11,remy-ruyer14},
metallicity measurements are crucial to estimate the gas mass accurately.
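As a minimal illustration of this second method (assuming, for simplicity, a gas-to-dust ratio that scales inversely with metallicity and is normalized to $\delta_{\rm GDR}\approx100$ at the solar value $12+\log({\rm O/H})=8.69$; real calibrations such as that of \citet{remy-ruyer14} are steeper at low metallicity):

```python
import numpy as np

def gas_mass_from_dust(M_dust, logOH, gdr_sun=100.0, logOH_sun=8.69):
    """Gas mass from dust mass, assuming an inverse-linear scaling of the
    gas-to-dust mass ratio with metallicity (illustrative normalization)."""
    delta_gdr = gdr_sun * 10**(logOH_sun - logOH)
    return delta_gdr * M_dust

# example: a dust mass of 1e8 Msun at half-solar metallicity
M_gas = gas_mass_from_dust(1e8, 8.69 - np.log10(2.0))   # -> 2e10 Msun
t_depl = M_gas / 100.0    # depletion time in yr for SFR = 100 Msun/yr
```

The example makes explicit why the metallicity enters directly: halving the metallicity doubles the inferred gas mass and depletion timescale at fixed dust mass.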
Gas masses can also be estimated with an empirically calibrated relation between
a single-band dust continuum flux at the Rayleigh-Jeans (R-J) tail
and gas mass \citep[e.g.,][]{scoville14,scoville16,groves15}.
These scaling relations are calibrated with local galaxies
or with local galaxies and SMGs up to $z\sim2$.
In this method, the gas-to-dust mass ratio is included in the conversion factor,
and thus, is not needed to be considered.
It remains unclear whether the empirical calibration methods are applicable
to galaxies at $z>3$
or how much scatter there is in the relationships.
Given that dust continuum observations require much less time than CO observations,
using the dust continuum as a gas tracer makes it possible to build larger samples of galaxies
at higher redshifts with individual gas estimates,
but these estimates will only be reliable when precise metallicities are also available.
\added{Metallicity measurements based on rest-frame optical emission lines
for dustier star-forming galaxies
are thought to have larger uncertainties
due to strong dust obscuration \citep[e.g.,][]{santini10}.
\citet{herrera-camus18} reported a discrepancy between metallicities
derived with rest-frame optical emission lines and far-infrared (FIR)
fine-structure lines for local (Ultra) Luminous Infrared Galaxies ((U)LIRGs).}
In this paper,
we present the results from sub-millimeter observations
with the Atacama Large Millimeter/sub-millimeter Array (ALMA)
of star-forming galaxies at $z=$\ 3--4.
High quality near-infrared (NIR) spectra obtained with Keck/MOSFIRE \citep{mclean10,mclean12}
are available for all of the targets
and their gas-phase metallicities are already measured \citep{onodera16,suzuki17}.
By observing the dust continuum emission,
we can estimate their dust masses and convert them to gas masses
using the relation between the metallicity and gas-to-dust mass ratio.
We then investigate the gas properties,
namely, gas mass fractions and gas depletion timescales,
of star-forming galaxies at $z=$\ 3--4.
Comparing the gas contents with the gas-phase metallicities,
we aim to understand how star-forming galaxies at this epoch interact
with their surrounding IGM via gas inflows and outflows.
This paper is organized as follows.
In Section~\ref{sec:obs}, we introduce our parent sample
of star-forming galaxies at $z=$\ 3--4 and describe the observations conducted with ALMA.
We also describe the reduction and analysis for the obtained data
and stacking analysis.
In Section~\ref{sec:physicalquantity},
we present our estimates of the physical quantities, such as
gas-phase metallicity, ionization parameter, and gas mass.
In Section~\ref{sec:results},
we show our results on the dust and gas properties
of the star-forming galaxies at $z=$\ 3--4 and discuss their metallicity dependencies.
We also compare our observational results with a gas regulator model
to discuss how star-forming galaxies at this epoch interact with their surrounding IGM.
We summarize this paper in Section~\ref{sec:summary}.
Throughout this paper,
we assume the cosmological parameters of $\rm \Omega_m = 0.3$, $\rm \Omega_{\Lambda} = 0.7$, and $H_{\rm 0} = 70\ {\rm km\ s^{-1}\ Mpc^{-1}}$.
We use a Chabrier initial mass function (IMF; \citealt{chabrier03}).
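For reference, luminosity distances in this cosmology can be reproduced with a short numerical integration; the following is an illustrative sketch (assuming only {\sc numpy}), not part of the analysis pipeline:

```python
import numpy as np

# Flat LCDM parameters assumed throughout the paper.
C_KM_S = 2.99792458e5   # speed of light [km/s]
H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7

def luminosity_distance_mpc(z, n_steps=100000):
    """Luminosity distance D_L = (1+z) * D_C [Mpc] for a flat universe.

    The comoving distance D_C = (c/H0) * int_0^z dz'/E(z') is evaluated
    with a midpoint rule, with E(z) = sqrt(Om*(1+z)^3 + OL).
    """
    dz = z / n_steps
    z_mid = (np.arange(n_steps) + 0.5) * dz
    e_z = np.sqrt(OMEGA_M * (1.0 + z_mid)**3 + OMEGA_L)
    d_c = (C_KM_S / H0) * np.sum(dz / e_z)
    return (1.0 + z) * d_c
```
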
\section{Observation and reduction} \label{sec:obs}
\subsection{Spectroscopically confirmed galaxies at $z=$3--4}
Our parent sample is constructed
from the two different studies
of star-forming galaxies at $3 < z < 4$ in the COSMOS field
using NIR spectroscopy with Keck/MOSFIRE.
One study is based on
a spectroscopic and photometric redshift selection
\citep[Section~\ref{subsec:o16}]{onodera16},
while the other study is based on narrow-band selection
\citep[Section~\ref{subsec:s17}]{suzuki17}.
The combined parent sample from the two studies consists of 53 galaxies at $z_{\rm spec}\sim$\ 3.0--3.8.
\subsubsection{UV-selected galaxies}\label{subsec:o16}
In \citet[][hereafter O16]{onodera16},
targets for spectroscopic observation were originally selected from
the zCOSMOS-Deep redshift catalog \citep{lilly07}
and the 30-band COSMOS photometric redshift catalog \citep{mccracken12,ilbert13}.
\citetalias{onodera16} conducted {\it H}- and {\it K}-band spectroscopy and
confirmed 43 galaxies at $z_{\rm spec}=$\ 3.0--3.8 based on the rest-frame optical emission lines.
The confirmed star-forming galaxies span a stellar mass range of
$\rm log(M_*/M_\odot) \sim$\ 8.5--11.0 and are distributed around the star-forming main sequence
at $z\sim$ 3.3 \citepalias{onodera16}.
\subsubsection{[OIII] emission line galaxies}\label{subsec:s17}
In \citet[][hereafter S17]{suzuki17},
targets for spectroscopic observation were selected from
a catalog of narrow-band (NB)-selected [{\sc Oiii}]$\lambda$5007 emission line galaxies at $z=3.23$,
obtained by the High-Z Emission Line Survey
(HiZELS; \citealt{best13, sobral13, khostovan15}).
\citetalias{suzuki17} conducted {\it H}- and {\it K}-band spectroscopy
and confirmed ten [{\sc Oiii}] emitters at $z_{\rm spec}=$\ 3.23--3.28.
\added{The stellar mass range of the confirmed [{\sc Oiii}] emitters
is $\rm log(M_*/M_\odot) \sim$\ 9.1--10.2.}
The [{\sc Oiii}] emitters follow the star-forming main sequence at $z\sim3.2$
and the mass--metallicity relation established by \citetalias{onodera16} \citepalias{suzuki17}.
\subsection{ALMA Band-6 observation}\label{subsec:almaobs}
We selected galaxies with
$\rm log(M_*/M_\odot) \ge 10$ and $\ge 3\sigma$ detection of
[{\sc Oiii}]$\lambda$5007, H$\beta$, or [{\sc Oii}]$\lambda$3727 emission lines
from the parent sample
as targets for the ALMA observations.
We excluded two galaxies classified as active galactic nuclei (AGNs) in \citetalias{onodera16}.
One has an X-ray counterpart detected with {\it Chandra}.
The other shows strong [Ne{\sc iii}]$\lambda$3869 emission and a high [{\sc Oiii}]$\lambda$4363/H$\gamma$ ratio,
which are likely to be powered by an AGN \citepalias{onodera16}.
As a result,
the sample for the ALMA observations consists of
12 galaxies, two of which are [{\sc Oiii}] emitters
from \citetalias{suzuki17} (Table~\ref{table:obssummary}).
Although the potential AGNs were excluded from our ALMA targets,
we found that one of the ALMA targets, 192129,
is detected in X-rays with {\it Chandra} \citep{elvis09,civano12,civano16}
and included in the X-ray-selected AGN catalog of \citet{kalfountzou14} as a type-2 AGN.
The optical--NIR spectral energy distribution (SED) of this source is not peculiar compared to those of
the other galaxies \citepalias[][and as shown in the best-fit SEDs in Appendix~\ref{sec:appendix1}]{onodera16}
and its H$\beta$ emission line is narrow,
which is likely to be consistent with the classification by \citet{kalfountzou14}.
Although we expect that the optical--NIR emission is dominated by
emission from the host galaxy,
the physical quantities,
such as stellar mass, SFR, gas-phase metallicity,
and ionization parameter (Sections~\ref{subsec:magphys} and \ref{subsec:ism}),
may be affected by the emission from the AGN.
On the other hand,
the dust continuum observed at ALMA Band-6 ($\lambda_{\rm rest}\sim280\ \mu {\rm m}$)
is expected to be dominated by cold dust emission from star-forming regions ($T\sim$ 20--40~K).
We do not exclude this source in the following analyses
but distinguish it from the other sources on each figure.
Our ALMA Cycle~6 program with Band-6 was
conducted during December 2018 -- March 2019
(2018.1.00681.S, PI: T. Suzuki).
The frequencies of the four spectral windows differ slightly among the targets
depending on their spectroscopic redshifts (between $221.9$ GHz and $254.4$ GHz).
We set the frequencies of the spectral windows so that
we can cover the CO(9--8) line ($\nu_{\rm rest} = 1036.9$ GHz) with one of the four spectral windows.
The effective bandwidth of each spectral window is $\sim$1.875~GHz.
The data were taken with the Time Domain Mode (TDM).
The total on-source time is $\sim$5--90~min
depending on the stellar mass, SFR, and gas-phase metallicity of the targets as summarized in Table~\ref{table:obssummary}.
The brightest source at 1.3~mm, 208681,
appears to be detected with the CO(9--8) line.
\added{Given that quasar host galaxies tend to
have more extreme CO excitation states than
normal star-forming galaxies \citep{carilliwalter13},
the CO(9--8) line detection
may suggest that this source hosts an AGN.}
We will discuss the CO(9--8) line of this source in a forthcoming paper
(Suzuki et al. 2020 in preparation).
\added{We note that}
the contribution of the CO(9--8) line to the dust continuum flux is negligible.
We used the Common Astronomy Software Application package ({\sc casa}; \citealt{CASA})
to calibrate the raw data.
We ran the {\sc clean} algorithm with natural weighting.
When sources were detected at the $\ge 5\sigma$ level,
we ran {\sc clean} again, masking those sources.
\added{Because the synthesized beam sizes are slightly different among the targets,
we created $uv$-tapered maps for
some of the sources so that the flux measurements are conducted
with similar beam sizes.
The average beam size of the 12 ALMA maps is $1.''52 \times 1.''32$.}
\added{
We used {\sc imfit} to fit a 2D Gaussian to each target.
The central position is fixed at the centroid determined in the {\it Ks}-band image
from UltraVISTA\footnote{\url{https://irsa.ipac.caltech.edu/data/COSMOS/index_cutouts.html}}.
Our detection criterion is that the peak flux obtained by {\sc imfit} has
$>3\sigma$ significance.
As a result, six out of 12 galaxies satisfy this criterion as summarized in Table~\ref{table:obssummary}.
We ran {\sc imfit} several times with different parameter settings to check the robustness
of the obtained fluxes.
We confirmed that
the fitting results for the six detected sources are not affected by the parameter settings.
In the following sections, we use peak fluxes measured with {\sc imfit}
as the total fluxes of the detected sources.
The obtained peak fluxes broadly agree with
the aperture fluxes ($r=1.5$~arcsec) measured at the position of
the {\it Ks}-band centroid within the uncertainties,
which suggests that our targets are not spatially resolved
in the ALMA maps.
As for the non-detected sources,
we assigned $3\sigma$ upper limit fluxes.
The measured fluxes and limits are summarized in Table~\ref{table:obssummary}.
The listed fluxes are corrected for the primary beam.
Because the ALMA targets are located at the center of each ALMA map,
the primary beam correction is less than 1~\% for all of the targets.}
Figure~\ref{fig:image} shows the ALMA maps of our target galaxies
together with the {\it Ks}-band centroids.
\added{Some of the ALMA-detected sources, such as 406444 and 434585, show a clear spatial offset
(up to $\sim0.5$~arcsec)
between the dust continuum emission and the {\it Ks}-band centroid.
Their relatively large positional offsets are probably due to
the lower signal-to-noise ratios of their dust continuum emission.
Indeed, among the ALMA-detected sources,
we found a trend that the positional offset between the dust emission peak
and the {\it Ks}-band centroid becomes larger with
decreasing signal-to-noise ratio of the dust continuum emission.}
\begin{table*}[]
\caption{Summary of the targets of this study and ALMA Band-6 observations.}
\begin{center}
\begin{tabular}{ccccccccc}\hline
ID$^{\rm a}$ & R.A.$^{\rm a}$ & DEC.$^{\rm a}$ & $z_{\rm spec}$ & Exp. time$^{\rm b}$ & Central freq.$^{\rm c}$ & RMS level$^{\rm d}$ & $S_{\rm 1.3mm}$ $^{\rm e}$ & Reference \\
& [deg] & [deg] & & [min] & [GHz] & [mJy $\rm beam^{-1}$] & [mJy] & \\ \hline
208681 & 149.90551 & 2.353990 & 3.267 & 5 & 232.1 & 0.05 & $1.02\pm0.05$ & \citetalias{onodera16} \\
214339 & 150.31607 & 2.372240 & 3.609 & 5 & 233.0 & 0.05 & $0.30 \pm 0.05$ & $''$ \\
406444 & 150.33032 & 2.072270 & 3.304 & 5 & 232.1 & 0.04 & $0.12\pm0.04$ & $''$ \\
3 & 149.97513 & 1.69375 & 3.230 & 41 & 235.6 & 0.01 & $0.11\pm0.01$ & \citetalias{suzuki17} \\
434585 & 149.84702 & 2.373020 & 3.363 & 11 & 230.9 & 0.03 & $0.11 \pm 0.03$ & \citetalias{onodera16} \\
$192129^{\rm f}$ & 150.30078 & 2.300540 & 3.495 & 49 & 237.3 & 0.01 & $0.05 \pm 0.01$ & $''$ \\
217753 & 149.89451 & 2.383700 & 3.254 & 5 & 232.1 & 0.05 & $< 0.15$ & $''$ \\
218783 & 149.92082 & 2.387060 & 3.297 & 5 & 232.1 & 0.04 & $< 0.11$ & $''$ \\
212298 & 150.34268 & 2.365390 & 3.108 & 5 & 246.2 & 0.04 & $< 0.11$ & $''$ \\
413391 & 149.78424 & 2.452890 & 3.365 & 11 & 230.9 & 0.03 & $< 0.10$ & $''$ \\
5 & 149.95568 & 1.68044 & 3.241 & 53 & 235.6 & 0.01 & $< 0.04$ & \citetalias{suzuki17} \\
434618 & 149.89213 & 2.414710 & 3.285 & 88 & 233.5 & 0.01 & $< 0.03$ & \citetalias{onodera16} \\ \hline
\end{tabular}
\end{center}
\tablecomments{
$^{\rm a}$Object IDs and coordinates are extracted from the original papers.
$^{\rm b}$\added{On-source observing time} for Band-6 observations.
$^{\rm c}$Central frequency of the four spectral windows.
$^{\rm d}$Measured in the tapered maps.
$^{\rm e}$Measured in the tapered maps with {\sc imfit} \added{and corrected for the primary beam}.
3$\sigma$ upper limits are shown for the ALMA non-detected sources.
$^{\rm f}$X-ray detected source (Section~\ref{subsec:almaobs}).
}
\label{table:obssummary}
\end{table*}
\begin{figure*}[t]
\centering\includegraphics[width=1.0\textwidth]{figure1.pdf}
\caption{
ALMA Band-6 images of the 12 targets before tapering (image size: $5''\times5''$).
A black circle shows the beam size of each image.
Black contours correspond to 4$\sigma$, 8$\sigma$, $12\sigma$, and 16$\sigma$.
A plus mark represents the centroid determined in the {\it Ks}-band image.
\added{Some of the}
ALMA detected sources, \added{such as} 406444 and 434585,
show a larger spatial offset between the dust continuum emission and
the {\it Ks}-band centroid than the other detected sources.
This \added{is probably} due to the lower signal-to-noise ratios of their dust continuum emission.
The stacked image of the five individually ALMA non-detected sources with
$\rm log(M_*/M_\odot)=$ 10.0--10.4 is also shown.
The central position is shown with a plus mark.
The stacked emission is detected at $5\sigma$.
}
\label{fig:image}
\end{figure*}
\subsection{Stacking analysis}\label{subsec:stack}
We stacked the Band-6 images of the ALMA non-detected sources
to investigate their average dust continuum flux.
As a result of the SED fitting in Section~\ref{subsec:magphys},
one source, 434618, turns out to have a stellar mass of only $\rm log(M_*/M_\odot)=9.39_{-0.01}^{+0.12}$.
\added{This is probably because we used a different SED fitting code
and/or a different photometric catalog, with deeper NIR and {\it Spitzer} data,
from those used in the previous estimate \citepalias{onodera16}.}
Because the stellar mass of 434618 is $\gtrsim 0.6$~dex smaller than those of the other non-detected sources,
we excluded this source
so that we can investigate the average dust and gas properties of star-forming galaxies with similar stellar masses of
$\rm log(M_{*}/M_\odot)=$\ 10.0--10.4.
We cut out the tapered $20''\times20''$ ALMA images centered on
the {\it Ks}-band position.
Then, we stacked the cutout images by weighting with the RMS levels in the tapered maps (Table~\ref{table:obssummary}).
The stacked image is shown in Figure~\ref{fig:image}.
\added{We measured the total flux of the stacked image with {\sc imfit} as done in
Section~\ref{subsec:almaobs}.
The obtained flux is $0.06\pm0.01\ {\rm mJy}$, which satisfies our detection
criterion.}
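The stacking step can be sketched as follows (an illustrative sketch assuming inverse-variance weights, $w_i = 1/{\rm RMS}_i^2$, which is one common choice of noise weighting):

```python
import numpy as np

def weighted_stack(cutouts, rms):
    """Noise-weighted mean stack of same-sized cutout images.

    cutouts : (N, ny, nx) array of cutouts centered on the Ks-band positions.
    rms     : length-N array of per-map RMS noise levels.
    """
    cutouts = np.asarray(cutouts, dtype=float)
    w = 1.0 / np.asarray(rms, dtype=float)**2   # inverse-variance weights
    # Contract the weight vector with the image stack, then normalize.
    return np.tensordot(w, cutouts, axes=1) / w.sum()
```
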
\subsection{SED fitting}\label{subsec:magphys}
We conducted SED fitting
including the dust continuum flux or limit at $1.3$~mm obtained with ALMA.
We used the SED fitting code {\sc magphys} \citep{dacunha08,dacunha15,battisti19},
which can consistently fit SEDs from optical to radio
wavelengths.
{\sc magphys} combines the emission by stellar populations
with the attenuation and emission by dust
in galaxies based on the energy balance technique.
We used the updated version of {\sc magphys} for galaxies at high redshifts \citep{dacunha15}.
{\sc magphys} adopts stellar population synthesis models of \citet{bc03}
with the \citet{chabrier03} IMF.
The metallicity range is set to be from 0.2 to 2 times solar.
The star formation history (SFH) is parameterized as a continuous delayed exponential function,
i.e., the star formation rate rises at early epochs and
then declines exponentially with an inverse timescale between 0.075 and 1.5 $\rm Gyr^{-1}$.
The age is randomly drawn between 0.1 and 10 Gyr.
{\sc magphys} also includes bursts of star formation with random duration
and amplitude to account for the stochasticity of the star formation history.
The current SFR is determined by averaging the SFH over the last
\added{100~Myr}.
As for the dust attenuation,
{\sc magphys} uses the two-component model of \citet{charlotfall00}.
A number of tests of the application of {\sc magphys}
to dust-obscured galaxies,
including simulated galaxies from EAGLE,
are discussed in \citet{dudzeviciute20}.
{\sc magphys} takes into account four main dust components,
namely, polycyclic aromatic hydrocarbons (PAHs), hot dust emitting in the mid-infrared,
and warm and cold dust components in thermal equilibrium.
The warm and cold dust components in thermal equilibrium emit
as modified black bodies with an emissivity index $\beta=1.5$ for the warm component
and $\beta=2$ for the cold component \citep{dacunha15}.
{\sc magphys} assumes a dust mass absorption coefficient at $850\ \mu$m of $\kappa_{\rm abs}=0.77\ {\rm cm^2\ g^{-1}}$.
We combined the flux densities at $1.3$~mm from ALMA
with the optical--NIR broad-band photometry ($u,B, V, r, i_p, z_{pp}, Y, J, H, Ks$, 3.6, 4.5, 5.8, and 8.0 $\mu$m)
from the COSMOS2015 catalog
\citep{cosmos2015}.
Because {\sc magphys} does not include emission lines from the ionized gas,
we subtracted the emission line fluxes measured with the NIR spectra
from the {\it H}-band ([{\sc Oii}]) and {\it Ks}-band ([{\sc Oiii}] doublet, H$\beta$) fluxes.
We took into account the upper limits on the optical--NIR photometry
by setting the flux to 0 and
the photometric error to the $3\sigma$ value, following \citet{dacunha15}.
As for the 1.3~mm flux of the ALMA non-detected sources,
we gave a $1.5\sigma \pm 1\sigma$ value
as done in \citet{dudzeviciute20}.
Using a $1.5\sigma \pm 1\sigma$ value
provides a better weighting of the sub-millimeter constraint
in the best-fit model returned by {\sc magphys}
than using a $3\sigma$ upper limit.
\added{The derived physical parameters, such as stellar masses
and SFRs, do not significantly change
depending on the adopted flux and error values \citep{dudzeviciute20}.}
We also conducted SED fitting for the stacked sample with the obtained $1.3$~mm flux
in Section~\ref{subsec:stack}.
When averaging the optical--NIR photometry,
we used the same weights as in the ALMA image stacking (Section~\ref{subsec:stack}).
The best-fit SEDs of the individual galaxies and the stacked sample are shown in Appendix~\ref{sec:appendix1}.
We use the median values of the probability distribution function (PDF)
for stellar mass, SFR, and dust mass in the following analyses.
These physical quantities are summarized in Table~\ref{tab:basicquantity}.
The uncertainties correspond to the 16--84th percentile values of the PDF.
As for the dust masses of the ALMA non-detected sources,
we use the 97.5th percentile values of the PDF as the upper limits.
In order to evaluate whether the upper limits on the dust masses
are reasonable,
we estimated dust mass upper limits with a different method.
We calculated a ratio between the dust mass and the luminosity density
at $997.6$~GHz in the rest-frame,
${\rm M_{dust}}/L_{\rm 997.6~GHz}$, for each detected source.
We then converted the $3\sigma$ upper limits of 1.3~mm fluxes to the dust mass upper limits
with the median ${\rm M_{dust}}/L_{\rm 997.6~GHz}$ ratio.
The difference in rest-frame frequency among the sources is corrected for by assuming $L_{\nu} \propto \nu^{3.7}$.
The estimated dust mass upper limits are similar to
the 97.5th percentile values of the PDF from {\sc magphys}.
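This cross-check amounts to scaling each $3\sigma$ flux limit to the common rest frequency and applying the median mass-to-light ratio of the detected sources; an illustrative sketch (the function names and the unit convention for the ${\rm M_{dust}}/L$ ratio are ours):

```python
import numpy as np

MPC_CM = 3.0857e24      # cm per Mpc
JY_CGS = 1.0e-23        # erg s^-1 cm^-2 Hz^-1 per Jy
NU_REF_GHZ = 997.6      # common rest-frame reference frequency [GHz]

def lnu_rest(flux_mjy, z, dl_mpc, nu_rest_ghz):
    """Rest-frame luminosity density [erg s^-1 Hz^-1] from an observed
    flux density, scaled to 997.6 GHz assuming L_nu ~ nu^3.7 on the
    Rayleigh-Jeans tail."""
    s_cgs = flux_mjy * 1e-3 * JY_CGS
    lnu = 4.0 * np.pi * (dl_mpc * MPC_CM)**2 * s_cgs / (1.0 + z)
    return lnu * (NU_REF_GHZ / nu_rest_ghz)**3.7

def mdust_limit(flux_3sig_mjy, z, dl_mpc, nu_rest_ghz, median_ratio):
    """Dust-mass upper limit from the median M_dust / L_997.6GHz ratio
    of the detected sources (median_ratio in Msun per erg s^-1 Hz^-1)."""
    return median_ratio * lnu_rest(flux_3sig_mjy, z, dl_mpc, nu_rest_ghz)
```
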
One of the uncertainties on the dust mass is the assumed dust mass absorption coefficient,
$\kappa_{\rm abs}$.
Dust masses obtained with {\sc magphys}
are reported to be lower by a factor of two compared to
those estimated with the \citet{draineli07} models,
which assume a smaller dust mass absorption coefficient of $\kappa_{\rm abs}=0.38\ {\rm cm^2\ g^{-1}}$ \citep{hunt19}.
\begin{table*}[]
\caption{Summary of the physical quantities of the star-forming galaxies at $z\sim3.3$ and the stacked sample.}
\begin{center}
\begin{tabular}{cccccccc}\hline
ID & $\rm log(M_*/M_\odot)^{a}$ & $\rm log(SFR)^{a}$ & $\rm 12+log(O/H)^{\rm b}$ & log($q$) & $\delta_{\rm GDR}$ & $\rm log(M_{\rm dust}/M_\odot)^{a,c}$ & $\rm log(M_{gas}/M_\odot)$ \\
& & [$\rm M_\odot\ yr^{-1}$] & & [$\rm cm\ s^{-1}$] & & & \\ \hline
208681 & $10.82_{-0.03}^{+0.00}$ & $2.10_{-0.01}^{+0.08}$ & $8.59_{-0.05}^{+0.04}$ & $7.64\pm0.03$ & $126_{-14}^{+13}$ & $8.89_{-0.08}^{+0.11}$ & $11.29_{-0.09}^{+0.11}$ \\
214339 & $10.48_{-0.05}^{+0.04}$ & $1.76\pm0.13$ & $8.30_{-0.12}^{+0.09}$ & $7.82_{-0.06}^{+0.05}$ & $264_{-70}^{+54}$ & $8.22_{-0.16}^{+0.17}$ & $10.94_{-0.20}^{+0.19}$ \\
406444 & $10.87_{-0.01}^{+0.08}$ & $2.31_{-0.11}^{+0.08}$ & $8.39_{-0.09}^{+0.08}$ & $7.77_{-0.09}^{+0.08}$ & $211_{-49}^{+40}$ & $7.64_{-0.13}^{+0.12}$ & $10.26_{-0.17}^{+0.15}$ \\
3 & $10.43_{-0.08}^{+0.06}$ & $1.73_{-0.15}^{+0.18}$ & $8.40_{-0.06}^{+0.05}$ & $7.69\pm0.06$ & $205_{-29}^{+26}$ & $7.71_{-0.13}^{+0.15}$ & $10.32_{-0.14}^{+0.16}$ \\
434585 & $10.13_{-0.12}^{+0.04}$ & $1.73_{-0.01}^{+0.17}$ & $8.47_{-0.14}^{+0.10}$ & $7.69\pm0.25$ & $172_{-64}^{+46}$ & $7.67_{-0.22}^{+0.20}$ & $10.21_{-0.28}^{+0.23}$ \\
192129 & $10.45_{-0.00}^{+0.12}$ & $1.35_{-0.08}^{+0.00}$ & $8.41_{-0.08}^{+0.07}$ & $7.83_{-0.04}^{+0.03}$ & $199_{-41}^{+33}$ & $7.40_{-0.28}^{+0.18}$ & $10.00_{-0.29}^{+0.19}$ \\
217753 & $10.39_{-0.06}^{+0.05}$ & $1.67_{-0.18}^{+0.12}$ & $8.57_{-0.05}^{+0.04}$ & $7.55\pm0.06$ & $131_{-17}^{+15}$ & $< 8.00$ & $< 10.42$ \\
218783 & $10.12_{-0.07}^{+0.08}$ & $1.70_{-0.17}^{+0.16}$ & $8.42_{-0.07}^{+0.06}$ & $7.67\pm0.03$ & $197_{-35}^{+30}$ & $< 7.84$ & $ < 10.43$ \\
212298 & $10.38_{-0.08}^{+0.14}$ & $1.74_{-0.20}^{+0.26}$ & $8.39_{-0.08}^{+0.07}$ & $7.76_{-0.04}^{+0.03}$ & $213_{-43}^{+35}$ & $< 7.84$ & $< 10.47$ \\
413391 & $10.08_{-0.01}^{+0.00}$ & $1.88\pm0.00$ & $8.33_{-0.10}^{+0.08}$ & $7.69_{-0.07}^{+0.06}$ & $246_{-56}^{+47}$ & $< 7.73$ & $< 10.42$ \\
5 & $10.14_{-0.05}^{+0.09}$ & $1.37\pm0.15$ & $8.37\pm0.05$ & $7.59\pm0.07$ & $220_{-27}^{+25}$ & $< 7.33$ & $< 9.97$ \\
434618 & $9.39_{-0.01}^{+0.12}$ & $1.18_{-0.09}^{+0.00}$ & $8.26_{-0.09}^{+0.08}$ & $7.78_{-0.04}^{+0.03}$ & $284_{-58}^{+49}$ & $< 7.28$ & $< 10.04$ \\ \hline
stack$^{\rm d}$ & $10.18_{-0.06}^{+0.05}$ & $1.52_{-0.14}^{+0.15}$ & $8.38\pm0.05$ & $7.62\pm0.06$ & $216\pm27$ & $7.33_{-0.15}^{+0.17}$ & $9.97_{-0.16}^{+0.18}$ \\ \hline
\end{tabular}
\end{center}
\tablecomments{
$^{\rm a}$Median value of the PDF obtained from {\sc magphys}.
Error bars correspond to the 16th--84th percentiles.
$^{\rm b}$Estimated using an empirical calibration method by \citet{curti16}.
$^{\rm c}$The 97.5th percentile values from {\sc magphys} are given as the upper limits.
$^{\rm d}$Stacking result for the five ALMA non-detected sources with $\rm log(M_*/M_\odot)=$\ 10.0--10.4 (Section~\ref{subsec:stack}).
}
\label{tab:basicquantity}
\end{table*}
\section{Analysis} \label{sec:physicalquantity}
\subsection{Gas-phase metallicity and ionization parameter}\label{subsec:ism}
We recalculated the gas-phase metallicities
\added{with the following four relations, which are locally calibrated
in \citet{curti16}:
\begin{eqnarray}
&{\rm log}\ {R_{\rm 2}}& = 0.418 - 0.961x - 3.505x^2 - 1.949x^3, \\
&{\rm log}\ {R_{\rm 3}}& = -0.277 - 3.549x - 3.593 x^2 - 0.981x^3, \\
&{\rm log}\ {O_{\rm 32}}& = -0.691 - 2.944x -1.308x^2, \\
&{\rm log}\ {R_{\rm 23}}& = 0.527 - 1.569x - 1.652x^2 - 0.421x^3,
\end{eqnarray}
\noindent
where $R_{\rm 2}=$ [{\sc Oii}]/H$\beta$, $R_{\rm 3}=$ [{\sc Oiii}]$\lambda$5007/H$\beta$,
$O_{\rm 32}=$ [{\sc Oiii}]$\lambda$5007/[{\sc Oii}],
$R_{\rm 23}=$ ([{\sc Oiii}]$\lambda\lambda$4959,5007 + [{\sc Oii}])/H$\beta$,
and $x$ is $\rm 12+log(O/H)$ normalized to the solar value of 8.69.}
The emission line fluxes of the sources are available in \citetalias{onodera16} and \citetalias{suzuki17}.
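Each calibration above is a low-order polynomial in $x$, so recovering the metallicity from an observed line ratio amounts to root finding; a minimal illustrative sketch (assuming {\sc numpy}; the restriction of roots to the calibrated range and the branch selection for double-valued diagnostics are simplified here):

```python
import numpy as np

# Curti et al. (2016) calibrations: log(ratio) = sum_i c_i * x^i,
# with x = 12 + log(O/H) - 8.69. Constant term listed first.
CALIBRATIONS = {
    "R2":  [0.418, -0.961, -3.505, -1.949],
    "R3":  [-0.277, -3.549, -3.593, -0.981],
    "O32": [-0.691, -2.944, -1.308],
    "R23": [0.527, -1.569, -1.652, -0.421],
}

def metallicities_from_ratio(name, log_ratio, x_range=(-1.0, 0.3)):
    """Return all 12+log(O/H) solutions of poly(x) = log_ratio within
    the calibrated range; double-valued diagnostics may return two
    branches, which must be resolved with a second line ratio."""
    c = np.array(CALIBRATIONS[name], dtype=float)
    c[0] -= log_ratio                 # solve poly(x) - log_ratio = 0
    roots = np.roots(c[::-1])         # np.roots wants highest power first
    x = roots[np.abs(roots.imag) < 1e-8].real
    x = x[(x >= x_range[0]) & (x <= x_range[1])]
    return 8.69 + np.sort(x)
```
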
We also estimated the ionization parameter, ${\rm log}(q)$, for the galaxies observed with ALMA
as done in \citetalias{onodera16}.
The ionization parameter is defined as the ratio of the ionizing photon flux
to the number density of hydrogen atoms to be ionized.
The definition of $q$ is as follows:
\begin{equation}
q = \frac{Q_{\rm H^0}}{4\pi R_s^2 n_{\rm H}},
\end{equation}
\noindent
where $Q_{\rm H^0}$ is the number of ionizing photons above the Lyman limit produced by the existing stars per unit time,
$R_s$ is the Str\"omgren radius, and $n_{\rm H}$ is the local density
of hydrogen atoms \citep{kewleydopita02}.
We use the following relation by \citet{KK04} to estimate the ionization parameter
from the [{\sc Oiii}]$\lambda\lambda$4959,5007/[{\sc Oii}] ratio ($O_{\rm 32}$) and gas-phase metallicity:
\begin{eqnarray}
{\rm log}(q) = &\{&32.81 - 1.153y^2 + [{\rm 12+log(O/H)}] \nonumber \\ \nonumber
&\times& (-3.396-0.025y+0.1444y^2)\} \\ \nonumber
&\times& \{4.603-0.3119y-0.163y^2 + [{\rm 12+log(O/H)}] \\
&\times& (-0.48 + 0.0271y+0.02037y^2)\}^{-1},
\end{eqnarray}
\noindent
where $y={\rm log}\ O_{\rm 32}$.
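A direct transcription of this relation (an illustrative sketch; coefficients as in \citealt{KK04}):

```python
def log_q_kk04(log_o32, log_oh):
    """Ionization parameter log(q / cm s^-1) from y = log(O32) and
    12+log(O/H), following Kobulnicky & Kewley (2004)."""
    y = log_o32
    num = 32.81 - 1.153 * y**2 + log_oh * (-3.396 - 0.025 * y + 0.1444 * y**2)
    den = 4.603 - 0.3119 * y - 0.163 * y**2 + log_oh * (-0.48 + 0.0271 * y + 0.02037 * y**2)
    return num / den
```

Note that in practice the metallicity and $q$ depend on each other, so the two are usually solved iteratively.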
\subsection{$M_*$ -- SFR and $M_*$ -- gas-phase metallicity diagram}
Figure~\ref{fig:msmz} shows the star-forming main sequence and the mass--metallicity relation
for the star-forming galaxies at $z\sim3.3$.
In the left panel, we also show star-forming galaxies and SMGs at $z=$\ 3--4 from the literature, which
are introduced in Section~\ref{subsec:comparisonsample}.
\added{In the right panel, we show
stacking results at $z\sim3.3$ from the MOSDEF survey \citep{sanders20}.
We use the line ratios given in \citet{sanders20} and the same metallicity
calibration method as shown in Section~\ref{subsec:ism}.}
Our targets
are distributed around the star-forming main sequence and the mass--metallicity relation,
and thus are not biased in terms of star-forming activity and gas-phase metallicity.
The stacked sample is also close to the star-forming main sequence
and the mass--metallicity relation,
indicating that the stacked sample has a typical SFR and gas-phase metallicity
for its stellar mass.
\begin{figure*}[t]
\begin{minipage}[cbt]{0.49\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figure2a.pdf}
\end{minipage}
\begin{minipage}[cbt]{0.49\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figure2b.pdf}
\end{minipage}
\caption{(Left) Stellar mass--SFR relation for the star-forming galaxies at $z=$\ 3--4 observed with ALMA
together with star-forming galaxies and SMGs at $z=$\ 3--4 from the literature (Section~\ref{subsec:comparisonsample}).
The solid line shows the star-forming main sequence at $z=3.3$ from \citet{speagle14}.
The dashed lines represent $\pm0.3\ {\rm dex}$ from the main sequence.
(Right) Stellar mass versus gas-phase metallicity diagram.
\added{We show the stacking results from the MOSDEF survey \citep{sanders20} for comparison.}
The thick solid line shows the mass--metallicity relation at $z=0$ from \citet{curti20}.
The black solid line shows the best-fit relation for our parent sample at $z\sim3.3$,
and the dashed lines represent its scatter of $0.11$~dex.
The galaxies observed with ALMA are distributed around the star-forming main sequence
and the mass--metallicity relation,
and thus are not biased in terms of
star-forming activity and gas-phase metallicity.
}
\label{fig:msmz}
\end{figure*}
\subsection{Gas mass}\label{subsec:coldgasestimate}
We converted the dust masses from {\sc magphys} to gas masses
with the relation between the gas-phase metallicity and gas-to-dust mass ratio.
We use the relation shown in \citet{magdis12} as follows:
\begin{equation}
{\rm log}(\delta_{\rm gdr}) = (10.54 \pm 1.0) - (0.99 \pm 0.12) \times (\rm 12+log(O/H)),
\end{equation}
\noindent
which is based on the relation of \citet{leroy11} and
uses the metallicity estimated with the [{\sc Nii}]/H$\alpha$ ratio
of \citet{PP04}.
Note that the dust mass estimation in \citet{leroy11} and \citet{magdis12} is based on the
\citet{draineli07} models.
The scatter of this relation is $0.15$~dex \citep{magdis12}.
We need to convert the gas-phase metallicity in Table~\ref{tab:basicquantity}
to that based on the \citet{PP04} calibration.
We estimated [{\sc Nii}]/H$\alpha$ ratios using the relation between
$\rm 12+log(O/H)$ and [{\sc Nii}]/H$\alpha$ of \citet{curti16},
and then, converted the estimated [{\sc Nii}]/H$\alpha$ ratios
to the gas-phase metallicities using the \citet{PP04} calibration.
\added{The empirical relation between $\rm 12+log(O/H)$ and [{\sc Nii}]/H$\alpha$
has a scatter of 0.1~dex along the metallicity direction \citep{curti16}.
This scatter causes a $\sim0.19$~dex uncertainty on average
in the estimated [{\sc Nii}]/H$\alpha$ ratios for our sample.
Given that the \citet{PP04} calibration has a scatter of $0.18$~dex,
the converted gas-phase metallicities have a typical uncertainty
of $0.26$~dex.}
Then, we estimated gas masses as follows:
\begin{equation}
{\rm M_{\rm gas}} = {\rm M_{\rm dust}} \times \delta_{\rm gdr},
\label{eq.dusttogas}
\end{equation}
\noindent
where $\rm M_{\rm gas}$ includes both molecular and atomic hydrogen.
We multiply our dust masses by a factor of two when converting them to gas masses
with Eq.~(\ref{eq.dusttogas}) to correct for the systematic difference in dust mass estimates
between {\sc magphys} and the \citet{draineli07} models (Section~\ref{subsec:magphys}).
The estimated gas masses and limits of the individual sources and the stacked sample are summarized
in Table~\ref{tab:basicquantity}.
Given
the $\sim 0.30$~dex uncertainty coming from the assumed $\kappa_{\rm abs}$ \citep{hunt19},
\added{the $\sim 0.26$~dex uncertainty on the converted gas-phase metallicities,}
and the $\sim0.15$~dex scatter of the relation between
the gas-phase metallicity and gas-to-dust mass ratio \citep{magdis12},
the systematic uncertainty of our gas mass estimates, obtained by adding these terms in quadrature, is roughly
\added{$0.42$~dex}.
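The dust-to-gas conversion chain can be summarized in a few lines (an illustrative sketch: the metallicity argument is assumed to be already on the \citet{PP04} scale, and the factor of two is the {\sc magphys}-to-\citet{draineli07} correction described above):

```python
import numpy as np

def log_gas_to_dust(log_oh_pp04):
    """Magdis et al. (2012): log(delta_GDR) = 10.54 - 0.99 * [12+log(O/H)]_PP04."""
    return 10.54 - 0.99 * log_oh_pp04

def log_gas_mass(log_mdust_magphys, log_oh_pp04):
    """log M_gas = log(2 * M_dust * delta_GDR); the factor of two rescales
    MAGPHYS dust masses to the Draine & Li (2007) kappa scale."""
    return log_mdust_magphys + np.log10(2.0) + log_gas_to_dust(log_oh_pp04)

# Quadrature sum of the systematic terms quoted in the text (~0.42 dex).
SYS_UNCERTAINTY = np.sqrt(0.30**2 + 0.26**2 + 0.15**2)
```
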
\subsection{Comparison sample from the literature}\label{subsec:comparisonsample}
We next introduce samples from the literature to which we compare our data in Section~\ref{sec:results}.
Since different works use different approaches to estimate
dust and/or gas masses,
these comparisons must be interpreted with care.
Please refer to the papers cited below for more details
about the sample selection, observations, and methods used to estimate
dust and/or gas masses.
\begin{itemize}
\setlength{\leftskip}{-0.5cm}
\setlength{\itemsep}{-0.1cm}
\item \citet{magdis17} investigated the dust and gas masses
of two massive Lyman Break Galaxies (LBGs) at $z\sim3$.
The dust masses are estimated with the \citet{draineli07} models.
They used several independent methods to estimate
the gas masses,
namely, CO(3--2) line, dust mass from the IR SED,
and the empirical relation of \citet{groves15}.
\item \citet{wiklind19} targeted star-forming galaxies at $z\sim3$.
They used the empirical relation of \citet{scoville16} to
estimate molecular gas masses.
We show 11 galaxies with the individual molecular gas estimates.
\item ASPECS: We extract two galaxies
at $z\sim3.6$ from the ASPECS $1.2$~mm continuum source catalog \citep{aravena20}.
The dust masses are estimated from SED fitting with {\sc magphys}.
We use the gas masses in their catalog, estimated from the dust mass
with a fixed gas-to-dust mass ratio of 200.
\item \citet{cassata20} observed CO emission lines for massive LBGs at $z\sim$ 3--4.
They used the CO(5--4) emission lines to estimate molecular gas masses.
\item AS2UDS is an ALMA survey targeting 700 SMGs \citep{dudzeviciute20}.
Here we show the AS2UDS galaxies at $z=$\ 3--4.
\citet{dudzeviciute20} estimated the dust masses of the AS2UDS galaxies
from the SED fitting with {\sc magphys}.
They converted the dust masses to gas masses
assuming a fixed gas-to-dust mass ratio of 100.
\item \citet{tan14} investigated the dust masses of three SMGs at $z=4.05$
with the multi-band photometry in the IR regime.
They used the \citet{draineli07} dust models
to estimate the dust masses.
\end{itemize}
We also introduce the following galaxy samples at lower redshifts,
which have individual measurements of gas mass and gas-phase metallicity.
\begin{itemize}
\setlength{\leftskip}{-0.5cm}
\setlength{\itemsep}{-0.1cm}
\item \citet{saintonge13} investigated the dust and
molecular gas masses
of 17 gravitationally lensed star-forming galaxies at $z\sim$\ 1.4--3.
They used the \citet{draineli07} models to estimate dust masses.
Molecular gas masses are estimated from the CO(3--2) lines.
We show four galaxies at $z\sim$ 2--3 in Sections~\ref{subsec:msmdust} and \ref{subsec:fgas_OH_comparison}.
\item \citet{seko16_alma} investigated the dust and molecular gas masses
of star-forming galaxies at $z\sim1.4$.
They converted the dust continuum flux to a dust mass
assuming a modified blackbody with fixed $T_{\rm dust}=30$~K and $\beta=1.5$.
They used the CO(5--4) line to estimate the molecular gas masses.
\item xCOLD-GASS is a CO(1--0) line survey for local SDSS galaxies.
We use the public catalog in \citet{saintonge17}
and combine it with the catalog of the xGASS project
\citep{catinella18} to estimate the total gas masses.
\added{The stellar masses of the xCOLD-GASS galaxies are in the range of
$\rm log(M_*/M_\odot)=$ 9.1--11.2}.
\item ALLSMOG is a CO(2--1) line survey for local SDSS galaxies \citep{bothwell14}.
Most of the ALLSMOG galaxies have atomic hydrogen gas measurements from different studies
(see \citealt{cicone17} for more details).
\added{The stellar mass range of the ALLSMOG galaxies
is $\rm log(M_*/M_\odot)=$ 9.3--10.0}.
\end{itemize}
\section{Results and Discussion} \label{sec:results}
\subsection{Dust mass and its metallicity dependence}\label{subsec:msmdust}
\begin{figure*}[tb]
\begin{minipage}[cbt]{0.59\textwidth}
\centering\includegraphics[width=0.73\textwidth]{figure3a.pdf}
\end{minipage}
\begin{minipage}[cbt]{0.39\textwidth}
\centering\includegraphics[width=0.92\textwidth]{figure3b.pdf}
\end{minipage}
\caption{(Left) Relation between stellar mass and dust mass
for the star-forming galaxies at $z\sim3.3$ together with galaxies at $z\sim$\ \added{1.4--4}
from the literature.
The dashed lines correspond to constant dust-to-stellar mass ratios of
$\rm M_{dust}/M_* = 1\times10^{-3}$ and $3\times10^{-3}$.
We show the galaxy samples of \citet{tan14}, \citet{magdis17}, \added{and \citet{saintonge13}} after dividing their dust masses
by a factor of two \added{to correct for the difference of the assumed $\kappa_{\rm abs}$}.
The star-forming galaxies at $z\sim3.3$ have similar dust-to-stellar mass ratios as more massive
star-forming galaxies from \citet{magdis17} and \added{ASPECS} \citep{aravena20}.
(Right) Relation between the gas-phase metallicity and dust-to-stellar mass ratio
for the star-forming galaxies at $z\sim3.3$.
We find no clear correlation between the gas-phase metallicity and dust-to-stellar mass ratio \added{among our sample}.
}
\label{fig:ms-mdust}
\end{figure*}
The dust masses of the star-forming galaxies at $z\sim3.3$
are estimated to be $\rm log(M_{dust}/M_\odot)\sim$ 7.4--8.9
(Table~\ref{tab:basicquantity}).
The dust mass of the stacked sample is $\rm log(M_{dust}/M_\odot)=7.33^{+0.17}_{-0.15}$.
The left panel of Figure~\ref{fig:ms-mdust} shows the comparison of
dust masses between the star-forming galaxies at $z\sim3.3$ and
the galaxies at $z\sim$\ \added{1.4--4} in the literature (Section~\ref{subsec:comparisonsample}).
As mentioned in Section~\ref{subsec:comparisonsample},
\citet{tan14}, \citet{magdis17}, and \added{\citet{saintonge13}} estimated dust masses with the
\citet{draineli07} models.
To correct for the systematic difference of the dust mass estimate
between {\sc magphys} and \citet{draineli07} models,
the dust masses of the galaxies in these studies
are divided by a factor of two in the left panel of Figure~\ref{fig:ms-mdust}.
The right panel of Figure~\ref{fig:ms-mdust} shows the relation between
the gas-phase metallicity and dust-to-stellar mass ratio for the star-forming galaxies
at $z\sim3.3$.
Given that dust is produced from metals,
we would expect galaxies with higher metallicities to have larger dust masses at a given stellar mass.
We find no statistically significant trend between the dust-to-stellar mass ratio and gas-phase metallicity
\added{among our sample}.
The brightest source at 1.3~mm among our sample, 208681
(Table~\ref{table:obssummary} and Figure~\ref{fig:ms-mdust}),
has a dust mass of $\rm log(M_{dust}/M_\odot)=8.89_{-0.08}^{+0.11}$, which
is comparable to those of SMGs at $z\sim$\ 3--4 \citep{tan14,dudzeviciute20}.
This source can be classified as an SMG
in terms of its dust content.
The other five galaxies and the stacked sample have $\sim1$~dex lower dust masses
than the SMGs at $z\sim$\ 3--4 with similar stellar masses.
This brightest source, 208681, is also the most metal-rich galaxy with $\rm 12+log(O/H)=8.59_{-0.05}^{+0.04}$
among our sample
\added{and appears to lie apart from the other galaxies
in the right panel of Figure~\ref{fig:ms-mdust}.
This may suggest that SMGs have a different relation
between the gas-phase metallicity and dust-to-stellar mass ratio
from UV/optical-selected star-forming galaxies.}
The star-forming galaxies at $z\sim$\ 3--4, except for the SMGs, show
a positive correlation between stellar mass and dust mass.
The dust-to-stellar mass ratio ranges roughly between $1\times10^{-3}$ and $5\times 10^{-3}$
(median value: $2\times10^{-3}$)
in the stellar mass range of $\rm log(M_*/M_\odot)\sim$\ 10.1--11.4.
Compared
with star-forming galaxies
at $z\sim1.4$ \citep{seko16_alma} and $z\sim$~2--3
\citep{saintonge13},
star-forming galaxies at lower redshifts have
dust-to-stellar mass ratios ($1\times10^{-3}$ -- $5\times10^{-3}$)
similar to those of our galaxies at $z\sim3.3$ with similar stellar masses.
Although a fair comparison is difficult due to the different sample selections
among the three studies,
the evolution of the dust-to-stellar mass ratio
appears to be mild between $z\sim1.4$ and 3.3, as shown in \citet{bethermin15}.
\subsection{Gas properties of star-forming galaxies at $z$=3--4}\label{subsec:fgastdep}
\begin{figure*}[t]
\begin{minipage}[cbt]{0.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figure4a.pdf}
\end{minipage}
\begin{minipage}[cbt]{0.49\textwidth}
\centering
\includegraphics[width=0.8\textwidth]{figure4b.pdf}
\end{minipage}
\caption{Stellar mass versus gas mass fraction (left) and gas depletion timescale (right) diagram
for star-forming galaxies at $z\sim$ 3--4.
We show our sample at $z\sim3.3$
together with galaxies at $z\sim$ 3--4
from the literature \citep{magdis17,wiklind19,aravena20,cassata20,dudzeviciute20}.
The solid line in each panel represents the scaling relation
for star-forming galaxies on the main sequence at $z\sim3.3$ from \citet{tacconi18}.
The dashed lines correspond to the cases that galaxies are $0.3\ {\rm dex}$ above/below
the main sequence.
\added{The vertical line in the top right corner of each panel
represents an additional $\pm 1\sigma$ error for our sample coming from the systematic uncertainty on $\rm M_{gas}$ (Section~\ref{subsec:coldgasestimate}).}
Contrary to the tight distribution of the galaxies at $z\sim3.3$ around the main sequence,
the derived gas mass fractions and gas depletion timescales show a large scatter at a fixed stellar mass.
Gas properties of star-forming galaxies may have a larger intrinsic scatter than expected from the scaling relation.
}
\label{fig:fgas_tdep}
\end{figure*}
Figure~\ref{fig:fgas_tdep} shows the gas mass fraction, $f_{\rm gas} = {\rm M_{gas}/(M_{gas}+M_*)}$,
and gas depletion timescale, $t_{\rm dep} = {\rm M_{gas}/SFR}$, as a function of stellar mass
for the star-forming galaxies at $z\sim3.3$.
Our estimated gas mass fractions are 0.20--0.75
and the gas depletion timescales are 0.09--1.55~Gyr.
The typical uncertainties of $f_{\rm gas}$ and $t_{\rm dep}$
are $\pm0.09$ and $\pm0.22$~dex, respectively.
\added{Given that the gas masses have a systematic uncertainty of
$\sim0.42$~dex (Section~\ref{subsec:coldgasestimate}),
$f_{\rm gas}$ and $t_{\rm dep}$ have
additional errors of $\sim0.19$ and 0.42~dex ($1\sigma$), respectively.}
As for the stacked sample,
the gas mass fraction and gas depletion timescale
are estimated to be
$f_{\rm gas}=0.38_{-0.09}^{+0.10}$ and $t_{\rm dep}=0.28_{-0.14}^{+0.15}$~Gyr, respectively.
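As an illustrative cross-check (our own arithmetic, not part of the original analysis), the stacked-sample values can be translated into a gas-to-stellar mass ratio and a specific SFR directly from the definitions of $f_{\rm gas}$ and $t_{\rm dep}$:

```latex
% Illustrative arithmetic only, using the stacked-sample values quoted above.
\begin{equation}
\frac{\rm M_{gas}}{\rm M_{*}} = \frac{f_{\rm gas}}{1-f_{\rm gas}}
 = \frac{0.38}{0.62} \simeq 0.61,
\qquad
{\rm sSFR} = \frac{\rm M_{gas}/M_{*}}{t_{\rm dep}}
 \simeq \frac{0.61}{0.28~{\rm Gyr}} \simeq 2.2~{\rm Gyr^{-1}},
\end{equation}
```

a specific SFR broadly consistent with galaxies lying close to the star-forming main sequence at this redshift.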
We show star-forming galaxies and SMGs
at $z\sim$\ 3--4 from the literature (Section~\ref{subsec:comparisonsample}) in Figure~\ref{fig:fgas_tdep}.
The solid line in each panel represents the scaling relation
for galaxies on the star-forming main sequence at $z\sim3.3$
from \citet{tacconi18}.
The dashed lines correspond to the case when galaxies are at $\pm 0.3$~dex
from the star-forming main sequence.
Our sample including the stacking result reaches down to
$f_{\rm gas}\sim$ 0.2--0.3, which is lower by a factor of $\gtrsim2$ than the scaling relation.
We also find a scatter of $\gtrsim1$~dex for the gas depletion timescale
at a fixed stellar mass.
Such a large scatter of the gas properties
is also seen in the samples of \citet{wiklind19} and \citet{dudzeviciute20}.
It has been reported that the gas mass fraction and depletion timescale
of the main sequence galaxies
gradually change depending on the deviation from the star-forming main sequence
($\Delta_{\rm MS}$; e.g., \citealt{saintonge12,sargent14,tacconi18}).
We checked how the offset of the gas mass fraction or depletion timescale
from the scaling relation for galaxies on the main sequence
changes depending on $\rm \Delta_{MS}$.
We find a trend consistent with \citet{tacconi18} when combining our sample with
the samples of the literature \citep{magdis17,wiklind19,cassata20,aravena20}.
However,
at a fixed $\Delta_{\rm MS}$,
our sample shows a large scatter of the gas mass fraction and depletion timescale.
The observed scatter of the gas mass fraction and depletion timescale in our sample
cannot be explained by $\Delta_{\rm MS}$ alone.
These results suggest that the fundamental gas properties of galaxies
have a large diversity even when they have similar stellar masses and SFRs \citep{elbaz18}.
Given that the scaling relations are possibly biased toward dusty and gas-rich galaxies
especially at higher redshifts,
the scaling relations may not be representative of the majority of the galaxy populations
at $z\sim$ 3--4.
\added{Given that we use the metallicities to derive the gas properties,
the observed trends as a function of stellar mass in Figure~\ref{fig:fgas_tdep}
may be partly caused by the mass--metallicity relation.
However, the distribution of our sample in Figure~\ref{fig:fgas_tdep} does not change significantly
when assuming a constant gas-to-dust mass ratio.
This means that our results are not affected
by the fact that we use the gas-phase metallicities to derive the gas properties.}
\subsection{Gas mass fraction versus physical conditions of the ionized gas}\label{subsec:fgas_OH}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figure5.pdf}
\caption{
Gas mass fraction versus gas-phase metallicity
for our galaxy sample at $z\sim3.3$.
\added{The horizontal bar in the bottom right corner
shows an additional $\pm 1\sigma$ error on the gas mass fraction
coming from the systematic uncertainty on $\rm M_{gas}$.
The dashed lines show how the two quantities depend on each other
when fixing the dust-to-stellar mass ratio at $1\times10^{-3}$ and $3\times10^{-3}$.}
We find no statistically significant correlation
between the gas mass fraction and gas-phase metallicity among our sample.
}
\label{fig:fgas_OH_z3}
\end{figure}
\added{We investigate the relation between the gas mass fraction
and the physical conditions of the ionized gas, namely,
gas-phase metallicity and ionization parameter (Section~\ref{subsec:ism}),
for our galaxy sample at $z\sim3.3$.
Figure~\ref{fig:fgas_OH_z3} shows the comparison between the gas mass
fraction and gas-phase metallicity.}
We note that the gas mass fraction and gas-phase metallicity
(and also ionization parameter) are not independent
\added{as mentioned in the previous section.
In Figure~\ref{fig:fgas_OH_z3}, we show how the two quantities
depend on each other when fixing the dust-to-stellar mass ratio
with dashed lines.
}
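The behavior of these fixed-$\rm M_{dust}/M_*$ tracks can be sketched from the definitions alone. Writing the metallicity-dependent gas-to-dust mass ratio as $\delta_{\rm GDR}$ (the specific calibration is the one adopted in Section~\ref{subsec:coldgasestimate}; the expression below is only a schematic illustration):

```latex
% Schematic form of the dashed tracks: M_gas = delta_GDR * M_dust,
% with delta_GDR decreasing as 12+log(O/H) increases.
\begin{equation}
f_{\rm gas} = \frac{\rm M_{gas}}{\rm M_{gas}+M_{*}}
 = \frac{\delta_{\rm GDR}\,({\rm M_{dust}/M_{*}})}
        {1+\delta_{\rm GDR}\,({\rm M_{dust}/M_{*}})},
\end{equation}
```

so that, at a fixed dust-to-stellar mass ratio, a higher metallicity (lower $\delta_{\rm GDR}$) maps onto a lower gas mass fraction.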
Given that the abundance of oxygen with respect to hydrogen
changes depending on the amount of the hydrogen gas in galaxies,
the gas-phase metallicity, 12+log(O/H), would be expected to decrease
as the gas mass fraction increases
\citep[e.g.,][]{bothwell13,zahid_apj791,bothwell16_aa,seko16_alma}.
However, we find no statistically significant correlation between the gas mass fraction
and gas-phase metallicity
for the star-forming galaxies at $z\sim3.3$,
which is the same as the result obtained from the comparison between the gas-phase metallicity
and dust-to-stellar mass ratio (Figure~\ref{fig:ms-mdust}).
We find no clear correlation between the gas mass fraction and ionization parameter as well.
According to \citet{kashino19},
the gas mass fraction and ionization parameter are related indirectly via
three parameters,
namely, specific SFR (sSFR), gas-phase metallicity, and electron density.
When the gas-phase metallicity increases or sSFR decreases, both the gas mass fraction and ionization parameter decrease.
When the electron density increases, the gas mass fraction increases but the ionization parameter decreases \citep{kashino19}.
Because the gas mass fraction and ionization parameter depend on the three parameters in a different way,
how the gas mass fraction correlates with the ionization parameter is not straightforward.
We would need to fix some of the parameters to investigate the trend between the two quantities.
A lack of a clear correlation between gas mass fractions
and the physical conditions of the ionized gas
may reflect stochastic star-formation histories
for star-forming galaxies at high redshifts.
Star formation in galaxies at higher redshifts
is suggested to be burstier than in local galaxies \citep{yicheng16,faucher-giguere18,tacchella20}.
When the star-forming activity in galaxies changes on a short timescale,
it becomes more difficult to identify a global trend between the physical quantities.
Note that our sample size may be too small to find
any correlation.
We need a larger sample of galaxies covering a wider range of stellar mass
to confirm whether
the gas mass fraction correlates with gas-phase metallicity and ionization parameter.
\subsection{Comparison of galaxies at $z=$ 0--3.3 on the $f_{gas}$ versus 12+log(O/H) diagram}\label{subsec:fgas_OH_comparison}
\begin{figure*}[tb]
\centering
\includegraphics[width=0.5\textwidth]{figure6.pdf}
\caption{Relation between gas mass fraction and gas-phase metallicity
for star-forming galaxies from $z=0$ to $z\sim3.3$.
\added{The horizontal bar in the left bottom corner represents
an additional $\pm 1\sigma$ error on the gas mass fraction
coming from the systematic uncertainty on $\rm M_{gas}$ for our sample.}
Only the molecular gas components are considered for the galaxies in \citet{seko16_alma}
and \citet{saintonge13}.
The star-forming galaxies at $z\gtrsim2$ show an offset
toward the lower gas-phase metallicity from the distribution of the local galaxies.
The black lines show the model tracks from the gas regulator model of \citet{pengmaiolino14}
assuming different mass-loading factors between $\lambda=$ 0.5 and 2.5.
The distribution of the star-forming galaxies at $z\sim3.3$ on this diagram
can be broadly explained with the model tracks with $\lambda \sim$ 2--2.5,
suggesting the redshift evolution of the mass-loading factor for star-forming galaxies.
}
\label{fig:fgas_OH_lowz}
\end{figure*}
Figure~\ref{fig:fgas_OH_lowz}
shows the star-forming galaxies from $z=0$ to 3.3 (Section~\ref{subsec:comparisonsample})
on the gas mass fraction versus metallicity diagram.
In the literature \citep{saintonge13,seko16_alma,saintonge17,cicone17},
the gas-phase metallicities are estimated with the [{\sc Nii}]/H$\alpha$ ratios.
In order to compare with the gas-phase metallicities of our sample, which are
estimated based on [{\sc Oiii}], H$\beta$, and [{\sc Oii}] lines \citep{curti16},
we convert the given [{\sc Nii}]/H$\alpha$ ratios in the previous studies
to the gas-phase metallicities using the empirical relation between $\rm 12+log(O/H)$
and [{\sc Nii}]/H$\alpha$
of \citet{curti16}.
For the galaxies
at $z\sim0$ and $3.3$,
we use the total (molecular$+$atomic)
gas mass fraction,
because the gas regulator model used in the following sections
does not distinguish between atomic and molecular hydrogen.
The gas mass fraction of the galaxies
in \citet{seko16_alma} and \citet{saintonge13} is the molecular gas mass fraction.
We expect that the comparison in Figure~\ref{fig:fgas_OH_lowz}
is not significantly affected by the fact that
we do not include the atomic gas for the two samples.
The molecular fraction of the gas is suggested to increase with increasing redshift
because of the higher surface densities of galaxies at higher redshifts
\citep[e.g.,][]{popping15}.
\citet{popping15} suggest that the molecular fraction of the
total gas is $\sim$\ 0.6--0.8 at $z\sim$\ 1.5--3.0
based on their simulations.
Focusing on the local galaxies in Figure~\ref{fig:fgas_OH_lowz},
the gas-phase metallicity gradually decreases with increasing gas mass fraction
as shown in \citet{bothwell13} and \citet{hunt15}.
Such a gradual decrease of the gas-phase metallicity
with increasing gas mass fraction
indicates that
we need to cover a wide range of gas mass fraction
to identify the correlation between the two quantities.
Whereas the star-forming galaxies at $z\sim1.4$ from \citet{seko16_alma}
appear to be located at the gas-rich end of the distribution of the local star-forming galaxies,
the galaxies at $z\gtrsim2$ from this study and \citet{saintonge13}
show an offset toward the lower gas-phase metallicities ($\sim0.2$~dex)
with respect to the distribution of the local galaxies.
This result suggests that
star-forming galaxies at $z\gtrsim2$ are less chemically enriched
than those at $z=0$ and even at $z\sim1.4$ with similar gas mass fractions.
The molecular gas mass is estimated with the CO lines in the literature (Section~\ref{subsec:comparisonsample}).
Although the systematic difference caused by using different methods to estimate gas mass
could change the relative distribution of the galaxies in the horizontal direction,
it cannot explain the offset of the galaxies at $z\sim3.3$ toward the low gas-phase metallicity
with respect to the local galaxies.
As for the gas-phase metallicity,
\citet{curti16} showed
that the offset of gas-phase metallicities
calibrated with different line ratios is 0.04~dex on average.
The systematic uncertainty caused by using different line ratios to calibrate the metallicity
is also unlikely to affect our results.
\added{We have a caveat on our gas-phase metallicity measurement
for the ALMA-detected, dusty star-forming galaxies in our sample.
\citet{herrera-camus18} reported that gas-phase metallicities
calibrated with rest-frame optical emission lines tend to be lower
than those calibrated with FIR fine-structure lines by up to a factor of two
for local (U)LIRGs.
The FIR lines are likely to trace the ionized gas in the dense and dusty star-forming regions, which are no longer traced by the optical emission lines,
and such dense and dusty star-forming regions would be more metal enriched \citep[e.g.,][]{santini10}.
When this is also the case for our ALMA-detected galaxies at $z\sim3.3$,
the offset toward the low metallicity
with respect to the local galaxies in Figure~\ref{fig:fgas_OH_lowz} could be
explained by the underestimated gas-phase metallicities for our sample.
However, when comparing the dust extinction values, $\rm A_V$, between
our sample and local (U)LIRGs in \citet{rupke08},
the median $\rm A_V$ of our ALMA-detected galaxies ($\sim0.6$~mag)
is much smaller than that of local (U)LIRGs ($\sim3.6$~mag).
This implies that
the ALMA-detected galaxies in our sample are not as dusty as
the local (U)LIRGs, and thus, that
the metallicities calibrated with the optical emission lines
can be regarded as representative values for our sample at $z\sim3.3$.}
\subsubsection{Comparison with a gas regulator model} \label{subsec:outflowrate}
``Equilibrium'', ``bathtub'' or ``gas regulator'' models
are used to track the evolution of the fundamental physical quantities of galaxies,
such as gas mass, SFR, and metallicity,
by considering gas inflows, outflows,
star formation, and metal production in galaxies
\citep[e.g.,][]{finlator08,bouche10,dave12,Dayal13,lilly13,pengmaiolino14,tacchella20}.
We compare the observational data at $z=$\ 0--3.3 with a gas regulator model by \citet{pengmaiolino14}.
\citet{pengmaiolino14} derived the analytic formula to track the evolution of the physical quantities,
such as gas mass, SFR, metallicity, and stellar mass.
The input parameters of this model are
gas inflow rate ($\Phi$), star formation efficiency ($\varepsilon =$\ SFR/$\rm M_{gas}$),
mass-loading factor ($\lambda=$\ outflow rate/SFR), and return mass fraction ($R$).
The gas accretion to the galaxy is assumed to scale with the growth rate of the dark matter halo.
The dark matter halo growth rate is derived from the cosmological hydrodynamic simulations \citep{faucher-giguere11}.
The outflow rate is assumed to be proportional to SFR.
The return mass fraction takes values of $\sim0.2$ to $\sim0.5$ depending on the IMF.
This model assumes that these input parameters are constant
with time or change on timescales longer than
the equilibrium timescale.
The equilibrium timescale is
the timescale to reach the equilibrium state,
in which gas acquisition by inflows balances gas consumption
by star formation and outflows.
The equilibrium timescale is expressed as follows:
\begin{equation}
\tau_{\rm eq} = \frac{1}{\varepsilon(1-R+\lambda)}.
\label{eq:taueq}
\end{equation}
The time evolution of the gas mass fraction and gas-phase metallicity
is described as follows:
\begin{equation}
f_{\rm gas}(t) = \frac{1}{1 + \varepsilon (1-R) \left(\frac{t}{1-e^{-\frac{t}{\tau_{\rm eq}}}} - \tau_{\rm eq} \right)},
\end{equation}
\begin{equation}
Z_{\rm gas}(t) = [Z_0 + y\tau_{\rm eq}\varepsilon(1 - e^{-\frac{t}{\tau_{\rm eq}}})][1-e^{-\frac{t}{\tau_{\rm eq}(1-e^{-t/\tau_{\rm eq}})}}],
\end{equation}
\noindent
where
$Z_0$ is the metallicity of the infalling gas,
and $y$ is the average yield per stellar generation.
In Figure~\ref{fig:fgas_OH_lowz},
we show the model tracks obtained from the gas regulator model of \citet{pengmaiolino14}.
We assume $R=0.4$ (for the Chabrier IMF; \citealt{madau14}) and
$y = 1.5 Z_\odot$ \citep[e.g.,][]{yabe15_apj}.
The gas depletion timescale ($1/\varepsilon$) is set to be $t_{\rm dep} = 0.8\ {\rm Gyr}$.
Note that the normalization of model tracks in Figure~\ref{fig:fgas_OH_lowz}
does not depend on the absolute value of $t_{\rm dep}$.
We assume five different mass-loading factors between $\lambda=$\ 0.5 and 2.5 (Figure~\ref{fig:fgas_OH_lowz}).
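As a back-of-the-envelope illustration (our own arithmetic, not from the original analysis), Eq.~(\ref{eq:taueq}) can be rewritten as $\tau_{\rm eq}=t_{\rm dep}/(1-R+\lambda)$, which, for the adopted $t_{\rm dep}=0.8$~Gyr and $R=0.4$, gives:

```latex
% Illustrative evaluation of the equilibrium timescale for the two
% extreme mass-loading factors assumed for the model tracks.
\begin{equation}
\tau_{\rm eq} = \frac{t_{\rm dep}}{1-R+\lambda} =
\begin{cases}
 0.8/1.1 \simeq 0.73~{\rm Gyr} & (\lambda = 0.5),\\
 0.8/3.1 \simeq 0.26~{\rm Gyr} & (\lambda = 2.5),
\end{cases}
\end{equation}
```

so a larger mass-loading factor drives a galaxy toward the equilibrium state on a shorter timescale.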
We find that
the distribution of the star-forming galaxies at $z\sim3.3$ and
those from \citet{saintonge13} can be broadly explained by the model tracks with the high mass-loading factor of
$\lambda \sim$\ 2.0--2.5 rather than the lower values such as $\lambda \sim$\ 0.5 or 1.
A stronger outflow (larger $\lambda$) is required to
reproduce the lower gas-phase metallicities of star-forming galaxies at $z\gtrsim2$
relative to the local ones with similar gas mass fractions.
This result may suggest a redshift evolution of the mass-loading factor $\lambda$ from $z=0$ to $3.3$, as we discuss below.
\citet{yabe15_apj} showed the increasing outflow rate normalized by SFR with increasing redshifts up to $z\sim2$
by comparing the observational data (stellar mass, gas mass fraction, and gas-phase metallicity)
with a simple chemical evolution model (see also \citealt{troncoso14}).
Some theoretical studies
based on analytic models or numerical simulations
showed the redshift evolution of the mass-loading factor
\citep[e.g.,][]{barai15,mitra15,muratov15,hayword17}.
Observationally, \citet{sugahara17} showed
that star-forming galaxies at higher redshift (up to $z=2$)
have larger mass-loading factors at a fixed circular velocity.
Our results obtained from the comparison
between the observational data and the model tracks
support the idea that star-forming galaxies at higher redshifts have larger mass-loading factors,
and thus, more massive outflow.
\subsubsection{Equilibrium timescale}
We estimate the equilibrium timescales (Eq.~(\ref{eq:taueq})) for the star-forming galaxies at $z\sim3.3$
with the gas depletion timescale obtained from the observation
and the mass-loading factor inferred from the comparison with the model tracks
in Figure~\ref{fig:fgas_OH_lowz}.
The equilibrium timescales of the ALMA-detected sources are estimated to be 0.03--2.21~Gyr (average value: 0.52~Gyr)
assuming $R=0.4$ for the Chabrier IMF \citep{madau14}.
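For a concrete illustration (our own arithmetic, with $\lambda=2.25$ assumed as the midpoint of the inferred range and the stacked-sample depletion timescale):

```latex
% Illustrative only: t_dep = 0.28 Gyr (stacked sample), lambda = 2.25, R = 0.4.
\begin{equation}
\tau_{\rm eq} = \frac{t_{\rm dep}}{1-R+\lambda}
 \simeq \frac{0.28~{\rm Gyr}}{0.6+2.25} \simeq 0.10~{\rm Gyr},
\end{equation}
```

which lies within the quoted 0.03--2.21~Gyr range and well below the Hubble time at $z=3.3$ (2.82~Gyr).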
According to \citet{pengmaiolino14},
when the equilibrium timescale is much shorter than the Hubble time,
galaxies are expected to be in the equilibrium state,
where gas acquisition by inflows and
gas consumption by star formation and outflows are balanced.
On the other hand,
when the equilibrium timescale is comparable to the Hubble time,
galaxies are considered to have a much larger gas reservoir
and to be out of equilibrium.
The average equilibrium timescale of the detected sources
is roughly one order of magnitude shorter than the Hubble time at $z=3.3$ ($2.82$~Gyr).
However, not
all galaxies necessarily started
forming stars at the beginning of the Universe.
Given that the age of galaxies must be smaller than the Hubble time,
the equilibrium timescale should probably be compared with the age of the
galaxies rather than the Hubble time.
Here we use the ratio $\rm M_*/SFR$, which can be
regarded as the minimum age of a galaxy.
The star-forming galaxies at $z\sim3.3$ have $\rm M_*/SFR=$ 0.25--1.25~Gyr
(average: 0.57~Gyr),
which is closer to the equilibrium timescales
than the Hubble time.
In particular, the galaxies with relatively large gas mass fractions,
$f_{\rm gas}\sim$\ 0.6--0.8, in our sample
tend to have the equilibrium timescales comparable to the minimum ages.
This result may suggest that
normal star-forming galaxies at $z\sim3$ with relatively large gas mass fractions
have not yet reached the equilibrium state
as suggested in \citet{mannucci10}.
In the future, it will be of interest to study how our results
are affected by relaxing the assumptions about galaxies being in equilibrium
and by considering bursty star formation histories \citep{tacchella20}.
\added{Direct measurements of gas outflow and inflow rates would
also be important to further investigate whether
the star-forming galaxies at $z\sim3.3$ are out of equilibrium.
The spatially resolved emission line maps for the individual galaxies
will enable us to search for outflow signatures and
estimate the mass outflow rates \citep[e.g.,][]{genzel11,davies19}.
Furthermore,
a simulation study suggests a correlation between metallicity gradients
and gas accretion rates \citep{collacchioni20}.
We would be able to investigate the gas inflow rates by obtaining
metallicity gradients from the spatially resolved emission line maps.}
\section{Summary} \label{sec:summary}
We conducted ALMA Band-6 observations
of star-forming galaxies at $z\sim3.3$,
which have measurements of their metallicities based on
the rest-frame optical spectroscopy.
Thus we can directly compare the metallicities
with the dust and inferred gas properties from our ALMA observations
for star-forming galaxies at $z\sim3.3$.
We detected the dust continuum emission individually from six out of 12 galaxies.
We stacked the ALMA maps of the five ALMA non-detected sources with
$\rm log(M_*/M_\odot)=$\ 10.0--10.4 and obtained a $\sim5\sigma$ detection of this sample.
We estimated dust masses from SED fitting with {\sc magphys}
including the $1.3$~mm fluxes from ALMA.
We converted the dust mass to the gas mass
with a relation between the gas-phase metallicity and gas-to-dust mass ratio.
With the estimates of dust mass, gas mass, and the physical conditions of the ionized gas,
we conclude the following:
\begin{itemize}
\item The median value of the dust-to-stellar mass ratios is
$\rm M_{dust}/M_* \sim (3.0 \pm 2.0) \times 10^{-3}$.
The dust-to-stellar mass ratio of the stacked sample is $\sim (1.4\pm0.5) \times 10^{-3}$.
We find no clear trend between the dust-to-stellar mass ratio and gas-phase metallicity.
\item \added{The estimated gas mass fractions and gas depletion timescales are $f_{\rm gas}=$ 0.20--0.75 and $t_{\rm dep}=$ 0.09--1.55~Gyr, respectively.}
The stacked sample shows $f_{\rm gas}=0.38_{-0.09}^{+0.10}$ and $t_{\rm dep}=0.28_{-0.14}^{+0.15}$~Gyr.
The gas mass fractions and gas depletion timescales of the galaxies at $z\sim3.3$ show a wider spread
at a fixed stellar mass as compared to the scaling relations of galaxies on the
main sequence at $z\sim3.3$.
Given that most of our galaxies at $z\sim3.3$ lie within $\pm0.3$~dex of the star-forming main sequence,
the large scatter of the gas mass fraction and depletion timescale may suggest a significant diversity
of these fundamental properties within the so-called main sequence.
\item We find no clear correlation between the gas mass fraction and
the physical conditions of the ionized gas, namely,
gas-phase metallicity and ionization parameter, at $z\sim3.3$.
We may require a larger sample of galaxies covering a wider range of the physical quantities
to confirm whether gas mass fractions correlate with the ionized gas conditions or not.
\item Comparing star-forming galaxies at different redshifts
on the gas mass fraction versus metallicity diagram,
we find that the star-forming galaxies at $z\gtrsim2$
show an offset toward lower metallicities
as compared to the distribution of local star-forming galaxies,
in the sense that star-forming galaxies
at $z\gtrsim2$ appear to be more metal-poor
than the local galaxies with similar gas mass fractions.
\item We find that the distribution of star-forming galaxies at $z\sim3.3$ on the
gas mass fraction versus gas-phase metallicity diagram can be broadly explained by
models assuming higher mass-loading factors in outflows of $\lambda\sim$\ 2.0--2.5
from the gas regulator model of \citet{pengmaiolino14}.
This result supports the idea that star-forming galaxies at higher redshifts have
powerful outflows with higher mass-loading factors.
\item Comparing the equilibrium timescales \citep{pengmaiolino14} and the minimum ages of the galaxies ($\rm M_*/SFR$),
we find that the equilibrium timescale of the relatively gas-rich galaxies ($f_{\rm gas}\sim$~0.7)
is comparable to their minimum ages,
suggesting that they may be out of equilibrium.
\end{itemize}
It remains unclear whether star-forming galaxies at high redshifts follow the same relation
between gas-phase metallicity and gas-to-dust mass ratio as local galaxies
(\citealt{saintonge13,seko16_alma}, but see also \citealt{magdis12,shapley20}).
Observations of independent gas tracers,
such as CO or [{\sc CI}] emission lines, will be required to
investigate the relation between the gas-phase metallicity and gas-to-dust mass ratio at $z>3$.
\added{Another caveat is whether the metallicities derived
from the rest-frame optical emission lines are applicable to
dusty star-forming galaxies at high redshifts.
Metallicity measurements with FIR fine structure lines are required to
investigate this further.}
High-resolution integral-field-unit (IFU) observations
with the {\it James Webb Space Telescope} ({\it JWST})
will enable us to investigate the metallicity gradients within
individual galaxies and to search for outflow signatures within them.
\added{The spatially resolved emission line maps} would be useful
to investigate the effects of gas inflows and outflows more directly.
\acknowledgments
We thank the anonymous referee for careful reading and comments
that improved the clarity of this paper.
TLS would like to thank Ken-ichi Tadaki and Nao Fukagawa for useful comments.
IRS acknowledges support from STFC (ST/T000244/1).
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2018.1.00681.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan),
together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in
cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by
ESO, AUI/NRAO and NAOJ.
Some of the data presented herein were obtained at the W. M. Keck Observatory,
which is operated as a scientific partnership among the California Institute of Technology,
the University of California and the National Aeronautics and Space Administration.
The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role
and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community.
We are most fortunate to have the opportunity to conduct observations from this mountain.
Data analyses were in part carried out on the open use data
analysis computer system at the Astronomy Data Center, ADC, of
the National Astronomical Observatory of Japan (NAOJ).
\vspace{5mm}
\facilities{ALMA, Keck:I (MOSFIRE)}
\software{astropy \citep{astropy:2013,astropy:2018},
{\sc casa} \citep{CASA},
{\sc topcat} \citep{topcat}
}
\section{Introduction}
Reinforcement learning (RL) has found increasingly many real-world applications, ranging from autonomous vehicle coordination \cite{autocar} to controlling robotic swarms \cite{guided}, which can be framed as multi-agent cooperative problems. In these settings, partial observability and/or communication constraints mean that agents act only on local information, leading to learning {\em decentralized policies}, which rely solely on each agent's local state-action history. Although single-agent RL algorithms can be applied to these environments \cite{iql}, this approach is not inherently scalable, nor does it allow agents to leverage each other's experience or encourage structured exploration in the cooperative multi-agent setting.
In Multi-Agent Reinforcement Learning (MARL), the size of the joint state-action space grows exponentially with the number of agents, which makes estimation complex. Traditional reinforcement learning approaches which model the combined joint action and state space are often impractical, and do not appropriately reflect constraints which may be present in the evaluation environment, such as partial observability and/or communication constraints requiring {\em decentralized policies}. Naively leveraging the experience of other agents in a Q-learning setup does not greatly improve performance, as off-policy samples are not corrected for the underlying policy distribution \cite{seac}, whereas using maximum entropy RL on an individual agent does not guarantee diverse exploration across all agents, requiring a network to learn the appropriate credit assignment \cite{lica}. Structured exploration has also been approached by explicitly learning a latent policy conditioned on global state information \cite{maven}. MARQ aims to address these limitations in the MARL context without explicit cooperative structures, by applying regularization constraints which correct the bias in off-policy, out-of-distribution agent experiences and promote diverse exploration.
To achieve this, rather than attempting to model and learn cooperative structures \cite{qmix,lica,maddpg,maven}, we instead treat the MARL challenge as an independent reinforcement learning problem \cite{iql} and examine different approaches to leveraging information from other agents in the MARL setting. Our approach encourages more structured exploration by regularizing the gradient updates across different agents. In effect, this allows agents to search in orthogonal directions to maximize the coverage of state-action combinations, or to use their collective experience to determine the most promising approaches. In our experiments we demonstrate the viability of these approaches, which can lead to greater returns in fewer iterations.
We demonstrate these ideas in a Q-learning framework, which we call {\em Multi-Agent Regularized Q-learning} (MARQ), which builds on independent learning \cite{iql} and leverages a distributional reinforcement learning approach \cite{qrdqn}. Using a distributional approach allows the distribution of returns to be modeled explicitly instead of estimating only the mean. In estimating the distribution, we can apply ideas including importance sampling and divergence measures to control how experiences are collected and the direction in which policies gravitate, enabling a more structured exploration approach that leads to faster learning.
Although previous work leverages shared experiences in the actor-critic framework and regularized policies in the off-policy setting, these techniques are not directly applicable in multi-agent settings, where additional considerations in the joint state-action space are needed. Our primary contribution is introducing {\em Multi-Agent Regularized Q-learning} (MARQ), and can be summarized as follows.
\begin{itemize}
\item MARQ regularizes the Q-values during training to encourage more structured exploration compared with existing Q-learning approaches, leading to more efficient training.
\item We empirically demonstrate our approach, which builds on top of independent learning approaches, across a range of benchmark MARL environments. MARQ outperforms other Q-learning and independent learning approaches with minimal overhead.
\end{itemize}
\section{Related Work}
\subsection*{Structured Exploration}
One area of interest is promoting diversification when exploring the environment. This has been approached by attempting to discover new states in an unsupervised manner \cite{diayn}, which trains on diversity of rewards, and by encouraging action diversification, considering the joint state and action through mutual information in the MARL setting \cite{maven}. Another way is to modify the rewards in order to improve the tradeoff between exploitation and exploration \cite{vime,curious}.
\subsection*{Entropy Regularization}
A related approach to structured exploration is entropy regularization. A common technique to improve policy optimization is the use of stochastic actions, achieved through adding an entropy regularization penalty. Existing methods are typically used in single-agent RL and modify the training objective \cite{sac,softql,energyentropy}; other approaches augment the reward itself \cite{mrl}. In the multi-agent setting it has been shown that an adaptive approach to regularization is required in order to keep exploration consistent across all agents and generate optimal joint actions \cite{lica}. In addition to entropy regularization, we give explicit consideration to encouraging diversity across the agents.
\subsection*{Distributional Reinforcement Learning}
Recent approaches rely on distributional reinforcement learning in order to estimate the distribution of the value function in Q-learning \cite{qrdqn}. This has allowed for implicit regularization of the policy through augmenting the reward \cite{mrl}, and provides off-policy learning guarantees when leveraging external, offline experience-gathering techniques \cite{cql}. We offer extensions to the distributional reinforcement learning approach in the multi-agent setting, with cooperative considerations.
\subsection*{Learning from Experience}
Offline reinforcement learning \cite{cql} is a promising field which can handle the distribution shifts arising when experiences are sampled from an alternative policy, which is not possible in standard single-agent RL \cite{offlinetutorial}, and in which distributional RL has shown promise. In the multi-agent setting, shared experience for actor-critic models has been tackled by correcting off-policy sampling through importance sampling \cite{seac}. Our approach leverages these corrections in the multi-agent Q-learning space, which was previously explored with limited impact \cite{seac}.
\section{Preliminaries}
\subsection*{Markov Games}
The goal in reinforcement learning is to maximize the expected cumulative reward in a Markov decision process, through optimizing a learned policy. We define the tuple $\langle \mathcal{N}, G, \{S_{\mathsf{a}_i} \}_{\mathsf{a}_i \in \mathcal{N}}, \mathcal{A}, T, r, \gamma \rangle$ to represent a partially observable Markov game for agents $\mathsf{a}_i \in \mathcal{N} = \{\mathsf{a}_1, \dots, \mathsf{a}_N\}$ \cite{markovgame}, with global state space $G$ and joint action space $\mathcal{A} = A^1 \times \dots \times A^N$. Each agent $\mathsf{a}_i$ only perceives $s_{\mathsf{a}_i} \in S_{\mathsf{a}_i}$, which depends on the current global state; $T(s^\prime_{\mathsf{a}_i} \vert s_{\mathsf{a}_i}, a_{\mathsf{a}_i})$ and $r(s_{\mathsf{a}_i}, a_{\mathsf{a}_i})$ represent the dynamics and reward function, with $\gamma \in (0, 1)$ being the discount factor. Let $\langle s, a, r, s^\prime \rangle \sim \mathcal{D}_{\mathsf{a}_i}$ represent the dataset of experiences stored as transition tuples for agent $\mathsf{a}_i$, with $s^\prime$ representing the state observed after taking action $a$ in state $s$ and receiving reward $r$. In this paper we are particularly interested in multi-agent algorithms which only use each agent's local observation during training and execution, meaning no global state information is used.
\subsection*{Deep Q-Learning}
Q-learning is an off-policy RL algorithm based on dynamic programming, which uses the action-value function $Q(s, a)$. In deep learning approaches, this Q-function is parameterized by a deep neural network \cite{dqn,qrdqn}. Q-learning methods apply the Bellman operator $\mathcal{B}^*Q(s, a) = r(s, a) + \gamma \mathbb{E}_{s^\prime \sim P(s^\prime \vert s, a)}[ \max_{a^\prime} Q(s^\prime, a^\prime)]$. Given the tuple $\langle s, a, r, s^\prime \rangle \sim \mathcal{D}_{\mathsf{a}_i}$, the iterative update for training a particular agent $\mathsf{a}_i$ with tradeoff factor $\alpha$ is:
\begin{equation}\label{eq:dqn}
{{\hat{Q}}^{k+1}} \leftarrow \argmin_Q \alpha \cdot \mathbb{E}_{s \sim {\mathcal{D}}, a \sim \pi(a \vert s)}[Q(s, a)]+ \frac{1}{2} \mathbb{E}_{s,a,r,s^\prime \sim \mathcal{D}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}}(s, a) )^2\right]
\end{equation}
where $\hat{Q}$ is typically a {\em target network}, whose parameters are kept constant for a number of iterations and periodically copied from $Q$.
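As a concrete, if simplified, illustration of this update rule, the following tabular NumPy sketch applies repeated Bellman backups against a periodically copied target table. The function name, the toy MDP, and the copying interval are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dqn_update(Q, Q_target, batch, lr=0.1, gamma=0.99):
    """Tabular analogue of the Bellman error term: move Q(s, a)
    towards r + gamma * max_a' Q_target(s', a')."""
    for (s, a, r, s_next) in batch:
        target = r + gamma * np.max(Q_target[s_next])  # B* applied to the target table
        Q[s, a] += lr * (target - Q[s, a])             # squared-loss gradient step
    return Q

# toy example: 2 states, 2 actions, a fixed replay batch
Q = np.zeros((2, 2))
Q_target = np.zeros((2, 2))
batch = [(0, 1, 1.0, 1), (1, 0, 0.0, 0)]
for k in range(200):
    Q = dqn_update(Q, Q_target, batch)
    if k % 10 == 0:                # periodically copy parameters into the target
        Q_target = Q.copy()
```

The rewarded transition $(s{=}0, a{=}1)$ accumulates value while the unseen pair $(0, 0)$ stays at its initialization, mirroring how the target network stabilizes the bootstrapped regression.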
\subsection*{Conservative Q-Learning}
\label{cqlr}
Conservative Q-Learning (CQL) extends the approach taken in DQN by leveraging information from the empirical behavioral distribution. Let $\hat{\pi}_{\mathcal{D}}(a \vert s) := \frac{\sum_{s, a \in \mathcal{D}} \mathbf{1}[s=s, a=a]}{\sum_{s \in \mathcal{D}} \mathbf{1}[s=s]}$ denote the empirical behavioral distribution at the state $s$. Then, adding an additional Q-value {\em maximization} term (in red) under the data distribution $\hat{\pi}_{\mathcal{D}}$ yields the iterative update for CQL:
\begin{align}
{{\hat{Q}}^{k+1}} \leftarrow \argmin_Q \alpha \cdot \Big(\mathbb{E}_{s \sim {\mathcal{D}}, a \sim \pi(a \vert s)}[Q(s, a)] {\color{red}- \mathbb{E}_{s \sim \mathcal{D}, a \sim \hat{\pi}(a \vert s)} [Q(s, a)]}\Big) \nonumber \\
\label{eq:cql}
+ \frac{1}{2} \mathbb{E}_{s,a,r,s^\prime \sim \mathcal{D}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}}(s, a) )^2\right]
\end{align}
This approach provides a lower bound on the value of the current policy, which provides theoretical improvement guarantees. In our work, we are interested in extensions of this approach through choices of the regularizer $\mathcal{R}(\pi)$ in the multi-agent setting, where additional considerations are made with respect to leveraging other agents' experiences or inducing a more structured exploration strategy. This is described in the form:
\begin{align}
{{\hat{Q}}^{k+1}} \leftarrow \argmin_Q \alpha \cdot \Big(\mathbb{E}_{s \sim {\mathcal{D}}, a \sim \pi(a \vert s)}[Q(s, a)] {- \mathbb{E}_{s \sim \mathcal{D}, a \sim \hat{\pi}(a \vert s)} [Q(s, a)]}\Big) \nonumber \\
\label{eq:cqlr}
+ \frac{1}{2} \mathbb{E}_{s,a,r,s^\prime \sim \mathcal{D}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}}(s, a) )^2 \right] + {\color{red} \mathcal{R}(\pi)}
\end{align}
CQL presents several variations of the regularizer $\mathcal{R}$ with closed-form solutions, including an entropy approach $\mathcal{R}(\pi) = H(\pi)$ and a KL-divergence approach against a prior distribution $\rho$, in the form $\mathcal{R}(\pi) = -D_{\text{KL}}(\pi \Vert \rho)$. These approaches have been empirically demonstrated to be more stable in high-dimensional action spaces \cite{cql}.
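With the entropy regularizer $\mathcal{R}(\pi) = H(\pi)$, the conservative term in CQL reduces to a logsumexp over actions minus the Q-value of the data action. A minimal sketch of this penalty for a batch of discrete Q-values (the helper name is ours, not from any released code):

```python
import numpy as np

def cql_penalty(q_values, data_actions):
    """Conservative penalty: soft maximum of Q over actions (the
    entropy-regularized closed form) minus Q at the data action.
    q_values: array (batch, num_actions); data_actions: array (batch,)."""
    lse = np.log(np.sum(np.exp(q_values), axis=1))            # logsumexp_a Q(s, a)
    q_data = q_values[np.arange(len(q_values)), data_actions] # Q(s, a_data)
    return float(np.mean(lse - q_data))                       # always >= 0
```

Because $\operatorname{logsumexp}_a Q \ge \max_a Q \ge Q(s, a_{\text{data}})$, the penalty is non-negative and shrinks only when the data action already dominates, which is what pushes Q-values down on out-of-distribution actions.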
\section{Multi-Agent Regularized Q-Learning}
In this section, we develop the {\em Multi-Agent Regularized Q-learning} (MARQ) approach, which employs independent Q-learning with allowances for shared experiences and structured exploration to promote cooperation. In the MARL setting, independent Q-learning operates by having each agent learn its own policy, conditioned on its own partial observation \cite{iql}. We consider a {\em Conservative Q-Learning} framework leveraging distributional Q-learning for discrete actions in the form of Quantile Regression DQN, in which the distribution of returns is explicitly modelled rather than only the mean \cite{qrdqn}. The advantages of our approach are that it does not require centralized training frameworks such as QMIX and that it scales to an arbitrary number of agents, leading to faster training iterations. This is achieved by accounting for the distributional shift between the dataset and the learned policy, which is of particular importance when considering the large joint state space.
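The quantile-regression component can be sketched as a generic QR-DQN-style quantile Huber loss between $N$ predicted quantiles and a sample of target returns; the function name and constants below are our own assumptions, not the authors' code:

```python
import numpy as np

def quantile_huber_loss(theta, targets, kappa=1.0):
    """theta: (N,) predicted quantile values; targets: (M,) target returns.
    Returns the asymmetric quantile Huber loss used to fit a return
    distribution rather than its mean."""
    n = len(theta)
    taus = (np.arange(n) + 0.5) / n                 # quantile midpoints
    u = targets[None, :] - theta[:, None]           # pairwise TD errors (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    weight = np.abs(taus[:, None] - (u < 0))        # asymmetric quantile weight
    return float(np.mean(np.sum(weight * huber, axis=0)))
```

The asymmetric weight $|\tau - \mathbf{1}[u < 0]|$ is what makes each output converge to its own quantile of the target distribution instead of the common mean.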
\subsection{Correcting for Shared Experiences}
\label{sharedexp}
One approach to Q-learning in the multi-agent setting is to leverage other agents' experiences in the iterative update of the Q-function. It can be shown that sampling other agents' experiences indiscriminately leads to underestimation of the Q-function.
\begin{proposition}
Suppose we are interested in updating the Q-function for agent $\mathsf{a}_1$ using experiences from agent $\mathsf{a}_2$, with the iterative update based on applying the DQN update of Equation \ref{eq:dqn} to the other agent's experiences:
\begin{equation}\label{qiterate}
{{\hat{Q}}^{k+1}_{\mathsf{a}_1}} \leftarrow \argmin_Q \alpha \cdot \mathbb{E}_{s \sim \mathcal{D}_{\mathsf{a}_2}, a \sim \pi_{\mathsf{a}_1}}[Q(s, a)]+ \frac{1}{2} \mathbb{E}_{s,a \sim \mathcal{D}_{\mathsf{a}_2}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}_{\mathsf{a}_1}}(s, a) )^2\right]
\end{equation}
Then the resulting Q-function $\hat{Q}_{\mathsf{a}_1} := \lim_{k \rightarrow \infty} \hat{Q}^{k}_{\mathsf{a}_1}$ underestimates the Bellman backup, i.e. it lower bounds $\mathcal{B}^*\hat{Q}_{\mathsf{a}_1}$ at all $(s, a)$.
\end{proposition}
\begin{proof}
By setting the derivative of Equation \ref{qiterate} to $0$, we obtain the following expression
\begin{equation}
\forall s, a \in \mathcal{D}_{\mathsf{a}_2},k,\quad {{\hat{Q}}^{k+1}_{\mathsf{a}_1}}(s, a) = \mathcal{B}^*{\hat{Q}^{k}_{\mathsf{a}_1}}(s, a) - \alpha \frac{\pi_{\mathsf{a}_1}(a \vert s)}{\pi_{\mathsf{a}_2}(a \vert s)}
\end{equation}
Since $\pi_{\mathsf{a}_1}(a \vert s) > 0, \pi_{\mathsf{a}_2}(a \vert s) > 0, \alpha > 0 $, at each iteration the Q-value iterate is underestimated, i.e. ${\hat{Q}}^{k+1}_{\mathsf{a}_1}(s, a) \leq \mathcal{B}^*{\hat{Q}^{k}_{\mathsf{a}_1}}(s, a)$. This underestimation demonstrates that naively using experience realised by another agent may not be appropriate without adjustment. This observation extends to all pairwise agent combinations when leveraging different agents' experiences.
\end{proof}
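The zero-derivative condition in the proof can be checked numerically; the toy sketch below (the numbers are purely illustrative) confirms that the iterate always sits strictly below the Bellman backup whenever $\alpha > 0$:

```python
def biased_iterate(bellman_backup, alpha, pi_1, pi_2):
    """Zero-derivative condition from the proof:
    Q^{k+1}(s, a) = B* Q^k(s, a) - alpha * pi_1(a|s) / pi_2(a|s)."""
    return bellman_backup - alpha * pi_1 / pi_2

# illustrative values: backup 1.0, alpha 0.5, pi_1 = 0.3, pi_2 = 0.6
q_next = biased_iterate(1.0, 0.5, 0.3, 0.6)  # 1.0 - 0.5 * 0.5 = 0.75 < 1.0
```

The gap scales with the importance ratio $\pi_{\mathsf{a}_1}/\pi_{\mathsf{a}_2}$, which motivates the ratio-based correction introduced next.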
One possible adjustment is to interpret the correction through the lens of the value loss function: if the value loss incurred when sampling another agent's experience is high, then the sample is out of distribution and requires a higher weight. This provides another interpretation of the {\em Shared Experience Actor-Critic} framework \cite{seac}, whereby the value-loss component acts as a {\em regularization penalty} on the MARL policy. This extends Equation \ref{eq:cqlr} to the multi-agent scenario. We denote by $\hat{Q}^{k+1}_{\mathsf{a}_i,\mathsf{a}_j}$ an update step for agent $\mathsf{a}_i$ using a sample drawn from agent $\mathsf{a}_j$, shown below with the regularizer component in red:
\begin{align}
{\hat{Q}}^{k+1}_{\mathsf{a}_i,\mathsf{a}_j} &\leftarrow \text{ } { \argmin_Q \alpha \cdot \left( \mathbb{E}_{s \sim \mathcal{D}_{\mathsf{a}_j}, a \sim \pi_{\mathsf{a}_i}}[Q(s, a)] - \mathbb{E}_{s \sim \mathcal{D}_{\mathsf{a}_j}, a \sim \hat{\pi}_{\mathsf{a}_j}(a \vert s) }[Q(s, a)] \right) } \nonumber \\
&+ \frac{1}{2} \mathbb{E}_{s,a \sim \mathcal{D}_{\mathsf{a}_j}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}_{\mathsf{a}_i}}(s, a) )^2\right] +{\color{red} \lambda \cdot\mathbb{E}_{s,\cdot,r, s^\prime \sim \mathcal{D}_{\mathsf{a}_j}, a \sim \pi_{\mathsf{a}_i}} \frac{\pi_{\mathsf{a}_i}(a \vert s)}{\pi_{\mathsf{a}_j}(a \vert s)} \lVert V_{\mathsf{a}_i}(s) - y_{\mathsf{a}_i,\mathsf{a}_j}\rVert } \nonumber \\
y_{\mathsf{a}_i,\mathsf{a}_j} &= r + \gamma V_{\mathsf{a}_i}(s^\prime), \text{ where } \langle \cdot, \cdot, \cdot, s^\prime \rangle \sim \mathcal{D}_{\mathsf{a}_j} \label{eq:icql}
\end{align}
where $\lambda$ is the hyperparameter which weights the experience of other agents. The results are largely insensitive to the value of this hyperparameter, hence we set $\lambda = 1$ in our experiments. An ablation study over $\lambda$ is presented in Section \ref{se_sensitivity}. This approach differs from the DQN setup used by \cite{seac}, as our approach uses distributional RL as its independent learning algorithm, which allows this correction to be used and brings the performance in line with the observations made for the {\em Shared Experience Actor-Critic} algorithm.
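A sketch of the importance-weighted value-loss penalty (the red term above) for a tabular toy case; the function name, array shapes, and default constants are assumptions made for illustration, not the paper's implementation:

```python
import numpy as np

def shared_experience_penalty(pi_i, pi_j, V_i, batch, gamma=0.99, lam=1.0):
    """Average of the red term over a batch drawn from agent j's buffer:
    (pi_i(a|s) / pi_j(a|s)) * ||V_i(s) - (r + gamma * V_i(s'))||.
    pi_i, pi_j: (num_states, num_actions) policies; V_i: (num_states,) values."""
    total = 0.0
    for (s, a, r, s_next) in batch:
        ratio = pi_i[s, a] / pi_j[s, a]   # importance-sampling correction
        y = r + gamma * V_i[s_next]       # bootstrapped target from j's data
        total += ratio * abs(V_i[s] - y)  # weighted value loss
    return lam * total / len(batch)
```

Samples that are both off-distribution (large ratio) and poorly predicted (large value loss) dominate the penalty, which is the intended re-weighting of shared experience.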
\subsection{Explicit KL-Divergence Regularization through Adaptive Cross-Entropy}
\label{kldiv}
Rather than using experiences from other agents, another approach is to explicitly regularize using a KL divergence to encourage structured exploration. This was explored as part of Conservative Q-learning \cite{cql} in the single-agent framework and in {\em Learning Implicit Credit Assignment} (LICA) within the multi-agent framework; however, the structured exploration there acted only through the entropy, treating each agent independently. The {\em adaptive entropy regularization} $\mathcal{H}$ is constructed by dynamically controlling the magnitudes of the entropy gradients so that they are inversely proportional to the policy entropy during training:
\begin{align*}
\mathcal{H}_i &:= \alpha H(\pi_{\mathsf{a}_i}) \\
\partial \mathcal{H}_i &:= - \alpha \cdot \frac{\log \pi_{\mathsf{a}_i} + 1}{H(\pi_{\mathsf{a}_i})}
\end{align*}
where $H$ is the entropy, $\pi_{\mathsf{a}_i}$ is the policy of agent $\mathsf{a}_i$, and $\alpha$ is a constant controlling the regularization strength. In this particular instance, as we are not operating in an actor-critic framework, we regularize in the maximum entropy RL framework using distributional RL. This ensures sufficient sustained exploration throughout training. To extend this to our approach, where there is no mixing network or explicit cooperation, we use a cross-entropy penalty to encourage agents to explore diverse trajectories.
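The adaptive entropy expressions above can be sketched for a discrete policy as follows (the helper name and the value of $\alpha$ are arbitrary illustrative choices):

```python
import numpy as np

def adaptive_entropy_grad(pi, alpha=0.1):
    """Entropy H(pi) of a discrete policy and the adaptive gradient
    -alpha * (log pi + 1) / H(pi): the usual entropy gradient rescaled
    by 1/H, so exploration pressure grows as the entropy collapses."""
    H = -np.sum(pi * np.log(pi))            # policy entropy H(pi)
    grad = -alpha * (np.log(pi) + 1.0) / H  # adaptive entropy gradient
    return H, grad
```

A nearly deterministic policy has small $H(\pi)$ and therefore receives larger-magnitude entropy gradients than a uniform one, which is the "inversely proportional" behaviour described above.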
In order to address this, we first recall the definition of cross-entropy, $H(p, q) = H(p) + D_{\text{KL}}(p \Vert q)$. We then construct the Q-learning update based on Equation \ref{eq:cqlr}, with the pairwise adaptive cross-entropy regularization penalty (in red):
\begin{align}
{{\hat{Q}}^{k+1}_{\mathsf{a}_i}} \leftarrow \argmin_Q \alpha \cdot \Big(\mathbb{E}_{s \sim {\mathcal{D}_{\mathsf{a}_i}}, a \sim \pi_{\mathsf{a}_i}(a \vert s)}[Q(s, a)] {- \mathbb{E}_{s \sim \mathcal{D}_{\mathsf{a}_i}, a \sim \hat{\pi}_{\mathsf{a}_i}(a \vert s)} [Q(s, a)]}\Big) \nonumber \\
\label{eq:marqkl}
+ \frac{1}{2} \mathbb{E}_{s,a,r,s^\prime \sim \mathcal{D}_{\mathsf{a}_i}}\left[(Q(s, a) - \mathcal{B}^*{\hat{Q}^{k}}(s, a) )^2\right] + {\color{red} \lambda \sum_{\mathsf{a}_j \in \mathcal{N}}\left( H(\pi_{\mathsf{a}_i}) + D_{\text{KL}} (\pi_{\mathsf{a}_i} \Vert \pi_{\mathsf{a}_j}) \right)}
\end{align}
where $\lambda$ is the regularization strength. Under this framework, in addition to the adaptive regularizer used in LICA, the regularizer leverages pairwise policy information. Since cross-entropy is defined as $H(p, q) = H(p) + D_{\text{KL}}(p \Vert q)$, the non-negativity of the KL divergence on the right-hand side implies that agent policies which behave in the same manner induce a regularization penalty, encouraging diversity, which is offset by the entropy term. Not only does our approach extend the LICA adaptive entropy, it is a natural extension of the MARL variation of the CQL models introduced in Section \ref{cqlr}.
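A minimal sketch of the pairwise regularizer, computing $\sum_{j} \left[ H(\pi_i) + D_{\text{KL}}(\pi_i \Vert \pi_j) \right]$ for discrete policies (illustrative code with assumed names, not the paper's implementation):

```python
import numpy as np

def pairwise_ce_penalty(policies, i, lam=1.0):
    """Sum over agents j of H(pi_i) + KL(pi_i || pi_j), i.e. the
    pairwise cross-entropies H(pi_i, pi_j). `policies` is a list of
    discrete action distributions, one per agent."""
    pi_i = policies[i]
    H_i = -np.sum(pi_i * np.log(pi_i))          # entropy H(pi_i)
    total = 0.0
    for pi_j in policies:
        kl = np.sum(pi_i * np.log(pi_i / pi_j))  # KL(pi_i || pi_j), zero for j = i
        total += H_i + kl                        # = cross-entropy H(pi_i, pi_j)
    return lam * float(total)
```

Note that the sum runs over all agents in $\mathcal{N}$, including $j = i$, for which the KL term vanishes and only the entropy contribution remains.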
\section{Experiments}
\begin{figure}[ht]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.2\textwidth,height=0.2\textwidth]{pettingzoo-envs/pong.png} & \includegraphics[width=0.2\textwidth,height=0.2\textwidth]{pettingzoo-envs/pistonball.png} &
\includegraphics[width=0.2\textwidth,height=0.2\textwidth]{pettingzoo-envs/pursuit.png} &
\includegraphics[width=0.2\textwidth,height=0.2\textwidth]{pettingzoo-envs/waterworld.png}
\end{tabular}
\caption{Left to right: Pong, Pistonball, Pursuit, Waterworld environments.}
\label{fig:pettingzoo}
\end{figure}
In this section we describe the experiments used to empirically justify our algorithm and approach. We use several reinforcement learning benchmarks \cite{terry2020pettingzoo,maddpg,gupta2017cooperative}, shown in Figure \ref{fig:pettingzoo}.
{\em Cooperative Pong} is a multi-agent game of pong from the ``Butterfly'' environments \cite{terry2020pettingzoo}, where the goal is for both agents (paddles) to keep the ball in play. The game is over if the ball goes out of bounds from either the left or right edge of the screen. In this setting the action space is discrete, and the paddles can move either up or down. To make learning more challenging, the right paddle is tiered cake-shaped by default, and the observation of each agent is its own half of the screen. The agents receive a positive, fixed, combined reward upon the successful completion of a game frame; otherwise a negative reward is given and the game ends.
{\em Pistonball} is a physics-based cooperative game from the ``Butterfly'' environments \cite{terry2020pettingzoo}, where the goal is to move a ball to the left wall of the game border by activating vertically moving pistons. Each piston agent's observation is an RGB image of its local surrounding area. The agents have a discrete action space, and can choose to stay still, move down, or move up. Pistons must learn highly coordinated emergent behavior to achieve an optimal policy for the environment. Each agent receives a reward based on the overall movement of the ball, and on how much the ball moved leftwards while in the agent's local observation.
{\em Pursuit} is an environment from the ``SISL'' set of environments \cite{gupta2017cooperative}, where we control pursuer agents, which aim to capture randomly controlled evader agents. Evader agents are only removed from the environment when pursuers fully surround an evader. Agents receive a small reward when they touch an evader, and a large reward when they successfully surround and remove an evader agent. In this scenario, each pursuer agent observes a local RGB grid and operates in a discrete action space.
{\em Waterworld} is a simulation of archea trying to navigate and survive in their environment, from the ``SISL'' set of environments \cite{gupta2017cooperative}. The goal is to control the archea to simultaneously avoid poison and find food. The original environment has a continuous action space indicating horizontal and vertical thrust; we instead modify this to a discrete action space in order to assess Q-learning approaches. Within this environment, the agent's observation is based on range-limited sensors detecting neighboring entities, indicated by the black lines in the right-most image of Figure \ref{fig:pettingzoo}.
{\em Reference} and {\em Spread} are environments consisting of agents and landmarks from the ``MPE'' set of environments \cite{maddpg}. In {\em Reference}, each agent wants to get closer to its target landmark, which is known only by the other agents; agents are rewarded by the distance to their target landmark. In {\em Spread}, agents must learn to cover each landmark without colliding with other agents; they are rewarded based on how far the closest agent is to each landmark, and are penalized if they collide with other agents. For both environments, the observation space consists of the distance to every landmark, and the action space consists of movement actions. In addition, the {\em Reference} environment has additional actions to communicate with other agents.
\subsection{Architecture Overview and Implementation Details}
Across all environments, we consistently preprocessed the inputs as suggested in the PettingZoo benchmarks \cite{terry2020pettingzoo}. We evaluated on 1000 evaluation episode rollouts (separate from the training distribution) every training iteration and used the average score and variance for the plots and tables, shown in Figure \ref{resall}.
\subsection{Hyper-parameters}
\begin{table*}[!htb]
\centering
\begin{tabular}{c|c}
\hline
\textbf{Hyperparameters} & \textbf{Value} \\ \hline
Policy Hidden Sizes & {[}64, 64, 64{]} \\
Mixer Hidden Sizes & {[}32, 32, 32{]} \\
Policy Hidden Activation & ReLU \\
Target Network $\tau$ & 0.005 \\
Learning Rate & 0.0003 \\
Batch Size & 256 \\
Replay Buffer Size & 1000000 \\
Number of pretraining steps & 1000 \\
Steps per Iteration & 1000 \\
Discount & 0.99 \\
Reward Scale & 1 \\ \hline
\end{tabular}
\caption{Hyper-parameters used for MARL experiments}
\label{app:tab:hyper}
\end{table*}
\label{ref:hyper}
The policies use the default hyper-parameters from the official implementations of LICA, QMIX and IQL and their accompanying code based on the pymarl library \cite{qmix,lica,qtran}. SEAC was re-implemented in the rlkit framework with the same hyperparameters proposed in its paper, but using Soft Actor-Critic instead of A2C. Similarly, MADDPG extended the rlkit implementation of DDPG to support the multi-agent setting. In order to handle discrete action spaces, the Gumbel softmax \cite{gumbel_softmax1,gumbel_softmax2} is used, following the approach taken by SEAC \cite{seac}. As in the pymarl library, parameter sharing was used in all models in order to improve the general performance of the agents.
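For reference, a minimal NumPy sketch of Gumbel-softmax sampling used to relax discrete actions; the temperature, seed, and function name are arbitrary choices here, and real implementations backpropagate through the relaxed sample:

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, seed=0):
    """Draw a relaxed (soft) one-hot sample from a categorical
    distribution via the Gumbel-softmax trick."""
    rng = np.random.default_rng(seed)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau                                # perturb and temper
    y = np.exp(y - np.max(y))                             # numerically stable softmax
    return y / y.sum()                                    # soft one-hot over actions
```

As the temperature $\tau \rightarrow 0$ the sample approaches a hard one-hot action, while larger $\tau$ yields smoother, more uniform relaxations.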
\subsection{Results}
We demonstrate that our approach MARQ is comparable to existing state-of-the-art approaches, and performs well against approaches which use explicit cooperation, as shown in Figure \ref{resall}.
\begin{figure}[!htb]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.75\textwidth]{results/line-baseline-low.png} \\
\includegraphics[width=0.75\textwidth]{results/bar-baseline-low.png}
\end{tabular}
\end{center}
\caption{Performance over a variety of benchmark environments. The bar graph reports the mean over the last 10 evaluation steps, where each evaluation point is averaged over 1000 episodes. Error bars represent the standard deviation over these rollouts.}
\label{resall}
\end{figure}
From the plots, we observe that our approach is competitive with state-of-the-art approaches across a range of benchmark environments, as shown in Figure \ref{resall}, and generally completes training in less time than approaches which use a mixer network. There are some performance improvements over actor-critic methods (SEAC), as our approach does not need to train both an actor and a critic network, as shown in Figure \ref{restime}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.95\textwidth]{results/bar-time-low.png}
\end{center}
\caption{Comparison of the relative total time taken in training the different approaches, using IQL as the benchmark.}
\label{restime}
\end{figure}
The increase in training time compared with IQL is explained by the cost of training the distributional RL model in MARQ compared with the vanilla DQN in the IQL implementation.
\begin{figure}[!htb]
\centering
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.75\textwidth]{ablation/line-baseline-low.png} \\
\includegraphics[width=0.75\textwidth]{ablation/bar-baseline-low.png}
\end{tabular}
\end{center}
\caption{Comparison of the sensitivity to the $\lambda$ parameter when correcting for shared experiences, as described in Section \ref{sharedexp}. The bar graph reports the mean over the last 10 evaluation steps, where each evaluation point is averaged over 1000 episodes. Error bars represent the standard deviation over these rollouts.}
\label{plot:abl}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.75\textwidth]{ablation-kl/line-baseline-low.png} \\
\includegraphics[width=0.75\textwidth]{ablation-kl/bar-baseline-low.png}
\end{tabular}
\end{center}
\caption{Comparison of the sensitivity to the $\lambda$ parameter for the KL-divergence variation, as described in Section \ref{kldiv}. The bar graph reports the mean over the last 10 evaluation steps, where each evaluation point is averaged over 1000 episodes. Error bars represent the standard deviation over these rollouts.}
\label{plot:ablkl}
\end{figure}
\subsection{Ablation}
\label{se_sensitivity}
We compare our results under different hyperparameters for the regularizer. We demonstrate empirically, for both the shared-experience and KL-divergence regularizations, that our algorithm is not sensitive to the weights. As expected, when $\lambda \rightarrow 0$, the model approaches the unregularized variation, which leads to decreased performance. This is shown for the shared-experience variation in Figure \ref{plot:abl} and for the KL-divergence variation in Figure \ref{plot:ablkl}. In conclusion, MARQ is not sensitive to the choice of hyperparameter controlling the level of shared experience or the divergence weight, and yields a stable policy.
\section{Conclusion}
We have demonstrated the efficacy of our approach through its speed of convergence and its overall performance against other state-of-the-art algorithms, and have demonstrated the algorithm's ability to generalise. Furthermore, we have justified the construction of our algorithm through ablation studies, revealing the stability of our approach.
Future work could include an automated way of tuning the amount of sharing depending on the type of multi-agent environment. For example, training MARL algorithms in a competitive multi-agent environment with the implicit importance-sampling correction may perform worse than the orthogonal search strategy of the adaptive cross-entropy, which promotes more diverse behavior during learning. The conservative Q-learning bound assists in understanding and optimising agents when the joint action space is sparse from a global perspective.
\clearpage
\section{Introduction}
Taking the geometrical complexity into account is one of the major issues in computational mechanics. Although some spectacular advances in mesh generation have been achieved in recent years, constructing and using meshes that fit the geometry of, for example, human organs may still be prohibitively expensive in realistic 3D configurations. Moreover, when the geometry changes in time or over the iterations of an optimization algorithm, the mesh should be frequently adapted, either by complete remeshing (expensive) or by moving the nodes (which may degrade the mesh quality, impacting the accuracy and the stability of computations).
Geometrically unfitted methods, i.e. the numerical methods using the computational meshes that do not fit the boundary of the domain, and/or the internal interfaces, have been widely investigated in the computational mechanics for decades. Their variants come under the name of Immersed Boundary \cite{IBMrev} or Fictitious Domain \cite{glowinski} methods. However, these classical approaches suffer from poor accuracy because of their rudimentary (but easy to implement) treatment of the boundary conditions, cf. \cite{girault}. For example, in the case of the linear elasticity equations, these methods start by extending the displacement $\mathbf{u}$, from the physical domain $\Omega$ to a fictitious domain (typically a rectangular box) $\mathcal{O}\supset\Omega$ assuming that $\mathbf{u}$ still solves the same governing equations on $\mathcal{O}$ as on $\Omega$. This creates an artificial singularity on the boundary of $\Omega$ (a jump in the normal derivative) so that the resulting numerical approximation is, at best, $\sqrt{h}$-accurate in the energy norm with whatever finite elements (from now on, $h$ denotes the mesh size).
\begin{figure}[t]
\centering
\includegraphics[height=35mm]{fictif_clean_phi.png}
\qquad
\includegraphics[height=35mm]{cerveau.pdf}
\caption{Left: Meshes and notations for a 2D domain $\Omega=\{\phi<0\}$; the computational mesh $\mathcal{T}_{h}$ is obtained from a structured background mesh and is represented by both white and yellow triangles forming the domain $\Omega_h$; the yellow triangles constitute the submesh $\mathcal{T}_{h}^\Gamma$ occupying the domain $\Omega_h^\Gamma$. Right: a more involved example of an active mesh in 3D that can be used in $\phi$-FEM; a hexahedral mesh covering a brain geometry. }
\label{fig}
\end{figure}
The last two decades have seen the arrival of more accurate geometrically unfitted methods such as {XFEM} \cite{moes,haslin}, {CutFEM} \cite{burman1,burman2,cutfemrev,cutfemelasticity} and the Shifted Boundary Method (SBM) \cite{SBM,SBMelasticity}. We are citing here only the methods based on the finite element (FE) approach; the list would be much longer if the methods based on finite differences were included. In the case of XFEM/CutFEM, the optimal accuracy, i.e. the same convergence rates as those of the standard FEM on a geometrically fitted mesh, is achieved at the price of a considerable sophistication in the implementation of boundary conditions. The idea is to introduce the unfitted mesh (known as the \textit{active mesh}) starting from a simple background mesh and getting rid of the cells lying entirely outside the physical domain, as illustrated in Fig.~\ref{fig}. The finite elements are then set up on the active mesh, the variational formulation is imposed on the \textit{physical} domain, and an appropriate stabilization is added. In practice, one should thus compute the integrals on the actual boundary and on the parts of the active mesh cells cut by the boundary (the cut cells). To this end, one should typically construct a boundary fitted mesh, now only locally near the boundary and only for numerical integration purposes, but the generation of a non-trivial mesh is still not completely avoided.
On the other hand, non-trivial integration is completely absent from SBM. This method again introduces an active mesh as a submesh of the background mesh (unlike CutFEM, the active mesh here contains only the cells inside $\Omega$) and then imposes the approximate boundary conditions on the boundary of the active mesh by a Taylor expansion around the actual boundary. The absence of non-standard numerical integration is an important practical advantage of SBM over XFEM/CutFEM. We note however that, to the best of our knowledge, SBM is readily available only for the lowest order FE. Moreover, in the case of Neumann boundary conditions, the original version of SBM \cite{SBM} needs an extrapolation of the second derivatives of the solution that makes its implementation rather tricky. This difficulty can be alleviated if the problem is recast in a mixed form introducing secondary variables for the gradient \cite{mixedSBM}.
In this chapter, we present yet another unfitted FE-based method, first introduced in \cite{phifem,phiFEM2} and baptised $\phi$-FEM to emphasize the prominent role played in it by the level set (LS) function, traditionally denoted by $\phi$. From now on, we suppose that the physical domain is characterized by a given LS function:\footnote{In some settings presented further, the level set $\phi$ will describe an interior interface inside $\Omega$ rather than the geometry of $\Omega$ itself.}
\begin{equation}\label{Omphi}
\Omega=\{\phi<0\}\,.
\end{equation}
Similarly to CutFEM/XFEM/SBM, we suppose that $\Omega$ is embedded into a simple background mesh and we introduce the \textit{active} computational mesh $\mathcal{T}_{h}$ as in CutFEM, cf. Fig.~\ref{fig}. However, unlike CutFEM, we abandon the variational formulation on $\Omega$. We rather introduce a non-standard formulation on the extended domain $\Omega_h$ (slightly larger than $\Omega$) occupied by the active mesh $\mathcal{T}_{h}$. The general procedure is as follows:
\begin{itemize}
\item Extend the governing equations from $\Omega$ to $\Omega_h$ and write down a formal variational formulation on $\Omega_h$ without taking into account the boundary conditions on $\partial\Omega$.
\item Impose the boundary conditions using appropriate ansatz or additional variables, explicitly involving the level set $\phi$ which provides the link to the actual boundary. For instance, the homogeneous Dirichlet boundary conditions ($\mbf{u}=0$ on $\partial\Omega$) can be imposed by the ansatz $\mbf{u}=\phi\mbf{w}$ thus reformulating the problem in terms of the new unknown $\mbf{w}$ (modifications for non-homogeneous conditions, mixed boundary conditions and other settings are introduced further in the text).
\item Add appropriate stabilization, including the ghost penalty \cite{ghost} as in CutFEM plus a least square imposition of the governing equation on the mesh cells near the boundary, to guarantee coerciveness/stability on the discrete level.
\end{itemize}
This approach allows us to achieve the optimal accuracy using classical FE spaces of any order and the usual numerical integration: all the integrals in $\phi$-FEM can be computed by standard quadrature rules on entire mesh cells and on entire boundary facets; no integration on cut cells or on the actual boundary is needed.
This is the principal advantage of $\phi$-FEM over CutFEM/XFEM. Moreover, we can cite the following features of $\phi$-FEM which distinguish it from both CutFEM/XFEM and SBM:
\begin{itemize}
\item FE of any order can be straightforwardly used in $\phi$-FEM. The geometry is naturally taken into account with the needed optimal accuracy: it suffices to approximate the LS function $\phi$ by piecewise polynomials of the same degree as that used for the primal unknown. This should be contrasted to CutFEM where a special additional treatment is needed if one uses FEM of order $\ge 2$. Indeed, a piecewise linear representation of the boundary is not sufficient in this case. One needs either a special implementation of the isoparametric method \cite{lehren} or a local correction by Taylor expansions \cite{boiveau}. The extension to higher order FE is not trivial for SBM either.
\item Contrary to SBM, $\phi$-FEM is based on a purely variational formulation so that the existing standard FEM libraries suffice to implement it. The geometry of the domain comes into the formulation only through the level set $\phi$. We emphasize that $\phi$ is not necessarily the signed distance to the boundary of $\Omega$. It is sufficient to give to the method any $\phi$ satisfying (\ref{Omphi}) which is the minimal imaginable geometrical input. This can be contrasted with SBM which assumes that the distance to the actual boundary in the normal direction is known on all the boundary facets of the active mesh.
\end{itemize}
Moreover, $\phi$-FEM is designed so that the matrices of the problems on the discrete level are reasonably conditioned, i.e. their condition numbers are of the same order as those of a standard fitting FEM on a mesh of comparable size. $\phi$-FEM shares this feature with both CutFEM/XFEM and SBM.
Up to now, the $\phi$-FEM approach has been proposed, tested and substantiated mathematically only in the simplest settings: the Poisson equation with Dirichlet boundary conditions \cite{phifem}, or with Neumann/Robin boundary conditions \cite{phiFEM2}. The goal of the present chapter is to demonstrate its applicability to some more sophisticated governing equations arising in computational mechanics. In Section \ref{secElast}, we adapt $\phi$-FEM to the linear elasticity equations accompanied by either pure Dirichlet boundary conditions, or by mixed conditions (both Dirichlet and Neumann on parts of the boundary). In Section \ref{sectInterface}, we consider the interface problem (elasticity with material coefficients abruptly changing over an internal interface). Section \ref{sectFracture} is devoted to the treatment of internal cracks. Finally, our method is adapted to the heat equation in Section \ref{sectHeat}. In all these settings, we start by deriving an appropriate variant of $\phi$-FEM and then illustrate it by numerical tests on manufactured solutions. We also compare the accuracy and efficiency of $\phi$-FEM with those of the standard fitted FEM on meshes of similar size, revealing the substantial gains that can be achieved by $\phi$-FEM in both accuracy and computational time.
All the codes used in the present work have been implemented thanks to the open libraries \texttt{fenics} \cite{fenics} and \texttt{multiphenics} \cite{multiphenics}. They are available at the link\\ \url{https://github.com/michelduprez/phi-FEM-an-efficient-simulation-tool-using-simple-meshes-for-problems-in-structure-mechanics.git}
\section{Linear elasticity}\label{secElast}
In this section, we consider the static linear elasticity for homogeneous and isotropic materials. The governing equation for the displacement $\boldsymbol u$ is thus
\begin{equation}\label{eq:elast}
\Div{\boldsymbol\sigma}(\mbf{u})+{\boldsymbol f}=0,
\end{equation}
where the stress $\boldsymbol\sigma(\mbf{u})$ is given by
$${\boldsymbol {\sigma }}(\mbf{u})=2\mu {\boldsymbol {\varepsilon }}(\mbf{u})+\lambda (\Div \mbf{u})I,
$$
$\boldsymbol{\varepsilon}(\mbf{u})=\frac{1}{2}(\nabla{\boldsymbol u} + \nabla{\boldsymbol u}^T)$ is the strain tensor, and Lamé parameters $\lambda,\mu$ are defined via the Young modulus $E$ and the Poisson coefficient $\nu$ by
\begin{equation}\label{muEnu}
\mu = \dfrac{E}{2(1+\nu)}\mbox{ and }\lambda=\dfrac{E\nu}{(1+\nu)(1-2\nu)}\,.
\end{equation}
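For concreteness, the conversion (\ref{muEnu}) can be sketched in a few lines of Python (our own illustration, not part of the released code); applied to the values $E=2$, $\nu=0.3$ used in the test cases below, it gives $\mu\approx 0.769$ and $\lambda\approx 1.154$:

```python
def lame_parameters(E, nu):
    """Convert Young's modulus E and Poisson ratio nu
    into the Lame parameters (mu, lambda)."""
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return mu, lam

# parameter values used in the numerical tests of this chapter
mu, lam = lame_parameters(2.0, 0.3)
```

Note that the formula for $\lambda$ degenerates as $\nu\to 1/2$ (the incompressible limit), which is the usual caveat for this parametrization.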
Equation (\ref{eq:elast}) is posed in a domain $\Omega$, which can be two or three dimensional, and should be accompanied with Dirichlet and Neumann boundary conditions on $\Gamma=\partial\Omega$. We assume that $\Gamma$ is decomposed into two disjoint parts, $\Gamma=\Gamma_D\cup\Gamma_N$ with $\Gamma_D\neq \varnothing$, and
\begin{align}
\mbf{u}&=\mbf{u}^g \text{ on } \Gamma_D, \label{bcGammaD} \\
\mbf{\sigma}(\mbf{u})\mbf{n}&=\mbf{g} \ \text{ on } \Gamma_N, \label{bcGammaN}
\end{align}
with the given displacement $\mbf{u}^g$ on $\Gamma_D$ and the given force $\mbf{g}$ on $\Gamma_N$.
Let us first recall the weak formulation of this problem (to be compared with
forthcoming $\phi$-FEM formulations): find the vector field $\mbf{u}$ on $\Omega$ s.t. $\mbf{u}
|_{\Gamma_D} =\mbf{u}^g $ and
\begin{equation}\label{weakSimple}
\int_{\Omega} \mbf{\sigma} (\mbf{u}) : \nabla\mbf{v}
= \int_{\Omega} \mbf{f} \cdot \mbf{v}+
\int_{\Gamma_N} \mbf{g} \cdot \mbf{v}, \quad \forall
\mbf{v} \text{ on }\Omega \text{ such that } \mbf{v} |_{\Gamma_D} = 0 .
\end{equation}
This is obtained by multiplying the equation by a test function
$\mbf{v}$, integrating over $\Omega$ and taking into account the boundary
conditions. Formulation (\ref{weakSimple}) is routinely used to construct conforming FE methods, which necessitate a mesh that fits the domain $\Omega$ in order to approximate the integrals on $\Omega$ and $\Gamma_N$ and to impose $\mbf{u} =\mbf{u}^g $ on ${\Gamma_D}$.
We now consider the situation where a fitting mesh of $\Omega$ is not available. We rather assume that $\Omega$ is inscribed in a box ${\mathcal{O}}$ which is covered by a simple background mesh $\mathcal{T}_{h}^{\mathcal{O}}$. We further introduce the computational mesh $\mathcal{T}_{h}$ (also referred to as the active mesh) by getting rid of the cells lying entirely outside $\Omega$. In practice, $\Omega$ is given by the level-set function $\phi$: $\Omega=\{\phi<0\}$. Usually, the level set is known only approximately. Accordingly, we assume that we are given a FE function $\phi_h$, i.e. a piecewise polynomial function on mesh $\mathcal{T}_{h}^{\mathcal O}$, which approximates $\phi$ sufficiently well. The selection of the mesh cells forming the active mesh is done on the basis of $\phi_h$ rather than $\phi$:
\begin{equation}\label{Th}
\mathcal{T}_{h}:=\{T\in \mathcal{T}_{h}^{\mathcal{O}}:T\cap \{\phi_h<0\}\neq \varnothing\}\,.
\end{equation}
The domain occupied by $\mathcal{T}_{h}$ is denoted by $\Omega_h$, \textit{i.e.}
${\Omega_h} = (\cup_{T \in \mathcal{T}_{h}}T)^o$. In some of our methods, we shall also need a submesh of $\mathcal{T}_{h}$, referred to as $\mathcal{T}_{h}^\Gamma$, consisting of the cells intersected by the curve (surface) $\{\phi_h=0\}$ approximating $\Gamma$:
\begin{equation}\label{ThGam}
\mathcal{T}_{h}^\Gamma:=\{T\in \mathcal{T}_{h}^{\mathcal{O}}:T\cap \{\phi_h=0\}\neq \varnothing\}\,.
\end{equation}
The domain covered by mesh $\mathcal{T}_{h}^\Gamma$ will be denoted by $\Omega_h^\Gamma$, cf. Fig.~\ref{fig}.
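On the discrete level, the selections (\ref{Th}) and (\ref{ThGam}) amount to simple sign tests on the nodal values of $\phi_h$. The following pure-Python sketch assumes a piecewise linear $\phi_h$, so that the sign pattern on a cell is determined by its vertex values, and ignores the degenerate case where $\phi_h$ vanishes at a vertex without changing sign; the function and variable names are ours, not those of the released code:

```python
def classify_cells(cells, phi_vals):
    """Split a background mesh into the active mesh T_h and the
    boundary submesh T_h^Gamma, based on vertex values of phi_h.

    cells    : list of vertex-index tuples (one tuple per cell)
    phi_vals : list of phi_h values at the mesh vertices
    """
    active, boundary = [], []
    for cell in cells:
        vals = [phi_vals[v] for v in cell]
        if min(vals) < 0:              # cell meets {phi_h < 0}: keep it in T_h
            active.append(cell)
            if max(vals) >= 0:         # cell also meets {phi_h >= 0}:
                boundary.append(cell)  # it is cut by {phi_h = 0}, so in T_h^Gamma
    return active, boundary
```

This is the "simple criterion" referred to in the timing discussion below: no geometric intersection computations are needed, only sign checks.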
The starting point of all variants of $\phi$-FEM is a variational formulation of problem (\ref{eq:elast}) extended to $\Omega_h$, in which we do not impose any boundary conditions since they are lacking on $\partial\Omega_h$. We thus assume that the right-hand side $\mbf{f}$ is given on the whole $\Omega_h$ rather than on $\Omega$ alone, and suppose moreover that $\mbf{u}$ can be extended from $\Omega$ to $\Omega_h$ as the solution to the governing equation (\ref{eq:elast}), now posed on $\Omega_h$ instead of $\Omega$. In the usual manner, we then take any test function
$\mbf{v}$ on $\Omega_h$, multiply the governing equation by $\mbf{v}$ and
integrate it over $\Omega_h$. This gives the following formulation: find a vector field $\mbf{u}$ on $\Omega_h$ such that
\begin{equation}\label{weakExt}
\int_{\Omega_h} \mbf{\sigma} (\mbf{u}) : \nabla\mbf{v}
- \int_{\partial\Omega_h} \mbf{\sigma} (\mbf{u}) \mbf{n} \cdot \mbf{v}
= \int_{\Omega_h} \mbf{f} \cdot \mbf{v}, \quad \forall \mbf{v} \text{ on }\Omega_h
\end{equation}
We emphasize that this formulation is fundamentally different from the standard formulation (\ref{weakSimple}). First of all, no boundary conditions are incorporated in (\ref{weakExt}) so that we cannot expect it to admit a unique solution. Furthermore, if we somehow add the boundary conditions on $\partial\Omega$ to (\ref{weakExt}), which we shall do indeed when constructing our $\phi$-FEM variants, the resulting formulation will still be ill-posed, meaning that its solution (on the continuous level) either does not exist, or is not unique. However, we shall be able to turn these problems into well-defined numerical schemes by adding an appropriate stabilization on the discrete level.
\subsection{Dirichlet conditions}
Let us first consider the case of pure Dirichlet conditions: $\Gamma=\Gamma_D$.
On the continuous level, we thus want to impose $\mbf{u}=\mbf{u}^g$ on $\Gamma = \Gamma_D = \{ \phi = 0 \}$ on top of the general formulation (\ref{weakExt}) of the problem on $\Omega_h$. We consider here two options to achieve this: 1) \textbf{direct} Dirichlet $\phi$-FEM, as proposed in \cite{phifem}, introducing a new unknown $\mbf{w}$ and redefining $\mbf{u}$ through the product $\phi\mbf{w}$ which automatically vanishes on $\Gamma$; 2) \textbf{dual} Dirichlet $\phi$-FEM, inspired by \cite{phiFEM2}, keeping the original unknown $\mbf{u}$ and imposing $\mbf{u}=\mbf{u}^g$ on $\Gamma$ with the aid of an auxiliary variable $\mbf{p}$ in a least-square manner. In more detail, our two approaches can be described as follows:
\begin{itemize}
\item \textbf{Direct Dirichlet $\phi$-FEM} \textit{(on continuous level)}. Supposing that $\mbf{u}^g$ is actually given on the whole $\Omega_h$ rather than on $\Gamma$ alone, we make the ansatz
\begin{equation}\label{ansatz}
\mbf{u}=\mbf{u}^g + \phi \mbf{w},\text{ on }\Omega_h
\end{equation}
and substitute it into (\ref{weakExt}). To make the formulation more symmetric we also
replace the test functions $\mbf{v}$ by $\phi \mbf{z}$. This yields: find a vector field $\mbf{w}$ on $\Omega_h$ such that
\begin{multline}\label{DirectPhi} \int_{\Omega_h} \mbf{\sigma} (\phi\mbf{w}) : \nabla(\phi\mbf{z})
- \int_{\partial \Omega_h} \mbf{\sigma} (\phi \mbf{w}) \mbf{n} \cdot \phi
\mbf{z}
= \int_{\Omega_h} \mbf{f} \cdot \phi \mbf{z} \\
- \int_{\Omega_h} \mbf{\sigma} (\mbf{u}^g)
: \nabla (\phi \mbf{z}) + \int_{\partial
\Omega_h} \mbf{\sigma} (\mbf{u}^g) \mbf{n}
\cdot \phi \mbf{z}, \quad
\forall \mbf{z}\text{ on }\Omega_h .
\end{multline}
The idea is thus to work with the new unknown $\mbf{w}$ on $\Omega_h$,
discretize it by FEM starting from the variational formulation above, and to
reconstitute the approximation to $\mbf{u}$ by the ansatz (\ref{ansatz}).
\item \textbf{Dual Dirichlet $\phi$-FEM} \textit{(on continuous level)}. We now suppose that $\mbf{u}^g$ is defined on $\Omega_h^{\Gamma}$, cf. (\ref{ThGam}),
rather than on the whole of $\Omega_h$. We keep the primal unknown
$\mbf{u}$ in (\ref{weakExt}) and we want to impose
\begin{equation}\label{ansatzDual}
\mbf{u}=\mbf{u}^g + \phi \mbf{p},\text{ on }\Omega_h^\Gamma
\end{equation}
on top of it, with a new auxiliary unknown $\mbf{p}$ on $\Omega_h^\Gamma$. The new variable $\mbf{p}$ lives beside $\mbf{u}$ inside a variational formulation that combines (\ref{weakExt}) with (\ref{ansatzDual}): find vector fields $\mbf{u}$ on $\Omega_h$ and $\mbf{p}$ on $\Omega^{\Gamma}_h$ such that
\begin{multline}\label{DualPhi}
\int_{\Omega_h} \mbf{\sigma} (\mbf{u}) : \nabla\mbf{v}
- \int_{\partial\Omega_h} \mbf{\sigma} (\mbf{u}) \mbf{n} \cdot \mbf{v}
+ {\gamma} \int_{\Omega_h^{\Gamma}} (\mbf{u}- \phi \mbf{p}) \cdot
(\mbf{v}- \phi \mbf{q}) \\
= \int_{\Omega_h} \mbf{f} \cdot
\mbf{v}+ {\gamma}\int_{\Omega_h^{\Gamma}} \mbf{u}^g \cdot
(\mbf{v}- \phi \mbf{q}), \quad \forall \mbf{v} \text{ on }
\Omega_h, \mbf{q} \text{ on } \Omega_h^{\Gamma}
\end{multline}
with a positive parameter $\gamma$. Comparing the direct and dual variants, we observe that the expressions (\ref{ansatz}) and (\ref{ansatzDual}) are of course pretty similar, but their roles are quite different in the corresponding methods. The variable $\mbf{w}$ replaces $\mbf{u}$ in (\ref{DirectPhi}), while $\mbf{p}$ lives alongside $\mbf{u}$ in (\ref{DualPhi}). The introduction of the additional variable $\mbf{p}$ makes the dual method only slightly more expensive than the direct one, since this new variable is introduced only on a narrow strip around $\Gamma$. On the other hand, a certain advantage of the dual variant over the direct one lies in the fact that both $\phi$ and $\mbf{u}^g$ should be here known only locally around $\Gamma$ since they enter into equation (\ref{DualPhi}) only on $\Omega_h^{\Gamma}$. This can facilitate the construction of $\phi$ and $\mbf{u}^g$ in practice. More importantly, it is the dual method that we shall be able to adapt to various, more and more complicated settings below.
\end{itemize}
As mentioned above, both variational problems (\ref{DirectPhi}) and (\ref{DualPhi}) are derived on a very formal level. They are not valid in any mathematically rigorous way: we cannot expect to have a meaningful boundary value problem on a domain $\Omega_h$ with no
boundary conditions on $\partial \Omega_h$, while prescribing some conditions
on a curve (surface) $\Gamma$ which lies inside $\Omega_h$. However, both
formulations can serve as starting problems to write down FE problems which
become well-posed once an appropriate stabilization is added.
We start by introducing the FE spaces: fix an integer $k\ge 1$ and let
\begin{equation} \label{spaceVh}
V_{h} := \left\lbrace \mbf{v}_h :\Omega_{h}\to\mathbb{R}^d : \mbf{v}_{h |T} \in \mathbb{P}^k(T)^d \ \ \forall T \in \mathcal{T}_h, \ \mbf{v}_h \text{ continuous on }\Omega_h \right\rbrace .
\end{equation}
For future reference, we introduce the local version of this space for any submesh $\mathcal{M}_h$ of $\mathcal{T}_{h}$ and polynomial degree $l\ge 0$
\begin{equation}
\label{spaceQh}
Q_{h}^l(\mathcal{M}_h) := \left\lbrace \mbf{q}_h:\mathcal{M}_{h} \to\mathbb{R}^d : \mbf{q}_{h |T} \in \mathbb{P}^{l}(T)^{d} \ \ \forall T \in \mathcal{M}_h , \ \mbf{q}_h \text{ continuous on }\mathcal{M}_h\text{ if }l\ge 1\right\rbrace.
\end{equation}
In particular, we shall need the space $Q_{h}^k(\Omega_{h}^{\Gamma})$ on the submesh $\Omega_{h}^{\Gamma}$ in the Dual version of Dirichlet $\phi$-FEM.
The two variants of $\phi$-FEM introduced above can now be written on the fully
discrete level as:
\begin{itemize}
\item \textbf{Direct Dirichlet $\phi$-FEM}: find $\mbf{w}_h\in V_h$ such that
\begin{multline}\label{DirectDiscrete} \int_{\Omega_h} \mbf{\sigma} (\phi_h \mbf{w}_h) :
\nabla (\phi_h \mbf{z}_h)
- \int_{\partial \Omega_h}
\mbf{\sigma} (\phi_h \mbf{w}_h) \mbf{n} \cdot \phi_h\mbf{z}_h
+ G_h(\phi_h\mbf{w}_h,\phi_h\mbf{z}_h)
+ J_h^{lhs} (\phi_h\mbf{w}_h,\phi_h\mbf{z}_h) \\
= \int_{\Omega_h} \mbf{f} \cdot \phi_h \mbf{z}_h
- \int_{\Omega_h} \mbf{\sigma} (\mbf{u}_h^g)
: \nabla (\phi_h \mbf{z}_h)
+ \int_{\partial\Omega_h} \mbf{\sigma} (\mbf{u}_h^g) \mbf{n}
\cdot \phi_h \mbf{z}_h, \\
+ J_h^{rhs} (\phi_h\mbf{z}_h),\quad\forall \mbf{z}_h\in V_h
\end{multline}
and set $\mbf{u}_h=\mbf{u}_h^g + \phi_h \mbf{w}_h$. Here $\phi_h, \mbf{u}^g_h$ are FE
approximations for $\phi, \mbf{u}^g$ on the whole $\Omega_h$, and $G_h,J_h^{lhs}, J_h^{rhs}$ stand for the stabilization terms
\begin{equation}\label{Gh}
G_h(\mbf{u}, \mbf{v}) : = \sigma_D h \sum_{E \in \mathcal{F}_h^{\Gamma}} \int_E \left[ \mbf{\sigma}(\mbf{u})\mbf{n} \right] \cdot \left[ \mbf{\sigma}(\mbf{v})\mbf{n}\right] ,
\end{equation}
\begin{equation}\label{Jhlrhs}
J_h^{lhs}(\mbf{u}, \mbf{v}) : = \sigma_D h^2 \sum_{T \in \mathcal{T}_h^{\Gamma}} \int_T \Div \mbf{\sigma}(\mbf{u})\cdot \Div\mbf{\sigma}(\mbf{v}) \,,\qquad
J_h^{rhs} (\mbf{v}) : =- \sigma_D h^2\sum_{T \in \mathcal{T}_{h}^{\Gamma}} \int_T \mbf{f} \cdot\Div\mbf{\sigma}(\mbf{v})
\,.
\end{equation}
The stabilization $G_h$ (\ref{Gh}) is known as the ghost penalty. $\sigma_D$ in (\ref{Gh}) is a positive stabilization parameter which should be chosen sufficiently large (in a mesh independent manner). $\mathcal{F}_h^{\Gamma}$ stands for the set of internal facets of mesh $\mathcal{T}_{h}$ which are also the facets of $\mathcal{T}_{h}^\Gamma$ (these are the facets either intersected by $\Gamma$, or belonging to the cells intersected by $\Gamma$). Stabilization (\ref{Gh}) was first introduced in \cite{ghost} in the form of penalization of jumps in the normal derivatives of the FE solution. Here, we prefer to penalize the jumps of internal elastic forces, following \cite{claus2018stable}, thus controlling appropriate combinations of the derivatives, rather than the normal derivatives themselves. We emphasize however that the original ghost penalty in \cite{cutfemelasticity} also involved the jumps of higher order derivatives of $\mbf{u}$ (up to the highest order of polynomials present in the FE formulation), while our variant affects the first order derivatives only. We can allow ourselves to reduce the order of stabilized derivatives thanks to the presence of the additional stabilization terms $J_h^{lhs}$ (\ref{Jhlrhs}), as first suggested in \cite{phifem} (a similar idea can also be found in \cite{elfverson2019new}). The combination of $G_h$ and $J_h^{lhs}$ allows us indeed to get rid of possible spurious oscillations of the approximate solution on ``badly cut'' cells near $\Gamma$ and to guarantee the coerciveness of the bilinear form in our FE formulation. Note that the terms $J_h^{lhs}$ are not consistent by themselves but they are consistently compensated by their right-hand side counterpart $J_h^{rhs}$. Indeed, the exact solution satisfies $\Div\mbf{\sigma}(\mbf{u})=-\mbf{f}$ so that $J_h^{lhs}(\mbf{u}, \mbf{v})=J_h^{rhs}(\mbf{v})$ if $\mbf{u}$ is the exact solution.
\item \textbf{Dual Dirichlet $\phi$-FEM}: find $\mbf{u}_h\in V_h$, $\mbf{p}_h\in Q_{h}^k(\Omega_{h}^{\Gamma})$ such that
\begin{multline} \label{DualDiscrete}
\int_{\Omega_h} \mbf{\sigma} (\mbf{u}_h) : \nabla\mbf{v}_h
- \int_{\partial\Omega_h} \mbf{\sigma}(\mbf{u}_h) \mbf{n} \cdot \mbf{v}_h
+ \frac{\gamma}{h^2} \int_{\Omega_h^{\Gamma}} (\mbf{u}_h- \frac{1}{h}\phi_h \mbf{p}_h) \cdot
(\mbf{v}_h- \frac{1}{h}\phi_h \mbf{q}_h)
\\+ G_h (\mbf{u}_h,\mbf{v}_h)
+ J_h^{lhs} (\mbf{u}_h,\mbf{v}_h) \\
= \int_{\Omega_h} \mbf{f} \cdot\mbf{v}_h
+ \frac{\gamma}{h^2} \int_{\Omega_h^{\Gamma}} \mbf{u}_h^g \cdot
(\mbf{v}_h- \frac{1}{h}\phi_h \mbf{q}_h)
+ J_h^{rhs} (\mbf{v}_h), \quad \forall \mbf{v}_h \in V_h,\
\mbf{q}_h \in Q_{h}^k(\Omega_{h}^{\Gamma}).
\end{multline}
With respect to (\ref{DualPhi}), we have added here the factors $\frac{1}{h}$, $\frac{1}{h^2}$. They serve to control the condition numbers, cf. \cite{phiFEM2}. The stabilizations $G_h$, $J_h^{lhs}$, $J_h^{rhs}$ are again defined by (\ref{Gh}) and (\ref{Jhlrhs}).
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.48\textwidth]{mesh_dirichlet_phi_fem.png}
\quad \includegraphics[width=0.48\textwidth]{mesh_dirichlet_standard.png}
\caption{Circular domain given by (\ref{phiCircle}). Left: active meshes for $\phi$-FEM (with cells from $\mathcal{T}_{h}^\Gamma$ in yellow). Right: a fitted mesh for the standard FEM. }\label{fig:meshes dirichlet}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = {$L^2$ relative error},
legend style = { at={(0.7,1.3)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.08838834764831845,1.5588115850660593e-06)
(0.04419417382415922,9.469566130085001e-08)
(0.02209708691207961,7.2400075783873776e-09)
(0.011048543456039806,8.204951465874365e-10)
(0.005524271728019903,1.0117386213743662e-10) };
\addplot coordinates {
(0.08838834764831845,0.00017927946943448)
(0.04419417382415922,9.288747432430552e-06)
(0.02209708691207961,8.561278085915226e-07)
(0.011048543456039806,6.216451875056968e-08)
(0.005524271728019903,6.4733637784675345e-09)};
\addplot coordinates {
(0.08814237266741118,0.0013863242202072467)
(0.044176082108213596,0.0003411362090257958)
(0.022095901417288732,8.270844843658769e-05)
(0.01104624219043868,2.0738926136251253e-05)
(0.005523930001854928,5.168354870491673e-06)};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{3}{blue};
\legend{Direct $\phi$-FEM,Dual $\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = {$H^1$ relative error},
legend style = { at={(0.7,1.3)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.08838834764831845,4.4726648824471084e-05)
(0.04419417382415922,6.698604600427322e-06)
(0.02209708691207961,1.4472949769396173e-06)
(0.011048543456039806,3.5820477725646584e-07)
(0.005524271728019903,8.9819527477158e-08)};
\addplot coordinates{
(0.08838834764831845,0.0023935969083683033)
(0.04419417382415922,0.00027798873140552567)
(0.02209708691207961,5.4548588878934805e-05)
(0.011048543456039806,9.040565425995086e-06)
(0.005524271728019903,1.8279013328542586e-06)};
\addplot coordinates{
(0.08814237266741118,0.00874593661775123)
(0.044176082108213596,0.0031309191906407226)
(0.022095901417288732,0.0011619674908133126)
(0.01104624219043868,0.0003941847087612613)
(0.005523930001854928,0.00013882795038475216)};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{2}{blue};
\legend{Direct $\phi$-FEM,Dual $\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with pure Dirichlet conditions. $L^2$ relative errors on the left, $H^1$ relative errors on the right.}\label{fig:diriclet_elasticity1}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = {$L^2$ relative error},
ylabel = {Computing time},
anchor=north west,legend style = { at={(1.2,1.2)}, legend columns =2,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(1.5588115850660593e-06,0.06556248664855957)
(9.469566130085001e-08,0.1428232192993164)
(7.2400075783873776e-09,0.43224072456359863)
(8.204951465874365e-10,1.778151512145996)
(1.0117386213743662e-10,8.303027629852295)
};
\addplot coordinates {
(0.00017927946943448,0.0573728084564209)
(9.288747432430552e-06,0.11432981491088867)
(8.561278085915226e-07,0.36284589767456055)
(6.216451875056968e-08,1.687530279159546)
(6.4733637784675345e-09,8.336758613586426)};
\addplot coordinates {
(0.0013863242202072467,0.023479223251342773)
(0.0003411362090257958,0.08866548538208008)
(8.270844843658769e-05,0.31266045570373535)
(2.0738926136251253e-05,1.3940625190734863)
(5.168354870491673e-06,6.85805869102478)};
\legend{Direct $\phi$-FEM, Dual $\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with pure Dirichlet conditions. Computing time (in seconds) vs. the $L^2$ relative errors.}\label{fig:computedtime_elasticity}
\end{figure}
\paragraph*{Test case:}\label{testcaseDir}
Let $\mathcal{O}$ be the square $(0,1)^2$ and $\mathcal{T}_h^{\mathcal{O}}$ a uniform mesh on $\mathcal{O}$. Let $\Omega$ be the circle centered at the point $(0.5,0.5)$ of radius $\frac{\sqrt{2}}{4}$. The level set function $\phi$ is thus given by
\begin{equation}\label{phiCircle}
\phi(x,y) = -\frac{1}{8} + (x-0.5)^2 + (y-0.5)^2\,.
\end{equation}
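As a quick sanity check, (\ref{phiCircle}) can be evaluated directly: it is negative inside the disk, vanishes on the circle of radius $\sqrt{2}/4$, and is positive outside. A small Python illustration (our own, not part of the released code):

```python
def phi(x, y):
    """Level set of the disk of radius sqrt(2)/4 centered at (0.5, 0.5)."""
    return -0.125 + (x - 0.5) ** 2 + (y - 0.5) ** 2

# phi < 0 inside Omega, phi = 0 on Gamma, phi > 0 outside
```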
We take the elasticity parameters $E=2$ and $\nu=0.3$,
and the scheme parameters $\gamma = \sigma_D = 20.0$. We use $\mathbb{P}^2$-Lagrange polynomials for both FE spaces $V_h$ and $Q_h$, i.e. we set $k=2$ in (\ref{spaceVh}) and (\ref{spaceQh}).
We finally choose a manufactured exact solution
\begin{equation}\label{manufu}
\mbf{u}=\mbf{u}_{ex} := (\sin(x) \exp(y), \sin(y) \exp(x))
\end{equation}
giving the right-hand side $\mbf{f}$ by substitution into (\ref{eq:elast}) and the
boundary conditions $\mbf{u}^g=\mbf{u}_{ex}$ on $\Gamma$. In order to set up both $\phi$-FEM schemes above, we should extend $\mbf{u}^g$ from $\Gamma$ to $\Omega_h$ (in the case of the direct method) or to $\Omega_h^\Gamma$ (in the case of the dual method). To mimic the realistic situation where $\mbf{u}^g$ is known on $\Gamma$ only, we prefer not to extend $\mbf{u}^g$ by $\mbf{u}_{ex}$ everywhere. We rather set
$$
\mbf{u}^g = \mbf{u}_{ex}(1+\phi), \quad \text{ on } \Omega_h\text{ or on }\Omega_h^{\Gamma}
$$
adding to $\mbf{u}_{ex}$ a perturbation which vanishes on the boundary.
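The following Python snippet (our illustration; the names `u_ex`, `u_g` and `phi` simply mirror the formulas above) checks that the perturbed data $\mbf{u}^g=\mbf{u}_{ex}(1+\phi)$ coincides with $\mbf{u}_{ex}$ on $\Gamma$, where $\phi=0$, while differing away from it:

```python
import math

def phi(x, y):
    # level set (phiCircle) of the disk of radius sqrt(2)/4
    return -0.125 + (x - 0.5) ** 2 + (y - 0.5) ** 2

def u_ex(x, y):
    # manufactured solution (sin(x) e^y, sin(y) e^x)
    return (math.sin(x) * math.exp(y), math.sin(y) * math.exp(x))

def u_g(x, y):
    # perturbed Dirichlet data: equals u_ex exactly where phi = 0
    s = 1.0 + phi(x, y)
    ux, uy = u_ex(x, y)
    return (s * ux, s * uy)
```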
The typical active meshes $\mathcal{T}_{h}$ and $\mathcal{T}_{h}^\Gamma$ for $\phi$-FEM are illustrated in Fig.~\ref{fig:meshes dirichlet} (left). Besides the direct $\phi$-FEM (\ref{DirectDiscrete}) and the dual $\phi$-FEM (\ref{DualDiscrete}), we shall present the numerical results obtained by the standard FEM with $\mathbb{P}^2$-Lagrange polynomials on fitted meshes for approximately the same values of $h$, as illustrated in Fig.~\ref{fig:meshes dirichlet} (right). The results obtained by both variants of $\phi$-FEM and by the standard FEM are reported in Figs.~\ref{fig:diriclet_elasticity1} and \ref{fig:computedtime_elasticity}.
We first illustrate the numerical convergence orders for the relative errors in the $L^2$ and $H^1$ norms in Fig.~\ref{fig:diriclet_elasticity1}. We observe that both variants of $\phi$-FEM demonstrate indeed the expected optimal convergence orders: $h^2$ in the $H^1$-seminorm and $h^3$ in the $L^2$-norm, and the direct variant performs significantly better than the dual one. This can be attributed to a better representation of the solution near the boundary in the direct variant: indeed, it is effectively approximated there by fourth-order polynomials ($\mathbb{P}^2$ for $\mbf{w}_h$ times $\mathbb{P}^2$ for $\phi_h$). Moreover, both $\phi$-FEMs, even the dual one, significantly outperform the standard FEM (the latter is even of a suboptimal order in the $L^2$-norm). This can be partially attributed to a coarse geometry approximation. Indeed, we use triangular meshes so that the curved boundary of $\Omega$ is actually approximated by a collection of straight segments, i.e. the boundary facets of the fitted mesh, cf. Fig.~\ref{fig:meshes dirichlet} (right). The superior efficiency of $\phi$-FEM with respect to the standard FEM is further confirmed by Fig.~\ref{fig:computedtime_elasticity}. We report there the computing times on different meshes for the 3 methods and set them against the relative $L^2$ error. These computing times include assembling of the FE matrices and resolution of the resulting linear systems. For a given relative error, the calculations are always much faster with $\phi$-FEM than with the standard FEM. The advantage would be even more significant if the mesh generation times were included, since the construction of active meshes in $\phi$-FEM only involves choosing a subset of cells according to a simple criterion, and some renumbering of the degrees of freedom. We do not, however, have an efficient implementation of the cell selection algorithm at the moment.
All our computations are performed using the Python interface for the popular FEniCS computing platform, and the selection of active cells is done by a simple, non-optimized Python script.
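To make the selection criterion concrete, here is a minimal NumPy sketch (with hypothetical function and array names; this is not our actual FEniCS script), assuming a piecewise-linear level set $\phi_h$ given by its values at the mesh vertices: a cell intersects $\{\phi_h<0\}$ exactly when $\phi_h<0$ at one of its vertices, and is crossed by $\{\phi_h=0\}$ when $\phi_h$ changes sign on it.

```python
import numpy as np

def select_active_cells(phi_vertex, cells):
    """Mark active cells from the vertex values of a P1 level set phi_h.

    phi_vertex : (n_vertices,) values of phi_h at the mesh vertices
    cells      : (n_cells, 3) vertex indices of the triangles

    Returns boolean masks (in_Th, in_ThGamma): a cell is in the active
    mesh T_h if it intersects {phi_h < 0}, and in the boundary submesh
    T_h^Gamma if phi_h changes sign on it.
    """
    vals = phi_vertex[cells]                     # (n_cells, 3) values per cell
    in_Th = vals.min(axis=1) < 0                 # meets the domain {phi_h < 0}
    in_ThGamma = in_Th & (vals.max(axis=1) > 0)  # crossed by {phi_h = 0}
    return in_Th, in_ThGamma
```

The actual implementation would additionally renumber the degrees of freedom of the retained cells; the masks above only express the selection criterion itself.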
\subsection{Mixed boundary conditions}
We now consider the much more complicated case of mixed conditions (\ref{bcGammaD})--(\ref{bcGammaN}) on the boundary $\Gamma = \Gamma_N \cup \Gamma_D$ with $\Gamma_D\neq \varnothing$ and $\Gamma_N\neq \varnothing$. This setting is challenging for any geometrically unfitted method, since the junction between the Dirichlet and Neumann boundary parts can occur inside a mesh cell, so that the approximating polynomials in this cell should account simultaneously for both boundary conditions. In \cite{cutfemelasticity}, it is demonstrated that linear elasticity with mixed boundary conditions can be successfully treated by CutFEM. A rigorous mathematical substantiation, allowing for the low regularity of the solution, is available in \cite{burmanmixed}.
Here, we shall adapt $\phi$-FEM (in the dual form) to mixed boundary conditions by adopting a ``lazy'' approach: we choose not to impose any boundary conditions on a mesh cell if the Dirichlet/Neumann junction happens to lie inside it.
To set up the geometry of the problem, we recall that the domain $\Omega$ is given by the level set function $\phi$, $\Omega=\{\phi<0\}$, and assume furthermore that the partition of the boundary into the Dirichlet and Neumann parts is governed by a secondary level set $\psi$, $$\Gamma_D=\Gamma\cap\{\psi<0\},\quad \Gamma_N=\Gamma\cap\{\psi>0\}\,.$$ Introducing the active meshes $\mathcal{T}_{h}$ and $\mathcal{T}_{h}^\Gamma$ as above, cf. (\ref{Th}), (\ref{ThGam}), and Fig.~\ref{fig}, we now want to further partition the submesh $\mathcal{T}_{h}^\Gamma$ into two parts: $\mathcal{T}_{h}^{\Gamma_D}$ around $\Gamma_D$, serving to impose the Dirichlet boundary conditions, and $\mathcal{T}_{h}^{\Gamma_N}$ around $\Gamma_N$ for the Neumann ones. The natural choice for these is
\begin{equation}\label{ThGamDN}
\mathcal{T}_h^{\Gamma_D} := \{ T \in \mathcal{T}_h^{\Gamma} : \psi\leqslant 0\text{ on }T \} \qquad \text{ and } \qquad
\mathcal{T}_h^{\Gamma_N} := \{ T \in \mathcal{T}_h^{\Gamma} : \psi\geqslant 0\text{ on }T \}\,.
\end{equation}
As before, we denote the domains occupied by the meshes $\mathcal{T}_{h}$, $\mathcal{T}_h^{\Gamma}$, $\mathcal{T}_h^{\Gamma_D}$, $\mathcal{T}_h^{\Gamma_N}$ by $\Omega_h$, $\Omega_h^\Gamma$, $\Omega_h^{\Gamma_D}$, $\Omega_h^{\Gamma_N}$, respectively.
Note that these definitions may leave a small number of cells of $\mathcal{T}_h^{\Gamma}$ out of both $\mathcal{T}_h^{\Gamma_D}$ and $\mathcal{T}_h^{\Gamma_N}$. Indeed, there may be mesh cells, near the junction of the Dirichlet and Neumann parts, where $\psi$ changes sign inside the cell, so that $\psi$ is neither everywhere positive nor everywhere negative on such a cell. This is illustrated in Fig.~\ref{fig:meshes mixed 2} (left), where the Dirichlet/Neumann junction is located at $x=0.5$, i.e. the secondary level set is $\psi(x,y)=0.5-x$, cf. Fig.~\ref{fig:illustration_mixed_elasticity}. The active mesh cells intersected by $\Gamma$ in Fig.~\ref{fig:meshes mixed 2} are either on the Dirichlet side (they thus form $\mathcal{T}_{h}^{\Gamma_D}$ and are colored in red), or on the Neumann side (they thus form $\mathcal{T}_{h}^{\Gamma_N}$ and are colored in blue), or in between (they are then in $\mathcal{T}_{h}^{\Gamma}$ but not in $\mathcal{T}_{h}^{\Gamma_D}$ or $\mathcal{T}_{h}^{\Gamma_N}$, and are colored in yellow).
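The classification (\ref{ThGamDN}) can be expressed in the same vertex-value form as the active-cell selection. The following minimal NumPy sketch (hypothetical names, assuming a piecewise-linear $\psi_h$) marks the Dirichlet and Neumann cells of $\mathcal{T}_h^\Gamma$; the cells where $\psi_h$ changes sign are exactly those left unmarked:

```python
import numpy as np

def classify_boundary_cells(psi_vertex, gamma_cells):
    """Split the cells of T_h^Gamma according to the sign of the
    secondary level set psi_h (P1, given by its vertex values).

    A cell goes to T_h^{Gamma_D} if psi_h <= 0 everywhere on it, to
    T_h^{Gamma_N} if psi_h >= 0 everywhere on it; cells where psi_h
    changes sign remain unmarked (the "yellow" cells).
    """
    vals = psi_vertex[gamma_cells]       # (n_cells, 3) values per cell
    dirichlet = vals.max(axis=1) <= 0    # psi_h <= 0 on the whole cell
    neumann = vals.min(axis=1) >= 0      # psi_h >= 0 on the whole cell
    return dirichlet, neumann
```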
Assuming once more that $\mbf{u}$, the solution to (\ref{eq:elast})--(\ref{bcGammaD})--(\ref{bcGammaN}), can be extended from $\Omega$ to $\Omega_h$ as the solution to the same governing equation (\ref{eq:elast}), we introduce a $\phi$-FEM scheme, combining the Dual $\phi$-FEM Dirichlet approach, as introduced in (\ref{DualPhi}) and (\ref{DualDiscrete}), with the indirect imposition of Neumann boundary condition as proposed in \cite{phiFEM2}. We thus keep $\mbf{u}$ as the primary unknown on $\Omega_h$ and recall that it satisfies the variational formulation (\ref{weakExt}). The Dirichlet boundary condition affects the solution on $\Omega_h^{\Gamma_D}$ through the introduction of the auxiliary variable $\mbf{p}_D$ there. We thus adapt (\ref{ansatzDual}) from the pure Dirichlet case as \begin{equation}\label{ansatzDirMixt}
\mbf{u}=\mbf{u}^g + \phi \mbf{p}_D,\text{ on }\Omega_h^{\Gamma_D}\,.
\end{equation}
We have assumed here that $\mbf{u}^g$ is extended from ${\Gamma_D}$ to $\Omega_h^{\Gamma_D}$.
The Neumann boundary condition will affect $\mbf{u}$ on $\Omega_h^{\Gamma_N}$ through the introduction of two auxiliary variables there. We first introduce a tensor-valued variable $\mbf{y}$ on $\Omega_h^{\Gamma_N}$ by setting $\mathbf{y}=-\mbf{\sigma}(\mbf{u})$. It remains to impose $\mbf{y}\mbf{n}=-\mbf{g}$ on $\Gamma_N$. To this end, we note that the outward unit normal $\mbf{n}$ is given on $\Gamma$ by $\mbf{n}=\frac{1}{|\nabla\phi|}\nabla\phi$, so that the Neumann boundary condition is satisfied by setting $\mbf{y}\nabla\phi+\mbf{g}|\nabla\phi| = -\mbf{p}_N\phi$ on $\Omega_h^{\Gamma_N}$, where $\mbf{p}_N$ is yet another (vector-valued) auxiliary variable on $\Omega_h^{\Gamma_N}$. This can be summarized as
\begin{subequations}\label{phiN}
\begin{align}
\mathbf{y}+\mbf{\sigma}(\mbf{u})=0,&\quad \text{on } \Omega_h^{\Gamma_N}\,, \\
\mbf{y}\nabla\phi+ \mbf{p}_N\phi=-\mbf{g}|\nabla\phi|,&\quad \text{on } \Omega_h^{\Gamma_N}\,.
\end{align}
\end{subequations}
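As a quick numerical sanity check (illustrative only; the point and tensor below are arbitrary choices, not part of the method), one can verify that on $\Gamma$, where $\phi=0$, relation (\ref{phiN}b) reduces exactly to the Neumann condition $\mbf{y}\mbf{n}=-\mbf{g}$:

```python
import numpy as np

# On Gamma = {phi = 0}, the term p_N*phi vanishes, so
#   y*grad(phi) = -g*|grad(phi)|   is equivalent to   y*n = -g
# with n = grad(phi)/|grad(phi)|.  Check at the point (0.6, 0.8) of
# the circle phi = x^2 + y^2 - 1, where grad(phi) = (1.2, 1.6).
phi_grad = np.array([1.2, 1.6])
n = phi_grad / np.linalg.norm(phi_grad)      # unit outward normal
y = np.array([[2.0, 0.5], [0.5, -1.0]])      # arbitrary symmetric tensor
g = -y @ n                                   # data chosen so that y*n = -g

residual = y @ phi_grad + g * np.linalg.norm(phi_grad)
assert np.allclose(residual, 0.0)            # (phiN b) holds with p_N*phi = 0
```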
Note that the combination of (\ref{ansatzDirMixt}) with (\ref{phiN}a-b) does not impose the mixed Dirichlet/Neumann conditions on the whole of $\Gamma$, since the latter may not be completely covered by $\Omega_h^{\Gamma_D}\cup\Omega_h^{\Gamma_N}$. Fortunately, this defect of the formulation on the continuous level can be repaired on the discrete level by adding an appropriate stabilization to the FE discretization.
To describe the resulting FE scheme, we start by introducing the FE spaces. As before, we fix an integer $k\ge 1$ and keep the space $V_h$, as defined in (\ref{spaceVh}), for the approximation $\mbf{u}_h$ of the primary variable $\mbf{u}$. We also need the spaces for the approximation of the auxiliary variables $\mbf{p}_{h,D}$ and $\mbf{p}_{h,N}$, respectively $Q_h^k(\Omega_h^{\Gamma_D})$ and $Q_h^{k-1}(\Omega_h^{\Gamma_N})$ as defined in (\ref{spaceQh}), as well as the space $Z_h(\Omega_h^{\Gamma_N})$ to approximate $\mbf{y}$,
where for each submesh $\mathcal{M}_h$ of $\mathcal{T}_{h}$, $Z_h(\mathcal{M}_h)$ is defined by
\begin{equation}
\label{espaceZh}
Z_{h}(\mathcal{M}_h) := \left\lbrace \mbf{z}_h :\mathcal{M}_h \to\mathbb{R}^{(d\times d) } : \mbf{z}_{h |T} \in \mathbb{P}^k(T)^{(d\times d)} \ \ \forall T \in \mathcal{M}_h, \ \mbf{z}_h \text{ continuous on }\mathcal{M}_h \right\rbrace \,.
\end{equation}
Now, combining the variational formulation (\ref{weakExt}) with (\ref{ansatzDirMixt}) and (\ref{phiN}a-b) imposed in a least-squares manner, we get the following scheme: find $\mbf{u}_h \in V_h$, $\mbf{p}_{h,D} \in Q_{h}^k(\Omega_h^{\Gamma_D})$, $\mbf{y}_h \in Z_h(\Omega_h^{\Gamma_N})$ and $\mbf{p}_{h,N} \in Q_{h}^{k-1}(\Omega_h^{\Gamma_N})$ such that
\begin{multline}\label{mixtDicrete}
\int_{\Omega_h} {\mbf{\sigma}}(\mbf{u}_h) : \nabla\mbf{v}_h
- \int_{\partial\Omega_h\setminus\partial\Omega_{h,N}}\mbf{\sigma}(\mbf{u}_h)\mbf{n} \cdot \mbf{v}_h
+ \int_{\partial\Omega_{h,N}}\mbf{y}_h\mbf{n} \cdot \mbf{v}_h
\\ + \gamma_u \int_{\Omega_h^{\Gamma_N}}(\mbf{y}_h+ {\mbf{\sigma}}(\mbf{u}_h)) : (\mbf{z}_h+ {\mbf{\sigma}}(\mbf{v}_h)) + \frac{\gamma_p}{h^2} \int_{\Omega_h^{\Gamma_N}}\left( \mbf{y}_h\nabla \phi_h + \frac{1}{h}\mbf{p}_{h,N} \phi_h \right)\cdot\left( \mbf{z}_h\nabla \phi_h + \frac{1}{h}\mbf{q}_{h,N}\phi_h \right)
\\ + \frac{\gamma}{h^2} \int_{\Omega_h^{\Gamma_D}} (\mbf{u}_h - \frac{1}{h}\phi_h \mbf{p}_{h,D}) \cdot (\mbf{v}_h - \frac{1}{h}\phi_h \mbf{q}_{h,D})
+ G_h(\mbf{u}_h,\mbf{v}_h)
+ J_h^{lhs,D}(\mbf{u}_h,\mbf{v}_h) + J_h^{lhs,N}(\mbf{y}_h,\mbf{z}_h) \\
= \int_{\Omega_h} \mbf{f} \cdot \mbf{v}_h + \frac{\gamma}{h^2} \int_{\Omega_h^{\Gamma_D}} \mbf{u}^g_h \cdot (\mbf{v}_h - \frac{1}{h}\phi_h \mbf{q}_{h,D}) - \frac{\gamma_p}{h^2} \int_{\Omega_h^{\Gamma_N}} \mbf{g} |\nabla \phi_h| \cdot (\mbf{z}_h \nabla \phi_h + \frac{1}{h} \mbf{q}_{h,N} \phi_h) \\
+ J_h^{rhs,D}(\mbf{v}_h) + J_h^{rhs,N}(\mbf{z}_h), \\ \forall \mbf{v}_h\in V_h, \mbf{q}_{h,D} \in Q_{h}^k(\Omega_h^{\Gamma_D}), \mbf{z}_h \in Z_h(\Omega_h^{\Gamma_N}), \mbf{q}_{h,N} \in Q_{h}^{k-1}(\Omega_h^{\Gamma_N})\,.
\end{multline}
We have added here the ghost stabilization $G_h$ defined by (\ref{Gh}) as in the pure Dirichlet case.
The additional stabilization terms $J_h^{lhs}$, $J_h^{rhs}$ are now adapted from (\ref{Jhlrhs}) and separated into terms acting on $\mbf{u}_h$ on the Dirichlet cells of $\mathcal{T}_{h}^\Gamma$ (and also on the unmarked ones), and terms acting on $\mbf{y}_h$ on the Neumann cells:
\begin{align}
J_h^{lhs,D}(\mbf{u}, \mbf{v}) &: = \sigma_D h^2 \sum_{T \in \mathcal{T}_{h}^{\Gamma} \setminus \mathcal{T}_{h}^{\Gamma_N}} \int_T \Div \mbf{\sigma}(\mbf{u})\cdot \Div\mbf{\sigma}(\mbf{v}) \,,
&& \notag \\
J_h^{rhs,D} (\mbf{v}) &: =- \sigma_D h^2\sum_{T \in \mathcal{T}_{h}^{\Gamma} \setminus \mathcal{T}_{h}^{\Gamma_N}} \int_T \mbf{f} \cdot\Div\mbf{\sigma}(\mbf{v})
\,,
&& \notag \\
J_h^{lhs,N}(\mbf{y},\mbf{z}) &=\gamma_{div} \int_{\Omega_h^{\Gamma_N}}\Div \mbf{y} \cdot \Div \mbf{z}
\,,
& J_h^{rhs,N}(\mbf{z}) &= \gamma_{div} \int_{\Omega_h^{\Gamma_N}} \mbf{f} \cdot \Div\mbf{z}.
\label{JhN}
\end{align}
These stabilizations are consistent with the governing equation $\Div \mbf{\sigma}(\mbf{u})=-\mbf{f}$, rewritten as $\Div \mbf{y}=\mbf{f}$ using (\ref{phiN}a), wherever possible, i.e. on $\Omega_h^{\Gamma_N}$. Note that a similar treatment is applied to the boundary integral terms on $\partial\Omega_h$ in (\ref{weakExt}). In \eqref{mixtDicrete}, they are rewritten in terms of $\mbf{y}$, using (\ref{phiN}a) and (\ref{phiN}b), wherever possible. We thus introduce a part of the boundary $\partial\Omega_h$, referred to as $\partial\Omega_{h,N}$, formed by the boundary facets of $\mathcal{T}_{h}$ belonging to the cells in $\mathcal{T}_{h}^{\Gamma_N}$. We replace ${\mbf{\sigma}}(\mbf{u}_h)$ by $-\mbf{y}_h$ on $\partial\Omega_{h,N}$, while keeping the boundary term as is on the remaining part of the boundary. All this contributes to the coercivity of the bilinear form in (\ref{mixtDicrete}) and to the good conditioning of the matrix, as can be proven following the ideas of \cite{phiFEM2}. We emphasize again that neither Dirichlet nor Neumann boundary conditions are imposed in any way by scheme (\ref{mixtDicrete}) on the cells in $\mathcal{T}_{h}^{\Gamma}\setminus(\mathcal{T}_{h}^{\Gamma_D}\cup\mathcal{T}_{h}^{\Gamma_N})$ (the cells in yellow in Fig.~\ref{fig:meshes mixed 2}). On the other hand, both stabilizations $G_h$ and $J_h$ are active on the whole of $\mathcal{T}_{h}^{\Gamma}$, including the cells not marked as Dirichlet or Neumann.
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[scale = 2.5]
\filldraw[color=white, fill=violet!10, thick](0,0) circle (1);
\draw [red, very thick] (0:1) arc [radius=1, start angle=0, end angle=90];
\draw [red, very thick] (270:1) arc [radius=1, start angle=270, end angle=360];
\draw [blue, very thick] (90:1) arc [radius=1, start angle=90, end angle=270];
\draw(0,0)[color = violet!100]node{\huge$\Omega$};
\draw(1,0)[right, color = red!100]node{\huge$\Gamma_D$};
\draw(-1,0)[left, color = blue!100]node{\huge$\Gamma_N$};
\draw(-1,-0.2)[left, color = blue!100]node{$\mbf{\sigma}\mbf{n} = \mbf{g}$};
\draw(1,-0.2)[right, color = red!100]node{$\mbf{u} = \mbf{u}^g$};
\end{tikzpicture}
\caption{Test case with mixed boundary conditions: the geometry of Dirichlet and Neumann boundary parts.}\label{fig:illustration_mixed_elasticity}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width = 0.45\textwidth]{mesh_mixed_phi_fem.png}
\quad \includegraphics[width=0.45\textwidth]{mesh_mixed_standard.png}
\caption{Test case with mixed boundary conditions, meshes resolving the Dirichlet/Neumann junction. Left: active meshes for $\phi$-FEM, red for $\mathcal{T}_{h}^{\Gamma_D}$, blue for $\mathcal{T}_{h}^{\Gamma_N}$. Right: a mesh for standard FEM, red boundary facets on ${\Gamma_D}$, blue boundary facets on ${\Gamma_N}$. }\label{fig:meshes mixed 1}
\end{figure}
\begin{figure}[p]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = relative error,
legend style = { at={(0.8,1.35)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.08838834764831845,0.00019162789405352987)
(0.04419417382415922,9.019019807546487e-06)
(0.02209708691207961,7.722996330926221e-07)
(0.011048543456039806,6.463963785481317e-08)
(0.005524271728019903,6.723748171411495e-09)};
\addplot coordinates {
(0.08838834764831845,0.0022236508129968484)
(0.04419417382415922,0.00025661468575222617)
(0.02209708691207961,4.506471015826728e-05)
(0.011048543456039806,8.087493674563759e-06)
(0.005524271728019903,1.6960635243482559e-06)};
\addplot coordinates {
(0.08789154903752427,0.0035246050135470824)
(0.044129789705730664,0.0008236790908525067)
(0.022096512804264105,0.0002031809648554605)
(0.011048263536162476,5.020191340038304e-05)
(0.005524239072190211,1.2340916877939585e-05)};
\addplot coordinates{
(0.08789154903752427,0.016974433240575463)
(0.044129789705730664,0.006034873183160213)
(0.022096512804264105,0.0015285621307403931)
(0.011048263536162476,0.0007285213745054552)
(0.005524239072190211,0.0002601134487378277)};
\logLogSlopeTriangle{0.53}{0.2}{0.45}{2}{red};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{3}{blue};
\legend{$L^2$ error $\phi$-FEM, $H^1$ error $\phi$-FEM, $L^2$ error standard FEM, $H^1$ error standard FEM}
\end{loglogaxis}
\begin{loglogaxis}[width = .45\textwidth, ylabel = Computing time (s), name = ax2, at = {(ax1.south east)}, xshift = 2cm,
xlabel =$L^2$ relative error,
legend style = { at={(1,1.2)}, legend columns =2,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.00019162789405352987,0.07287120819091797)
(9.019019807546487e-06,0.13202333450317383)
(7.722996330926221e-07,0.3721940517425537)
(6.463963785481317e-08,1.2812981605529785)
(6.723748171411495e-09,5.520588159561157)};
\addplot coordinates {
(0.0035246050135470824,0.020807504653930664)
(0.0008236790908525067,0.05296206474304199)
(0.0002031809648554605,0.21299290657043457)
(5.020191340038304e-05,0.9861962795257568)
(1.2340916877939585e-05,4.93593955039978)};
\legend{$\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with mixed boundary conditions, results on meshes as on Fig.~\ref{fig:meshes mixed 1}. Left: $L^2$ and $H^1$ relative errors under the mesh refinement. Right: computing time vs. the $L^2$ relative error. }\label{fig:mixed_elasticity}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width = 0.45\textwidth]{mesh_mixed_phi_fem_bc.png}
\quad \includegraphics[width=0.45\textwidth]{mesh_mixed_standard_BC.png}
\caption{Test case with mixed boundary conditions, meshes not resolving the Dirichlet/Neumann junction. Left: active meshes for $\phi$-FEM, red for $\mathcal{T}_{h}^{\Gamma_D}$, blue for $\mathcal{T}_{h}^{\Gamma_N}$, yellow for the cells of $\mathcal{T}_{h}^{\Gamma}$ left unmarked. Right: a mesh for standard FEM, red boundary facets on ${\Gamma_D}$, blue boundary facets on ${\Gamma_N}$; note that some boundary facets contain both Dirichlet and Neumann parts. }\label{fig:meshes mixed 2}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = relative error,
legend style = { at={(0.8,1.45)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.08318903308077032,0.00014071942503931245)
(0.04285495643554845,1.6398220861748423e-05)
(0.021757131728816926,8.037137350273614e-07)
(0.010962895832349594,1.7379414354329388e-07)
(0.005502776507288445,1.0190675532414034e-08)};
\addplot coordinates {
(0.08318903308077032,0.001685377218347434)
(0.04285495643554845,0.00025228072880838193)
(0.021757131728816926,4.1981937129849344e-05)
(0.010962895832349594,8.04580293961358e-06)
(0.005502776507288445,1.7062245894835732e-06)};
\addplot coordinates {
(0.08814237266741118,0.003318016542529396)
(0.044176082108213596,0.0007701415118214259)
(0.022095901417288732,0.0001959052803742703)
(0.01104624219043868,4.965445115252627e-05)
(0.005523930001854928,1.2033909691066592e-05)};
\addplot coordinates{
(0.08814237266741118,0.0170779277806841)
(0.044176082108213596,0.005916646533061518)
(0.022095901417288732,0.001611950067370548)
(0.01104624219043868,0.0007352560307018717)
(0.005523930001854928,0.000260614890322195)};
\logLogSlopeTriangle{0.53}{0.2}{0.45}{2}{red};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{3}{blue};
\legend{$L^2$ error $\phi$-FEM, $H^1$ error $\phi$-FEM, $L^2$ error standard FEM, $H^1$ error standard FEM}
\end{loglogaxis}
\begin{loglogaxis}[width = .45\textwidth, ylabel = Computing time (s), name = ax2, at = {(ax1.south east)}, xshift = 2cm,
xlabel =$L^2$ relative error,
legend style = { at={(1.1,1.1)}, legend columns =2,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.00014071942503931245,0.07939004898071289)
(1.6398220861748423e-05,0.16697168350219727)
(8.037137350273614e-07,0.8911266326904297)
(1.7379414354329388e-07,5.4493019580841064)
(1.0190675532414034e-08,26.988556385040283)};
\addplot coordinates {
(0.003318016542529396,0.022522687911987305)
(0.0007701415118214259,0.07380819320678711)
(0.0001959052803742703,0.21341729164123535)
(4.965445115252627e-05,0.9971983432769775)
(1.2033909691066592e-05,5.366613149642944)};
\legend{$\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with mixed boundary conditions, results on meshes as on Fig.~\ref{fig:meshes mixed 2}. Left: $L^2$ and $H^1$ relative errors under the mesh refinement. Right: computing time vs. the $L^2$ relative error. }\label{fig:mixed_elasticity_2}
\end{figure}
\paragraph*{Test case: }
We are now going to present some numerical results with method (\ref{mixtDicrete}) highlighting the optimal convergence of $\phi$-FEM and comparing it with a standard FEM.
We use the same geometry (\ref{phiCircle}), elasticity parameters, and exact solution (\ref{manufu}) as for the case of pure Dirichlet conditions on page~\pageref{testcaseDir}. We furthermore set the Dirichlet boundary conditions (\ref{bcGammaD}) for $x>0.5$ and the Neumann boundary conditions (\ref{bcGammaN}) for $x<0.5$, cf. Fig.~\ref{fig:illustration_mixed_elasticity}, i.e. we choose the secondary level set as $\psi=0.5-x$. The data $\mbf{u}^g$ and $\mbf{g}$ are computed from the exact solution. In $\phi$-FEM, they should be extended from $\Gamma$ to the appropriate portion of the strip $\Omega_h^\Gamma$. We choose these extensions as
\[\begin{cases} \mbf{u}^g = \mbf{u}_{ex}(1+\phi), \quad &\text{ on } \Omega_h^{\Gamma} \cap \{ x \geqslant 0.5 \} \,, \\
\mbf{g} = \mbf{\sigma}(\mbf{u}_{ex}) \frac{\nabla \phi}{\| \nabla \phi\|} + \mbf{u}_{ex} \phi, \quad &\text{ on } \Omega_h^{\Gamma} \cap \{ x < 0.5 \}\,. \end{cases} \]
Again, both expressions are perturbed away from $\Gamma$ to mimic the real-life situation where the data are available only on $\Gamma$. The stabilization parameters are set to $\gamma_{div} = \gamma_u = \gamma_p = 1.0$, $\sigma = 0.01$ and $\gamma = \sigma_D = 20.0$.
We start by studying mesh configurations where the Dirichlet-Neumann junction line $\{x=0.5\}$ happens to be covered by the mesh facets both in the background mesh used by $\phi$-FEM, and in the fitted mesh used by FEM, as illustrated in Fig.~\ref{fig:meshes mixed 1}. All the boundary cells in $\mathcal{T}_{h}^\Gamma$ are marked in this case either as Dirichlet or as Neumann ones, according to the criterion (\ref{ThGamDN}), giving, respectively, red and blue cells on Fig.~\ref{fig:meshes mixed 1} (left). There is no ambiguity for the standard FEM fitted meshes: all the boundary facets are straightforwardly marked either as Dirichlet or as Neumann, cf. Fig.~\ref{fig:meshes mixed 1} (right) with the same color code as for the unfitted mesh. The results obtained by both $\phi$-FEM (\ref{mixtDicrete}) and the standard FEM, using $\mathbb{P}^2$-Lagrange polynomials for $\mbf{u}_h$ in both cases, are reported in Fig.~\ref{fig:mixed_elasticity}. On the left, the relative errors are plotted with respect to the mesh step. We observe again the optimal convergence orders for $\phi$-FEM, while the convergence of the standard FEM is sub-optimal in the $L^2$-norm. The $\phi$-FEM approach is again systematically more precise in both norms. On the right side of the same figure, we plot the computing times and notice again that $\phi$-FEM is less expensive than the standard FEM.
Let us now turn to a less artificial mesh configuration where the Dirichlet/Neumann junction point can turn up inside a mesh cell of the background mesh, or inside a boundary facet of the fitted mesh. We study these situations on a series of meshes, as illustrated in Fig.~\ref{fig:meshes mixed 2}. In the case of the background meshes used for $\phi$-FEM, we ensure in particular that there are no vertical grid lines at the abscissa $x=0.5$, so that there are exactly 4 cells in $\mathcal{T}_h^{\Gamma}$ that are neither in $\mathcal{T}_h^{\Gamma_D}$ nor in $\mathcal{T}_h^{\Gamma_N}$ (yellow cells on the left side of Fig.~\ref{fig:meshes mixed 2}). We recall that scheme (\ref{mixtDicrete}) does not impose any boundary conditions on these cells, but retains the stabilization there (in particular, the governing equation is still enforced on these cells in the least-squares manner). Note that the fitted FEM is not straightforward to implement in this case either, since the Dirichlet boundary conditions cannot be strongly imposed on the boundary facets which lie only partially on the Dirichlet side. We bypass this difficulty by treating the Dirichlet conditions by penalization, so that the ``standard'' FEM is now defined as: find $\mbf{u}_h$ in the $\mathbb{P}^k$ FE space (without any restrictions on the boundary) such that
\begin{equation}\label{FEMmixed}
\int_{\Omega} \mbf{\sigma} (\mbf{u}_h) : \nabla\mbf{v}_h
+ \frac{1}{\varepsilon}\int_{\Gamma_D} \mbf{u}_h \cdot \mbf{v}_h
= \int_{\Omega} \mbf{f} \cdot \mbf{v}_h+
\int_{\Gamma_N} \mbf{g} \cdot \mbf{v}_h
+ \frac{1}{\varepsilon}\int_{\Gamma_D} \mbf{u}^g \cdot \mbf{v}_h
\end{equation}
for all $\mbf{v}_h$ in the same FE space as $\mbf{u}_h$, with a small parameter $\varepsilon>0$.
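To illustrate the penalization mechanism of (\ref{FEMmixed}) in isolation, here is a minimal one-dimensional analogue (a self-contained NumPy sketch, not our actual 2D implementation): the Poisson problem $-u''=f$ on $(0,1)$ with a penalized Dirichlet condition at $x=0$ and a natural Neumann condition at $x=1$, discretized by $\mathbb{P}^1$ elements with a mass-lumped load vector.

```python
import numpy as np

n = 100                          # number of elements on (0, 1)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
eps = 1e-8                       # penalty parameter

# manufactured solution u = sin(x):  -u'' = sin(x),  u(0) = 0,  u'(1) = cos(1)
f = np.sin(x)
u_g, g = 0.0, np.cos(1.0)

# P1 stiffness matrix of -u'' on a uniform mesh
K = np.zeros((n + 1, n + 1))
for i in range(n):
    K[i, i] += 1.0 / h; K[i + 1, i + 1] += 1.0 / h
    K[i, i + 1] -= 1.0 / h; K[i + 1, i] -= 1.0 / h

# mass-lumped load, Neumann data at x = 1, penalty term at x = 0
b = h * f; b[0] *= 0.5; b[-1] *= 0.5
b[-1] += g                       # natural boundary term  u'(1) v(1)
K[0, 0] += 1.0 / eps             # (1/eps) * u(0) v(0)
b[0] += u_g / eps                # (1/eps) * u_g  v(0)

u_h = np.linalg.solve(K, b)
err = np.max(np.abs(u_h - np.sin(x)))
```

As $\varepsilon\to 0$, the penalty drives $u_h(0)$ to $u^g$ up to $O(\varepsilon)$, while the interior error stays at the usual FE order; the 2D scheme (\ref{FEMmixed}) behaves in the same way on $\Gamma_D$.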
The mesh refinement study in this case is reported in Fig.~\ref{fig:mixed_elasticity_2}. Comparing the results with those of Fig.~\ref{fig:mixed_elasticity} (obtained on idealized, unrealistic meshes without any unmarked cells), we observe that the behavior of $\phi$-FEM (\ref{mixtDicrete}) is almost unaffected by the presence (or not) of the unmarked ``yellow'' cells, although the convergence curve for the $L^2$ relative error is now slightly less regular. In particular, the conclusions about the relative merits of $\phi$-FEM and the fitted FEM, now in version (\ref{FEMmixed}), remain unchanged: $\phi$-FEM is more precise on comparable meshes and less expensive in terms of computing time for a given error tolerance.
\FloatBarrier
\section{Linear elasticity with multiple materials.}\label{sectInterface}
We now consider the case of interface problems, i.e. partial differential equations with coefficients jumping across an interface, which can cut the computational mesh in an arbitrary manner. The simplest meaningful example in the realm of linear elasticity is given by structures consisting of multiple materials with different elasticity parameters.
This situation has already been treated in XFEM \cite{carraroXFEMinterface, NitscheXFEMinterface, xiaoXFEM, Xiao}, CutFEM \cite{CutFEMinterface, Hansbo2005interface,hansbointerface,lehrenfeldinterface}, and SBM \cite{interfaceSBMli} paradigms.
We are now going to demonstrate the applicability of $\phi$-FEM in this context.
Let us assume that the structure occupies a domain $\Omega$ and consists of two materials that occupy two subdomains $\Omega_1$ and $\Omega_2$ separated by the interface $\Gamma$. To fix the ideas, we further assume that $\Omega_1$ is surrounded by $\Omega_2$, so that the interface $\Gamma$ can actually be described as $\Gamma = \partial\Omega_1$, as illustrated in Fig.~\ref{fig:interface}. We also assume that the displacement $\mbf{u}$ is given on the external boundary (these assumptions are not restrictive, and the forthcoming method can be easily adapted to other situations, e.g. with $\Gamma$ touching $\partial\Omega$ or with Neumann boundary conditions on the external boundary). We then consider the problem for the displacement $\mbf{u}$ on $\Omega$:
\begin{equation}\label{eq:interface_elasticity}
\begin{cases}
- \Div \mbf{\sigma} (\mbf{u}) &= \mbf{f} \,, \text{ on } \ \Omega\backslash\Gamma \,, \\
\mbf{u} &= \mbf{u}^g \,, \text{ on } \ \partial \Omega \,, \\
[\mbf{u} ] &= 0\,, \text{ on } \ \Gamma \,, \\
[\mbf{\sigma}(\mbf{u}) \mbf{n}] &= 0 \,, \text{ on } \ \Gamma \,,
\end{cases}
\end{equation}
where $\mbf{n}$ is the unit normal pointing from $\Omega_1$ to $\Omega_2$, and the brackets $[\cdot]$ stand for the jump across $\Gamma$.
The elasticity parameters are assumed constant on each sub-domain, but different from each other. The stress tensor is thus given by
\[ \mbf{\sigma} (\mbf{u}) =
\begin{cases}
\mbf{\sigma}_1(\mbf{u})=2\mu_1 \mbf{\varepsilon}(\mbf{u})+\lambda_1 (\Div \mbf{u})I\,,\text{ on }\Omega_1\,, \\
\mbf{\sigma}_2(\mbf{u})=2\mu_2 \mbf{\varepsilon}(\mbf{u})+\lambda_2 (\Div \mbf{u})I\,,\text{ on }\Omega_2\,,
\end{cases}
\]
with the Lamé parameters $\lambda_i $ and $\mu_i$ defined via the formulas (\ref{muEnu}) with given $E_i,\nu_i$, $i=1,2$. Introducing the displacements $\mbf{u}_i=\mbf{u} |_{\Omega_i}$, $i=1,2$ on $\Omega_1$ and $\Omega_2$ separately, problem (\ref{eq:interface_elasticity}) can be rewritten as the system of two coupled sub-problems:
\begin{equation}\label{eq:interface_elasticity2}
\begin{cases}
- \Div \mbf{\sigma}_i (\mbf{u}_i) &= \mbf{f} \,, \text{ on } \ \Omega_i\,,\ i=1,2, \\
\mbf{u}_2 &= \mbf{u}^g \,, \text{ on } \ \partial \Omega \,, \\
\mbf{u}_1 &= \mbf{u}_2\,, \text{ on } \ \Gamma \,, \\
\mbf{\sigma}_1(\mbf{u}_1) \mbf{n} &= \mbf{\sigma}_2(\mbf{u}_2) \mbf{n} \,, \text{ on } \ \Gamma \,.
\end{cases}
\end{equation}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=3]
\filldraw[color=black!100, fill=lightgray!40, thick]rectangle (2,2);
\filldraw[color=violet!100, fill=blue!20, thick](1,1) circle (0.65);
\draw(1,1)[color = blue!100]node{\Huge$\Omega_1$};
\draw(1,1.65)[above, color = violet!100]node{\Huge$\Gamma$};
\draw(1,0.35)[below, color = violet!100]node{$[\mbf{\sigma}(\mbf{u}) \mbf{n} ] = \mbf{0}$};
\draw(1,0.25)[below, color = violet!100]node{$[\mbf{u}] = 0$};
\draw(0.3,1.80)[color = black!100]node{\Huge$\Omega_2$};
\draw(1,0.0)[below, color = black!100]node{$\mbf{u} = \mbf{u}^g$};
\end{tikzpicture}
\caption{Geometry with the interface $\Gamma$: elasticity with multiple materials.}\label{fig:interface}
\end{figure}
We suppose that $\Omega$ is sufficiently simple-shaped so that a matching mesh $\mathcal{T}_h$ on $\Omega$ is easily available (again, this assumption is not restrictive; we have seen that a complex-shaped domain $\Omega$ can also be treated by $\phi$-FEM). On the contrary, the mesh $\mathcal{T}_{h}$ is not supposed to match the internal interface $\Gamma$, and we are going to adapt $\phi$-FEM to this situation. The starting point is the reformulation (\ref{eq:interface_elasticity2}). We are thus going to discretize $\mbf{u}_1$ on $\Omega_1$ and $\mbf{u}_2$ on $\Omega_2$ separately. To this end, we introduce two active meshes $\mathcal{T}_{h,1}$ and $\mathcal{T}_{h,2}$, sub-meshes of $\mathcal{T}_{h}$, constructed by retaining in $\mathcal{T}_{h,i}$ the cells of $\mathcal{T}_{h}$ having a non-empty intersection with $\Omega_i$. In practice, the sub-domains are defined through a level set $\phi$:
\begin{equation*}
\Omega_1 = \{ \phi > 0 \}\cap\Omega, \qquad \Omega_2 = \{ \phi < 0 \}, \qquad \Gamma = \{ \phi = 0 \}\cap\Omega\, .
\end{equation*}
The sub-meshes $\mathcal{T}_{h,i}$ are defined using a piecewise-polynomial approximation $\phi_h$ of $\phi$, rather than $\phi$ itself:
\begin{equation}\label{Th12}
\mathcal{T}_{h,1}:=\{T\in \mathcal{T}_{h}:T\cap \{\phi_h>0\}\neq \varnothing\}\text{ and }
\mathcal{T}_{h,2}:=\{T\in \mathcal{T}_{h}:T\cap \{\phi_h<0\}\neq \varnothing\}\,.
\end{equation}
We also introduce the sub-mesh $\mathcal{T}_{h}^\Gamma$ as the intersection $\mathcal{T}_{h,1}\cap\mathcal{T}_{h,2}$ and denote by $\Omega_{h,1}$, $\Omega_{h,2}$, $\Omega_h^{\Gamma}$ the domains covered by meshes $\mathcal{T}_{h,1}$, $\mathcal{T}_{h,2}$, $\mathcal{T}_{h}^\Gamma$ respectively. Similarly to the simpler settings considered above, the unknowns $\mbf{u}_1$ and $\mbf{u}_2$, living physically on $\Omega_1$ and $\Omega_2$, will be discretized on larger domains $\Omega_{h,1}$ and $\Omega_{h,2}$, introducing artificial extensions on narrow fictitious strips near $\Gamma$. On the discrete level, the unknowns will be thus redoubled on the joint sub-mesh $\mathcal{T}_{h}^\Gamma$. Several auxiliary unknowns will be introduced on $\Omega_h^\Gamma$ similar to the case of mixed boundary conditions above (indeed, we have to discretize both Dirichlet and Neumann conditions on the interface $\Gamma$ in the current setting).
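The construction (\ref{Th12}) can again be sketched from the vertex values of a piecewise-linear $\phi_h$ (a minimal NumPy illustration with hypothetical names; the actual implementation additionally duplicates the degrees of freedom on $\mathcal{T}_h^\Gamma$):

```python
import numpy as np

def split_interface_meshes(phi_vertex, cells):
    """Build the active sub-meshes of criterion (Th12) from the vertex
    values of a P1 level set phi_h defining the interface.

    A cell goes to T_{h,1} if it meets {phi_h > 0}, to T_{h,2} if it
    meets {phi_h < 0}; T_h^Gamma is their intersection.
    """
    vals = phi_vertex[cells]             # (n_cells, 3) values per cell
    in_Th1 = vals.max(axis=1) > 0        # meets {phi_h > 0}
    in_Th2 = vals.min(axis=1) < 0        # meets {phi_h < 0}
    in_ThGamma = in_Th1 & in_Th2         # crossed by the interface
    return in_Th1, in_Th2, in_ThGamma
```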
We now put the program above into equations, first on the continuous level. Similarly to (\ref{weakExt}), the unknowns $\mbf{u}_i$ extended to the larger domains $\Omega_{h,i}$ formally satisfy the variational formulations, cf. the first equation in (\ref{eq:interface_elasticity2}):
\begin{equation}\label{weakExtomi}
\int_{\Omega_{h,i}} \mbf{\sigma}_i (\mbf{u}_i) : \nabla\mbf{v}_i
- \int_{\partial\Omega_{h,i}} \mbf{\sigma}_i (\mbf{u}_i) \mbf{n}_i \cdot \mbf{v}_i
= \int_{\Omega_{h,i}} \mbf{f} \cdot \mbf{v}_i, \quad \forall \mbf{v}_i \text{ on }\Omega_{h,i}\text{ s.t. }\mbf{v}_i=\mbf{0}\text{ on }\partial\Omega\,.
\end{equation}
Here, with a slight abuse of notation, ${\partial\Omega_{h,i}}$ denotes the component of the boundary of $\Omega_{h,i}$ other than $\partial\Omega$, and $\mbf{n}_i$ denotes the unit normal vector on $\partial\Omega_{h,i}$ pointing outside $\Omega_{h,i}$. The boundary conditions on the external boundary $\partial\Omega$, i.e. the second equation in (\ref{eq:interface_elasticity2}), will be imposed strongly. The remaining equations in (\ref{eq:interface_elasticity2}), i.e. the interface conditions on $\Gamma$, will be imposed by introducing auxiliary variables on $\Omega_h^\Gamma$: the vector-valued $\mbf{p}$ (similar to the dual version of $\phi$-FEM for the Dirichlet boundary conditions above) and the matrix-valued $\mbf{y}_1$, $\mbf{y}_2$ (similar to $\phi$-FEM for the Neumann boundary conditions). This gives, cf. the last two equations in \eqref{eq:interface_elasticity2}:
\begin{align}
\label{InterMix1}
\mbf{u}_1 - \mbf{u}_2 + \mbf{p}\phi = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma}, \\
\label{InterMix2}
\mbf{y}_i + \mbf{\sigma}_i (\mbf{u}_i) = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma},\ i=1,2, \\
\label{InterMix3}
\mbf{y}_1 \nabla \phi- \mbf{y}_2\nabla \phi = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma} \,.
\end{align}
Equation (\ref{InterMix3}) above extends the last equation in (\ref{eq:interface_elasticity2}) from $\Gamma$ to $\Omega_h^\Gamma$ since the normal on $\Gamma$ is collinear with the vector $\nabla\phi$ there.\phantom{\ref{InterMix2}}
We are now going to discretize equations (\ref{weakExtomi})--(\ref{InterMix3}). We fix an integer $k\ge 1$ and introduce the FE spaces for the primary variables $\mbf{u}_i$:
\begin{multline}\label{espaceVhi}
V_{h,i} := \big\lbrace \mbf{v}_h:\Omega_{h,i}\to\mathbb{R}^d : \mbf{v}_{h |T} \in \mathbb{P}^k(T)^d \ \ \forall T \in \mathcal{T}_h, \ \mbf{v}_h\text{ continuous on }\Omega_{h,i}\,, \\
\text{ and } \mbf{v}_h = I_h\mbf{u}^g\ \text{ on } \partial \Omega \big\rbrace \,
\end{multline}
with the standard FE interpolation $I_h$, and their homogeneous counterparts $V_{h,i}^0$ with the constraint $\mbf{v}_h = \mbf{0}\ \text{ on } \partial \Omega$, to be used for the test functions. We recall moreover the spaces $Q_{h}(\Omega_h^{\Gamma})$ and $Z_h(\Omega_h^{\Gamma})$ defined by (\ref{spaceQh}) and (\ref{espaceZh}), respectively. Combining (\ref{weakExtomi}) with (\ref{InterMix1})--(\ref{InterMix3}), taken in the least squares sense, gives the following scheme:\\
find $\mbf{u}_{h,1}\in V_{h,1}$, $\mbf{u}_{h,2}\in V_{h,2}$, $\mbf{p}_h \in Q_{h}^k(\Omega_h^{\Gamma})$, $\mbf{y}_{h,1}, \mbf{y}_{h,2} \in Z_h(\Omega_h^{\Gamma})$ such that,
\begin{multline}\label{discreteInterf}
\sum_{i=1}^2 \int_{\Omega_{h,i}} \mbf{\sigma}_i(\mbf{u}_{h,i}):\nabla\mbf{v}_{h,i}
+ \sum_{i=1}^2\int_{\partial\Omega_{h,i}}\mbf{y}_{h,i}\mbf{n} \cdot \mbf{v}_{h,i} \\
+ \frac{\gamma_p}{h^2} \int_{\Omega_h^{\Gamma}} (\mbf{u}_{h,1} - \mbf{u}_{h,2} + \frac{1}{h} \mbf{p}_h\phi_h)\cdot( \mbf{v}_{h,1} - \mbf{v}_{h,2} + \frac{1}{h} \mbf{q}_h\phi_h) \\
+ \gamma_u \sum_{i=1}^2\int_{\Omega_h^{\Gamma}}(\mbf{y}_{h,i}+ \mbf{\sigma}_i(\mbf{u}_{h,i})) : (\mbf{z}_{h,i}+ \mbf{\sigma}_i(\mbf{v}_{h,i})) \\
+ \frac{\gamma_y}{h^2} \int_{\Omega_h^{\Gamma}} ( \mbf{y}_{h,1}\nabla \phi_h - \mbf{y}_{h,2}\nabla \phi_h )\cdot( \mbf{z}_{h,1}\nabla \phi_h- \mbf{z}_{h,2} \nabla \phi_h) \\
+ \sum_{i=1}^2 \left(G_h(\mbf{u}_{h,i}, \mbf{v}_{h,i}) + J_h^{lhs,N} (\mbf{y}_{h,i}, \mbf{z}_{h,i}) \right)
= \sum_{i=1}^2\int_{\Omega_{h,i}} \mbf{f} \cdot \mbf{v}_{h,i}
+ \sum_{i=1}^2 J_h^{rhs,N} (\mbf{z}_{h,i}) \,, \\\quad \forall \,
\mbf{v}_{h,1}\in V_{h,1}^0, \mbf{v}_{h,2}\in V_{h,2}^0, \mbf{q}_h \in Q_{h}^k(\Omega_h^{\Gamma}), \mbf{z}_{h,1}, \mbf{z}_{h,2} \in Z_h(\Omega_h^{\Gamma}) \,.
\end{multline}
Similarly to the previous settings, we have added here the ghost stabilization $G_h$ defined by (\ref{Gh}) and the additional stabilizations $J_h^{lhs,N}$, $J_h^{rhs,N}$ defined by (\ref{JhN}) with $\Omega_h^{\Gamma_N}$ replaced by $\Omega_h^{\Gamma}$, imposing $\Div\mbf{y}_i=\mbf{f}$ on $\Omega_h^{\Gamma}$ in the least squares sense.
\begin{figure}[btp]
\centering
\includegraphics[width=0.45\textwidth]{mesh_interface_phi_fem.png}
\includegraphics[width=0.45\textwidth]{mesh_interface_standard.png}
\caption{Linear elasticity with multiple materials. Left: a mesh used for $\phi$-FEM ($\Omega_h^{\Gamma}$ painted in yellow); Right: a mesh matching the interface for standard FEM (yellow and white represent the two materials). }\label{fig:interface mesh}
\end{figure}
\begin{figure}[btp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$, ylabel = relative error,
legend style = { at={(1,1.4)}}]
\addplot coordinates{
(0.14142135623730964,5.675910662284101e-05)
(0.07071067811865482,6.2668613054719945e-06)
(0.03535533905932741,7.89051810290236e-07)
(0.01767766952966378,9.732876356767048e-08)
(0.00883883476483197,1.2751093753131592e-08)
};
\addplot coordinates{
(0.14142135623730964,0.0005631985529905538)
(0.07071067811865482,0.00013288249461713975)
(0.03535533905932741,3.2590636010544636e-05)
(0.01767766952966378,8.08676312960304e-06)
(0.00883883476483197,2.014362599383071e-06)};
\addplot coordinates{
(0.1552914270615126,0.011092067908400144)
(0.07940855385524764,0.002876516858202795)
(0.03977439957208735,0.0007593967526539493)
(0.019883151483778883,0.00018689260740527478)
(0.009943258702793148,4.682948959033794e-05)};
\addplot coordinates{
(0.1552914270615126,0.040116831985460386)
(0.07940855385524764,0.014226141479733757)
(0.03977439957208735,0.005225673219304294)
(0.019883151483778883,0.0018109814763434579)
(0.009943258702793148,0.0006366421849599564)};
\logLogSlopeTriangle{0.3}{0.2}{0.3}{2}{red};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{3}{blue};
\legend{$L^2$ error $\phi$-FEM,$H^1$ error $\phi$-FEM,$L^2$ error standard FEM,$H^1$ error standard FEM}
\end{loglogaxis}
\begin{loglogaxis}[name = ax2, at = {(ax1.south east)}, xshift = 2cm, width = .45\textwidth,
ylabel = Computing time (s), xlabel = $L^2$ relative error,
legend style = { at={(1,1.1)}, legend columns =2,
/tikz/column 2/.style={column sep = 10pt}}]
\addlegendentry{Standard FEM}
\addplot coordinates{
(0.03945192586664578,0.15355443954467773)
(0.011092067908400144,0.24408245086669922)
(0.002876516858202795,0.5149095058441162)
(0.0007594702182368449,1.0383923053741455)
(0.00018690079814396443,3.6394455432891846)
};
\addlegendentry{$\phi$-FEM}
\addplot coordinates{
(5.675910662284101e-05,0.26125335693359375)
(6.2668613054719945e-06,0.37996554374694824)
(7.89051810290236e-07,0.7896356582641602)
(9.732876356767048e-08,3.172820568084717)
(1.2751093753131592e-08,9.983308553695679)
};
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with multiple materials. Left: $H^1$ and $L^2$ relative error obtained with $\phi$-FEM and the standard FEM. Right: computing times for $\phi$-FEM and the standard FEM.}\label{fig:error_interface}
\end{figure}
\paragraph*{Test case: }
Consider $\Omega=(0,1)^2$ and $\Omega_1$, $\Omega_2$ defined by the level set $\phi$
\begin{equation*}
\phi(x,y) = - R^2 + (x-0.5)^2 + (y-0.5)^2\,,
\end{equation*}
with $R=0.3$ as illustrated on Fig.~\ref{fig:interface}. We want to solve \eqref{eq:interface_elasticity} with the manufactured radial solution
\[
\mbf{u}=\mbf{u}_{ex} = \left\{\begin{array}{ll}
\frac{1}{E_1}(\cos(r)-\cos(R))(1,1)^T&\mbox{ if }r<R,\\
\frac{1}{E_2}(\cos(r)-\cos(R))(1,1)^T&\mbox{ else,}
\end{array}\right.\,,
\]
where $r=\sqrt{(x-0.5)^2+(y-0.5)^2}$.
Thus
\[\mbf{f}=-\Div\,\mbf{\sigma}_1\big((\cos(r)-\cos(R))(1,1)^T\big)/E_1\]
and $\mbf{u}_g=\mbf{u}_{ex}$.
The material parameters are given by $E_1 = 7$, $E_2=2.28$ and $\nu_1 = \nu_2 = 0.3$. The meshes used for $\phi$-FEM and for the standard FEM are illustrated in Fig.~\ref{fig:interface mesh}. In the latter case, the mesh should resolve the interface $r=R$, so that the solution $\mbf{u}_h\in V_h$ is obtained by the straightforward scheme
\begin{equation}\label{FEMinterf}
\sum_{i=1}^2 \int_{\Omega_{h,i}} \mbf{\sigma}_i(\mbf{u}_h):\nabla\mbf{v}_h
= \int_{\Omega} \mbf{f} \cdot \mbf{v}_h,~\forall~ \mbf{v}_h\in V_h^0\,,
\end{equation}
where $V_h$ is the conforming $\mathbb{P}^k$ FE space approximating $\mbf{u}_g$ on $\partial\Omega$ and $V_h^0$ is its homogeneous analogue. The results obtained with $\phi$-FEM (\ref{discreteInterf}) and FEM (\ref{FEMinterf}) using $\mathbb{P}^2$ piecewise polynomials ($k=2$) are reported in Fig.~\ref{fig:error_interface}. The conclusions remain the same as in the previous setting: $\phi$-FEM is more precise on comparable meshes and less expensive in terms of the computing times for a given error tolerance.
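As a sanity check, the convergence orders visible on the left of Fig.~\ref{fig:error_interface} can be estimated directly from consecutive $(h,\text{error})$ pairs via the usual formula $\log(e_1/e_2)/\log(h_1/h_2)$. The sketch below (plain Python; the helper name `eoc` is ours) applies it to the $\phi$-FEM $L^2$ data plotted above; for $\mathbb{P}^2$ elements one expects rates close to $3$.

```python
import math

def eoc(data):
    """Estimated order of convergence between consecutive (h, error) pairs."""
    return [math.log(e1 / e2) / math.log(h1 / h2)
            for (h1, e1), (h2, e2) in zip(data, data[1:])]

# phi-FEM L2 errors for the multi-material test (values from the figure above)
phi_fem_l2 = [
    (0.14142135623730964, 5.675910662284101e-05),
    (0.07071067811865482, 6.2668613054719945e-06),
    (0.03535533905932741, 7.89051810290236e-07),
    (0.01767766952966378, 9.732876356767048e-08),
]
rates = eoc(phi_fem_l2)
print([round(r, 2) for r in rates])  # rates settle close to 3
```

The same helper applied to the standard-FEM data reproduces the lower rates visible in the figure.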
\section{Linear elasticity with cracks }\label{sectFracture}
We now want to consider the linear elasticity problem posed on a cracked domain $\Omega\setminus\Gamma_f$ with $\Gamma_f$ being a line (a surface) inside $\Omega$:
\begin{equation}\label{eq:fracture_elasticity0}
\begin{cases}
- \Div \mbf{\sigma} (\mbf{u}) &= \mbf{f} \,, \text{ on } \ \Omega\setminus\Gamma_f \,, \\
\mbf{u} &= \mbf{u}^g \,, \text{ on } \ \partial \Omega \,, \\
\mbf{\sigma}(\mbf{u})\mbf{n} &= \mbf{g} \,, \text{ on } \ \Gamma_{f} \,.
\end{cases}
\end{equation}
This problem is actually what XFEM was originally designed for, cf. \cite{moes}. We are now going to adapt $\phi$-FEM to it.
\begin{figure}[b]
\centering
\begin{tikzpicture}
\begin{axis}[
xmin = 0, xmax = 1,
ymin = 0, ymax = 1.0, xtick={0,1},ytick={1}, width = 0.45\textwidth]
\addplot[ domain = 0:0.5,samples = 1000, line width=1pt, cyan] { 0.25*( sin(deg(2*pi*x))) + 0.5}
node [pos=0.5, above left]{\Large$\Gamma_{int}$};
\addplot[ domain = 0.5:1,samples = 1000, line width=1pt, red] { 0.25*( sin(deg(2*pi*x))) + 0.5}
node[pos = 0.7, above left]{\Large$\Gamma_f$};
\draw(0.7,0.7)node[black]{\Large$\Omega_1$};
\draw(0.3,0.3)node[black]{\Large$\Omega_2$};
\end{axis}
\draw(2.5,-0.1)node[below]{$\mbf{u} = \mbf{u}^g$};
\end{tikzpicture}
\caption{Geometry notations to represent the crack. $\Gamma_{int}$ and $\Gamma_f$ represent the fictitious interface and the actual crack, respectively.}\label{fig:figurefracture_phifem}
\end{figure}
In practice, the crack geometry is given by the primary level set $\phi$ (to locate the line or surface of the crack) and the secondary level set $\psi$ (to locate the tip or the front of the crack):
$$
\Gamma_{f} := \Omega \cap \lbrace \phi=0 \rbrace \cap \lbrace \psi < 0 \rbrace \,.
$$
To fix ideas, let us suppose that the line (surface) $\Gamma:=\{\phi=0\}$ splits $\Omega$ into two sub-domains $\Omega_1$ and $\Omega_2$, characterized by $\{\phi<0\}$ and $\{\phi>0\}$ respectively, as illustrated in Fig.~\ref{fig:figurefracture_phifem}. The interface $\Gamma$ thus consists of the fracture location $\Gamma_f$ and the remaining (fictitious) part $ \Gamma_{int}$:
\[ \Gamma_{int} := \Omega \cap \lbrace \phi=0 \rbrace \cap \lbrace \psi > 0 \rbrace \,.
\]
In order to reuse the $\phi$-FEM scheme (\ref{discreteInterf}) introduced for the interface problem above, we reformulate problem (\ref{eq:fracture_elasticity0}) in terms of two separate unknowns $\mbf{u}_i=\mbf{u} |_{\Omega_i}$, $i=1,2$:
\begin{equation}\label{eq:fracture_elasticity}
\begin{cases}
- \Div \mbf{\sigma} (\mbf{u}_i) &= \mbf{f} \,, \text{ on } \ \Omega_i \,, \\
\mbf{u}_i &= \mbf{u}^g \,, \text{ on } \ \partial \Omega \,, \\
[\mbf{u} ] &= 0\,, \text{ on } \ \Gamma_{int} \,, \\
[\mbf{\sigma}(\mbf{u})\mbf{n}] &= 0 \,, \text{ on } \ \Gamma_{int} \,, \\
\mbf{\sigma}(\mbf{u})\mbf{n} &= \mbf{g} \,, \text{ on } \ \Gamma_{f} \,.
\end{cases}
\end{equation}
We are interested again in a situation where $\Omega$ is sufficiently simple-shaped so that a matching mesh $\mathcal{T}_h$ on $\Omega$ is easily available, but this mesh does not match the internal interface $\Gamma$. As in the preceding section, we are thus going to discretize separately $\mbf{u}_1$ on $\Omega_1$ and $\mbf{u}_2$ on $\Omega_2$ starting from the reformulation (\ref{eq:fracture_elasticity}). To this end, we introduce two active sub-meshes $\mathcal{T}_{h,1}$, $\mathcal{T}_{h,2}$ as in (\ref{Th12}), based on the piecewise polynomial approximation $\phi_h$ of $\phi$.
We also introduce the interface mesh $\mathcal{T}_{h}^\Gamma=\mathcal{T}_{h,1}\cap\mathcal{T}_{h,2}$\,, which we further split into two sub-meshes with respect to the secondary level set $\psi$, similarly to our treatment of the mixed boundary conditions, cf. (\ref{ThGamDN}):
\begin{equation*}
\mathcal{T}_h^{\Gamma_f} := \{ T \in \mathcal{T}_h^{\Gamma} : \psi\leqslant 0\text{ on }T \} \qquad \text{ and } \qquad
\mathcal{T}_h^{\Gamma_{int}} := \{ T \in \mathcal{T}_h^{\Gamma} : \psi\geqslant 0\text{ on }T \}\,.
\end{equation*}
Note that there may be some cells in $\mathcal{T}_h^{\Gamma}$ that belong to neither $\mathcal{T}_h^{\Gamma_f}$ nor $\mathcal{T}_h^{\Gamma_{int}}$. This is illustrated by the mesh example on the right of Fig.~\ref{fig:meshes fracture}, where the cells in $\mathcal{T}_h^{\Gamma_f}$ and $\mathcal{T}_h^{\Gamma_{int}}$ are painted in red and blue respectively, while the remaining cells of $\mathcal{T}_h^{\Gamma}$, falling in neither category, are painted in yellow. These are the cells intersected by the line $\{\psi=0\}$; in particular, the crack tip lies inside one of the yellow cells.
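This cell marking is straightforward to implement once $\psi$ is available at the mesh vertices (for a $\mathbb{P}^1$ level set, the sign of $\psi$ on a simplex is determined by its vertex values). A minimal Python sketch with hypothetical data structures (a cell is a tuple of vertex indices):

```python
def classify_cells(cells, psi):
    """Split the interface cells by the sign of the secondary level set psi.

    cells : list of vertex-index tuples (the cells of T_h^Gamma)
    psi   : dict of psi values at mesh vertices
    Returns (crack, fictitious, unclassified) corresponding to
    T_h^{Gamma_f}, T_h^{Gamma_int} and the remaining 'yellow' cells.
    """
    crack, fictitious, unclassified = [], [], []
    for cell in cells:
        vals = [psi[v] for v in cell]
        if all(v <= 0 for v in vals):
            crack.append(cell)         # psi <= 0 on T
        elif all(v >= 0 for v in vals):
            fictitious.append(cell)    # psi >= 0 on T
        else:
            unclassified.append(cell)  # cut by {psi = 0}: the tip region

    return crack, fictitious, unclassified

# toy example: three triangles, psi changing sign inside the middle one
psi = {0: -1.0, 1: -0.5, 2: -0.2, 3: 0.4, 4: 0.9, 5: 1.3}
cells = [(0, 1, 2), (2, 3, 1), (3, 4, 5)]
print(classify_cells(cells, psi))
```

The middle triangle ends up unclassified, exactly like the yellow cells of Fig.~\ref{fig:meshes fracture}.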
Everything is now set up to adapt the $\phi$-FEM approaches of the two preceding sections to the equations (\ref{eq:fracture_elasticity}). We choose an integer $k\ge 1$ and introduce first the FE spaces $V_{h,1}$, $V_{h,2}$ together with their homogeneous counterparts $V_{h,1}^0$, $V_{h,2}^0$ as in (\ref{espaceVhi}) to approximate $\mbf{u}_1$ and $\mbf{u}_2$. These will be used in the discretization of the variational formulation of the first equation in (\ref{eq:fracture_elasticity}) together with the boundary conditions on $\partial\Omega$. The remaining equations in (\ref{eq:fracture_elasticity}), i.e. the relations on $\Gamma_{int}$ and $\Gamma_f$, will be treated by the introduction of auxiliary variables on the appropriate parts of $\Omega_h^\Gamma$ (the domain of the mesh $\mathcal{T}_{h}^\Gamma$):
\begin{itemize}
\item the vector-valued unknown $\mbf{p}$ and the matrix-valued unknowns $\mbf{y}_1$, $\mbf{y}_2$ on $\Omega_h^{\Gamma_{int}}$ (the domain of the mesh $\mathcal{T}_{h}^{\Gamma_{int}}$). These will serve to impose the continuity of both the displacement and the normal force on ${\Gamma_{int}}$ through the equations
\begin{align*}
\mbf{u}_1 - \mbf{u}_2 + \mbf{p}\phi = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma_{int}}\,, \\
\mbf{y}_i = - \mbf{\sigma} (\mbf{u}_i) \,, \quad &\text{ on } \ \Omega_h^{\Gamma_{int}}\,, \\
\mbf{y}_1 \nabla \phi- \mbf{y}_2 \nabla \phi = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma_{int}} \,,
\end{align*}
which are exactly the same as (\ref{InterMix1})--(\ref{InterMix3}), with the only exception that they are posed on the appropriate portion of $\Omega_h^\Gamma$ rather than on the entire $\Omega_h^\Gamma$. These variables will be discretized in the FE spaces $Q_{h}^k(\Omega_h^{\Gamma_{int}})$ for $\mbf{p}$ and $Z_h(\Omega_h^{\Gamma_{int}})$ for $\mbf{y}_1$, $\mbf{y}_2$, defined by (\ref{spaceQh}) and (\ref{espaceZh}) respectively.
\item the vector-valued unknowns $\mbf{p}_i^N$ and the matrix-valued unknowns $\mbf{y}_i^N$, $i=1,2$, on $\Omega_h^{\Gamma_f}$ (the domain of the mesh $\mathcal{T}_{h}^{\Gamma_f}$). These will serve to impose the Neumann boundary conditions on both sides of ${\Gamma_f}$ through the equations
\begin{align*}
\mbf{y}_i^N = - \mbf{\sigma} (\mbf{u}_i) \,, \quad &\text{ on } \ \Omega_h^{\Gamma_{f}}\,, \\
\mbf{y}_i^N \nabla \phi + \mbf{p}_i^N \phi + \mbf{g} |\nabla \phi | = 0 \,, \quad &\text{ on } \ \Omega_h^{\Gamma_{f}} \,,
\end{align*}
which are exactly the same as (\ref{phiN}a-b), with the only exception that the domain is renamed from $\Omega_h^{\Gamma_N}$ to $\Omega_h^{\Gamma_f}$. These variables will be discretized in the FE spaces $Q_{h}^{k-1}(\Omega_h^{\Gamma_f})$ for $\mbf{p}_i^N$ and $Z_{h}(\Omega_h^{\Gamma_f})$ for $\mbf{y}_i^N$, defined again by (\ref{spaceQh}) and (\ref{espaceZh}) respectively.
\end{itemize}
Note that the combination of equations above does not impose the appropriate interface conditions on the whole of $\Gamma$ since the latter may not be completely covered by $\Omega_h^{\Gamma_f}\cup\Omega_h^{\Gamma_{int}}$. Fortunately, this defect of the formulation on the continuous level can be repaired on the discrete level by adding an appropriate stabilization to the FE discretization, similarly to what we have already seen in the setting with mixed boundary conditions.
All this results in the following FE scheme:
find $\mbf{u}_{h,1}\in V_{h,1}$, $\mbf{u}_{h,2}\in V_{h,2}$, $\mbf{p}_h\in Q_{h}^{k}(\Omega_h^{\Gamma_{int}})$, $\mbf{y}_{h,1}, \mbf{y}_{h,2}\in Z_{h}(\Omega_h^{\Gamma_{int}})$, $\mbf{p}_{h,1}^N, \mbf{p}_{h,2}^N \in Q_{h}^{k-1}(\Omega_h^{\Gamma_f})$, $\mbf{y}_{h,1}^N,\mbf{y}_{h,2}^N\in Z_{h}(\Omega_h^{\Gamma_f})$ such that
\begin{multline}
\sum_{i = 1}^2 \big( \int_{\Omega_{h, i}} \mbf{\sigma} (\mbf{u}_{h, i}) : \mbf{\nabla} \mbf{v}_{h, i} + \int_{\partial \Omega_{h, i, int}} \mbf{y}_{h, i} \mbf{n} \cdot \mbf{v}_{h, i} + \int_{\partial \Omega_{h, i, f}} \mbf{y}_{h, i}^N \mbf{n} \cdot \mbf{v}_{h, i} \\
- \int_{\partial\Omega_{h, i}\setminus(\partial\Omega_{h, i, int}\cup\partial\Omega_{h, i, f})} \mbf{\sigma} (\mbf{u}_{h, i})\mbf{n} \cdot \mbf{v}_{h, i} \big) \\
+ \frac{\gamma_p}{h^2} \int_{\Omega_h^{\Gamma_{int}}} (\mbf{u}_{h, 1} - \mbf{u}_{h, 2} + \frac{1}{h} \mbf{p}_h \phi_h) \cdot (\mbf{v}_{h, 1} - \mbf{v}_{h, 2} + \frac{1}{h} \mbf{q}_h \phi_h) \\
+ \gamma_u \sum_{i = 1}^2 \int_{\Omega_h^{\Gamma_{int}}} (\mbf{y}_{h, i} + \mbf{\sigma} (\mbf{u}_{h, i})) : (\mbf{z}_{h, i} + \mbf{\sigma} (\mbf{v}_{h, i})) \\
+ \frac{\gamma_y}{h^2} \int_{\Omega_h^{\Gamma_{int}}} ( \mbf{y}_{h,1}\nabla \phi_h - \mbf{y}_{h,2}\nabla \phi_h )\cdot( \mbf{z}_{h,1}\nabla \phi_h- \mbf{z}_{h,2} \nabla \phi_h)
\\
+ \gamma_{u, N} \sum_{i = 1}^2 \int_{\Omega_h^{\Gamma_f}} (\mbf{y}_{h, i}^N + \mbf{\sigma} (\mbf{u}_{h, i})) : (\mbf{z}_{h, i}^N + \mbf{\sigma} (\mbf{v}_{h, i})) \\ + \frac{\gamma_{p, N}}{h^2} \sum_{i = 1}^2 \int_{\Omega_h^{\Gamma_f}} (\mbf{y}_{h, i}^N \nabla \phi_h + \frac{1}{h} \mbf{p}_{h, i}^N \phi_h) \cdot (\mbf{z}_{h, i}^N \nabla \phi_h + \frac{1}{h} \mbf{q}_{h, i}^N \phi_h) \\
+ \sum_{i = 1}^2 \left( G_h \left( \mbf{u}_{h, i}, \mbf{v}_{h, i} \right)
+ J_h^{{lhs}, {int}} \left( \mbf{y}_{h, i}, \mbf{z}_{h, i} \right)
+ J_h^{{lhs}, f} \left( \mbf{y}_{h, i}^N, \mbf{z}_{h, i}^N \right) \right) \\
= \sum_{i = 1}^2 \int_{\Omega_{h, i}} \mbf{f} \cdot \mbf{v}_{h, i} - \frac{\gamma_{p, N}}{h^2} \sum_{i = 1}^2 \int_{\Omega_h^{\Gamma_f}} \mbf{g} | \nabla \phi_h | (\mbf{z}_{h, i}^N \nabla \phi_h + \frac{1}{h} \mbf{q}_{h, i}^N \phi_h) \\
+ \sum_{i = 1}^2 \left(
J_h^{{rhs}, {int}} \left( \mbf{z}_{h, i} \right)
+ J_h^{{rhs}, f} \left( \mbf{z}_{h, i}^N \right) \right), \\
\forall \mbf{v}_{h, 1} \in V_{h, 1}^0, \mbf{v}_{h, 2} \in V_{h, 2}^0, \mbf{q}_h \in Q_{h}^{k}(\Omega_h^{\Gamma_{int}}), \mbf{z}_{h, 1}, \mbf{z}_{h, 2} \in Z_h(\Omega_h^{\Gamma_{int}}), \mbf{q}_{h, 1}^N, \mbf{q}_{h, 2}^N \in Q_{h}^{k-1}(\Omega_h^{\Gamma_f}),\\ \mbf{z}_{h, 1}^N, \mbf{z}_{h, 2}^N \in Z_{h}(\Omega_h^{\Gamma_f}) \,.
\label{discreteCrack}
\end{multline}
As usual, we have added here the ghost stabilization $G_h$ (\ref{Gh}) and the additional stabilizations $J_h^{lhs,int}$, $J_h^{lhs,f}$
(accompanied by their counterparts on the right-hand side for consistency) that are copied from $J_h^{lhs,N}$ in (\ref{JhN}) but adjusted to the corresponding sub-meshes:
$$ J_h^{lhs,int}(\mbf{y},\mbf{z}) =\gamma_{div} \int_{\Omega_h^{\Gamma_{int}}}\Div \mbf{y} \cdot \Div \mbf{z}
\,,
\quad J_h^{lhs,f}(\mbf{y},\mbf{z}) =\gamma_{div} \int_{\Omega_h^{\Gamma_f}}\Div \mbf{y} \cdot \Div \mbf{z}\,.
$$
The boundary integrals are rewritten in terms of $\mbf{y}_i$, $\mbf{y}_i^N$ wherever possible. We have denoted here by $\partial\Omega_{h,i}$ the part of the boundary of $\Omega_{h,i}$ other than $\partial\Omega$, and introduced $\partial\Omega_{h,i,int}$ as the part of $\partial\Omega_{h,i}$ formed by the boundary facets of $\mathcal{T}_{h,i}$ belonging to the cells in $\mathcal{T}_{h}^{\Gamma_{int}}$; $\partial\Omega_{h,i,f}$ is defined similarly with respect to $\mathcal{T}_{h}^{\Gamma_f}$.
\begin{figure}[btp]
\centering
\includegraphics[width=0.45\textwidth]{mesh_fracture_conform.png}
\includegraphics[width=0.45\textwidth]{mesh_fracture_non_conform.png}
\caption{Test case with a crack, meshes used for $\phi$-FEM. Left: a mesh resolving the crack tip; the cells in $\mathcal{T}_{h}^{\Gamma_{int}}$ in blue; the cells in $\mathcal{T}_{h}^{\Gamma_{f}}$ in red. Right: a mesh not resolving the crack tip; in addition to blue and red cells, there are yellow cells not belonging to $\mathcal{T}_{h}^{\Gamma_{int}}$ or $\mathcal{T}_{h}^{\Gamma_{f}}$. }\label{fig:meshes fracture}
\end{figure}
\begin{figure}[btp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = plot, width = .45\textwidth, ymin=3e-8, ymax=5e-3, xlabel = $h$, ylabel = Relative error,legend pos=north west]
\addplot coordinates{
(0.07071067811865482,9.777452793013566e-05)
(0.03535533905932741,1.0419211256548806e-05)
(0.01767766952966378,2.967883868554893e-06)
(0.00883883476483197,4.3827759106063176e-07)
(0.004419417382415985,5.625169839168734e-08)};
\addplot coordinates{
(0.07071067811865482,0.0006052966265906567)
(0.03535533905932741,0.00011243253507143713)
(0.01767766952966378,3.6174590887733385e-05)
(0.00883883476483197,5.723173873151866e-06)
(0.004419417382415985,8.466975252389401e-07)};
\logLogSlopeTriangle{0.53}{0.2}{0.46}{2}{red};
\logLogSlopeTriangle{0.53}{0.2}{0.23}{3}{blue};
\legend{$L^2$ error, $H^1$ error}
\end{loglogaxis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{loglogaxis}[name = plot, width = .45\textwidth, ymin=3e-8, ymax=5e-3, xlabel = $h$, ylabel = Relative error,legend pos=north west]
\addplot coordinates{
(0.06734350297014746,0.00011415468313107934)
(0.034493013716416984,1.7495400368412347e-05)
(0.017459426695964213,2.954050638336371e-06)
(0.008783935170019241,4.182817999120714e-07)
(0.004405649727018988,5.5993729055912355e-08)
};
\addplot coordinates{
(0.06734350297014746,0.0006725963282258503)
(0.034493013716416984,0.00013063709444329212)
(0.017459426695964213,3.121288024010827e-05)
(0.008783935170019241,5.393354911850969e-06)
(0.004405649727018988,8.035075055809038e-07)
};
\logLogSlopeTriangle{0.53}{0.2}{0.46}{2}{red};
\logLogSlopeTriangle{0.53}{0.2}{0.23}{3}{blue};
\legend{$L^2$ error, $H^1$ error}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case with a crack, $H^1$ and $L^2$ relative errors. Left: on meshes resolving the crack tip. Right: on meshes not resolving the crack tip. }\label{fig:convergence_fracture}
\end{figure}
\paragraph*{Test case: }
Let $\Omega = (0,1)^2$ and the interface $\Gamma$ be given by the level set
\begin{equation*}
\phi (x,y) = y - \frac{1}{4} \sin(2 \pi x) - \frac{1}{2}\,.
\end{equation*}
We choose the crack tip to be at $x=0.5$ so that
\[ \Gamma_{int} := \{\phi=0\}\cap \lbrace x < 0.5 \rbrace \ \quad \text{ and } \quad \Gamma_{f} := \{\phi=0\}\ \cap \lbrace x > 0.5 \rbrace \,. \]
This is the setting represented in Fig.~\ref{fig:figurefracture_phifem}.
We use the $\phi$-FEM (\ref{discreteCrack}) to solve \eqref{eq:fracture_elasticity0} with the manufactured solution
\[ \mbf{u} = \mbf{u}_{ex} = (\sin(x) \exp(y), \sin(y) \exp(x))^T \]
which gives $\mbf{f}$, $\mbf{g}$, and $\mbf{u}^g$ by substitution. The force on the crack $\mbf{g}$ must be extended to a vicinity of $\Gamma_f$; we implement this extension as
\[
\mbf{g} = \mbf{\sigma}(\mbf{u}_{ex}) \frac{\nabla \phi}{\| \nabla \phi\|} + \phi\mbf{u}_{ex} \,. \]
We choose $\gamma_u = \gamma_p = \gamma_{div} = \gamma_{u,N} = \gamma_{p,N} = \gamma_{div,N} = 1.0$, $\sigma_p = 1.0$ and $\sigma_D = 20.0$.
We have conducted two series of numerical experiments using the $\phi$-FEM (\ref{discreteCrack}) with $\mathbb{P}^2$ Lagrange polynomials ($k=2$) on the families of meshes presented in Fig.~\ref{fig:meshes fracture}, either resolving the crack tip (the mesh on the left) or not (the mesh on the right). The results are reported in Fig.~\ref{fig:convergence_fracture}. We see that $\phi$-FEM converges optimally, giving very similar results on both types of meshes.
\section{Heat equation}\label{sectHeat}
We finally demonstrate the applicability of the $\phi$-FEM approach to time-dependent problems. We take the example of the heat equation with Dirichlet boundary conditions: given a bounded domain $\Omega\subset\mathbb{R}^d$, the initial condition $u^0$ on $\Omega$, and the final time $T>0$, find the scalar field $u=u(x,t)$ such that
\begin{equation}\label{eq:heat}
\left\{\begin{array}{ll}
u_t - \Delta u = f& \mbox{ in } \Omega\times(0,T), \\
u=0&\mbox{ on }\Gamma\times(0,T), \\
u(.,0)=u^0&\mbox{ in }\Omega.
\end{array}\right.
\end{equation}
We are interested again in the situation where a fitting mesh of $\Omega$ is not available. We rather assume that $\Omega$ is inscribed in a box ${\mathcal{O}}$ which is covered by a simple background mesh $\mathcal{T}_{h}^{\mathcal{O}}$, and introduce the active mesh $\mathcal{T}_{h}$ as in (\ref{Th}). We then follow the Direct Dirichlet $\phi$-FEM approach (\ref{DirectPhi}), (\ref{DirectDiscrete})
with the following modifications:
\begin{itemize}
\item We introduce the uniform partition of the time interval $I=[0,T]$ into time steps of length $\Delta t$ by the nodes $t_n=n\Delta t$. We then discretize \eqref{eq:heat} in time using the implicit Euler scheme. On the continuous level, this is formally written as: find $u^n$ (the approximation to $u$ at time $t_n$) in the form $u^n=\phi w^n$ successively for $n=1,2,\ldots$ by solving
\begin{equation}\label{DiscTime}
\frac{\phi w^n - \phi w^{n-1}}{\Delta t} -\Delta(\phi w^n)=f^n
\end{equation}
where $f^n(\cdot)=f(t_n,\cdot)$.
\item We extend (\ref{DiscTime}) to $\Omega_h$, integrate by parts on $\Omega_h$, and discretize the resulting variational formulation using a FE space and adding appropriate stabilizations.
\end{itemize}
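The time-marching structure of (\ref{DiscTime}) — solve $(1/\Delta t-\Delta)u^n = u^{n-1}/\Delta t + f^n$ at each step — can be illustrated, independently of the $\phi$-FEM ingredients, on a 1D finite-difference analogue with homogeneous Dirichlet conditions. This is only a simplified stand-in for the FE solve; all names are ours.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def heat_implicit_euler(u0, f, dt, h, n_steps):
    """March u_t - u_xx = f with u = 0 at both ends (interior unknowns only)."""
    n = len(u0)
    a = [-1.0 / h**2] * n                      # discrete -Laplacian, sub-diag
    b = [1.0 / dt + 2.0 / h**2] * n            # 1/dt - Delta_h, diagonal
    c = [-1.0 / h**2] * n                      # super-diag
    u = list(u0)
    for _ in range(n_steps):
        rhs = [u[i] / dt + f[i] for i in range(n)]  # u^{n-1}/dt + f^n
        u = thomas(a, b, c, rhs)                    # (1/dt - Delta_h) u^n = rhs
    return u

N = 49
h = 1.0 / (N + 1)
u0 = [math.sin(math.pi * (i + 1) * h) for i in range(N)]  # first eigenmode
u = heat_implicit_euler(u0, [0.0] * N, dt=1e-3, h=h, n_steps=100)
# with f = 0 the amplitude decays roughly like exp(-pi^2 t)
print(max(u))
```

In the $\phi$-FEM scheme below, the same marching loop is kept, but each step solves the stabilized variational problem for $w_h^n$ instead of a finite-difference system.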
The $\phi$-FEM for \eqref{eq:heat} thus reads: find $w_h^n \in V_h$ for $n=1,2,\ldots$, with $V_h$ defined by (\ref{spaceVh}), such that
\begin{multline}\label{discreteHet}
\int_{\Omega_h} \frac{\phi_h w_h^n }{\Delta t} \phi_h v_h +
\int_{\Omega_h} \nabla (\phi_h w_h^n) \cdot \nabla (\phi_h v_h)
- \int_{\partial \Omega_h} \frac{\partial}{\partial n} (\phi_h w_h^{n}) \phi_h v_h \\
+ \sigma_D h \sum_{E \in \mathcal{F}_h^{\Gamma}} \int_E \left[ \partial_n(\phi_h w_h^n)\right] \cdot \left[ \partial_n(\phi_h v_h)\right]
- \sigma h^2 \sum_{T\in\mathcal{T}_{h}^{\Gamma}}\int_{T} \left( \frac{\phi_h w_h^n
}{\Delta t} - \Delta (\phi_h w_h^n) \right) \Delta (\phi_h v_h ) \\
= \int_{\Omega_h} \left( \frac{\phi_h w_h^{n-1} }{\Delta t} + f^n \right)
\phi_h v_h - \sigma h^2 \sum_{T\in\mathcal{T}_{h}^{\Gamma}}\int_{T} \left( \frac{\phi_h w_h^{n-1}
}{\Delta t} + f^n \right) \Delta (\phi_h v_h ) .
\end{multline}
We have added here the ghost stabilization, similar to (\ref{Gh}) but in the simpler scalar setting, and an additional stabilization inspired by (\ref{Jhlrhs}). The idea for the latter is to take the governing equation in the strong form, which is now (\ref{DiscTime}), and to impose it in a least squares manner cell by cell.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = $\frac{\max_{t_i}\| \mbf{u}(t_i)-\mbf{u}_h(t_i) \|_{0,\Omega}}{\max_{t_i}\|\mbf{u}(t_i)\|_{0,\Omega}}$,
legend style = { at={(0.7,1.2)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.14142135623730964,0.014613260310956685)
(0.07071067811865482,0.0012018592924686047)
(0.03535533905932741,0.00017023448816856176)
(0.01767766952966378,7.376483560822227e-05)
};
\addplot coordinates {
(0.1737929702933656,0.029956607570528575)
(0.08814237266741118,0.006878808462825571)
(0.044176082108213596,0.0018242512029954185)
(0.022095901417288732,0.0004871954091249405)
(0.01104624219043868,0.00013234126714823852)
};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{1}{blue};
\legend{$\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = $\frac{\| \mbf{u}-\mbf{u}_h \|_{L^2(H^1)}}{\|\mbf{u}\|_{L^2(H^1)}}$,
legend style = { at={(0.7,1.2)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.14142135623730964,0.03483439404608014)
(0.07071067811865482,0.008569471429621481)
(0.03535533905932741,0.0033349304158489697)
(0.01767766952966378,0.0015937182371751916)
};
\addplot coordinates{
(0.1737929702933656,0.23856788662022907)
(0.08814237266741118,0.11417978400648772)
(0.044176082108213596,0.05933004823534846)
(0.022095901417288732,0.02969997580122884)
(0.01104624219043868,0.0147143156195001)};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{1}{blue};
\legend{ $\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case for the heat equation; $\Delta t =h$. Left: $L^{\infty}(0,T;L^2(\Omega))$ relative errors. Right: $L^2(0,T;H^1(\Omega))$ relative errors.}\label{fig:heat}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = $\frac{\max_{t_i}\| \mbf{u}(t_i)-\mbf{u}_h(t_i) \|_{0,\Omega}}{\max_{t_i}\|\mbf{u}(t_i)\|_{0,\Omega}}$,
legend style = { at={(0.7,1.2)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.141421356237,0.0145429360786)
(0.0707106781187,0.00109257635152)
(0.0353553390593,0.000120961344644)
(0.0176776695297,3.0745030175e-05)
(0.00883883476483,7.81655564878e-06)
};
\addplot coordinates {
(0.141421356237,0.0347422759486)
(0.0707106781187,0.00840243388952)
(0.0353553390593,0.00333272225131)
(0.0176776695297,0.00159026149511)
(0.00883883476483,0.000788380527752)
};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{2}{blue};
\legend{$\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{loglogaxis}[name = ax1, width = .45\textwidth, xlabel = $h$,
ylabel = $\frac{\| \mbf{u}-\mbf{u}_h \|_{L^2(H^1)}}{\|\mbf{u}\|_{L^2(H^1)}}$,
legend style = { at={(0.7,1.2)}, legend columns =1,
/tikz/column 2/.style={column sep = 10pt}}]
\addplot coordinates {
(0.173792970293,0.0301514946318)
(0.0881423726674,0.00686051308391)
(0.0441760821082,0.00178094355391)
(0.0220959014173,0.000454820848822)
(0.0110462421904,0.000112234933215)
};
\addplot coordinates{
(0.173792970293,0.238572310108)
(0.0881423726674,0.114179466332)
(0.0441760821082,0.0593294708849)
(0.0220959014173,0.0296996298576)
(0.0110462421904,0.0147141332572)
};
\logLogSlopeTriangle{0.53}{0.2}{0.12}{2}{blue};
\legend{ $\phi$-FEM, Standard FEM}
\end{loglogaxis}
\end{tikzpicture}
\caption{Test case for the heat equation; $\Delta t =10 h^2$. Left: $L^{\infty}(0,T;L^2(\Omega))$ relative errors. Right: $L^2(0,T;H^1(\Omega))$ relative errors.}\label{fig:heat2}
\end{figure}
\paragraph*{Test case: } We consider again the geometry of $\Omega$ and of the surrounding box $\mathcal{O}$ as in our first test case on page~\pageref{testcaseDir}. In particular, the level set is given by (\ref{phiCircle}) so that $\Omega$ is the disk centered at $(0.5,0.5)$. Examples of meshes used both by $\phi$-FEM and by the standard FEM are given in Fig.~\ref{fig:meshes dirichlet}. We want to solve (\ref{eq:heat}) with the manufactured solution \[u=u_{ex} = \exp(x)\sin(2\pi y)\sin(t) \] and extrapolated boundary conditions \[{u}^g = {u}_{ex}(1+\phi). \]
We are going to compare the convergence of the $\phi$-FEM (\ref{discreteHet}) with that of the standard FEM, using $\mathbb{P}^1$ Lagrange polynomials in space and the implicit Euler scheme in time in both cases. The $\phi$-FEM stabilization parameter is taken as $\sigma = 20$. The results are reported in Figs.~\ref{fig:heat} and \ref{fig:heat2}, for $\Delta t=h$ and $\Delta t=10h^2$, respectively. Once again, $\phi$-FEM converges faster than the standard FEM. In the test considered here, the predominant source of error seems to be the time discretization. In particular, we observe only $O(h)$ convergence of the $L^2$-norm in space in the regime $\Delta t=h$ in Fig.~\ref{fig:heat}. Cleaner second-order convergence in time should be achievable using the BDF2 marching scheme, but this remains beyond the scope of the present paper.
\FloatBarrier
\section{Conclusions and perspectives}
$\phi$-FEM is a relative newcomer to the field of unfitted FE methods. Until now, it has only been applied to scalar 2nd order elliptic equations with pure Dirichlet or pure Neumann/Robin boundary conditions in \cite{phifem,phiFEM2}. The purpose of the present contribution is to demonstrate its applicability to more sophisticated settings, including linear elasticity with mixed boundary conditions and with material properties jumping across internal interfaces, elasticity with cracks, and heat transfer. In all the cases considered here, the numerical tests confirm the optimal accuracy on manufactured smooth solutions. $\phi$-FEM is easily implementable in standard FEM packages (we have chosen FEniCS for the numerical illustration in this chapter).
In particular, $\phi$-FEM uses classical finite element spaces and avoids the mesh generation and any non-trivial numerical integration.
Interestingly, our methods systematically outperform the standard FEM on comparable meshes. This can be attributed to a better representation of the boundary and of the solution near the boundary, as opposed to the approximation of the domain by a polyhedron/polygon in the standard FEM. We recall that the computing times, reported in some of our tests with $\phi$-FEM and favourably compared with those of the standard FEM, only include the assembly of the matrices and the resolution of the linear systems. It would be interesting to add the mesh generation time to the comparison, which should tip the scales even more in favour of $\phi$-FEM (when efficiently implemented).
Admittedly, the test cases presented in this contribution do not capture the full complexity of real-life problems. We have restricted ourselves to simple geometries in 2D only. Even more importantly, we have tested the methods only on smooth solutions, which cannot be expected in practice in problems with cracks, for example. Accurately accounting for the singularity at the crack tip remains an important challenge for future $\phi$-FEM developments. A relatively easily implementable approach would be to combine $\phi$-FEM with local mesh refinement by quadtree/octree structures near the crack tip (front). We emphasize that such a refinement should be necessary only in the vicinity of the front, since the discontinuous solution along the crack should be efficiently approximated by $\phi$-FEM on a reasonably coarse unfitted mesh.
The mathematical analysis of the schemes presented in this paper is in progress. We also plan to adapt $\phi$-FEM to fluid-structure simulations, starting with the creeping flow of a Newtonian fluid (Stokes equations) in the presence of rigid particles.
\bibliographystyle{plain}
\section{PRELIMINARIES}
Let $\mathbb{D}$ be the open unit disk in the complex plane $\mathbb{C}$.
The \textit{Hardy space} $H^{2}$ is the Hilbert space of all analytic functions $f$ on $\mathbb{D}$ such that
\begin{align*}
\|f\|^{2}=\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{2}d\theta <\infty.
\end{align*}
It is well known that the Hardy space $H^{2}$ is a reproducing kernel Hilbert space, with the inner product \begin{align*}
\langle f,g\rangle=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})\overline{g(e^{i\theta})}d\theta,
\end{align*}
and with
kernel functions $K_{w}^{(n)}(z)=\frac{n!z^{n}}{(1-\overline{w}z)^{n+1}}$, where $ n$ is a non-negative integer and $z,w \in \mathbb{D}$. These kernel functions satisfy $ \langle f, K_{w}^{(n)}\rangle=f^{(n)}(w)$ for each $f \in H^{2}$. To simplify notation, we write $K_{w}$ when $n=0$.
In particular, note that $\|K_{w}\|^{2}=K_{w}(w)=\frac{1}{1-|w|^{2}}$. Let $\hat{f}(n)$ denote the $n$th coefficient of $f$ in its Maclaurin series. The norm of $f$ in $H^{2}$ then admits the alternative representation
$$\|f\|^{2}=\sum_{n=0}^{\infty}|\hat{f}(n)|^{2}<\infty.$$
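As a quick illustration, the two representations of the norm agree on the kernel functions: expanding $K_{w}$ in a geometric series gives $K_{w}(z)=\frac{1}{1-\overline{w}z}=\sum_{m=0}^{\infty}\overline{w}^{m}z^{m}$, so that $\widehat{K_{w}}(m)=\overline{w}^{m}$ and
$$\|K_{w}\|^{2}=\sum_{m=0}^{\infty}|w|^{2m}=\frac{1}{1-|w|^{2}}=K_{w}(w).$$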
The space $H^{\infty}$ is the Banach space of bounded analytic functions $f$ on $\mathbb{D}$ with
$\|f\|_{\infty}=\sup \{|f(z)|:z\in \mathbb{D}\}$. \par
For $\varphi$ an analytic self-map of $\mathbb{D}$, the \textit{composition operator} $C_{\varphi}$ is defined for analytic functions $f$ on $\mathbb{D}$ by $C_{\varphi}(f)= f\circ \varphi$. It is well known that every composition operator $C_{\varphi}$ is bounded on $H^{2}$ (see \cite[Corollary 3.7]{2}).
For each positive integer $k$, the operator $D^{(k)}$ is defined on any $f \in H^{2}$ by the rule $D^{(k)}(f)=f^{(k)}$. This operator is called the \textit{differentiation operator of order $k$}. For convenience, we use the notation $D$ when $k=1$. The differentiation operators $D^{(k)}$ are unbounded on $H^{2}$; nevertheless, Ohno \cite{13} characterized the boundedness and compactness of $C_{\varphi}D$ and $DC_{\varphi}$ on $H^{2}$. The study of the operators $C_{\varphi}D$ and $DC_{\varphi}$ was initiated by Hibschweiler, Portnoy, and Ohno (see \cite{12} and \cite{13}) and has since attracted the attention of many researchers (\cite{fh}, \cite{fh1}, and \cite{15}). In this paper, we consider a slightly broader class of these operators.
For each positive integer $n$, we write $D_{\varphi,n}$ to denote the operator on $H^{2}$ given by the rule $D_{\varphi,n}(f)=C_{\varphi}D^{(n)}(f)=f^{(n)}\circ\varphi$.
Our main results provide complete characterizations of the boundedness and compactness of operators $D_{\varphi,n}$ on $H^{2}$ (Theorems \ref{thm1} and \ref{thm2}). In addition, we characterize the Hilbert-Schmidt operators $D_{\varphi,n}$ on $H^{2}$ (Theorem \ref{thm3333}). In this paper, we use some ideas which are found in \cite{13}.
Let $\varphi$ be an analytic self-map of $\mathbb{D}$. The \textit{Nevanlinna counting function} $N_{\varphi}$ of $\varphi$ is defined by
\begin{align*}
N_{\varphi}(w)=\sum_{\varphi(z)=w}\log \big(1/|z|\big)\qquad w\in \mathbb{D} \setminus \{\varphi(0)\}
\end{align*}
and $N_{\varphi}(\varphi(0))=\infty$. Note that $ N_{\varphi}(w)=0$ when $w$ is not in $\varphi(\mathbb{D})$.
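As a simple illustrative example, if $\varphi(z)=z^{2}$, then every $w\in \mathbb{D}\setminus\{0\}$ has exactly two preimages under $\varphi$, each of modulus $|w|^{1/2}$, and hence
$$N_{\varphi}(w)=2\log\big(1/|w|^{1/2}\big)=\log\big(1/|w|\big).$$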
For each $f \in H^{2}$, by the change-of-variables formula and the Littlewood--Paley identity, the norm of $C_{\varphi}f$ can be written as follows:
\begin{equation}\label{1-1}
\|f\circ \varphi\|^{2}=\big| f\big(\varphi (0)\big)\big|^{2}+2\int_{\mathbb{D}}|f^{\prime}(w)|^{2}N_{\varphi}(w)dA(w),
\end{equation}
where $dA$ is the normalized area measure on $\mathbb{D}$ (see \cite[Theorem 2.31]{2}).
Moreover, to obtain the lower bound estimate on $\|D_{\varphi,n}\|$, we need the following well-known lemma (see \cite[p. 137]{2}):\\ \par
Suppose that $\varphi$ is an analytic self-map of $\mathbb{D}$ and $f$ is analytic in $\mathbb{D}$. Assume that $\Delta$ is any disk centered at $a$ that does not intersect $f^{-1}(\varphi(0))$. Then
\begin{equation}\label{1-2}
N_{\varphi}(f(a))\leq \frac{1}{|\Delta|}\int_{\Delta}N_{\varphi}(f(w))dA(w),
\end{equation}
where $|\Delta|$ is the normalized area measure of $\Delta$.
\section{Boundedness and compactness of $D_{\varphi,n}$}
The goal of this section is to determine which of these operators $D_{\varphi,n}$ are bounded and compact.
\begin{thm}\label{thm1}
Let $\varphi$ be an analytic self-map of $\mathbb{D}$ and $n$ be a positive integer. The operator $D_{\varphi , n}$ is bounded on $H^{2}$ if and only if
\begin{align*}
N_{\varphi}(w)=O\bigg(\bigg[\log \big(1/ |w|\big)\bigg]^{2n+1}\bigg)\qquad \big(|w|\rightarrow 1\big).
\end{align*}
\end{thm}
\begin{proof}
Suppose that $D_{\varphi,n}$ is bounded on $H^{2}$. Let $f(z)=\frac{K_{\lambda}(z)}{\|K_{\lambda}\|}=\frac{\sqrt{1-|\lambda|^{2}}}{1-\overline{\lambda}z}$ for $\lambda\in\mathbb{D}$. By (\ref{1-1}), we see that
\begin{align}\label{2-1}
\|D_{\varphi ,n}\|^{2}&\geq \| D_{\varphi ,n}f\|^{2}\notag\\
&=\bigg\| C_{\varphi}\bigg(\frac{n!\overline{\lambda}^{n}\sqrt{1-|\lambda |^{2}}}{\big(1-\overline{\lambda}z\big)^{n+1}}\bigg)\bigg\|^{2}\notag\\
&=\bigg |\frac{n!\overline{\lambda}^{n}\sqrt{1-|\lambda|^{2}}}{\big(1-\overline{\lambda} \varphi(0)\big)^{n+1}}\bigg |^{2}+2\int_{\mathbb{D}} \bigg |\frac{(n+1)!\overline{\lambda}^{n+1}\sqrt{1-|\lambda |^{2}}}{\big(1-\overline{\lambda}w\big)^{n+2}}\bigg |^{2}N_{\varphi}(w)dA(w)\notag\\
&\geq \int_{\mathbb{D}}\frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}\big(1-|\lambda |^{2}\big)}{\big|1-\overline{\lambda}w\big|^{2n+4}}N_{\varphi}(w)dA(w).
\end{align}
Substituting
$w=\alpha_{\lambda}(u)=\frac{\lambda-u} {1-\overline{\lambda}u}$ into (\ref{2-1}) and using \cite[Theorem 7.26]{726}, we obtain
\begin{align}\label{2-2}
\|D_{\varphi,n}\|^{2}&\geq \int_{\mathbb{D}}\frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}\big(1-|\lambda |^{2}\big)}{\big|1-\overline{\lambda}\alpha_{\lambda}(u)\big|^{2n+4}}N_{\varphi}(\alpha_{\lambda}(u))\big|\alpha_{\lambda}^{\prime}(u)\big|^{2}dA(u).
\end{align}
Since $1-\overline{\lambda}\alpha_{\lambda}(u)=\frac{1-|\lambda|^{2}}{1-\overline{\lambda}u}$
and
$\alpha_{\lambda}^{\prime}(u)=\frac{|\lambda|^{2}-1}{\big(1-\overline{\lambda}u\big)^{2}}$, by substituting $\alpha_{\lambda}^{\prime}$ and $1-\overline{\lambda}\alpha_{\lambda}$ back into (\ref{2-2}), we see that
\begin{align}\label{2-3}
\|D_{\varphi,n}\|^{2}&\geq\int_{\mathbb{D}}\frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}|1-\overline{\lambda}u|^{2n}}{\big(1-|\lambda |^{2}\big)^{2n+1}}N_{\varphi}(\alpha_{\lambda}(u))dA(u).
\end{align}
Because $\big|1-\overline{\lambda}u\big|\geq\frac{1}{2}$ for any $u\in{\mathbb{D}}/{2}$, we get from (\ref{2-3}) that
\begin{align}
\|D_{\varphi,n}\|^{2}\geq \int_{{\mathbb{D}}/{2}}\frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}}{2^{2n}\big(1-|\lambda |^{2}\big)^{2n+1}}N_{\varphi}(\alpha_{\lambda}(u))dA(u).\label{eq1}
\end{align}
There exists $r<1$ such that for each $\lambda$ with $r<|\lambda|<1$, $\alpha_{\lambda}^{-1}(\varphi(0))\notin {\mathbb{D}}/{2}$ because $|\alpha_{\lambda}^{-1}(\varphi(0))|=|\alpha_{\varphi(0)}(\lambda)|$ and $\alpha_{\varphi(0)}$ is an automorphism of $\mathbb{D}$. By (\ref{1-2}) and (\ref{eq1}), we have
\begin{align}
\|D_{\varphi,n}\|^{2}&\geq \frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}}{2^{2n}\big(1-|\lambda|^{2}\big)^{2n+1}}\int_{{\mathbb{D}}/{2}}N_{\varphi}(\alpha_{\lambda}(u))dA(u)\notag\\
&\geq \frac{2\big((n+1)!\big)^{2}|\lambda |^{2n+2}}{2^{2n}\big(1-|\lambda|^{2}\big)^{2n+1}}\cdot\frac{N_{\varphi}(\alpha_{\lambda}(0))}{4}\notag\\
&=\frac{\big((n+1)!\big)^{2}|\lambda |^{2n+2}}{2^{2n+1}\big(1-|\lambda|^{2}\big)^{2n+1}}N_{\varphi}(\lambda)\label{2.22}
\end{align}
for each $\lambda$ with $r<|\lambda|<1$.
Since $D_{\varphi ,n}$ is bounded, there exists a constant $M$ such that
\begin{align}\label{eq2-5*}
\lim\limits_{|\lambda|\rightarrow1}\frac{\big((n+1)!\big)^{2}|\lambda|^{2n+2}}{2^{2n+1}\big(1-|\lambda|^{2}\big)^{2n+1}}N_{\varphi}(\lambda)\leq M.
\end{align}
We know that $\log\big({1}/{|\lambda|}\big)$ is comparable to $1-|\lambda|$ as $|\lambda| \rightarrow 1^{-}$.
Note that
\begin{align}
& \lim\limits_{|\lambda |\rightarrow 1}\frac{\big((n+1)!\big)^{2}|\lambda|^{2n+2}}{2^{2n+1}\big(1-|\lambda|^{2}\big)^{2n+1}}N_{\varphi}(\lambda)\notag\\
&= \lim\limits_{|\lambda |\rightarrow 1}\frac{\big((n+1)!\big)^{2}|\lambda |^{2n+2}}{2^{2n+1}\big(1+|\lambda |\big)^{2n+1}}\bigg(\frac{\log \big({1}/{|\lambda |}\big)}{1-|\lambda |}\bigg)^{2n+1}\frac{N_{\varphi}(\lambda)}{\big(\log \big({1}/{|\lambda|}\big)\big)^{2n+1}}\notag\\
& \geq \frac{\big((n+1)!\big)^{2}}{2^{6n+4}}\lim\limits_{|\lambda |\rightarrow 1}\frac{N_{\varphi}(\lambda)}{\big(\log\big({1}/{|\lambda|}\big)\big)^{2n+1}}.\label{2-6}
\end{align}
By \eqref{eq2-5*} and \eqref{2-6}, we can see that
$$N_{\varphi}(\lambda)=O\bigg(\bigg[\log\big({1}/{|\lambda|}\big)\bigg]^{2n+1}\bigg)\qquad (|\lambda|\rightarrow 1).$$
Conversely, suppose that for some $R$ with $0<R<1$, there exists a constant $M$ satisfying
\begin{align*}
\sup_{R<|w|<1}N_{\varphi}(w)/\bigg [\log\big({1}/{|w|}\big)\bigg]^{2n+1}\leq M.
\end{align*}
Let $f$ be an arbitrary function in $H^2$. It follows from (\ref{1-1}) that
\begin{align}
\| D_{\varphi ,n}f\| ^{2}&=\big |f^{(n)}\big(\varphi (0)\big)\big |^{2}+2\int_{\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)\notag\\
&=\big |f^{(n)}\big(\varphi (0)\big)\big |^{2}\notag\\
&+2\bigg(\int_{R\mathbb{D}}\big|f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)+\int_{\mathbb{D}\backslash R\mathbb{D}}\big| f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)\bigg).\label{2.2}
\end{align}
First we estimate the first and second terms on the right-hand side of (\ref{2.2}). Observe that
\begin{align*}
f^{(n)}(z)=\big\langle f,K_{z}^{(n)}\big\rangle=\int_{0}^{2\pi}\frac{n!e^{-in\theta}f(e^{i\theta})}{\big(1-e^{-i\theta}z\big)^{n+1}}\frac{d\theta}{2\pi}
\end{align*}
and hence
\begin{align}\label{222}
&\big|f^{(n)}(z)\big|\leq \frac{n!}{\big(1-|z|\big)^{n+1}}\int_{0}^{2\pi}\big|f(e^{i\theta})\big|\frac{d\theta}{2\pi}\leq \frac{n!}{\big(1-|z|\big)^{n+1}}\|f\|
\end{align}
for any $z\in \mathbb{D}$. It follows from (\ref{222}) that
\begin{align}\label{eq2.3}
\big|f^{(n)}\big(\varphi (0)\big)\big|\leq \frac{n!\|f\|}{\big(1-|\varphi(0)|\big)^{n+1}}.
\end{align}
Moreover, we can see that
\begin{align}\label{eq2}
\big|f^{(n+1)}(z)\big|=\big|\big\langle f,K_{z}^{(n+1)}\big\rangle\big|\leq\frac{(n+1)!}{\big(1-|z|\big)^{n+2}}\|f\|
\end{align}
for any $z\in\mathbb{D}$.
Therefore by \eqref{eq2}, we see that
\begin{align*}
\int_{R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)&\leq \bigg(\frac{(n+1)!}{(1-R)^{n+2}}\bigg)^{2}\|f\|^{2}\int_{R\mathbb{D}}N_{\varphi}(w)dA(w).
\end{align*}
Since $\|\varphi\|^{2}=|\varphi(0)|^{2}+2\int_{\mathbb{D}} N_{\varphi}(w)dA(w)$ by (\ref{1-1}),
we obtain
\begin{align}\label{2.11}
\int_{\mathbb{D}} N_{\varphi}(w)dA(w)=\frac{1}{2}\big(\|\varphi\|^{2}-|\varphi(0)|^{2}\big)<1.
\end{align}
From (\ref{eq2}) and \eqref{2.11}, we see that
\begin{align}\label{2.5}
\int_{R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w) &\leq \bigg(\frac{(n+1)!}{(1-R)^{n+2}}\bigg)^{2}\|f\|^{2}.
\end{align}
Now we estimate the third term on the right-hand side of (\ref{2.2}). We have
\begin{align}\label{eq3}
&\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)\notag \\
&=\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}\big(\log({1}/{|w|})\big)^{2n+1}\frac{N_{\varphi}(w)}{\big(\log({1}/{|w|})\big)^{2n+1}}dA(w)\notag\\
&\leq \sup_{R<|w|<1}\frac{N_{\varphi}(w)}{\big(\log({1}/{|w|})\big)^{2n+1}}\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}\big(\log({1}/{|w|})\big)^{2n+1}dA(w)\notag\\
&\leq M\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}\big(\log(1/|w|)\big)^{2n+1}dA(w).
\end{align}
Let $f(z)=\sum_{m=0}^{\infty}a_{m}z^{m}$.
We get
\begin{align}\label{4}
&\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}\big(\log ({1}/{|w|})\big)^{2n+1}dA(w) \notag \\
&\leq\int_{\mathbb{D}\setminus R\mathbb{D}}\bigg |\sum_{m=n+1}^{\infty}m(m-1)\cdots(m-n)a_{m}w^{m-(n+1)}\bigg|^{2}\big(\log ({1}/{|w|})\big)^{2n+1}dA(w)\notag\\
&\leq \sum_{m=n+1}^{\infty}m^{2}(m-1)^{2}\cdots(m-n)^{2}|a_{m}|^{2}\int_{\mathbb{D}\setminus R\mathbb{D}}\big |w^{m-(n+1)}\big |^{2}\big(\log ({1}/{|w|})\big)^{2n+1}dA(w)\notag\\
&\leq \sum_{m=n+1}^{\infty}m^{2}(m-1)^{2}\cdots(m-n)^{2}|a_{m}|^{2}\int_{\mathbb{D}}\big |w^{m-(n+1)}\big |^{2}\big(\log ({1}/{|w|})\big)^{2n+1}dA(w)\notag\\
&=\sum_{m=n+1}^{\infty}m^{2}(m-1)^{2}\cdots(m-n)^{2}|a_{m}|^{2}\int_{0}^{1}\int_{0}^{2\pi}|re^{i\theta}|^{2(m-(n+1))}\big(\log({1}/{r})\big)^{2n+1}rdr\frac{d\theta}{\pi}\notag\\
&\leq\sum_{m=n+1}^{\infty}m^{2}(m-1)^{2}\cdots(m-n)^{2}|a_{m}|^{2}\int_{0}^{1}r^{2(m-(n+1))}\big(\log({1}/{r})\big)^{2n+1}2rdr.
\end{align}
Now substitute $t=r^{2}$ and $u=\log ({1}/{t})$ to obtain
\begin{align}\label{eq2.152}
\int_{0}^{1}r^{2(m-(n+1))}\big(\log({1}/{r})\big)^{2n+1}2rdr &=\int_{0}^{1}t^{m-(n+1)}\bigg(\frac{1}{2}\log ({1}/{t})\bigg)^{2n+1}dt\notag\\
& =({1}/{2})^{2n+1}\int_{0}^{\infty}e^{-u(m-n)}u^{2n+1}du.
\end{align}
By substituting $x=(m-n)u$ back into \eqref{eq2.152}, we have
\begin{align}
({1}/{2})^{2n+1}\int_{0}^{\infty}e^{-u(m-n)}u^{2n+1}du &=\frac{1}{2^{2n+1}(m-n)^{2n+2}}\int_{0}^{\infty}e^{-x} x^{2n+1}dx\notag\\
&=\frac{\Gamma(2n+2)}{2^{2n+1}(m-n)^{2n+2}}.\label{eq5}
\end{align}
By \eqref{eq3}, \eqref{4}, \eqref{eq2.152} and \eqref{eq5}, we can see that
\begin{align}
\int_{\mathbb{D}\setminus R\mathbb{D}}\big |f^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w) &\leq M\sum_{m=n+1}^{\infty}m^{2}(m-1)^{2}\cdots(m-n)^{2}|a_{m}|^{2}\frac{\Gamma (2n+2)}{2^{2n+1}(m-n)^{2n+2}}\notag\\
&=M\frac{(2n+1)!}{2^{2n+1}}\sum_{m=n+1}^{\infty}\frac{m^{2}(m-1)^{2}\cdots(m-n+1)^{2}}{(m-n)^{2n}}|a_{m}|^{2}\notag\\
&\leq M\lambda\frac{(2n+1)!}{2^{2n+1}}\sum_{m=n+1}^{\infty}|a_{m}|^{2}\notag\\
&\leq M\lambda\frac{(2n+1)!}{2^{2n+1}}\|f\|^{2},\label{eq6}
\end{align}
where $\lambda$ is a constant such that $\frac{m^{2}(m-1)^{2}\cdots(m-n+1)^{2}}{(m-n)^{2n}}\leq \lambda$ for each $m\geq n+1$
(such a constant exists because the function $g(x)=\frac{x^{2}(x-1)^{2}\cdots(x-n+1)^{2}}{(x-n)^{2n}}$ is continuous on $[n+1,+\infty)$ and tends to $1$ as $x\rightarrow\infty$, and is therefore bounded there).
Then \eqref{2.2}, \eqref{eq2.3}, \eqref{2.5} and \eqref{eq6} show that
$D_{\varphi,n}$ is bounded.
\end{proof}
\begin{thm}\label{thm2}
Let $\varphi$ be an analytic self-map of $\mathbb{D}$ and $n$ be a positive integer. The operator $D_{\varphi, n}$ is compact on $H^{2}$ if and only if
\begin{align}\label{eq19}
N_{\varphi}(w)=o\bigg(\bigg[\log\big(1/|w|\big)\bigg]^{2n+1}\bigg)\qquad (|w|\rightarrow 1).
\end{align}
\end{thm}
\begin{proof}
Let $h_{m}(z)=\frac{\sqrt{1-|\lambda_{m}|^{2}}}{1-\overline{\lambda}_{m}z}$ for a sequence $\{\lambda_{m}\}$ in $\mathbb{D}$ so that $|\lambda_{m}|\rightarrow 1$ as $m\rightarrow \infty$. Then $h_{m}\rightarrow 0$ weakly as $m\rightarrow \infty$ by \cite[Theorem 2.17]{2}. First suppose that $D_{\varphi ,n}$ is compact. Hence $\|D_{\varphi , n}h_{m}\|\rightarrow 0$ as $m\rightarrow \infty$. Therefore (\ref{2.22}) shows that
\begin{align*}
\lim\limits_{m\rightarrow \infty}\frac{\big((n+1)!\big)^{2}|\lambda_{m}|^{2n+2}}{2^{2n+1}(1-|\lambda_{m}|^{2})^{2n+1}}N_{\varphi}(\lambda_{m})=0.
\end{align*}
Since $\log(1/ |\lambda_{m}|)$ is comparable to $1-|\lambda_{m}|$ as $ m\rightarrow \infty$, the result follows.
Conversely, suppose that \eqref{eq19} holds. Let $\epsilon>0$. Then there exists $R$ with $0<R<1$ such that
\begin{align}\label{2020}
\underset{R<|w|<1}{\sup}N_{\varphi}(w)/\big[\log(1/|w|)\big]^{2n+1}<\epsilon.
\end{align}
Let $\{f_{m}\}$ be any bounded sequence in $H^{2}$. By the argument in the proof of \cite[Proposition 3.11]{2}, $\{f_{m}\}$ is a normal family, so there exists a subsequence $\{f_{m_{k}}\}$ which converges
to some function $f\in H^{2}$
uniformly on all compact subsets of $\mathbb{D}$. Let $g_{m_{k}}=f_{m_{k}}-f$ for each positive integer $k$. Note that $\{g_{m_{k}}\}$ is a bounded sequence in $H^{2}$ which converges to $0$ uniformly on all compact subsets of $\mathbb{D}$.
By \eqref{2.2}, we obtain
\begin{align}
\|D_{\varphi ,n}g_{m_{k}}\|^{2}&=\big |g_{m_{k}}^{(n)}(\varphi(0))\big|^{2}+2\int_{R\mathbb{D}}\big |g_{m_{k}}^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)\notag\\
&+2\int_{\mathbb{D}\setminus R\mathbb{D}}\big |g_{m_{k}}^{(n+1)}(w)\big|^{2}N_{\varphi}(w)dA(w).\label{eq2.20}
\end{align}
By \cite[Theorem 2.1, p. 151]{34},
we can choose $k_{\epsilon}$ so that
\begin{align}
\big |g_{m_{k}}^{(n)}\big(\varphi(0)\big)\big|<\sqrt{\epsilon}\label{eq2.200}
\end{align}
and $\big |g_{m_{k}}^{(n+1)}\big|<\sqrt{\epsilon}$ on $R\mathbb{D}$
whenever $k>k_{\epsilon}$. Substituting $f(z)=z$ into \eqref{1-1}, we see that
\begin{align}\label{eq2.22}
\int_{R\mathbb{D}}\big |g_{m_{k}}^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)&\leq \epsilon \int_{R\mathbb{D}} N_{\varphi}(w)dA(w)\notag\\
&\leq \frac{\epsilon}{2}\big(\|\varphi \|^{2}-|\varphi(0)|^{2}\big)
\end{align}
for $k>k_{\epsilon}$.
On the other hand by \eqref{2020} and the same idea as stated in the proof of \eqref{eq3} and \eqref{eq6}, we see that
\begin{align}
&\int_{\mathbb{D}\setminus R\mathbb{D}}\big |g_{m_{k}}^{(n+1)}(w)\big |^{2}N_{\varphi}(w)dA(w)\notag\\
&\leq \underset{R<|w|<1}{\sup}\frac{N_{\varphi}(w)}{\big[\log (1/|w|)\big]^{2n+1}}\int_{\mathbb{D}\setminus R\mathbb{D}}\big |g_{m_{k}}^{(n+1)}(w)\big |^{2}\big[\log (1/|w|)\big]^{2n+1}dA(w)\notag\\
&\leq C \epsilon \|g_{m_{k}}\|^{2},\label{eq2.23}
\end{align}
where $C$ is a constant.
Hence we conclude that $\|D_{\varphi ,n}g_{m_{k}}\|$ converges to zero as $k\rightarrow \infty$ by \eqref{eq2.20}, \eqref{eq2.200}, \eqref{eq2.22} and \eqref{eq2.23} and so $D_{\varphi,n}$ is compact.
\end{proof}
The preceding theorems lead to characterizations of all bounded and compact operators $D_{\varphi ,n}$ when $\varphi$ is a univalent self-map.
\begin{cor}
Let $\varphi$ be a univalent self-map of $\mathbb{D}$ and $n$ be a positive integer. Then the following hold.
\begin{itemize}
\item[(i)]
$D_{\varphi ,n}$ is bounded on $H^{2}$ if and only if
\begin{align*}
\sup_{w\in \mathbb{D}}\frac{1-|w|}{\big(1-|\varphi(w)|\big)^{2n+1}}<\infty.
\end{align*}
\item[(ii)]
$D_{\varphi ,n}$ is compact on $H^{2}$ if and only if
\begin{align*}
\lim\limits_{|w|\rightarrow 1}\frac{1-|w|}{\big(1-|\varphi(w)|\big)^{2n+1}}=0.
\end{align*}
\end{itemize}
\end{cor}
\begin{proof}
Since $\varphi$ is univalent, we can see that
$N_{\varphi}(w)=\log \big(1/|z|\big)$,
where $\varphi(z)=w$. We observe that
\[\frac{N_{\varphi}(w)}{\big[\log(1/|w|)\big]^{2n+1}}=\frac{-\log \big(|z|\big)}{\big(-\log \big(|\varphi(z)|\big)\big)^{2n+1}}.\]
Moreover, we know that $\log\big(1/|z|\big)$ is comparable to $1-|z|$ as $|z|\rightarrow 1^{-}$. Furthermore $|z|\rightarrow 1$ as $|\varphi(z)|\rightarrow 1$. Therefore the results follow immediately from Theorems \ref{thm1} and \ref{thm2}.
\end{proof}
\section{Hilbert-Schmidt operator $D_{\varphi,n}$}
We begin with a few easy observations that help us in the proof of Theorem \ref{thm3333}. In the proof of the following lemma, we assume that $0^{0}=1$.
\begin{lem}\label{lm1}
Let $n$ be a positive integer and $\alpha_{k}>0$ for each $0\leq k\leq n$. Then for $0\leq x< 1$, the following statements hold.
(a) $ \sum_{k=0}^{n}\frac{\alpha_{k}x^{k}}{(1-x)^{n+k+1}}\leq \frac{\sum_{k=0}^{n}\alpha_{k}}{(1-x)^{2n+1}}.$
(b) There exists a positive number $\beta$ such that
$\sum_{k=0}^{n}\frac{\alpha_{k}x^{k}}{(1-x)^{n+k+1}} \geq\frac{\beta}{(1-x)^{2n+1}} $.
\end{lem}
\begin{proof}
(a) We can see that
\begin{align*}
\sum_{k=0}^{n}\frac{\alpha_{k}x^{k}}{(1-x)^{n+k+1}}=\frac{\sum_{k=0}^{n}\alpha_{k}x^{k}(1-x)^{n-k}}{(1-x)^{2n+1}}.
\end{align*}
Since $0\leq x<1$ and $\alpha_{k}>0$, we conclude that
$ \sum_{k=0}^{n}\alpha_{k}x^{k}(1-x)^{n-k}\leq \sum_{k=0}^{n}\alpha_{k}.$ Hence the conclusion follows.
(b) We have
\begin{align*}
(1-x)^{2n+1}\sum_{k=0}^{n}\frac{\alpha_{k}x^{k}}{(1-x)^{n+k+1}}=\sum_{k=0}^{n}\alpha_{k}x^{k}(1-x)^{n-k}>0.
\end{align*}
Since $\sum_{k=0}^{n}\alpha_{k}x^{k}(1-x)^{n-k}$ is continuous and strictly positive on the compact interval $[0,1]$ (at $x=1$ only the $k=n$ term survives, and it equals $\alpha_{n}>0$), it attains a positive minimum $\beta$ on $[0,1]$.
Hence the result follows.
\end{proof}
\begin{lem}\label{lm3}
Let $n$ be a positive integer. Then
\begin{align*}
\sum_{m=n}^{\infty}\big[m(m-1)\cdots(m-n+1)\big]^{2}x^{m-n}
&=\big(n!\big)^{2}\sum_{k=0}^{n}\frac{(n+k)!}{\big(k!\big)^{2}(n-k)!}\frac{x^{k}}{(1-x)^{n+k+1}}
\end{align*}
for $0\leq x<1$.
\end{lem}
\begin{proof}
See \cite[Lemma 1]{15} and the general Leibniz rule.
\end{proof}
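For instance, when $n=1$ the identity of Lemma \ref{lm3} reads
$$\sum_{m=1}^{\infty}m^{2}x^{m-1}=\frac{1}{(1-x)^{2}}+\frac{2x}{(1-x)^{3}}=\frac{1+x}{(1-x)^{3}},$$
which is the familiar closed form of $\sum_{m=1}^{\infty}m^{2}x^{m-1}$ for $0\leq x<1$.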
A \textit{Hilbert–Schmidt operator} on a separable Hilbert space $H$ is a bounded operator $A$ with finite \textit{Hilbert–Schmidt norm}
$\|A\|_{HS}=\left(\sum_{n=1}^{\infty}\|A e_{n}\|^{2}\right)^{1/2}$,
where $\{e_{n}\}$ is an orthonormal basis of $H$. This definition is independent of the choice of basis (see \cite[Theorem 3.23]{2}).
\begin{thm} \label{thm3333}
Let $D_{\varphi,n}$ be a bounded operator on $H^{2}$. Then $D_{\varphi,n}$ is a Hilbert-Schmidt operator on $H^{2}$ if and only if
\begin{align}\label{3.I}
\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{\big(1-\big|\varphi(re^{i\theta})\big|^{2}\big)^{2n+1}}\,d\theta<\infty .
\end{align}
\end{thm}
\begin{proof}
Suppose that \eqref{3.I} holds. Lemmas \ref{lm1}, \ref{lm3} and \cite[Theorem 1.27]{726} imply that
\begin{align}\label{3.II}
\sum_{m=0}^{\infty}\big\|D_{\varphi,n}z^{m}\big\|^{2} &=\sum_{m=n}^{\infty}\big\| m(m-1)\cdots(m-n+1)\varphi^{m-n}\big\|^{2}\notag\\
&=\sum_{m=n}^{\infty}\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\big|m(m-1)\cdots(m-n+1)\varphi^{m-n}(re^{i\theta})\big|^{2}d\theta\notag\\
&=\lim\limits_{r\rightarrow 1}\sum_{m=n}^{\infty}\frac{1}{2\pi}\int_{0}^{2\pi}\big|m(m-1)\cdots(m-n+1)\varphi^{m-n}(re^{i\theta})\big|^{2}d\theta\notag\\
&=\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{m=n}^{\infty}\big|m(m-1)\cdots(m-n+1)\varphi^{m-n}(re^{i\theta})\big|^{2}d\theta\notag\\
&=\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k=0}^{n} \frac{\big(n!\big)^{2}(n+k)!}{\big(k!\big)^{2}(n-k)!}\frac{\big|\varphi (re^{i\theta})\big |^{2k}}{\big(1-\big|\varphi(re^{i\theta})\big|^{2}\big)^{n+k+1}}d\theta \notag\\
&\leq \lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\alpha}{\big(1-\big|\varphi(re^{i\theta})\big|^{2}\big)^{2n+1}}d\theta,
\end{align}
where $\alpha=\sum_{k=0}^{n}\frac{(n!)^{2}(n+k)!}{(k!)^{2}(n-k)!}$
(note that the interchange of limit and summation is justified by \cite[Corollary 2.23]{2} and using Lebesgue's Monotone Convergence Theorem with counting measure).
It follows that $\sum_{m=0}^{\infty}\big\| D_{\varphi,n}z^{m}\big\|^{2}<\infty$ and so $D_{\varphi,n}$ is a Hilbert-Schmidt operator on $H^{2}$ by \cite[Theorem 3.23]{2}.
Conversely, suppose that $D_{\varphi,n}$ is a Hilbert-Schmidt operator on $H^{2}$. We infer from \cite[Theorem 3.23]{2} that
\begin{align}\label{3.}
\sum_{m=0}^{\infty}\big\|D_{\varphi,n}z^{m}\big\|^{2}<\infty.
\end{align}
On the other hand, by the computation in the proof of \eqref{3.II} and Lemma \ref{lm1}, there exists a positive number $\beta$ such that
\begin{align}\label{3.*}
\sum_{m=0}^{\infty}\big\|D_{\varphi,n}z^{m}\big\|^{2}&=\lim\limits_{r\rightarrow 1}\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k=0}^{n}\frac{(n!)^{2}(n+k)!}{(k!)^{2}(n-k)!}\frac{\big|\varphi (re^{i\theta})\big|^{2k}}{\big(1-\big|\varphi(re^{i\theta})\big|^{2}\big)^{n+k+1}}d\theta\notag\\
&\geq \lim\limits_{r\rightarrow 1} \frac{1}{2\pi}\int_{0}^{2\pi}\frac{\beta}{\big(1-\big|\varphi(re^{i\theta})\big|^{2}\big)^{2n+1}}d\theta.
\end{align}
Hence the result follows from \eqref{3.} and \eqref{3.*}.
\end{proof}
\section{Introduction}
\label{sec:intro}
The standard way of specifying the behaviour of a system of communicating processes is to describe their individual behaviours.
Some important questions about these systems are ``Is it free from deadlocks?'' and ``Is it free from livelocks?''. Answering these questions is undecidable in general.
To answer these and other similar questions, we can study a more general problem: what does the system \emph{do}?
In particular, what are the communications that the system will enact?
In this paper, we develop an automatic procedure that answers this question and that is efficient enough in practice.
As an example, consider the following (pseudocode) specification of a simple single sign-on scenario inspired by the OpenID protocol~\cite{openid}.
It describes a network with three processes: a user (\pid u) tries to access a third-party web service (\pid w) by verifying their identity at an authentication service (\pid a). The processes interact by using primitives for sending and receiving values (\texttt{send} and \texttt{recv}), and choosing from and offering alternative behaviours (\texttt{choose} and \texttt{offer}).
\begin{center}
\ttfamily\small
\begin{tabular}{l|l|l}
\toprule
\multicolumn1c{\rmfamily Program for \pid u} & \multicolumn1c{\rmfamily Program for \pid a} & \multicolumn1c{\rmfamily Program for \pid w} \\
\midrule
procedure X: & procedure X: & procedure X: \\
\quad send cred to \pid a & \quad recv c from \pid u & \quad offer to \pid a: \\
\quad offer to \pid a: & \quad if check(c): & \qquad OK: send t to \pid u \\
\qquad OK: recv token from w & \qquad choose OK at \pid u & \qquad KO: call X \\
\qquad KO: call X & \qquad choose OK at \pid w\\
& \quad else: \\
& \qquad choose KO at \pid u \\
& \qquad choose KO at \pid w\\
& \qquad call X \\
call X & call X & call X \\
\bottomrule
\end{tabular}
\end{center}
To answer the question of what this system does, we can use \emph{choreographic languages}---languages that describe the behaviour of an entire system from a global viewpoint. Examples of such languages are Message Sequence Charts~\cite{msc}, the W3C Web Services Choreography Description Language~\cite{wscdl}, and the Business Process Modelling Notation~\cite{bpmn}.
In the language that we use in this article, the behaviour of the system above can be given as the following choreography.
\begin{align*}
\m{def}\ X ={} &
\com{u}{cred}{a}{c};\\
& \m{if}\ \pid a.check(c)\ \m{then}\
\sel{a}{u}{ok};\
\sel{a}{w}{ok};\
\com{w}{t}{u}{token}\\
& \phantom{\m{if}\ \pid a.check\ }\makebox[0em][l]{\m{else}}\phantom{\m{then}\ }
\sel{a}{u}{ko};\
\sel{a}{w}{ko};\
X
\\
\m{in}\ X\hspace*{1.8em}
\end{align*}
Here, \code{-\hspace{-0.3mm}>} denotes a communication from the left- to the right-hand process. This choreography describes the global protocol: the authentication service receives the user's credentials, and then decides whether the user should get a session token ($t$) from the web service, or reattempt authentication (by reinvoking procedure $X$).
The general problem of synthesising a representative choreography from a set of process specifications is called \emph{choreography extraction} (extraction for short)~\cite{CMS18}.
Extraction is a hard problem, since it requires predicting how concurrent processes can communicate with each other. Approaching this problem with brute force leads to the state-space explosion that is typical of static analysis of concurrent programs \cite{O18}.
Extraction is also connected to deadlock-freedom: any system that can be represented by a choreography is necessarily deadlock-free~\cite{CM20}; however, some systems are deadlock-free but cannot be extracted to a choreography~\cite{CLM17}.
The state-of-the-art implementation of extraction~\cite{LTY15} has worst-case super-factorial complexity. This limits the feasibility of thorough testing, and to date the practical limits of extraction are still largely unexplored and unclear.
\subsection*{Contribution}
In this article, we present a simple yet effective choreography extraction procedure, whose correctness and efficiency are systematically tested in practice.
We revisit and expand on the key ideas that we previously presented in~\cite{CLM17}, where we informally described an extraction algorithm. Here, we formally define this algorithm and prove its main properties. In the process, we also make some small improvements and extensions.
Then, we introduce an implementation of our algorithm and carry out the first thorough and systematic testing of choreography extraction in the literature.
Our contribution is three-fold.
\paragraph{Theory.} Our theory for choreography extraction focuses on simplicity.
The languages for choreographies and processes respectively build upon Core Choreographies and Stateful Processes, which have been previously proposed as languages for foundational studies on choreographies: they are designed to be minimalistic, yet representative of the choreographic approach and Turing complete~\cite{CM20,CMP21a}.
We extend the process language with an abstract operational semantics that overapproximates the possible executions of a network (a system of processes).
This abstract semantics allows us to construct a finite graph that represents the (abstract) execution space of a system. Choreography extraction can then be formulated as a procedure that reconstructs a choreography by following paths in this graph.
Our extraction also helps in debugging: if a potential deadlock is present, we pinpoint it with a special term ($\dlock$). Choreographies that are successfully extracted guarantee deadlock-freedom. The soundness of our approach is proven in terms of strong bisimilarity~\cite{S11}.
Using our theory as a foundation, we design an algorithm that is significantly simpler and more efficient than previous work: it consists of only two phases (the construction of the graph and its visit) and has better complexity.
\paragraph{Implementation.}
The design of our implementation includes choosing adequate data structures, optimising substeps, parallelisation, and proving all these choices correct. An example is devising an efficient decision procedure for guaranteeing the absence of livelocks.
As a result, we obtain an implementation that successfully manages our test suite (described next) in reasonable time.
\paragraph{Evaluation.}
Designing a test suite for extraction poses a major challenge: we cannot simply generate random networks since nearly none of them will be extractable (it is very unlikely that randomly-generated processes have matching communication actions throughout execution). In order to ensure that we generate extractable networks, we rely on a compilation procedure for choreographies that has been proven formally correct~\cite{CMP21b}.
Specifically, by generating choreographies and compiling them, we obtain a first set of networks that are guaranteed to be extractable. This set is then extended to a more comprehensive test suite by applying additional transformations that simulate realistic software development:
we devised an automatic tool that simulates the typical changes (both correct and incorrect) that are introduced when a programmer edits a local process program, and then tried to extract choreographies from the edited networks. This provides information on how quickly our program fails for unextractable networks.
Our test suite represents the first systematic and comprehensive approach to the evaluation of extraction. Thus, we believe it to be a useful reference also for the future design and implementation of new extraction algorithms.
\subsection{Related Work}
\label{sec:related}
Most works on choreographic languages focus on the inverse (and simpler) operation to extraction: Endpoint Projection (EPP), the translation of choreographies into distributed implementations~\cite{CHY12,Hetal16}.
EPP supports a top-down development methodology: developers first write choreographies and then execute the output mechanically generated by EPP. However, there are scenarios
where this methodology is not applicable:
\begin{itemize}
\item The analysis or use of ``legacy code'', i.e., code that was not generated by EPP.
This might be code that was developed previously, or new code written in a technology that does not adopt EPP.
With legacy code, EPP is not helpful.
\item Code updates: the programs generated by EPP are typically updated locally later on (for
configuration or optimisation, for example). Since the original choreography is not automatically
updated, rerunning EPP loses these changes. We also lose information about what the system is actually doing, since the original choreography no longer represents it.
\end{itemize}
To address these issues, researchers started investigating choreography extraction, the topic of this article~\cite{CMS18,LT12,LTY15}.
Extraction is still a largely unexplored field of research. Early attempts developed theories based on session types~\cite{LT12}, linear logic~\cite{CMS18}, or communicating automata~\cite{LTY15}.
The theory in~\cite{LTY15} comes with an implementation, which is the state of the art in the area. However, the proposed algorithm focuses on neither efficiency nor simplicity: it consists of several complex phases, one of which has worst-case super-factorial complexity.
This limits the feasibility of thorough testing, and indeed the implementation has only been tested on a few small, selected examples, which tell us little about its applicability on a larger scale and its correctness.
The choreographic language in~\cite{LTY15} differs from ours: for example, it cannot capture internal computation, and it has internal threads (which we represent as separate processes). Nevertheless, many of the manually-written examples in~\cite{LTY15} can be reformulated in our framework. These reformulations are included in the testing of our implementation, and some of them benefit greatly from our parallelisation of extraction.
\section{Networks, Choreographies, and Extraction}
\label{sec:theory}
We introduce the languages that we use in this work to model process networks and choreographies.
These languages are very similar to those studied in~\cite{CM20}, where the interested reader can
find a formal treatment of these calculi, as well as statements and proofs of the most relevant
properties.
Syntactically, there are minor differences due to the goal of obtaining a process language closer to
real implementation languages, as depicted by the example in the introduction.
These changes are inspired by the languages discussed in~\cite{CM17a}.
Semantically, the reduction semantics for these calculi is also extended with labels in order to
allow for a formalisation of the link between choreographies and their process implementations as a
bisimilarity.
\subsection{Networks}
\label{sec:networks}
Process networks, or simply networks, represent systems of concurrent communicating processes.
Each process has an internal memory where values can be stored, identified by
variables.
Our model of networks is a calculus, which we call \emph{Stateful Processes} (SP), parameterised on
sets of process names, expressions, labels, variables, and procedure names.
We assume these sets to be fixed, as they are immaterial for our presentation.
We abstract from the concrete language of expressions, which models internal computation and is
orthogonal to our development, assuming only that: expressions can contain values $v$ and variables;
and evaluation of expressions always terminates and returns a value.
To simplify the presentation, we use $\pid p,\pid q,\ldots$ to range over process
names, $e,e',e_1,\ldots$ to range over expressions, $\ell,\ell',\ell_1,\ldots$ to range over labels,
$x,x',y,y_1,\ldots$ to range over variables, and $X,Y,X',Y_1,\ldots$ to range over procedure names.
Networks are ranged over by $N,N',N_1,M,\ldots$.
\paragraph{Syntax.}
Formally, a network is a map from a finite set of process names to \emph{processes} of the form $\procterm{\{\brec{X_i}{B_i}\}_{i\in I}}{B}$, where $B$ and each $B_i$ are process \emph{behaviours}.
We denote by $\actor{p_1}{P_1}\parp\cdots\parp\actor{p_n}{P_n}$ the network $N$ that maps each
process name $\pid p_i$ to the process term $P_i$, i.e., $N(\pid p_i)=P_i$ for all $i\in[1,n]$, and
every other process name to $\nil$.
Note that the order of processes in this representation is immaterial.
The network mapping all process names to $\nil$ is denoted $\nil$.
In $\procterm{\{\brec{X_i}{B_i}\}_{i\in I}}{B}$, behaviour $B$ is the \emph{main behaviour} of the process, and $\{\brec{X_i}{B_i}\}_{i\in I}$ is a
set of \emph{procedure definitions}, assigning each $X_i$ to the corresponding behaviour $B_i$ (the
\emph{body} of the procedure).
Behaviours are syntactically defined by the grammar in Figure~\ref{fig:sp_syntax}.
\begin{figure}
\centering
\[B ::= \nil \mid \gencall \mid \bsend qe;B \mid \brecv px;B \mid \bsel q\ell;B
\mid\bbranch p{\ell_1:B_1,\ldots,\ell_n:B_n} \mid \bcond e{B_1}{B_2}
\]
\caption{Syntax of Stateful Processes.}
\label{fig:sp_syntax}
\end{figure}
We use $P,P',P_1,\ldots$ to range over processes and $B,B',B_1,\ldots$ to range over behaviours.
We also write $\procs[p]$ for the set of all procedure definitions at $\pid p$, and we often
abbreviate $\actor p{\procterm{\procs[p]}B}$ to $\actor[{\procs[p]}]pB$, or simply $\actor pB$ if $\procs[p]$ is clear from the context.
Term $\nil$ is the behaviour of a process that has terminated.
Term $X$ is a procedure call, i.e., the invocation of the procedure called $X$ in the process
executing the behaviour.
Procedure calls are executed by replacing them with their definition.
Term $\bsend qe;B$ is a send action, which evaluates expression $e$, sends the resulting value to
process $\pid q$, and continues as $B$.
Dually, term $\brecv px;B$ receives a value from process $\pid p$, stores it in a local variable
$x$, and continues as $B$.
Term $\bsel q\ell;B$ sends to $\pid q$ the selection of a behaviour labelled by $\ell$ (labels are
constants), and then proceeds as $B$.
Selections are received by the branching term $\bbranch p{\ell_1:B_1,\ldots,\ell_n:B_n}$, which
models the offering of different possible behaviours: the process executing this term waits to
receive from $\pid p$ the selection of one of the labels $\ell_i$ in $\ell_1,\ldots,\ell_n$, and
then proceeds with the associated behaviour $B_i$.
Term $\bcond e{B_1}{B_2}$ is the standard conditional term.
It evaluates the Boolean expression $e$ and proceeds as $B_1$ if the result is \m{true}, and as
$B_2$ otherwise.
Networks are expected to satisfy some well-formedness conditions, corresponding to usual
requirements in practice:
\begin{itemize}
\item processes do not contain subterms that attempt self-communication (for example, $\actor p{\brecv px}$
is not allowed);
\item all expressions in guards of conditionals evaluate to \m{true} or \m{false};
\item all procedure calls refer to procedures defined in the enclosing process;
\item all defined procedures are distinct, i.e., in $\procterm{\{\rec{X_i}{B_i}\}_{i \in I}}{B}$,
$X_i\neq X_j$ for every $i\neq j\in I$.
\end{itemize}
Note that we do \emph{not} require procedure calls to be guarded.
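To fix intuitions, the grammar in Figure~\ref{fig:sp_syntax} and the first well-formedness condition can be rendered concretely. The following Python sketch uses a toy tuple encoding of our own; it is an illustration only, not part of the formal development.

```python
# Toy encoding (ours) of SP behaviours from Figure fig:sp_syntax as nested
# Python tuples:
#   ('nil',)                      terminated behaviour 0
#   ('call', X)                   procedure call
#   ('send', q, e, B)             send e to q, continue as B
#   ('recv', p, x, B)             receive from p into x, continue as B
#   ('sel', q, l, B)              select label l at q, continue as B
#   ('branch', p, {l1: B1, ...})  offer branches l1..ln to p
#   ('cond', e, B1, B2)           if e then B1 else B2
def no_self_communication(p, b):
    """First well-formedness condition: process p never communicates
    with itself anywhere in behaviour b."""
    tag = b[0]
    if tag in ('nil', 'call'):
        return True
    if tag in ('send', 'sel', 'recv'):
        return b[1] != p and no_self_communication(p, b[3])
    if tag == 'branch':
        return b[1] != p and all(
            no_self_communication(p, c) for c in b[2].values())
    if tag == 'cond':
        return (no_self_communication(p, b[2])
                and no_self_communication(p, b[3]))
    raise ValueError('not a behaviour')
```

For instance, the check rejects the disallowed behaviour $\brecv px$ at $\pid p$ from the first condition above.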
\begin{example}\label{ex:auth_sp}
The example network from the introduction can be formalised as follows.
\begin{align*}
& \actor u{
\procterm
{\brec X{\bsend a{cred};\bbranch a{\mathit{ok}: \brecv w{token},\ \mathit{ko}: \call X}}
\\ &\hspace*{1.5em}}
{\call X}}
\\
\parp \
& \actor a{
\procterm
{\brec X{\brecv u{cred};\bcond{check(cred)}
{\left( \bsel u{ok}; \bsel w{ok} \right)}
{\left( \bsel u{ko}; \bsel w{ko}; \call X \right)}}
\\ &\hspace*{1.5em}}
{\call X}}
\\
\parp \
& \actor w{
\procterm
{\brec X{\bbranch a{\mathit{ok}: \bsend u{token},\ \mathit{ko}: \call X }}
\\ &\hspace*{1.5em}}
{\call X}}
\end{align*}
This corresponds precisely to the example written earlier, but now using the formal language of SP.
We follow the usual practice of omitting trailing $\nil$ terms in behaviours.
\eoe
\end{example}
The inductive definition of process behaviours gives rise to a notion of context in the usual
way~\cite{SW01}, by allowing the terminal $\nil$ to be replaced by a hole.
\paragraph{Semantics.}
The semantics of SP is given in terms of labelled reductions of the form
$N,\sigma\lto\lambda N',\sigma'$, where $\sigma$ is a \emph{state function} (which, given a
process and a variable, returns the value stored in that variable in the process's memory) and
$\lambda$ is a \emph{reduction label}.
The syntax of reduction labels is given in Figure~\ref{fig:cc_reductions}.
\begin{figure}
\[
\lambda ::= \lcom pvq \mid \gensel \mid \lcond p{then} \mid \lcond p{else}
\]
\caption{Reduction labels for reductions in Stateful Processes}
\label{fig:cc_reductions}
\end{figure}
The role of labels is to identify the action that has been performed; this will be useful later to
state and prove results about the extraction algorithm.
In realistic implementations, each process would store its memory state locally. The formulation chosen here is trivially equivalent, but a global state function simplifies
the formulation of some of our later results, as in other works on choreographies~\cite{CMP21a,CMP21b}.
The semantics of SP is defined by the rules in Figure~\ref{fig:sp_semantics}.
\begin{figure}
\begin{eqnarray*}
&\infer[\rname{S}{Com}]{
\actor[{\procs[p]}]p{\bsend qe;B_1} \parp
\actor[{\procs[q]}]q{\brecv px;B_2},\sigma
\lto[]{\lcom pvq}
\actor[{\procs[p]}]p{B_1} \parp
\actor[{\procs[q]}]q{B_2},\sigma[\tuple{\pid q,x} \mapsto v]
}{
\eval e\sigma pv
}
\\[1ex]
&\infer[\rname{S}{Sel}]{
\actor[{\procs[p]}]p{\bsel q{\ell_j};B} \parp
\actor[{\procs[q]}]q{\bbranch p{\ell_1:B_1,\ldots,\ell_n:B_n}},\sigma
\lto[]\gensel
\actor[{\procs[p]}]pB \parp \actor[{\procs[q]}]q{B_j},\sigma
}{
1\leq j\leq n
}
\\[1ex]
&\infer[\rname{S}{Then}]{
\actor[{\procs[p]}]p{\bcond e{B_1}{B_2}},\sigma
\lto[]{\lcond p{then}}
\actor[{\procs[p]}]p{B_1},\sigma
}{
\eval e\sigma p{\m{true}}
}
\\[1ex]
&\infer[\rname{S}{Else}]{
\actor[{\procs[p]}]p{\bcond e{B_1}{B_2}},\sigma
\lto[]{\lcond p{else}}
\actor[{\procs[p]}]p{B_2},\sigma
}{
\eval e\sigma p{\m{false}}
}
\\[1ex]
&
\infer[\rname{S}{Par}]{
N \parp M,\sigma \lto[]\lambda N' \parp M,\sigma'
}{
N,\sigma \lto[]\lambda N',\sigma'
}
\qquad
\infer[\rname{S}{Struct}]{
N,\sigma \lto[]\lambda N', \sigma'
}{
N \precongr[] M
& M,\sigma \lto[]\lambda M',\sigma'
& M' \precongr[] N'
}
\end{eqnarray*}
\caption{Semantics of Stateful Processes.}
\label{fig:sp_semantics}
\end{figure}
Two processes can synchronise when they refer to each other.
In rule~\rname{S}{Com}, an output at $\pid p$ directed at $\pid q$ synchronises with the dual input
action at $\pid q$ (an intention to receive from $\pid p$).
The communicated value $v$ is obtained by evaluating expression $e$ locally at the sender $\pid p$ under the memory state $\sigma$, denoted $\eval e\sigma pv$, and is stored in the corresponding variable $x$ at $\pid q$ in the reductum.
The label in the reduction summarises the observable part of the communication.
Rule~\rname{S}{Sel} follows the same intuition, but for a label selection -- where $\pid p$ selects
between different possible behaviours offered at $\pid q$ by sending the appropriate label.
Rules~\rname{S}{Then} and~\rname{S}{Else} model conditionals in the expected way, while rule
\rname{S}{Par} allows for reductions involving only a subset of processes in the network.
Rule~\rname{S}{Struct} closes reductions under a \emph{structural precongruence relation}
$\precongr[]$, generated by closing the rule in Figure~\ref{fig:sp_precongr} under reflexivity,
transitivity, and context.
\begin{figure}
\begin{eqnarray*}
\infer[\rname{S}{Unfold}]{
\actor[{\procs[p]}]p{\call X} \precongr[] \actor[{\procs[p]}]p{B_X}
}{
\rec{X}{B_X}\in\procs[p]
}
\end{eqnarray*}
\caption{Structural precongruence in Stateful Processes.}
\label{fig:sp_precongr}
\end{figure}
This rule allows procedure calls to be replaced by their definition anywhere inside a process's
behaviour.
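To make the communication rules concrete, here is a minimal Python sketch of rules \rname{S}{Com} and \rname{S}{Sel} (a toy encoding of our own; conditionals, procedures, and structural precongruence are omitted). It scans the network for a process whose head action is matched by the dual head action at its partner and performs one labelled reduction.

```python
def step(net, state):
    """One reduction of rules S-Com or S-Sel on a toy encoding (ours).
    net maps process names to behaviours encoded as tuples:
    ('send', q, e, B), ('recv', p, x, B), ('sel', q, l, B),
    ('branch', p, {l: B}), ('nil',).  Expressions are evaluated with
    Python's eval over the process's local variables, standing in for the
    abstract expression language.  Returns (label, net', state') or None."""
    for p, b in net.items():
        if b[0] == 'send':
            q = b[1]
            if net.get(q, ('nil',))[0] == 'recv' and net[q][1] == p:
                v = eval(b[2], {}, state.get(p, {}))      # evaluate e at p
                net2 = dict(net); net2[p] = b[3]; net2[q] = net[q][3]
                st2 = {r: dict(vs) for r, vs in state.items()}
                st2.setdefault(q, {})[net[q][2]] = v      # store in x at q
                return (('com', p, v, q), net2, st2)
        if b[0] == 'sel':
            q, lbl = b[1], b[2]
            qb = net.get(q, ('nil',))
            if qb[0] == 'branch' and qb[1] == p and lbl in qb[2]:
                net2 = dict(net); net2[p] = b[3]; net2[q] = qb[2][lbl]
                return (('sel', p, q, lbl), net2, state)
    return None
```

For example, with $\pid p$ sending the value of \texttt{1+1} to a waiting $\pid q$, a single step yields the label $\lcom p2q$ and updates $\pid q$'s variable to $2$.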
\begin{lemma}[Determinism of SP]
\label{lem:sp-confluent}
Let $N$ be a network, $\sigma$ be a state, and $\lambda$ be a reduction label.
For any networks $N_1$ and $N_2$ and states $\sigma_1$ and $\sigma_2$, if
$N,\sigma\lto\lambda N_i,\sigma_i$ for each $i=1,2$, then $\sigma_1=\sigma_2$ and there exists a
network $N'$ such that $N_i\precongr[]N'$ for $i=1,2$.
\end{lemma}
\begin{proof}[Proof (sketch).]
First observe that labels uniquely identify the process(es) involved in the reduction: this is
trivially the case for rules \rname{S}{Com}, \rname{S}{Sel}, \rname{S}{Then} and \rname{S}{Else},
and rules \rname{S}{Par} and \rname{S}{Struct} preserve this property.
Furthermore, the action(s) being executed must be the head action(s) in each participating
process, possibly after unfolding a behaviour consisting of a procedure call: once again, this is
trivially the case for the rules that execute reductions, and preserved by \rname{S}{Par} and
\rname{S}{Struct} (in the latter case, because structural congruence cannot change the head action
of a process unless it is a procedure call).
Therefore, the label of the reduction uniquely determines the resulting state, and the resulting
networks may differ only in which procedure calls have been unfolded in each process.
If $N,\sigma\lto[]\lambda N_1,\sigma_1$ and $N,\sigma\lto[]\lambda N_2,\sigma_2$, it thus follows
that $\sigma_1=\sigma_2$, and that $N_1\precongr[]N'$ and $N_2\precongr[]N'$ for the network $N'$
obtained from $N_1$ by unfolding all procedure calls that have been unfolded in $N_2$ and vice versa.
\end{proof}
\subsection{Core Choreographies}
Networks define the local actions that each process should perform, as in Example~\ref{ex:auth_sp},
but they can be hard to read and error-prone to write: processes can have very different
structures, since each of them interacts with different processes at different times, and we must
ensure that every action aimed at interacting with another process is eventually matched by a
compatible action at that process.
Conversely, choreographies are specifications at a higher level of abstraction that make the flow of
interactions easy to read and write, instead of focusing on the local view of each process.
We express choreographies using a minimalistic formal language (but still expressive enough to
capture relevant practical examples from the literature, as we show later).
Like networks, choreographies are parameterised on sets of process names, expressions, labels, and procedure
names, with the same conventions as above.
We call the choreography language in this work Core Choreographies (CC).
\paragraph{Syntax.}
A choreography is a term of the form $\chorterm{\{\rec{X_i}{C_i}\}_{i \in I}}{C}$, where: $C$ is the
\emph{main body} of the choreography; and $\{\rec{X_i}{C_i}\}_{i\in I}$ is a set of procedure
definitions, mapping each recursion variable $X_i$ (the \emph{name} of the procedure) to the
respective choreography \emph{body} $C_i$, for some finite set of indices $I$.
Choreography bodies are defined inductively by the grammar in Figure~\ref{fig:cc_syntax}.
\begin{figure}
\[ C ::= \nil \mid \gencom; C \mid \gensel; C \mid \gencond \mid \gencall \]
\caption{Syntax of Core Choreographies}
\label{fig:cc_syntax}
\end{figure}
We often abuse terminology and refer to choreography bodies as ``choreographies'', when no confusion
can arise.
Term $\nil$ is the terminated choreography, and again we typically omit it in examples when it is
the trailing term of a non-terminated choreography.
The next two terms both model systems that execute an interaction and proceed as $C$.
There are two kinds of interactions.
\begin{itemize}
\item In a value communication $\gencom$, process $\pid p$ evaluates expression $e$ and sends the
result to process $\pid q$, which stores it in its variable $x$,
replacing the value previously stored there.
As in SP, we abstract from the concrete language of expressions $e$, which models internal
computation and is orthogonal to our development, under the same assumptions as before.
\item In a selection $\gensel$, $\pid p$ selects $\ell$ among the set of branches offered by
$\pid q$.
\end{itemize}
We use $\eta$ to range over interactions, when we do not need to distinguish between value
communications and label selections.
In a conditional $\gencond$, $\pid p$ evaluates the (Boolean) expression $e$ and checks whether the
result is \m{true} or \m{false} to decide whether the system proceeds as $C_1$ or $C_2$,
respectively.
Finally, term $\gencall$ is a procedure call.
Intuitively, executing $\gencall$ corresponds to executing the body of the procedure with name $X$.
As for networks, we often omit the first part of choreography terms that have the empty set as the
set of procedure definitions, i.e., we simply write $C$ instead of $\chorterm{\emptyset}{C}$.
We assume that all choreographies are well-formed, meaning that:
\begin{itemize}
\item there are no self-communications, i.e., $\pid p$ and $\pid q$ are distinct in every subterm of
the form $\gencom$ or $\gensel$;
\item all expressions in guards of conditionals evaluate to \m{true} or \m{false};
\item all procedure calls are guarded in procedure definitions, i.e., there is no procedure
  definition of the form $X=Y$ for procedure names $X$ and $Y$;
\item all defined procedures are distinct, i.e., in $\chorterm{\{\rec{X_i}{C_i}\}_{i \in I}}{C}$,
$X_i\neq X_j$ for every $i\neq j\in I$.
\end{itemize}
As before, from the inductive definition of choreographies we define contexts in the usual
way~\cite{SW01}.
\paragraph{Semantics.}
The semantics of CC is given by labelled reductions $C,\sigma\lto\lambda C',\sigma'$, with labels
$\lambda$ as in SP.
The reduction rules are given in Figure~\ref{fig:cc_semantics}.
\begin{figure}
\begin{eqnarray*}
&\infer[\rname{C}{Com}]{
\gencom;C,\sigma \lto{\lcom pvq} C,\sigma[\tuple{\pid q,x} \mapsto v]
}{
\eval e\sigma pv
}
\qquad
\infer[\rname{C}{Sel}]{
\gensel;C,\sigma \lto\gensel C,\sigma
}{}
\\[1ex]
&\infer[\rname{C}{Then}]{
\gencond,\sigma \lto{\lcond p{then}} C_1, \sigma
}{
\eval e\sigma p{\m{true}}
}
\quad
\infer[\rname{C}{Else}]{
\gencond,\sigma \lto{\lcond p{else}} C_2, \sigma
}{
\eval e\sigma p{\m{false}}
}
\\[1ex]
&
\infer[\rname{C}{Struct}]{
C_1,\sigma \lto\lambda C'_1,\sigma'
}{
C_1 \precongr C_2
& C_2,\sigma \lto\lambda C'_2, \sigma'
& C'_2 \precongr C'_1
}
\end{eqnarray*}
\caption{Semantics of Core Choreographies}
\label{fig:cc_semantics}
\end{figure}
The first four rules formalise the above informal description of the involved syntactic terms, and follow the same intuitions as the corresponding rules for SP.
Rule~\rname{C}{Struct} closes reductions under a structural precongruence $\precongr$ that allows
procedure calls to be unfolded and non-interfering actions to be executed in any order.
The main rules defining this relation are given in Figure~\ref{fig:cc_precongr}; the missing rules
close this relation under reflexivity, transitivity, and context.
\begin{figure}
\begin{eqnarray*}
&\infer[\rname{C}{Eta-Eta}]{
\eta;\eta';C \precongr \eta';\eta;C
}{
\pn(\eta)\cap\pn(\eta') = \emptyset
}
\\[1ex]
&\infer[\rname{C}{Eta-Cond}]{
\cond pe{(\eta;C_1)}{(\eta;C_2)}
\precongr
\eta;\gencond
}{
\pid p\notin \pn(\eta)
}
\\[1ex]
&\infer[\rname{C}{Cond-Eta}]{
\eta;\gencond
\precongr
\cond pe{(\eta;C_1)}{(\eta;C_2)}
}{
\pid p\notin \pn(\eta)
}
\\[1ex]
&\infer[\rname{C}{Cond-Cond}]{
\begin{array}{c}
\cond pe{(\cond q{e'}{C_1}{C_2})}{(\cond q{e'}{C'_1}{C'_2})}
\\
\precongr
\\
\cond q{e'}{(\cond pe{C_1}{C'_1})}{(\cond pe{C_2}{C'_2})}
\end{array}
}{
\pid p \neq \pid q
}
\\[1ex]
&\infer[\rname{C}{Unfold}]{
\gencall \precongr C_X
}{
\genrec\in\procs}
\end{eqnarray*}
\caption{Structural precongruence in Core Choreographies}
\label{fig:cc_precongr}
\end{figure}
The key idea behind $\precongr$ is illustrated by rule~\rname{C}{Eta-Eta}, which swaps
interactions involving disjoint sets of processes (modelling concurrency).
In this rule, $\pn(\eta)$ denotes the set of process names that appear in $\eta$.
Rules~\rname{C}{Eta-Cond} and \rname{C}{Cond-Cond} are similar, and rule~\rname{C}{Cond-Eta} is the dual of~\rname{C}{Eta-Cond}.
Rule~\rname{C}{Unfold} allows procedure calls to be replaced by the corresponding definition.
Since all rules except \rname{C}{Unfold} are reversible, we often work with
choreographies $C_1$ and $C_2$ such that both $C_1\precongr C_2$ and $C_2\precongr C_1$;
in this case, we simply write $C_1\equiv C_2$.
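The side condition of rule~\rname{C}{Eta-Eta} is simple enough to state programmatically. The following tiny Python sketch (a toy encoding of our own, with interactions as tuples) checks whether two adjacent interactions commute:

```python
def pn(eta):
    """Process names of an interaction, encoded (ours) as
    ('com', p, e, q, x) or ('sel', p, q, l)."""
    return {eta[1], eta[3]} if eta[0] == 'com' else {eta[1], eta[2]}

def can_swap(eta1, eta2):
    """Side condition of rule C-Eta-Eta: adjacent interactions commute
    precisely when their sets of process names are disjoint."""
    return pn(eta1).isdisjoint(pn(eta2))
```

For instance, a communication between $\pid p$ and $\pid q$ commutes with one between $\pid r$ and $\pid s$, but not with an interaction that also involves $\pid q$.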
\begin{example}\label{ex:auth_cc}
We can write the client authentication protocol in the introduction as a choreography in the
following way, where $\sel a{u,w}\ell$ is a shortcut for $\sel au\ell;\sel aw\ell$.
For presentation purposes, we write each procedure definition as a separate equation, and abuse notation by identifying the main body with procedure $\m{main}$.
\begin{align*}
X = {} & \com u{pwd}ax;\\
& \cond a{\m{ok}(x)}{\left(\sel a{u,w}{ok};\com wtux\right)\\
&\hspace{4em}}{\left(\sel a{u,w}{ko};X\right)} \\
\m{main} = {} & X
\end{align*}
Here, $\pid u$ sends a password to $\pid a$.
If this password is correct, $\pid a$ notifies $\pid u$ and $\pid w$, and $\pid w$ sends an
authentication token $t$ to $\pid u$.
Otherwise, $\pid a$ notifies $\pid u$ and $\pid w$ that authentication failed, and a new attempt
is made (by recursively invoking $X$).
This choreography can be obtained from the network in Example~\ref{ex:auth_sp} by the extraction
algorithm defined in later sections.
\eoe
\end{example}
\subsection{Endpoint Projection}
Choreographies satisfying some realisability conditions can be translated automatically into
networks by a transformation known as \emph{Endpoint Projection} (EPP).
We summarise this procedure, as it is a key ingredient for stating the soundness of extraction.
Intuitively, EPP is defined by translating each choreography action into its local counterparts.
For example, a communication action $\gencom$ is projected as $\bsend qe$ for process $\pid p$, as
$\brecv px$ for process $\pid q$, and as a no-op for any other process.
The interesting case (where realisability plays a role) is that of conditionals: for any process
other than $\pid p$, $\gencond$ must be projected as a unique behaviour.
This is dealt with by a partial operator called \emph{merging}~\cite{CHY12,CMP21b}.
Two behaviours are mergeable if every place where they differ is protected by a label selection.
The key rule defining merge is that for branching terms:
\begin{multline*}
\bbranch p{\ell_i:B_i\mid i\in J} \merge
\bbranch p{\ell_i:B'_i \mid i\in K} =\\
\pid p\&\left(\{\ell_i:(B_i \merge B'_i)\mid i\in J\cap K\}
\cup\{\ell_i:B_i\mid i\in J \setminus K\}\cup\{\ell_i:B'_i \mid i\in K \setminus J\}\right)
\end{multline*}
The remaining rules extend this operator homomorphically,
e.g., $(\bsend qe;B_1)\merge(\bsend qe;B_2)=\bsend qe;(B_1\merge B_2)$.
Merge is undefined for two behaviours that start with different actions, e.g., $\bsend qe;B$ and
$\brecv qx;B'$.
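The merging rules can be sketched as follows; this is a toy Python encoding of our own (behaviours as nested tuples), covering the branching rule above and the homomorphic cases for send, receive, and selection.

```python
# Toy sketch (ours) of the merge operator.  Behaviours are tuples:
# ('send', q, e, B), ('recv', p, x, B), ('sel', q, l, B),
# ('branch', p, {l: B}), ('nil',).  Conditionals are omitted.
def merge(b1, b2):
    if b1 == b2:                              # identical behaviours
        return b1
    if b1[0] == b2[0] == 'branch' and b1[1] == b2[1]:
        # Key rule: branches with a common label are merged recursively;
        # labels offered on only one side are kept unchanged.
        out = {}
        for l in set(b1[2]) | set(b2[2]):
            if l in b1[2] and l in b2[2]:
                out[l] = merge(b1[2][l], b2[2][l])
            else:
                out[l] = b1[2][l] if l in b1[2] else b2[2][l]
        return ('branch', b1[1], out)
    # Homomorphic cases: same head action, merge the continuations.
    if b1[0] == b2[0] and b1[0] in ('send', 'recv', 'sel') and b1[1:3] == b2[1:3]:
        return (b1[0], b1[1], b1[2], merge(b1[3], b2[3]))
    raise ValueError('merge undefined')       # e.g. a send against a receive
```

Merging two branchings on the same process takes the union of their branches, while merging a send with a receive raises an error, reflecting that merge is undefined there.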
The EPP of a choreography body for process $\pid r$, denoted $\epp[r]C$, is defined in
Figure~\ref{fig:epp}.
This extends to a choreography $\chorterm{\{\rec{X_i}{C_i}\}_{i \in I}}{C}$ by defining, for each
$\pid r$, $\procs[r]=\{\brec{X_i}{\epp[r]{C_i}}\}_{i\in I}$ and $\epp C$ as the function mapping
each process $\pid r$ to $\procterm{\procs[r]}{\epp[r]C}$.
If this is defined for every $\pid r$, the choreography is said to be
\emph{projectable}.\footnote{In practice, some static analysis is performed to optimise the projection of procedure invocations so that $\epp[r]{X} = \nil$ if $\pid r$ cannot be involved in the execution of $X$~\cite{CM20}. This optimisation does not affect our results.}
\begin{figure}
\begin{align*}
&\epp[r]{\gencom;C} =
\begin{cases}
\bsend qe;\epp[r]C & \mbox{if $\pid r = \pid p$} \\
\brecv px;\epp[r]C & \mbox{if $\pid r = \pid q$} \\
\epp[r]C & \mbox{o.w.}
\end{cases}
\quad\qquad
\epp[r]{\gensel;C} =
\begin{cases}
\bsel q\ell;\epp[r]C & \mbox{if $\pid r = \pid p$} \\
\bbranch p{\ell:\epp[r]C} & \mbox{if $\pid r = \pid q$} \\
\epp[r]{C} & \mbox{o.w.}
\end{cases}
\\[1ex]
&\epp[r]{\gencond} =
\begin{cases}
\bcond e{\epp[r]{C_1}}{\epp[r]{C_2}} & \mbox{if $\pid r = \pid p$} \\
\epp[r]{C_1} \merge \epp[r]{C_2} & \mbox{o.w.}
\end{cases}
\qquad
\epp[r]\nil = \nil
\qquad\epp[r]{X} = X
\end{align*}
\caption{Endpoint Projection of choreography bodies.}
\label{fig:epp}
\end{figure}
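As an illustration, the equations of Figure~\ref{fig:epp} translate almost literally into code. The following Python sketch (a toy encoding of our own, with a minimal merge restricted to identical behaviours and branchings) projects a choreography body onto a process:

```python
# Toy encodings (ours): choreography bodies are ('com', p, e, q, x, C),
# ('sel', p, q, l, C), ('cond', p, e, C1, C2), ('call', X), ('nil',);
# behaviours as in SP, with ('branch', p, {l: B}) for branchings.
def merge(b1, b2):
    """Minimal merge: identical behaviours, or branchings on the same
    process whose branches are merged label-wise."""
    if b1 == b2:
        return b1
    if b1[0] == b2[0] == 'branch' and b1[1] == b2[1]:
        out = {}
        for l in set(b1[2]) | set(b2[2]):
            if l in b1[2] and l in b2[2]:
                out[l] = merge(b1[2][l], b2[2][l])
            else:
                out[l] = b1[2][l] if l in b1[2] else b2[2][l]
        return ('branch', b1[1], out)
    raise ValueError('not projectable: behaviours cannot be merged')

def project(c, r):
    """Projection of choreography body c onto process r (Figure fig:epp)."""
    if c[0] in ('nil', 'call'):
        return c
    if c[0] == 'com':
        _, p, e, q, x, cont = c
        k = project(cont, r)
        if r == p: return ('send', q, e, k)
        if r == q: return ('recv', p, x, k)
        return k
    if c[0] == 'sel':
        _, p, q, l, cont = c
        k = project(cont, r)
        if r == p: return ('sel', q, l, k)
        if r == q: return ('branch', p, {l: k})
        return k
    if c[0] == 'cond':
        _, p, e, c1, c2 = c
        if r == p:
            return ('cond', e, project(c1, r), project(c2, r))
        return merge(project(c1, r), project(c2, r))
```

Projecting the conditional of Example~\ref{ex:auth_cc} (without the recursive call) onto $\pid w$ yields the branching behaviour of $\pid w$ in Example~\ref{ex:auth_sp}, with merging resolving the two branches of the conditional.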
\begin{example}
The network in Example~\ref{ex:auth_sp} is the EPP of the choreography in Example~\ref{ex:auth_cc}.
\end{example}
\section{Extraction from SP}
\label{sec:extraction}
In this section, we develop the theory of extracting a choreography from a network.
In a nutshell, the idea is to execute the network symbolically (abstracting from the actual
values that are communicated, for example) and use the trace of the execution to write down a choreography.
Since network reduction is non-deterministic and networks may have infinite behaviour, this poses
some challenges even to ensure termination.
We divide this presentation into two parts.
First, we focus on the fragment of SP without recursive definitions, which we use to discuss the
intuition behind our extraction algorithm in a simple setting.
We present extraction for this fragment, and formally state and prove its soundness.
In the second part, we extend the construction to deal with infinite behaviour.
\subsection{The finite case}
\label{sec:finite}
In this section we focus on \emph{finite SP}, the fragment of SP without recursive definitions.
Formally, networks in finite SP are well-formed networks where all processes are of the form
$\procterm\emptyset B$; in particular, $B$ cannot contain any procedure calls.
We start by formalising our intuitive notion of ``executing a network symbolically'' by means of a
rewriting relation over a language of extended choreography bodies.
\begin{definition}
An \emph{extended choreography body} is a term written in the grammar of
Figure~\ref{fig:cc_syntax} using the additional constructs $\extract{N}$, where $N$ is a network
in finite SP, and $\dlock$, which stands for a deadlocked system.
\end{definition}
\begin{definition}
\label{defn:extraction}
We generate a rewriting relation $\rwto$ on extended choreography bodies by the
rules
\begin{align*}
\extract \nil
& \rwto \nil
\\
\extract{\actor[]p{\bsend qe;B_{\pid p}} \parp \actor[]q{\brecv px;B_{\pid q}} \parp N'}
& \rwto\gencom;\extract{\actor[]p B_{\pid p} \parp \actor[]q B_{\pid q} \parp N'}
\\
\extract{
\actor[]p{\bsel q{\ell_k};B_{\pid p}} \parp
\actor[]q{\bbranch p{\ell_1:B_{\pid q_1},\ldots,\ell_n:B_{\pid q_n}}} \parp N'}
& \rwto\sel pq{\ell_k};\extract{\actor[]p B_{\pid p} \parp \actor[]q B_{\pid q_k} \parp N'}
\\
\extract{\actor[]p{\bcond e{B_1}{B_2}}\parp N'}
& \rwto\cond pe{\extract{\actor[]p B_1\parp N'}}{\extract{\actor[]p B_2\parp N'}} \\
\extract N & \rwto\dlock\mbox{, if no other rule applies}
\end{align*}
closed under choreography contexts.
\end{definition}
Extraction operates by finding an action or a pair of matching actions in a network and replacing them
by the corresponding choreography action.
In general, there may be different options for these choices, making extraction nondeterministic.
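The rewriting relation can be implemented directly as a recursive function. The following Python sketch (a toy encoding of our own, for finite SP) scans the network for an applicable rule and recurses; by Lemma~\ref{lem:sound-fin} below, the order of this scan is irrelevant up to $\equiv$.

```python
# Toy recursive implementation (ours) of the rewriting rules of
# Definition defn:extraction for finite SP.  Behaviours are tuples:
# ('send', q, e, B), ('recv', p, x, B), ('sel', q, l, B),
# ('branch', p, {l: B}), ('cond', e, B1, B2), ('nil',).
def extract(net):
    """Return a choreography term for the network, embedding 'deadlock'
    (the term 1) where no rewriting rule applies."""
    live = {p: b for p, b in net.items() if b != ('nil',)}
    if not live:
        return ('nil',)                      # rule for the terminated network
    for p, b in live.items():
        if b[0] == 'cond':                   # conditional rule
            net_t = dict(net); net_t[p] = b[2]
            net_e = dict(net); net_e[p] = b[3]
            return ('cond', p, b[1], extract(net_t), extract(net_e))
        if b[0] == 'send':                   # communication rule
            q, qb = b[1], net.get(b[1], ('nil',))
            if qb[0] == 'recv' and qb[1] == p:
                n2 = dict(net); n2[p] = b[3]; n2[q] = qb[3]
                return ('com', p, b[2], q, qb[2], extract(n2))
        if b[0] == 'sel':                    # selection rule
            q, l = b[1], b[2]
            qb = net.get(q, ('nil',))
            if qb[0] == 'branch' and qb[1] == p and l in qb[2]:
                n2 = dict(net); n2[p] = b[3]; n2[q] = qb[2][l]
                return ('sel', p, q, l, extract(n2))
    return 'deadlock'                        # no other rule applies
```

Running this function on the network $N_3$ of Example~\ref{ex:non-deterministic} below reproduces the first derivation shown there, including both deadlocked branches.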
\begin{example}
\label{ex:non-deterministic}
We illustrate this rewriting system with three example networks.
\begin{itemize}
\item Consider the network $N_1$ defined as
$\actor[] p{\bsend qe} \parp\actor[]q{\brecv px}
\parp\actor[]r{\bsend s{e'}} \parp\actor[]s{\brecv ry}$.
There are two sequences of extraction steps from $N_1$, namely
\begin{align*}
\extract{N_1}
& \rwto \gencom;\extract{\actor[]r{\bsend s{e'}} \parp\actor[]s{\brecv ry}} \\
& \rwto \gencom;\com r{e'}sy;\extract\nil \\
& \rwto \gencom;\com r{e'}sy;\nil \\
\mbox{and }\extract{N_1}
& \rwto \com r{e'}sy;\extract{\actor[]p{\bsend qe} \parp\actor[]q{\brecv px}} \\
& \rwto \com r{e'}sy;\gencom;\extract\nil \\
& \rwto \com r{e'}sy;\gencom;\nil
\end{align*}
Observe that the resulting choreographies can be rewritten into each other by
rule~\rname{C}{Eta-Eta} (Figure~\ref{fig:cc_precongr}).
\item Consider now the network $N_2$ defined as
$\actor[]p{B_p} \parp \actor[]q{B_q}$, where
$B_p=\bcond e{\bsel q\lleft;\bsend q1}{\bsel q\lright;\brecv qx}$ and
$B_q=\bbranch p{\lleft:\brecv py,\ \lright:\bsend p2}$.
The only sequence of extraction steps from $N_2$ is
\begin{align*}
\extract{N_2}
& \rwto \cond pe
{\extract{\actor[]p{\bsel q\lleft;\bsend q1} \parp \actor[]q{B_q}}}
{\extract{\actor[]p{\bsel q\lright;\brecv qx} \parp \actor[]q{B_q}}} \\
& \rwto \cond pe
{\sel pq\lleft;\extract{\actor[]p{\bsend q1} \parp \actor[]q{\brecv py}}}
{\sel pq\lright;\extract{\actor[]p{\brecv qx} \parp \actor[]q{\bsend p2}}} \\
& \rwto \cond pe
{\left(\sel pq\lleft;\com p1qy;\extract\nil\right)}
{\left(\sel pq\lright;\com q2px;\extract\nil\right)} \\
& \rwto \cond pe
{\left(\sel pq\lleft;\com p1qy;\nil\right)}
{\left(\sel pq\lright;\com q2px;\nil\right)}
\end{align*}
\item We now introduce an example involving the deadlocked term.
Consider the network $N_3$ defined as
$\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}$.
Again, there are two possible sequences of extraction steps from $N_3$, but both include a
deadlocked term in the result.
\begin{align*}
\extract{N_3}
& \rwto \com p1qx;\extract{
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}
} \\
& \rwto \com p1qx;\cond re
{\extract{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv py}}}
{\extract{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv qy}}} \\
& \rwto \com p1qx;\cond re
{\left(\com p2ry;\extract{\actor[]q{\bsend r3}}\right)}
{\left(\com q3ry;\extract{\actor[]p{\bsend r2}}\right)} \\
& \rwto \com p1qx;\cond re{\left(\com p2ry;\dlock\right)}{\left(\com q3ry;\dlock\right)}
\end{align*}
Alternatively, we can first rewrite the conditional on $\pid r$.
\begin{align*}
\extract{N_3}
& \rwto \cond re
{\extract{\actor[]p{\bsend q1;\bsend r2}\parp\actor[]q{\brecv px;\bsend r3}\parp\actor[]r{\brecv py}}}
{\extract{\actor[]p{\bsend q1;\bsend r2}\parp\actor[]q{\brecv px;\bsend r3}\parp\actor[]r{\brecv qy}}} \\
& \rwto \cond re
{\left(\com p1qx;\extract{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv py}}\right)}
{\left(\com p1qx;\extract{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv qy}}\right)} \\
& \rwto \cond re
{\left(\com p1qx;\com p2ry;\extract{\actor[]q{\bsend r3}}\right)}
{\left(\com p1qx;\com q3ry;\extract{\actor[]p{\bsend r2}}\right)} \\
& \rwto \cond re{\left(\com p1qx;\com p2ry;\dlock\right)}{\left(\com p1qx;\com q3ry;\dlock\right)}
\end{align*}
Note that the resulting extended choreographies can again be rewritten into each other (using
rules~\rname{C}{Cond-Eta} and \rname{C}{Eta-Cond}).\eoe
\end{itemize}
\end{example}
Indeed, non-determinism of extraction is of no practical consequence.
\begin{lemma}
\label{lem:sound-fin}
If $\extract N \rwtot C_1$ and $\extract N \rwtot C_2$, then $C_1\equiv C_2$.
\end{lemma}
\begin{proof}
This follows by induction from the fact that $\rwto$ has the diamond property.
To see that this is the case, observe that, if $\extract N\rwto T_1$ and $\extract N\rwto T_2$ with $T_1\neq T_2$,
then these two rewrites can use neither the first nor the last rule in the definition of $\rwto$, and the
choreography actions introduced in $T_1$ and $T_2$ cannot share process names.
Therefore we can continue the reduction from $T_1$ by adding the choreography action in $T_2$,
obtaining $T'_1$, and we can continue the reduction from $T_2$ by adding the choreography action
from $T_1$, obtaining $T'_2$.
Then $T'_1$ and $T'_2$ differ only in the choreography actions at the top, which can be exchanged
by one of the precongruence rules for CC, as in the previous example.
\end{proof}
The converse also holds: if $N$ can be extracted to a choreography, then it can be extracted to any
structurally congruent choreography.
\begin{lemma}
\label{lem:extract-equiv}
If $\extract N \rwtot C_1$ and $C_1\precongr[] C_2$, then $\extract N \rwtot C_2$.
\end{lemma}
\begin{proof}
By induction on the derivation of $C_1\precongr[] C_2$.
If this derivation consists of a single step, then it is an application of one of the rules in
Figure~\ref{fig:cc_precongr}, and that rule cannot be \rname{C}{Unfold}.
In that case, the thesis holds immediately.
Otherwise, the thesis follows from the induction hypothesis.
\end{proof}
There is one important design option to consider when extracting a choreography from a process
implementation: what to do with actions that cannot be matched, i.e., processes that get stuck.
There are two alternatives: restrict extraction to lock-free networks (networks where all processes
eventually progress, in the sense of~\cite{CDM14}), so that it becomes a partial relation;
or extract stuck processes to a new choreography term $\dlock$, with the same semantics as $\nil$.
We choose the latter option for debugging reasons.
Specifically, practical applications of extraction may annotate $\dlock$ with the code of the
deadlocked processes, giving the programmer a chance to see exactly where the system is unsafe and
to attempt to fix it manually.
Better yet: since the code to unlock deadlocked processes in process calculi can be efficiently
synthesised~\cite{CDM14}, our method may be integrated with the technique in~\cite{CDM14} to suggest
an automatic system repair.
\begin{remark}
If $\extract N \rwto C$ and $C$ does not contain $\dlock$, then $N$ is lock-free.
However, even if $C$ contains $\dlock$, $N$ may still be lock-free: the code causing the deadlock
may be dead code in a conditional branch that is never chosen during execution.
Other kinds of liveness issues, e.g., livelocks and starvation, are not possible in finite SP, but will be relevant later when dealing with recursion.
\end{remark}
In order to relate a network with its extracted choreography, we use the standard notion of bisimilarity, noting that transition labels for choreographies and networks are the same.
\begin{definition}
A binary relation $\mr$ between choreographies and networks is a \emph{bisimulation} if:
\begin{itemize}
\item If $C \mr N$ and $C \lto[]\mu C'$, then there exists $N'$ such that $N \lto[]\mu N'$ and $C' \mr N'$.
\item If $C \mr N$ and $N \lto[]\mu N'$, then there exists $C'$ such that $C \lto[]\mu C'$ and $C' \mr N'$.
\end{itemize}
$C$ is \emph{bisimilar} to $N$, written $C \bisim N$, if there exists a bisimulation $\mr$ such that $C \mr N$.
\end{definition}
Extraction is sound: it yields a choreography that is bisimilar to the original network.
Also, for finite SP, it behaves as an inverse of EPP.
\begin{theorem}
\label{thm:correct}
Let $N$ be a finite SP. Then:
\begin{enumerate}[(i)]
\item If there exists a choreography $C$ such that $\extract N\rwto C$, then $C\bisim N$.
\item If $N=\epp C$ for some choreography $C$, then $\extract N\rwto C$.
\end{enumerate}
\end{theorem}
\begin{proof}\mbox{}
\begin{enumerate}[(i)]
\item We show that the relation $\mr$ defined by $C \mr N$ if $\extract N \rwto C$ is a bisimulation by induction on the size of $C$.
We detail one representative case.
Suppose that $C$ is $\gencom;C'$, whence $N$ is of the form
$\actor[]p{\bsend qe;B_p}\parp\actor[]q{\brecv px;B_q}\parp N'$.
The case when either $C$ or $N$ reduces by making a reduction labelled by $\lcom pvq$ is
trivial, since both $C,\sigma\lto[\emptyset]{\lcom pvq}C',\sigma'$ and
$N,\sigma\lto[]{\lcom pvq}\actor[]p{B_p}\parp\actor[]q{B_q}\parp N',\sigma'$ (assuming that
$\eval e\sigma pv$), and the latter network extracts to $C'$.
Suppose that $C,\sigma\lto[\emptyset]\lambda C'',\sigma'$ for some other label $\lambda$.
Due to the way structural congruence is defined, and since there are no procedure definitions,
it follows that also $C',\sigma\lto[\emptyset]\lambda C''',\sigma'$, and that
$C''\precongr[\emptyset]\gencom;C'''$.
Since $\actor[]p{B_p}\parp\actor[]q{B_q}\parp N'\rwto C'$, by the induction hypothesis,
$\actor[]p{B_p}\parp\actor[]q{B_q}\parp N',\sigma\lto[]\lambda\actor[]p{B_p}\parp\actor[]q{B_q}\parp N'',\sigma'$;
but $\lambda$ cannot involve $\pid p$ or $\pid q$, so also
$\actor[]p{\bsend qe;B_p}\parp\actor[]q{\brecv px;B_q}\parp N',\sigma\lto[]\lambda\actor[]p{\bsend qe;B_p}\parp\actor[]q{\brecv px;B_q}\parp N'',\sigma'$.
The latter network extracts to $\gencom;C'''$, and Lemma~\ref{lem:extract-equiv} allows us to
conclude that $\actor[]p{\bsend qe;B_p}\parp\actor[]q{\brecv px;B_q}\parp N''\rwto C''$.
The case where $N,\sigma\lto[]\lambda N'',\sigma'$ is similar, using Lemma~\ref{lem:sound-fin}.
\item By structural induction on $C$.
We detail one representative case.
Suppose that $C$ is $\gencom;C'$.
Then $\epp C$ can be written as $\actor[]p{\bsend qe;B_p}\parp\actor[]q{\brecv px;B_q}\parp N'$,
where $\epp{C'}=\actor[]p{B_p}\parp\actor[]q{B_q}\parp N'$, and the thesis follows trivially by
the induction hypothesis.\qedhere
\end{enumerate}
\end{proof}
As we show later, the second part of this theorem does not hold in the presence of recursive
definitions.
The definition of $\rwto$ is convenient for finite SP: it is simple and easy to analyse.
However, when we add the possibility of infinite behaviour, it will in general not be the case that
a network can be rewritten to a choreography in finitely many steps.
Therefore, we now restate extraction by means of constructing and analysing a particular graph.
This alternative method, which is the hallmark of our development, is easily seen to be equivalent
to the previous definition -- but it can be extended to the whole language of SP.
We start by introducing an abstract semantics for networks, $N \lto\alpha N'$, defined as in
Figure~\ref{fig:sp_semantics} with the following two differences: (i)~the state $\sigma$ is removed,
and (ii)~the rules for value communication and conditionals are replaced by those in
Figure~\ref{fig:asp_semantics}.
In particular, conditionals are nondeterministic in this semantics.
Labels $\alpha$ in the abstract semantics are like $\lambda$, but the labels for communications now
contain expressions and the variable for storing the result (see the new rule \rname{S}{Com}); in
all omitted rules, the label is the same as before.
We write $N\lmto{\til\alpha} N'$ for $N\lto{\alpha_1}\cdots\lto{\alpha_n}N'$.
\begin{figure}
\begin{eqnarray*}
&\infer[\rname{S}{Com}]{
\actor[{\procs[p]}]p{\bsend qe;B_1} \parp
\actor[{\procs[q]}]q{\brecv px;B_2}
\lto[]{\com peqx}
\actor[{\procs[p]}]p{B_1} \parp
\actor[{\procs[q]}]q{B_2}
}{}
\\[1ex]
&\infer[\rname{S}{Then}]{
\actor[{\procs[p]}]p{\bcond e{B_1}{B_2}}
\lto[]{\lthen pe}
\actor[{\procs[p]}]p{B_1}
}{}
\\[1ex]
&\infer[\rname{S}{Else}]{
\actor[{\procs[p]}]p{\bcond e{B_1}{B_2}}
\lto[]{\lelse pe}
\actor[{\procs[p]}]p{B_2}
}{}
\end{eqnarray*}
\caption{Abstract semantics for Stateful Processes. Besides these rules, the semantics includes all other rules in Figure~\ref{fig:sp_semantics} with the state removed.}
\label{fig:asp_semantics}
\end{figure}
\begin{definition}
\label{defn:extr-graph}
Let $N$ be a network.
The \emph{Abstract Execution Space (AES)} of $N$ is the directed graph obtained by considering all
possible abstract reduction paths from $N$.
Its vertices are all the networks $N'$ such that $N\lmto{\til\alpha} N'$, and there is an edge
between two vertices $N_1$ and $N_2$ labelled $\alpha$ if $N_1\lto\alpha N_2$.
A \emph{Symbolic Execution Graph (SEG)} for $N$ is a subgraph of its AES that contains $N$ and
such that each vertex $N'\neq\nil$ has either one outgoing edge labelled by an interaction $\eta$ or two
outgoing edges labelled $\lthen pe$ and $\lelse pe$, respectively.
\end{definition}
Intuitively, the AES of $N$ represents all possible evolutions of $N$ (each such evolution is a path
in this graph).
A SEG fixes the order of execution of actions, but still abstracts from the state (and thus
considers both branches of conditionals).
If $N$ is a network in finite SP, these graphs are trivially finite.
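To make the construction concrete, a finite AES can be computed by exhaustively exploring the abstract reductions from the initial network. The following Python sketch is only an illustration of this idea, under a simplified encoding that is our own (networks as frozensets of (process, behaviour) pairs, with only value communications modelled); it is not part of the formal development.

```python
# A network is a frozenset of (process, behaviour) pairs; a behaviour is a
# tuple of actions such as ("send", "q", "e") or ("recv", "p", "x").
def abstract_steps(net):
    """Yield (label, successor) for every enabled value communication."""
    procs = dict(net)
    for p, bp in procs.items():
        if bp and bp[0][0] == "send":
            q = bp[0][1]
            bq = procs.get(q)
            if bq and bq[0][0] == "recv" and bq[0][1] == p:
                label = ("com", p, bp[0][2], q, bq[0][2])  # p sends e, q stores in x
                succ = dict(procs)
                succ[p], succ[q] = bp[1:], bq[1:]
                yield label, frozenset(succ.items())

def build_aes(initial):
    """Exhaustively explore all abstract reduction paths from `initial`."""
    nodes, edges, frontier = {initial}, [], [initial]
    while frontier:
        net = frontier.pop()
        for label, succ in abstract_steps(net):
            edges.append((net, label, succ))
            if succ not in nodes:
                nodes.add(succ)
                frontier.append(succ)
    return nodes, edges
```

Running this on a rendering of network $N_1$ from Example~\ref{ex:non-deterministic} produces a four-node, four-edge graph matching the diamond-shaped AES shown for $N_1$ below.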
\begin{example}
We revisit the networks in Example~\ref{ex:non-deterministic}.
\begin{itemize}
\item Network $N_1$ in the example has the following AES.
\[\xymatrix{
&
\actor[] p{\bsend qe} \parp\actor[]q{\brecv px}
\parp\actor[]r{\bsend s{e'}} \parp\actor[]s{\brecv ry}
\ar[dl]_{\gencom} \ar[dr]^{\com r{e'}sy}
\\
\actor[]r{\bsend s{e'}} \parp\actor[]s{\brecv ry}
\ar[dr]_{\com r{e'}sy}
&&
\actor[]p{\bsend qe} \parp\actor[]q{\brecv px}
\ar[dl]^\gencom
\\
& \nil
}\]
This AES admits two SEGs, namely the two paths from the top node to the bottom node.
Reading the labels along each path, one obtains the two choreographies that can be extracted from
this network.
\item Network $N_2$ illustrates how conditionals are treated.
This network's AES, which coincides with its SEG, is the following.
\[\xymatrix{
& \smallnet{
\actor[]p{\bcond e{\bsel q\lleft;\bsend q1}{\bsel q\lright;\brecv qx}}
\parp
\actor[]q{\bbranch p{\lleft:\brecv py,\ \lright:\bsend p2}}}
\ar[]+D;[dl]+U_{\lthen pe} \ar'[]+D;[dr]+0^{\lelse pe} [dr]+0;[ddr]
\\
\smallnet{
\actor[]p{\bsel q\lleft;\bsend q1} \parp
\actor[]q{\bbranch p{\lleft:\brecv py,\ \lright:\bsend p2}}
} \ar[dd]_{\sel pq\lleft}
&& \\
&& \smallnet{
\actor[]p{\bsel q\lright;\brecv qx} \parp
\actor[]q{\bbranch p{\lleft:\brecv py,\ \lright:\bsend p2}}
} \ar[d]^{\sel pq\lright}
\\
\actor[]p{\bsend q1} \parp \actor[]q{\brecv py} \ar[]+D;[dr]_{\com p1qy}
&& \actor[]p{\brecv qx} \parp \actor[]q{\bsend p2} \ar[]+D;[dl]^{\com q2px}
\\
& \actor[]p\nil \parp \actor[]q\nil
}\]
Again, reading the labels on the edges of this graph, one obtains the choreography
$\cond pe{\left(\sel pq\lleft;\com p1qy\right)}{\left(\sel pq\lright;\com q2px\right)}$, which
describes the global behaviour of the original network.
\item Finally, network $N_3$ gives the following AES.
\[\xymatrix{
& \smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}}
\ar[]+D;[dl]+U_{\lthen re} \ar[]+D;[dd]^{\com p1qx} \ar[]+D;[dr]+U^{\lelse re} \\
\smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\brecv py}} \ar[]+D;[dd]+U_{\com p1qx}
&&
\smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\brecv qy}} \ar[]+D;[dd]+U^{\com p1qx} \\
&
\smallnet[6em]{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}}
\ar[]+D;[dl]_{\lthen re} \ar[]+D;[dr]^{\lelse re}
\\
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv py} \ar[]+D;[d]^{\com p2ry}
&&
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv qy} \ar[]+D;[d]^{\com q3ry}
\\
\actor[]p\nil\parp\actor[]q{\bsend r3}\parp\actor[]r\nil
&&
\actor[]p{\bsend r2}\parp\actor[]q\nil\parp\actor[]r\nil
}\]
There are two SEGs for this AES:
\[\xymatrix{
& \smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}}
\ar[]+D;[dl]+U_{\lthen re} \ar[]+D;[dr]+U^{\lelse re} \\
\smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\brecv py}} \ar[]+D;[d]+U_{\com p1qx}
&&
\smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\brecv qy}} \ar[]+D;[d]+U^{\com p1qx} \\
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv py} \ar[]+D;[d]^{\com p2ry}
&&
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv qy} \ar[]+D;[d]^{\com q3ry}
\\
\actor[]p\nil\parp\actor[]q{\bsend r3}\parp\actor[]r\nil
&&
\actor[]p{\bsend r2}\parp\actor[]q\nil\parp\actor[]r\nil
}\]
and
\[\xymatrix{
& \smallnet{\actor[]p{\bsend q1;\bsend r2}
\parp\actor[]q{\brecv px;\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}}
\ar[]+D;[d]^{\com p1qx} \\
&
\smallnet{\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}
\parp\actor[]r{\bcond e{\brecv py}{\brecv qy}}}
\ar[]+D;[dl]^{\lthen re} \ar[]+D;[dr]_{\lelse re}
\\
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv py} \ar[]+D;[d]^{\com p2ry}
&&
\actor[]p{\bsend r2}\parp\actor[]q{\bsend r3}\parp\actor[]r{\brecv qy} \ar[]+D;[d]^{\com q3ry}
\\
\actor[]p\nil\parp\actor[]q{\bsend r3}\parp\actor[]r\nil
&&
\actor[]p{\bsend r2}\parp\actor[]q\nil\parp\actor[]r\nil
}\]
Both SEGs end in deadlocked networks, in line with the fact that $N_3$ cannot be extracted to a
choreography.
Representing these networks by $\dlock$ and reading the labels on the edges in these graphs
allows us to reconstruct the extracted extended choreographies
$\com p1qx;\cond re{\left(\com p2ry;\dlock\right)}{\left(\com q3ry;\dlock\right)}$ and
$\cond re{\left(\com p1qx;\com p2ry;\dlock\right)}{\left(\com p1qx;\com q3ry;\dlock\right)}$.
\eoe
\end{itemize}
\end{example}
As these examples illustrate, there is a strong connection between these graphs and the previous
definition of extraction: each rule in Definition~\ref{defn:extraction} naturally corresponds to an
edge, except for the first (which characterises terminated networks) and the last (which
characterises deadlocked networks). Therefore,
each particular sequence of steps extracting a choreography corresponds to a SEG, and
conversely.
\begin{lemma}
The AES for any network in finite SP is a directed acyclic graph (DAG).
\end{lemma}
\begin{proof}
Since there are no procedure calls, every reduction strictly decreases the size of the network
(measured by the number of nodes in its abstract syntax tree).
Therefore no network can ever reduce to itself in any number of steps, and as such no AES can
have loops.
\end{proof}
As a consequence, every SEG for a network in finite SP is also a DAG.
\begin{definition}
Let $S$ be a SEG for a network.
The extended choreography body extracted from node $N$, $\extract[S]N$, is defined inductively as
follows.
\begin{itemize}
\item $\extract[S]\nil=\nil$
\item If $N$ has no descendants and $N\neq\nil$, then $\extract[S]N=\dlock$.
\item If $N$ has one descendant $N'$ and the edge from $N$ to $N'$ has label $\eta$, then
$\extract[S]N=\eta;\extract[S]N'$.
\item If $N$ has two descendants $N'$ and $N''$ and the edges from $N$ to those nodes are labelled
$\lthen pe$ and $\lelse pe$, respectively, then
$\extract[S]N=\cond pe{\extract[S]N'}{\extract[S]N''}$.
\end{itemize}
\end{definition}
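The inductive definition above is, in effect, a recursive traversal of the (acyclic) SEG. The Python sketch below renders it under an assumed encoding of our own: a SEG is an adjacency map from nodes to labelled outgoing edges, labels are tuples, and the terminated network is the node \texttt{"nil"}.

```python
def extract(seg, node):
    """Extract a choreography body from `node` of an acyclic SEG.

    `seg` maps each node to a list of (label, target) edges; a label is an
    interaction eta, or ("then", p, e) / ("else", p, e) for conditionals.
    """
    if node == "nil":
        return "nil"                     # terminated network
    edges = seg.get(node, [])
    if not edges:
        return "deadlock"                # stuck, non-terminated network
    if len(edges) == 1:
        (eta, target), = edges
        return ("seq", eta, extract(seg, target))   # eta; extract(target)
    # two outgoing edges: a conditional with a then- and an else-branch
    by_kind = {label[0]: (label, target) for label, target in edges}
    then_label, then_target = by_kind["then"]
    _, else_target = by_kind["else"]
    p, e = then_label[1], then_label[2]
    return ("cond", p, e, extract(seg, then_target), extract(seg, else_target))
```

On an encoding of the SEG for network $N_2$ above, this yields a nested term corresponding to $\cond pe{\left(\sel pq\lleft;\com p1qy\right)}{\left(\sel pq\lright;\com q2px\right)}$ (with selections folded into the interaction labels).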
\begin{example}
The (extended) choreographies informally presented in the previous example correspond exactly to
the (extended) choreographies extracted from the given SEGs.
\eoe
\end{example}
As the examples suggest, this new notion of extraction coincides precisely with the old one.
\begin{lemma}
Let $N$ be a network.
\begin{enumerate}[(i)]
\item If $S$ is a SEG for $N$, then $\extract N\rwtot\extract[S]N$.
\item For every choreography body $C$, if $\extract N\rwtot C$, then there exists a SEG $S$ for
$N$ such that $\extract[S]N=C$.
\end{enumerate}
\end{lemma}
\begin{proof}\mbox{}
\begin{enumerate}[(i)]
\item Straightforward by induction on the definition of $\extract[S]N$, since every case in its
definition corresponds directly to a rule in the definition of $\rwto$.
\item The sequence of reductions in $\extract N\rwtot C$ defines a graph $S$ as follows:
\begin{itemize}
\item an application of the first or last rule does not add anything to the graph;
\item an application of the second or third rule generates an edge from the network on the left
to the reductum network on the right, labelled with the choreography action that is introduced
by the rule;
\item an application of the fourth rule generates two edges from the network on the left to each
reductum network on the right, labelled by the appropriate conditional label.
\end{itemize}
It is immediate to check that $S$ is a SEG for $N$, and that $\extract[S]N=C$.\qedhere
\end{enumerate}
\end{proof}
\subsection{Adding recursion}
\label{sec:recursion}
Formulating extraction in terms of SEGs allows us to extend it to networks with recursive
definitions.
The tricky step is defining the AES: abstract executions of a network can be infinite, and due to
recursion unfolding there are in general infinite possible future states of a network with truly
recursive definitions.
Defining extraction from such infinite graphs would be problematic already, since choreographies are
finite; furthermore, we are interested in \emph{computing} extracted choreographies, which requires
at least building a SEG.
To ensure finiteness, we restrict the applications of rule~\rname{S}{Unfold} in the abstract
semantics (Figure~\ref{fig:asp_semantics}).
\begin{enumerate}[(i)]
\item Rule \rname{S}{Unfold} can only be applied inside a derivation occurring in the first premise
of rule~\rname{S}{Struct}.
\item If rule~\rname{S}{Unfold} is applied to process $\pid p$ inside a derivation proving
$N\precongr[] M$, then $N(\pid p)$ is a procedure call.
\item If rule~\rname{S}{Unfold} is applied to process $\pid p$ inside a derivation using rule
\rname{S}{Struct}, then process $\pid p$ appears in the label $\lambda$ of the reduction.
\end{enumerate}
In other words: we only allow unfolding recursive definitions in order to execute a reduction that
would otherwise not be enabled.
With these restrictions, the AES and SEGs for a network are defined as in the finite case.
However, these graphs no longer need to be DAGs, since a network may evolve into itself after some
reductions.
\begin{example}
\label{ex:aes}
Consider the network
\[
\actor p{\bsend qe;X}
\parp
\actor qY
\parp
\actor rZ
\]
where procedures $X$, $Y$, and $Z$ are defined at $\pid p$, $\pid q$, and $\pid r$, respectively, as
\begin{align*}
X &= \bsend qe;\bbranch q{\lleft:\bsend qe;X,\ \lright:\nil} \\
Y &= \brecv px;\brecv px;\brecv ry; \bcond{(x=y)}{\bsel p\lleft;Y}{\bsel p\lright;\nil} \\
Z &= \bsend q{e'};Z
\end{align*}
This network generates the AES in Figure~\ref{fig:aes}.
Since execution of this network is deterministic, the same graph is also its SEG.
\eoe
\end{example}
\begin{figure}[t]
\[\xymatrix@R=1em{
& \makebox[0cm][c]{$\actor p{\bsend qe;X} \parp \actor qY \parp \actor rZ$}
\ar[d]^{\com peqx}
\\
& \makebox[0cm][c]{$\actor pX \parp \actor q{\brecv px;\brecv ry;\bcond{(x=y)}{\bsel p\lleft;Y}{\bsel p\lright;\nil}} \parp \actor rZ$}
\ar[d]^{\com peqx}
\\
& {\begin{array}c \displaystyle\actor p{\bbranch q{\lleft:\bsend qe;X,\ \lright:\nil}} \parp {} \\
\displaystyle \actor q{\brecv ry;\bcond{(x=y)}{\bsel p\lleft;Y}{\bsel p\lright;\nil}} \parp \actor rZ
\end{array}}
\ar[d]^{\com r{e'}qy}
\\
& {\begin{array}c \displaystyle\actor p{\bbranch q{\lleft:\bsend qe;X,\ \lright:\nil}} \parp {} \\
\displaystyle \actor q{\bcond{(x=y)}{\bsel p\lleft;Y}{\bsel p\lright;\nil}} \parp \actor rZ
\end{array}}
\ar '[]+L `l[ddl]_{\lthen q{(x=y)}} [ddl]
\ar[d]^{\lelse q{(x=y)}}
\\
& \makebox[8em][c]{$\actor p{\bbranch q{\lleft:\bsend qe;X,\ \lright:\nil}} \parp \actor q{\bsel p\lright;\nil} \parp \actor rZ$}
\ar[d]^{\sel qp\lright}
\\
\makebox[6em][c]{$\actor p{\bbranch q{\lleft:\bsend qe;X,\ \lright:\nil}} \parp \actor q{\bsel p\lleft;Y} \parp \actor rZ$}
\ar '[]+UL `u[uuuuur]-<6em,0em>^{\sel qp\lleft} [uuuuur]-<6em,0em>
& \makebox[0em][c]{$\actor p\nil \parp \actor q\nil \parp \actor rZ$}
}\]
\caption{The AES and SEG for the network in Example~\ref{ex:aes}.}
\label{fig:aes}
\end{figure}
The key insight to define extraction in this case is that the definitions of recursive procedures
are extracted from the loops in the SEG, rather than from the recursive definitions in the source
network.
\begin{definition}
\label{def:DAG-ification}
Let $S$ be a SEG for a network $N$.
A \emph{loop node} is a node $n$ in $S$ such that: (i)~$n$ has more than one incoming edge or
(ii)~$n$ is the initial node labelled $N$ and $n$ has at least one incoming edge.
The \emph{DAG-ification} of $S$ is the graph $S^D$ defined as follows.
\begin{itemize}
\item The nodes of $S^D$ are all the nodes of $S$ together with new nodes $X_n$ for each loop node
$n$.
\item For each edge in $S$ from $n$ to $n'$, $S^D$ contains one edge with source $n$ and target $X_{n'}$,
if $n'$ is a loop node, and source $n$ and target $n'$, otherwise.
\end{itemize}
\end{definition}
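A direct reading of Definition~\ref{def:DAG-ification} gives the following Python sketch, again under a hypothetical graph encoding of our own; the pair \texttt{("X", n)} stands for the fresh node $X_n$.

```python
def dagify(seg, initial):
    """DAG-ify a SEG: each loop node n keeps its outgoing edges, while its
    incoming edges are redirected to a fresh leaf ("X", n), standing for X_n."""
    indegree = {}
    for edges in seg.values():
        for _, target in edges:
            indegree[target] = indegree.get(target, 0) + 1
    loop_nodes = {n for n, d in indegree.items()
                  if d > 1 or (n == initial and d >= 1)}
    dag = {}
    for node, edges in seg.items():
        dag[node] = [(label, ("X", target) if target in loop_nodes else target)
                     for label, target in edges]
    for n in loop_nodes:
        dag[("X", n)] = []   # fresh node X_n, with no outgoing edges
    return dag, loop_nodes
```

For instance, a SEG consisting of a single self-loop on the initial node is split into an edge from the initial node to the fresh leaf, which is exactly the shape from which a recursive procedure definition is read off.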
\begin{lemma}
Graph $S^D$ is a DAG.
\end{lemma}
\begin{proof}
Suppose there is a cycle $n_1,n_2,\ldots,n_k=n_1$ in $S$.
If one of $n_1,\ldots,n_k$ is the initial node, then by construction this cycle is no longer a
path in $S^D$.
Otherwise, one of these nodes must have at least two incoming edges (since all nodes are
reachable from the initial node), and again the cycle is not a path in $S^D$.
\end{proof}
From the root node of each connected component of $S^D$, we can extract a choreography as before,
adding the rule $\extract[S^D]{X_n}=X_n$ where $X_n$ is a procedure name.
\begin{definition}
\label{defn:extr-unsound}
The choreography extracted from $N$, $\extract[S]N$, is defined as follows.
\begin{itemize}
\item The set of procedure definitions is
$\{\rec{X_{n'}}{\extract[S^D]{n'}}\mid n'\mbox{ is a loop node in $S$}\}$.
\item The main choreography is $X_n$, if the starting node $n$ is a loop node, and
$\extract[S^D]N$, otherwise.
\end{itemize}
\end{definition}
\begin{example}
Consider the SEG in Figure~\ref{fig:aes}.
To extract a choreography, we split the topmost node into two nodes; the new node is labelled with
a procedure identifier $X$, which is the target of the upward arrow in the figure.
Thus, $X$ is extracted to
\[ \com peqx;\com peqx;\com r{e'}qy;\cond q{(x=y)}{\sel qp\lleft;X}{\sel qp\lright;X} \]
and the extracted choreography itself is simply $X$.
The body of $X$ is not projectable (the branches for $\pid r$ are not mergeable,
cf.~\cite{CM20}), but it faithfully describes the behaviour of the original network.
\eoe
\end{example}
The procedure in Definition~\ref{defn:extr-unsound} always terminates, but sometimes
it extracts incomplete choreographies that lack some behaviours from the original network.
We illustrate the possible problems with some examples.
\begin{example}
\label{ex:starve}
Consider the network $N$ defined as $\actor pX \parp \actor qY \parp \actor rZ \parp \actor sW$,
where the recursive procedures $X$, $Y$, $Z$, and $W$ are as follows.
\[
\rec X{\bsend qe;X} \qquad
\rec Y{\brecv px;Y} \qquad
\rec Z{\bsend s{e'};Z} \qquad
\rec W{\brecv ry;W}
\]
The AES for $N$ is:
\[\xymatrix{
\actor pX\parp\actor qY\parp\actor rZ\parp\actor sW
\ar@(d,l)[]+L^{\com peqx} \ar@(d,r)[]+R_{\com r{e'}sy}
}\]
There are two SEGs for this AES:
\[\xymatrix{
\actor pX\parp\actor qY\parp\actor rZ\parp\actor sW
\ar@(d,l)[]+L^{\com peqx}
& \text{ and } &
\actor pX\parp\actor qY\parp\actor rZ\parp\actor sW
\ar@(d,r)[]+R_{\com r{e'}sy}
}\]
which extract to choreographies consisting of a call to procedure $X$, defined as
$\rec X{\com peqx;X}$ and $\rec X{\com r{e'}sy;X}$, respectively, neither of which captures all the behaviours of~$N$.
\eoe
\end{example}
\begin{example}
\label{ex:finite}
A similar situation may occur if there are processes with finite behaviour (no procedure calls):
the network $\actor pX \parp \actor qY \parp \actor r{\bsend s{e'}} \parp \actor s{\brecv ry}$
where $\rec X{\bsend qe;X}$ and $\rec Y{\brecv px;Y}$ can be extracted to the choreography $Z$,
with $\rec Z{\com peqx;Z}$, where $\pid r$ and $\pid s$ never communicate.
\eoe
\end{example}
Both these examples exhibit a form of starvation: there is a loop involving some processes that can
reduce (as can be seen in the AES), but they are not allowed to do so in a particular SEG.
Example~\ref{ex:starve} is particularly relevant, since there is no SEG where all involved processes
reduce.
In order to avoid such situations, we change the definitions of AES and SEG slightly.
We annotate all processes in networks with either $\circ$ (unmarked) or $\bullet$ (marked).
In the initial network, all processes are unmarked.
Processes are marked when they are involved in a reduction; the marking is reset when all processes
are marked.
To make this formal, we extend the semantics to annotated networks as follows.
Let $N$ and $N'$ be annotated networks, and $N^-$ and ${N'}^-$ be the underlying networks obtained
by erasing the annotations.
Then $N\lto\alpha N'$ if:
\begin{itemize}
\item $N^-\lto\alpha{N'}^-$;
\item all processes in $N'$ are unmarked iff all unmarked processes in $N$ appear in $\alpha$;
\item otherwise, a process is marked in $N'$ iff it is marked in $N$ or it appears in $\alpha$.
\end{itemize}
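This marking discipline amounts to a small update function on annotations. The sketch below is our own rendering: `marks` maps each process name to a boolean (marked or not), and `involved` is the set of processes occurring in the label $\alpha$; the two branches implement the last two bullet cases.

```python
def update_marks(marks, involved):
    """Update process markings after a reduction involving `involved`.

    If the reduction marks the last unmarked processes, the marking resets;
    otherwise, the involved processes become (or stay) marked.
    """
    unmarked = {p for p, marked in marks.items() if not marked}
    if unmarked <= set(involved):
        return {p: False for p in marks}          # reset: all unmarked again
    return {p: (marked or p in involved) for p, marked in marks.items()}
```

On the network of Example~\ref{ex:starve}, performing $\com peqx$ marks $\pid p$ and $\pid q$; a subsequent $\com r{e'}sy$ involves all remaining unmarked processes, so the marking resets, matching the annotated AES shown below.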
\begin{definition}
\label{defn:valid-seg}
A SEG for a network $N$ is \emph{valid} if all its loops include a node where all processes are
unmarked.
A network $N$ extracts to a choreography $C$ if $C$ can be constructed (as in
Definition~\ref{defn:extr-unsound}) from a valid SEG for $N$.
\end{definition}
In a valid SEG, every process is guaranteed to reduce at least once inside every loop.
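Validity is checkable with standard graph machinery: every loop passes through an all-unmarked node exactly when deleting those nodes leaves an acyclic graph. The following Python sketch (an observation of ours, with a hypothetical adjacency-map encoding; `all_unmarked` is the set of nodes whose network has no marked process) checks this by depth-first search.

```python
def is_valid_seg(seg, all_unmarked):
    """Check that every cycle of `seg` passes through an all-unmarked node,
    i.e. that the subgraph avoiding those nodes is acyclic (via DFS colouring)."""
    sub = {n: [t for _, t in seg.get(n, []) if t not in all_unmarked]
           for n in seg if n not in all_unmarked}
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in sub}

    def acyclic_from(n):
        colour[n] = GREY
        for t in sub.get(n, []):
            if colour.get(t, BLACK) == GREY:
                return False               # back edge: a cycle avoiding unmarked nodes
            if colour.get(t) == WHITE and not acyclic_from(t):
                return False
        colour[n] = BLACK
        return True

    return all(colour[n] != WHITE or acyclic_from(n) for n in sub)
```

On the annotated AES of Example~\ref{ex:starve}, a loop that returns to the all-unmarked initial node passes the check, while a self-loop on a partially marked node fails it, which is precisely why those self-loops are discarded when forming valid SEGs.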
\begin{example}
The AES for the annotated network $N$ in Example~\ref{ex:starve} is:
\[\xymatrix@R=1em{
&\makebox[2em][c]{$\actor{p^\circ}X\parp\actor{q^\circ}Y\parp\actor{r^\circ}Z\parp\actor{s^\circ}W$}
\ar@/^/[dl]_(.4){\com peqx}
\ar@/_/[dr]^(.4){\com r{e'}sy}
\\
\makebox[10em][c]{$\actor{p^\bullet}X\parp\actor{q^\bullet}Y\parp\actor{r^\circ}Z\parp\actor{s^\circ}W$}
\ar@/^/[ur]+DL+<-5.7em,.7em>^{\com r{e'}sy}
\ar@(dr,dl)[]+DL_{\com peqx}
&&\makebox[10em][c]{$\actor{p^\circ}X\parp\actor{q^\circ}Y\parp\actor{r^\bullet}Z\parp\actor{s^\bullet}W$}
\ar@/_/[ul]+DR+<5.7em,.7em>_{\com peqx}
\ar@(dl,dr)[]+DR^{\com r{e'}sy}
}\]
This AES now has the following two SEGs:
\[\xymatrix@R=1em{
\actor{p^\circ}X\parp\actor{q^\circ}Y\parp\actor{r^\circ}Z\parp\actor{s^\circ}W
\ar@/^/[d]^{\com peqx}
&&
\actor{p^\circ}X\parp\actor{q^\circ}Y\parp\actor{r^\circ}Z\parp\actor{s^\circ}W
\ar@/^/[d]^{\com r{e'}sy}
\\
\actor{p^\bullet}X\parp\actor{q^\bullet}Y\parp\actor{r^\circ}Z\parp\actor{s^\circ}W
\ar@/^/[u]^{\com r{e'}sy}
&&
\actor{p^\circ}X\parp\actor{q^\circ}Y\parp\actor{r^\bullet}Z\parp\actor{s^\bullet}W
\ar@/^/[u]^{\com peqx}
}\]
Observe that the self-loops from the AES are discarded because they do not go through a node where
all processes are unmarked.
From these SEGs, we can extract two definitions for $X$:
\[
\rec{X}{\com peqx;\com r{e'}sy;X}
\qquad\mbox{ and }\qquad
\rec{X}{\com r{e'}sy;\com peqx;X}
\]
and both of these definitions correctly capture all behaviours of the network.
\eoe
\end{example}
Validity implies, however, that there are some non-deadlocked networks that are not extractable,
such as $\actor pX\parp \actor qY\parp \actor rZ$ where $\rec X{\bsend qe;X}$, $\rec Y{\brecv px;Y}$
and $\rec Z{\brecv py;Z}$, for which there is no valid SEG.
This is to be expected, since deadlock-freedom is undecidable in SP.
In practice, there are situations where livelocks are acceptable, namely in the presence of a service that is designed to be used only when necessary. In Section~\ref{sec:impl-services} we briefly discuss how to deal with such cases.
\subsection{Soundness and completeness}
Since extraction ignores the definition of procedures, it is simple to find counterexamples to the
second part of Theorem~\ref{thm:correct}.
\begin{example}
Consider the very simple choreography \[\chorterm{X:=\gencom;\gencom;X}{\gencom;X}\,.\]
Its projection is the network
\[
\actor p{\procterm{X:=\bsend qe;\bsend qe;X}{\bsend qe;X}}
\parp
\actor q{\procterm{X:=\brecv px;\brecv px;X}{\brecv px;X}}
\]
which extracts to the choreography $\chorterm{X:=\gencom;\gencom;X}X$.
\eoe
\end{example}
We show that the first part of this result still holds, from which it follows that the analogue of
Lemma~\ref{lem:sound-fin} also applies.
This proof is divided into several steps.
Throughout this section, let $N$ be a network, $\chorterm\procs C$ be the choreography
extracted from $N$ for a particular SEG $\mathcal G$, and $\sigma$ be a state.
We consider the (possibly infinite) sequences $\{\lambda_i\}_{i\in I}$, $\{C_i\}_{i\in I}$, and
$\{\sigma_i\}_{i\in I}$ defined as:
\begin{itemize}
\item $C_0=C$;
\item $\sigma_0=\sigma$;
\item for each $i$, $\lambda_i$ is the label of the reduction executing the head action in $C_i$
(the only action that can be executed without applying any of the structural congruence rules
other than~\rname{C}{Unfold});
\item for each $i$, $C_{i+1}$ and $\sigma_{i+1}$ are the only choreography and state such that
$C_i,\sigma_i\lto{\lambda_i}C_{i+1},\sigma_{i+1}$;
\item $I=\{0,\ldots,n\}$ if $C_n$ is $\nil$ for some $n$, and $\mathbb N$ otherwise.
\end{itemize}
Observe that $C_{i+1}$ and $\sigma_{i+1}$ are well-defined, since the semantics of CC completely
determines these terms given $C_i$, $\sigma_i$, and $\lambda_i$.
\begin{lemma}
There exists a sequence $\{n_i\}_{i\in I}$ in $\mathcal G$ such that there is an edge
$n_i\lto[]{\lambda'_i}n_{i+1}$, where $\lambda'_i$ is the label corresponding to $\lambda_i$ in
the abstract semantics for SP.
\end{lemma}
\begin{proof}
By induction on $i$.
We take $n_0$ to be the starting node in the construction of $\mathcal G$.
By construction of $C$, for each $i$ there must be an outgoing edge from $n_i$ labelled with $\lambda'_i$,
and we define $n_{i+1}$ as the target of that edge.
\end{proof}
\begin{lemma}
There exists a sequence of networks $\{N_i\}_{i\in I}$ such that $N_0=N$ and
$N_i,\sigma_i\lto[]{\lambda_i}N_{i+1},\sigma_{i+1}$.
\end{lemma}
\begin{proof}
We prove by induction that $N_0,\sigma_0\lmto{\lambda_0\ldots\lambda_{i-1}}N_i,\sigma_i$, where
$N_i$ is the network labelling the node $n_i$ defined in the previous lemma.
This trivially holds for $i=0$.
Now assume by induction hypothesis that it holds for $i-1$.
Given how SEGs are constructed, $\lambda'_i$ is an abstraction of an action that $N_{i-1}$ can
execute, and the only possible action corresponding to it is $\lambda_i$ (since the details
missing in the abstraction are uniquely defined by $\sigma_{i-1}$, and they coincide for
choreographies and networks).
The semantics of SP guarantees that there exist unique $N'$ and $\sigma'$ such that
$N_{i-1},\sigma_{i-1}\lto[]{\lambda_i}N',\sigma'$.
Since the abstract and concrete semantics act in the same way on networks, $N'=N_i$; and an
inspection of the rules for the semantics of CC and SP establishes that $\sigma'=\sigma_i$.
\end{proof}
\begin{lemma}
\label{lem:reduces-iff}
For every $i\in I$ and reduction label $\lambda$, $C_i,\sigma_i$ can execute a reduction labelled
by $\lambda$ iff $N_i,\sigma_i$ can execute a reduction labelled by $\lambda$.
\end{lemma}
\begin{proof}
Assume that $C_i,\sigma_i\lto\lambda C',\sigma'$ for some $C'$ and $\sigma'$.
Let $j\geq i$ be the minimal index such that $\lambda$ and $\lambda_j$ share process names.
Since structural precongruence can only exchange actions that do not share process names and the
semantics of CC only allows one action for each process at each point, it immediately follows that
$\lambda=\lambda_j$.
Furthermore, since no action in $\lambda_i,\ldots,\lambda_{j-1}$ shares process names with
$\lambda$, it follows that the behaviour of the processes involved in $\lambda$ is unchanged in
$N_i,\ldots,N_j$ and that $\sigma_i(\pid p)=\sigma_j(\pid p)$ for every such process $\pid p$.
Since $N_j,\sigma_j$ can execute $\lambda$ and the conditions for executing an action are local to
the processes involved in that action, this implies that $N_i,\sigma_i\lto[]\lambda N',\sigma'$
for some network $N'$ -- the same argument as in previous proofs implies that the resulting state
must be $\sigma'$.
Now assume that $N_i,\sigma_i\lto[]\lambda N',\sigma'$ for some $N'$ and $\sigma'$.
Since the processes involved in $\lambda$ cannot participate in any other reductions, $\lambda$
is enabled in all nodes of $\mathcal G$ until an edge labelled by its abstract counterpart is
traversed -- in other words, $N_i,\ldots,N_j$ can all execute $\lambda$ for the least $j\geq i$
such that $\lambda=\lambda_j$.
Furthermore, such a $j$ must exist due to the fairness conditions imposed by
Definition~\ref{defn:valid-seg}: since $\mathcal G$ is a valid SEG, either execution of $N_i$
terminates (in which case $\lambda$ must have been executed) or there is a loop in the SEG, and
every process in the network must reduce at least once inside that loop (and again $\lambda$ must
be executed in that loop).
Since $\lambda$ shares no process names with any actions in $\lambda_i,\ldots,\lambda_{j-1}$, it
also follows that $C_i$ can execute it, and as before the resulting state must be $\sigma'$.
We thus conclude that $C_i,\sigma_i\lto\lambda C',\sigma'$ for some $C'$.
\end{proof}
The next lemma is a property of CC not directly related to extraction.
\begin{lemma}
\label{lem:chor-perm}
Let $\til\lambda'=\lambda'_0,\ldots,\lambda'_j$ be a (finite) sequence of reduction labels such
that $C,\sigma\lcmto{\til\lambda'}C',\sigma'$.
Then there exist $n\in\mathbb N$ and a permutation $\pi:\{0,\ldots,n\}\to\{0,\ldots,n\}$ such that
$\lambda'_i=\lambda_{\pi(i)}$ for $i=0,\ldots,j$.
Furthermore, $\pi$ can be obtained by repeatedly transposing consecutive pairs of labels that
share no process names.
\end{lemma}
\begin{proof}[Proof (sketch).]
This result is a corollary of the proof of confluence of CC from~\cite{CMP21a}, although it
has not been stated in this form before.
Confluence is proved by first showing that executing two independent actions in any order always
yields the same result.
This is extended by induction to sequences of actions, where the inductive case is split
according to whether both sequences start with the same action.
The current lemma follows from observing that we can choose a large enough $n$ such that all
actions in $\til\lambda'$ occur in $\lambda_0,\ldots,\lambda_n$.
Unfolding the proof of confluence as described above iteratively applies a transposition of
consecutive independent actions to $\lambda_0,\ldots,\lambda_n$, until this sequence starts with
$\lambda'_0,\ldots,\lambda'_j$.
The composition of these transpositions yields the permutation $\pi$.
\end{proof}
\begin{lemma}
\label{lem:net-perm}
Let $\til\lambda'=\lambda'_0,\ldots,\lambda'_j$ be a (finite) sequence of reduction labels such
that $N,\sigma\lmto{\til\lambda'}N',\sigma'$.
Then there exist $n\in\mathbb N$ and a permutation $\pi:\{0,\ldots,n\}\to\{0,\ldots,n\}$ such that
$\lambda'_i=\lambda_{\pi(i)}$ for $i=0,\ldots,j$.
Furthermore, $\pi$ can be obtained by repeatedly transposing consecutive pairs of labels that
share no process names.
\end{lemma}
\begin{lemma}
\label{lem:perm-red}
Let $\til\lambda'=\lambda'_0,\ldots,\lambda'_j$ be a prefix of any sequence of reduction labels
obtained by repeatedly transposing consecutive elements of $\lambda$ that share no process names.
Then there exist a choreography $C'$, a network $N'$ and a state $\sigma'$ such that
$C,\sigma\lto{\til\lambda'}C',\sigma'$ and $N,\sigma\lto[]{\til\lambda'}N',\sigma'$.
Furthermore, the actions that $C'$ and $N'$ can execute coincide.
\end{lemma}
\begin{proof}
By induction on the number of transpositions applied.
If this number is~$0$, then this is simply Lemma~\ref{lem:reduces-iff}.
Assume by induction hypothesis that the thesis holds for $\{\lambda'_i\}_{i\in I}$ obtained by
applying $n$ transpositions to consecutive actions in $\lambda$, and suppose that $\lambda'_i$ and
$\lambda'_{i+1}$ share no process names.
Note that the thesis holds for the sequence obtained by swapping these two labels for any
$j\neq i$: for $j<i$ the sequence $\lambda'$ is unchanged, while confluence ensures that the
result of executing $\lambda'_0,\ldots,\lambda'_i,\lambda'_{i+1}$ coincides with the result of
executing $\lambda'_0,\ldots,\lambda'_{i+1},\lambda'_i$ for both $C$ and $N$.
But as observed before, a reduction does not change the possible actions of processes not involved in it.
Since $\lambda'_i$ and $\lambda'_{i+1}$ do not share any such processes, if
$C,\sigma\lto{\lambda'_0,\ldots,\lambda'_{i-1},\lambda'_{i+1}}C'',\sigma''$, then the executable
actions in $C''$ are those that were already available in the previous step, together with any
actions unblocked by $\lambda'_{i+1}$.
Furthermore, the latter actions remain unchanged after executing $\lambda'_i$.
A similar reasoning applies to the executable actions in $N''$, where
$N,\sigma\lto[]{\lambda'_0,\ldots,\lambda'_{i-1},\lambda'_{i+1}}N'',\sigma''$.
Since the sets of executable actions before and after executing $\lambda'_{i+1}$ and $\lambda'_i$
coincide, the actions executable by $C''$ and $N''$ are defined in the same way, and therefore
must also coincide.
\end{proof}
\begin{theorem}
\label{thm:oc-ac-ap}
If $C$ is a choreography extracted from a network $N$, then $N\bisim C$.
\end{theorem}
\begin{proof}
Let $N$ be a network, $C$ be a choreography extracted from $N$, and $\sigma$ be a state.
Define a relation $\mathcal R\subseteq\mathcal C\times\mathcal N$, where
$\mathcal C=\{C'\mid C,\sigma\to^\ast C',\sigma'\mbox{ for some $\sigma'$}\}$ and
$\mathcal N=\{N'\mid N,\sigma\to^\ast N',\sigma'\mbox{ for some $\sigma'$}\}$, as follows:
$C'\mathcal R N'$ if $C,\sigma\lcmto{\til\lambda'}C',\sigma'$ and
$N,\sigma\lmto{\til\lambda'}N',\sigma'$ for some sequence of actions $\til\lambda'$.
We show that $\mathcal R$ is a bisimulation.
Assume that $C'\mathcal R N'$.
Then there exists a sequence of actions $\til\lambda'$ such that
$C,\sigma\lcmto{\til\lambda'}C',\sigma'$ and $N,\sigma\lmto{\til\lambda'}N',\sigma'$.
By Lemma~\ref{lem:chor-perm}, $\til\lambda'$ can be obtained from $\til\lambda$ by repeatedly
permuting two consecutive independent actions and taking an initial segment of the result.
By Lemma~\ref{lem:perm-red}, the actions that $C'$ and $N'$ can execute are therefore the same.
For each such action $\alpha$, we can again apply Lemmas~\ref{lem:chor-perm}
and~\ref{lem:perm-red} to the sequence $\til\lambda';\alpha$ to conclude that, if
$C',\sigma'\lto\alpha C'',\sigma''$, then there exists $N''$ such that
$N',\sigma'\lto[]\alpha N'',\sigma''$; conversely, if $N',\sigma'\lto[]\alpha N'',\sigma''$, then
applying Lemmas~\ref{lem:net-perm} and~\ref{lem:perm-red} yields that
$C',\sigma'\lto\alpha C'',\sigma''$ for some $C''$.
\end{proof}
\section{Implementation}
\label{sec:impl}
We now describe an implementation of the algorithm presented in
Section~\ref{sec:extraction}, with emphasis on the interesting
technical details. The main challenge is computing a valid SEG for
the input network efficiently, or determining in reasonable time that
none exists; we follow the idea, given previously, of lazily expanding
the relevant parts of the AES until a valid SEG is found or we can
safely conclude that none exists.
\subsection{Overview}
The extraction algorithm is implemented in a depth-first manner, starting with a single node (the
initial network, properly annotated), on which we call a method, \code{buildGraph}, graphically
described in Figure~\ref{fig:buildGraph}.
This method builds a list of all actions that the network can execute: a communication (of either a
value or a label) between two processes, or the execution of a conditional at a process.
This list includes actions that require unfolding procedure calls.
Then, the method tries to complete the SEG assuming that the first action in the list is executed,
returning \code{true} if this succeeds.
If this step fails, the next action in the list is considered.
If no action leads to success, \code{buildGraph} returns \code{false}.
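For illustration, the control flow of \code{buildGraph} can be sketched as follows. This is a simplified Python rendition (the actual implementation is in Java); the successor and termination helpers are abstractions introduced here, and validity of the closed loops is checked separately.

```python
def build_graph(state, successors, is_terminal, path=()):
    """Simplified sketch of buildGraph: depth-first search with backtracking.

    `successors(state)` lists the executable actions together with the
    resulting state; the search succeeds if execution terminates or closes
    a loop on a node of the current path (loop *validity* is omitted here)."""
    if is_terminal(state):
        return True
    path = path + (state,)
    for action, nxt in successors(state):
        if nxt in path:                 # closing a loop on an existing node
            return True
        if build_graph(nxt, successors, is_terminal, path):
            return True
    return False                        # no action leads to success
```

On failure of one action, the loop simply moves on to the next one, mirroring the backtracking behaviour described above.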
\begin{figure}
\input{buildGraph.pdf_tex}
\caption{Graphical depiction of \code{buildGraph}.}
\label{fig:buildGraph}
\end{figure}
Actions are processed by two different methods, depending on their type.
In the case of communications, method \code{buildCommunication} (see
Figure~\ref{fig:buildCommunication}) computes the network resulting from executing the action, and
checks whether there exists a node in the graph containing it.
In the affirmative case, it checks whether adding an edge to that node creates a valid loop; if so,
the edge is added and the method returns \code{true}; otherwise, the method returns \code{false}.
If no such node exists, a fresh node is added with an edge to it from the current node, and
\code{buildGraph} is called recursively on the newly created node.
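A minimal sketch of this logic in Python follows; the graph is a plain dictionary and the helpers for executing an action, checking loop validity, and recursing are passed in as parameters (all names are hypothetical, not taken from the actual implementation).

```python
def build_communication(graph, node, action, execute, is_valid_loop, recurse):
    """Execute `action` from `node`; close a loop if a node with the
    resulting network already exists (and the loop is valid), otherwise
    create a fresh node and continue building the SEG from there."""
    target = execute(node, action)
    if target in graph["nodes"]:                  # a node with this network exists
        if is_valid_loop(graph, node, target):
            graph["edges"].append((node, action, target))
            return True
        return False                              # invalid loop: report failure
    graph["nodes"].append(target)                 # fresh node
    graph["edges"].append((node, action, target))
    return recurse(graph, target)                 # continue from the new node
```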
\begin{figure}
\input{buildCommunication.pdf_tex}
\caption{Graphical depiction of \code{buildCommunication}.}
\label{fig:buildCommunication}
\end{figure}
The case of conditionals is more involved, since two branches need to be created successfully.
Method \code{buildConditional} (Figure~\ref{fig:buildConditional}) starts by treating the
\code{then} case, much as described above, except that in case of success (by closing a loop or by
building a new node and receiving \code{true} from the recursive invocation of \code{buildGraph}) it
does not return, but moves to the \code{else} branch.
If this branch also succeeds, the method returns \code{true}; if it fails, then it returns
\code{false} and deletes all edges and nodes created in the \code{then} branch from the graph: this
step is essential for soundness of the method deciding loop validity (see
Section~\ref{sec:bad-loops}).
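The rollback behaviour can be sketched as follows (Python, with a toy graph structure; the branch builders stand in for the recursive calls of the real implementation):

```python
class Graph:
    """Toy graph supporting the snapshot/rollback used by buildConditional."""
    def __init__(self):
        self.nodes, self.edges = set(), set()
    def snapshot(self):
        return (set(self.nodes), set(self.edges))
    def restore(self, snap):
        self.nodes, self.edges = set(snap[0]), set(snap[1])

def build_conditional(graph, build_then, build_else):
    """Both branches must succeed; if the else branch fails, everything the
    then branch added is removed again (needed for sound loop-validity
    checking)."""
    snap = graph.snapshot()
    if not build_then(graph):
        return False
    if not build_else(graph):
        graph.restore(snap)     # delete nodes and edges created for 'then'
        return False
    return True
```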
\begin{figure}
\input{buildConditional.pdf_tex}
\caption{Graphical depiction of \code{buildConditional}.}
\label{fig:buildConditional}
\end{figure}
Edges created by \code{buildCommunication} and \code{buildConditional}
are as in Definition~\ref{defn:extr-graph}. In the network(s) in
target node(s), we unfold exactly those procedure calls necessary for
the action labelling the edge to be executed and update the
annotations.
If the main call to \code{buildGraph} returns \code{true}, the graph
created is a valid SEG for the given network. We then proceed to
computing a choreography according to
Definition~\ref{defn:extr-unsound}. Method \code{unrollGraph} is
called to identify and split nodes corresponding to procedure calls.
Finally, we extract the main choreography and all procedure
definitions from the relevant nodes recursively by reading the edges
of the SEG as an abstract syntax tree (AST; method
\code{buildChoreographyBody}).
\subsection{Recognising bad loops}
\label{sec:bad-loops}
The critical part of \code{buildGraph} is deciding when a loop can be
closed. Definition~\ref{defn:valid-seg} requires all paths that form
a loop to include a node where all processes are unmarked. Checking
this directly is extremely inefficient, as it requires retraversing a
large part of the graph; instead, we reduce this problem to list
membership. In order to do this, we enhance the graph structure in
several ways, so that nodes are no longer simply networks. We describe
each addition below.
\paragraph{Choice-free networks.}
\label{paragraph-choice-free}
In order to structure our explanation of the method, we first consider
the simplified case where processes do not use the conditional operator.
We construct the SEG iteratively by maintaining a
set of unexplored nodes. Whenever an unexplored node is examined,
the possible reductions lead to new terms, and, by keeping all
created nodes in a search structure, we can determine with a simple
lookup if we have created a network that already exists in the graph
we have built so far, and get a reference to that node in the SEG.
Thus, instead of recreating the node, we form a loop.
When a loop is formed, we need to check whether it contains an all-white node,
which we handle as follows.
Since we stop our search and start backtracking when we discover a loop,
we conceptually have a path from the start node to our
current node at all times, and the path behaves in a stack-like manner.
We introduce an explicit stack as an auxiliary data structure.
Each node on the current path has a pointer to its entry on the stack.
An item on the stack contains a counter of how many white nodes can be found
further down on the stack. This information can easily be maintained
as we push and pop elements in connection with
running the backtracking algorithm.
When we encounter a loop, we follow the pointer to the node's associated
stack item and check the counter,~$c$. The loop just found has at least one
white node if and only if the counter of the top item on the stack
is strictly greater than~$c$.
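This bookkeeping can be sketched with a small Python class. This is a simplification of the actual data structure; as one deliberate deviation, the top node's own colour is also taken into account when the loop is checked.

```python
class PathStack:
    """Mirror of the current DFS path; each entry records how many all-white
    (fully unmarked) nodes occur strictly below it on the stack."""
    def __init__(self):
        self.whites_below = []   # the counter c for each entry
        self.is_white = []       # whether the entry's node is all-white
        self.index = {}          # node -> position on the stack

    def push(self, node, all_white):
        below = 0
        if self.whites_below:
            below = self.whites_below[-1] + int(self.is_white[-1])
        self.whites_below.append(below)
        self.is_white.append(all_white)
        self.index[node] = len(self.whites_below) - 1

    def pop(self, node):
        self.whites_below.pop()
        self.is_white.pop()
        del self.index[node]

    def loop_has_white(self, target):
        """The loop is valid iff an all-white node occurs between `target`
        and the top of the stack (inclusive)."""
        c = self.whites_below[self.index[target]]
        whites_in_loop = self.whites_below[-1] - c + int(self.is_white[-1])
        return whites_in_loop > 0
```

Both push/pop maintenance and the loop check take constant time, as used in the complexity analysis below.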
\paragraph{Choice paths.}
\label{paragraph-choice-paths}
The soundness of the strategy described above relies on the fact that,
while building the graph, no new edges are added between existing
nodes that make it possible to close a loop bypassing the edge where
the marking was erased. This is automatically guaranteed when a
communication action is selected (the corresponding node only has one
outgoing reduction), but not in the case of conditionals.
Using the method outlined in Section~\ref{paragraph-choice-free},
following a path from an \code{else} branch, we may arrive at a node
somewhere on the \code{then} branch. Basing our decision about loop
validity solely on the counter may then give an incorrect result: we
may enter the \code{then} branch at a point after the all-white node,
so the counters would indicate that the loop is valid even though, in
fact, no all-white node was encountered on the \code{else} branch.
To avoid this problem, we restrict edge creation so that we can only
add an edge to an existing node if that node is a ``predecessor'' of
the current node with respect to conditionals, i.e., it was not
generated while expanding a different branch of a conditional
statement. We do this by annotating each node with a \emph{choice
path}: a string that represents the sequence of conditional branches
on which the node depends. The initial node has an empty choice path,
nodes generated from a communication action inherit their parent's
choice path, and nodes generated from a conditional get their parent's
choice path appended with a $0$ or $1$ for \code{then} and \code{else}
branch, respectively.
We use choice paths in our algorithm for building a SEG
(\code{buildGraph}) as follows. Whenever \code{buildGraph} checks
whether a node with the target network already exists in the graph, we
now additionally require the node with the target network to have a
choice path that is a prefix of the current node's (the node on which
\code{buildGraph} has been invoked); otherwise, we proceed as if no
such node exists (and create a new node).
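As a small illustration (Python, with hypothetical helper names), choice paths are strings over $\{0,1\}$ and the check reduces to a prefix test:

```python
def child_choice_path(parent_path, kind):
    """Choice path of a node generated from its parent: communications
    inherit the parent's path, conditional branches extend it by one bit."""
    if kind == "communication":
        return parent_path
    return parent_path + ("0" if kind == "then" else "1")

def may_add_edge(existing_path, current_path):
    """An edge to an existing node is allowed only if that node's choice
    path is a prefix of the current node's choice path."""
    return current_path.startswith(existing_path)
```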
\subsection{Well-formedness}
\label{sec:impl-wellformed}
Until this point, we have assumed networks to be well-formed (Section~\ref{sec:networks}).
While most networks that are not well-formed are not extractable (some processes are deadlocked and
therefore there is no valid SEG), this may still take a long time to detect.
Therefore, we have added an initial check that the network we are trying to extract is well-formed
(before calling \code{buildGraph}), and immediately fail in case it is not.
Having this check also allows us to assume that the network is well-formed throughout the remainder
of execution, which is relevant for some later optimisations.
\subsection{Guardedness of procedure calls}
Many previous works on process calculi require procedure calls to be \emph{guarded} (preceded by a
communication action or a conditional), in order to avoid situations such as $\procterm{X=X}{X}$.
Our language has no such restriction; however, by definition, a network containing a process whose
behaviour unfolds infinitely to a procedure call has no valid SEG: such a process would be either
livelocked in a loop or non-terminated in a leaf.
To detect these situations, our implementation includes a preprocessing check to ensure that no definition of a
procedure accessible from the main behaviour of a process can unfold to a self-call.
For example, we \emph{do} allow $\procterm{X=X}{\nil}$ and
$\procterm{\{X=Y,Y=\bsend{\pid q};X\}}{X}$, but not $\procterm{\{X=Y,Y=X\}}{X}$.
Having this check again simplifies the construction of the graph, since we know in advance that
we cannot enter an infinite loop by repeatedly unfolding a behaviour until we meet an action.
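This preprocessing check can be sketched as follows (Python). Procedure bodies are abstracted to a flag saying whether they start with an action, plus the procedure they call next, if any; this representation is a simplification introduced here for illustration.

```python
def passes_guardedness_check(defs, main):
    """defs maps a procedure name to (guarded, callee): `guarded` is True if
    the body starts with an action, and `callee` is the procedure invoked
    afterwards (or None).  `main` is the procedure invoked by the process's
    main behaviour (or None).  Returns True iff no reachable procedure can
    unfold to a self-call through unguarded calls only."""
    # procedures reachable from the main behaviour through any call
    reachable, frontier = set(), ({main} if main in defs else set())
    while frontier:
        p = frontier.pop()
        reachable.add(p)
        _, callee = defs[p]
        if callee in defs and callee not in reachable:
            frontier.add(callee)
    # follow chains of unguarded calls, rejecting cycles
    for start in reachable:
        p, seen = start, set()
        while p in defs:
            guarded, callee = defs[p]
            if guarded:
                break               # an action guards the next call
            if p in seen:
                return False        # unfolds forever without ever acting
            seen.add(p)
            p = callee
    return True
```

On the examples above: $X=X$ with main behaviour $\nil$ is accepted ($X$ is unreachable), $\{X=Y,Y=\bsend{\pid q};X\}$ is accepted ($Y$ is guarded), and $\{X=Y,Y=X\}$ is rejected.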
\subsection{Complexity}
Before discussing optimisations and performance on test cases, we end
this section with a discussion of the worst-case computational
complexity of our method. The starting point is the size of the AES
for an annotated network.
\begin{lemma}
\label{lem:size-aes}
The AES for an annotated network of size $n$ has at most
$e^{\frac{2n}{e}}$ vertices.
\end{lemma}
\begin{proof}
Let $N$ be a network with $p$ processes of sizes $n_1$ through
$n_p$, where the size of a process is the number of nodes in an
abstract syntax tree representing the syntactical term. Let
$n=\sum_{i=1}^p n_i$ denote the size of $N$.
Since recursive definitions are unfolded only when they occur at the
top of a behaviour, a process of size $n_i$ can give rise to at most
$n_i$ different terms when all possible reductions are considered.
Thus, $N$ can reduce to at most $\prod_{i=1}^p n_i$ different terms.
Since the reductions give rise to the edges in the graph, this is
also an upper bound on the number of edges, so the graph is sparse.
By the AM-GM inequality, $\prod_{i=1}^p n_i$ is maximised when all
the $n_i$ are equal, where it evaluates to
$\left(\frac{n}{p}\right)^p$.
We now consider annotations. Since each process is either marked or
unmarked, there are at most $2^p$ annotations for each network,
giving a total upper bound of $2^p(\frac{n}{p})^p=(\frac{2n}{p})^p$
different nodes in the AES. This expression attains its maximum
when $p=\frac{2n}{e}$, giving the upper bound of $e^{\frac{2n}{e}}$
nodes in the AES.
\end{proof}
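The maximisation step can be sanity-checked numerically; the snippet below (Python, for one choice of $n$) verifies that $2^p(\frac{n}{p})^p=(\frac{2n}{p})^p$ never exceeds $e^{\frac{2n}{e}}$ over all integer $p$, with the maximum attained near $p=\frac{2n}{e}$.

```python
import math

def aes_node_bound(n, p):
    """Upper bound on AES nodes for p processes of total size n."""
    return (2 * n / p) ** p

n = 30
bounds = [aes_node_bound(n, p) for p in range(1, n + 1)]
# the maximum over p stays below e^(2n/e); here 2n/e ~ 22.07, so the
# integer maximiser is p = 22
assert max(bounds) <= math.exp(2 * n / math.e)
assert max(range(1, n + 1), key=lambda p: aes_node_bound(n, p)) == 22
```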
Next we consider the extraction of the enhanced SEG from the AES.
\begin{theorem}
\label{thm:extract-complex}
Extraction from a network of size $n$ with $c$ conditionals terminates in time $O(2^c n e^{\frac{2n}{e}})$.
\end{theorem}
\begin{proof}
For networks without conditionals, we build a graph with at most
$e^{\frac{2n}{e}}$ nodes, as outlined in
Section~\ref{paragraph-choice-free} and bounded in
Lemma~\ref{lem:size-aes}. However, adding the choice path as part of
the node identity, as outlined in
Section~\ref{paragraph-choice-paths}, increases the number of possible
different nodes by a factor of $2^c$, representing all
possible choice paths, where $c$ is the number of conditionals in
the network -- which is of course (much) smaller than~$n$. Looking up
a node to check whether it is new has worst-case time complexity
logarithmic in the size of the set of nodes, using any standard dictionary
implementation, i.e., $O(\log(2^ce^{\frac{2n}{e}}))\subseteq O(n)$,
plus a check for term identity, which is also $O(n)$. Clearly
maintaining the auxiliary stack takes constant time in each step,
and with the stack available, we can check for bad loops in constant
time as well. Thus, the overall time complexity is
$O(n 2^c e^{\frac{2n}{e}})$.
\end{proof}
As mentioned in the introduction, our method avoids the
factorial time complexity of previous work. Exponential time is
better than factorial, but we may perform even better in
practice. Algorithmically, all the required work stems from traversals
of the AES, so any reduction in its (explored) size will lead to
proportional runtime improvements. We point out that in the
algorithms proposed above, instead of first computing
the entire AES and then a valid SEG, we compute the relevant parts
of the AES lazily as we need them. Thus, parts of the AES that are never
explored while computing a valid SEG are never generated.
\section{Extensions and optimisations}
\label{sec:optim}
We now discuss some extensions and optimisations to the original algorithm.
Some of these changes aim at making the implementation more efficient in situations that may
occur often enough to warrant consideration; others extend the domain of extractable choreographies,
and were motivated by practical applications.
\subsection{Parallelisation}
\label{sec:impl-parallel}
In our first testing phase (see Section~\ref{sec:eval}), we took the benchmarks from~\cite{LTY15}
and wrote them as networks.
This translation was done by hand, ensuring that the network represented the same protocol as the
communicating automata in the original work.
Of these, three benchmarks (\emph{alternating 2-bit}, \emph{alternating 3-bit} and \emph{TPMContract})
were not implemented.
The first uses group communications, an extension described in~\cite{CLM17} that is not implemented;
the last two require local threads, and are not representable in our formalism.
(We discuss this in the conclusions.)
Several benchmarks are parallel compositions of two instances of the same network.
They exhibit a very high degree of parallelism, visibly slowing down extraction.
However, very simple static analysis can easily improve performance in such instances.
We define the network's communication graph as the undirected graph whose nodes are processes, and
where there is an edge between $\pid p$ and $\pid q$ if they ever interact.
The connected components of this graph can be extracted independently, and the choreographies
obtained composed in parallel at the end.
Theoretically, this requires adding a parallel composition constructor at the top level of a
choreography, which is straightforward.
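The preprocessing step amounts to computing connected components of the communication graph, for instance with a union-find structure. A sketch in Python (the representation of processes and interactions is hypothetical):

```python
def communication_components(processes, interactions):
    """Connected components of the communication graph: processes are the
    nodes, and every pair that ever interacts contributes an undirected
    edge.  Each component can be extracted independently and the resulting
    choreographies composed in parallel."""
    parent = {p: p for p in processes}

    def find(p):                        # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    for p, q in interactions:
        parent[find(p)] = find(q)
    components = {}
    for p in processes:
        components.setdefault(find(p), set()).add(p)
    return sorted(components.values(), key=sorted)
```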
In practice, this trivial preprocessing drastically reduces the computation time: for the (very
small) benchmarks from~\cite{LTY15}, doubling the size of the network already corresponds to an
increase in computation time of up to $35$ times, while splitting the network in two and extracting
each component in sequence keeps this factor under $2$, since the independent components can be
extracted in parallel.
We report our empirical evaluation in Table~\ref{tab:nobuko}.
These numbers are purely indicative: due to the very small size of these examples, we did not
attempt to make a very precise evaluation.
We measured the extraction time for all benchmarks, and computed the ratio between each benchmark
containing a duplicate network and the non-duplicated one.
This was done with the original, sequential, algorithm, and with the parallelised one.
All execution times were averaged over three runs.
The values themselves are not directly comparable to those from~\cite{LTY15}, since the network
implementations are substantially different, but the ratios show the advantages of our approach:
even without parallelisation, our ratios are substantially lower, in line with the better
asymptotic complexity of our method shown in~\cite{CLM17}.
(Note that the examples from~\cite{LTY15} where the ratio is lowest are the smallest ones, where the
execution time is dominated by the setup and command-line invocation of the different programs
used.)
\begin{table}
\centering
\caption{Empirical evaluation of the effect of parallelising extraction (all times in ms).}
\label{tab:nobuko}
\begin{tabular}{l@{\hspace{1em}}rrr@{\hspace{1em}}rrr@{\hspace{1em}}rrr}
\toprule
Test name & \multicolumn3c{sequential} & \multicolumn3c{parallel} & \multicolumn3c{from~\cite{LTY15}} \\
& single & double & ratio & single & double & ratio & single & double & ratio \\ \midrule
Bargain &
1.7 & 10.0 & \bf5.88 &
3.0 & 3.7 & \bf1.23 &
103 & 161 & \bf1.56 \\
Cloud system &
8.3 & 83.0 & \bf10.0 &
8.6 & 8.3 & \bf0.96 &
140 & 432 & \bf3.08 \\
Filter collaboration &
4.0 & 123.3 & \bf30.83 &
5.0 & 4.7 & \bf0.93 &
118 & 178 & \bf1.51 \\
Health system &
6.0 & 80.3 & \bf13.39 &
7.3 & 11.7 & \bf1.59 &
17 & 1702 & \bf100.12 \\
Logistic &
1.0 & 34.7 & \bf34.70 &
5.3 & 16.7 & \bf3.14 &
276 & 2155 & \bf7.81 \\
Running example &
7.7 & 143.3 & \bf18.61 &
5.7 & 6.7 & \bf1.17 &
184 & 22307 & \bf121.23 \\
Sanitary agency &
6.0 & 61.0 & \bf10.17 &
8.0 & 7.3 & \bf0.92 &
241 & 3165 & \bf13.13 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Extraction strategies}
\label{sec:impl-strategy}
The performance of our implementation depends on the choice of the network action, in cases where
there are several possible options: expanding a communication generates one descendant node, but
expanding a conditional generates two descendant nodes that each need to be processed.
On the other hand, if the choreography contains cyclic behaviour, different choices of actions may
impact the size of the extracted loops (and thus also execution time).
In order to control these choices, we define \emph{execution strategies}: heuristics that guide the
choice of the next action to pick.
Strategies either take into account the syntactic type of the action (e.g., prioritise interactions)
or the semantics of bad loops (prioritise unmarked processes), or combine them with different
priorities (prioritise unmarked processes and break ties by preferring interactions).
We also include a basic strategy that picks a random action.
All strategies are implemented in the same way: in \code{buildGraph}, we order the list of possible
actions of the network in the current node according to the chosen criterion.
Our implementation includes the following strategies.
The abbreviations in parentheses are used in captions of graphics.
\begin{description}
\item[\texttt{Random (R)}] Choose a random action.
\item[\texttt{LongestFirst (L)}] Prioritise the process with the largest body.
\item[\texttt{ShortestFirst (S)}] Prioritise the process with the smallest body.
\item[\texttt{InteractionsFirst (I)}] Prioritise interactions.
\item[\texttt{ConditionalsFirst (C)}] Prioritise conditionals.
\item[\texttt{UnmarkedFirst (U)}] Prioritise actions involving unmarked processes.
\item[\texttt{UnmarkedThenInteractions (UI)}] Prioritise actions involving unmarked processes, and
as secondary criterion prioritise interactions.
\item[\texttt{UnmarkedThenSelections (US)}] Prioritise unmarked processes, as a secondary criterion
prioritise selections, and afterwards value communications.
\item[\texttt{UnmarkedThenConditionals (UC)}] Prioritise unmarked processes, and as secondary
criterion prioritise conditionals.
\item[\texttt{UnmarkedThenRandom (UR)}] Prioritise unmarked processes, in random order.
\end{description}
We remark that \texttt{UnmarkedFirst} and \texttt{UnmarkedThenRandom} are different strategies:
\texttt{UnmarkedFirst} does not distinguish among actions involving unmarked processes, so they come
in the order of the processes involved in the network.
By contrast, \texttt{UnmarkedThenRandom} chooses randomly from the list of possible actions, in
principle contributing towards more fairness among processes.
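Internally, each strategy amounts to a sort key over the list of executable actions; a simplified sketch of a few of them in Python (the action fields are hypothetical):

```python
STRATEGIES = {
    "InteractionsFirst": lambda a: a["type"] != "communication",
    "ConditionalsFirst": lambda a: a["type"] != "conditional",
    "UnmarkedFirst":     lambda a: not a["unmarked"],
    "UnmarkedThenInteractions":
        lambda a: (not a["unmarked"], a["type"] != "communication"),
}

def order_actions(actions, strategy):
    """Reorder the executable actions according to the chosen heuristic;
    Python's sort is stable, so ties keep their original (network) order,
    mirroring the behaviour of UnmarkedFirst described above."""
    return sorted(actions, key=STRATEGIES[strategy])
```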
From the results in the next section, we see that \texttt{LongestFirst} and \texttt{ShortestFirst}
perform significantly worse than all other strategies, while \texttt{Random} and
\texttt{UnmarkedFirst} in general give the best results.
However, we remark that comparing the performances of different strategies was not an objective of
this work, as it would require a dedicated test suite.
We leave it as interesting future work.
\subsection{Livelocks}
\label{sec:impl-services}
Several examples in~\cite{LTY15} include processes that offer a service, and as such may be inactive throughout a part (or the whole) of execution.
This is the case in our Example~\ref{ex:aes}: process $\pid r$ provides a value to $\pid q$ whenever it is needed, but $\pid q$ might stop requesting values during execution.
Our extraction algorithm does not allow for this behaviour: when a loop is closed, every process must either be terminated or reduce inside the loop.
In order to allow for services, we added a parameter to the extraction method containing a list of
services (in our example, $\pid r$), which are not required to reduce inside loops.
Intuitively, we ignore the annotations in these processes when deciding whether a loop is valid.
In the implementation, these processes are marked initially, and are not unmarked when the marking
is erased.
\subsection{Clever backtracking}
Our strategy of building the SEG in a depth-first fashion requires that, on failure, we backtrack
and explore different possible actions.
This leads to a worst-case behaviour where all possible execution paths need to be explored, in the
case that no choreography can be extracted from the original network.
However, a closer look at why a particular branch leads to deadlock allows us to avoid backtracking
in some instances: network execution is confluent, so if we reach a deadlocked state, then every
possible execution reaches such a state, and extraction must fail.
It is only when extraction fails because of attempting to close an invalid loop that backtracking is
required.
To implement this refinement, the return type of all methods that try to build an edge of the SEG
was changed to a $3$-element set.
If a method succeeds, it returns \m{ok} (corresponding to \code{true}); if it fails due to reaching
a deadlock, it returns \m{fail} (corresponding to \code{false}); and if it fails due to trying to
close an invalid loop, it returns \m{badloop}.
In recursive calls, these values are treated as follows:
\begin{itemize}
\item if the caller is processing a communication or the \code{else} branch of a conditional, they
are propagated upwards;
\item if the caller is processing the \code{then} branch of a conditional, \m{fail} and \m{badloop}
are propagated upwards, while \m{ok} signals that the \code{else} branch can now be treated.
\end{itemize}
For \code{buildGraph}, a method call returning \m{ok} or \m{fail} is also propagated upwards, while
\m{badloop} signals that a different possible action should be tried.
If all possible actions return \m{badloop}, then \code{buildGraph} returns \m{fail}.
This is sound: due to confluence, any action that could have been executed before that would make it
possible to close a loop from this node can also be executed from this node.
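The refined control flow of \code{buildGraph} under this three-valued scheme can be sketched as follows (Python; a simplified rendition of the implementation):

```python
from enum import Enum

class Result(Enum):
    OK = "ok"            # corresponds to the old `true`
    FAIL = "fail"        # a deadlocked state was reached
    BADLOOP = "badloop"  # an invalid loop was attempted

def build_graph_3(actions, try_action):
    """OK and FAIL are returned immediately -- by confluence, a deadlock on
    one path means extraction must fail everywhere -- while BADLOOP makes
    buildGraph try the next possible action."""
    for action in actions:
        result = try_action(action)
        if result in (Result.OK, Result.FAIL):
            return result
    return Result.FAIL   # every possible action produced BADLOOP
```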
This optimisation is crucial to get a practical implementation in the case of \emph{unextractable}
networks.
Most of the failure tests (Section~\ref{sec:testing-bad}) did not terminate before this change,
while they now fail in time comparable to that of success.
\section{Practical evaluation}
\label{sec:eval}
In order to evaluate the performance of our implementation, we developed a three-stage plan.
\begin{enumerate}[{Phase }1.]
\item We focused on the test cases from~\cite{LTY15}, in order to ensure that our tool covered at
least those cases.
Since these cases are simple, we verified their correctness by hand.
\item We generated 1050 random choreographies and their projections by varying four different
parameters (see details below), and applied our tool to the projected networks.
In this way, we tested whether we can extract networks that are direct projections of
choreographies -- these should correspond to the majority of (extractable) practical applications.
Soundness can be checked by testing that the extracted choreography is bisimilar to the original
one.
\item We proposed a model for the typical changes (correct or incorrect) introduced when a
programmer modifies a process directly, and tried to extract choreographies from the resulting
networks.
This yielded information about how quickly our program fails when a network is unextractable; as a
side result, we also got information about how often some types of protocol errors can slip through
undetected, that is, the network is still extractable, but it implements a different protocol than the
original.
\end{enumerate}
We deliberately did \emph{not} generate any networks directly.
We claim that such tests are not very meaningful for two reasons: first, they do not correspond to
realistic scenarios; second, randomly generated networks are nearly always unextractable.
We believe our test suite is comprehensive enough to model most situations with practical relevance.
All tests reported in this section were performed on a computer running Arch Linux, kernel version 5.14.8, with an AMD Ryzen 9 3950x as CPU and 50 GB RAM as available memory for the Java Virtual Machine.
\subsection{Comparison with the literature}
Our first testing phase used the benchmarks from~\cite{LTY15}.
As described in Section~\ref{sec:impl-parallel}, the networks corresponding to those examples were
written by hand.
These tests were done simply as a proof-of-concept, as their simplicity means that the measured
execution times are extremely imprecise.
As discussed earlier, three test cases were not implementable; all others succeeded.
The results (using strategy \texttt{InteractionsFirst}) are reported in Table~\ref{tab:nobuko}.
\subsection{Reverse projection}
In the second phase, we generate large-scale tests to check the scalability of our implementation.
Our tests consist of randomly-generated choreographies characterised by four parameters: number of
processes, total number of actions, number of those actions that should be conditionals, and a number
of procedures.
Then, we generate ten choreographies for each set of parameters as follows: first, we determine how
many actions and conditionals each procedure definition (including \code{main}) should have by
uniformly partitioning the total number of actions and conditionals.
Then we generate the choreography sequentially by randomly choosing the type of the next action so
that the probability distribution of conditional actions within each procedure body is uniform.
For each action, we randomly choose the process(es) involved, again with uniform distribution, and
assign fresh values to any expressions or labels involved.
At the end, we randomly choose whether to end with termination or a procedure call.
Finally, we apply rules for swapping conditionals (rules \rname{C}{Cond-Eta} and \rname{C}{Cond-Cond} from Figure~\ref{fig:cc_precongr}) to obtain inefficient representations of choreographies where code is duplicated in both branches of a conditional. (This actually increases the number of conditionals in a choreography from at most $50$ to over $8000$ in some cases.)
This method may generate choreographies with dead code (if some procedures are never called).
Therefore there is a post-check that determines whether every procedure is reachable from
\code{main} (possibly dependent on the results of some conditional actions); if this is not the
case, the choreography is rejected, and a new one is generated.
A randomly generated choreography with conditional actions is typically unprojectable, so we amend
it (see~\cite{CM20}) to make it projectable.
In general, this increases the size of the choreography.
Finally, we apply projection to obtain the networks for our second test suite.
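The partitioning and reachability steps of this generation pipeline can be sketched in Python (an illustration only; the function names \code{partition\_uniform} and \code{reachable\_procedures}, and the representation of the call graph as a dictionary, are our own simplifications, not the actual implementation):

```python
import random

def partition_uniform(total, parts, rng):
    """Assign each of `total` items to one of `parts` bins uniformly at
    random, e.g. splitting actions among procedure definitions."""
    counts = [0] * parts
    for _ in range(total):
        counts[rng.randrange(parts)] += 1
    return counts

def reachable_procedures(calls, start="main"):
    """Procedures reachable from `start`, where `calls` maps each
    procedure name to the names it may call (in either branch of any
    conditional)."""
    seen, stack = set(), [start]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(calls.get(name, ()))
    return seen
```

A generated choreography is rejected when \code{reachable\_procedures} does not cover all of its procedure definitions, matching the post-check described above.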
\begin{table}
\caption{Parameters for the choreographies generated for testing.}
\label{tab:chor-params}
\centering
\begin{tabular}{ccrrrrr}
\toprule
Test set & parameter & size & processes & \code{if}s & \code{def}s & \#\ tests \\ \midrule
size & $k\in[1..42]$ & $50k$ & $6$ & $0$ & $0$ & $420$ \\
processes & $k\in[1..20]$ & $500$ & 5$k$ & $0$ & $0$ & $200$ \\
ifs (finite) & $k\in[1..4]$ & $50$ & $6$ & $10k$ & 0 & $40$ \\
ifs (varying procedures) & $\tuple{j,k}\in[0..5]\times[0..3]$ & $200$ & $5$ & $j$ & 5$k$ & $240$ \\
procedures (fixed ifs) & $k\in[1..15]$ & $20$ & $5$ & $8$ & $k$ & $150$ \\ \midrule
total &&&&&& $1050$ \\ \bottomrule
\end{tabular}
\end{table}
The parameters for generation are given in Table~\ref{tab:chor-params}.
The upper bounds were determined by our hardware limitations.
Four of the generated files contained tests that were too large to extract, and were removed from
the final test set.
\paragraph{Results.}
We report on the most interesting tests.
The first test shows that, predictably, for choreographies consisting of only communications, the
extraction time is nearly directly proportional to the network size (with a small overhead
from needing to work with larger objects), except when using strategies that need to compute the
size of each process term.
We could enrich the networks with this information in order to make these strategies more efficient,
but since they perform poorly in general, we did not pursue this approach.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of actions,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=x]
table[x=numberOfActions,y=time(msec)-LongestFirst,col sep=tab]{stats-comms-only.csv};
\addlegendentry{\texttt{L}}
\addplot[only marks,mark=square]
table[x=numberOfActions,y=time(msec)-ShortestFirst,col sep=tab]{stats-comms-only.csv};
\addlegendentry{\texttt{S}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of actions,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=+]
table[x=numberOfActions,y=time(msec)-InteractionsFirst,col sep=tab]{stats-comms-only.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfActions,y=time(msec)-Random,col sep=tab]{stats-comms-only.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\caption{Execution time vs.\ length for networks consisting only of communications.
The omitted strategies essentially perform as \texttt{InteractionsFirst}, since there are no
other types of actions and no recursive procedures.}
\label{fig:results-test-1}
\end{figure}
The second test is similar, but varying the number of processes (which makes for a greater number of
possible actions at each step) while keeping the size constant.
Our results show that execution time grows linearly with the number of processes for
\texttt{InteractionsFirst} and \texttt{Random}.
The behaviour of \texttt{LongestFirst} and \texttt{ShortestFirst} is more interesting, as the time for computing the length of the behaviours dominates for small numbers of processes.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of processes,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=x]
table[x=numberOfProcesses,y=time(msec)-LongestFirst,col sep=tab]{stats-increasing-processes.csv};
\addlegendentry{\texttt{L}}
\addplot[only marks,mark=square]
table[x=numberOfProcesses,y=time(msec)-ShortestFirst,col sep=tab]{stats-increasing-processes.csv};
\addlegendentry{\texttt{S}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of processes,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=+]
table[x=numberOfProcesses,y=time(msec)-InteractionsFirst,col sep=tab]{stats-increasing-processes.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfProcesses,y=time(msec)-Random,col sep=tab]{stats-increasing-processes.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\caption{Execution time vs.\ number of processes, with constant total number of actions.}
\label{fig:results-test-2}
\end{figure}
The third test introduces conditionals.
Our results show that execution time varies with the \emph{total} number of conditionals in the
network, rather than with the number of conditionals in each process.
Figure~\ref{fig:results-test-3} (left) exhibits the worst-case exponential behaviour of our
algorithm, and also suggests that delaying conditionals is in general a better strategy.
Figure~\ref{fig:results-test-3} (right) shows the number of nodes created in the SEG, illustrating
that execution time is not directly proportional to this value.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Total \#ifs in network,
ylabel=Time (msec),
legend pos=north west,
scaled x ticks=false
]
\addplot[only marks,mark=x]
table[x=numberOfConditionals,y=time(msec)-LongestFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{L}}
\addplot[only marks,mark=square]
table[x=numberOfConditionals,y=time(msec)-ConditionsFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{C}}
\addplot[only marks,mark=+]
table[x=numberOfConditionals,y=time(msec)-InteractionsFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfConditionals,y=time(msec)-Random,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Total \#ifs in network,
ylabel=Nodes created,
legend pos=north west,
scaled x ticks=false
]
\addplot[only marks,mark=x]
table[x=numberOfConditionals,y=nodes-LongestFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{L}}
\addplot[only marks,mark=square]
table[x=numberOfConditionals,y=nodes-ConditionsFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{C}}
\addplot[only marks,mark=+]
table[x=numberOfConditionals,y=nodes-InteractionsFirst,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfConditionals,y=nodes-Random,col sep=tab]{stats-increasing-ifs-no-recursion.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\caption{Execution time (left) and number of nodes (right) vs.\ total number of conditionals, for
networks consisting only of conditionals.
The omitted strategies essentially perform as \texttt{InteractionsFirst} or
\texttt{ConditionsFirst}.}
\label{fig:results-test-3}
\end{figure}
The behaviour when recursive procedures also occur is shown in Figure~\ref{fig:results-test-4},
where we fix the number of procedures to 5.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Total \#ifs in network,
ylabel=Time (msec),
legend pos=north west,
scaled x ticks=false
]
\addplot[only marks,mark=x]
table[x=numberOfConditionals,y=time(msec)-LongestFirst,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{L}}
\addplot[only marks,mark=square]
table[x=numberOfConditionals,y=time(msec)-ConditionsFirst,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{C}}
\addplot[only marks,mark=+]
table[x=numberOfConditionals,y=time(msec)-InteractionsFirst,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfConditionals,y=time(msec)-Random,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Total \#ifs in network,
ylabel=Time (msec),
legend pos=north west,
scaled x ticks=false
]
\addplot[only marks,mark=x]
table[x=numberOfConditionals,y=time(msec)-UnmarkedFirst,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{U}}
\addplot[only marks,mark=square]
table[x=numberOfConditionals,y=time(msec)-UnmarkedThenRandom,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{UR}}
\addplot[only marks,mark=+]
table[x=numberOfConditionals,y=time(msec)-UnmarkedThenInteractions,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{UI}}
\addplot[only marks,mark=triangle]
table[x=numberOfConditionals,y=time(msec)-UnmarkedThenConditions,col sep=tab]{stats-increasing-ifs-with-recursion.csv};
\addlegendentry{\texttt{UC}}
\end{axis}
\end{tikzpicture}
\caption{Execution time vs.\ total number of conditionals for networks including 5 recursive procedures.}
\label{fig:results-test-4}
\end{figure}
The final tests introduce variations in the number of procedures.
The results of these tests are too complex to allow for immediate conclusions.
Figure~\ref{fig:results-test-5} shows what happens when we vary the number of procedures for
choreographies without conditionals.
Although the number of procedures potentially influences the number of loops in the AES, this
dependency is likely too complex to be visible in the test results.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of procedures,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=x]
table[x=numberOfProcedures,y=time(msec)-ConditionsFirst,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{C}}
\addplot[only marks,mark=+]
table[x=numberOfProcedures,y=time(msec)-InteractionsFirst,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{I}}
\addplot[only marks,mark=triangle]
table[x=numberOfProcedures,y=time(msec)-Random,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{R}}
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\begin{axis}[
width=.5\textwidth,
height=15em,
xlabel=Number of procedures,
ylabel=Time (msec),
legend pos=north west
]
\addplot[only marks,mark=x]
table[x=numberOfProcedures,y=time(msec)-UnmarkedFirst,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{U}}
\addplot[only marks,mark=square]
table[x=numberOfProcedures,y=time(msec)-UnmarkedThenRandom,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{UR}}
\addplot[only marks,mark=+]
table[x=numberOfProcedures,y=time(msec)-UnmarkedThenInteractions,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{UI}}
\addplot[only marks,mark=triangle]
table[x=numberOfProcedures,y=time(msec)-UnmarkedThenConditions,col sep=tab]{stats-increasing-procedures-no-ifs.csv};
\addlegendentry{\texttt{UC}}
\end{axis}
\end{tikzpicture}
\caption{Execution time vs.\ number of procedures, no conditionals.}
\label{fig:results-test-5}
\end{figure}
When we vary the number of procedures in more complex scenarios, the picture is even less clear, and
we omit a discussion of these results.
\paragraph{Correctness.}
In order to obtain confirmation of the correctness of our algorithm and its implementation, we
performed an additional verification at this point.
We implemented a naive similarity checker that tests whether a choreography $C_1$ can simulate
another choreography $C_2$ as follows: we keep a set of pairs $R$, initially containing only the
pair \tuple{C_1,C_2}.
At each step, we choose a pair \tuple{C,C'} from $R$ and compute all actions $\alpha$ and
choreographies $C_\alpha$ such that $C$ can reach $C_\alpha$ by executing $\alpha$.
For each such action $\alpha$, we check that $C'$ can execute $\alpha$, compute the resulting
choreography $C'_\alpha$, and add the pair \tuple{C_\alpha,C'_\alpha} to $R$.
If $C'$ cannot execute $\alpha$, the checker returns \code{false}.
When all pairs in $R$ have been processed, the checker returns \code{true}.
We then check, for each test, that the original choreography and the one obtained by extraction
can simulate each other.
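The worklist algorithm above can be sketched as follows (a Python illustration under our own assumptions, not the actual checker: \code{step} is a hypothetical function enumerating the \tuple{action, successor} pairs of a choreography, successors per action are assumed unique as in the lemma below, and an explicit visited set makes the loop terminate on recursive definitions):

```python
def simulates(c1, c2, step):
    """Worklist check that c2 can simulate c1.  `step(c)` enumerates the
    (action, successor) pairs available from c; the successor for each
    action is assumed unique, and the reachable state space finite."""
    todo = [(c1, c2)]
    done = set()
    while todo:
        c, cp = todo.pop()
        if (c, cp) in done:
            continue
        done.add((c, cp))
        matches = dict(step(cp))        # action -> unique successor of cp
        for alpha, c_next in step(c):
            if alpha not in matches:    # cp cannot execute alpha
                return False
            todo.append((c_next, matches[alpha]))
    return True
```

Running \code{simulates} in both directions corresponds to the mutual-simulation test we apply to each extracted choreography and its original.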
\begin{lemma}
If $C_1$ and $C_2$ can simulate each other, then there is a bisimulation between $C_1$ and $C_2$.
\end{lemma}
\begin{proof}
We first observe that the final set $R$ computed by the algorithm is always the same, regardless
of the order in which pairs are picked.
Let $R_{12}$ and $R_{21}$ be the sets built when checking that $C_2$ simulates $C_1$ and that
$C_1$ simulates $C_2$, respectively.
We show by induction on the construction of $R_{12}$ that $R_{12}^{-1}\subseteq R_{21}$.
Initially this holds, since $R_{12}=\{\tuple{C_1,C_2}\}$ and \tuple{C_2,C_1} is initially in
$R_{21}$.
Suppose $\tuple{C,C'}\in R_{12}$ is selected for processing.
By induction hypothesis, $\tuple{C',C}\in R_{21}$.
For every $\alpha$ such that $C$ can execute $\alpha$ and move to $C_\alpha$, there is a unique
choreography $C'_\alpha$ such that $C'$ can execute $\alpha$ and move to $C'_\alpha$.
Therefore, in the step where \tuple{C',C} is selected from $R_{21}$, every such pair
\tuple{C'_\alpha,C_\alpha} is added to $R_{21}$, hence it is in the final set.
Thus, after extending $R_{12}$ with all the pairs obtained from \tuple{C,C'}, the thesis still holds.
By reversing the roles of $C$ and $C'$, we also establish that $R_{21}^{-1}\subseteq R_{12}$.
Therefore $R_{12}=R_{21}^{-1}$.
It then follows straightforwardly that $R_{12}$ is a bisimulation between $C_1$ and $C_2$.
\end{proof}
Given that bisimulation is in general undecidable and that we made no effort towards a clever
implementation, our program often runs out of resources without terminating.
Still, it finished in about 5\%\ of the tests (those of smaller size), always with a positive
result.
While this may not sound impressive, it is unlikely that errors in the implementation would only
show up in larger tests, and this result increases our confidence in the soundness of the
implementation.
\subsection{Fuzzer and unroller}
\label{sec:testing-bad}
In the third testing phase, we changed the networks obtained by choreography projection using two
different methods.
The first method (the \emph{fuzzer}) applies transformations that are semantically incorrect, and
typically result in unextractable networks (modelling programmer errors).
The second method (the \emph{unroller}) applies transformations that are semantically correct, and
result in networks that are bisimilar to the original and should be extractable (modelling
alternative implementations of the same protocol).
\paragraph{The fuzzer.}
For the fuzzer, we considered the following transformations: adding an action; removing an action; and
switching the order of two actions.
The first two typically result in an unextractable network, whereas the last may still give an
extractable network that possibly implements a different protocol.
Our fuzzer takes two parameters $d$ and $s$, randomly chooses one process in the network, deletes
$d$ actions in its definition and switches $s$ actions with the following one.
The probability distribution of deletions and swaps is uniform (all actions have the same
probability of being deleted or swapped).
We made the following conventions: deleting a conditional preserves only the \code{then} branch;
deleting a branching term preserves only the first branch offered; swapping a conditional or
branching with the next action switches it with the first action in the \code{then}/first branch;
and swapping the last action in a behaviour with the next one amounts to deleting that action.
Deleting a conditional results in an extractable network that implements a subprotocol of the
original one, while other deletions yield unextractable networks.
Exchanges of communication actions may yield extractable networks, but with a different extracted
choreography; all other types of exchanges break extractability.
We did not implement adding a random action, as this is covered in our tests: adding an unmatched
send from \m{p} to \m{q} can be seen as removing a receive at \m{q} from \m{p} from a choreography
that includes that additional communication.
We restricted fuzzing to one process only since in practice we can assume that processes are changed
one at a time.
We applied three different versions of fuzzing to all our networks: one swap; one deletion; and two
swaps and two deletions.
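On a behaviour flattened to a list of actions, the fuzzing transformations can be sketched as follows (an illustrative Python simplification with hypothetical names; it shows the uniform choice of positions and the convention that swapping the last action amounts to deleting it, but elides the branch-specific conventions for conditionals and branching terms):

```python
import random

def fuzz(actions, d, s, rng):
    """Delete d actions and swap s actions with their successor, at
    uniformly chosen positions; swapping the last action with the
    (missing) next one amounts to deleting it."""
    out = list(actions)
    for _ in range(d):
        if out:
            del out[rng.randrange(len(out))]
    for _ in range(s):
        if not out:
            break
        i = rng.randrange(len(out))
        if i == len(out) - 1:
            del out[i]                  # swap past the end = delete
        else:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out
```

The three test configurations then correspond to calls with $\tuple{d,s}$ equal to $\tuple{0,1}$, $\tuple{1,0}$, and $\tuple{2,2}$ on one randomly chosen process per network.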
The results are summarised in Table~\ref{tab:fuzzing}.
\begin{table}
\caption{Extracting fuzzed networks: for each strategy we report on the percentage of
unextractable networks (\%) and the average and median times to fail in those cases (ms).
We highlight the best and worst values in each column in green and red, respectively.}
\label{tab:fuzzing}
\centering
\begin{tabular}{c|rrr|rrr|rrr}
\toprule
Strategy
& \multicolumn3{c|}{$d=0$, $s=1$}
& \multicolumn3{c|}{$d=1$, $s=0$}
& \multicolumn3{c}{$d=2$, $s=2$}
\\
& \% & avg & med
& \% & avg & med
& \% & avg & med
\\ \midrule
\texttt{R}
& 45 & 384 & 10
& 99 & 198 & 18
& 100 & \best{85} & \best{9}
\\
\texttt{L}
& 45 & \worst{1171} & \worst{13}
& 99 & \worst{1080} & \worst{85}
& 100 & \worst{664} & \worst{44}
\\
\texttt{S}
& 43 & \worst{1627} & \worst{13}
& 99 & \worst{1163} & \worst{124}
& 100 & \worst{696} & \worst{45}
\\
\texttt{I}
& 43 & 400 & 11
& 99 & \best{175} & 19
& 100 & \best{73} & 15
\\
\texttt{C}
& 46 & 451 & 9
& 99 & 226 & 21
& 100 & 106 & \best{9}
\\
\texttt{U}
& 45 & \best{368} & 10
& 99 & 192 & \best{17}
& 100 & 98 & 10
\\
\texttt{UI}
& 42 & 394 & 10
& 99 & \best{185} & 21
& 100 & 94 & 12
\\
\texttt{US}
& 44 & \best{358} & 10
& 99 & \best{185} & 19
& 100 & 94 & 11
\\
\texttt{UC}
& 45 & 414 & 10
& 99 & 208 & 18
& 100 & 104 & 13
\\
\texttt{UR}
& 44 & \best{370} & 10
& 99 & 204 & \best{17}
& 100 & \best{80} & \best{9}
\\ \bottomrule
\end{tabular}
\end{table}
The differences in the percentages in the first row are due to memory running out in some cases,
but they are small enough to be statistically insignificant.
In later rows, most networks are unextractable; the interesting observation here is that strategies
prioritising actions that involve two processes and unmarked processes tend to fail faster.
\paragraph{The unroller.}
Projections of choreographies are intuitively easy to extract because their recursive procedures are
all synchronised (they come from the same choreography).
In practice, this is not necessarily the case: programs often include ``loops-and-a-half'', where it
is up to the programmer to decide where to place the duplicate code; and sometimes procedure
definitions can be locally optimised.
For example: if $X=\com peqx;\com p{e'}qx;\com p{e''}ry;X$, then in the extracted implementation of
$\pid q$ the definition of $X_{\pid q}$ can simply be $\brecv px;X$.
Our unroller models these situations by choosing one process and randomly unfolding some procedures,
as well as shifting the closing point of some loops.
These transformations are always correct, so they should yield extractable networks, but extraction
may take longer and there may be a higher chance of introducing bad loops.
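The two transformations can be sketched as follows (a Python illustration under our own simplifying assumptions: procedure bodies are lists of actions ending in a procedure call, and rotating a loop is only sound here because the sketch assumes a single self-referential call site; the names \code{unfold} and \code{rotate} are ours):

```python
def unfold(defs, name):
    """Unfold the definition of `name` once: if X = B; X, the body
    becomes B; B; X."""
    *body, call = defs[name]
    return body + defs[call]

def rotate(defs, name, k):
    """Shift the closing point of the loop `name` by `k` actions,
    producing a 'loop-and-a-half': the first k actions are peeled off
    as an entry prefix and the loop body is rotated accordingly."""
    *body, call = defs[name]
    return body[:k], body[k:] + body[:k] + [call]
```

For instance, rotating $X = a; b; c; X$ by one action yields the entry prefix $a$ followed by the loop $b; c; a; X$, which is bisimilar to the original but places the loop boundary differently.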
We generated $240$ tests, all of which we were able to extract, and compared the extraction times
for the original and unrolled networks.
In Table~\ref{tab:unroller} we report the average and median ratios for each extraction strategy.
\begin{table}
\caption{Extracting unrolled networks: for each strategy we report the average and the median of
the ratio between the time needed to extract unrolled networks and the time needed to extract
the original networks.}
\label{tab:unroller}
\centering
\begin{tabular}{c|rr}
\toprule
Strategy
& Average
& Median
\\ \midrule
\texttt{R} & 6.20 & 1 \\
\texttt{L} & 1.54 & 1 \\
\texttt{S} & 4.82 & 1 \\
\texttt{I} & 5.24 & 1.07 \\
\texttt{C} & 1.95 & 1.06 \\
\texttt{U} & 9.12 & 1.03 \\
\texttt{UI} & 4.70 & 1 \\
\texttt{US} & 2.17 & 1 \\
\texttt{UC} & 3.88 & 1 \\
\texttt{UR} & 2.47 & 1 \\
\bottomrule
\end{tabular}
\end{table}
The table shows that unrolling slows down the extractor somewhat, but in a very asymmetric way: for
most networks the changes are minor (shown by the median around $1$), while for a few there are very
large changes in either direction.
An analysis of the raw data shows that:
\begin{itemize}
\item there is no general trend -- in some cases the unrolled network is faster to extract, in
  other cases it is slower;
\item in most cases the ratio is close to $1$ (and in many exactly $1$, due to the fact that
execution times are rounded to the nearest millisecond);
\item ratios vary from as low as $0.0003$ to as high as $1505$.
\end{itemize}
\section{Conclusions and Discussion}
\label{sec:discussion}
\label{sec:concl}
We have presented an efficient algorithm for extracting choreographies from network specifications, improving the original conference presentation in~\cite{CLM17}.
We have successfully implemented this algorithm, developed the first comprehensive test suite for evaluating this kind of algorithm, and used the test suite to evaluate our implementation.
Our results are very encouraging compared to previous work~\cite{LTY15}, and open the door to interesting future developments.
We discuss some of them.
\paragraph{More expressive communications and processes.}
In real-world contexts, values stored and communicated by processes are typed, and the receiver
process can also specify how to treat incoming messages~\cite{CM17a}.
This means that communication actions now have the form $\com peqf$, where $f$ is the function
consuming the received message, and systems may deadlock because of typing errors.
Our construction applies without changes to this scenario -- any requirements regarding type
checking, for example, will also be necessary for defining the semantics of the process calculus.
\paragraph{Choreographic Programming and Multiparty Session Types.}
Choreographic languages like ours are used in choreographic programming, a programming paradigm where choreographies are programs that can be compiled to distributed implementations~\cite{M13:phd,CM20,M22}.
Our extraction algorithm can be applied to several existing languages for networks, modulo minor syntactic differences~\cite{CM17a,CM20,CMP18,M22}. For some of these languages, our algorithm applies only to fragments; we discuss some of the missing features as future work in the next paragraphs.
Choreographies have also been advocated for the specification of communication
protocols.
Most notably, multiparty session types use choreographies to define types used in the verification of process calculi~\cite{HYC16}.
While there are multiple variants of multiparty session types, the one used most in practice so far is almost identical to a simplification of SP.
In this variant, each pair of participants has a dedicated channel, and communication actions refer
directly to the intended sender/recipient as in SP (see the theory
of~\cite{CM13,MY13,CLMSW16,CDYP16}, for example, and the practical implementations
in~\cite{HMBCY11,NY15,M13:phd}).
To obtain multiparty session types from SP (and CC), we just need to: remove the capability of
storing values at processes; replace message values with constants (representing types, which could
also be extended to subtyping in the straightforward way); and make conditionals nondeterministic
(since in types we abstract from the precise values and expressions used by the evaluator).
These modifications do not require any significant change to our approach since our AES already
abstracts from data and, thus, our treatment of the conditional is already nondeterministic.
For reference, we can simply treat the standard construct for an internal choice at a process
$\pid p$ -- $\actor p{B_1\oplus B_2}$ -- as syntactic sugar for a local conditional such as
$\actor p{\bcond{\m{coinflip}}{B_1}{B_2}}$.
\paragraph{Asynchrony.}
Our process calculus is not expressive enough to model examples from~\cite{LTY15} that use the
pattern of asynchronous exchange.
\begin{example}
\label{ex:async-mot}
The network $\actor p{\bsend qe;\brecv qx} \parp \actor q{\bsend p{e'};\brecv py}$ is deadlocked
in SP, but would run without errors in an asynchronous context: both $\pid p$ and $\pid q$ can
send their respective values, becoming ready to receive each other's messages.
\eoe
\end{example}
Asynchronous semantics for SP and CC have been described in~\cite{CM17b}.
For SP, we add a FIFO queue for each pair of processes.
Communications now synchronise with these queues: send actions append a message to the queue
between sender and receiver, and receive actions remove the first message from that queue.
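This queue discipline can be sketched in a few lines of Python (an illustration of the FIFO-per-pair idea only; the class and method names are ours, not part of any formal semantics):

```python
from collections import deque

class AsyncNet:
    """One FIFO queue per ordered pair of processes: a send appends to
    the queue for (sender, receiver); a receive pops its oldest entry,
    blocking (here: raising) when the queue is empty."""
    def __init__(self):
        self.queues = {}

    def send(self, sender, receiver, value):
        self.queues.setdefault((sender, receiver), deque()).append(value)

    def recv(self, sender, receiver):
        queue = self.queues.get((sender, receiver))
        if not queue:
            raise RuntimeError(f"{receiver} blocks: no message from {sender}")
        return queue.popleft()
```

Under this discipline the network of Example~\ref{ex:async-mot} runs to completion: both processes perform their sends first, after which both receives find a message waiting.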
In order to extract asynchronous exchanges, we do not need full asynchrony at the choreography
level. Rather, we can restrict ourselves to a new primitive called a \emph{multicom}~\cite{CMP18}: a list of communication actions with distinct receivers, written
$\genmulticometa$.
\begin{comment}
The usual communications and selections are a special case of this action.
The semantics of multicom is given by the following rule, which generalizes (and replaces) both
\rname{C}{Com} and \rname{C}{Sel}.
\begin{displaymath}
\infer[\rname{C}{MCom}]{
\genmulticometa;C,\sigma
\lto{
\genmulticometa[e_i/v_i]_{i \in I}
}
C, \sigma[\pid q_i \mapsto v_i]_{i \in I}
}{
I = \left\{ i \mid \com{p_i}{e_i}{q_i}{x_i} \in \til \eta \right\}
&
\eval{e_i}\sigma{p_i}{v_i}
}
\end{displaymath}
Structural precongruence rules for the multicom are motivated by its intuitive semantics: actions
inside a multicom can be permuted as long as the senders differ, and sequential multicoms can be
merged as long as they do not share receivers and there are no sequential constraints between them
(that is, none of the receivers in the first multicom is a sender in the second one).
\begin{eqnarray*}
&\infer[\rname{C}{MCom-Perm}]
{\multicom{\ldots, \eta_1, \eta_2, \ldots} \equiv \multicom{\ldots, \eta_2, \eta_1, \ldots}}
{\pn(\eta_1)\cap\pn(\eta_2)=\emptyset}
\\[1ex]
&\infer[\rname{C}{MCom-MCom}]
{\multicom{\til\eta};\multicom{\til\nu} \equiv \multicom{\til\eta,\til\nu}}
{\rcv(\eta)\cap\rcv(\nu)=\emptyset
&
\rcv(\til\eta)\cap\snd(\til\nu)=\emptyset
}
\end{eqnarray*}
\end{comment}
Using multicoms, the program in Example~\ref{ex:async-mot} can be extracted as
$\multicom{\com peqx, \com q{e'}py}$.
The theory of this extension has been discussed briefly in~\cite{CLM17}, but implementing it is outside of the scope of this work.
\begin{comment}
Structural precongruence rules for multicom also allow us to define a normal form for
choreographies, where no multicom can be split in smaller multicoms.
In order to extract choreographies containing multicoms, we alter the definition of the AES for a
process network by allowing multicoms as labels for the edges.
These can be computed using the following iterative algorithm.
\begin{enumerate}
\item For a process $\pid p$ with behaviour $\bsend qe;B$ or $\bsel q\ell;B$, set
$\m{actions}=\emptyset$ and $\m{waiting}=\{\com peq?\}$ or $\m{waiting}=\{\sel pq\ell\}$, respectively.
\item While $\m{waiting}\neq\emptyset$:
\begin{enumerate}
\item Select an action $\eta$ from $\m{waiting}$.
Assume $\eta$ is of the form $\com res?$ (the case for label selection is similar).
\item If the behaviour of $\pid s$ is of the form $a_1;\ldots;a_k;\brecv ry;B$ where each $a_i$ is
either the sending of a value or a label selection, then: (i)~for each $a_i$, if the
corresponding choreography action is not in $\m{actions}$, add it to $\m{waiting}$; (ii)~add
$\com resy$ to $\m{actions}$.
\end{enumerate}
\item Return $\m{actions}$.
\end{enumerate}
This algorithm may fail (the behaviour of $\pid s$ in step~2(b) is not of the required form), in
which case the action initially chosen cannot be unblocked by a multicom.
\begin{example}
Consider the network from Example~\ref{ex:async-mot}.
Starting with action $\bsend qe$ at process $\pid p$, we initialize
$\m{actions}=\emptyset$ and $\m{waiting}=\{\com peq?\}$.
We pick the action $\com peq?$ from $\m{waiting}$.
The behaviour of $\pid q$ is $\bsend p{e'};\brecv px$, which is of the form described in
step~2(b); the choreography action corresponding to $\bsend p{e'}$ is $\com q{e'}p?$,
so we obtain $\m{actions}=\{\com peqx\}$ and $\m{waiting}=\{\com q{e'}px\}$.
Now we remove the action $\com q{e'}p?$ from $\m{waiting}$ and consider $\pid p$'s behaviour, which
is $\bsend qe;\brecv qy$.
The choreography action corresponding to $\bsend qe$ is $\com peq?$, which is already in
$\m{actions}$, so we do not change $\m{waiting}$, and we add $\com q{e'}py$ to $\m{actions}$.
The set $\m{waiting}$ is now empty, and the algorithm terminates,
returning $\multicom{\com peqx\\ \com q{e'}py}$.
We would obtain the equivalent
$\multicom{\com q{e'}py\\ \com peqx}$ by starting with the send action at $\pid q$.
\eoe
\end{example}
\begin{example}
As a more sophisticated example, we show how our new choreographies with multicom can model the
alternating 2-bit protocol.
Here, Alice alternates between sending a $0$ and a $1$ to Bob; in turn, Bob sends an
acknowledgment for every bit he receives, and Alice waits for the acknowledgment before sending
another copy of the same bit.\kim{``copy'' sounds really strange, I think}
Since we are in an asynchronous semantics, we only consider the time when the messages arrive.
With this in mind, we can write this protocol as the network
$\actor a{\bsend b0;\bsend b1;X} \parp \actor b Y$, where
\begin{align*}
&\rec{X}{\brecv bx;\bsend b0;\brecv bx;\bsend b1;X} \\
&\rec{Y}{\brecv ay;\bsend a{\m{ack}_0};\brecv ay;\bsend a{\m{ack}_1};Y}
\end{align*}
This implementation imposes exactly the dependencies dictated by the protocol.
For example, Alice can receive Bob's acknowledgment to the first $0$ before or after Bob receives
the first $1$.
This network extracts to the choreography
\[
\com a0by;X
\qquad\mbox{where}\qquad
X = \multicom{\com a1by\\ \com b{\m{ack}_0}ax};
\multicom{\com a0by\\ \com b{\m{ack}_1} ax}; X
\]
which is a simple and elegant representation of the alternating 2-bit protocol.
\eoe
\end{example}
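The protocol can also be simulated directly. The sketch below is our own illustration (not part of the formal development): one FIFO queue per direction models the asynchronous semantics, and each loop iteration unfolds the recursive definitions of $X$ and $Y$ once, under one particular fair scheduling.

```python
from collections import deque

def simulate(rounds):
    to_b = deque([0, 1])          # Alice's two initial sends
    to_a = deque()
    bits_at_b, acks_at_a = [], []
    for _ in range(rounds):
        # one unfolding of Y: receive, ack0, receive, ack1
        bits_at_b.append(to_b.popleft()); to_a.append('ack0')
        bits_at_b.append(to_b.popleft()); to_a.append('ack1')
        # one unfolding of X: receive ack, send 0, receive ack, send 1
        acks_at_a.append(to_a.popleft()); to_b.append(0)
        acks_at_a.append(to_a.popleft()); to_b.append(1)
    return bits_at_b, acks_at_a
```

Bob receives the alternating stream $0,1,0,1,\dots$ and Alice receives the matching alternating acknowledgments, as the choreography prescribes.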
Extraction for asynchronous SP is still sound, but behavioural equivalence is now an
expansion~\cite{AH92,SW01}, as each communication takes two steps in asynchronous SP.
Its complexity can also be shown to be no larger than for the synchronous case.
We have not implemented the extension of extraction to the asynchronous case.
\end{comment}
\paragraph{Process spawning.}
Another useful construct is the capability to spawn new processes at runtime~\cite{CHY12,CM17a}. This feature would suffice, for example, to represent the remaining examples from~\cite{LTY15}, as well as many more complex examples.
Having such a construct breaks the fundamental premise of our algorithm, namely that SEGs are finite. Studying how the theory and implementation could be adapted to this extension is a challenging future direction.
\paragraph{Extraction strategies.}
We also believe that extraction strategies have unexplored potential, but a full study of their impact goes beyond the scope of this work. An interesting direction could be to develop more complex heuristics, for example such that the choice of action to be consumed takes into account the shape of the network and the partial graph built so far.
\section*{Acknowledgments}
All authors were supported in part by the Independent Research Fund Denmark, Natural Sciences, grant DFF-7014-00041. Larsen was supported in part by the Independent Research Fund Denmark, Natural Sciences, grant DFF-0135-00018B. Montesi was supported in part by Villum Fonden, grant 29518, and the Independent Research Fund Denmark, Technology and Production, grant DFF-4005-00304.
\clearpage
\bibliographystyle{plain}
\section{Introduction}
Neural networks have become extremely popular in recent years. However, the theory of why neural networks work is still not well developed, and extensive research on this question is ongoing. Many explain the success of ANNs by comparing the neurons of an ANN to those of a biological neural network in the brain.
In our view, however, the success of ANNs is better attributed to other properties, such as the ``universal approximation property'' of neural networks, which rests on thorough mathematical proofs~\cite{shamir2018resnets,lu2017expressive,eldan2016power,hanin2017approximating,telgarsky2016benefits,kidger2020universal}.
The universal approximation theorem states that neural networks are able to approximate any continuous function that connects inputs to outputs.
\section{Literature Survey}\label{literature survey}
\subsection{Approximating various classes of functions}
In this paper, we consider approximating various classes of functions, namely trigonometric, polynomial, exponential, and step-wise increasing functions, among others. The target function is expressed as a Taylor-series expansion with a large number of terms. All the terms of the Taylor series are polynomials, which are in turn implemented by suitable NN architectures~\cite{liang2016deep}.
Given a target function $f$, let $g$ be the function simulated by the neural network; the distance, or error, between the functions is the maximum absolute difference over the hypercube $[0,1]^d$.
In that paper, the authors presented upper bounds for univariate functions, starting with polynomial functions.
Two kinds of activation units are used: ReLU and binary step units. The multiplication of two bits is implemented using a ReLU unit.
\subsubsection{Approximating function $x^2$}
In the method employed, the authors determine the depth and size of the neural network needed to approximate a function. First, the simple function $x^2$ is considered, with $x \in (0,1)$.
Here, $x$ is first approximated by its truncated binary expansion $\sum_{i=0}^{n} \frac{x_i}{2^i}$.
An $n$-layer neural network is used to extract the bits $x_i$.
Next, the function $\tilde{f}(x) = f\bigl(\sum_{i=0}^{n} \frac{x_i}{2^i}\bigr)$ is implemented by a two-layer neural network. To achieve an $\epsilon$-approximation error, $n$ should be chosen as $n=\lceil \log_2 \frac{1}{\epsilon} \rceil + 1$. Such a deep neural network has $\mathcal{O}(\log\frac{1}{\epsilon})$ layers, $\mathcal{O}(\log\frac{1}{\epsilon})$ binary step units and $\mathcal{O}(\log\frac{1}{\epsilon})$ rectified linear units.
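The arithmetic of this construction can be mimicked numerically. The sketch below is ours and emulates the computation performed by the network, not the network itself: it truncates $x$ to $n$ bits with $n=\lceil \log_2(1/\epsilon)\rceil+1$ and evaluates $x^2$ on the truncation.

```python
import math

def binary_truncate(x, n):
    # n-bit binary truncation of x in (0, 1): keeps bits x_1 .. x_n
    bits = [int(x * 2 ** i) % 2 for i in range(1, n + 1)]
    return sum(b / 2 ** i for i, b in enumerate(bits, start=1))

def approx_square(x, eps):
    # choose n = ceil(log2(1/eps)) + 1 as in the construction above
    n = math.ceil(math.log2(1 / eps)) + 1
    return binary_truncate(x, n) ** 2
```

The error satisfies $|x^2-\tilde{x}^2| = (x+\tilde{x})(x-\tilde{x}) \le 2\cdot 2^{-n} \le \epsilon$, matching the choice of $n$ above.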
In the ResNet paper, functions are approximated by a concatenation of trapezoidal functions, which is in turn implemented by a ResNet.
\subsection{Feedforward Neural network as Function Approximator}
For this discussion, we take some of the concepts from
\cite{liang2016deep}. Here, we discuss the question of function approximation using neural networks. The key intuition is that the multiplication of two binary bits can be performed by a ReLU unit; similarly, two numbers can be multiplied using ReLUs. We demonstrate these methods in this section.
\subsubsection{Multiplying two bits using ReLU}
\emph{Example:}
\begin{center}
Let $x_1$ and $x_2$ be two bits to be multiplied, with
$x_1 \in \{0,1\}$ and $x_2 \in \{0,1\}$, and let $x_1 \cdot x_2 = y$.
The truth table for $y$ is as follows:

\begin{tabular}{ c c c }
$x_1$ & $x_2$ & $y$ \\
0 & 0 & 0 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
1 & 1 & 1
\end{tabular}
\end{center}
This can be written as $y=\max\{0,\,x_1+x_2-1\}$, or more generally as $y=\max\{0,\,k(x_1-1)+x_2\}$ for any constant $k \geq 1$.
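The identity is easy to check directly; the following snippet (our illustration) verifies the truth table, including the general form with $k \geq 1$:

```python
def relu(z):
    return max(0, z)

def bit_mul(x1, x2, k=1):
    # y = max{0, k(x1 - 1) + x2}; reduces to max{0, x1 + x2 - 1} for k = 1
    return relu(k * (x1 - 1) + x2)

# enumerate the truth table
table = [(x1, x2, bit_mul(x1, x2)) for x1 in (0, 1) for x2 in (0, 1)]
```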
Fig.~1 shows the error between the target function and its Taylor-series approximation. The latter is computed for various numbers of terms, given by $N$; for example, $N=5$ indicates that the Taylor series is truncated at the degree-5 polynomial.
Various types of functions, such as sinusoidal, polynomial and exponential functions, are considered.
The results may not be as expected. By the remainder theorem for Taylor series, for a smooth function one would expect the remainder, which corresponds to $\epsilon_1$, to be very small. However, the coefficient of $x$ inside $f(x)$ (for instance, the frequency of a sinusoid) determines the size of the remainder, and for small $N$ the remainder can be large, as seen in the table for the sine function.
The $\epsilon_2$ error, in contrast, is the difference between the output of the neural network and that of the Taylor-series approximation. Here the variable $n$ corresponds to the number of terms in the binary expansion of $x$.
\begin{itemize}
\item The implementation here follows the same procedure as for the ResNet paper, applied to various functions to obtain values for the errors $\epsilon_1$ and $\epsilon_2$.
\item In the $\epsilon_1$ tables, rows are indexed by the number of Taylor-series terms under consideration and columns by the functions.
\item Separate tables are made for the different points about which the Taylor series is taken.
\item A further set of tables is made using the infinity norm.
\item In the $\epsilon_2$ table, rows are indexed by the number of bits used and columns by the functions.
\item For $\epsilon_2$, the error is zero for 60 or more bits.
\end{itemize}
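For concreteness, $\epsilon_1$ can be estimated as the sup-norm distance on $[0,1]$ between a function and its truncated Taylor series. The sketch below is our own minimal illustration for $f = \sin$ (function names are ours):

```python
import math

def taylor_sin(x, N):
    # Taylor polynomial of sin about 0, truncated at degree N
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(N // 2 + 1) if 2 * k + 1 <= N)

def eps1(N, samples=1000):
    # sup-norm distance on [0, 1], estimated on a uniform grid
    xs = [i / samples for i in range(samples + 1)]
    return max(abs(math.sin(x) - taylor_sin(x, N)) for x in xs)
```

As expected from the remainder theorem, the error decreases rapidly with $N$ for this smooth, low-frequency target.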
\section{Preliminaries and Problem Statement}
We want to find an optimal neural network architecture for a given class of functions. Different methods of constructing neural networks to approximate functions are discussed in detail.
From the results of implementing the classes of functions in Srikant's paper and in the ResNet paper, the following points are observed:
\begin{enumerate}
\item Srikant's method approximates polynomial functions better, as expected.
\item The ResNet method better approximates step-wise increasing/decreasing functions, as expected.
\item Neither method approximates sinusoidal functions satisfactorily.
\end{enumerate}
Hence, we propose a class of NNs, called Fourier NNs (FNNs), with a sinusoidal activation function to deal with this problem.
Our intuition is that, because the activation function is sinusoidal, the network can easily approximate sinusoidal functions.
Moreover, since by Fourier series a function on an interval can be written in terms of sinusoids, sinusoids can serve as building blocks, and our Fourier neural network can be used to construct other functions as well.
In the literature, the optimality of the neural network architecture is not considered. In this paper, we address the following question: is it possible to optimise the number of layers or the number of neurons required for the architecture to perform universal approximation, and if so, how do we arrive at that architecture? We propose an FNN to approximate the sinusoidal class of functions. Further, we extend the FNN architecture from a single layer to two layers, which implements double trigonometric functions effectively. A skip connection is also allowed from the first layer to the outputs, admitting single trigonometric functions as well. The trade-off between width and depth is considered. It is shown that a two-layer FNN approximates better than a single-layer FNN with the same number of neurons. Finally, the model is compared with other constructive methods, namely the Taylor-series implementation and the trapezoidal-function implementation using ResNet.
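A single-hidden-layer network with fixed sinusoidal units of integer frequencies is, in effect, a truncated Fourier series. The sketch below is our own illustration of this view: the output weights are Fourier coefficients estimated by Riemann sums on $[-1,1]$ rather than learned by training, and all names are of our choosing.

```python
import math

def fourier_fit(f, K, samples=2000):
    # estimate Fourier coefficients of f on [-1, 1] by left Riemann sums
    xs = [-1 + 2 * i / samples for i in range(samples)]
    dx = 2 / samples
    a0 = sum(f(x) for x in xs) * dx / 2
    coeffs = []
    for k in range(1, K + 1):
        a = sum(f(x) * math.cos(math.pi * k * x) for x in xs) * dx
        b = sum(f(x) * math.sin(math.pi * k * x) for x in xs) * dx
        coeffs.append((a, b))
    def net(x):
        # each hidden unit is sin(pi*k*x + phase); cos is a shifted sin
        return a0 + sum(a * math.sin(math.pi * k * x + math.pi / 2)
                        + b * math.sin(math.pi * k * x)
                        for k, (a, b) in enumerate(coeffs, start=1))
    return net
```

For instance, fitting $f(x)=x^2$ with $K=10$ hidden frequencies already reproduces the target closely at interior points.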
\section{Motivation}
In the Digitone method of creating instrument sounds digitally, FM synthesis is used: instead of using many frequencies to implement an instrument, only a carrier and a modulation frequency are used, i.e.\ double trigonometric functions. Hence, we are motivated to use double trigonometric functions, which can be generated by two-layer Fourier neural networks. Our idea is similar to Digitone: here too we would be able to approximate the target functions using fewer frequencies, which implies fewer neurons in the network.
We see that the feedforward network approximates polynomial functions with small error, so it is ideal for implementing polynomial-type functions. The reason lies in the specific methodology these networks follow: the target function is first approximated by a Taylor-series approximation $F_1$, which is in turn approximated by the neural network. Thus, these neural networks can be said to be based on polynomial functions.
Similarly, the ResNet network we implemented is based on trapezoidal approximation. Consider a target function to be implemented: its graph is divided uniformly, with sample distance $a$, and over each sample interval the graph is approximated by a trapezoid.
The concatenation of all such trapezoidal functions approximates the target function; this ResNet network can thus be considered to be based on trapezoidal functions. The trapezoidal function is here set up to approximate the step function with step size $a$, so this NN approximates well those functions which are combinations of step functions, or which can be well approximated by such combinations.
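The trapezoidal approximation described here is easy to reproduce. This sketch is our own (not the paper's code): sample $f$ with spacing $a$ and interpolate linearly between samples.

```python
def trapezoid_approx(f, a, lo=0.0, hi=1.0):
    # sample f at spacing a, then connect samples piecewise-linearly
    n = int(round((hi - lo) / a))
    ys = [f(lo + i * a) for i in range(n + 1)]
    def g(x):
        i = min(int((x - lo) / a), n - 1)
        t = (x - (lo + i * a)) / a
        return (1 - t) * ys[i] + t * ys[i + 1]
    return g
```

For a twice-differentiable target, the sup error scales as $a^2 \max|f''|/8$, so halving $a$ roughly quarters the error.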
We implemented both networks for different types of target functions and observed that sinusoidal functions are not approximated well. We propose that this is because a sinusoidal function, by its nature, cannot be well approximated by polynomial or step functions. Hence we propose a neural network which is based on sinusoidal functions: this new NN approximates the target function by its Fourier series, i.e.\ a combination of sinusoids, and is therefore effective for approximating sinusoidal-type functions.
\section{Fourier Neural Network: Effectiveness}
The Fourier neural network, as predicted, approximates sinusoidal functions effectively. There is, however, a problem: implementing this NN requires a large number of neurons in the hidden layer. This is its main drawback.
\section{Double Fourier Neural Network}
To reduce the number of neurons required to approximate the target function, we propose a neural network that generates double trigonometric functions. Our motivation is the fact that deeper neural networks generally approximate better. We find that in some cases the network performs well, while in others it is not very effective; we could not draw solid conclusions from these experiments.
\section{Hybrid Fourier Neural Network}
We propose a new hybrid neural network which combines normal trigonometric functions and double trigonometric functions. This hybrid network is applied to approximate the target function by implementing the corresponding Fourier series. We find that this network gives good results in most cases.
\section{Results And Discussion}\label{sec:results}
\begin{table*}[ht!]
\caption{Epsilon 1 using ResNet method}
\label{epsilon1 ResNet}
\renewcommand{\arraystretch}{1}
\renewcommand{\tabcolsep}{2.4mm}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{M} & \textbf{sin(2*pi*x/5)} & \textbf{sin(2*pi*x/2.5)} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}rectfunc 1 to 10 \\ 1 cycle\end{tabular}}} & \multicolumn{1}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}rectfunc 1to10\\ 2cycles\end{tabular}}} & \textbf{log(x)} \\ \hline
\textbf{5} & 6.366198 & 6.366198 & 0 & 2.5 & 3.61822 \\ \hline
\textbf{10} & 4.266574 & 7.531705 & 0 & 1.111111 & 2.250196 \\ \hline
\textbf{50} & 0.814721 & 1.617057 & 0 & 0 & 0.898553 \\ \hline
\textbf{100} & 0.403754 & 0.805831 & 0 & 0 & 0.519654 \\ \hline
\textbf{500} & 0.080142 & 0.160234 & 0 & 0 & 0.112856 \\ \hline
\textbf{1000} & 0.040238 & 0.08008 & 0 & 0 & 0.031386 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[ht!]
\caption{Epsilon 2 using ResNet method}
\label{epsilon2 ResNet}
\renewcommand{\arraystretch}{1}
\renewcommand{\tabcolsep}{2.4mm}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\textbf{M} & \textbf{sin(2*pi*x/5)} & \textbf{sin(2*pi*x/2.5)} & \textbf{rect\_1\_to\_10} & \textbf{rect\_1\_to\_10(2 cycles)} & \textbf{log(x)} \\ \hline
\textbf{5} & 0.00056 & 0.00112 & 0 & 0 & 0.274802 \\ \hline
\textbf{10} & 1.10E-13 & 1.86E-13 & 0 & 0 & 8.18E-12 \\ \hline
\textbf{50} & 1.28E-11 & 1.57E-11 & 0 & 0 & 1.18E-10 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[ht!]
\caption{Epsilon 1 for Feedforward Networks}
\label{epsilon1 Srikant}
\renewcommand{\arraystretch}{1}
\renewcommand{\tabcolsep}{2.4mm}
\centering
\resizebox{175mm}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{N} & \multicolumn{1}{c|}{\textbf{gaussian}} & \multicolumn{1}{c|}{\textbf{x\textasciicircum{}2}} & \multicolumn{1}{c|}{\textbf{x\textasciicircum{}(-2)}} & \multicolumn{1}{c|}{\textbf{sinc2}} & \multicolumn{1}{c|}{\textbf{sin(2*pi*x/0.5)}} & \multicolumn{1}{c|}{\textbf{sin(2*pi*x/0.25)}} & \multicolumn{1}{c|}{\textbf{exp(x)}} & \multicolumn{1}{c|}{\textbf{exp(-x)}} & \multicolumn{1}{c|}{\textbf{sinc2\_new}} \\ \hline
\textbf{5} & 0.006882 & $5.55\times10^{-17}$ & 7.35E-08 & 1.86088 & 2273.285 & 78403.85 & 2.22E-16 & 2.22E-16 & 2.263957 \\ \hline
\textbf{10} & 2.78E-16 & 5.55E-17 & 9.86E+24 & 8.51E-13 & 10781.7 & 3291133 & 0 & 2.05E-08 & 2.22E-16 \\ \hline
\textbf{25} & 5.8E-15 & 5.55E-17 & 5.46E-12 & 4.82E-14 & 35.25349 & 3.04E+09 & 1.78E-15 & 8.33E-16 & 6.57E-08 \\ \hline
\textbf{50} & 2.78E-16 & 5.55E-17 & 3.1E+105 & 4.54E+85 & 6.26E-11 & 68970.15 & 1.78E-15 & 8.33E-16 & 2.94E-15 \\ \hline
\textbf{75} & 2.78E-16 & 5.55E-17 & 5.46E-12 & 1.69E+86 & 2.39E-11 & 5.44E-06 & 1.78E-15 & 8.33E-16 & 2.94E-15 \\ \hline
\end{tabular}%
}
\end{table*}
\begin{table*}[ht!]
\caption{Epsilon 2 for Feedforward Networks}
\label{epsilon2 Srikant}
\renewcommand{\arraystretch}{1}
\renewcommand{\tabcolsep}{2.4mm}
\centering
\resizebox{175mm}{!}{%
\begin{tabular}{|c|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{1}{|l|}{\textbf{n}} & \multicolumn{1}{c|}{\textbf{gaussian}} & \multicolumn{1}{c|}{\textbf{x\textasciicircum{}2}} & \multicolumn{1}{c|}{\textbf{x\textasciicircum{}(-2)}} & \multicolumn{1}{c|}{\textbf{sinc2}} & \multicolumn{1}{c|}{\textbf{sin(2*pi*x/0.5)}} & \multicolumn{1}{c|}{\textbf{sin(2*pi*x/0.25)}} & \multicolumn{1}{c|}{\textbf{log(x)(from 0.1)}} & \multicolumn{1}{c|}{\textbf{exp(x)}} & \multicolumn{1}{c|}{\textbf{exp(-x)}} \\ \hline
\textbf{5} & 0.004827 & 0.030599 & 6.701083 & 0.03125 & 0.25 & 0.5 & 0.081827 & 0.053137 & 0.01996 \\ \hline
\textbf{10} & 0.000143 & 0.000944 & 0.097095 & 0.000904 & 0.007226 & 0.014422 & 0.002232 & 0.001552 & 0.000567 \\ \hline
\textbf{25} & 4.17E-09 & 2.57E-08 & 2.88E-06 & 2.74E-08 & 2.21E-07 & 4.4E-07 & 6.58E-08 & 4.73E-08 & 1.78E-08 \\ \hline
\textbf{50} & 1.28E-16 & 8.13E-16 & 1.05E-13 & 7.96E-16 & 6.31E-15 & 1.28E-14 & 2.21E-15 & 1.43E-15 & 5.19E-16 \\ \hline
\textbf{60} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
\end{tabular}%
}
\end{table*}
\begin{table*}[ht!]
\caption{Error for Fourier Neural Networks}
\label{error FNN}
\renewcommand{\arraystretch}{1}
\renewcommand{\tabcolsep}{2.4mm}
\centering
\resizebox{175mm}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline
\textbf{Function} & gaussian & x\textasciicircum{}2 & x\textasciicircum{}(-2) & sinc2 & sin(2*pi*x) & sin(4*pi*x) & exp(x) & exp(-x) & log(x) \\ \hline
\textbf{Error} & 9.66E-10 & 1.82E-08 & 4.9642 & 1.26E-09 & 3.70E-04 & 1.05E-03 & 5.48E-05 & 5.39E-05 & 1.67E-03 \\ \hline
\end{tabular}%
}
\end{table*}
Tables 1 and 2 show the error values for the ResNet architecture. As discussed before, $\epsilon_1$ is the error between the ResNet and the concatenation of rectangular functions. We take $M$ samples of the function and construct rectangular functions of the sampled height over each sample interval. In Table 1, ``rectfunc 1 to 10 (1 cycle)'' denotes the rectangular function that outputs $1$ for inputs in the range 0 to 5 and $-1$ for inputs in the range 5 to 10, whereas ``rectfunc 1 to 10 (2 cycles)'' outputs $1$ on 0 to 2.5, $-1$ on 2.5 to 5, $1$ again on 5 to 7.5, and $-1$ on 7.5 to 10; this function thus has 2 cycles between 0 and 10.
It can be observed in Table 1 that the error is smallest for the rectangular-type functions, as expected. For the sine functions, the error is larger for the sinusoid of higher frequency. The $\log(x)$ function is considered from 0.01 to 10.
In Table 2, the error decreases greatly for $M=50$. For the $\epsilon_2$ error in Table 2, the error is almost independent of the nature of the input function for a sufficiently large number of samples (greater than 50), and is less than $10^{-10}$.
For the feedforward network implementation, the results are given in Tables 3 and 4. In Table 3, we observe that the error converges for all functions except $x^{-2}$ and $\mathrm{sinc}^{2}$. Comparing the sinusoidal functions of different frequencies, the function of higher frequency has larger error. The errors in this table are a direct consequence of the remainder theorem for Taylor series; the error is smallest for polynomials of positive integer degree. The column sinc2\_new indicates that the Taylor series is computed for $\sin x$ and then divided by $x$. For the functions $\mathrm{sinc}^2$ and $\exp(-x)$, the Taylor series is taken around 0.01, while for all other functions it is taken around 0.
Table 4 gives the errors between the neural network and the Taylor-series implementation. For a sufficiently large number of terms $n$ in the binary expansion of $x$, the error is essentially zero.
Table 5 gives the error for the two-layer Fourier neural network, referred to earlier as the hybrid Fourier neural network. This experiment was conducted with 10000 samples of $x$ in $[-1,1]$. The error is large for $x^{-2}$, since the function is undefined at $x=0$: samples around $x=0$ take large values, leading to a large error. For all other functions, the error is fairly contained. In all the experiments in Tables 1--5, the error is the 2-norm error.
\section{Conclusion}
The deficiencies of the neural networks proposed in the literature as universal approximators are explored and discussed. Some existing architectures are found to be relatively inefficient when approximating sinusoid-like target functions. Taking a hint from these architectures, an FNN architecture is proposed to tackle such target functions. We extend this further by considering an FNN with two hidden layers. It is proven that the output of such an FNN cannot be expressed as a Fourier series, and hence it cannot be proved to be a universal approximator, unlike the single-hidden-layer FNN, which is a universal approximator. A novel architecture, called the ``modified architecture'', is proposed, which is proven to be more effective than the FNN. Using two hidden layers, the number of neurons required is smaller than for the single-layer FNN.
\bibliographystyle{IEEEbib}
\def\section{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex}
{2.3ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex}
{2.3ex plus .2ex}{\bf}}
\newcommand\Appendix[1]{\def\thesection{Appendix \Alph{section}}
\section{\label{#1}}\def\thesection{\Alph{section}}}
\defLandau--Ginzburg{Landau--Ginzburg}
\begin{document}
\begin{titlepage}
\samepage{
\setcounter{page}{0}
\rightline{ACT--12/98}
\rightline{CERN--TH--98/395}
\rightline{CPT--TAMU--49/98}
\rightline{NSF--ITP--98--128}
\rightline{TPI--MINN--98/24}
\rightline{UFIFT-HEP-98--39}
\rightline{UMN--TH--1733--98}
\rightline{\tt hep-th/9812141}
\rightline{December 1998}
\vfill
\begin{center}
{\Large \bf On Elevating Free-Fermion $Z_2\times Z_2$ Orbifolds \\}
\vspace{.10in}
{\Large \bf Models to Compactifications of $F$ Theory}
\vfill
\vspace{.25in}
{\large P. Berglund$^{1}$, J. Ellis$^{2}$,
A.E. Faraggi$^{3}$, D.V. Nanopoulos$^{4}$,
$\,$and$\,$ Z. Qiu$^5$\\}
\vspace{.25in}
{\it $^{1}$ Inst. for Theoretical Physics, University of California,
Santa Barbara, CA~93106\\}
\vspace{.05in}
{\it $^{2}$ Theory Division, CERN, CH-1211 Geneva, Switzerland \\}
\vspace{.05in}
{\it $^{3}$ Department of Physics,
University of Minnesota, Minneapolis, MN 55455, USA\\}
\vspace{.05in}
{\it $^{4}$ Dept. of Physics,
Texas A \& M University, College Station, TX~77843-4242, USA, \\
HARC, The Mitchell Campus, Woodlands, TX~77381, USA, and \\
Academy of Athens, 28~Panepistimiou Avenue,
Athens 10679, Greece.\\}
\vspace{.05in}
{\it $^{5}$ Department of Physics,
University of Florida, Gainesville, FL, 32611, USA\\}
\end{center}
\vfill
\begin{abstract}
{\rm
We study the elliptic fibrations of some Calabi-Yau three-folds,
including the $Z_2\times Z_2$
orbifold with $(h_{1,1},h_{2,1})=(27,3)$, which is equivalent to
the common framework of realistic free-fermion models,
as well as related orbifold models with
$(h_{1,1},h_{2,1})=(51,3)$ and $(31,7)$.
However,
two related puzzles arise when one considers the
$(h_{1,1},h_{2,1})=(27,3)$ model as
an $F$-theory compactification to six dimensions.
The condition for the vanishing of the gravitational
anomaly is not satisfied, suggesting that
the $F$-theory compactification does not make sense, and
the elliptic fibration is well defined everywhere except at
four singular points in the base.
We speculate on the possible existence of $N=1$ tensor and
hypermultiplets at these points which would cancel the
gravitational anomaly in this case. }
\end{abstract}
\vfill
\smallskip}
\end{titlepage}
\setcounter{footnote}{0}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\label{\label}
\def\end{eqnarray}{\end{eqnarray}}
\def{\rm Tr}\,{{\rm Tr}\,}
\def{Ka\v{c}-Moody}{{Ka\v{c}-Moody}}
\def{\it i.e.}{{\it i.e.}}
\def{\it etc}{{\it etc}}
\def{\it e.g.}{{\it e.g.}}
\def{\textstyle{1\over 2}}{{\textstyle{1\over 2}}}
\def{\textstyle {1\over3}}{{\textstyle {1\over3}}}
\def{\textstyle {1\over4}}{{\textstyle {1\over4}}}
\def{\tt -}{{\tt -}}
\def{\tt +}{{\tt +}}
\def\rep#1{{\bf {#1}}}
\def\slash#1{#1\hskip-6pt/\hskip6pt}
\def\slash{k}{\slash{k}}
\def\,{\rm GeV}{\,{\rm GeV}}
\def\,{\rm TeV}{\,{\rm TeV}}
\def\,{\rm y}{\,{\rm y}}
\defStandard-Model {Standard-Model }
\defsupersymmetry {supersymmetry }
\defsupersymmetric standard model{supersymmetric standard model}
\def\vev#1{\left\langle #1\right\rangle}
\def\langle{\langle}
\def\rangle{\rangle}
\def{\tilde H}{{\tilde H}}
\def{\overline{\chi}}{{\overline{\chi}}}
\def{\overline{q}}{{\overline{q}}}
\def{\overline{\imath}}{{\overline{\imath}}}
\def{\overline{\jmath}}{{\overline{\jmath}}}
\def{\overline{H}}{{\overline{H}}}
\def{\overline{Q}}{{\overline{Q}}}
\def{\overline{a}}{{\overline{a}}}
\def{\overline{\alpha}}{{\overline{\alpha}}}
\def{\overline{\beta}}{{\overline{\beta}}}
\def{ \tau_2 }{{ \tau_2 }}
\def{\cal F}{{\cal F}}
\def{\cal P}{{\cal P}}
\def{\cal N}{{\cal N}}
\def\smallmatrix#1#2#3#4{{ {{#1}~{#2}\choose{#3}~{#4}} }}
\def{\bf 1}{{\bf 1}}
\def{\bf V}{{\bf V}}
\def{\bf N}{{\bf N}}
\def{\bf Q}{{\bf Q}}
\def\t#1#2{{ \Theta\left\lbrack \matrix{ {#1}\cr {#2}\cr }\right\rbrack }}
\def\C#1#2{{ C\left\lbrack \matrix{ {#1}\cr {#2}\cr }\right\rbrack }}
\def\tp#1#2{{ \Theta'\left\lbrack \matrix{ {#1}\cr {#2}\cr }\right\rbrack }}
\def\tpp#1#2{{ \Theta''\left\lbrack \matrix{ {#1}\cr {#2}\cr }\right\rbrack }}
\def\ul#1{$\underline{#1}$}
\def\bE#1{{E^{(#1)}}}
\def\relax{\bf Z}}\def\relax\hbox{$\inbar\kern-.3em{\rm C}$}{\relax{\bf C}{\relax{\bf Z}}\def\relax\hbox{$\inbar\kern-.3em{\rm C}$}{\relax{\bf C}}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\def\lambda{\lambda}
\def\fc#1#2{{#1\over#2}}
\def\hx#1{{\hat{#1}}}
\def\hat{\Gamma}{\hat{\Gamma}}
\def\subsubsec#1{\noindent {\it #1} \hfill\break}\def\ni{\noindent}
\def{\bf WP}{{\bf WP}}
\def\Gamma_0{\Gamma_0}
\def{\bar \Gamma}_0{{\bar \Gamma}_0}
\def\Delta^\star{\Delta^\star}
\def\abstract#1{
\vskip .5in\vfil\centerline
{\bf Abstract}\penalty1000
{{\smallskip\ifx\answ\bigans\leftskip 2pc \rightskip 2pc
\else\leftskip 5pc \rightskip 5pc\fi
\noindent\abstractfont \baselineskip=12pt
{#1} \smallskip}}
\penalty-1000}
\def\us#1{\underline{#1}}
\def\hth/#1#2#3#4#5#6#7{{\tt hep-th/#1#2#3#4#5#6#7}}
\def\nup#1({Nucl.\ Phys.\ $\us {B#1}$\ (}
\def\plt#1({Phys.\ Lett.\ $\us {B#1}$\ (}
\def\cmp#1({Commun.\ Math.\ Phys.\ $\us {#1}$\ (}
\def\prp#1({Phys.\ Rep.\ $\us {#1}$\ (}
\def\prl#1({Phys.\ Rev.\ Lett.\ $\us {#1}$\ (}
\def\prv#1({Phys.\ Rev.\ $\us {#1}$\ (}
\def\mpl#1({Mod.\ Phys.\ Let.\ $\us {A#1}$\ (}
\def\ijmp#1({Int.\ J.\ Mod.\ Phys.\ $\us{A#1}$\ (}
\def\hfill\break}\def\ni{\noindent{\hfill\break}\def\ni{\noindent}
\def\hfill\break\vskip 0.2cm{\hfill\break\vskip 0.2cm}
\def\cx#1{{\cal #1}}\def\alpha}\def\IP{{\bf P}{\alpha}\def\rlx{\rm I\kern-.18em P}{{\bf P}}
\def\ov#1#2{{#1 \over #2}}
\def\beta}\def\al{\alpha{\beta}\def\alpha}\def\IP{{\bf P}{\alpha}
\def{\bf b}{{\bf b}}
\def{\bf S}{{\bf S}}
\def{\bf X}{{\bf X}}
\def{\bf I}{{\bf I}}
\def{\mathbf b}{{\mathbf b}}
\def{\mathbf S}{{\mathbf S}}
\def{\mathbf X}{{\mathbf X}}
\def{\mathbf I}{{\mathbf I}}
\def{\mathbf \alpha}{{\mathbf \alpha}}
\def{\mathbf \beta}{{\mathbf \beta}}
\def{\mathbf \gamma}{{\mathbf \gamma}}
\def{\mathbf \xi}{{\mathbf \xi}}
\def\,\vrule height1.5ex width.4pt depth0pt{\,\vrule height1.5ex width.4pt depth0pt}
\def\relax\hbox{$\inbar\kern-.3em{\rm C}$}{\relax\hbox{$\,\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm C}$}}
\def\relax\hbox{$\inbar\kern-.3em{\rm Q}$}{\relax\hbox{$\,\vrule height1.5ex width.4pt depth0pt\kern-.3em{\rm Q}$}}
\def\relax{\rm I\kern-.18em R}{\relax{\rm I\kern-.18em R}}
\font\cmss=cmss10 \font\cmsss=cmss10 at 7pt
\def\relax{\bf Z}}\def\relax\hbox{$\inbar\kern-.3em{\rm C}$}{\relax{\bf C}{\relax\ifmmode\mathchoice
{\hbox{\cmss Z\kern-.4em Z}}{\hbox{\cmss Z\kern-.4em Z}}
{\lower.9pt\hbox{\cmsss Z\kern-.4em Z}}
{\lower1.2pt\hbox{\cmsss Z\kern-.4em Z}}\else{\cmss Z\kern-.4em Z}\fi}
\defA.E. Faraggi{A.E. Faraggi}
\defK.R. Dienes{K.R. Dienes}
\defJ. March-Russell{J. March-Russell}
\def\NPB#1#2#3{{\it Nucl.\ Phys.}\/ {\bf B#1} (19#2) #3}
\def\PLB#1#2#3{{\it Phys.\ Lett.}\/ {\bf B#1} (19#2) #3}
\def\PRD#1#2#3{{\it Phys.\ Rev.}\/ {\bf D#1} (19#2) #3}
\def\PRL#1#2#3{{\it Phys.\ Rev.\ Lett.}\/ {\bf #1} (19#2) #3}
\def\PRT#1#2#3{{\it Phys.\ Rep.}\/ {\bf#1} (19#2) #3}
\def\MODA#1#2#3{{\it Mod.\ Phys.\ Lett.}\/ {\bf A#1} (19#2) #3}
\def\IJMP#1#2#3{{\it Int.\ J.\ Mod.\ Phys.}\/ {\bf A#1} (19#2) #3}
\def\nuvc#1#2#3{{\it Nuovo Cimento}\/ {\bf #1A} (#2) #3}
\def{\it et al,\/}\ {{\it et al,\/}\ }
\hyphenation{su-per-sym-met-ric non-su-per-sym-met-ric}
\hyphenation{space-time-super-sym-met-ric}
\hyphenation{mod-u-lar mod-u-lar--in-var-i-ant}
\setcounter{footnote}{0}
\section{Introduction}
Important progress has been achieved during the past few years in
understanding
non--perturbative aspects of superstring theories~\cite{reviewnps}.
However, the ultimate goal of understanding how string theory is relevant
to physics in the real world remains elusive. Encouraged by the hope
that string theory provides a framework for consistently
unifying all of the observed elementary
matter particles and interactions~\cite{initial}, many phenomenological
string models have been developed~\cite{reviewsp}.
Among the most advanced
models are those constructed in the
free-fermion formulation~\cite{fsu5,fny,alr,eu,lykken}. These models
have been the subject of detailed studies showing that they
can, at least in
principle, account for desirable physical features
including the observed fermion
mass spectrum, the longevity of the proton,
small neutrino masses,
the consistency of gauge-coupling unification
with the experimental data from LEP and elsewhere, and the
universality of the soft supersymmetry-breaking
parameters~\cite{reviewsp}.
It is plausible that some improved understanding of
how recent advances in non-perturbative
aspects of string theory are relevant in the real world
may be gained by studying their application to these
realistic free-fermion models~\footnote{Other approaches are
also possible: see for example the $M$-theory three-generation
models proposed in~\cite{donagi}. It will be interesting
to see whether such models share the attractive features of
the perturbative three-generation models.}.
An important feature of the realistic free-fermion
models is their underlying $Z_2\times Z_2$
orbifold structure. Many of the encouraging phenomenological
characteristics of the realistic free-fermion
models are rooted in this
structure, including the three generations arising from the
three twisted sectors, and the canonical $SO(10)$ embedding for
the weak hypercharge. To see this
orbifold correspondence more precisely, recall that
the free-fermion models are generated by a set of basis vectors
which define the transformation properties of the world--sheet
fermions as they are transported around loops on the string world sheet.
A large set of realistic free-fermion models contains
a subset of boundary conditions which
can be seen
to correspond to $Z_2\times Z_2$ orbifold compactification
with the standard embedding of the gauge connection~\cite{foc}.
This underlying free-fermion model contains 24 generations from the
twisted sectors, as well as three additional
generation/anti--generation pairs from the untwisted sector.
At the free-fermion point in the Narain moduli space~\cite{Narain},
both the metric and the antisymmetric background fields
are non-trivial, leading to an $SO(12)$ enhanced symmetry group.
The action of the $Z_2\times Z_2$ twisting on the
$SO(12)$ Narain lattice then gives rise to a model
with $(h_{11},h_{21})=(27,3)$, matching the data of
the free-fermion model~\footnote{We
emphasize that the data of this
model differs from the $Z_2\times Z_2$ orbifold on an $SO(4)^3$
lattice with $(h_{11},h_{21})=(51,3)$, which has been
more extensively discussed in the literature \cite{z2mf}.}.
Recently, we have shown how to construct this
$Z_2\times Z_2$ orbifold model
in the Landau--Ginzburg formalism~\cite{befnq}.
This was done using a freely-acting $Z_2$ twist on a
$Z_2\times Z_2$ Landau-Ginzburg orbifold with
$(h_{11},h_{21})=(51,3)$.
In this paper, we extend this
analysis to include a formulation of this
and related $(51,3)$ and $(31,7)$ models in terms of
elliptically-fibered Calabi-Yau manifolds,
opening the way to non-trivial $F$- or $M$-theory
compactifications. This geometrical approach should allow us to
study properties of the moduli space away from the special
free-fermion/orbifold point. However, in this paper
we are more concerned with the consistency
of these manifolds as valid $F$-theory
compactifications, and thus as six-dimensional vacua, rather
than exploring them from the traditional heterotic four-dimensional point
of view.
We recall that
$F$ theory is a way of compactifying type-IIB string theory
which allows the string coupling to vary over the compact
manifold.
The key point in compactifications of $F$ theory to six
dimensions is that
the models should admit an elliptic fibration and (at least) a global
section, so that the Calabi--Yau
three-fold is realized as an elliptic fibration over a
two-complex-dimensional base manifold $B$. Among these
models are some which have an orbifold interpretation, such as
the above-mentioned $Z_2\times Z_2$ orbifold
with $(h_{11},h_{21})=(51,3)$~\cite{z2mf}, denoted by $X_1$,
as we demonstrate with a standard Weierstrass representation.
Using the Landau-Ginzburg analysis and {\it quartic}
Weierstrass representations, we construct related
freely-acting $Z_2$ orbifolds with $(h_{11},h_{21})=(27,3)$
and $(h_{11},h_{21})=(31,7)$,
denoted by $X_2$ and $X_3$. The former admits an
elliptic fibration, apart from four singular
points in the base
$B$. However, these points prevent us from having a global
section and so the six-dimensional theory does not exist.
Another sign of this is that
the gravitational anomaly equation in six dimensions is not
satisfied. This implies that the formulation of this $F$-theory
compactification is not well defined. This is consistent with
the absence of a globally-defined section, but it is possible that
there may be a non-trivial contribution of $N=1$
tensor and hypermultiplets associated with the singular points in the base
$B$ of $X_2$,
due to the $Z_2$ quotient, which cancels the gravitational anomaly.
On the other hand, we are able to show that the $(31,7)$ model $X_3$ does
admit a consistent elliptic fibration.
This paper is organized as follows. In section 2, we give a brief review
of the free-fermion orbifold and Landau-Ginzburg constructions of the
$(h_{11},h_{21})=\{(51,3),(27,3)\}$ $Z_2\times Z_2$ orbifolds. In section
3, we first review the elliptic fibration of the former orbifold
using a standard Weierstrass representation, and then
generalize it to the closely related $(27,3)$ and $(31,7)$ models
using a quartic Weierstrass representation. In particular, we
focus on the question of the existence of the
$(27,3)$ model in six dimensions and
the appearance of singular points in the base space, and suggest how the
issue of the gravitational anomaly may be resolved. We study in section 4
type-IIB orientifold constructions relevant for the discussion of the
$F$-theory compactification of $X_2$. Finally, we end in section 5 with
some concluding remarks.
\setcounter{footnote}{0}
\section{The $Z_2\times Z_2$ Orbifold Equivalent of Realistic
Free-Fermion Models}
The purpose of this section is to motivate the choice of the relevant
Calabi-Yau
three-folds through their relation to certain free-fermion models. We
first review the construction of
the $Z_2\times Z_2$ fermionic orbifold of interest, followed by its
realization as a Landau-Ginzburg and a toroidal orbifold.
Let us recall that,
in the free-fermion formulation~\cite{fff}, a model
is defined by a set of boundary-condition basis
vectors, together with the related one--loop
GSO-projection coefficients, which are subject
to the string consistency constraints.
These boundary-condition basis vectors encode the phases
of all the world--sheet fermions, when transported
around one of the non-contractible loops of the
string world sheet. In the case of the heterotic string
in the light--cone gauge, there are 20 left--moving and
44 right--moving real Majorana--Weyl world--sheet fermions,
whereas for the type-IIA and type-IIB strings there would be
20 left-- and 20 right--moving world--sheet fermions.
Given the set of boundary-condition basis vectors and the one--loop
GSO-projection coefficients, one can then construct the one--loop
partition function and extract the physical spectrum.
The $Z_2\times Z_2$ fermionic orbifold model
of interest is generated
by the following set of boundary-condition basis vectors, the
so-called NAHE set~\cite{fsu5,nahe}:
$
\{{\bf 1},{\bf S},{\mathbf \xi}={\bf 1}+{\bf b}_1+{\bf b}_2+{\bf b}_3,{\bf X},{\bf b}_1,{\bf b}_2\}.
$
The first four vectors
in the basis $\{{\bf 1},{\bf S},{\mathbf \xi},{\bf X}\}$ generate a model with $N=4$
space--time supersymmetry, and an $E_8\times SO(12)\times E_8$ gauge
group. In this construction,
the sector ${\bf S}$ generates $N=4$ space--time supersymmetry.
The $SO(12)$ factor is obtained
from $\{{\bar y},{\bar\omega}\}^{1,\cdots,6}$.
The first and second $E_8$ factors are obtained from the world--sheet
fermionic states $\{{\bar\psi}^{1,\cdots,5},{\bar\eta}^{1,2,3}\}$ and
$\{{\bar\phi}^{1,\cdots,8}\}$, respectively.
The sectors ${\bf X}$ and
${\mathbf \xi}$ produce the spinorial representations of $SO(16)$ in the
observable and hidden sectors, respectively,
and complete the observable and hidden gauge groups to $E_8\times E_8$.
The Neveu--Schwarz sector
produces the adjoint representations of $SO(16)\times SO(12)\times SO(16)$.
The vectors ${\bf b}_1$ and ${\bf b}_2$ break the gauge symmetry to
$E_6\times U(1)^2\times SO(4)^3\times E_8$ and
the $N=4$ space--time supersymmetry to $N=1$.
The sectors $({\bf b}_1;{\bf b}_1+{\bf X})$, $({\bf b}_2;{\bf b}_2+{\bf X})$ and $({\bf b}_3;{\bf b}_3+{\bf X})$
each give eight ${\bf 27}$'s of $E_6$. The $(NS;NS+{\bf X})$ sector gives, in
addition to the vector bosons and spin-two states, three copies of
scalar representations in ${\bf 27}+{\overline{\bf 27}}$ of $E_6$.
The net number of generations
in the {\bf 27} representation of $E_6$
is therefore 24.
In the toroidal orbifold construction, the same model is obtained
by first specifying the background fields, which produce
the $SO(12)$ lattice~\cite{foc}, and then
applying the appropriate $Z_2\times Z_2$ identifications.
One takes the metric
on the six-dimensional compactified manifold
to be the Cartan matrix of $SO(12)$,
and the antisymmetric tensor to be $b_{ij}=g_{ij}$ for $i>j$ \cite{foc}.
When all the radii of the six-dimensional compactified
manifold are fixed at $R_I=\sqrt2$, it is easily seen that the
left-- and right--moving momenta
$P^I_{R,L}=[m_i-{1\over2}(B_{ij}{\pm}G_{ij})n_j]{e_i^I}^*$
reproduce all the massless root vectors in the lattice of
$SO(12)$,
where the $e_i=\{e_i^I\}$ are six linearly-independent
vectors, normalized so that $(e_i)^2=2$.
The ${e_i^I}^*$ are dual to the $e_i$, and
$e_i^*\cdot e_j=\delta_{ij}$. The momenta $P^I$ of the compactified
scalars in the bosonic formulation can be seen to coincide
with the $U(1)$ charges of the unbroken Cartan generators
of the four-dimensional gauge group.
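As an illustrative cross-check (ours, not part of the derivation in the text), one can enumerate the $D_6$ root system directly: the 60 roots $\pm e_i\pm e_j$ ($i<j$), together with the 6 Cartan generators, fill out the 66 gauge bosons of $SO(12)$ that become massless at $R_I=\sqrt2$. A minimal sketch in Python:

```python
from itertools import combinations

# Illustrative check: the roots of SO(12) form the D6 root system,
# +/- e_i +/- e_j with i < j.  At the special radii R_I = sqrt(2) these are
# the extra massless states that enhance the generic U(1)^6 toroidal gauge
# symmetry to SO(12).
def d6_roots(n=6):
    roots = []
    for i, j in combinations(range(n), 2):
        for si in (1, -1):
            for sj in (1, -1):
                v = [0] * n
                v[i], v[j] = si, sj
                roots.append(tuple(v))
    return roots

roots = d6_roots()
assert all(sum(x * x for x in v) == 2 for v in roots)  # all roots have (length)^2 = 2
print(len(roots), len(roots) + 6)  # 60 roots + 6 Cartan generators = dim SO(12) = 66
```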
The incorporation in the free-fermion model of
the two basis vectors ${\bf b}_1$ and ${\bf b}_2$ as well as
$\{{\bf 1},{\bf S},{\mathbf \xi},{\bf X}\}$
corresponds to the $Z_2\times Z_2$
orbifold model with standard embedding.
The fermionic boundary conditions are translated,
in the bosonic language, into twists on the internal dimensions
and shifts in the gauge degrees of freedom.
Starting from the model with $SO(12)\times E_8\times E_8$
symmetry, and applying the $Z_2\times Z_2$ twisting on the
internal
coordinates, we then obtain an orbifold model with $SO(4)^3\times
E_6\times U(1)^2\times E_8$ gauge symmetry. There are sixteen fixed
points in each twisted sector, yielding the 24 generations from the
three twisted sectors mentioned above. The three additional pairs of
${\bf 27}$
and ${\bf \overline{27}}$
are obtained from the untwisted sector. This
orbifold model, which we call $X_2$, therefore has the same
topological data as the free-fermion model
with the six-dimensional basis set
\{{\bf1},{\bf S},{\bf X},{\mathbf \xi}={\bf1}+{\bf b}_1+{\bf b}_2+{\bf b}_3,{\bf b}_1,{\bf b}_2\}$, since
the Euler characteristic of this model is 48, with $h_{11}=27$ and
$h_{21}=3$.
This $Z_2\times Z_2$ orbifold, corresponding
to the extended NAHE set at the core of the realistic
free--fermion models,
differs from the one which has usually been
examined in the literature~\cite{z2mf}.
In that orbifold model, the Narain
lattice is $SO(4)^3$, yielding a $Z_2\times Z_2$ orbifold model,
which we call $X_1$.
It has Euler characteristic equal to 96, corresponding to 48 generations,
and $h_{11}=51$, $h_{21}=3$. This $Z_2\times Z_2$ orbifold
can be constructed in a similar manner to the model $X_2$ above.
First the Narain $SO(4)^3$
lattice is produced via the diagonal metric $g_{ij}=2\delta_{ij}$
and the trivial anti-symmetric tensor field $b_{ij}=0$.
For $R_{I}=\sqrt{2}$, all the roots in the root
lattice of $SO(4)^3$ are again massless. Then, applying
the $Z_2\times Z_2$ twisting reduces the $N=4$ supersymmetry
to $N=1$. Each twisted sector now produces
16 generations, yielding a total of 48, and three
additional generation and anti-generation pairs
are obtained from the untwisted sector.
Before proceeding, we note that, at the level of the toroidal
compactification, the $SO(4)^3$ and $SO(12)$ lattices
are continuously connected by varying the parameters of the
background metric and antisymmetric tensor.
However, this cannot be done while preserving the
$Z_2\times Z_2$ invariance, because
the continuous interpolation cannot change the Euler characteristic.
Therefore, the two toroidal models are in the same
moduli space, but not the two orbifold models $X_1$ and $X_2$.
Let us now show how the two $Z_2\times Z_2$
orbifold models, constructed above using toroidal
compactification, may be
represented in the Landau-Ginzburg orbifold
construction~\cite{befnq}. We start from a non-degenerate
quasi--homogeneous superpotential $W$ of degree $d$,
$W(\lambda^{n_i}X_i)=\lambda^d W(X_i)$,
where the $X_i$ are chiral superfields and the $q_i=n_i/d$ are their
left and right charges under the $U(1)_{J_0}$ current
of the $N=2$ algebra. In the Landau--Ginzburg~ construction
one twists by some symmetry group $G$
of the original superpotential \cite{lg}. The Landau--Ginzburg~ potential
that mimics the $T_2^3$ torus, corresponding to the $SO(4)^3$ lattice,
is given by
\begin{equation}
W=X_1^4+X_2^4+X_3^2+X_4^4+X_5^4+X_6^2+X_7^4+X_8^4+X_9^2
\label{quarticpotential}
\end{equation}
where $X_{3,6,9}$ are trivial superfields, and
the superpotential (\ref{quarticpotential})
corresponds to a superconformal field theory with
$c=9$. The mirror of the $X_1$ model is obtained
by taking the orbifold
${\cal M}/(Z_2^A\times Z_2^B)$ where
\begin{eqnarray}
&& Z_2^A~:~(X_1,\cdots,X_9)~\rightarrow~
(X_1, X_2,X_3, -X_4, -X_5,X_6,-X_7,-X_8,X_9)~;~~~~~~~~~~\nonumber\\
&& Z_2^B~:~(X_1,\cdots,X_9)~\rightarrow~
(-X_1, -X_2, X_3, -X_4, -X_5,X_6,X_7,X_8,X_9)~.~~~~~~~~~~\label{z2z2513}
\end{eqnarray}
Here ${\cal M}=W/j$, where $j$ is the $Z_4$ scaling symmetry
of~(\ref{quarticpotential}).
It is easy to show that there are 51 $(1,1)$ and 3 $(-1,1)$
states. Using the convention that the deformations of $W$, which give
part of the spectrum of $(1,1)$ states, correspond to the
$(2,1)$ forms of the orbifold, and the fact that the $(1,1)$ forms are in
one-to-one correspondence
with the $(-1,1)$ states, we reproduce the data of the $Z_2\times Z_2$
orbifold on the $SO(4)^3$ lattice.
We showed in~\cite{befnq} that the mirror of the $X_2$ model is
obtained from the mirror of the $X_1$ model
by applying the twist
\begin{eqnarray}
&& Z_2^{w}~:~(X_1,\ldots, X_9)~\rightarrow~
(-X_1, X_2, -X_3, -X_4, X_5, -X_6, -X_7, X_8, -X_9)~,~~~~~~~
\label{z2z2273}
\end{eqnarray}
i.e., we have the Landau--Ginzburg\ orbifold ${\cal M}/(Z_2^A\times Z_2^B\times
Z_2^w)$.
Note that the three trivial superfields are twisted by the
$Z_2^w$ twist. This is to ensure that $Z_2^w$ acts freely on each of
the $T^2$ factors in~(\ref{quarticpotential}),
thus reproducing the data of the $Z_2\times Z_2$ orbifold
on the $SO(12)$ lattice.
Another way to realize the connection between the $X_1$ and $X_2$
models is by using a freely-acting shift, rather than a freely-acting
twist~\footnote{This second way will be instrumental later in trying
to find a
type-IIB orientifold
description of the six-dimensional vacuum corresponding to the $F$-theory
compactification on $X_2$.}.
For this purpose, let us first start with the compactified
$T^2_1\times T^2_2\times T^2_3$ torus parameterized by
three complex coordinates $z_1$, $z_2$ and $z_3$,
with the identification
\begin{equation}
z_i\sim z_i + 1~~~~~~~~~~;~~~~~~~~~~z_i\sim z_i+\tau_i
\label{t2cube}
\end{equation}
where $\tau_i$ is the complex-structure parameter of the
$i$-th $T^2$ torus.
We consider $Z_2$ twists and possible shifts of order
two:
\begin{equation}
z_i~\rightarrow~(-1)^{\epsilon_i}z_i+{1\over2}\delta_i
\label{z2twistanddance}
\end{equation}
subject to the condition that $\prod_i(-1)^{\epsilon_i}=1$.
This condition ensures that the holomorphic three--form
$\omega=dz_1\wedge dz_2\wedge dz_3$ is invariant under the $Z_2$ twists.
Under the identification $z_i\rightarrow-z_i$, a single torus
has four fixed points at
\begin{equation}
z_i=\{0,1/2,\tau/2,(1+\tau)/2\}.
\label{fixedtau}
\end{equation}
The first model that we consider is produced
by the two $Z_2$ twists
\begin{eqnarray}
&& \alpha:(z_1,z_2,z_3)\rightarrow(-z_1,-z_2,~~z_3)\cr
&& \beta:(z_1,z_2,z_3)\rightarrow(~~z_1,-z_2,-z_3)
\label{alphabeta}
\end{eqnarray}
There are three twisted sectors in this model, $\alpha$,
$\beta$ and $\alpha\beta=\alpha\cdot\beta$, each producing
16 fixed tori, for a total of 48.
To facilitate the
discussion of the subsequent examples, we briefly describe
the calculation of the cohomology for this orbifold:
a more complete discussion can be found in~\cite{msanddt}.
Consider first the untwisted sector. The Hodge
diamond for a single untwisted torus is given by
\begin{equation}
\left(\matrix{1 &1\cr
1 &1\cr
} \right)
\label{hodge1t}
\end{equation}
which displays the dimensions of the $H^{p,q}(T_i)$,
with $H^{0,0}$, $H^{0,1}$, $H^{1,0}$ and $H^{1,1}$
being generated by the differential forms $1$, $d{\bar z}_i$,
$dz_i$ and $dz_i\wedge d{\bar z}_i$, respectively. Under the $Z_2$
transformation $z~\rightarrow~-z$, $H^{0,0}$ and $H^{1,1}$
are invariant, whereas $H^{1,0}$ and $H^{0,1}$ change sign.
The untwisted sector of the manifold produced
by the product of the three tori $T_1\times T_2\times T_3$
is then given by the product of differential forms
which are invariant under the $Z_2\times Z_2$ twists
$\alpha\times \beta$. The invariant terms
are summarized by the Hodge diamond
\begin{equation}
\left(\matrix{ 1&0&0&1\cr
0&3&3&0\cr
0&3&3&0\cr
1&0&0&1\cr}\right)
\label{hodge3t}
\end{equation}
For example, $H^{1,1}$ is generated by $dz_i\wedge d{\bar z}_i$
for $i=1,2,3$, and $H^{2,1}$ is generated by
$dz_1\wedge dz_2\wedge d{\bar z}_3$, $dz_2\wedge dz_3\wedge d{\bar z}_1$ and
$dz_3\wedge dz_1\wedge d{\bar z}_2$.
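The invariant-form counting behind (\ref{hodge3t}) can be reproduced mechanically. The following sketch (ours; the labelling of wedge monomials by per-torus degrees $(p_i,q_i)$ is pure bookkeeping) counts the monomials invariant under both twists:

```python
from itertools import product

# Bookkeeping sketch for the untwisted sector: a wedge monomial on
# T_1 x T_2 x T_3 is a choice, for each torus i, of a factor
# 1, dz_i, dzbar_i or dz_i ^ dzbar_i, encoded as (p_i, q_i), p_i, q_i in {0,1}.
# alpha reflects tori 1 and 2, beta reflects tori 2 and 3; dz_i and dzbar_i
# each change sign when torus i is reflected.
ALPHA, BETA = (1, 2), (2, 3)

def twist_sign(choice, twisted):
    s = 1
    for i, (p, q) in enumerate(choice, start=1):
        if i in twisted:
            s *= (-1) ** (p + q)
    return s

hodge = {}
for choice in product([(0, 0), (1, 0), (0, 1), (1, 1)], repeat=3):
    if twist_sign(choice, ALPHA) == 1 and twist_sign(choice, BETA) == 1:
        p = sum(c[0] for c in choice)
        q = sum(c[1] for c in choice)
        hodge[(p, q)] = hodge.get((p, q), 0) + 1

# Reproduces the untwisted Hodge diamond: h^{1,1} = h^{2,1} = 3, corners = 1.
print(hodge[(1, 1)], hodge[(2, 1)], hodge[(0, 0)], hodge[(3, 0)])
```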
We next turn to the twisted sectors, of which there are three,
produced by $\alpha$, $\beta$ and $\alpha\beta$, respectively.
In each sector, two of the $z_i$ are twisted by $z_i\rightarrow-z_i$,
and one torus is left invariant. We then need to consider
only one of the twisted sectors, say $\alpha$; the others
contribute similarly. The sector $\alpha$ has 16 fixed
points from the action of the twist on the first and second
tori. Since the action is trivial on the third torus, we get 16 fixed
tori. The cohomology
is given by sixteen copies of the cohomology of $T_3$,
where each $H^{p,q}$ of $T_3$ contributes $H^{p+1,q+1}$
to that of the orbifold theory~\cite{msanddt}.
The Hodge diamond from each twisted sector then
has the form
\begin{equation}
\left(\matrix{ 0& 0& 0&0\cr
0&16&16&0\cr
0&16&16&0\cr
0& 0& 0&0\cr}\right)
\label{hodgets}
\end{equation}
It remains to find the forms from the $\alpha$ twisted sector
which are invariant under the action of the $\beta$ twist.
Since $z_3\rightarrow-z_3$ under $\beta$, it follows that
$1$ and $dz_3\wedge d{\bar z}_3$ are invariant,
whereas $dz_3$ and $d{\bar z}_3$ are not.
Consequently, only the contributions of $H^{1,1}$
and $H^{2,2}$ in (\ref{hodgets}) are invariant
under the $\beta$ twist.
Therefore, we see that the invariant contribution
from each twisted sector is only along the diagonal of
(\ref{hodgets}), and that the total Hodge diamond of the
$Z_2\times Z_2$ orbifold is
\begin{equation}
\left(\matrix{ 1& 0& 0&1\cr
0&51& 3&0\cr
0& 3&51&0\cr
1& 0& 0&1\cr}\right)
\label{hodgez2z2}
\end{equation}
Next we consider the model generated by the $Z_2\times Z_2$
twists in (\ref{alphabeta}), with the additional shift
\begin{equation}
\gamma:(z_1,z_2,z_3)\rightarrow(z_1+{1\over2},z_2+{1\over2},z_3+{1\over2})
\label{gammashift}
\end{equation}
This model again has fixed tori from the three
twisted sectors $\alpha$, $\beta$ and $\alpha\beta$.
The product of the $\gamma$ shift in (\ref{gammashift})
with any of the twisted sectors does not produce any additional
fixed tori. Therefore, this shift acts freely.
Under the action of the $\gamma$ shift, the
fixed tori from each twisted sector are paired.
Therefore, the action of this shift is to reduce
the total number of fixed tori from the twisted sectors
by a factor of two. Consequently, the Hodge diamond for this model is
\begin{equation}
\left(\matrix{ 1& 0& 0&1\cr
0&27& 3&0\cr
0& 3&27&0\cr
1& 0& 0&1\cr}\right)
\label{hodgez2z2shift3}
\end{equation}
with $(h_{11},h_{21})=(27,3)$. This model therefore
reproduces the data of the $Z_2\times Z_2$ orbifold
at the free-fermion point in the Narain moduli space.
The action of the freely-acting shift (\ref{gammashift})
is seen to be identical to the action of the freely-acting twist
(\ref{z2z2273}) that connects the (51,3) and (27,3)
models in the Landau-Ginzburg representation.
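The halving of the twisted-sector fixed tori by the freely-acting shift can also be checked by explicit bookkeeping. In this sketch (ours), the four fixed points (\ref{fixedtau}) of each reflected $T^2$ are labelled by $(a,b)$ with $a,b\in\{0,1\}$, standing for $z=(a+b\tau)/2$:

```python
from itertools import product

# Sketch: label the four fixed points of z -> -z on one T^2 by (a, b),
# standing for z = (a + b*tau)/2 with a, b in {0, 1}.  A fixed torus of the
# alpha sector is a pair of such labels, one for each reflected T^2.
points = list(product([0, 1], repeat=2))    # 4 fixed points per T^2
tori = list(product(points, repeat=2))      # 16 fixed tori per twisted sector

# The shift gamma, z_i -> z_i + 1/2, acts on a label by a -> a + 1 (mod 2).
def gamma(t):
    (a1, b1), (a2, b2) = t
    return ((a1 ^ 1, b1), (a2 ^ 1, b2))

assert all(gamma(t) != t for t in tori)     # gamma pairs the fixed tori freely
per_sector = len({frozenset({t, gamma(t)}) for t in tori})   # 16 -> 8

h11 = 3 + 3 * per_sector   # 3 untwisted (1,1) forms + 3 twisted sectors
h21 = 3                    # the untwisted (2,1) forms only
print(per_sector, (h11, h21))   # 8, (27, 3); without the shift: 3 + 3*16 = 51
```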
Finally, let us consider the model generated by the
twists (\ref{alphabeta}) with the additional
shift given by
\begin{equation}
\gamma:(z_1,z_2,z_3)\rightarrow(z_1+{1\over2},z_2+{1\over2},z_3)
\label{gammashift2}
\end{equation}
This model, denoted by $X_3$, again has three twisted sectors $\alpha$,
$\beta$ and $\alpha\beta$. Under the action of the $\gamma$ shift,
the fixed tori from these twisted
sectors are identified in pairs. These twisted sectors
therefore contribute to the Hodge diamond
as in the previous model. However, the $\gamma$ shift
in (\ref{gammashift2}) does not act freely,
as its combination with $\alpha$ produces additional
fixed tori, since, under the action of the product
$\alpha\cdot\gamma$, we have
\begin{equation}
\alpha\gamma:(z_1,z_2,z_3)\rightarrow(-z_1+{1\over2},-z_2+{1\over2},z_3)
\label{alphagamma}
\end{equation}
This sector therefore has 16 additional fixed tori.
Repeating the analysis as in the previous cases,
we see that, under the identification imposed by the $\alpha$ and $\beta$
twists, the invariant states from this sector
give rise to four additional (1,1) forms and four additional
(2,1) forms. The Hodge diamond for this model
therefore has the form
\begin{equation}
\left(\matrix{ 1& 0& 0&1\cr
0&31& 7&0\cr
0& 7&31&0\cr
1& 0& 0&1\cr}\right)
\label{hodgez2z2shift4}
\end{equation}
with $(h_{11},h_{21})=(31,7)$.
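For orientation, the Euler characteristics $\chi=2(h_{11}-h_{21})$ of the three models can be tabulated in a couple of lines (ours):

```python
# Bookkeeping for the three orbifolds: chi = 2 (h11 - h21).
models = {"X1": (51, 3), "X2": (27, 3), "X3": (31, 7)}
chi = {name: 2 * (h11 - h21) for name, (h11, h21) in models.items()}
print(chi)  # X1: 96, X2: 48, X3: 48 -- the freely-acting shift halves chi
# The X3 Hodge numbers are those of X2 plus the 4 + 4 forms from alpha*gamma:
assert models["X3"] == (27 + 4, 3 + 4)
```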
As we discuss in subsequent sections,
the various $Z_2\times Z_2$ orbifold models
discussed in this section display
interesting features when one considers
the possibility of elliptic fibration in the context of $F$ theory.
\setcounter{footnote}{0}
\section{Elliptic Fibration and $F$ Theory}
Let us now study compactifications of $F$ theory
on the different $Z_2\times Z_2$ orbifold models
analyzed in the previous section, in particular on $X_2$.
Our strategy is to study first the Weierstrass representation
of the elliptic fibration of the $F$-theory compactification on
the $Z_2\times Z_2$ orbifold $X_1$,
and then implement the extra twist, so as to obtain the $X_2$ orbifold.
Finally, we discuss how one may hope to resolve the puzzle of the
non-vanishing gravitational anomaly that we advertised earlier.
When compactifying $F$ theory on a Calabi-Yau three-fold, it is essential
that it should admit an elliptic fibration, with a
base $B$ and a global section. Elliptically-fibered manifolds
are conveniently parameterized by writing the equation
for the toroidal fiber in the standard Weierstrass form:
\begin{equation}
y^2=x^3+f x + g
\label{weierform}
\end{equation}
which expresses the torus as a double cover of the complex plane
with three finite branch points and one branch point at infinity.
The functions $f$ and $g$ are polynomials of degree 8 and 12, respectively,
in the base coordinates. Compactifying on a Calabi-Yau three-fold,
Morrison and Vafa~\cite{vafamori,vafamorii} have shown
that, in terms of the $h_{1,1}(B)$ of the base manifold
and the $h_{1,1}(X)$ and $h_{2,1}(X)$ of the Calabi-Yau three-fold $X$,
the number of neutral hypermultiplets is given by
\begin{equation}
H^0=h_{2,1}(X)+1 ~~,
\label{nhyper}
\end{equation}
the number of tensor multiplets is given by
\begin{equation}
T=h_{1,1}(B)-1~~,
\label{tensorhype}
\end{equation}
and the rank of the vector multiplets is given by
\begin{equation}
r(V)=h_{1,1}(X)-h_{1,1}(B) -1.
\label{rankgg}
\end{equation}
Finally, cancellation of the gravitational anomaly
in $N=1$ supergravity in six dimensions requires
the following relation between the numbers of neutral hyper-,
vector, and tensor multiplets \cite{d6anomaly}:
\begin{equation}
H^0-V=273-29 T.
\label{ganomaly}
\end{equation}
Although we will mainly be using the
Weierstrass parameterization above, this is not so
convenient for some of the
models written as orbifolds of toroidal compactifications.
In these cases, it is sometimes easier to work with the
corresponding orientifold model, where such a model
has been identified. Specifically,
only some of the orbifolds
studied in the previous section
have been shown to admit an elliptic
fibration,
and have already been discussed
in the context of $F$-theory compactification
to six dimensions~\cite{vafamorii}.
These models are the special classes
of Calabi--Yau three-folds that have been analyzed by
Voisin~\cite{voisin} and Borcea~\cite{borcea}.
They have been further classified by
Nikulin \cite{nikulin} in terms of three invariants $(r,a,\delta)$.
In the context of our discussion, we note that both the $X_1$
and $X_3$ models
are part of this classification, whereas $X_2$ is not.
Returning to the Weierstrass representation~(\ref{weierform}),
we consider an
elliptically-fibered Calabi-Yau manifold $X$ with base $\rlx{\rm I\kern-.18em P}^1\times
\rlx{\rm I\kern-.18em P}^1$. The $Z_2\times Z_2$ orbifold $X_1$ can then be realized
as a singular limit of $X$.
We can represent $X$ in Weierstrass form, as a (singular)
elliptic fiber which depends on the (inhomogeneous) coordinates
$w,\tilde w$ of the respective $\rlx{\rm I\kern-.18em P}^1$:
\begin{equation}
y^2 = x^3 + f_8 (w,\tilde w) x z^4 + g_{12}(w,\tilde w) z^6~.
\label{weier}
\end{equation}
Here $f_8$ and $g_{12}$ are of bidegree eight and twelve,
respectively, in $w,\tilde w$.
This model has $h_{1,1}=3$ and $h_{2,1}=243$, and Sen~\cite{sengp} has
shown that, considered as an $F$-theory vacuum in six dimensions,
it is equivalent to the
Gimon-Polchinski type-IIB orientifold on $T^4/Z_2$~\cite{gp}.
The next step is to choose
a particular complex structure for $f_8$ and $g_{12}$.
To do so, we let
\begin{equation}
f_8 = \eta - 3 h^2\,,\quad
g_{12}=h(\eta - 2 h^2)~,
\label{singular}
\end{equation}
which implies that the discriminant, $\Delta = 4f^3 + 27 g^2$, takes
the form
\begin{equation}
\Delta = \eta^2 (4\eta-9h^2)~.
\label{disc}
\end{equation}
We then further restrict $f_8$ and $g_{12}$ by setting
\begin{equation}
h = K \prod_{i=1}^4(w- w_i)\prod_{j=1}^4(\tilde w - \tilde w_j)\,,\quad
\eta = C \prod_{i=1}^4(w- w_i)^2\prod_{j=1}^4(\tilde w - \tilde w_j)^2~.
\label{etah}
\end{equation}
Thus, as we approach any of $w=w_i$ (or $\tilde w=\tilde w_j$) we have
a $D_4$ singular fiber. This follows from Kodaira's classification of
ADE singularities~\cite{vafamori,vafamorii,bv}, and has
\begin{equation}\label{dfour}
f_8\sim (w-w_i)^2\,,\quad g_{12}\sim (w-w_i)^3\,,\quad \Delta\sim
(w-w_i)^6~.
\end{equation}
Thus, we have an enhanced $SO(8)^8$ gauge symmetry, with one $SO(8)$
factor for each of the four $w_i$ and four $\tilde w_j$.
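For reference, the identification of the fiber type from the vanishing orders of $(f,g,\Delta)$ follows Kodaira's classification; the following partial lookup (a sketch covering only representative rows, assuming the split case without extra monodromy) illustrates the rule:

```python
# Partial Kodaira table: vanishing orders (ord f, ord g, ord Delta) of the
# Weierstrass data along the discriminant locus -> fiber type and (split-case)
# gauge enhancement.  Only a few representative rows are included.
KODAIRA = {
    (0, 0, 1): ("I1", None),
    (1, 2, 3): ("III", "SU(2)"),
    (2, 3, 6): ("I0*", "SO(8)"),   # the D4 fiber found at w = w_i above
    (2, 3, 7): ("I1*", "SO(10)"),
}

fiber, group = KODAIRA[(2, 3, 6)]
print(fiber, group)  # I0* SO(8): one SO(8) factor per D4 locus
```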
These $D_4$ singularities intersect in 16 points, $(w_i,\tilde w_j),\,
i,j=1,\ldots 4$ in the base $\rlx{\rm I\kern-.18em P}^1\times \rlx{\rm I\kern-.18em P}^1$. Due to the severity
of the individual singularities, at each point of intersection
one has to resolve the
base~\cite{aspinwall}, as follows.
At each point of intersection, we
have the following singular behavior:
\begin{equation}\label{dfourinter}
f_8\sim (w-w_i)^2(\tilde w-\tilde w_j)^2\,,\quad g_{12}\sim
(w-w_i)^3(\tilde w-\tilde w_j)^3\,,\quad \Delta\sim
(w-w_i)^6(\tilde w-\tilde w_j)^6~.
\end{equation}
Thus the order of the singularity is $(4,6,12)$ respectively for
$f_8,g_{12},\Delta$. However, just resolving the singular fiber is not
enough. We also have to blow the base up once at each of $(w_i,\tilde
w_j),\, i,j=1,\ldots 4$. Thus, in addition to the
enhanced gauge symmetry, we also obtain 16 additional tensor
multiplets~\cite{vafamori,vafamorii}.
The resulting Calabi-Yau manifold
yields
an $F$-theory compactification on the elliptically-fibered
Calabi--Yau three-fold corresponding to the $Z_2\times Z_2$ (51,3)
orbifold model $X_1$. To see that it has
$h_{1,1}=51$ and $h_{2,1}=3$, we first note that
blowing up the base gives 16
$(1,1)$ forms, and recall that the $SO(8)^8$ group has rank 32, which
contributes
another 32 $(1,1)$ forms to $h_{1,1}$. In addition, out of the original
250 deformations encoded in $f_8,g_{12}$ ($9^2+13^2$), we are left with
$C, K$ and $w_i,\tilde w_i,i=1,\ldots,4$. This leaves us with three
independent deformations, once we have used the $SL(2,\relax\hbox{$\inbar\kern-.3em{\rm C}$})$
reparametrization
of each of the $\rlx{\rm I\kern-.18em P}^1$, as well as an overall rescaling
of~(\ref{weier}). (In
this way, we can fix three of the $w_i,\tilde w_i$ as well as $K$.)
{}From~(\ref{nhyper}), (\ref{tensorhype}), and (\ref{rankgg})
we then find that this six-dimensional $F$-theory vacuum has
$V=224$, $T=17$ and $H^0=4$~\cite{vafamorii}, which is consistent with
the formula (\ref{ganomaly}) for the vanishing of the gravitational anomaly.
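These numbers can be verified with a few lines of arithmetic (ours), implementing (\ref{nhyper}), (\ref{tensorhype}) and (\ref{rankgg}) together with the anomaly condition (\ref{ganomaly}):

```python
# Six-dimensional bookkeeping for the (51,3) model X_1:
#   H0 = h21(X) + 1,  T = h11(B) - 1,  r(V) = h11(X) - h11(B) - 1,
# with the anomaly condition H0 - V = 273 - 29 T.
h11_X, h21_X = 51, 3
h11_B = 2 + 16               # P^1 x P^1 blown up at the 16 points (w_i, w~_j)

H0 = h21_X + 1               # 4 neutral hypermultiplets
T = h11_B - 1                # 17 tensor multiplets
rank_V = h11_X - h11_B - 1   # 32 = rank of SO(8)^8
V = 8 * 28                   # dim SO(8) = 28 -> 224 vector multiplets

assert rank_V == 8 * 4               # rank SO(8) = 4
assert H0 - V == 273 - 29 * T        # -220 = -220: anomaly-free
print(H0, T, rank_V, V)
```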
To seek the elliptic fibration of the (27,3) orbifold model,
we have to implement the final $Z_2$ which acts as a
freely-acting twist in the Landau--Ginzburg representation of the model,
or as a freely-acting shift as in~(\ref{gammashift}).
It is obvious that the model written in the
form~(\ref{weier}) is not the correct way of representing the covering space
of the final $Z_2$. Rather, we have to rewrite~(\ref{weier}) in terms of a
quartic polynomial~\footnote{This form has appeared before in various
contexts in $F$-theory: see, e.g., \cite{aspgross,ibanez,kmv,cpr,bps,bkmt}.},
\begin{equation}\label{weierfour}
\hat y^2 = \hat x^4 + \hat x^2 \hat z^2 \hat f_4 + \hat x \hat z^3 \hat g_6
+ \hat z^4 \hat h_8~,
\end{equation}
where $\hat f_4,\hat g_6,\hat h_8$ are of bidegree $4,6,8$
respectively in $w,\tilde w$. The relation between~(\ref{weier}) and
(\ref{weierfour}) is given in terms of
\begin{equation}\label{rel}
\hat f_4 = -3 h\,,\quad \hat g_6 = 0\,,\quad \hat h_8 = -{1\over4}\,\eta~.
\end{equation}
Writing the fibered torus in the quartic form~(\ref{weierfour})
amounts to bringing the branch point at infinity to a finite
point. The reason for this rewriting of the fibered
torus is that in this representation some
symmetries of the torus become manifest, thus simplifying
the analysis.
We now note that~(\ref{weierfour}) enjoys a $Z_2$ symmetry: $(\hat y,\hat
x, \hat z)\to (-\hat y,-\hat x, \hat z)$~\cite{bps,bkmt}.
We are, however, interested in an action which extends to the base as
well:
$(\hat y,\hat x, \hat z,w,\tilde w)\to (-\hat y,-\hat x, \hat
z,-w,-\tilde w)$. In order to carry out this identification, we need to
modify~(\ref{etah}) to
\begin{equation}\label{etahztwo}
h = K \prod_{i=1}^2(w^2- w_i^2)\prod_{j=1}^2(\tilde w^2 - \tilde w_j^2)\,,\quad
\eta = C \prod_{i=1}^2(w^2- w_i^2)^2\prod_{j=1}^2(\tilde w^2 - \tilde w_j^2)^2~.
\end{equation}
Note that on each of the $\rlx{\rm I\kern-.18em P}^1$ there are just two points where
the elliptic fiber acquires a $D_4$ singularity: $w=w_i\sim -w_i,\,
i=1,2$ and $\tilde w=\tilde w_j\sim -\tilde w_j,\,j=1,2$. Each of them gives
rise to an $SO(8)$ enhanced gauge symmetry,
leading to a total $SO(8)^4$ gauge-group enhancement. There are now eight
points at which the $D_4$ singularities intersect:
\begin{equation}\label{inter}
(w_i,\tilde w_j)\,,\quad (w_i,-\tilde w_j)\,,\quad i,j=1,2~.
\end{equation}
This gives rise to eight tensor multiplets, and hence we have
$h_{1,1}=27$ and $h_{2,1}=3$. Note that the $Z_2$ action on the base has
restricted the $SL(2,\relax\hbox{$\inbar\kern-.3em{\rm C}$})\times SL(2,\relax\hbox{$\inbar\kern-.3em{\rm C}$})$ acting on the $\rlx{\rm I\kern-.18em P}^1\times
\rlx{\rm I\kern-.18em P}^1$ base to a $1+1$-parameter family rather than the full
$3+3$-parameter family. Thus, out of the six parameters
in~(\ref{etahztwo}), we are
still left with three parameters after rescaling~(\ref{weierfour}) and
using the above
restricted $SL(2,\relax\hbox{$\inbar\kern-.3em{\rm C}$})\times SL(2,\relax\hbox{$\inbar\kern-.3em{\rm C}$})$ action. From the discussion
above and using (\ref{nhyper}),
(\ref{tensorhype}) and (\ref{rankgg}), we find that in this
model $H^0=4$, $T=9$ and $r(V)=16$.
The gauge group $SO(8)^4$ gives rise to 112 vector multiplets.
Inserting these values into (\ref{ganomaly}), we see
that the gravitational anomaly cancelation condition is apparently not satisfied.
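To make the mismatch explicit, suppose that (\ref{ganomaly}) is the
standard six-dimensional gravitational anomaly condition $H-V+29T=273$
(an assumption consistent with the spectra quoted later in this
section); with $H=H_0=4$, $V=112$ and $T=9$ one finds
\begin{equation}
H-V+29\,T = 4-112+29\times 9 = 153 \neq 273\,,
\end{equation}
a deficit of $273-153=120$.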
This conflict raises the question whether the elliptic fibration
on the $Z_2\times Z_2$ orbifold $X_2$, with either the additional twist
in the Landau-Ginzburg representation or the additional shift
of (\ref{gammashift}), gives a consistent $F$-theory compactification.
However, it is observed~\footnote{We thank Andre Losev for
discussions
on this point.} that the action of the additional shift
(\ref{gammashift}) on the base coordinates commutes
with its action on the fiber. Thus, the additional shift
should still preserve the fibration. Hence, if the
fibration is consistent for the (51,3) Calabi--Yau
three--fold, it should still be consistent for the (27,3)
one, which is obtained from the (51,3) model by the
additional shift (\ref{gammashift}).
Note, however, that the additional shift on the base
commutes with the shift on the fiber only for the 1/2
shift chosen. That is, any other shift $z_i\rightarrow z_i+a$
with $a\ne1/2$ will not preserve the fibration.
Furthermore, regarded as an orbifold of a flat torus, the $Z_2\times Z_2$
orbifold $X_2$ with $(h_{1,1},h_{2,1})=(27,3)$ is an orbifold
of a $T^6$ lattice, rather than $(T^2)^3$. However, as we
saw in the previous section, the $X_2$ model can be obtained
from the $X_1$ model by adding the freely-acting shift given in
(\ref{gammashift}), which preserves the cyclic permutation
of the $Z_2\times Z_2$ orbifold, and hence the factorization
into $({\tilde T}^2)^3$. This is consistent with the fact that
the (27,3) $Z_2\times Z_2$ orbifold model, for example in
its free-fermion realization, still possesses the cyclic
permutation symmetry between the three $T^2$ factors, which is
the characteristic property of the $Z_2\times Z_2$
orbifold. This factorization, and the existence of
the cyclic permutation symmetry, would naively suggest that
the (27,3) $Z_2\times Z_2$ orbifold model $X_2$ should still possess
a sensible elliptic fibration.
But a general requirement of
a consistent $F$-theory compactification is that the elliptic fibration
produces
a global section. We will now show that this is not
fulfilled with the additional shift.
Consider the quartic
form (\ref{weierfour}) of the Weierstrass representation.
This quartic representation has two global sections, $\hat y=\pm\hat
x,\hat z=0$,
whereas that of (\ref{weier}) has only one, $y^2=x^3,z=0$. Under the
additional $Z_2$
action, the two sections in the quartic representation are simply
identified, except at four fixed points in the base:
\begin{equation}\label{fixpts}
(w,\tilde w) = (0,0),\quad (0,\infty),\quad (\infty,0), \quad
(\infty,\infty)~.
\end{equation}
Note that because of the action on the fiber these are not fixed
points of the Calabi-Yau manifold, which remains smooth.
Furthermore, at every point on
the base, other than the additional singular points of the fibration, the
transformation takes one point on the fiber to another point on the fiber.
However, at the singular points of the base
the fiber is shrunk to half
its original size. The crucial observation~\footnote{We thank Paul Aspinwall
for pointing this out.} is that, whilst the intersection of a generic
fiber with the section is 1, it is 1/2 for the special fibers over the
fixed points. Thus, the section is not globally defined and $F$ theory
on $X_2$ is not well defined.
It is intriguing that this new puzzle
in the $F$-theory compactification on the (27,3) model
arises precisely because of the action of the
additional shift on the $T^2$ fiber.
To see how the above applies to the $X_2$ model, we consider
a related orbifold in which the $Z_2$ action is restricted to
the base, leaving the elliptic fiber invariant.
In the examples of the previous sections, this corresponds
to the additional shift imposed on the $Z_2\times Z_2$ model
in the form (\ref{gammashift2}).
Taking the first two coordinates, $z_1$ and $z_2$,
to be the coordinates of the base and the third, $z_3$,
to be the coordinate of the fiber, we see that only the base
coordinates are identified under the additional shift, whereas the
fiber is left invariant. In the case of this model,
as we saw in (\ref{alphagamma}) and (\ref{hodgez2z2shift4}),
there is an additional sector producing four additional $(1,1)$ and
$(2,1)$ forms.
To see how these additional multiplets arise
in the Weierstrass representation, note that
the additional $Z_2$ action is realized by
$(\hat y,\hat x, \hat z,w,\tilde w)\to (\hat y,\hat x, \hat
z,-w,-\tilde w)$.
This gives the four fixed points in the base~(\ref{fixpts}),
and hence four fixed tori in the Calabi-Yau manifold,
as there is no action on the elliptic fiber. Each of these
tori contributes one K\"ahler form, from the two-cycle of the $\rlx{\rm I\kern-.18em P}^1$
used to blow up the torus, and one complex structure deformation from
a three-cycle built of a family of $\rlx{\rm I\kern-.18em P}^1$s over a one-cycle in the torus.
Thus, $h_{1,1}$ and $h_{2,1}$ both increase by four. The rest of the
analysis follows that of the previous model.
After adding these contributions from the fixed tori, the spectrum is
$h_{1,1}=27+4=31,h_{2,1}=3+4=7$.
We see here the difference between the two models $X_2$ and $X_3$.
In $X_3$, the additional
shift (\ref{gammashift2}) is neither freely-acting on the base,
nor on the manifold regarded
as a Calabi--Yau three--fold, thus producing the additional
four $h_{1,1}$ and $h_{2,1}$ forms needed to resolve the singularity
in the conventional manner. However, in the $X_2$ model
the shift (\ref{gammashift}) does act freely on the
Calabi--Yau three--fold, although there are four fixed points in the base
$B$, and therefore does not produce any additional $h_{1,1}$ and $h_{2,1}$
forms associated with these singularities.
We observe that
$F$ theory on $X_3$ has $T=13$ tensor multiplets, $H_0=8$ neutral
hyper--multiplets and $V=112$ vector multiplets, and
we see from (\ref{ganomaly}) that the
gravitational anomaly vanishes in this case.
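Indeed, taking (\ref{ganomaly}) to be the standard six-dimensional
condition $H-V+29T=273$, these numbers balance exactly:
\begin{equation}
H-V+29\,T = 8-112+29\times 13 = 273\,.
\end{equation}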
Furthermore, from this example we see that the problem with the
gravitational anomaly for the $X_2$ model arises strictly because
of the action of the additional shift (\ref{gammashift})
on the fiber. Thus, we would like to argue that a possible resolution of
the gravitational anomaly is the existence of one
hypermultiplet and one tensor multiplet at each of the fixed
points in the base.
Still, how and whether the additional
singularities of the fibration may be resolved resulting in a
non-anomalous theory is
an open question. However, it is tempting to
speculate that the resolution of these singularities
is intimately connected to the cancellation of the
gravitational anomaly in this model.
\setcounter{footnote}{0}
\section{Orientifold Representation}
In this section, we briefly discuss what would be the
orientifold construction corresponding to
$F$-theory compactification on the $X_2$ orbifold.
Whilst it is not guaranteed that there exists an orientifold
model for every $F$-theory compactification,
studying the orientifold construction may provide
a complementary way to understand physical issues.
In particular, in the case of the (27,3) model $X_2$,
the key question would be how to couple the additional
freely-acting shift to the orientifold projection.
In this connection, recall that, in this case, as the complex structure of
the fiber is identified with the dilaton of the
corresponding type-IIB string theory,
the shift in (\ref{gammashift}) acts non--trivially
on the dilaton.
We begin our brief discussion by studying
the orientifold corresponding to the
$F$-theory compactification of the $X_1$ orbifold~\cite{bz,dap,gm}.
We follow the analysis in~\cite{dap}, focusing in particular on the
closed string sector as it seems to be relevant for understanding the
missing tensor and hypermultiplets. Starting from $T^6$ given in terms
of complex coordinates $z_i,\,i=1,2,3$, and the identifications $z_i\sim
z_i+1\sim z_i + i,\, i=1,2,3$, the $Z_2\times Z_2$ action is given
by
\begin{eqnarray}\label{ztwoztwo}
&& Z_2^\alpha\,:\, (z_1,z_2,z_3) \to (-z_1,-z_2,z_3)\,,\\
&& Z_2^\beta\,:\, (z_1,z_2,z_3) \to (z_1,-z_2,-z_3)~.
\end{eqnarray}
We then let $z_3$ be the coordinate of the elliptic fiber, and define
the type-IIB orientifold on $T^4/Z_2^\alpha$ by the orientifold action
$\{1,\Omega(-1)^{F_L}R_2\}$. The $R_2$ acts on $T^4$ as $(z_1,z_2)\to
(z_1,-z_2)$, and the remainder of the $Z_2^\beta$ action on the
elliptic fiber is represented by $\Omega(-1)^{F_L}$~\cite{sen8d}.
Clearly, in the absence of the $\Omega (-1)^{F_L}R_2$, we would just have
a type-IIB compactification on a $K3$ manifold, $T^4/Z_2^\alpha$,
which gives an $N=2$ theory in six
dimensions with $T=21$ $N=2$ tensor multiplets, each of which
consists of one $N=1$ tensor multiplet and one $N=1$ hypermultiplet.
These arise from each of the sixteen fixed points of the
$T^4/Z_2^\alpha$, as well as five
from the untwisted sector. However, $\Omega(-1)^{F_L}R_2$ projects out
the hypermultiplets in the twisted sectors and leaves one tensor and
four hypermultiplets invariant in the untwisted sector~\cite{dap}.
Thus, we are left with $T=16+1$ tensor multiplets and $H_0=4$
hypermultiplets. This is the spectrum from the closed-string sector.
It can be shown~\cite{dap}\ by a more
careful analysis that there is an $SO(8)^8$ gauge enhancement
arising from the open-string sector, but no charged matter.
In order to understand how to implement the additional $Z_2$ action
required to get the
$(h_{1,1}=27,h_{2,1}=3)$ $Z_2\times Z_2$ orbifold $X_2$
in the orientifold language, let us first consider a similar situation
in eight dimensions.
It was shown by Witten~\cite{witten}
that there exists an orientifold in which there are three $O_+$
planes, one $O_-$ plane and eight
$D7$ branes. This is to be compared with the regular orientifold in
eight dimensions, in which there are four $O_+$ planes and 16
$D7$ branes. The latter corresponds to an $F$-theory compactification on
an elliptically-fibered $K3$ in which the monodromy of the elliptic
fiber is $SL(2,Z)$. By placing groups of four $D7$ branes
on each of the $O_+$ planes, cancelling the charge locally, one
obtains an $SO(8)^4$ gauge group~\cite{sen8d}. If
we do the same for the former model, we find a reduced gauge group
$SO(8)^2$, as there are now only eight $D7$ branes. In this case
the $F$-theory compactification is an elliptically-fibered
$K3$ with restricted monodromy $\Gamma_0(2)$~\cite{bps,bkmt}. Witten
argued that the existence of the two sets of orientifold planes is due
to a non-zero flux through the NS-NS 2-form $B$, even if the 2-form
itself is projected out~\cite{witten}.
Let us now study the situation in six dimensions.
It was pointed out that just as in eight dimensions, by
turning on the NS-NS 2-form $B$, along one of the $T^2$ in the
underlying $T^4$, the rank is reduced by a factor of two~\cite{bpstwo}.
In addition,
the closed-string contribution from the twisted sectors
is changed. Rather than contributing 16 hypermultiplets, one instead
obtains 12 hypermultiplets and 4 tensor
multiplets~\cite{kst}. Since there is an
ambiguity in the action of the world-sheet parity
$\Omega$~\cite{polchinski}, we can consider a case with 4 $O_+$ planes
and 12 $O_-$ planes. Compared to the extreme case of 16 $O_-$ planes,
the gauge symmetry is reduced from $SO(8)^8$ to $SO(8)^4$, whilst there
are 12 tensor and 4 hypermultiplets. We recognize this as the spectrum
of the $F$-theory compactification of $X_3$.
Thus, we find that it does not seem possible to construct an
orientifold with the properties corresponding to the $F$-theory based on
the $X_2$ orbifold.
We would like to remark that it is not at all clear that
the orientifold construction will capture all of the non-perturbative
singularities of the $F$-theory compactification.
In particular, in the case of the (27,3) $Z_2\times Z_2$
orbifold, the freely-acting shift (\ref{gammashift})
should act non-trivially on the dilaton multiplet.
Understanding the exact nature of this action
in the orientifold construction may provide
a complementary way to study the physical process involved.
\setcounter{footnote}{0}
\section{Discussion and Conclusions}
We have discussed in this paper $F$-theory compactification
on the Calabi--Yau three--fold which is associated
with the realistic free-fermion models.
The motivation to consider $F$-theory compactification on
this particular manifold is apparent: the realistic
free-fermion models have, after over a decade
of exploration, yielded the most realistic superstring models to date.
On the other hand,
over the last few years important progress has
been achieved in understanding non--perturbative aspects
of string theories. Although there is still a considerable way to go
before we have a complete understanding of what role non-perturbative string
effects play in connection with physics as observed in Nature,
some of the new tools have been applied to phenomenology. One such
example is the proposal~\cite{witten5d} to use
the eleventh dimension in $M$ theory to resolve the mismatch
between the unification scale calculated in the MSSM on the basis
of the values of the couplings observed at LEP and estimates
of the string unification scale in weak-coupling heterotic string
models~\cite{reviewsp}.
Among the gaps in our understanding of non-perturbative
aspects of string theory
is the ultimate mechanism that selects
the string vacuum. One important aspect of this paper is that,
whilst the new puzzles that we have raised
are not well understood, {\it a priori} they may indicate
some non--trivial new physics associated with the dilaton multiplet
of the type-IIB string theory. We have exhibited and explored
elliptic fibrations corresponding to the orbifold models
$X_1$ and $X_3$. Although a six-dimensional $N=1$ theory based on
the $F$-theory compactification of $X_2$ does not exist, we can
speculate that the additional multiplets needed to
resolve the singularities, as well as the gravitational anomaly,
arise in some non--trivial non--perturbative way. The
most exciting may be the possibility of a still-unknown
physical phenomenon that will provide a new view on
the dilaton fixing problem.
\bigskip
\leftline{\large\bf Acknowledgments}
\medskip
We are pleased to thank Shyamoli Chaudhuri,
Lance Dixon, Tristan H\"ubsch,
Sheldon Katz, Peter Mayr, David Morrison, Ergin Sezgin
and especially Paul Aspinwall and Andre Losev for very helpful discussions.
This work was supported in part by the Department of Energy
under Grants No.\ DE-FG-05-86-ER-40272, DE-FG-03-95-ER-40917 and
DE-FG-02-94-ER-40823.
The work of P.B. was supported in part by the National
Science Foundation grant NSF PHY94-07194. P.B. would also like to
thank the Aspen Center for Physics, LBL, Berkeley and TPI, Minneapolis
for hospitality during the course of this work.
\bigskip
\leftline{\large\bf Note Added}
\medskip
Subsequent to the submission of our paper, a Calabi-Yau
compactification with an elliptic fibration has been proposed
which has a very similar structure to the $X_2$ model discussed
here, namely a freely-acting shift with a bi--section and a
non--trivial $\pi_1$~\cite{ACK}.
\bibliographystyle{unsrt}
\section{Introduction}
It is now generally accepted that neutrino oscillation data indicate that at least two of
the three active neutrinos have nonvanishing
masses. This cannot be accommodated in the minimal Standard Model (SM) without adding new degrees of freedom
such as two or more right-handed neutrinos. However, neutrino masses can be generated by the addition of the Weinberg operator \cite{Wo}, $O_5$. This nonrenormalizable dimension-five operator takes the form $O_5=\frac{y}{\Lambda}\ell_L \ell_LHH$, where $H$ is the SM Higgs field, $\ell_L$ denotes a SM left-handed lepton doublet, $y$ is a free dimensionless parameter, and $\Lambda$ is an unknown high scale. After $H$ takes on
a vacuum expectation value $v\simeq 246$ GeV, the electroweak symmetry is spontaneously broken, and we get a neutrino mass $m_\nu \sim \frac{y v^2}{\Lambda}$. Since data indicate that $m_\nu \lesssim 1$ eV, the scale $\Lambda$ can range from 1 to $10^{11}$ TeV, depending on the value of $y$. This elegant way of generating neutrino masses using only SM fields comes at the price of nonrenormalizability. Furthermore, it reinforces the idea that the SM is an effective theory, and the neutrino masses call for its extension.
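As an illustrative check of the upper end of this range, take $y=1$ and $m_\nu = 1$ eV:
\beq
\Lambda \sim \frac{y\,v^2}{m_\nu} = \frac{(246~{\rm GeV})^2}{1~{\rm eV}}
\simeq 6\times 10^{13}~{\rm GeV} \simeq 6\times 10^{10}~{\rm TeV}\,,
\eeq
of order the quoted $10^{11}$ TeV; smaller values of $y$ bring $\Lambda$ down toward the TeV end of the range.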
Neutrino mass generated from the Weinberg operator is of the Majorana type, and it carries lepton number $\ell=2$, provided one assumes the conventional lepton number assignments: all SM charged leptons $e,\mu,\tau$ and their associated neutrinos
$\nu_e,\nu_\mu,\nu_\tau$ have $\ell=1$, the anti-leptons have $\ell=-1$, and all other SM fields carry
$\ell=0$. This is a natural consequence if lepton number is a $\Uel$ symmetry. Thus, the SM is largely invariant under this symmetry, with a very small breaking by the Weinberg operator. However, the nature
of this symmetry is unknown. Usually, the total lepton number is taken to be a global symmetry that is
broken at a very high scale $\Lambda \gtrsim 10^{12}$ GeV by two or more SM singlet right-handed neutrinos $N_R$
with Majorana masses of $O(\Lambda)$. Integrating them out gives rise to the Weinberg operator, and this
is the celebrated type I seesaw mechanism \cite{Glashow:1979nm, GellMann:1980vs,Yanagida:1979as, Mohapatra:1979ia, Schechter:1981cv}. Doing so raises the question of the origin of the Majorana mass bestowed on
$N_R$. One can add a Majorana mass for $N_R$ by hand. However, our current understanding is that masses of
fermions are generated by the Higgs mechanism. It is interesting to also apply this to
$\Uel$. Doing so will lead to the existence of a Goldstone boson in the physical spectrum which can act as a candidate for dark radiation \cite{CNJW,CN1}.
Moreover, it is phenomenologically and theoretically interesting to investigate the possibility of a gauged $\Uel$
and study the spontaneously broken gauge theory. There are several possibilities. One can gauge the total
lepton number as in \cite{ST}\footnote{For other constructions in conjunction with gauged baryon number see \cite{otherL_0,otherL_1,otherL_2,otherL_3,otherL_4}.}. One can also gauge a combination of lepton generation numbers such as
$L_\mu -L_\tau$~\cite{He:1990pn,He:1991qd}. In Ref.~\cite{Chang:2018vdd}, hereafter referred to as (I), we gauged each lepton family with the usual lepton
number assignments for them. Of the three examples just mentioned, only the second is anomaly-free with the
SM fields alone. Gauging the total lepton number requires extra leptons with very exotic lepton charges such
as $\ell=3$ to cancel the anomalies from $\Uel$. In (I), anomaly cancelation requires two extra pairs of vector-like $SU(2)$ doublet leptons with $\ell=1,0$ for each family. We also did not include any
singlet $N_R$ field, and the Weinberg operator is generated radiatively at 1-loop. The principal source of lepton number violation comes from
a SM singlet scalar with $\ell=2$ which picks up a vacuum expectation value.
In this paper, we study a different mechanism of neutrino mass generation in the gauged lepton number scheme
introduced in (I). The extra leptons presented before are sufficient to generate neutrino masses with the aid of a SM triplet scalar $T$ and a SM singlet scalar $\phi_1$.
$T$ has $\ell=0$ whereas $\phi_1$ is given $\ell=1$, and both fields acquire vacuum expectation values. This naturally leads to
an inverse seesaw mechanism (ISM)\cite{ISS1,ISS2,ISSr} for the active neutrino masses. The novel feature here is that we do not add by hand
any SM singlet leptons to implement the ISM, as is commonly done; the required leptons are dictated by anomaly cancelations. Details will be given in Sec. 3.
Since the physics of the new gauge boson $Z_\ell$ and the extra leptons is the same
as in (I), we will not repeat their phenomenology here. Instead, we focus on neutrino physics and the
phenomenology of $T$. We find that at high energy colliders $T$ has interesting signatures that differ from those of the $\ell=2$ Higgs triplets studied previously\cite{tri_1,tri_2,tri_3,tri_4,tri_5}, which are commonly employed in the type-II see-saw model\cite{ss2, ss2_0,ss2_1,ss2_2,ss2_3,ss2_4}. For a recent review see \cite{Cai:2017mow}.
We organize the paper as follows. In the next section we present our solution to the anomaly conditions for completeness.
Then we discuss lepton mass generation for one generation to illustrate the physics. This is followed by
a realistic three-generation study. Sec. 4 gives fits to the neutrino oscillation data. Constraints from
charged lepton flavor changing neutral currents are given in Sec. 5. Important electroweak
precision constraints are studied in Sec. 6. The production of the new triplet scalars at
the LHC and CLIC is examined in Sec. 7. Our conclusions are given in Sec. 8.
\section{$\Uel$ anomaly cancelations and new fields}
We extend the SM gauge group by adding a $\Uel$, so that it is explicitly given by $G=SU(2)\times U(1)_Y\times\Uel$.
All SM leptons have the conventional value of $\ell=1$. We will concentrate on one family.
This can be trivially extended for all 3 SM families.
The new anomaly coefficients are
\begin{subequations}
\label{eq:ac}
\beqa
\Act_1([SU(2)]^2\Uel)&=&-1/2\,, \\
\Act_2([U(1)_Y]^2\Uel)&=&1/2\,, \\
\Act_3(U(1)_Y[\Uel]^2)&=&0\,, \\
\Act_4([\Uel]^3)&=&-1\,,\\
\Act_5(\Uel)&=&-1\,,
\eeqa
\end{subequations}
where $\Act_5$ stands for the lepton-graviton anomaly.
While new chiral leptons are introduced to cancel the anomalies in Eq.(\ref{eq:ac}), one also needs to make sure that the SM anomalies $\Act_6([SU(2)]^2 U(1)_Y)$, $\Act_7([U(1)_Y]^3)$, and $\Act_8(U(1)_Y)$ remain canceled.
It is easy to check that the new leptons in Table~\ref{tb:lA} cancel the above anomalies.
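As a sample check, adopt the convention (inferred from the SM values in
Eq.(\ref{eq:ac})) that each coefficient sums the $\ell$ charges of the
right-handed Weyl fermions minus those of the left-handed ones. The new
leptons of Table~\ref{tb:lA} then contribute
\beqa
\Delta\Act_1 &=& \tfrac{1}{2}\big[\,0\,\big]_{L_{2R}}-\tfrac{1}{2}\big[-1\big]_{L_{1L}} \;=\; +\tfrac{1}{2}\,,\nonr\\
\Delta\Act_5 &=& \big[(-1)+2(0)\big]_{E_{1R},\,L_{2R}}-\big[2(-1)+0\big]_{L_{1L},\,E_{2L}} \;=\; +1\,,
\eeqa
which cancel the SM contributions $-1/2$ and $-1$ in Eq.(\ref{eq:ac}); the remaining coefficients cancel similarly.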
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|c|c|c|c|}
\hline
Field&$SU(2)$&$\phantom{U}Y\;\;$&$\phantom{U}\ell\;\;$\\ \hline
$\ell_L=\begin{pmatrix} \nu_L\\e_L \end{pmatrix}$ &{\bf{2}}&$-\frac{1}{2}$&$\phantom{-}1$\\ \hline
$e_R$&{\bf{1}}&$-1$&$\phantom{-}1$\\ \hline
$L_{1L}=\begin{pmatrix}N_{1L}\\E_{1L}\end{pmatrix}$&\bf{2}&$-\frac{1}{2}$&$-1$\\ \hline
$E_{1R}$&{\bf{1}}&$-1$&$-1$ \\ \hline
$L_{2R}=\begin{pmatrix}N_{2R}\\E_{2R}\end{pmatrix}$&{\bf{2}}&$-\frac{1}{2}$&$\phantom{-}0 $\\ \hline
$E_{2L}$&{\bf{1}}&$-1$&$\phantom{-}0 $\\ \hline
\end{tabular}
\caption{Lepton fields for anomalies free solution.}
\label{tb:lA}
\end{center}
\end{table}
Since the new leptons come in vector-like pairs under the SM gauge group, the cancelations of the SM anomalies $\Act_6([SU(2)]^2 U(1)_Y)$, $\Act_7([U(1)_Y]^3)$, and $\Act_8(U(1)_Y)$ are not affected.
The minimal set of scalar fields needed to utilize the triplet scalar for neutrino mass generation can be obtained by examining the gauge invariant
lepton bilinears that can be formed from the above fields. The scalars are given in Table~\ref{TAB:Scalar_content}.
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|c|c|c|c|}
\hline
Field&$SU(2)$&$\phantom{U}Y\;\;$&$\phantom{U}\ell $ \\ \hline
$H$ &{\bf{2}}&$\phantom{-}\frac{1}{2}$&$\phantom{-}0$ \\ \hline
$T$&{\bf{3}}&$-1$&$\phantom{-}0 $ \\ \hline
$\phi_1$&\bf{1}&$\phantom{-}0$&$\phantom{-}1$ \\ \hline
\hline
\end{tabular}
\caption{Scalar field content}
\end{center}
\label{TAB:Scalar_content}
\end{table}
All of $H$, $\phi_1$, and $T$ develop non-zero VEVs.
The Yukawa interactions are
\beqa
{\Lag}_Y&=&
f_1 \overline{l_L} L_{2R} \phi_1 + f_2\overline{e_R} E_{2L} \phi_1 + f_3\overline{L_{1L}} L_{2R} \phi_1^* + f_4 \overline{E_{1R}} E_{2L} \phi_1^* \nonr\\
&+&h_1 \overline{l}_L e_R H + h_2 \overline{L_{1L}} E_{1R} H + h_3\overline{L_{2R}} E_{2L} H \nonr\\
&+& y_1 \overline{l^c_L}T^\dagger L_{1L} + \frac{y_2}{2}\overline{L^c_{2R}}T^\dagger L_{2R} + h.c.
\label{eq:all_Yukawa}
\eeqa
where all the generation indices are suppressed.
The full gauge invariant and renormalizable scalar potential reads,
\beqa
V&=&
-\mu_H^2 H^\dagger H +\lambda_H (H^\dagger H)^2
-\mu_L^2 |\phi_1|^2 + \lambda_L |\phi_1|^4 \nonr \\
&&- \mu_t^2 \Tr(T^\dagger T) +\lambda_t [ \Tr(T^\dagger T) ]^2 \nonr \\
&&+\lambda_1 (H^\dagger H) \Tr(T^\dagger T)
+\lambda_2 (H^\dagger H) |\phi_1|^2 +\lambda_3 \Tr(T^\dagger T) |\phi_1|^2
\nonr \\
&&+\lambda_4\Tr(T^\dagger T T^\dagger T)+\lambda_5 \det T^\dagger T+\lambda_6 H^\dagger T T^\dagger H \nonr\\
&&-\sqrt{2}\kappa H^T(i\tau_2) T(i\tau_2) H
+h.c.
\eeqa
where we have used the bi-doublet form for $T$ as below\footnote{If a doublet, $D$, transforms under $SU(2)$ as $D\ra U_2 D$, then $T\ra U_2 T U_2^T$.}
\beq
T= \left( \begin{array}{cc}
T_0 & \frac{1}{\sqrt{2}}T_- \\ \frac{1}{\sqrt{2}}T_- & T_{--}
\end{array}
\right)\,.
\eeq
The following conditions must hold ($\lam_{4t}=\lambda_4\!+\!\lambda_t$)
\beqa
\lambda_H, \lambda_L,\lambda_{4t} >0\,,\;
\lambda_1>-2\sqrt{\lambda_H \lambda_{4t}}\,,\;
\lambda_2 >-2\sqrt{\lambda_H \lambda_L}\,,\;
\lambda_3>-2\sqrt{\lambda_L \lambda_{4t}}\;,
\eeqa
so as to ensure that the potential is bounded from below.
After SSB,
\beq
\langle H \rangle= \frac{v}{\sqrt{2}} \left( \begin{array}{c} 0\\ 1\end{array}\right),
\langle \phi_1 \rangle=\frac{v_L}{\sqrt{2}},
\langle T \rangle= \frac{v_t}{\sqrt{2}}\left( \begin{array}{cc} 1 &0\\ 0&0 \end{array}\right)\,,
\eeq
and the fields can be expanded around their VEVs as
\beq
H= \left( \begin{array}{c} H_+\\ { v+ \Re H_0 +i \Im H_0 \over\sqrt{2}}\end{array}\right)\,,\,
\phi_1= { v_L+ \Re \Phi +i \Im \Phi \over\sqrt{2}}\,,\,
T=\left( \begin{array}{cc} { v_t+ \Re T_0 +i \Im T_0 \over\sqrt{2}} & \frac{1}{\sqrt{2}}T_-\\ \frac{1}{\sqrt{2}}T_-& T_{--}\end{array}\right)\,.
\eeq
The minimization conditions for the scalar potential become
\beqa
&&v\left(-\mu_H^2 +\lambda_H v^2 +\frac{\lambda_1}{2}v_t^2+\frac{\lambda_2}{2}v_L^2 -\kappa v_t \right) =0\,,\\
&&v_L\left(-\mu_L^2 +\lambda_L v_L^2 +\frac{\lambda_2}{2}v^2+\frac{\lambda_3}{2}v_t^2 \right ) =0\,,\\
&&v_t\left(-\mu_t^2 +\lambda_t v_t^2 +\frac{\lambda_1}{2}v^2+\frac{\lambda_3}{2}v_L^2+\lambda_4 v_t^2\right) -\frac{1}{2}\kappa v^2 =0\,.
\eeqa
Note that $\lambda_{5,6}$ do not come into play here.
From the above equations, the tree-level masses squared for $T_-$ and $\Re T_0$ are
\beq
M^2_{T_-}=\frac{\kappa v^2 }{2 v_t} +\frac{\lambda_6}{4}v^2\,,\;
M^2_{\Re T_0}=\frac{\kappa v^2 }{2 v_t}+\frac{1}{2}(\mu_t^2+\lambda_t v_t^2 +\lambda_4 v_t^2)\,.
\label{eq:TL_triplet_mass}
\eeq
From phenomenology we expect that $v_t\ll v$ (see Sec. 6), and before considering scalar mixings we have
\beq
M_{T_-} \simeq \left(\frac{\kappa}{v_t}\right)^{\frac{1}{2}} \frac{v}{\sqrt{2}}\,,\;
M_{\Re T_0} \simeq M_{T_-} \sqrt{1+\frac{v_t}{\kappa}\frac{\mu_t^2}{v^2}}
\eeq
which are above a TeV if $\kappa$ takes a phenomenologically interesting value around the electroweak scale.
However, in general, $\kappa$ is a free parameter.
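For illustration, with the assumed benchmark values $\kappa = v \simeq 246$ GeV and $v_t = 1$ GeV (neither of which is fixed by the model),
\beq
M_{T_-}\simeq \left(\frac{\kappa}{v_t}\right)^{\frac{1}{2}}\frac{v}{\sqrt{2}}
\simeq \sqrt{246}\times\frac{246~{\rm GeV}}{\sqrt{2}}\simeq 2.7~{\rm TeV}\,.
\eeq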
The scalar potential after SSB gives a small mass splitting between $T_-$ and $T_{--}$.
The mass squared difference can be worked out to be
\beq
M^2_{T_{--}}-M^2_{T_-}= v^2 \frac{\lambda_6}{4} -v_t^2 \left(\lambda_4-\frac{\lambda_5}{2}\right)\,,
\eeq
which is $\sim {\mathcal{O}}(v^2)$ provided $\lam_6$ is not much smaller than $\lam_{4,5}$.
Therefore, it is a good approximation to assume that $T_-$ and $T_{--}$ are degenerate.
However, we should keep in mind the mass splitting could be about the Fermi scale.
Similarly, ignoring the contribution from $v_t$, we have
\beqa
-\mu_H^2+\lambda_H v^2 +\frac{1}{2} \lambda_2 v_L^2 \simeq 0\,,\\
-\mu_L^2+\lambda_L v_L^2 +\frac{1}{2} \lambda_2 v^2 \simeq 0\,.
\eeqa
Since we expect that $v_L \gg v$, i.e. lepton symmetry breaking to be above the Fermi scale, we
obtain
\beq
v_L \simeq \sqrt{ \frac{\mu_L^2}{\lambda_L}}\,,\,\, M_{\phi_1}\simeq \sqrt{2}\mu_L\,,\,\,
v^2 \simeq \frac{1}{\lambda_H} \left(\mu_H^2 -\frac{\lambda_2}{4\lambda_L}M^2_{\phi_1}\right)\,.
\eeq
Thus, it is also required to have $|\lambda_2| \ll \lambda_L$. As expected, there will be mixing among the three neutral scalars $\mathbf{H}=(\Re H_0, \Re T_0,\Re \Phi)$. They are related to the physical states $\mathbf{h}=(h_{SM},t_0,\phi_0)$ via the usual unitary rotation given by
\beq
\label{eq:Uh}
\mathbf{h} =\mathbf{U}_h\cdot \mathbf{H}\,.
\eeq
Details of this transformation are not
important for this study and we will not present them.
For completeness, we discuss the imaginary parts of the scalar fields. $\Im \Phi$ is the would-be Goldstone for the gauge boson $Z_\ell$.
Moreover, the would-be Goldstone bosons eaten by $W^\pm, Z$, the physical singly charged scalars, $h^\pm$, and the pseudoscalar, $A_0$, can be identified as:
\beqa
G_\pm &=& {v H_\pm -\sqrt{2} v_t T_\pm \over \sqrt{v^2+2v_t^2}}\simeq H_\pm\,,\,\, G^0={v \Im H_0-2v_t \Im T_0 \over \sqrt{v^2+4v_t^2}}\simeq \Im H_0\,,\nonr\\
h_\pm &=& {\sqrt{2} v_t H_\pm + v T_\pm \over \sqrt{v^2+2v_t^2}}\simeq T_\pm\,,\,\, A_0={2v_t \Im H_0+v \Im T_0 \over \sqrt{v^2+4v_t^2}}\simeq \Im T_0\,.
\eeqa
Since $v_t\ll v$ from the electroweak precision studies (see Sec. 6), it is a good approximation to treat $T_\pm$ and $\Im T_0$ as the physical states.
Being the only degrees of freedom with two units of electric charge, $T_{\pm\pm}$ are themselves the physical scalars.
Since the symmetry $G$ forbids $T$ from coupling to two SM fermions simultaneously, its gauge interactions become
the most relevant for phenomenology. From the $G$-covariant derivative we obtain the Feynman
rules for its triple couplings to gauge bosons, displayed in Table \ref{tab:TTV_vertex}, where $P$ stands for the photon, and all the momenta are incoming.
\begin{table}
\begin{center}
\begin{tabular}{|ccc|ccc|}
\hline
$T_0^{*} T_- W^+_\mu $ & : & $i g_2(p_--\bar{p}_0)_\mu$ & $T_+ T_{--} W^+_\mu $ & : & $i g_2(p_{- -}- p_+)_\mu$ \\
$ T_+T_0 W^-_\mu $ & : & $i g_2(p_0 -p_+ )_\mu$ & $T_{+\!+} T_{-} W^-_\mu $ & : & $i g_2(p_- - p_{++})_\mu$ \\
$ T_+ T_- P_\mu $ & : & $ -i e(p_- -p_+ )_\mu$ & $T_{+\!+} T_{--} P_\mu $ & : & $ -2i e(p_{--} - p_{++})_\mu$ \\
$ T_+ T_- Z_\mu $ & : & $ i\frac{g_2}{c_W}(s_W^2) (p_- -p_+ )_\mu$ & $T_{+\!+} T_{--} Z_\mu $ & : & $ i\frac{g_2}{c_W}(-1+2s_W^2)(p_{--} - p_{+\!+})_\mu$ \\
$ T_0^{*}T_0 Z_\mu $ & : & $ i\frac{g_2}{c_W} (p_0 -\bar{p}_0 )_\mu$ & $ \Re T_0 \Im T_0 Z_\mu$ & : & $\frac{g_2}{c_W} (p_{\tiny \Im T_0} - p_{\tiny \Re T_0} )_\mu$ \\
\hline
\end{tabular}
\caption{Couplings of gauge bosons to triplet fields}
\label{tab:TTV_vertex}
\end{center}
\end{table}
\section{Lepton masses for one generation}
The physics of how the new leptons affect the SM charged leptons is best seen in the
one family scenario.
In the basis $\{e, E_1, E_2 \}$, the Dirac mass matrix is
\beq
{\cal M}_C =\frac{v_L}{\sqrt{2}} \times \left( \begin{array}{ccc}
h_1 \epsilon_v & 0 & f_1 \\
0 & h_2 \epsilon_v & f_3 \\
f_2^* & f_4^* & h_3 \epsilon_v \\
\end{array}
\right)\,,
\eeq
where $\epsilon_v= \frac{v}{v_L}\ll 1$. In general the electron will mix with $E_{1,2}$ and the mixing depends on $f_1$ and $f_2$.
In that case, the charged-current interaction of the SM leptons could deviate from the canonical SM $(V-A)$ form due to their mixings with $L_{2R}$ and $E_{2L}$. Moreover, the SM gauge couplings are flavor non-diagonal.
Physically, this mixing must be very small, and we can take the limiting case of $f_1=f_2=0$ to eliminate the mixing of the electron with the new charged leptons\footnote{Theoretically, these two Yukawa couplings cannot be forbidden by any $U(1)$ or $Z_N$ charge assignment in this model.
However, one can obtain the desired Yukawa
hierarchy in the split-fermion model if the centers of the 5D wave functions are ordered as
$e_R, l_L, L_{1L}, L_{2R}; E_{1R};E_{2L}$, where ``,'' and ``;'' denote large and small separations, respectively, between two adjacent wave functions along the fifth dimension; see \cite{SF,RS1_1,RS1_2} for some other examples of achieving hierarchical 4D Yukawa couplings.}.
In general, we can write the physical mass eigenstates $\mathcal{E}_\alpha^\prime=(e,E_-,E_+)$ where $\alpha=1,2,3$ as
\beq
\label{eq:DMe}
\mathcal{E}_{L/R}^\prime=V_{L/R}\cdot \mathcal{E}_{L/R}\,,
\eeq
where $V_{L/R}$ is the left-handed/right-handed unitary matrix that diagonalizes the charged lepton mass matrix so that $V_L^\dagger\cdot M_C\cdot V_R=\mbox{diag}\{m_e,M^E_-,M^E_+\}$. For the limiting case of $f_1=f_2=0$ and $f=f_3=f_4(1+\delta)$ with $|\delta|\ll 1$, the mass eigenvalues can be worked out to be
\beq
m_e=h_1 \frac{\epsilon_v v_L}{\sqrt{2}}\,,\, M^E_{\pm}\simeq\pm \frac{f_4 v_L}{\sqrt{2}}\left(1+\frac{\delta}{2}\right) + \frac{\epsilon_v v_L}{\sqrt{2}}\frac{h_2+h_3}{2}\,.
\eeq
One can see that the leading mass splitting between $E_+$ and $E_-$, apart from the phase convention, comes from the SM Higgs Yukawa interaction, $h_{2,3}$, and to a very good approximation,
\beq
\label{eq:UsymB}
V_{L/R}\simeq V^B \equiv \left(
\begin{array}{ccc} 1&0&0\\
0&\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
0& -\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}} \\
\end{array}
\right)\,.
\eeq
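This approximation is easy to check numerically. The Python sketch below uses hypothetical $O(1)$ inputs (with $\epsilon_v$ exaggerated to $0.1$ purely so the splitting is visible) and compares the exact singular values of the heavy $2\times2$ Dirac block with the approximate eigenvalues quoted above; it is an illustration, not a fit.

```python
import math

# Hypothetical O(1) inputs, in units of v_L/sqrt(2); eps_v is exaggerated
# (0.1) only to make the mass splitting visible at a glance.
eps_v, h2, h3, f4, delta = 0.1, 0.8, 1.0, 1.2, 0.05
f3 = f4 * (1.0 + delta)              # f = f3 = f4*(1 + delta), |delta| << 1

# Heavy 2x2 Dirac block of M_C for f1 = f2 = 0 (electron decoupled)
A, B = h2 * eps_v, h3 * eps_v
M = [[A, f3], [f4, B]]

# Exact singular values of a 2x2 real matrix = physical masses |M^E_{+-}|
t   = sum(x * x for row in M for x in row)          # tr(M M^T)
det = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0])    # |det M|
s_plus  = 0.5 * (math.sqrt(t + 2 * det) + math.sqrt(t - 2 * det))
s_minus = 0.5 * (math.sqrt(t + 2 * det) - math.sqrt(t - 2 * det))

# Approximate eigenvalues quoted in the text (up to the sign convention)
approx_plus  = f4 * (1 + delta / 2) + eps_v * (h2 + h3) / 2
approx_minus = f4 * (1 + delta / 2) - eps_v * (h2 + h3) / 2
print(s_plus, approx_plus)           # agree to O(delta^2, eps_v^2)
print(s_minus, approx_minus)
```

The residual difference is of second order in $\delta$ and $\epsilon_v$, consistent with the text.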
In the basis $\{\nu_L, N_{1L}, N_{2R}^c\}$, the neutrino mass matrix is
\beq
\label{eq:iss}
{\cal M}_N =\frac{v_L}{\sqrt{2}} \times \left( \begin{array}{ccc}
0 & y_1\epsilon_t & f_1^* \\
y_1\epsilon_t & 0 & f_3^*\\
f_1^*& f_3^*& y_2\epsilon_t \\
\end{array}
\right)
\eeq
and $\epsilon_t= \frac{v_t}{v_L}<\epsilon_v$. Again, we consider the case that $f_1\ll1$ and $y_1\sim y_2 =y$. The eigenvalues can be worked out to be around $ (y\epsilon_t/f)^3$, $-1+(y\epsilon_t/2f)$, and $1+(y\epsilon_t/2f)$ in units of $ f v_L/\sqrt{2}$. It is natural to identify the first eigenvalue as the mass of the active neutrino. For $y v_t \sim 0.1$ GeV and $f v_L\sim 3$ TeV, the resulting active neutrino mass is about $(y v_t)^3/(f v_L)^2\sim 0.1$ eV. From electroweak precision measurements
we expect $v_t \lesssim {\cal O}(1)$ GeV. We see that the
desired neutrino mass can be obtained without much tuning of the Yukawa couplings.
Notice that the neutrino mass matrix given in Eq.(\ref{eq:iss}) is of the inverse seesaw type\cite{ISS1,ISS2}, and a review can be found in \cite{ISSr}. The novel feature here is that we do not require ad hoc addition of the SM singlet leptons. The additional
leptons are dictated by anomaly cancelation and are SM doublets.
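The inverse-seesaw structure is straightforward to verify numerically. The sketch below (pure standard library; the value $a=y\epsilon_t/f=0.01$ is an illustrative assumption) finds the three eigenvalues of the rescaled mass matrix by bisection and compares them with the approximations above, together with the $0.1$ eV order-of-magnitude estimate.

```python
import math

# Rescaled 1-generation neutrino mass matrix of the inverse-seesaw form,
# in units of f*v_L/sqrt(2), with f_1 -> 0 and y_1 ~ y_2 = y:
#     [[0, a, 0], [a, 0, 1], [0, 1, b]],   a = b = y*eps_t/f << 1
a = b = 0.01                        # illustrative value (an assumption)

def p(lam):
    # characteristic polynomial det(M - lam*1)
    return -lam**3 + b * lam**2 + (1 + a * a) * lam - a * a * b

def bisect(f, lo, hi, n=200):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam_light = bisect(p, 0.0, 0.5)     # active neutrino: ~ (y*eps_t/f)^3
lam_minus = bisect(p, -2.0, -0.5)   # heavy state: ~ -1 + y*eps_t/(2f)
lam_plus  = bisect(p, 0.5, 2.0)     # heavy state: ~ +1 + y*eps_t/(2f)
print(lam_light, a * a * b)         # both ~ 1e-6
print(lam_minus, lam_plus)          # ~ -0.995, ~ +1.005

# Order-of-magnitude estimate from the text: m_nu ~ (y v_t)^3 / (f v_L)^2
y_vt, f_vL = 0.1, 3000.0            # GeV
m_nu_eV = y_vt**3 / f_vL**2 * 1e9   # convert GeV -> eV
print(m_nu_eV)                      # ~ 0.1 eV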
\section{3-generation lepton masses}
One can extend the above to the realistic 3-generation case.
Without losing any generality, we can start with the basis that the Yukawa couplings for $\overline{N_{2R}}N_{1L}$ are diagonal.
We can also go to the basis where the SM charged leptons are in their mass eigenstates by a bi-unitary transformation among the $e_R$ and $e_L$.
Similarly, we have the freedom to start with diagonal $ (3\times 3)$ $\bar{E}_{1R} E_{2L}$ and $\bar{E}_{2R}E_{1L}$ sub-matrices.
\subsection{Charged lepton mass matrix}
For simplicity, let us consider $f_{1,2}=0$, $f_{3,4} \sim f$, and roughly degenerate heavy charged leptons.
Then, in the basis $(\mathbf{e},\mathbf{E_1},\mathbf{E_2})$ where each entry is a 3-vector in family space, the most general $(9\times 9)$ mass matrix for charged leptons looks like
\beq
{\cal M}_C= \frac{v_L}{\sqrt{2}}\left(
\begin{array}{ccc}
h_1 \epsilon_v & \mathbf{0} & \mathbf{0}\\
\mathbf{0} & h_2 \epsilon_v & f\cdot \mathbf{1}+\delta_1 \\
\mathbf{0}& f \cdot \mathbf{1}+\delta_2 & h_3 \epsilon_v
\end{array}
\right)
\eeq
where $h_1$ and $\delta_{1,2}$ are $3\times 3$ diagonal matrices and $\mathbf{1}$ is the unit matrix.
For convenience, $\delta_{1,2}$, which encode the small splittings of the heavy charged leptons,
are separated out from the leading term.
One can first perform a rotation among the heavy charged leptons by $U=\mathbf{V}_B$, which is a $(9\times 9)$ generalization of Eq.(\ref{eq:UsymB}).
Then the small perturbation can be separated from the leading order mass eigenvalues,
\beqa
U^T.{\cal M}_C.U &=& {\cal M}_C^{(0)}+ \Delta {\cal M}_C\nonr\\
{\cal M}^{(0)}_C &=& \left( \begin{array}{ccc}
\mbox{diag}(m_e,m_\mu,m_\tau) & \mathbf{0}&\mathbf{0}\\ \mathbf{0} & -\frac{v_L}{\sqrt{2}}\left[ f\cdot\mathbf{1} +\frac{1}{2}(\delta_1+\delta_2)\right] & \mathbf{0}\\ \mathbf{0}&\mathbf{0} & \frac{v_L}{\sqrt{2}}\left[ f\cdot\mathbf{1} +\frac{1}{2}(\delta_1+\delta_2)\right] \end{array}\right)\,,\nonr\\
\Delta {\cal M}_C &=& \frac{\epsilon_v v_L}{2\sqrt{2}} \left(
\begin{array}{ccc}
\mathbf{0} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & (h_2+h_3) & (h_2-h_3+\delta_1-\delta_2)\\
\mathbf{0} & (h_2-h_3-\delta_1+\delta_2) &(h_2+h_3)
\end{array}
\right)\,.
\eeqa
One can further diagonalize the diagonal $(3\times 3)$ block, $(h_2+h_3)$, by a bi-unitary transformation such that $ V_L^\dagger\cdot(h_2+h_3)\cdot V_R =h_{diag}$.
Equivalently, by using
\beq
U_R= U\cdot \mbox{diag}\{\mathbf{1},V_R,V_R\}\,,\,
U_L= U\cdot \mbox{diag}\{\mathbf{1},V_L,V_L\}\,,
\eeq
so that
\beqa
U_L^\dagger.{\cal M}_C.U_R &=& \mbox{diag}\{\mathbf{M}_e,\mathbf{M}_-,\mathbf{M}_+\}+ \Delta {\cal M}'_C\nonr\\
\mathbf{M}_e &=&\mbox{diag}(m_e,m_\mu,m_\tau)\,,\,\,
\mathbf{M}_\pm = \pm \frac{v_L}{\sqrt{2}}\left[ f\cdot\mathbf{1} +\frac{1}{2}(\delta_1+\delta_2\pm \epsilon_v h_{diag})\right]\,,\nonr\\
\Delta {\cal M}'_C &=& \frac{\epsilon_v v_L}{2\sqrt{2}} \left(
\begin{array}{ccc}
\mathbf{0} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & V_L^\dagger\cdot(h_2-h_3+\delta_1-\delta_2)\cdot V_R\\
\mathbf{0} & V_L^\dagger\cdot(h_2-h_3-\delta_1+\delta_2)\cdot V_R & \mathbf{0}
\end{array}
\right)\,.
\eeqa
It is clear that the 6 heavy charged leptons form 3 nearly degenerate pairs. As in the 1-generation case, the
mass splitting within each pair is mainly controlled by $h_{2,3}$.
Moreover, they decouple from the SM charged leptons.
\subsection{Neutral lepton mass matrix}
Using the notation of the charged lepton sector and factoring out the common mass, we write the general $(9\times 9)$ neutrino mass matrix as
\beq
{\cal M}_N= \frac{f v_L}{\sqrt 2} \widetilde{{\cal M}}_N\,,\,\, \widetilde{{\cal M}}_N=\left(
\begin{array}{ccc}
\mathbf{0} & \epsilon_1 & \mathbf{0}\\
\epsilon_1^T & \mathbf{0} & \mathbf{1}+\delta \\
\mathbf{ 0} & \mathbf{1}+\delta & \epsilon_2
\end{array}
\right)
\eeq
where $\epsilon_2$ is a symmetric $3\times 3$ matrix with elements $\{\epsilon_2\}_{ij}=\{\epsilon_2\}_{ji}\sim {\cal O}(y_2 v_t/v_L)$, $\epsilon_1$ is a general $3\times 3$ matrix with elements $\{\epsilon_1\}_{ij} \sim {\cal O}(y_1 v_t/v_L)$, and $\delta$ is now a diagonal $3\times 3$ matrix, $\delta=\mbox{diag}(0,\delta_1,\delta_2)$ with $\delta_{1,2}\ll 1$, which accommodates the small non-degeneracy among the three heavy $N$'s.
First, the leading mass diagonalization can be made by the same rotation $\mathbf{V}_B$, similar to Eq.(\ref{eq:UsymB}), as in the charged lepton case.
This results in a symmetric $3\times 3$ matrix,
$\delta_3\equiv -\delta +\epsilon_2/2$, in the diagonal blocks as the perturbation.
Assume there exists an orthogonal $3\times 3$ transformation $V$, such that $V^T\cdot \delta_3\cdot V=\mbox{diag}\{a_1,a_2,a_3\}\equiv \delta_4$, and $|a_{1,2,3}|\sim {\cal O}\left(\sqrt{\delta^2+\epsilon_2^2} \right)\ll 1$.
Then by using $U=\mathbf{V}_B\cdot \mbox{diag}\{\mathbf{1},V,V\}$, the re-scaled neutrino mass matrix can be brought into the following form
\beqa
(U)^T.\widetilde{{\cal M}}_N.U &=& {\cal M}^{(0)}+ \Delta {\cal M}\nonr\\
{\cal M}^{(0)} &=& \left( \begin{array}{ccc}
\mathbf{ 0} & \mathbf{0}&\mathbf{0}\\ \mathbf{0} & -\mathbf{1}+\delta_4 & \mathbf{0}\\ \mathbf{0}&\mathbf{0} & \mathbf{1}+\delta_4 \end{array}\right)\,,\,\,
\Delta {\cal M} = \left(
\begin{array}{ccc}
\mathbf{0} & y & y \\
y^T & \mathbf{0} & - z\\
y^T & -z & w
\end{array}
\right)
\eeqa
where $y= \frac{\epsilon_1 \cdot V}{\sqrt{2}}\sim {\cal O}(\epsilon_1)$, $z= \frac{V^T \cdot\epsilon_2\cdot V}{2}\sim {\cal O}(\epsilon_2)$,
and $w=2 V^T\cdot \delta\cdot V\sim {\cal O}(\delta)$.
One can see that after this rotation, the leading-order mass eigenstates are simply the Cartesian basis vectors.
By standard perturbation techniques, it is easy to see that the SM neutrinos acquire nonzero masses at second order in the perturbation.
For example, at this order,
\beqa
m_1 \left(\frac{f v_L}{\sqrt 2}\right)^{-1} &=& \sum_{i=2}^9 { (\Delta {\cal M}_{1i})^2 \over 0- {\cal M}^{(0)}_{ii} }
= - \sum_{j=1}^3 \left[ {(y_{1j})^2\over -1+a_j} + {(y_{1j})^2\over 1+a_j} \right]
\simeq 2 \sum_{i=1}^3(y_{1i})^2 a_i\,,
\eeqa
and it is indeed of the order of ${\cal O}(\epsilon_1^2 \epsilon_2)$ as in the 1-generation case.
The active neutrino masses can also be understood diagrammatically. Integrating out the heavy $N$ yields the Feynman diagram in the weak basis displayed in Fig.\ref{fig:numass}, which leads to the same conclusion.
It also reveals that the low energy effective operator for active neutrino mass is not given by the Weinberg operator.
If we assume a hierarchy that $v_L \gg v \gg v_t$, and $T$ is the only beyond SM degree of freedom left below $v_L$, the active neutrino masses are generated by a dimension six operator $O_6= \frac{c}{(\Lambda_L)^2}(\ovl{l^c_L} T^\dagger l_L)Tr(T^\dagger T)$ where $c$ is a constant and $\Lambda_L$ is the lepton number breaking scale related to $v_L$. After $T$ picks up a VEV, $v_t$, the neutrino mass is given by $m_\nu \simeq \frac{c v_t^3}{\Lambda_L^2}$. It is also clear that
$O_6$ has a higher dimension than the Weinberg operator. Together with the fact that $v_t\ll v$, this allows the lepton number breaking scale to be much lower than in the usual type I seesaw mechanism.
\begin{figure}[h!]
\centering
\includegraphics[width=0.65\textwidth]{numass-tri.eps}
\caption{Diagrammatic representation of the $\epsilon^3$-suppression for the active neutrino masses.
Superscripts denote family indices. Upper(green) arrows denote flow of lepton charge.}
\label{fig:numass}
\end{figure}
Now, the upper-left $(3\times 3)$ sub-matrix, denoted as $\mathbf{N}_\nu$, of $U$ for active neutrinos is in general non-unitary,
$\mathbf{N}_\nu \mathbf{N}_\nu^\dagger \neq 1$. This non-unitarity will result in various observable effects.
However, one expects that the off-diagonal elements of $|\mathbf{N}_\nu \mathbf{N}_\nu^\dagger|$ are of the order of ${\cal O}(\epsilon_1^2) \sim 10^{-6} \times (v_t/\mbox{GeV})^2 \times (\mbox{TeV}/v_L)^2$, which is roughly below the current experimental limits, $\lesssim 10^{-5}$\cite{nonU_1,nonU_2}. Therefore, we will leave the comprehensive study of these precision tests to future work.
\section{Neutrino oscillations and data fitting}
First, we provide a simple, realistic solution which can accommodate the neutrino data. Then we move on to the more general numerical survey where the solutions will be fed into the later study of lepton flavor changing processes.
To simplify the discussion, we assume that the heavy $N$'s are degenerate ($\delta=0$), $y_2\propto \mathbf{1}$, and that all the Yukawa couplings in $y_1$ are of the same order, with no hierarchy among them.
The $(9\times 9)$ mass matrix looks like
\beq
{\cal M}_N = v_L \left( \begin{array}{ccc}
\mathbf{0} & \epsilon y_1 & \mathbf{0} \\
\epsilon y_1^T& \mathbf{0} & \mathbf{1}\\
\mathbf{0}& \mathbf{1}& \epsilon y_2 \\
\end{array} \right)\,,
\eeq
where $\epsilon\sim {\cal O}(v_t/v_L)$ is an unknown overall constant which controls the amplitude of perturbation and the elements of $y_1$ are of $\sim {\cal O}(1)$.
As discussed previously, in the leading approximation, the $(3\times 3)$ active neutrino mass matrix reads
\beq
{\cal M}^\nu_{ij} \sim \epsilon^3 v_L \{y_1\}_{i\alpha} \{y_2\}_{\alpha\beta} \{y_1\}_{j \beta}
\sim y_2 \epsilon^3 v_L \{y_1\}_{i\alpha} \{y_1\}_{j \alpha}\,.
\eeq
If $y_1$ is highly democratic, namely,
\beq
y_1 \propto \mathbf{I}_c \equiv \left( \begin{array}{ccc} 1&1&1\\ 1&1&1\\1&1&1\\ \end{array} \right)\,,
\eeq
the resulting active neutrino mass matrix also has the pattern ${\cal M}^\nu \propto \mathbf{I}_c$
which is of rank one and has two zero eigenvalues. It naturally leads to normal hierarchical neutrino masses.
Taking into account the data, the realistic mass matrix for normal hierarchy(NH) instead takes the form
\beq
{\cal M}^\nu \sim \left( \begin{array}{ccc} 0.1&0.1&0.1\\ 0.1&1&1\\0.1&1&1\\ \end{array} \right) \times (0.03) \mbox{ eV}
\eeq
if $m_1\simeq 0$; to simplify the discussion, we set $\delta_{CP}=0$.
A simple solution that arrives at such a pattern is
\beq
y_1 \sim \frac{y}{\sqrt{3}} \left( \begin{array}{ccc} -0.3&0.3&0.3\\ 1&1&1\\1&1&1\\ \end{array} \right)\,,\,\,
y_2\sim y \mathbf{1}
\eeq
which has an apparent $\mu$--$\tau$ symmetry.
This can be realized in extra-dimensional models by arranging the amount of overlap of the higher-dimensional fermion wave functions; see for example \cite{RS1_1,RS1_2,SF}.
On the other hand, a more subtle construction of $y_1$ is required to accommodate the inverted hierarchy (IH) case.
For example, if $m_3\simeq 0$ and $\delta_{CP}=0$, the following realistic neutrino mass matrix
\beq
{\cal M}^\nu \sim \left( \begin{array}{ccc} 1.6&-0.2&-0.2\\ -0.2&0.9&-0.8\\-0.2&-0.8&0.8\\ \end{array} \right) \times (0.03) \mbox{ eV}
\eeq
can be generated by
\beq
y_1 \sim y \left( \begin{array}{ccc} -0.7&-0.7&-0.7\\ 0.3&0.6&-0.7\\-0.1&-0.4&0.8\\ \end{array} \right)\,,\,\,
y_2\sim y \mathbf{1}\,.
\eeq
For both the NH and IH cases, $ y^3 v_t^3 /v_L^2 \sim 0.03$ eV. Taking $v_t =1$ GeV and $v_L=1(5)$ TeV, we have $y \sim 0.03\,(0.09)$.
This simple solution with $y_2\sim y \cdot\mathbf{1}$ gives us a rough idea of the Yukawa coupling strengths.
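As a quick arithmetic cross-check, inverting $m_\nu\sim y^3 v_t^3/v_L^2$ for $y$ with the quoted central values reproduces these numbers:

```python
# Invert m_nu ~ y^3 v_t^3 / v_L^2 for the common Yukawa scale y, using the
# central values quoted in the text: m_nu ~ 0.03 eV and v_t = 1 GeV.
m_nu_GeV = 0.03e-9
v_t = 1.0
ys = []
for v_L in (1000.0, 5000.0):                    # v_L = 1 TeV and 5 TeV
    y = (m_nu_GeV * v_L**2 / v_t**3) ** (1.0 / 3.0)
    ys.append(y)
    print(v_L, round(y, 2))                     # -> 0.03 and 0.09
```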
For the realistic data fitting, we perform a comprehensive numerical scan
with the working assumption that $|(y_2)_{ij}|\simeq y_2$ and that the heavy $N$'s are nearly degenerate.
These assumptions can be relaxed, giving rise to more free parameters to fit the data.
Moreover, the Yukawa couplings are taken to be complex in the numerical study to accommodate a nonzero CP phase, $\delta_{CP}$, which current data hint at.
In any case, it is clear that the resulting neutrino mass is of order $m_\nu\sim y_1^2 y_2 v_t^3/M_N^2$.
We adopt the following $3\sigma$ ranges from \cite{NuFit} for the neutrino oscillation parameters.
For NH,
\beqa
&&31.42^\circ<\theta_{12}<36.05^\circ\,,\,40.3^\circ<\theta_{23}<51.5^\circ\,,\,8.09^\circ<\theta_{13}<8.98^\circ\,,\,
144^\circ<\delta_{CP}<374^\circ\,,\nonr\\
&&\Delta m_{21}^2=(6.8-8.02)\times 10^{-5}\mbox{ eV}^2\,,
\Delta m_{31}^2=(2.399-2.593)\times 10^{-3}\mbox{ eV}^2\,.
\eeqa
As for the IH case, the corresponding $3\sigma$ ranges are:
\beqa
&&31.43^\circ<\theta_{12}<36.06^\circ\,,\,41.3^\circ<\theta_{23}<51.7^\circ\,,\,8.14^\circ<\theta_{13}<9.01^\circ\,,\,
192^\circ<\delta_{CP}<354^\circ\,,\nonr\\
&&\Delta m_{21}^2=(6.8-8.02)\times 10^{-5}\mbox{ eV}^2\,,
\Delta m_{31}^2=-(2.369-2.562)\times 10^{-3}\mbox{ eV}^2\,.
\eeqa
The lightest neutrino masses, $m_{\mbox{\tiny lightest}}$, for both NH and IH are allowed to vary in the range between $10^{-4}$eV and $0.2$eV so that the cosmological bound, $\sum_j m_j<0.57$eV at $95\%$ C.L.(from CMB spectrum and gravitational lensing data only)\cite{numassCos}\footnote{This upper limit
has been improved to $\sum_j m_j<0.44$eV ( or $m_{\mbox{\tiny lightest}}\lesssim 0.15$eV ) at $95\%$ C.L.\cite{Planck_2018} recently. }, can be met.
Once $m_{\mbox{\tiny lightest}}$ is fixed, $m_{1,2,3}$ can be determined from the measured mass squared differences.
Then the effective active neutrino mass matrix can be obtained by
\beq
{\cal M}_\nu = U_{PMNS}^* \cdot\mbox{diag}(m_1,m_2,m_3)\cdot U_{PMNS}^\dagger\,.
\eeq
In the standard parametrization, the rotation matrix is given by\footnote{In general $U_{PMNS}=U^\dagger_{c.l.}U$ where $U_{c.l.}$ is the unitary matrix that diagonalizes the
SM charged leptons mass matrix and $U$ is the neutrino rotation matrix. In the limit that
the $\mathbf{E}$ decouples from the $\mathbf{e}$ we can take $U_{c.l.}=\mathbf{1}$.}
\beq
\label{eq:definition_of_pmns}
U_{\mathrm{PMNS}}=\left(\begin{array}{ccc} 1 & 0 & 0\\
0 & c_{23} & s_{23}\\
0 & -s_{23} & c_{23}\end{array}\right)\left(\begin{array}{ccc}
c_{13} & 0 & s_{13} e^{-i\delta} \\
0 & 1 & 0 \\
-s_{13} e^{i\delta} & 0 & c_{13}\end{array}
\right)\left(\begin{array}{ccc}
c_{12} & s_{12} & 0 \\
-s_{12} & c_{12} & 0 \\
0 & 0 & 1\end{array}\right)
\begin{pmatrix}
1 & 0 & 0
\\
0 & e^{ i \alpha_{21}/2} & 0
\\
0 & 0 & e^{ i \alpha_{31}/2}
\end{pmatrix}
\eeq
where the shorthand $s_{12}\equiv \sin\theta_{12}$ and the like are used.
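The pipeline above, from $m_{\mbox{\tiny lightest}}$ and the mass-squared splittings to the effective mass matrix ${\cal M}_\nu$, can be sketched in a few lines of Python. The NH angle values used below are illustrative stand-ins near the allowed ranges (Majorana phases set to zero), not the output of our scan.

```python
import cmath, math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

def u_pmns(t12, t23, t13, dcp):
    # standard parametrization R23 * R13(delta) * R12 (Majorana phases = 0)
    s12, c12 = math.sin(t12), math.cos(t12)
    s23, c23 = math.sin(t23), math.cos(t23)
    s13, c13 = math.sin(t13), math.cos(t13)
    r23 = [[1, 0, 0], [0, c23, s23], [0, -s23, c23]]
    r13 = [[c13, 0, s13 * cmath.exp(-1j * dcp)], [0, 1, 0],
           [-s13 * cmath.exp(1j * dcp), 0, c13]]
    r12 = [[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]]
    return matmul(matmul(r23, r13), r12)

# NH masses (eV) from m_lightest and central splittings (eV^2)
m1 = 1e-4
m2 = math.sqrt(m1**2 + 7.4e-5)
m3 = math.sqrt(m1**2 + 2.5e-3)

U = u_pmns(math.radians(33.6), math.radians(45.0),
           math.radians(8.5), math.radians(230.0))

# M_nu = U* diag(m) U^dagger; complex symmetric, as a Majorana mass must be
Ustar = [[U[i][j].conjugate() for j in range(3)] for i in range(3)]
Mnu = matmul(matmul(Ustar, [[m1, 0, 0], [0, m2, 0], [0, 0, m3]]), dagger(U))

uud = matmul(U, dagger(U))
unit_err = max(abs(uud[i][j] - (i == j)) for i in range(3) for j in range(3))
print(unit_err)                      # ~ 1e-16: unitary by construction
print(m1 + m2 + m3)                  # well below the 0.57 eV bound
```

The same routine, run over random draws of the angles within the $3\sigma$ ranges, is the starting point of the scan described below.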
Each element of the $y_2$ Yukawa matrix is a random number between $0.7$ and $1.3$ times an overall unknown factor $y_2$, with either sign.
We also require that the ratio of the largest to the smallest absolute value in $y_1$ be smaller than $10$.
About $10^5$ such solutions are prepared for both NH and IH cases.
The realistic Yukawa coupling configurations can be used for predicting the lepton flavor violating processes.
The results will be displayed in the next section.
\section{ $\mu\ra e \gamma$ and $a_\mu$}
With the rich set of exotic leptons introduced for anomaly cancelation, it is important to examine
the constraints from charged lepton flavor violation (CLFV) searches.
The $\mu\ra e \gamma$ process can be mediated by the triplet running in the loop, see Fig.\ref{fig:MEG}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{mu_e_gamma.eps}
\caption{Feynman diagrams for the $\mu\ra e \gamma$ transition; the photon can attach to any charged line.}
\label{fig:MEG}
\end{figure}
In the lepton mass basis, with the $\mathbf{V}_B$ rotation, the triplet coupling can be approximated as
\beqa
\left. \frac{(y_1)_{ij}}{\sqrt{2}} \right\{ && \bar{T}_0 \bar{\nu^c}_i(N_{j+} +N_{j-}) + T_{++} \bar{e}^c_i(E_{j+}+E_{j-} ) \nonr\\
&& \left. +\frac{1}{\sqrt{2}} T_+ \left[ \bar{\nu^c}_i(E_{j+}+E_{j-} ) +\bar{e}^c_i (N_{j+} +N_{j-}) \right]
\right\} +h.c.
\eeqa
where $i,j=1,2,3$ are the generation indices, and $\pm$ denote the different mass eigenstates within each generation.
The 1-loop contributions can be calculated to be
\beqa
\Delta a_\mu = -\sum_{i=1}^6 {|(y_1)_{\mu i}|^2 m_\mu^2 \over 32\pi^2} \left[ {I_1\left((\tau_i^{--})^{-1}\right) \over M_{E_i}^2}
+{2 I_1\left(\tau_i^{--}\right) \over M_{T_{--}}^2}
+ \frac{1}{2}{I_1\left(\tau_i^N\right) \over M_{T_-}^2}\right]
\label{eq:mu_g-2}
\eeqa
where $\tau_i^{--}\equiv \frac{M_{E_i}^2}{M_{T_{--}}^2}$, $\tau_i^N \equiv \frac{M_{N_i}^2}{M_{T_{-}}^2} $, and
\beq
I_1(x)=\int^1_0 dz {z(1-z)^2 \over 1-z+ x z}= {1\over 6(1-x)^4}\left[1-6x+3x^2+2x^3-6x^2 \ln x\right]\,.
\eeq
See Fig.\ref{fig:F1} for the plot of this function.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{loop_F.eps}
\caption{The loop function $I_1(x)$.}
\label{fig:F1}
\end{figure}
When $x\ll 1$ and $x\sim 1$, the loop function can be expanded as
\beqa
I_1(x)&\simeq& \frac{1}{6}-\frac{x}{3}-\left(\frac{11}{6}+\ln x\right)x^2+{\cal O}(x^3)\,,\\
&\simeq& \frac{1}{12}-\frac{x-1}{30}+\frac{(x-1)^2}{60}+{\cal O}((x-1)^3)\,.
\eeqa
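The closed form of $I_1(x)$ and its limits can be cross-checked numerically; the sketch below compares it with a direct Simpson-rule evaluation of the defining integral (the values at $x=0.9$ and $x=1.1$ sit close to $1/12$, as expected from the $x\ra 1$ limit).

```python
import math

def I1(x):
    # closed form of the loop function I_1(x); x = 1 handled separately
    if x == 1.0:
        return 1.0 / 12.0
    return (1 - 6*x + 3*x*x + 2*x**3 - 6*x*x*math.log(x)) / (6 * (1 - x)**4)

def I1_numeric(x, n=20001):
    # composite Simpson's rule for int_0^1 z(1-z)^2 / (1 - z + x*z) dz
    h = 1.0 / (n - 1)
    s = 0.0
    for i in range(n):
        z = i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += w * z * (1 - z)**2 / (1 - z + x * z)
    return s * h / 3.0

for x in (0.05, 0.5, 0.9, 1.1, 2.0):
    print(x, I1(x), I1_numeric(x))   # closed form matches the integral;
                                     # values at 0.9 and 1.1 are near 1/12
```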
The first term in the square bracket of Eq.(\ref{eq:mu_g-2}) is the contribution where the photon attaches to the heavy charged lepton. The second and third terms are the
contributions where the photon attaches to the $T_{--}$ and $T_-$, respectively.
Because of the electric charge, the $T_{--}$ contribution has an extra factor 2.
Note also the factor of one half associated with the $T_-$ contribution, which is due to the extra $1/\sqrt{2}$ in the singly charged triplet-fermion vertex coupling.
Moreover, assuming that $M_E\sim M_N \sim M$ (so that all $I_1\sim 1/12$), the $(g-2)_\mu$ can be related to the neutrino mass,
eliminating the $y_1$ dependence,
\beq
\Delta a_\mu \sim -{3 m_\mu^2 m_\nu \over 16 \pi^2 y_2 v_t^3}\left[ I_1\left((\tau_i^{--})^{-1}\right)
+2 I_1\left(\tau_i^{--}\right)\frac{M^2}{M_T^2}
+\frac{1}{2} I_1\left(\tau_i^N\right)\frac{M^2}{M_T^2} \right]
\sim -10^{-15}\frac{(\mbox{GeV})^3}{y_2 v_t^3}
\eeq
which is negligibly small.
A similar calculation can be carried out for the $\mu\ra e \gamma$ dipole transition amplitude:
\beqa
{\cal M}^\mu_{\mu\ra e\gamma}
&=& A_{\mu e}\, \overline{e}(p_2) \left(-i\sigma^{\mu q}\right) \hat{R} \mu(p_1)\,,\nonr\\
A_{\mu e}&=& \sum_{i=1}^6 { e (y_1)_{\mu i}(y_1)_{e i}^* m_\mu \over 64\pi^2}\,\, \left[ {I_1\left((\tau_i^{--})^{-1}\right) \over M_{E_i}^2}
+{2 I_1\left(\tau_i^{--}\right) \over M_{T_{--}}^2}
+\frac{1}{2} {I_1\left(\tau_i^N\right) \over M_{T_-}^2}\right]\,,
\eeqa
where $q^\mu\equiv (p_2-p_1)^\mu$ is the photon 4-momentum, and $\hat{R}=(1+\gamma^5)/2$.
Then, the branching ratio is\cite{Chang:2005ag}
\beq
Br(\mu\ra e\gamma )= {12 \pi^2 A_{\mu e}^2 \over G_F^2 m_\mu^2}\sim {27 \alpha\over 64\pi} {(y_1)^4 (3.5 I_1)^2 \over G_F^2 M^4}
\sim 10^{-13} \left({\mbox{TeV} \over M}\right)^4 \left({(y_1) \over 0.01}\right)^4\,.
\eeq
Or, assuming that $M_E\sim M_N \sim M$, the LFV process can be related to the neutrino mass,
\beq
A_{\mu e} \sim {3 e m_\mu m_\nu \over 32 \pi^2 y_2 v_t^3}\left[ I_1\left((\tau_i^{--})^{-1}\right)
+2 I_1\left(\tau_i^{--}\right)\frac{M^2}{M_T^2}
+\frac{1}{2} I_1\left(\tau_i^N\right)\frac{M^2}{M_T^2} \right]\,.
\eeq
Note that the heavy lepton mass squared in the numerator and the denominator cancels out,
so the branching ratio is not very sensitive to the masses of the heavy degrees of freedom.
Compared with the most recent bound, $Br(\mu\ra e \gamma)<4.2\times 10^{-13}$ at 90\% C.L.\cite{MEG},
our numerical results are shown in Fig.\ref{fig:LFVMeA}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{MeA.eps}
\caption{The log-log plot of $Br(\mu\ra e\gamma)$ vs. $m_{\mbox{\tiny lightest}}$ (eV) in our model.
NH/IH is shown in the left/right panel. The red dashed line indicates the current experimental bound\cite{MEG}. The radius of each dot is proportional to the number of solutions found in the corresponding $Br-m_{\mbox{\tiny lightest}}$ cell.
We take $y_2 v_t^3=1(GeV)^3$ and $M_N=M_E=M_T$.}
\label{fig:LFVMeA}
\end{figure}
As can be seen from the plot, it is easier to find the solutions for larger $m_{\mbox{\tiny lightest}}$ in the IH case.
For $y_2 v_t^3=1(GeV)^3$, the $\mu\ra e\gamma$ branching ratio is right below the current experimental limit for $m_{\mbox{\tiny lightest}}\lesssim 10^{-2}$eV.
Note that the branching ratios have lower bounds, around $\sim 10^{-16}/10^{-15}$ for NH/IH case with $y_2 v_t^3=1(GeV)^3$.
Therefore, for this model to admit a realistic solution which accommodates simultaneously the neutrino oscillation data and the current $\mu\ra e\gamma$ bound, the predicted lower bounds must stay below the experimental limit. It is required that
\beq
{ 1(\mbox{GeV})^3\over y_2 v_t^3 } < 64.807(20.493)
\eeq
for NH(IH).
Thus, we arrive at an interesting lower bound on the triplet VEV,
\beq
v_t > { 0.249(0.365)\over (y_2)^{1/3} }\,\, \mbox{GeV} >0.107(0.157) \mbox{GeV}
\eeq
for the NH (IH) case, where the ultimate bound is obtained by taking the strong-coupling limit $y_2=4\pi$.
From Eq.(\ref{eq:TL_triplet_mass}), this lower bound implies that the triplet mass is roughly below $\lesssim 8$TeV if $\kappa\sim v$.
Since in our model the triplet does not carry lepton number, there is no tree-level contribution to $\mu\ra 3 e$ and the similar $\tau$ decays.
The dipole-induced $Br(\mu\ra 3e)$ will be small compared to $\mu\ra e\gamma$; the ratio\cite{Chang:2005ag} is given by
\beq
{ Br(\mu\ra 3e ) \over Br(\mu\ra e\gamma )} ={2\alpha\over 3\pi}\left(\ln\frac{m_\mu}{m_e}-\frac{11}{8}\right)\simeq 0.7\times 10^{-2}
\eeq
which makes $Br(\mu\ra 3e)<3\times 10^{-15}$ in this model.
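A quick numerical evaluation confirms the size of the ratio and the resulting bound; with $\alpha\simeq 1/137$ one obtains $\simeq 0.6\times 10^{-2}$, in the same ballpark as the quoted value (which presumably uses $\alpha$ at a slightly higher scale).

```python
import math

alpha = 1.0 / 137.036                # fine-structure constant at q^2 ~ 0
m_mu, m_e = 105.658, 0.511           # lepton masses in MeV

ratio = 2 * alpha / (3 * math.pi) * (math.log(m_mu / m_e) - 11.0 / 8.0)
print(ratio)                         # ~ 0.6e-2

# combine with the MEG bound Br(mu -> e gamma) < 4.2e-13
br_mu3e = 4.2e-13 * ratio
print(br_mu3e)                       # ~ 3e-15
```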
Similarly, the branching ratios of $\tau\ra l\gamma\, (l=e,\mu)$ are
\beq
Br(\tau\ra l\gamma) \simeq {12 \pi^2 A_{\tau l}^2 \over G_F^2 m_\tau^2}\times Br(\tau\ra e \bar{\nu}_e \nu_\tau)
\eeq
and we adopt the measured $Br(\tau\ra e \bar{\nu}_e \nu_\tau)=17.82\%$\cite{PDG2016}.
The predicted branching ratios of $\tau\ra l \gamma$ in our model are displayed in Fig.\ref{fig:LFVTlA}; they are much smaller than the current
experimental bounds: $Br(\tau\ra e \gamma)< 3.3 \times 10^{-8}$ and $Br(\tau\ra \mu \gamma)< 4.4 \times 10^{-8}$ at 90\% C.L.\cite{PDG2016}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{TeA.eps}
\\
\includegraphics[width=0.7\textwidth]{TmA.eps}
\caption{ $Br(\tau\ra e\gamma)$ and $Br(\tau\ra \mu\gamma)$ vs. $m_{\mbox{\tiny lightest}}$ in our model.
We take $y_2 v_t^3=1(GeV)^3$ and $M_N=M_E=M_T$.}
\label{fig:LFVTlA}
\end{figure}
Note that in the IH case the two channels have the same statistics, which is due to the complex conjugate pair solutions for the $y_1$ Yukawa at a given $m_{\mbox{\tiny lightest}}$ and $U_{PMNS}$.
As pointed out in \cite{CZB}, the double ratios, for example $Br(\mu\ra e\gamma)/Br(\tau\ra e\gamma)$, are independent of the unknown parameters
$y_2$, $v_t$, and the masses of the heavy degrees of freedom. They provide handles complementary to the long-baseline experiments for determining the type of neutrino mass hierarchy. Unfortunately, we have not found any notable statistical difference between the double ratios of NH and IH in this model.
\section{Triplets at colliders}
The phenomenology of the $Z_\ell$ and the
charged heavy leptons is the same as in (I), and we shall not repeat it here. The triplets
are the new players, and we discuss their signatures at the LHC below. We start with a list of their dominant decay modes.
\subsection{Decays of the triplet}
Due to the gauge couplings and SSB, the triplet scalar can decay into (a) two SM gauge bosons, collectively called $V$, (b) a lighter triplet partner plus a $V$, e.g. $T_{--}\ra T_-W^-$, and (c) two lighter triplets, e.g. $T_{--}\ra 2T_-$. The latter two require a huge mass splitting, otherwise the rates are suppressed by $v_t$, and thus can be ignored here\footnote{For example, $\Gamma(T\ra T_1 W)=G_F M^3 \lambda_{cm}^3(x_1,x_W)/(2\sqrt{2}\pi)$, where $x_1=(M_1/M)^2$ and $x_W=(M_W/M)^2$. However, there is no allowed phase space if the mass squared difference between $T$ and $T_1$ is at most $v^2$.}.
Therefore, $T\ra V_1V_2$ $ (V_{1,2}=W^\pm,Z)$ are the dominant decays, since $T$ does not couple to two SM fermions simultaneously in the weak basis.
This is very different from the case of a triplet with $l=2$, as discussed, for example, in \cite{HanT}.
Parameterizing the vertex $T V_1^\mu V_2^\nu$ Feynman rule as $i \kappa_{V_1,V_2} g^{\mu\nu}$, it is straightforward to calculate the
following decay widths:
\beqa
\Gamma(T\ra V_1 V_2)&=& {|\kappa_{V_1,V_2}|^2 \over 16\pi M_T} \lambda_{cm}(x_1^2,x_2^2)\left[2 +\left({1-x_1^2-x_2^2\over 2x_1 x_2}\right)^2\right]\,,\\
\Gamma(T\ra V_1 V_1)&=& {|\kappa_{V_1,V_1}|^2 \over 32\pi M_T} \sqrt{ 1-4 x_1^2} \left[2 +\left({1-2 x_1^2\over 2x_1^2 }\right)^2\right]\,,\\
\Gamma(T\ra V_1 \gamma )&=& {3|\kappa_{V_1,\gamma}|^2 \over 16\pi M_T} \left( 1- x_1^2\right)\,,
\eeqa
where $x_i \equiv M_{V_i}/M_T$ and $\lambda_{cm}(y,z)\equiv \sqrt{1+y^2+z^2-2y-2z-2yz}$.
The couplings are listed in Table~\ref{tab:TVV_vertex}.
\begin{table}[thb]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Vertex & $\Re T_0 W^+W^-$ & $\Re T_0 Z Z$ & $T_{--}W^+W^+$ & $T_-W^+Z$ & $T_-W^+ P$\\ \hline
$\kappa_{VV}$ & $i g_2^2 v_t$ & $2i \frac{g_2^2}{c_W^2} v_t$ & $i \sqrt{2}g_2^2 v_t$ & $i \frac{g_2^2}{\sqrt{2}c_W}(1+s_W^2)v_t$ &
$-i \frac{g_2 e}{\sqrt{2}} v_t $\\\hline
\end{tabular}
\caption{Feynman rules for $TVV$ vertices. The $g^{\mu\nu}$ factors are omitted. }
\label{tab:TVV_vertex}
\end{center}
\end{table}
The typical decay widths of the charged triplets are narrow, around ${\cal {O}}(10^{-2})$ MeV, for $v_t\sim 1$ GeV and $M_T\sim 1$ TeV. Nevertheless, a charged triplet still decays promptly once produced. Moreover, the triplet signal will be a 4-fermion final state from the decays of the two gauge bosons, or a 2-fermion final state plus a high-energy photon.
On the other hand, if there is mixing between $\Re T_0$ and the Higgs boson, $ t_0$ can decay into fermion pairs. The two body decay width of $t_0$ is given by
\beq
\Gamma( t_0\ra f\bar{f})=\frac{|U_h^{12}|^2 G_F M_T}{ 4\pi \sqrt{2}} \sum_f N_c m_f^2\left(1-\frac{4m_f^2}{M_T^2}\right)^{3/2}
\eeq
where $U_h$ is given in Eq.(\ref{eq:Uh}). This will be dominated by the $t\bar{t}$ final state if $M_T \gg M_t=174$ GeV.
LHC-1 gave a bound on the SM signal strength, $\mu=1.09\pm 0.11$\cite{LHC_mu}, which implies $|U_h^{12}|^2<0.13$ at the $2\sigma$ level.
For $M_T=0.5(1.0)$TeV, the 2-body decay width has an upper bound $\Gamma( t_0\ra t\bar{t}) <8(36)$ MeV,
and $\Gamma( t_0\ra b\bar{b}) <0.57(1.1)$ MeV.
The mixing with the SM Higgs also provides additional two-gauge-boson decay widths,
\beqa
\Gamma( t_0\ra W^+W^-)&=&\frac{|U_h^{12}|^2 G_F M_T^3} {32\pi \sqrt{2}}\sqrt{1-x_W}(4-4x_W+3x_W^2)\,,\nonr\\
\Gamma( t_0\ra ZZ)&=&\frac{|U_h^{12}|^2 G_F M_T^3}{ 64\pi \sqrt{2}}\sqrt{1-x_Z}(4-4x_Z+3x_Z^2)\,,
\eeqa
where $x_V\equiv 4M_V^2/M_T^2$. For $M_T=0.5(1.0)$TeV, $\Gamma( t_0\ra W^+W^-)\simeq 2\Gamma(t_0\ra ZZ)<5.3(42.6)$GeV.
Finally, we discuss the $t_0\ra 2h_{SM}$ decay.
Since $|U_h^{12}|\ll 1$, the relevant Lagrangian is roughly
\beq
\simeq \left[3\lambda_H v U_h^{12} +\frac{1}{2}(\lambda_1 v_t +\kappa) \right]h_{SM}^2 t_0
\eeq
and the $\kappa$ term dominates. We have
\beq
\Gamma( t_0\ra 2h_{SM})\simeq {\kappa^2 \over 32\pi M_T}\sqrt{1-x_H} = 0.172(0.096)\times\left({\kappa\over 100 \mbox{GeV}}\right)^2 \mbox{GeV}
\eeq
for $M_T=0.5(1.0)$TeV.
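These numbers are easy to reproduce; a minimal check, assuming $\kappa$-term dominance and $m_h=125$ GeV:

```python
import math

kappa = 100.0                        # GeV, reference value from the text
m_h = 125.0                          # GeV, SM Higgs mass
gammas = []
for M_T in (500.0, 1000.0):
    x_H = 4 * m_h**2 / M_T**2        # x_H = 4 M_h^2 / M_T^2
    g = kappa**2 / (32 * math.pi * M_T) * math.sqrt(1 - x_H)
    gammas.append(g)
    print(M_T, round(g, 3))          # -> 0.172 and 0.096 GeV
```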
With nonzero mixing with the Higgs, two gauge bosons still constitute the dominant decay channels of $t_0$.
Moreover, for $M_{t_0}\gg M_Z$, the decay branching ratios $Br(t_0\ra ZZ)\simeq 1/3$ and $Br(t_0\ra W^+ W^-)\simeq 2/3$ are quite robust.
\subsection{Triplet production at hadron colliders}
As seen in the previous section, the production and decay of $t_0$ are very sensitive to its mixing with the SM Higgs.
We start with the case where the mixing between $h$ and $\Re T_0$ is negligible and focus on the production of the charged triplets at the collider.
The pair production at the LHC is mainly by the Drell-Yan processes through the $TTV$ vertices.
The gauge boson associated production cross section, $\sigma(pp\ra V T)$, is proportional to $v_t^2$ and negligible.
Ignoring the mixing and the mass differences, $\sigma(pp\ra T_+T_{--})=\sigma(pp\ra T_0^* T_{-})$
and $\sigma(pp\ra T_-T_{++})=\sigma(pp\ra T_0 T_{+})$, since each pair has the same couplings and is mediated by the same s-channel $W$-exchange diagrams.
The cross sections at LHC14 for some typical triplet masses, listed in Table~\ref{tab:LHC_Tc_prod}, are evaluated with the program CalcHep\cite{CalcHep} using the CTEQ6l1\cite{CTEQ} PDF.
Note that $pp\ra t\bar{t}W$ will be the dominant SM background to $T_{--}T_{++}$.
After applying proper cuts, a doubly charged triplet with a mass up to about $0.7$ TeV, decaying mainly into di-bosons, can be probed at LHC14
with an integrated luminosity of $300\, fb^{-1}$\cite{HanT}. However, we defer a full study of the signal and a proper treatment of the background to future
work.
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|c|c|c|c|c|}
\hline
$M_T$ (TeV)& $T_{++}T_{--}$ &$T_{+}T_{-}$ & $T_+ T_{--}$ &$T_{++}T_-$ \\\hline
$0.5$ & $1.77$ & $0.179$ & $0.872$& $2.40$\\ \hline
$0.7$ & $0.345$ & $3.50\times 10^{-2}$&$0.157$& $0.496$\\ \hline
$1.0$ & $4.62\times 10^{-2}$ & $0.47\times 10^{-2}$&$1.93\times 10^{-2}$& $7.01\times 10^{-2}$\\ \hline
\end{tabular}
\caption{Charged triplet boson pair production cross sections (in $fb$) at LHC14. Here we have neglected the effects from $\Re H_0$ and $\Re T_0$ since they are small and give subleading contributions.}
\label{tab:LHC_Tc_prod}
\end{center}
\end{table}
In contrast, the real part of the neutral triplet\footnote{The single production of $A_0$ can be ignored since its mixing with the Goldstone $G_0$ is suppressed by $v_t/v$; otherwise, it couples only to one SM lepton doublet and one $L_1$, Eq.(\ref{eq:all_Yukawa}).}
can be singly produced via gluon fusion through the mixing $(U_h^{12})$. Our estimates
of the production cross sections at the LHC and future hadron colliders are given in Table. \ref{tab:LHC_ggT0_prod}.
The SM backgrounds are estimated by evaluating the production cross section with the di-boson invariant mass in the $M_T\pm 50\mbox{GeV}$ range.
Derived from the numbers listed in Table~\ref{tab:LHC_ggT0_prod}, the $5\sigma$ limits in the 2-dimensional plane of $|U_h^{12}|^2$ versus effective luminosity are shown in Fig.\ref{fig:LHC}.
The limit is determined by
\beq
{\mbox{Signal}\over \sqrt{\mbox{Background }}}=
{ \sqrt{{\cal L}_0\times \xi_{VV}} \times\sigma_S\times Br(t_0\ra VV) \over \sqrt{\sigma_{BG} } }=5
\eeq
where $\xi_{VV}$ is the efficiency of detection of $VV$ final states, and ${\cal L}_0$ is the integrated luminosity.
It can be seen that a $t_0$ with a mass of $1$ TeV and $|U_h^{12}|^2=0.05$ could be directly studied at the LHC14 with $\sim 1\,\mbox{ab}^{-1}$ effective luminosity.
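As a quick numerical cross-check, the effective luminosity needed for a $5\sigma$ excess can be obtained by inverting the significance condition above. The sketch below uses the $\sqrt{s}=14$ TeV, $M_{t_0}=1$ TeV numbers from Table~\ref{tab:LHC_ggT0_prod}; the di-boson branching ratio $Br(t_0\ra W^+W^-)\approx 2/3$ is our illustrative assumption, and the signal cross section is rescaled from $|U_h^{12}|^2=0.1$ to $0.05$ since the production amplitude is proportional to the mixing.

```python
from math import sqrt

def required_eff_lumi(sigma_S_fb, br, sigma_BG_fb, n_sigma=5.0):
    """Effective luminosity L0*xi (in fb^-1) for an n_sigma excess,
    inverting  n_sigma = sqrt(L0*xi) * sigma_S * Br / sqrt(sigma_BG)."""
    return (n_sigma / (sigma_S_fb * br))**2 * sigma_BG_fb

# Table values for sqrt(s) = 14 TeV, M_t0 = 1 TeV (quoted for |U_h^12|^2 = 0.1)
sigma_S = 7.42 * (0.05 / 0.1)   # rescale to |U_h^12|^2 = 0.05 (sigma ~ |U|^2)
sigma_BG_WW = 1.49e2            # SM WW background in the M_T +- 50 GeV window
br_WW = 2.0 / 3.0               # assumed di-boson branching ratio (illustrative)

L_eff = required_eff_lumi(sigma_S, br_WW, sigma_BG_WW)
print(f"L0*xi ~ {L_eff:.0f} fb^-1")   # a few hundred fb^-1, i.e. O(1) ab^-1
```

The result is of order $1\,\mbox{ab}^{-1}$, consistent with the estimate quoted in the text.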
\begin{table}[h!]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\sqrt{s}$(TeV)& $M_{t_0}$(TeV) & $\sigma(pp\ra t_0)$ &$\sigma(pp\ra W^+W^-)_{SM}$ & $\sigma(pp\ra ZZ)_{SM}$ \\\hline
$14$ &$0.5$ & $2.88\times 10^2$ & $2.56\times 10^3$ & $4.0\times10^{2}$ \\ \hline
$14$ &$1.0$ & $7.42$ & $1.49\times 10^2$ & $2.35\times10^{1}$ \\ \hline
$14$ &$2.0$ & $6.02\times 10^{-2}$ & $4.93$ & $0.879$ \\ \hline
$30$ &$0.5$ & $1.68\times 10^3$ & $7.27\times 10^3$ & $1.17\times10^{3}$ \\ \hline
$30$ &$1.0$ & $6.72\times 10^1$ & $5.49\times 10^2$ & $9.45\times10^{1}$ \\ \hline
$30$ &$2.0$ & $1.16$ & $3.56\times 10^1$ & $8.30$ \\ \hline
$100$ &$0.5$ & $1.63\times 10^4$ & $3.14\times 10^4$ & $5.27\times10^{3}$ \\ \hline
$100$ &$1.0$ & $9.73\times 10^2$ & $3.13\times 10^3$ & $4.26\times10^{2}$ \\ \hline
$100$ &$2.0$ & $3.03\times 10^1$ & $4.09\times 10^2$ & $1.40\times10^{2}$ \\ \hline
\end{tabular}
\caption{Gluon fusion production cross sections (in fb) of the neutral triplet boson at the LHC and beyond.
Here we assume that $|U_h^{12}|^2=0.1$.
}
\label{tab:LHC_ggT0_prod}
\end{center}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{LHC_5sigma.eps}
\caption{ The $5 \sigma$ limits for detecting a triplet $t_0$ at the LHC. The solid curves are for $t_0$ decaying into $W^+W^-$, and the dashed ones for $t_0\ra ZZ$.
The color codes, (blue, black, orange), are for $M_T =(0.5,1.0,2.0)$ TeV, respectively.
The horizontal dashed red line is the LHC-1 upper limit on $|U_h^{12}|^2$. }
\label{fig:LHC}
\end{figure}
\subsection{Triplet pair production at the $e^+e^-$ machine}
Triplet pair production at an $e^+e^-$ machine is mediated by the s-channel photon and $Z$ exchange diagrams.
Ignoring the mixing between $\Re T_0$ and $h$, the cross sections are easily calculated to be:
\beqa
\sigma(e^+e^-\ra t_0 A)&=&\frac{\pi \alpha^2}{3 s} \frac{1}{s_W^2 c_W^2} \left( {1\over 1-\frac{M_Z^2}{s} }\right)^2\left(1-\frac{4M_T^2}{s}\right)^{3/2}\,,\\
\sigma(e^+e^-\ra T_{+} T_{-})&=&\frac{\pi \alpha^2}{3 s}\left(1- {\tan\theta_W\over 1-\frac{M_Z^2}{s} }\right)^2\left(1-\frac{4M_T^2}{s}\right)^{3/2}\,,\\
\sigma(e^+e^-\ra T_{++} T_{--})&=&\frac{4\pi \alpha^2}{3 s}\left(1+ {\cot 2\theta_W\over 1-\frac{M_Z^2}{s} }\right)^2\left(1-\frac{4M_T^2}{s}\right)^{3/2}\,.
\eeqa
The cross sections are displayed in Fig.~\ref{fig:clic}.
Note that the interference between the photon and $Z$ contributions is destructive (constructive) for the $T_+T_-$ ($T_{--}T_{++}$) production cross section.
Because of its larger electric charge squared, $T_{\pm\pm}$ has the largest production cross section.
We use CalcHEP to estimate the SM backgrounds and find that they are about three orders of magnitude smaller than the signals, and thus negligible.
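To make the size of these cross sections concrete, the three formulas above can be evaluated numerically. The sketch below uses illustrative electroweak inputs ($\alpha=1/128$, $\sin^2\theta_W=0.231$, $M_Z=91.19$ GeV); these are our assumptions and not necessarily the values behind Fig.~\ref{fig:clic}.

```python
from math import pi, sqrt, tan, asin

# Assumed EW inputs (illustrative values, not taken from the paper)
ALPHA, SW2, MZ = 1.0 / 128.0, 0.231, 91.19   # alpha_em, sin^2(theta_W), M_Z [GeV]
GEV2_TO_FB = 3.894e11                        # (hbar c)^2: GeV^-2 -> fb

def triplet_pair_xs(sqrt_s, MT):
    """Tree-level e+e- -> triplet-pair cross sections (fb), per the formulas above."""
    s = sqrt_s**2
    if s <= 4 * MT**2:
        return 0.0, 0.0, 0.0                  # below pair-production threshold
    beta3 = (1 - 4 * MT**2 / s)**1.5          # P-wave threshold factor
    zprop = 1.0 / (1 - MZ**2 / s)             # Z-propagator ratio
    thw = asin(sqrt(SW2))
    base = pi * ALPHA**2 / (3 * s) * beta3 * GEV2_TO_FB
    sig_t0A = base / (SW2 * (1 - SW2)) * zprop**2          # t0 A
    sig_pm  = base * (1 - tan(thw) * zprop)**2             # T+ T-  (destructive)
    sig_pp  = 4 * base * (1 + zprop / tan(2 * thw))**2     # T++ T-- (constructive)
    return sig_t0A, sig_pm, sig_pp

# Example point: sqrt(s) = 3 TeV, M_T = 0.5 TeV
s_t0A, s_pm, s_pp = triplet_pair_xs(3000.0, 500.0)
print(f"t0A: {s_t0A:.2f} fb, T+T-: {s_pm:.2f} fb, T++T--: {s_pp:.2f} fb")
```

As expected from the interference pattern, the hierarchy is $\sigma(T_{++}T_{--}) > \sigma(t_0 A) > \sigma(T_+T_-)$.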
\begin{figure}[h!]
\centering
\includegraphics[width=0.93\textwidth]{ee_prodXS.eps}
\caption{ The pair production cross sections (in fb) for four different masses, $M_T=0.5,0.7,1.0,1.4$ TeV, vs. $\sqrt{s}$ at the $e^+e^-$ collider. }
\label{fig:clic}
\end{figure}
\section{EW precision, $\Delta S$ and $\Delta T$ from the exotic fermions and scalars}
\subsection{Tree-level $\rho$-parameter}
Since $T$ gets a VEV, $v_t$, the tree-level $\rho$-parameter is less than unity:
\beq
1+\alpha\Delta T_{tree}= \rho_{tree}= {v^2+2v_t^2 \over v^2+4v_t^2 } =1 - {2 v_t^2 \over v^2+4v_t^2 }\,.
\eeq
Therefore, the loop-induced $\Delta T_{loop}(>0)$ can be compensated by $\Delta T_{tree}(<0)$.
For $\Delta T= 0.08\pm0.12$\cite{PDG2016}, the $2\sigma$ range is
\beq
-0.16<\Delta T=\Delta T_{tree}+\Delta T_{loop} <0.32\,.
\eeq
Using only the tree-level contribution from the triplet, the above implies that $v_t<5.94$ GeV.
Combining this with neutrino mass generation and the $\mu\ra e\gamma$ limit, we obtain the following interesting range
\beq
0.107<v_t< 5.94\; \mbox{GeV}\,.
\label{eq:vtbound_no_loop}
\eeq
\subsection{Loop corrections}
Since anomaly cancelation mandates the addition of extra leptons, it is important to
know how the quantum corrections to $\Delta T$ and $\Delta S$ from these new states alter the above bound on $v_t$.
For each generation, the contributions from exotic leptons are \cite{Chang:2018vdd}
\beqa
\Delta T_{F_i} &=& {1\over 16\pi s_W^2 M_W^2} \left( M_{N_i}^2 +M_{E_i}^2 -2 {M_{N_i}^2 M_{E_i}^2 \over M_{N_i}^2 -M_{E_i}^2 } \ln\frac{M_{N_i}^2}{M_{E_i}^2} \right)\,,\\
\Delta S_{F_i} &=& \frac{1}{6\pi} \left(1 +\ln\frac{M_{N_i}^2}{M_{E_i}^2} \right)\,,
\eeqa
where $i=1,2$. From the triplet $T=(T_0,T_{-},T_{--})^T$, they are
\beqa
\Delta T_T &=& {1\over 8\pi s_W^2 M_W^2} \left( M_{T_0}^2 +M_{T_{-}}^2 -2 {M_{T_0}^2 M_{T_{-}}^2\over M_{T_0}^2 -M_{T_{-}}^2 } \ln\frac{M_{T_0}^2}{M_{T_{-}}^2} \right.\nonr\\
&& \left. + M_{T_{-}}^2 +M_{T_{--}}^2 -2 {M_{T_{-}}^2 M_{T_{--}}^2\over M_{T_{-}}^2 -M_{T_{--}}^2 } \ln\frac{M_{T_{-}}^2}{M_{T_{--}}^2} \right)\,,\\
\Delta S_T &=& \frac{1}{3\pi} \ln\frac{M_{T_0}^2}{M_{T_{--}}^2}\,.
\eeqa
To simplify the discussion, we assume that all the exotic charged (neutral) leptons have the same mass $M_E$ ($M_N$), that $T_-$ and $T_{--}$ are degenerate,
and implement the current limit $\Delta S=0.05\pm 0.10$\cite{PDG2016}.
To proceed, we assume that the mass squared differences, $|M_E^2-M_N^2|, |M_{T_0}^2-M_{T_-}^2|$, are at most $v^2$ (see Sec. 2). It is easy to generalize to other values.
We define two variables, $x_E\equiv (M_N/M_E)^2$ and $x_T=(M_{T_0}/M_{T_-})^2$, for the discussion.
In terms of these variables
\beqa
\Delta T&=& \frac{1}{8\pi s_W^2 M_W^2}\left[ 3 M_E^2 I_2(x_E)+ M_{T_-}^2 I_2(x_T)-16\pi^2 v_t^2\right]\,,\\
\Delta S&=&\frac{1}{\pi}(1+\ln x_E+\frac{1}{3}\ln x_T)\,,
\eeqa
where
\beq
I_2(x)=1+x- \frac{ 2x \ln x }{x-1}\,.
\eeq
The function satisfies $I_2(1)=0$ and increases monotonically as $x$ decreases, reaching $I_2(0)=1$.
The $2\sigma$ range of $\Delta S$, $-0.15< \Delta S<0.25$, amounts to
\beq
0.3317< (x_E^3 x_T)^{1/4}<0.8513\,.
\label{eq:Del_S_bound}
\eeq
Apparently, $x<1$ is preferred, and one needs $M_N<M_E$, $M_{T_0} <M_{T_-}$, or both to satisfy the requirement from $\Delta S$.
Recall that we assume the mass squared differences are at most $v^2$. In the case of the largest mass squared splitting, $M_E$ and $M_{T_-}$ can be related to the $x$'s as
\beq
M_E^2= \frac{v^2}{1-x_E}\,,\;M_{T_-}^2= \frac{v^2}{1-x_T}\,.
\label{eq:TS_x_M}
\eeq
We consider two simple cases: $x_E=1$, and $x_T=1$.
For $x_T=1$, the direct search for charged Higgs bosons sets a bound $M_{T_-}(=M_{T_0})>80$ GeV\cite{PDG2016}.
$\Delta S$ requires that $0.2296<x_E<0.8069$. From $\Delta T$, one has
\beq
-0.16<\frac{1}{8\pi s_W^2 M_W^2}\left[ 3 M_E^2 I_2(x_E)-16\pi^2 v_t^2\right] <0.32\,.
\eeq
The largest $\Delta T_F$ comes from the smallest $x_E$, namely, the largest mass squared difference.
By using Eq.(\ref{eq:TS_x_M}), one obtains $ v_t<23.75$ GeV, with $M_E=280.3$ GeV and $M_N=134.2$ GeV.
By the same token, when $x_E=1$, one has $0.0121<x_T<0.5253$ and $v_t < 19.72$ GeV, with $M_{T_-}=247.5$ GeV and $M_{T_0}=27.2$ GeV.
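The numbers in the $x_T=1$ case can be checked directly. The sketch below assumes $\sin^2\theta_W=0.2312$ and $M_W=80.38$ GeV (illustrative inputs, not necessarily those used in the text):

```python
from math import pi, sqrt, log

# Assumed EW inputs (illustrative): sin^2(theta_W), M_W [GeV], v [GeV]
SW2, MW, V = 0.2312, 80.38, 246.0

def I2(x):
    """The loop function I_2(x) = 1 + x - 2 x ln(x) / (x - 1)."""
    return 1 + x - 2 * x * log(x) / (x - 1)

# x_T = 1 case: the smallest x_E allowed by Delta S maximises Delta T_F
xE = 0.2296
ME2 = V**2 / (1 - xE)                 # largest splitting, Eq.(TS_x_M)
# Delta T > -0.16 gives  3 M_E^2 I2(xE) - 16 pi^2 v_t^2 > -0.16 * 8 pi sW^2 MW^2
vt2 = (3 * ME2 * I2(xE) + 0.16 * 8 * pi * SW2 * MW**2) / (16 * pi**2)
print(f"M_E = {sqrt(ME2):.1f} GeV, M_N = {sqrt(xE * ME2):.1f} GeV, "
      f"v_t < {sqrt(vt2):.2f} GeV")   # cf. 280.3, 134.2 and 23.75 GeV in the text
```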
Since $T$ does not carry lepton number, it interacts with SM fermions through the mixing $U_h^{12}$ and $b\bar{b}$ will be the dominant decay channel.
However, $t_0$ has the SM gauge interactions, see Table \ref{tab:TVV_vertex},
and the process $e^+e^-\ra Z^*\ra Z b\bar{b}$ can go with an effective mixing squared $\simeq (4v_t/v)^2|U_h^{12}|^2<0.013$.
The effective mixing squared agrees with the bound, $\lesssim 0.02$, from the direct search of neutral scalar at LEP2 for this mass\cite{LEP2}.
A full analysis yields an upper bound
\beq
v_t< 24.08 \; \mbox{GeV}\,,
\eeq
which corresponds to $x_E=x_T=0.3317$, $M_E=M_{T_-}=300.9$ GeV, and $M_N=M_{T_0}=173.3$ GeV.
The solution agrees with the current direct search bounds on the masses of a charged heavy lepton, $\gtrsim 100$ GeV\cite{HeavyLatLEP}, and the Higgs, $\gtrsim 80$ GeV\cite{PDG2016}.
Moreover, $M_N$ and $M_{T_0}$ are larger than the LEP2 bound from the $Z$ decay and the direct search for the neutral Higgs.
Comparing to the previous bound, Eq.(\ref{eq:vtbound_no_loop}), where loop contributions are not included, the upper bound on $v_t$ is pushed up by
a factor of about four.
So much for the upper bound on $v_t$; we should remark that the oblique parameters do not impose any lower bound on it.
For example, even for $v_t=0$, all the requirements from $\Delta T$ and $\Delta S$, as well as the condition that the mass squared differences be less than $v^2$, can be met when $x_E=x_T=0.8513$, $M_E=M_{T_-}=613.7$ GeV, and $M_N=M_{T_0}=566.2$ GeV.
\section{Higgs to $2\gm$}
New electrically charged degrees of freedom which couple to $h_{SM}$ modify the SM Higgs di-photon decay width.
In addition to the new charged leptons introduced for anomaly cancelation, which have been studied in \cite{Chang:2018vdd},
the charged components of the triplet also contribute.
For the $h_{SM}$ di-photon decay, the relevant terms in the Lagrangian are the $\lambda_{1,6}$ terms,
\beq
\simeq v(\lambda_1+\lambda_6/2 )h_{SM} T_+T_- + v(\lambda_1+\lambda_6 )h_{SM} T_{++}T_{--}\,,
\eeq
assuming that $|U_h^{11}|\sim 1.0$.
Although the $\lambda_{4,t}$ terms also contribute to the $h_{SM}T_+ T_-$ and $h_{SM}T_{++} T_{--}$ vertices, their strengths are doubly suppressed by $v_t$ and $U_h^{12}$, and thus can be ignored.
The di-photon decay width is thus
\beqa
\Gamma(H\ra \gamma\gamma)&=& {G_F \alpha^2 M_H^3 \over 128\sqrt{2} \pi^3}
\left| F_1(\tau_W) +\frac{4}{3}F_{1/2}(\tau_t) +\sum_{i=1}^6 y_{E_i}\frac{2 M_W}{g_2 M_{E_i}}F_{1/2}(\tau_{E_i})\right.\nonr\\
&&\left.+ (\lambda_1+\lambda_6/2 ) \frac{v^2}{2 M_{T_+}^2}F_0(\tau_{T_+})+ 4 (\lambda_1+\lambda_6 ) \frac{v^2}{2 M_{T_{++}}^2}F_0(\tau_{T_{++}})
\right|^2\,,
\eeqa
where $\tau_i\equiv (m_H/2 m_i)^2$, and all the loop functions can be found in \cite{tome}.
For the exotic leptons, the Yukawa couplings are parameterized as ${\cal L}\supset - y_{E_i} \bar{E}_i E_i h_{SM}$ in the mass basis.
Assuming that $T_-$ and $T_{--}$ are degenerate, the width reads
\beqa
\Gamma(H\ra \gamma\gamma)&=& {G_F \alpha^2 M_H^3 \over 128\sqrt{2} \pi^3}
\times \left| -8.324 + 1.834 \right.\nonr\\
&& \left. +\sum_{i=1}^6 0.32\,(3.64)\times y_{E_i} + 0.051\,(0.203)\times (\lambda_1 + 0.9 \lambda_6) \right|^2
\eeqa
for $M_{E_i}=1000(100)$GeV, and $M_{T_-}=1.0(0.5)$TeV.
The first two numbers are the dominant SM contributions from the $W^\pm$ and the top quark, respectively.
The dominant SM Higgs production channel at the LHC is through gluon fusion, which is unaffected in this model.
Therefore, the signal strength of $pp\ra h\ra \gamma\gamma$ is
\beqa
\mu_{\gamma\gamma}&\simeq &\Gamma(H\ra \gamma\gamma)/\Gamma(H\ra \gamma\gamma)_{SM}\nonr\\
&\sim& 1 - \sum_i(0.049-0.561)\times y_{E_i} -(0.0157-0.063)\times( \lambda_1+0.9\lambda_6)\,.
\eeqa
It is expected that $ |y_{E_i}| \sim m_l/v_h \ll 1$\cite{Chang:2018vdd}, and the charged leptons contribution can be ignored.
Comparing to the data, $\mu_{\gamma\gamma}=1.18^{+0.17}_{-0.14}$\cite{CMS18}, the model is safe even for $|\lambda_{1,6}|\sim{\cal O}(1)$.
This agrees with the general analysis given in \cite{CNWS}.
\section{Conclusions}
We have studied a novel neutrino mass generation mechanism in the gauged lepton number model we recently proposed \cite{Chang:2018vdd}.
The model is rendered anomaly-free by the addition of two sets of exotic chiral leptons for each generation.
The $U(1)_l$ gauge symmetry is spontaneously broken when a $l=1$ SM singlet, $\phi_1$, gets a VEV, $v_L$.
In addition, one $l=0$ SM triplet, $T$, is introduced for neutrino mass generation.
The triplet in this model differs from the well-studied $l=2$ triplet in the type-II see-saw model.
Since it carries no lepton number, the triplet does not couple to the SM leptons.
An immediate consequence is that there is no doubly charged triplet contribution to the neutrinoless double beta decays of nuclei, which in our model proceed mainly through the exchange of light neutrinos.
The VEVs of the charge-neutral parts of $T$, $v_t$, and of the SM Higgs $H$, $v\simeq 246$ GeV, break the SM electroweak gauge symmetry
and the custodial symmetry.
With only two exotic scalars, $\phi_1$ and $T$, and no RH SM singlet neutrino, the resulting
neutrino mass is of the inverse see-saw type.
Since the phenomenology of the obligatory new gauge boson $Z_\ell$ and the exotic leptons have been studied in \cite{Chang:2018vdd}, we have focused on the
physics of neutrino mass and the new $l=0$ triplet in this work.
We began the discussion with the one-generation case since the physics is clearer in this simple setting. Since the exotic leptons required for anomaly cancelation will in general mix with the SM leptons, we require
the Yukawa couplings $f_{1,2}$ to be very small. This discussion was later extended to the realistic three-generation case, and we have carefully investigated the physics of the active neutrino masses in this model.
The active neutrino masses are of the order of $v_t^3/v_L^2$ given by the dimension-six operator $O_6$. Since the electroweak precision requires a relatively small $v_t$, no further parameter fine-tuning is required other than taking $f_1\simeq 0$ mentioned before.
Both realistic NH and IH neutrino masses can be accommodated in this model. Assuming a democratic structure of the Yukawa couplings, it is more natural to obtain an NH pattern. An IH pattern requires a more subtle Yukawa structure and prefers a lightest neutrino mass $\gtrsim10^{-2}$ eV, which is promising for neutrinoless double beta decay searches.
It is worth noting that $O_6$ produces elements of the active neutrino mass matrix that are Majorana-like, i.e. of the form
$\ovl{\nu^{ic}_{L}} \nu_{L}^j$, where $i,j$ are family indices. This is the same form that $O_5$ would produce.
Thus, low energy neutrino measurements such as neutrinoless double beta decays of nuclei, tritium $\beta$ decays spectrum endpoint, and cosmological neutrino mass bounds cannot distinguish between $O_6$ or the Weinberg operator
as the origin of neutrino masses. In order to do that one needs to explore the TeV scale to discover whether there are new degrees of freedom. $O_5$ assumes that there are none whereas $O_6$ requires
new leptons below 10 TeV\footnote{ We have seen previously that the mass splitting $|M_E^2-M_N^2|\lesssim v^2$. Leptons with masses around $10$ TeV will give a splitting of $< 1$ GeV. This is much smaller than what we
have encountered and will require very delicate tuning of parameters.}. In addition, a detailed program searching for CLFV decays of muon and $\tau$ will also be useful since $O_5$ and $O_6$
have very different UV completions and thus will yield different results for these processes.
We have calculated the 1-loop triplet contributions to $a_\mu$ and the LFV processes $l_1\ra l_2 \gamma ( l_{1,2}=e,\mu,\tau )$.
$\Delta a_\mu$ is negative but negligible in this model. Thus, it cannot resolve the discrepancy
between the data and the SM expectation \cite{PDGamu}. On the other hand, we have found an interesting connection between the neutrino masses and the LFV
branching ratios. Taking into account the current limit $Br(\mu\ra e\gamma)<4.2\times 10^{-13}$, we have obtained an interesting lower bound, $v_t\gtrsim 0.1$ GeV.
Since $T$ does not couple to SM leptons, the LFV process $\mu\ra 3 e$ and the $\tau$ counterparts are mediated by the photon dipole transition
and thus predicted to be very small, $Br(\mu\ra 3e)\lesssim 10^{-15}$.
The triplet gets a VEV so that the constraint from $\Delta T$ can be relaxed. We have carefully analyzed the limits from both $\Delta S$ and $\Delta T$ and arrived at an upper bound $v_t\lesssim 24.1$ GeV, assuming the mass squared differences among the isospin components of the triplet and the heavy leptons to be at most electroweak, $\lesssim v^2$. Combining with the neutrino mass and LFV bounds, we have $0.1 \lesssim v_t\lesssim 24.1$ GeV in this model. The lower bound on $v_t$ also implies that $M_T\lesssim 8$ TeV provided that $\kappa \simeq v$.
We have studied the decays of the triplet. For $T_\pm$ and $T_{\pm\pm}$, the dominant decay channel is into di-boson.
Depending on the scalar potential, the $T_0$ component of the triplet in general mixes with the SM Higgs doublet, although the mixing squared is limited to be smaller than $0.13$ at $2\sigma$ level\cite{LHC_mu}. However, even allowing for this mixing the dominant decay channel of $T_0$ is still the SM di-boson modes.
Due to their SM gauge interactions, the charged triplets can be pair produced via Drell-Yan processes at the LHC.
In addition to the SM gauge couplings, the neutral triplet can be singly produced via gluon fusion due to its mixing with the SM Higgs. At LHC14, it is possible to probe a $t_0$ with mass up to 1 TeV and $|U_h^{12}|^2\sim0.1$ with an integrated luminosity of $300\,\mbox{fb}^{-1}$.
At the linear colliders, the signal of triplet pair production will be very clean once the center-of-mass energy is higher than the mass threshold.
For the mass range of triplet we are interested in, we have found that the bound from the current $h_{SM}\ra 2\gamma$ measurement is weak.
\begin{acknowledgments}
WFC is supported by the Taiwan Ministry of Science and Technology under
Grant No.106-2112-M-007-009-MY3 and No.105-2112-M-007-029.
TRIUMF receives federal funding via a contribution through the National Research Council of Canada.
\end{acknowledgments}
\bibliographystyle{JHEP}
\section*{Introduction}
Spatial analysis can be seen as a supervised learning problem where we seek to learn an underlying function that maps a set of inputs to an output of interest based on known input-output pairs. In supervised learning, it is crucial that the mapping is done such that the underlying function can accurately predict (or generalise) to new data. Generally, the output, or \textit{response variable}, is a variable measured at multiple geolocated points in space (e.g. case counts for a disease under investigation \cite{gething2016mapping}, anthropometric indicators like height and weight \cite{osgood2018mapping, josepha2019understanding}, or socioeconomic indicators such as access to water or education \cite{graetz2018mapping, andres2018geo}). Here, we shall consider response variables as $\textbf{y} = (y_{1}, y_{2},...,y_{N}) \in \displaystyle \mathbb{R}^N$, indicating that the metric of interest is a vector of length corresponding to the $N$ locations. The inputs are referred to as \textit{explanatory variables} or \textit{covariates} and consist of multiple independent variables taken at the same $N$ locations as the response variables. Epidemiological examples of explanatory variables are population, age, precipitation, urbanicity and spatial or space-time coordinates. Explanatory variables are given as $X = [\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}] \in \displaystyle \mathbb{R}^{N\times d}$, i.e. a matrix (signified by the upper case $X$) of length (rows) corresponding to the $N$ locations and width (columns) of $d$ representing the number of explanatory variables at each location. This matrix is often referred to as the \textit{design matrix} or \textit{basis}, with each row of the matrix representing a location and each column an explanatory variable. The final step is to define a measurement equation, $\textbf{y} = f\!(X) + \epsilon$, that links our explanatory variables to our responses.
The goal of this paper is to cast complex spatial methods into a tool familiar to most researchers - the linear model. We will introduce a large body of theory known as model-based geostatistics \cite{diggle1998model} from a machine learning perspective, slowly building the theory from scratch to arrive at a predictive algorithm capable of making inferences. To arrive at this goal, we will first introduce linear regression and a more powerful variant of linear regression called ridge regression. We will then introduce kernels as a way to create complex functions and show how kernels can be learnt in the linear regression framework. Finally, we introduce a powerful new approach, random Fourier features, that is computationally favourable. We present code wherever relevant, including a brief toy example in R where we fit a spatial problem using nothing more than linear regression and some transforms.
Within this paper, we focus on real-valued response variables as this scenario is easier to handle computationally and also better at illustrating the mathematical theory of Gaussian processes. There is considerable overlap between our introduction of kernel learning and the more traditional formulations based on Gaussian process regression. For an introduction to the Gaussian process and model-based geostatistics, we refer the reader here \cite{Rasmussen2005, Diggle2007}. For a detailed description of the mathematical correspondence between kernels and Gaussian processes, we refer the reader here \cite{kanagawa2018gaussian}.
\section*{Linear Regression}
One of the most popular choices of model in supervised learning is the linear model. A linear model assumes that the outputs are a weighted linear combination of the inputs with some additional uncorrelated (independent) noise, $\epsilon_{i}$, \cite{McCullagh:1989} such that for any location
\begin{equation}
\begin{split}
y_{i} & = x_{i,1}w_{1} + x_{i,2}w_{2} + ... + x_{i,d}w_{d} + \epsilon_{i} \\
y_{i} & = \sum_{j=1}^{d} x_{i,j}w_{j}+ \epsilon_{i} \; \; \; ,\textup{for}\; i = 1,2,...,N
\end{split}
\end{equation}
where $\textbf{w} = (w_{1},w_{2},...,w_{d})\in \displaystyle \mathbb{R}^d$ represents the column vector of weights (or coefficients) of length $d$ used to transform the inputs to the outputs. We can write this much more succinctly using matrix notation:
\begin{equation}
\mathbf{y} = X\mathbf{w}+ \mathbf{\epsilon}
\end{equation}
By changing the values of the weights, $\mathbf{w}$, we can generate a wide range of different functions. The learning task is to find an optimal set of weights (or more broadly, parameters) that produces a function reproducing our data as closely as possible. To do this, we first need to define what is meant by "as close as possible". Mathematically, we need to define a function, known as the objective function, that measures the quality of our model given a set of weights. In the case of linear regression, a common objective function is given by:
\begin{equation}
S(\mathbf{w}) = \frac{1}{2} \left \|\boldsymbol{ y }- X\mathbf{w} \right \|^2
\end{equation}
This objective function is often called the squared loss and computes the sum of the squared differences between the measured response variables and the model's predictions of the responses for a given set of weights. Note that the multiplication by one half is only added to make taking the derivative simpler and does not affect the solutions.
The learning task is therefore to find the set of weights that minimises (or maximises the negative of) the objective function and correspondingly produces the best possible solution for our model. Formally, we seek to solve $\min_{\mathbf{w }\in \mathbb{R}^d} \left \|\boldsymbol{ y }-X\mathbf{w} \right \|^2$.
Elementary calculus tells us that a minimum of a given function can be found where its derivative with respect to the parameters is zero, i.e.
\begin{equation}
\frac{\partial S(\mathbf{w})}{\partial \mathbf{w}} = X^{T}(X\mathbf{w} - \mathbf{y})=0
\end{equation}
Rearranging the derivative allows us to solve for the optimal set of weights, $\textbf{w}^{*}$, that minimise the objective function
\begin{equation}
\textbf{w}^{*} = (X^{T}X)^{-1}X^{T}\textbf{y}
\end{equation}
Equation 5 is termed the ordinary least squares (OLS) solution. With these optimal weights, the model can predict the response for any given set of inputs. For a new unseen data point, $\textbf{x}_{*}$, the model's prediction of the response, $\widehat{y}_{*}$, is computed as $\widehat{y}_{*} = \textbf{x}_{*}\textbf{w}^{*}$.
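As a minimal numerical illustration of Equation 5 (given here in Python rather than the R of our supplementary code), the sketch below fits OLS weights to simulated data via the normal equations and predicts at a new input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: N locations, d covariates, known weights plus Gaussian noise
N, d = 200, 3
X = rng.normal(size=(N, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=N)

# Ordinary least squares, Equation 5:  w* = (X^T X)^{-1} X^T y
w_star = np.linalg.solve(X.T @ X, X.T @ y)   # solve() is more stable than inv()

# Prediction at a new input x*:  y* = x* w*
x_new = rng.normal(size=d)
y_hat = x_new @ w_star
```

With ample data and little noise, the recovered weights lie close to the true ones used in the simulation.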
Minimising the objective function to find the weights that best fit the data is often referred to as training. Once training is done, we often need to check whether predictions from the model generalise well to new data not used in training, known as testing. A good model should be capable of fitting the training data well but also be able to predict well at new unobserved locations.
Linear regression is ubiquitous across all fields of science due to the ease of derivation, the simplicity of the solutions, and the guarantee that when the assumptions of the Gauss-Markov theorem are met the OLS estimator is a best linear unbiased estimator (BLUE) (see \cite{davidson2004econometric} for the standard proof). However, this does not mean that linear regression will always yield good results. Some problems are non-linear such that even the best linear model is inappropriate (we will address this using kernels in the following sections). In other scenarios, the data often violate the assumptions that guarantee OLS as the BLUE.
One of the most commonly violated assumptions is that the explanatory variables are uncorrelated (or orthogonal). In the most extreme case, one explanatory variable can be an exact linear combination of one or more other variables, termed perfect multicollinearity \cite{farrar1967multicollinearity}. A common cause of perfect multicollinearity is an underdetermined system, in which the number of explanatory variables exceeds the number of data points (the width of $X$ is greater than its length, $d > N$), even though there may be no relationship between the variables. Both multicollinear and underdetermined systems are common in epidemiological and genetic studies \cite{vatcheva2016,yoo2014study}. When the data contain perfect multicollinearity, the matrix inversion $(X^{T}X)^{-1}$ is no longer possible, preventing us from solving for the weights in Equation 5. Multicollinearity (as well as other violations of the Gauss-Markov assumptions) is characterised by poor performance in model testing and aberrant weights, often despite good performance in training.
\section*{The Bias-Variance Trade-off}
Surprisingly, one of the biggest problems with OLS is that it is unbiased. When there is a large number of predictors (i.e. $d$ is large), an unbiased estimator can greatly overfit the data, resulting in very poor predictive capability. A celebrated result in machine learning, the bias-variance decomposition \cite{Geman1992, Domingos2000}, explains why this is the case for the squared loss function. For the squared loss, a linear model $X\mathbf{w}$, some data $y$ and the true function $f(X)$, the loss can be decomposed as a sum of expectations as:
\begin{equation}
\begin{split}
\mathbb{E}[(y-X\mathbf{w})^2] &= \left(\mathbb{E}[X\mathbf{w}]-f(X)\right)^2 + \left (\mathbb{E}[(X\mathbf{w})^2]-( \mathbb{E}[X\mathbf{w}])^2 \right ) + \sigma^2 \\
loss &= bias^2 + variance + noise
\end{split}
\end{equation}
where $\sigma^2$ is the variance of the irreducible noise $\epsilon$.
Here, the bias term can be thought of as the error caused by the simplifying assumptions of the model, while the variance term tells us how much the model moves around its mean. The more complex a given model, the more closely it will be able to fit the data and the lower its bias. However, a more complex model can "move" more to capture the data points, resulting in a larger variance. Conversely, if the model is very simple, it will have low variance but might not be able to fit a complex function, resulting in high bias. This creates a trade-off: reducing bias means increasing variance and vice versa. Figure 1 allows us to visualise the consequences of this trade-off (Supplementary Code 1).
\begin{figure}[ht]
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{Figures/Fig1.png}
\caption{An example of the bias-variance trade-off for a linear/degree-1 (red), degree-2 (green) and degree-5 (blue) polynomial model given the same training data. (\textbf{A}) All three models were fitted to the same subset of 20 points derived from a simulated epidemic with a latent function $f(X)=1+X+0.2X^{2}$ with added Gaussian noise $(\epsilon \sim N(\mu=0,\sigma^{2}=150))$. The function is left truncated such that the number of cases is always greater than zero. (\textbf{B}) The mean squared error (MSE) for each model was calculated for the remaining data not used for the fitting, the so-called testing data.}
\label{fig.1}
\end{figure}
In Figure 1, data points were generated from a simulated epidemic, with the y-axis representing the number of cases and the x-axis time measured in days. The true latent function of the simulated epidemic is a degree-2 polynomial with added Gaussian noise given by $y=1+X+0.2X^{2}+\epsilon$, where $\epsilon \sim N(\mu=0,\sigma^{2}=150)$. The function is left truncated such that the number of cases is always greater than zero. Three polynomial models of different degrees were fitted (linear/degree-1 in \textit{red}, degree-2 in \textit{green} and degree-5 in \textit{blue}). The most complex, degree-5 polynomial has the closest fit to the training points in Figure 1A. However, when the error is calculated for the testing data (not used to train the model), the degree-2 model has the lowest error, as shown in Figure 1B.
The crucial point here is that despite the degree-5 model fitting the training data better than the degree-2 model, it poorly predicts data not included in the training data-set. The extra parameters in the degree-5 model allow it to move to meet all of the training points, resulting in a very low bias. However, this movement to meet the training points results in very high variance, with the model fitting to the random noise in the data. The degree-5 polynomial can therefore be said to overfit the data, with the model's high variance outweighing its low bias. The opposite is true of the linear model, whose functions cannot capture the non-linearity in the data, resulting in a high bias that outweighs its low variance: the linear model underfits the data. The degree-2 polynomial strikes a balance, with functions flexible enough to capture the non-linearity of the latent function without fitting to the noise, achieving an optimal balance between bias and variance.
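The experiment behind Figure 1 can be sketched in a few lines. The following is a minimal Python analogue of Supplementary Code 1, with the grid of time points and the train/test split chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent epidemic curve from the caption: f(x) = 1 + x + 0.2 x^2, Gaussian noise,
# left-truncated so case counts stay non-negative
def f(x):
    return 1 + x + 0.2 * x**2

x_all = np.linspace(0, 30, 120)
y_all = np.clip(f(x_all) + rng.normal(scale=np.sqrt(150), size=x_all.size), 0, None)

train = rng.choice(x_all.size, 20, replace=False)      # 20 training points
test = np.setdiff1d(np.arange(x_all.size), train)      # held-out testing points

def poly_mse(degree):
    """Fit a polynomial of the given degree on the training set and
    return its mean squared error on the held-out test set."""
    coef = np.polyfit(x_all[train], y_all[train], degree)
    pred = np.polyval(coef, x_all[test])
    return np.mean((pred - y_all[test])**2)

mse = {deg: poly_mse(deg) for deg in (1, 2, 5)}
# The degree-2 model typically attains the lowest test MSE (optimal trade-off),
# the degree-1 model underfits, and the degree-5 model overfits
```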
An approach that is frequently adopted to maintain optimal bias-variance trade-off is to restrict the choice of functions in some way. Such a restriction is referred to as regularisation and can constrain weights values, stabilise matrix inversion, and prevent over-fitting.
\section*{Ridge Regression}
The most common choice of regularisation is called Tikhonov regularisation or ridge regression/weight decay in the statistical literature \cite{Tikhonov1963, Bell1978, Hoerl1970}. The fundamental idea behind ridge regression is to prevent overly complex functions by favouring weights and thus functions with small norms (small absolute values). This is achieved by adding a penalisation term with respect to the norm of the weights to our objective function giving us the standard objective function for ridge regression:
\begin{equation}
S(\mathbf{w}) = \frac{1}{2}(\textbf{y}-X\mathbf{w})^2 + \frac{1}{2}\lambda \left\|\mathbf{w}\right \|^2
\end{equation}
The objective function for ridge regression is identical to the OLS objective function but with the addition of a regularisation term. Again, the multiplication of both components by one half is to simplify taking derivatives. The regularisation term consists of a squared Euclidean norm, $ \left\| \mathbf{w} \right \|^2$, and a regularisation parameter $\lambda$. The Euclidean norm, $ \left \| \mathbf{w} \right \| := \sqrt{w_{1}^2+ w_{2}^2+...+w_{d}^2}$, is the positive square root of the sum of the squared weights, so $ \left \| \mathbf{w} \right \|^2$ measures the complexity of a function: functions with many large weights, capable of producing highly non-linear functions, will have large norms. The parameter $\lambda$ is a positive number that scales the norm and controls the amount of regularisation. When $\lambda$ is big, complex functions with large norms are heavily penalised, with the $\lambda \left\|\mathbf{w}\right \|^2$ term significantly increasing the value of the objective function. Therefore, when $\lambda$ is big, minimising the objective function favours less complex functions and small weights. As $\lambda$ approaches zero, large norms are penalised less and less, with the product $\lambda \left\|\mathbf{w}\right \|^2$ getting smaller, allowing more complex functions with larger weights as optimal solutions to Equation 7. When $\lambda = 0$ the objective function is the same as the OLS objective.
The addition of regularisation forces the objective function to fit the data as closely as possible without creating functions that are too complex. Therefore, $\lambda$ can be used to control the bias-variance trade-off: as $\lambda$ gets bigger the bias increases and the variance decreases, and vice versa. Regularisation results in model weights that are biased towards zero, and thus (by design) the ridge estimate is a biased estimator. But, as shown by the bias-variance trade-off, the increase in bias is outweighed by the lower variance and the improved testing performance of the model.
As was the case with linear regression, the aim is to find the vector of weights that minimise the ridge regression objective function:
\begin{equation}
\min_{\mathbf{w }\in \mathbb{R}^d} \frac{1}{2} \left \|\boldsymbol{ y }- X\mathbf{w} \right \|^2 +\frac{\lambda}{2}\left\|\mathbf{w}\right \|^2
\end{equation}
Again this is calculated by taking the derivative of the objective function, setting it to zero, and solving for the weights:
\begin{equation}
\frac{\partial S(\mathbf{w})}{\partial \mathbf{w}} = X^{T}(X\mathbf{w}-\mathbf{y}) + \lambda \mathbf{w} = 0
\end{equation}
\begin{equation}
\mathbf{w}^{*} = (X^{T}\!X + \lambda I_{n})^{-1}X^{T}\mathbf{y}
\end{equation}
For prediction we now have $\widehat{y_{*}} = \textbf{x}_{*}\mathbf{w}^{*}=\textbf{x}_{*}(X^{T}\!X + \lambda I_{n})^{-1}X^{T}\mathbf{y}$ where $I_{n}$ is an identity matrix (a square matrix in which all the elements of the principal diagonal are ones and all other elements are zeros) of conformable size; here it is $d \times d$, matching $X^{T}\!X$.
The above solution is termed the \textit{primal} solution of ridge regression. The addition of $\lambda I_{n}$ ensures that when $\lambda >0$ the matrix $(X^{T}\!X + \lambda I_{n})$ is always invertible, and hence allows for stable computation. Secondly, $\lambda$ allows us to control the complexity of the optimal solutions. Optimising the primal involves solving the $d \times d$ system of equations given by $X^{T}\!X$, with computational complexity $\mathcal{O}(d^{2}N)$ (big O notation, $\mathcal{O}$, denotes how relative running time or space requirements grow as the input size grows). This is extremely useful because even when the number of locations, $N$, is large, the cost of the primal solution is dominated by the dimension of the explanatory variables.
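To make the primal solution concrete, the closed-form estimator can be computed in a few lines. The sketch below is in Python/NumPy rather than the R used elsewhere in this paper, and the data, dimensions and $\lambda$ value are illustrative assumptions rather than quantities from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 3                          # N locations, d explanatory variables
X = rng.normal(size=(N, d))           # design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=N)

lam = 1.0                             # regularisation parameter lambda
# primal ridge solution: w* = (X^T X + lambda I)^(-1) X^T y
w_star = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

x_new = rng.normal(size=d)            # explanatory variables at a new location
y_hat = x_new @ w_star                # point prediction
```

Solving the linear system $(X^{T}X+\lambda I)\mathbf{w} = X^{T}\mathbf{y}$ with `np.linalg.solve` is numerically preferable to forming the matrix inverse explicitly.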
\section*{The Dual Solution and the Gram Matrix}
Interestingly, the primal solution can be rewritten using the following matrix identity:
\begin{equation}
(X^{T}X + \lambda I_{n})X^{T} = X^{T}XX^{T}+\lambda X^{T} = X^{T}(XX^{T} +\lambda I_{n})
\end{equation}
Taking this rearrangement and plugging it back into the primal solution in Equation 5 gives a new equation for the weights and for model prediction:
\begin{equation}
\mathbf{w}^{*} = X^{T}(XX^{T} +\lambda I_{n})^{-1}\mathbf{y}
\end{equation}
\begin{equation}
\widehat{y_{*}} = \textbf{x}_{*}X^{T}(XX^{T} +\lambda I_{n})^{-1}\mathbf{y}
\end{equation}
This new equation can be further abstracted by defining $\mathbf{\alpha} = (XX^{T} + \lambda I_{n})^{-1}\mathbf{y}$ such that the equations can be expressed as $\mathbf{w}^{*} = \sum_{i=1}^{N}\alpha_{i}\mathbf{{x}_{i}} = X^{T}\mathbf{\alpha}$.
This derivation shows that the vector of weights, $\mathbf{w}$, can be written as a linear combination of the training points, where each $\textbf{x}_{i}$ represents a row of the design matrix corresponding to the explanatory variables at a location. For any new observation the output is given by $ \widehat{y_{*}} = \mathbf{{x}_{*}}\sum_{i=1}^{N}\alpha_{i}\mathbf{{x}_{i}} = \mathbf{x}_{*}X^{T}\mathbf{\alpha}$. This solution is termed the \textit{dual} solution of ridge regression and the variables $\alpha \in \displaystyle \mathbb{R}^N$ are called dual variables. Rather than finding the optimal weight for each of the $d$ explanatory variables (the columns of the design matrix), we find the appropriate dual variable for each of the $N$ locations (the rows of the design matrix). In contrast to the primal, the dual solution requires inverting the $N \times N$ dimensional matrix $(XX^{T} + \lambda I_{n})$, with complexity $\mathcal{O}(N^{3})$.
Given that the dual has been derived directly from the primal, it is easy to see that the solutions are equivalent, with the two solutions said to exhibit \textit{strong duality} \cite{boyd2004convex}. However, the dual form can also be derived directly from the ridge regression objective function, independently of the primal, by expressing the problem as a constrained minimisation problem and solving using the Lagrangian (Supplementary Equations 1).
Given the much higher complexity of the dual solution, it is not immediately obvious why this solution would ever be useful. However, a property of the dual solution in Equation 12 is that it requires computing $XX^{T}$, the resulting matrix being a symmetric positive semidefinite matrix called the Gram matrix, $G$. $G$ contains all of the pairwise inner products of the inputs across all $N$ locations. An inner product is a way to multiply vectors together, with the result of this multiplication being a scalar measure of the vectors' \emph{similarity}. Explicitly, the inner product of two vectors $\textbf{x}$ and $\textbf{z} \in \mathbb{R}^{d}$ is given by:
\begin{equation}
\begin{split}
\left \langle \mathbf{x} , \mathbf{z} \right \rangle & = \left \langle(x_{1}, x_{2}, x_{3},..., x_{d}),(z_{1}, z_{2}, z_{3},..., z_{d}) \right \rangle \\
& = x_{1}z_{1} + x_{2}z_{2}+ x_{3}z_{3} + ... + x_{d}z_{d}
\end{split}
\end{equation}
where $\left \langle \cdot ,\cdot \right \rangle$ is used to signify the inner product. Therefore, the Gram matrix can be thought of as containing the similarity between all pairs of inputs. Indeed, if these vectors are centred random variables, $G$ is approximately proportional to the covariance matrix. This notion of similarity is central in spatial analysis, where we want to leverage the fact that points close to each other in space are similar.
As an example, the entry at row $i$ column $j$ of matrix $G$ represents the inner product between the vectors of explanatory variables at locations $i$ and $j$ respectively (corresponding to rows $i$ and $j$ in the design matrix). This is written as $g_{ij} = \left \langle \mathbf{x}_{i}, \mathbf{x}_{j} \right \rangle$. The full Gram matrix is given by $G = XX^{T}$. Therefore, in the dual solution, we can substitute $G$ for $XX^{T}$, and $\mathbf{g}(\mathbf{x}_{*}, X)$, the vector of inner products between a new data point and the training points ($\left \langle \mathbf{x}_{*}, X \right \rangle$), for $\textbf{x}_{*}X^{T}$, giving:
\begin{equation}
\begin{split}
& \alpha = (G +\lambda I_{n})^{-1}\mathbf{y} \\
& \widehat{y_{*}} = \mathbf{g}(\mathbf{x}_{*}, X) \alpha
\end{split}
\end{equation}
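The equivalence of the primal and dual solutions is easy to check numerically. The following Python/NumPy sketch (simulated data; all names and values are illustrative assumptions) computes both estimators and their predictions for a new point:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 40, 3
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
lam = 0.5

# primal: solve a d x d system
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# dual: solve an N x N system built from the Gram matrix G = X X^T
G = X @ X.T
alpha = np.linalg.solve(G + lam * np.eye(N), y)
w_dual = X.T @ alpha                  # w* = X^T alpha

x_star = rng.normal(size=d)
g_star = X @ x_star                   # inner products with the training points
pred_primal = x_star @ w_primal
pred_dual = g_star @ alpha            # g(x*, X) alpha
```

The two weight vectors and the two predictions agree up to floating-point error, illustrating the strong duality discussed below.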
However, the question remains, how can we use this useful construction of the Gram matrix to model non-linear functions?
\section*{Non-Linear Regression}
Up to this point, the equations have only represented linear models, where the outputs are assumed to be some linear combination of the inputs. However, many problems cannot be adequately described in purely linear terms. One approach to introducing non-linearity is to transform the explanatory variables using non-linear transformations, such that the output is described as a linear combination of non-linear terms. For example, rather than a weighted sum of linear terms (e.g. $x_{1}+x_{2}+x_{3}$), we may instead use terms with exponents, logarithms or trigonometric functions (e.g. $\exp(x_{1})+\log(x_{2})+\sin(x_{3})$). Transforming the inputs rather than changing the model allows us to keep all of the convenient maths we have derived for linear models while creating non-linear ones.
Mathematically, the input data is said to be projected from a linear input space to some new, and potentially non-linear, space. This projection of the data is termed a (non-linear) \textit{feature mapping} and the new space to which the data is mapped is called the \textit{feature space}. Figure 2 is an example of a mapping to a feature space such that the output can be expressed in linear terms (Supplementary Code 2).
\begin{figure}[ht]
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{Figures/Fig2.png}
\caption{An example of non-linear feature mapping, where the data is mapped from (\textbf{A}) an input space in which the problem is non-linear into (\textbf{B}) a new feature space in which the outputs can be described as a linear combination of the inputs. }
\label{fig.2}
\end{figure}
Generally, a mapping is denoted by $\phi$ (or $\Phi$ when applied to a matrix). The general form of a feature mapping is given by:
\begin{equation}
\phi : \mathbf{x}_{i} \in \mathbb{R}^d \mapsto \phi (\mathbf{x}_{i}) \in \mathbb{R}^D
\end{equation}
\begin{equation}
\Phi : X \in \mathbb{R}^{N \times d} \mapsto \Phi(X) \in \mathbb{R}^{N \times D}
\end{equation}
The vector, $\textbf{x}_{i}$, is mapped from a vector of length $d$ to a vector of length $D$ denoted as $\phi(\textbf{x}_{i})$. Applying this mapping to the entire design matrix gives a new design matrix of explanatory variables in the feature space, $\Phi(X) \in \mathbb{R}^{N \times D}$. A mapping can project into a higher-dimensional ($d < D$) or lower-dimensional ($d > D$) space, although in the context of regression, \textit{lifting} to a higher dimension is more common. As an explicit example, consider the following mapping:
\begin{equation}
\phi : \mathbf{x} =(x_{1},x_{2}) \mapsto \phi (\mathbf{x}) = (x_{1}^2, \sqrt2 x_{1}x_{2}, x_{2}^2)
\end{equation}
This mapping \textit{lifts} the data from 2-dimensions into 3-dimensions ($\mathbb{R}^{2} \mapsto \mathbb{R}^{3}$). The same set of equations can be used to solve a linear model in feature space after the mapping. For example, the primal for ridge regression is now given by:
\begin{equation}
\min_{\mathbf{w}\in \mathbb{R}^D} \frac{1}{2}\left \|\boldsymbol{y}- \Phi(X)\mathbf{w} \right \|^2 +\frac{\lambda}{2}\left\|\mathbf{w}\right \|^2
\end{equation}
\begin{equation}
\mathbf{w}^{*} = (\Phi(X)^{T}\!\Phi(X)+ \lambda I_{n})^{-1}\Phi(X)^{T}\mathbf{y}
\end{equation}
\begin{equation}
\widehat{y_{*}} = \phi(\textbf{x}_{*})\mathbf{w}^{*}=\phi(\textbf{x}_{*})(\Phi(X)^{T}\!\Phi(X) + \lambda I_{n})^{-1}\Phi(X)^{T}\mathbf{y}
\end{equation}
All that was required to derive these equations is to substitute the design matrix in the original input space, $X$, with the new design matrix in the feature space, $\Phi(X)$ (with the equivalent mapping for new input points, $\phi(\textbf{x}_{*})$). This substitution also applies to standard linear regression and to the ridge regression dual, but these are omitted for brevity. The primal solution now requires solving for the weights in $\mathbb{R}^{D}$. Therefore, it is easy to see that for a very high dimensional mapping ($D \gg d$ and $D > N$), solving the primal will be computationally very difficult. Thankfully, due to our dual ridge solution, this situation does not present a computational problem - no matter how many terms we add, the complexity will still be $\mathcal{O}(N^3)$.
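As an illustration of fitting a linear model in feature space, the Python/NumPy sketch below applies the explicit quadratic map $\phi(\mathbf{x}) = (x_{1}^2, \sqrt{2}\,x_{1}x_{2}, x_{2}^2)$ from the example above and solves the primal ridge equations; the simulated target and $\lambda$ value are illustrative assumptions.

```python
import numpy as np

def phi(A):
    """Explicit quadratic feature map R^2 -> R^3, applied row-wise."""
    x1, x2 = A[:, 0], A[:, 1]
    return np.column_stack([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

rng = np.random.default_rng(2)
N = 200
X = rng.uniform(-1.0, 1.0, size=(N, 2))
y = X[:, 0]**2 + X[:, 1]**2 + 0.05 * rng.normal(size=N)   # non-linear target

lam = 1e-3
PhiX = phi(X)                                  # N x D design matrix, D = 3
w = np.linalg.solve(PhiX.T @ PhiX + lam * np.eye(3), PhiX.T @ y)

x_star = np.array([[0.5, -0.5]])
y_hat = (phi(x_star) @ w)[0]                   # true noiseless value is 0.5
```

The model is linear in the mapped features, yet recovers the non-linear target $x_{1}^{2}+x_{2}^{2}$ almost exactly.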
However, it is rarely apparent \textit{a priori} which mapping is most appropriate for the data. While the dual allows for very high dimensional feature mappings, the question remains: which mapping should we use? How many terms should be added? How do we capture interactions between terms? A brute-force approach can quickly become combinatorially large. Given that we can limit model complexity using ridge regularisation, the ideal would be to define a large, if not infinite, mapping capable of representing nearly any function. We can do this using kernels.
\section*{Kernel Methods}
As highlighted earlier, an ideal feature mapping would be to a large or infinite feature space to which we can then apply regularisation. The dual solution ensures that we only need to solve for the same number of dual variables regardless of the dimensionality of the feature mapping (remember the dual variables are in $\mathbb{R}^{N}$, with complexity $\mathcal{O}(N^3)$). However, this raises an interesting question: can we solve ridge regression in an infinite-dimensional feature space?
\begin{equation}
\begin{split}
& \phi : \mathbf{x}_{i} \in \mathbb{R}^d \mapsto \phi (\mathbf{x}_{i}) \in \mathbb{R}^\infty \\
& \Phi : X \in \mathbb{R}^{N \times d} \mapsto \Phi(X) \in \mathbb{R}^{N \times \infty}
\end{split}
\end{equation}
We do not want to (and cannot) compute all the infinite terms required for an explicit infinite-dimensional feature mapping. To work with these infinite dimensional spaces, we need to move away from explicit to implicit feature maps. To arrive at an implicit map, consider solving the dual using the aforementioned feature mapping in Equation 18. The dual solution requires computing the inner product between all pairs of inputs in feature space, such that the inner product between any two input vectors, $\textbf{x}$ and $\textbf{z}$, first requires their mapping to feature space:
\begin{equation}
\begin{split}
& \phi :\mathbf{x} =(x_{1},x_{2}) \mapsto \phi (\mathbf{x}) = (x_{1}^2, \sqrt2 x_{1}x_{2}, x_{2}^2) \\
& \phi :\mathbf{z} =(z_{1},z_{2}) \mapsto \phi (\mathbf{z}) = (z_{1}^2, \sqrt2 z_{1}z_{2}, z_{2}^2)
\end{split}
\end{equation}
Then computing the inner product between the vectors in feature space:
\begin{equation}
\begin{split}
\left \langle \phi (\mathbf{x}),\phi (\mathbf{z}) \right \rangle & = \left \langle (x_{1}^2, \sqrt2 x_{1}x_{2}, x_{2}^2) ,(z_{1}^2, \sqrt2 z_{1}z_{2}, z_{2}^2) \right \rangle \\
&= x_{1}^2z_{1}^2 + 2 x_{1}x_{2}z_{1}z_{2} + x_{2}^2z_{2}^2
\end{split}
\end{equation}
This calculation of the inner product gives a scalar value of the similarity between the two vectors in feature space but required explicitly computing each of the feature mappings of the two vectors before taking their inner product. What is interesting is that the equation for the inner-product in feature space (Equation 24) takes the form of a quadratic equation between $\textbf{x}$ and $\textbf{z}$. Factorising the quadratic gives:
\begin{equation}
\begin{split}
\left \langle \phi (\mathbf{x}),\phi (\mathbf{z}) \right \rangle & = x_{1}^2z_{1}^2 + 2 x_{1}x_{2}z_{1}z_{2} + x_{2}^2z_{2}^2 \\
& = (x_{1}z_{1} + x_{2}z_{2})^{2} \\
& = \left \langle \mathbf{x},\mathbf{z} \right \rangle^{2}
\end{split}
\end{equation}
We have just arrived at a new way of computing the inner product in feature space. The inner product between the vectors in feature space is equal to the square of the inner product in input space. The beauty of this result is that our new function for the inner product in feature space does not require computing the explicit mapping of our data (does not require computing $\Phi(X)$). We have already established that solving the dual only requires the inner products. Therefore, a solution to the dual can be obtained without ever computing and storing the explicit feature mapping, as long as we have a function to directly compute the inner product in feature space.
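This identity for the worked example can be verified directly; the short Python/NumPy sketch below (with arbitrary illustrative vectors) checks that the explicit and implicit computations agree:

```python
import numpy as np

def phi(v):
    """Explicit map R^2 -> R^3 from the worked example."""
    return np.array([v[0]**2, np.sqrt(2.0) * v[0] * v[1], v[1]**2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

explicit = phi(x) @ phi(z)    # inner product after explicit mapping
implicit = (x @ z) ** 2       # square the input-space inner product instead
```

Both routes give the same scalar, but the implicit route never constructs the three-dimensional feature vectors.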
The ability of linear models to learn a non-linear function while avoiding an explicit mapping is a famed result in machine learning known as the kernel trick \cite{Boser1992}, with the functions that compute the inner product in feature space called \textit{kernel functions}. Kernels, therefore, perform the mapping $k:\mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}$ between inputs $x,z \in \mathcal{X}$. The resulting matrix of inner products generated by the kernel function is termed the \textit{kernel matrix} (and can be thought of as the Gram matrix in feature space). We can represent the kernel matrix by $K$ (or $k$ for vectors) such that:
\begin{equation}
\begin{split}
K(X, X) & = \left \langle \Phi(X),\Phi(X)\right \rangle = \Phi(X)\Phi(X)^{T} \\
k(\mathbf{x}_{*}, X) & = \left \langle \phi(\mathbf{x}_{*}),\Phi(X)\right \rangle = \phi(\mathbf{x}_{*})\Phi(X)^{T}
\end{split}
\end{equation}
The dual equations can now be written in terms of kernels given by:
\begin{equation}
\widehat{y_{*}} = k(\textbf{x}_{*},X)(K(X,X) + \lambda I_{n})^{-1}\mathbf{y}
\end{equation}
This equation encapsulates all the theory we have developed thus far. Using the squared loss, we can compute in closed form the optimal solution for the weights. Using the kernel trick, we can project our input data implicitly into an infinite-dimensional feature space. Using regularisation through the ridge penalty, we can prevent our solution from overfitting. Thus, we have derived a powerful non-linear model using the same equations we would have used for a linear model.
For the specific example of the mapping in Equation 23, the kernel function is given by $K(\mathbf{x},\mathbf{z}) = \left \langle \mathbf{x},\mathbf{z} \right \rangle^{2}$ and represents a second-degree polynomial kernel. A more general formula for a degree-$p$ polynomial kernel is given by $K_{\theta, p}(\mathbf{x},\mathbf{z})= (\left \langle \mathbf{x},\mathbf{z} \right \rangle + \theta)^{p}$ (for values of $p \in \mathbb{Z}^{+}$ and $\theta \in \mathbb{R}_{\geq 0}$), where $\theta$ is a free parameter that allows control over the influence of higher-order and lower-order terms in the kernel function. However, there are many kernel functions; an example of an infinite dimensional kernel is the popular and widely used squared exponential kernel:
\begin{equation}
K_{\ell, \sigma}(\mathbf{x},\mathbf{z})=\sigma^2\exp\left(-\frac{\left\|\mathbf{x} - \mathbf{z}\right\|^2}{2\ell^2}\right)
\end{equation}
where $\ell$, called the lengthscale, controls the range over which the kernel similarity operates, and $\sigma$ determines the average distance of the function from its mean. The proof that a squared exponential kernel gives rise to an infinite feature mapping uses a Taylor series expansion of the exponential, shown in Supplementary Equations 2.
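Combining the kernelised dual with the squared exponential kernel gives a complete non-linear regressor in a handful of lines. The Python/NumPy sketch below uses simulated one-dimensional data; the lengthscale, noise level and $\lambda$ are illustrative assumptions.

```python
import numpy as np

def se_kernel(A, B, ell=0.5, sigma=1.0):
    """Squared exponential kernel between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sigma**2 * np.exp(-sq / (2.0 * ell**2))

rng = np.random.default_rng(3)
N = 100
X = rng.uniform(-3.0, 3.0, size=(N, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=N)   # noisy non-linear target

lam = 1e-2
K = se_kernel(X, X)                               # N x N kernel matrix
alpha = np.linalg.solve(K + lam * np.eye(N), y)   # dual variables

X_star = np.array([[0.0], [1.5]])
y_star = se_kernel(X_star, X) @ alpha             # kernel ridge predictions
```

Predictions at new locations closely track the underlying sine function despite the model never seeing it explicitly.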
\section*{The Big $N$ Problem}
It is evident that the combination of the dual estimator and the kernel function is a powerful tool capable of extending linear models to handle non-linear problems. One of the primary motivations for considering the dual was that it required computing and inverting an $N \times N$ dimensional matrix rather than the $D \times D$ dimensional matrix of features in the primal. Even as the feature dimension $D$ grows infinitely large, the kernel matrix remains of constant size and involves solving for the same number of dual variables. Historically, this led to the widespread adoption of kernel methods (kernel ridge regression, support vector machines, Gaussian processes etc.) to solve difficult problems on small datasets. However, the dual still requires inverting and storing the kernel matrix, which for $N$ observations has $\mathcal{O}(N^{3})$ complexity and needs $\mathcal{O}(N^{2})$ storage. Given only a few thousand points, these polynomial costs rapidly outstrip computational power.
A plethora of methods have been developed to aid the scaling of kernel methods to large datasets. Broadly, these methods aim to find smaller or simpler matrices that are good approximations of the full kernel matrix. The three major techniques are low-rank approximations, sparse approximations and spectral methods. Low-rank approximations aim to find smaller representations of the kernel matrix that contain all (or nearly all) of the information in the full kernel \cite{Bach2005}. For example, the popular Nystr{\"o}m method approximates the full kernel matrix through a subset of its columns and rows \cite{williams}. In comparison, sparse methods aim to find representations of the matrix that are mostly zeros, because there exist efficient algorithms for the storage of and computation with such matrices \cite{rue2005gaussian, straeter1971extension, saad1986gmres}. One of the best examples is the sparse matrix generated when modelling spatial data as a Gaussian Markov random field (GMRF) that is the solution to a stochastic partial differential equation (SPDE) \cite{Lindgren, whittle1954stationary, whittle1963stochastic}. However, the remainder of this paper will focus on a new, exciting subset of spectral methods called random Fourier features (RFF).
\section*{Random Fourier Features}
RFF and other spectral methods utilise the characterisation of the kernel function through its Fourier transform. A Fourier transform allows for the decomposition of any function into the periodic functions of different frequencies that make it up. The central idea behind these methods is that a good approximation of the kernel in the frequency domain (where the function is described in terms of its constituent periodic functions) will naturally yield a good approximation to the kernel function in its original domain.
All spectral methods are based on the same mathematical foundation; specifically, the celebrated \textit{Bochner's theorem} \cite{bochner1949fourier}. Loosely, Bochner's theorem states that a shift-invariant kernel function (where the output of the kernel depends only on the difference between the inputs and not on the explicit values of the inputs themselves), $k(x_{1},x_{2}) = k(\delta)$ for $\delta=x_{1}-x_{2}$, can be expressed through a Fourier transform \cite{Rudin1990}:
\begin{equation}
k(x_{1}-x_{2}) = \int_{\mathbb{R}^{d}}e^{i\omega^{T}(x_{1}-x_{2})} \mathbb{P}(\omega ) \; d\omega
\end{equation}
If we apply Euler's identity to the exponential and ignore the imaginary component of Equation 29 we can express this integral as:
\begin{equation}
k(x_{1}-x_{2}) = \int_{\mathbb{R}^{d}}
\begin{pmatrix}
\cos(\omega^T x_1)\\
\sin(\omega^T x_1)
\end{pmatrix}^T
\begin{pmatrix}
\cos(\omega^T x_2)\\
\sin(\omega^T x_2)
\end{pmatrix}
\mathbb{P}(\omega ) \; d\omega
\end{equation}
The real part of Bochner's theorem is computed by projecting the data onto the line drawn by $\omega$ (giving $\omega^T x_1$ and $\omega^T x_2$), passing these projections through $\cos$ and $\sin$, and stacking them together. To understand why this process works, consider the two key components, $\omega$ and its distribution $\mathbb{P}(\omega )$. The variable $\omega$ is the frequency of the periodic functions. The distribution of these frequencies, $\mathbb{P}(\omega)$, is called the spectral density of the kernel and gives the 'importance' of a given frequency, $\omega$, in constructing the kernel function. This is visualised in Figure 3, showing different spectral densities (Figures 3A,C,E,G,I,K) and the resulting functions produced by sampling from the kernel generated by each spectral density (Figures 3B,D,F,H,J,L). The code for sampling the spectral densities and generating functions is given in Supplementary Code 3.
\begin{figure}[ht]
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{Figures/Fig3.png}
\caption{Power spectral densities and the functions produced by sampling from the resulting kernel. The spectral densities in A,C,E,G correspond to sampling from delta functions (arrowheads) such that the sampled frequencies, $\omega$, can only take the point values corresponding to each delta function.}
\label{fig.3}
\end{figure}
In Figure 3A the spectral density is composed of two delta functions, such that sampled frequencies ($\omega$) can only take values of $1$ or $2$. The functions generated by sampling from this spectral density show strong periodicity and closely resemble the standard trigonometric functions with corresponding frequencies (Figure 3B). When the frequencies of the two delta functions are increased to $\{10,20\}$, the functions are again highly cyclical but, due to their higher frequencies, have rougher sample paths and a much smaller period (small changes in $x$ can cause greater changes in $y$) (Figure 3C,D). By expanding the spectral density to 5 peaks, the sample paths show considerably more variation due to the inclusion of a larger variety of frequencies (Figure 3E,F and 3G,H). Finally, Figures 3J and 3L show sample functions generated by sampling frequencies from a Gaussian and a Cauchy distribution respectively. The Gaussian is the spectral density of the squared exponential (Gaussian) kernel and gives rise to smooth sample functions with a tremendous amount of variety when compared to the simpler spectral densities (Figure 3J). The Cauchy distribution is the spectral density generated by the Fourier transform of the Laplacian kernel and generates functions with a high degree of roughness (Figure 3L) due to the inclusion of very high frequencies in the long tails of the distribution (Figure 3K). Table 1 below shows the power spectral densities generated by some common shift-invariant kernels. It should also be noted that empirical spectral densities can be used \cite{TON201859, Lazaro-Gredilla2010, Quinonero-candela2005}, and non-stationary extensions of Bochner's theorem exist \cite{TON201859, yaglom2012correlation} (discussed in the Non-stationary and Arbitrary Kernel Functions section).
However, a major problem is that evaluating the integral in Equation 30 requires integrating over the infinite set of all possible frequencies. To get around this, we can approximate the infinite integral by a finite sum using Monte Carlo integration. In Monte Carlo integration, the full integral of a function is approximated by computing the average of that function evaluated at a random set of points. Therefore, for RFF, rather than integrating over an infinite set of frequencies, the integral is approximated by averaging the integrand evaluated at random samples of $\omega$ drawn from the spectral density. The more samples that are evaluated, the closer the approximation gets to the full integral. Indeed, one of the best properties of random Fourier features is uniform convergence, with the Monte Carlo approximation of the kernel function converging to the full kernel uniformly (rather than pointwise).
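The Monte Carlo idea can be checked in one dimension, where the squared exponential kernel with unit lengthscale has a standard Gaussian spectral density. The Python/NumPy sketch below (illustrative sample size and input separation) compares the Monte Carlo average against the closed-form kernel value:

```python
import numpy as np

# Monte Carlo check of Bochner's theorem in 1-d: for a squared exponential
# kernel with unit lengthscale, k(delta) = exp(-delta^2 / 2), and the
# spectral density is a standard Gaussian, so
#   k(delta) = E[cos(omega * delta)],  omega ~ N(0, 1).
rng = np.random.default_rng(4)
m = 200_000                                  # number of sampled frequencies
omega = rng.normal(size=m)                   # draws from the spectral density

delta = 1.3                                  # separation x1 - x2
mc_estimate = np.mean(np.cos(omega * delta)) # Monte Carlo approximation
exact = np.exp(-delta**2 / 2.0)              # closed-form kernel value
```

Increasing `m` tightens the agreement at the usual $\mathcal{O}(1/\sqrt{m})$ Monte Carlo rate.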
\begin{table}[ht]
\centering
\begin{tabular}{|p{2cm}|l|l|}
\hline
\textbf{Kernel} & \multicolumn{1}{c|}{\textbf{Kernel Function}, $K(x_{1}-x_{2})$} & \multicolumn{1}{c|}{\textbf{Power Spectral Density} , $\mathbb{P}(\omega)$} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textit{Squared}\\ \textit{Exponential}\end{tabular} & $\sigma^2\exp\left(-\frac{(x_{1}-x_{2})^2}{2\ell^2}\right)$ & $(2\pi)^{-\frac{D}{2}}\ell^{D}\exp\left(-\frac{\ell^{2}\left \| \omega \right \|_{2}^{2}}{2}\right)$ \\ \hline
\textit{ Mat{\'e}rn} & \small{$\frac{2^{1-\lambda}}{\Gamma(\lambda)}\left (\frac{\sqrt{2\lambda \left \| x_{1}-x_{2} \right \|_{2}}}{\sigma} \right )^{\lambda} K_{\lambda} \left (\frac{\sqrt{2\lambda \left \| x_{1}-x_{2} \right \|_{2}}}{\sigma} \right )$} & \small{$\frac{2^{D+\lambda}\pi^{\frac{D}{2}} \Gamma(\lambda+\frac{D}{2})\lambda^{\lambda}}{\Gamma(\lambda)\sigma^{2\lambda}} \left (\frac{2\lambda}{\sigma^{2}}+4\pi^{2}\left \| \omega \right \|^{2}_{2} \right )^{-(\lambda+\frac{D}{2})}$}
\\ \hline
\textit{Cauchy} & $\prod_{i=1}^{D}\frac{2}{1 +x_{i}^{2}} $ & $ \exp (-\left \| \omega \right \|_{1})$ \\ \hline
\textit{Laplacian} & $\exp \left (-\sigma\left \| x_{1}-x_{2} \right \|_{1} \right )$ & $\left (\frac{2}{\pi} \right )^{\frac{D}{2}}\prod_{i=1}^{D}\frac{\sigma}{\sigma^{2} +\omega_{i}^{2}}$ \\ \hline
\end{tabular}
\caption{Common shift-invariant kernels and their associated spectral densities}
\label{table:1}
\end{table}
Therefore, the infinite integral in Equation 30 can be converted to a finite approximation by taking multiple independent samples from the power spectral density, $\mathbb{P}(\omega)$, and computing the Monte Carlo approximation to the kernel $k(x_{1}-x_{2})$ via:
\begin{equation}
k(x_{1}-x_{2}) = \frac{1}{m}\sum_{j=1}^{m}
\begin{pmatrix}
\cos(\omega_j^T x_1)\\
\sin(\omega_j^T x_1)
\end{pmatrix}^T
\begin{pmatrix}
\cos(\omega_j^T x_2)\\
\sin(\omega_j^T x_2)
\end{pmatrix}
= \Phi_{\tiny{\textup{RFF}}}(x_{1})\Phi_{\tiny{\textup{RFF}}}(x_{2})^T
\;\;\;\;\;\;\;,\left \{\omega\right\}_{j=1}^{m} \overset{i.i.d.}{\sim } \mathbb{P}(\omega)
\end{equation}
Given that the spectral densities are probability distributions, it is often reasonably trivial to sample frequencies from them. For example, generating the frequencies for approximating a squared exponential kernel is as simple as independently sampling $\omega$'s from a Gaussian distribution (Table 1). Equation 31 is truly an astoundingly condensed result and can be written in its entirety in just 4 lines of R code:\newline
\begin{lstlisting}[language=R, caption=Example of creating random Fourier features to approximate a Gaussian kernel matrix.]
# data matrix X of dimensions (N x d), number of features m
Omega = matrix(rnorm(m * ncol(X)), m)          # frequencies for a squared exponential kernel
Proj = X %*% t(Omega)                          # (N x m) matrix of projections
Phi = cbind(cos(Proj), sin(Proj)) / sqrt(m)    # random Fourier basis (N x 2m)
K = Phi %*% t(Phi)                             # approximate kernel matrix
\end{lstlisting}
The RFF approach results in an approximation of the whole kernel matrix. Therefore, it is different from other low-rank methods that try to approximate parts of the full kernel matrix. Using the Woodbury matrix inversion formula we can reduce the complexity of inverting the whole kernel matrix from $\mathcal{O}(N^{3})$ to $\mathcal{O}(m^{3})$ (where $m$ is the number of $i.i.d.$ samples from $\mathbb{P}(\omega)$) to solve the dual \cite{MR0038136}. However, one of the key observations about RFFs is that the random basis matrix $\Phi_{\tiny{\textup{RFF}}}$ defines a function space of its own! Technically, a function space that is dense in a reproducing kernel Hilbert space - the same space of functions as our kernel matrix. We can define this feature space as:
\begin{equation}
\Phi_{\tiny{\textup{RFF}}}(x) =
\begin{pmatrix}
\cos(\omega_j^T x)\\
\sin(\omega_j^T x)
\end{pmatrix} \;\; \; \; \; \; \left \{\omega\right\}_{j=1}^{m} \overset{i.i.d.}{\sim } \mathbb{P}(\omega)
\end{equation}
This can be written using matrix notation as $\Phi(X) = [\cos(X\Omega^{T}) \;\sin(X\Omega^{T})] \in \mathbb{R}^{N \times 2m}$, where the matrix $X\in \mathbb{R}^{N \times d}$ is the design matrix and $\Omega \in \mathbb{R}^{m \times d}$ is the frequency matrix, with rows corresponding to the sampled $\omega$'s ($[\omega_{1},...,\omega_{m}]$). See Supplementary Equations 3 for a more comprehensive walk-through of the steps from Bochner's theorem to the RFF equation.
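This matrix form translates directly into code. The Python/NumPy sketch below (illustrative sizes and lengthscale) builds $\Omega$ and $\Phi(X)$ and compares $\Phi(X)\Phi(X)^{T}$ with the exact squared exponential kernel matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
N, d, m = 60, 2, 4000
X = rng.normal(size=(N, d))
ell = 1.0                                    # lengthscale of the SE kernel

# frequency matrix: rows are i.i.d. samples from the SE spectral density
Omega = rng.normal(scale=1.0 / ell, size=(m, d))
Proj = X @ Omega.T                           # N x m projections
Phi = np.hstack([np.cos(Proj), np.sin(Proj)]) / np.sqrt(m)   # N x 2m basis

K_rff = Phi @ Phi.T                          # approximate kernel matrix

# exact squared exponential kernel for comparison
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
K_exact = np.exp(-sq / (2.0 * ell**2))
max_err = np.max(np.abs(K_rff - K_exact))
```

With a few thousand features, every entry of the approximate kernel matrix already sits close to its exact value.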
We can use the $\Phi_{\tiny{\textup{RFF}}}(X)^{T}\Phi_{\tiny{\textup{RFF}}}(X) \in \displaystyle \mathbb{R}^{2m \times 2m}$ matrix to solve the primal. The resulting linear model takes the form $y \sim \Phi_{\tiny{\textup{RFF}}}(X)\textbf{w}$. For a spatial data set, the method therefore only requires mapping the explanatory variables using RFF and fitting a linear model. The ubiquitous linear model is thus converted to a much more expressive form, while retaining all the desirable mathematical properties that make linear models so popular.
The theoretical properties of RFF estimators are still far from fully understood. The seminal paper by Rahimi and Recht \cite{Rahimi2007} showed that every entry in the kernel matrix is approximated to an error of $\pm \epsilon $ with $m = \frac{\log(N)}{\epsilon^{2}}$ Fourier features. A more recent result shows that only $\sqrt{N}\log(N)$ features can achieve the same learning bounds as full kernel ridge regression with squared loss \cite{rudi2017generalization}. This is particularly relevant for large datasets. For example, given $100,000$ data points, we would only need ${\sim}3600$ features to achieve the same generalisation error as if we had used all points.
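As a quick check of the feature-count arithmetic (assuming the natural logarithm in the $\sqrt{N}\log(N)$ bound):

```python
import math

# feature count from the sqrt(N) log(N) bound, natural logarithm assumed
N = 100_000
m = math.sqrt(N) * math.log(N)
# m is roughly 3.6 thousand random features, versus storing and inverting
# a 100,000 x 100,000 kernel matrix for exact kernel ridge regression
```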
\section*{Beyond the General Linear Model}
Throughout this paper, we have focused on Gaussian likelihoods/Squared loss, but Fourier bases can be used with any loss function. For example, when performing classification with binary data, i.e. $y\in\{0,1\}$ the cross-entropy loss (also known as log loss) can be used, given by:
\begin{equation}
S(\mathbf{w}) = -\left(y\log(\varphi^{-1}(X\mathbf{w}))+(1-y)\log(1-\varphi^{-1}(X\mathbf{w})) \right)
\end{equation}
where $\varphi$ is the logit link function, whose inverse $\varphi^{-1}$ is the sigmoid function. Another example is the use of a Poisson likelihood to model count data \cite{cameron2013regression}. More broadly, generalised linear models (GLMs) encompass the extension of linear regression to response variables that have a restricted range of potential values (such as binary or count data) or non-Gaussian errors \cite{nelder1972generalized}. This generalisation is achieved by letting the linear predictor be related to the response variable via a link function. The link function ensures that the model's estimated responses are on the correct range and have the appropriate error structure, while remaining a function of the weighted linear sum of explanatory variables. Thus, we can still apply feature mappings, including RFF, and can continue to fit these models using maximum likelihood.
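As a minimal sketch of the cross-entropy loss with a logit link (Python/NumPy, simulated data and weights; this illustrates the loss only, not a full GLM fitting routine):

```python
import numpy as np

def sigmoid(t):
    """Inverse logit link, mapping the linear predictor into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-t))

def cross_entropy(w, X, y):
    """Cross-entropy (log) loss of a Bernoulli GLM with logit link."""
    p = sigmoid(X @ w)                       # predicted probabilities
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0.0).astype(float)            # labels set by the first column
w = np.array([1.0, 0.0])
loss = cross_entropy(w, X, y)
```

Because the simulated labels are perfectly separated by the first explanatory variable, scaling up its weight strictly reduces this loss, which is what maximum likelihood exploits.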
These models can easily be extended to include uncertainty through Bayesian inference. For example, Bayesian linear regression is achieved by assuming that both the response variables and the model parameters are drawn from distributions, with the aim of finding the posterior distribution of the model parameters given the input and output variables. By specifying a Gaussian likelihood with a Gaussian prior on the coefficients, the posterior mean coincides with the ridge regression solution. We can evaluate full posterior uncertainty by sampling from the posterior distribution.
\section*{Toy Example of Random Fourier Features for Spatial Analysis}
As an example, we simulate a non-linear spatial regression problem. The code for this example is provided in Supplementary Code 4. A set of random points in space is generated, such that each location has unique coordinates (longitude and latitude). Each location has a response variable generated from the same latent function as in Figure 2 with added Gaussian noise ($\textbf{y} = \textbf{x}_{1}^{2}+\textbf{x}_{2}^{2}+\epsilon$ where $\epsilon \sim N(\mu=0,\sigma^{2}=1)$). In Figure 4a we show 500 random points drawn from the spatial process (the code can easily be changed to any function of the user's choice). Note that the provided code generates points at random, and therefore a user's results may differ from the exact results shown here.
The simulated data were used to train three models: a linear regression model, a non-linear kernel regression model, and a kernel ridge regression model (KRR). Both the kernel regression and kernel ridge regression models use RFFs to approximate a Gaussian kernel (the user can specify the number of features). We assume, as is common for nearly all real-world spatial processes, that we only observe a subset of all possible locations. Therefore, models are trained on only 20\% of all the generated points, shown in Figure 4b. The remaining data not used for training are used for testing.
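The simulation and train/test split can be sketched in a few lines (a minimal analogue of Supplementary Code 4; the coordinate ranges and seed are illustrative assumptions, not taken from the supplementary material):
\begin{lstlisting}[language=R, caption=Sketch of simulating the toy spatial data (illustrative analogue of Supplementary Code 4)]
set.seed(1)                        # Without a fixed seed results will vary
N <- 500
x1 <- runif(N, -2, 2)              # Longitude-like coordinate (range assumed)
x2 <- runif(N, -2, 2)              # Latitude-like coordinate (range assumed)
y  <- x1^2 + x2^2 + rnorm(N, 0, 1) # Latent function plus Gaussian noise
train_id <- sample(N, 0.2 * N)     # 20% of locations observed for training
\end{lstlisting}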
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{Figures/Fig4.png}
\caption{\textbf{(A)} All 500 points generated from the latent spatial process given by $\mathbf{y} = \mathbf{x}_{1}^2 + \mathbf{x}_{2}^2+\epsilon$ where $\epsilon \sim N(\mu=0,\sigma^{2}=1)$, and (\textbf{B}) the subset of data points used to train the regression models.}
\label{fig.4}
\end{figure}
Each model was trained using the same training data, and predictive performance was measured by MSE on the testing data. For both kernel methods (kernel regression and kernel ridge regression), 100 Fourier features are used. For KRR, k-fold cross-validation was used to find the optimal regularisation parameter ($\lambda_{ridge}$). The training and testing performance of the three models are shown in Table 2. As expected, the non-linear nature of the latent function results in very poor performance of the linear model, with large training and testing error. The difference between the kernel regression (without regularisation) and KRR (with regularisation) is indicative of the bias-variance trade-off discussed earlier.
The kernel regression model has excellent training performance, with the infinite feature space of the Gaussian kernel permitting the fitting of highly complex functions. However, in the absence of regularisation, the kernel regression model greatly overfits the training data, resulting in poor testing performance. In terms of the bias-variance trade-off, the kernel regression model has very low bias but high variance. In comparison, the regularisation in kernel ridge regression helps to prevent this overfitting by penalising overly complex models. After training, the KRR model (with $\lambda_{ridge}=3.98$) has marginally higher training error than the kernel regression model but less than half the testing error. The regularisation has increased the KRR model's bias, but this is outweighed by the reduction in variance, corresponding to a small decrease in training accuracy but a significant improvement in testing performance. Note that in this example the latent function is fairly simple (a 2D quadratic); therefore the number of features required for good performance is low.
\begin{table}[ht]
\centering
\begin{tabular}{|l|c|c|}
\hline
\textbf{Model} & \textbf{Training Error (MSE)}& \textbf{Testing Error (MSE)}\\ \hline
\textit{Linear} & 2.26 & 2.73 \\ \hline
\begin{tabular}[c]{@{}l@{}}\textit{Kernel Regression}\\ \textit{(Gaussian Kernel)}\end{tabular} & 0.64 & 2.70 \\ \hline
\begin{tabular}[c]{@{}l@{}}\textit{Kernel \textbf{Ridge} Regression} \\ \textit{(Gaussian Kernel)}\end{tabular} & 0.88 & 1.19 \\ \hline
\end{tabular}
\caption{Training and testing performance of different models}
\label{table:2}
\end{table}
The importance of the bias-variance trade-off is further illustrated by comparing how the training and testing performance of kernel regression and KRR vary as the number of Fourier features increases, shown in Figure 5. Increasing the number of sampled Fourier bases increases the model's ability to fit the training data and results in a steady reduction in training error for kernel regression (Figure 5a, blue line). In comparison, KRR has a higher training error than kernel regression that remains constant even with additional features (Figure 5a, red line). As the number of features increases, the kernel regression model increasingly overfits the training data, resulting in poor testing performance (Figure 5b, blue line). Importantly, the testing error of KRR is significantly lower than that of kernel regression and remains low even with additional features (Figure 5b, red line). The regularisation in KRR constrains model complexity: as more features are added, overfitting is prevented by an increase in the magnitude of the regularisation parameter, $\lambda_{ridge}$ (Figure 5b, inset). As a result, KRR maintains a good bias-variance trade-off across all numbers of features.
\begin{figure}[h]
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio]{Figures/Fig5.png}
\caption{An example of the bias-variance trade-off for kernel regression (blue) and kernel ridge regression model (red) with a random Fourier feature approximation of a Gaussian kernel. The mean squared error (MSE) is calculated for both the (\textbf{A}) training data and the testing data (\textbf{B}) for an increasing number of sampled Fourier features, with optimal $\lambda_{Ridge}$ (by k-fold cross-validation) for the ridge regression model shown (\textbf{B} \textit{inset})}
\label{fig.5}
\end{figure}
\section*{Advanced Methods for Random Fourier Features}
\subsection*{Limitations of RFFs}
Given the immense popularity and good empirical performance of the RFF method, little has been published on its limitations. However, RFFs do have limitations, including in the context of spatial analysis. From here onward, we term the RFF method described previously the standard RFF method.
Firstly, RFFs can be poor at capturing very fine-scale variation, as noted in Ton et al. \cite{TON201859}. This is likely because fine-scale features are captured by the tails of the spectral density, which are infrequently sampled in the Monte Carlo integration. Secondly, from a computational perspective, RFFs are very efficient but can still compare poorly with some state-of-the-art spatial statistics approaches. For example, the sparse matrix approaches based on the SPDE solution of GMRFs provide impressive savings with complexity $\mathcal{O}(m^{1.5})$ (compared to $\mathcal{O}(m^{3})$ for the RFF primal solution) \cite{Lindgren}. Other methods, such as the multiresolution kernel approximation (MRA), also provide impressive computational performance \cite{ding2017multiresolution}. However, it should be noted that many of these methods, such as MRA, are only valid in two dimensions \cite{ding2017multiresolution}, unlike RFFs, which naturally extend to high dimensions. Thirdly, while the convergence properties of RFF suggest excellent predictive capability \cite{rudi2017generalization}, alternative \emph{data-dependent} methods such as the Nystr{\"o}m approximation can perform much better in many settings \cite{Yang2012, rudi2015less}. The following sections discuss current methods that address some of these limitations.
\subsection*{Quasi-Monte-Carlo Features (QMC RFF)}
One of the most significant limitations of standard RFF is the Monte Carlo integration. The infinite integral that describes the kernel function is converted to a finite approximation by sampling $\omega$'s from the spectral density. The error of Monte Carlo integration decays at the rate $\mathcal{O}(m^{-1/2})$, which means that for some problems a large number of features is required to approximate the integral accurately.
A popular alternative is to use Quasi-Monte Carlo (QMC) integration \cite{Avrom2018}. In QMC integration, the set of points used to approximate the integral is chosen from a deterministic, low-discrepancy sequence \cite{niederreiter1978}. In this context, low-discrepancy means the points appear random even though they are generated by a deterministic, non-random process. An example is the Halton sequence, which generates points on a uniform hypercube before transforming them through a quantile function (inverse cumulative distribution function) \cite{Halton1964}. Low-discrepancy sequences prevent clustering and enforce more uniformity in the sampled frequencies, allowing QMC to converge at close to $\mathcal{O}(m^{-1})$ \cite{asmussen2007stochastic}. QMC can therefore provide substantial improvements in the accuracy of the kernel matrix approximation for the same computational complexity. Crucially, QMC is trivial to implement within the RFF framework for some distributions. For example, for the squared exponential kernel, instead of generating frequencies by taking random samples from a Gaussian, we generate them as: \newline
\begin{lstlisting}[language=R, caption=Example of Quasi-Monte Carlo sampling of a Gaussian power spectral density using a Halton sequence]
library(randtoolbox) #Package to generate Halton sequence
Omega = matrix(qnorm(halton(m, ncol(X))), nrow = m) # Gaussian quantile transform of Halton points
\end{lstlisting}
The remaining code for generating the features is exactly the same as Code 1.
\subsection*{Leverage Score Sampling}
In the standard RFF method, frequencies are sampled with a probability proportional to their spectral density. However, the power spectral density, and hence the sampling probability of a given frequency, is data-independent: it does not depend on the observed data (see Table 1). Data-independent sampling is sub-optimal and can yield very poor results \cite{BachLS, mahoney2009cur, gittens2016revisiting}, and has been identified as one of the reasons RFFs perform poorly in certain situations \cite{li2018unified}. An alternative is a \emph{data-dependent} approach that considers the importance of various features \emph{given} some data. Several data-dependent approaches for RFF have been proposed \cite{Ionescu17, rudi2017generalization, li2018unified}, but one of the most promising and easiest to implement is sampling from the leverage distribution of the RFF (abbreviated to LRFF) \cite{li2018unified}.
Leverage scores are popular across statistics and are a key tool for regression diagnostics and outlier detection \cite{Hoaglin, Velleman}. The leverage score measures the influence a given observation has on the solution of a regression problem. However, this perspective of leverage scores as a measure of importance can be extended to any matrix. The leverage scores of a matrix $A$ are the diagonal elements of the matrix $T = A(A^{T}A)^{-1}A^{T}$. The leverage score for the $i$-th row of matrix $A$ is equal to the $i$-th diagonal element of $T$, denoted $\tau_{i,i}$ and calculated by:
\begin{equation}
\tau_{i,i} = \mathbf{a}_{i}^{T}(A^{T}A)^{-1}\mathbf{a}_{i} = [A(A^{T}A)^{-1}A^{T}]_{ii}
\end{equation}
$\tau_{i,i}$ can also be seen as a measure of the importance of the row $\textbf{a}_{i}$.
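As a minimal sketch (the matrix and its dimensions are illustrative), row leverage scores can be computed directly from this hat-matrix formulation:
\begin{lstlisting}[language=R, caption=Sketch of computing row leverage scores (illustrative example)]
A <- matrix(rnorm(40), nrow = 20, ncol = 2) # Example tall matrix
T_mat <- A %*% solve(t(A) %*% A) %*% t(A)   # Hat matrix
tau <- diag(T_mat)                          # Row leverage scores
sum(tau) # The scores sum to the rank of A (here 2), a useful sanity check
\end{lstlisting}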
Most leverage score sampling methods apply ridge regularisation to the leverage scores, controlled by the regularisation parameter $\lambda_{\tiny{\textup{LRFF}}}$, giving:
\begin{equation}
\tau_{i,i}(\lambda) = \mathbf{a}_{i}^{T}(A^{T}A+ \lambda I)^{-1}\mathbf{a}_{i}
\end{equation}
The resulting scores are termed ridge leverage scores \cite{alaoui2014fast}. The regularisation serves a nearly identical purpose to its use in linear regression: it ensures that the inversion required to compute the scores is always unique and less sensitive to perturbations in the underlying matrix, such as when only partial information about the matrix is known. The ridge parameter is key in stabilising leverage scores and permits fast leverage score sampling methods that approximate leverage scores using subsets of the full data \cite{musco2017recursive, rudi2018fast, drineas2012fast, cohen2017input}.
Leverage score sampling aims to improve the RFF method by sampling features with probability proportional to their importance (rather than their spectral density). This is achieved by sampling columns of the feature matrix with probability proportional to their leverage scores. The resulting samples should contain more of the important features, and thus give a more accurate approximation to the full matrix for the same number of samples. The computation and inversion of the regularised Gram matrix increases the computational burden of this approach, but only has to be performed once for a given regularisation parameter. A key distinction to note is that the formulas for the leverage score up to this point (and in the majority of the literature) give the leverage scores of the rows. For RFF we want the leverage scores of the columns, as they correspond to the Fourier bases. Therefore, the ridge leverage scores of the Fourier feature matrix are given by:
\begin{equation}
\begin{split}
T(\lambda) & =diag\left(\Phi(X)(\Phi(X)^{T}\Phi(X) + \lambda_{\tiny{\textup{LRFF}}} I)^{-1}\Phi(X)^{T}\right) \\
\tau_{i,i}(\lambda) & = \phi(\mathbf{x}_{i})(\Phi(X)^{T}\Phi(X) + \lambda_{\tiny{\textup{LRFF}}} I)^{-1}\phi(\mathbf{x}_{i})^{T}
\end{split}
\end{equation}
With suitable scaling, we can now sample Fourier features with a probability proportional to the leverage distribution, allowing us to sample features proportional to their importance. The code is given by: \newline
\begin{lstlisting}[language=R, caption=LRFF Example]
# Generate the m frequency samples (Omega) from the PSD
Proj = X %*% t(Omega)                          # Project data onto frequencies
Phi = cbind(cos(Proj), sin(Proj)) / sqrt(m)    # Basis
M = t(Phi) %*% Phi + lambda_lrff * diag(2 * m) # Regularised Gram matrix
T_lrff <- Phi %*% solve(M, t(Phi))             # Ridge leverage matrix
pi_s <- diag(T_lrff)    # Diagonal elements (ridge leverage scores)
l <- sum(pi_s)          # Sum of diagonal elements of T
is_wgt <- sqrt(pi_s/l)  # Normalised sampling weights
\end{lstlisting}
\subsection*{Orthogonal Random Features}
One of the benefits of RFFs is that they can define kernels in high dimensions. For example, one can use a kernel in four dimensions to represent Cartesian spatial coordinates $x,y,z$ and time, $t$, the foundation of spatiotemporal modelling. However, increasing dimensionality comes at a cost. While RFFs are unbiased estimators with respect to the expectation of the kernel matrix \cite{Rahimi2007}, increasing the number of dimensions of the data significantly increases the variance of the RFF estimate, requiring significantly more features to achieve an accurate approximation \cite{felix2016orthogonal}. One proposed approach to solve this issue with high-dimensional kernels is to draw each new feature dimension orthogonally.
In the standard RFF method, the sampled frequencies can be concatenated into a frequency matrix, $\Omega \in \mathbb{R}^{m \times d}$. If we consider a squared exponential/Gaussian kernel, $\Omega$ is actually just a random Gaussian matrix, as the sampled frequencies are drawn from a standard normal distribution and scaled by the kernel parameter, $\sigma$. Therefore, the matrix $\Omega = \frac{1}{\sigma}G$, where $G$ is a random Gaussian matrix of dimension $\mathbb{R}^{d \times d}$ (and should not be confused with the Gram matrix).
In orthogonal random features (ORF), the aim is to impose orthogonality on $\Omega$, such that it contains significantly less redundancy than a random Gaussian matrix, giving faster convergence to the full kernel matrix with lower-variance estimates. The simplest method to impose orthogonality would be to replace $G$ with a random orthogonal matrix, $O$. However, if we consider the squared exponential/Gaussian kernel, the row norms of the $G$ matrix will follow a Chi distribution. In comparison, the orthogonal matrix, $O$, will (by definition) have rows with unit norm. Thus, simply replacing $G$ with $O$ means that the RFF will no longer be an unbiased estimator.
To return to an unbiased estimator, the orthogonal matrix, $O$, must be scaled by a diagonal matrix, $S$, whose diagonal entries are random variables drawn from a Chi distribution with $D$ degrees of freedom. This ensures that the row norms of $G$ and $SO$ are identically distributed, so $S$ and $O$ can be used to construct a matrix of orthogonal random features given by $\Omega_{ORF} = \frac{1}{\sigma} SO$. It should be noted that the elements of $S$ are Chi-distributed random variables only when $G$ is a random Gaussian matrix (the definition of a Chi-distributed random variable is identical to taking the L2 norm of a set of standard normally distributed variables). For other kernels, the diagonal elements of matrix $S$ are computed as the norm of the corresponding row in $G$. Therefore, the $i$-th diagonal element of $S$ is calculated by:
\begin{equation}
s_{i,i}=\left \| g_{i} \right \|_{2} = \sqrt{\sum_{j=1}^D \left (G_{i,j} \right )^2}
\end{equation}
Therefore, generating orthogonal random features for a given kernel requires two steps. First, derive the orthogonal matrix $O$ by performing QR decomposition on the feature matrix $G$ (where $O$ corresponds to the Q matrix of the QR decomposition). See \cite{gentle2012numerical} for an excellent summary of the QR decomposition. Second, compute the diagonal entries of the matrix $S$ by taking the norms of the corresponding rows of $G$. We then compute the orthogonal feature matrix as $\Omega_{ORF} = \frac{1}{\sigma} SO$. The random Fourier feature matrix is replaced with the orthogonal random feature matrix to generate the orthogonal random basis matrix $\Phi_{ORF}(X)$, computed as $\Phi_{ORF}(X) = [\cos(X\Omega_{ORF}^{T}) \; \sin(X\Omega_{ORF}^{T})] \in \mathbb{R}^{N \times 2m}$. The R code for ORFs is as follows: \newline
\begin{lstlisting}[language=R, caption=ORF example]
omega <- c()
Nrow <- 0
while(Nrow < N_features){
  G <- matrix(rnorm(D1*D1), nrow=D1, ncol=D1)  # Random Gaussian matrix
  G1 <- matrix(rnorm(D1*D1), nrow=D1, ncol=D1) # Independent copy for row norms
  O <- qr.Q(qr(G), complete=TRUE) # Orthogonal matrix by QR decomposition
  S <- diag(sqrt(rowSums(G1^2)))  # Diagonal scaling matrix
  omega <- rbind(omega, S %*% O)  # Stack blocks of orthogonal features
  Nrow <- nrow(omega)
}
\end{lstlisting}
Yu et al. \cite{felix2016orthogonal} also included an extension to the ORF method, termed structured ORF (SORF), to avoid the computationally expensive steps of deriving the orthogonal matrix ($\mathcal{O}(N^{3})$ time) and computing the random basis matrix ($\mathcal{O}(N^{2})$ time). The SORF method replaces the random orthogonal matrix, $O$, with a class of specially structured matrices (consisting of products of binary diagonal matrices and Walsh-Hadamard matrices) that are orthogonal with near-Gaussian entries \cite{felix2016orthogonal}. The SORF method maintains a lower approximation error than standard RFF, but is significantly more computationally efficient than ORF, with computing $\Phi_{SORF}(X)$ taking only $\mathcal{O}(N \log(N))$ time.
\subsection*{Non-stationary and Arbitrary Kernel Functions}
One of the most significant limitations of the standard RFF method is the restriction to shift-invariant kernels, where $K(x,z) = K(x-z)$. This restriction means that the kernel value depends only on the lag or distance $x-z$ rather than on the actual locations. This property imposes stationarity on the spatiotemporal process. While this assumption is not unreasonable, and non-stationarity is often unidentifiable, in some cases the relaxation of stationarity can significantly improve model performance \cite{paciorek2006spatial}.
To extend the RFF method to non-stationary kernels requires a more general representation of Bochner's theorem, capable of capturing the spectral characteristics of both stationary and non-stationary kernels. This extension \cite{yaglom2012correlation} states that any kernel (stationary or non-stationary) can be expressed as its Fourier transform in the form:
\begin{equation}
k(x_{1},x_{2}) = \int_{R^{d} \times R^{d}}e^{i(\omega^{T}_{1}x_{1}-\omega^{T}_{2}x_{2})} \mathbb{P}(\omega_{1}) \mathbb{P}(\omega_{2}) \; d\omega_{1} d\omega_{2}
\end{equation}
This equation is nearly identical to the original derivation of Bochner's theorem given in Equation 29, but now we have two spectral densities on $\mathbb{R}^{d}$ to integrate over. It is easy to see that if the two spectral densities are the same, the expression reduces to the definition for stationary kernels. Applying the same treatment, Monte Carlo integration can be performed to give the feature space of the non-stationary kernel \cite{TON201859}, now given by:
\begin{equation}
\Phi_{\tiny{\textup{RFF}}}(x)=
\begin{pmatrix}
\cos(\omega^{1\;T} x) + \cos(\omega^{2\;T} x)\\
\sin(\omega^{1\;T} x) + \sin(\omega^{2\;T} x)
\end{pmatrix}
\;\;\;\;\;\;\;, \left \{\omega^{\{1,2\}}\right\}_{j=1}^{m} \overset{i.i.d.} {\sim} \mathbb{P}^{\, \{1,2\}}(\omega)
\end{equation}
Note that this derivation requires drawing independent samples for both of the spectral densities, $\mathbb{P}^{\,l}(\omega)$, such that we generate two frequency matrices, $\Omega^{l} \in \mathbb{R}^{m \times d}$.
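As a sketch (assuming a data matrix \texttt{X}, feature count \texttt{m}, and two Gaussian spectral densities with illustrative scales \texttt{sigma1} and \texttt{sigma2}; the choice of densities and the $1/\sqrt{2m}$ scaling are assumptions following the stationary convention, not prescribed by \cite{TON201859}), the two-frequency-matrix construction looks like:
\begin{lstlisting}[language=R, caption=Sketch of non-stationary random Fourier features with two frequency matrices (illustrative)]
d <- ncol(X)
Omega1 <- matrix(rnorm(m * d), m, d) / sigma1 # Samples from P^1(omega)
Omega2 <- matrix(rnorm(m * d), m, d) / sigma2 # Samples from P^2(omega)
P1 <- X %*% t(Omega1) # Projections onto the first frequency set
P2 <- X %*% t(Omega2) # Projections onto the second frequency set
Phi_ns <- cbind(cos(P1) + cos(P2), sin(P1) + sin(P2)) / sqrt(2 * m)
\end{lstlisting}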
In both the stationary and non-stationary case, the choice of the kernel is often arbitrary or made with knowledge of the process being modelled. For example, if the spatial data are expected to be very smooth, then a squared exponential kernel can be used. It is, however, possible to treat the $\omega$'s as unknown kernel parameters and infer their values \cite{TON201859}. This is equivalent to deriving an empirical spectral distribution. This strategy is data-dependent and can achieve impressive results; however, great care must be taken to avoid overfitting \cite{TON201859}.
\section*{Conclusion}
Regression is a key technique for nearly all scientific disciplines and can be extended from its simplest forms to highly complex and flexible models capable of describing nearly any type of data. Within this paper, we have provided an overview of one such tool: random Fourier features. RFF extends the linear regression model into a highly expressive non-linear regression model using only trigonometric functions; it can be implemented with a few lines of code and provides significant computational benefits over working with a full kernel. To that end, random Fourier features and their extensions represent an exciting new tool for multi-dimensional spatial analysis on large datasets.
\section*{\refname}}{}{}{}
\usepackage[
backend=bibtex,
style=nature,
sorting=none]{biblatex}
\addbibresource{main.bib}
\renewcommand*{\bibfont}{\footnotesize}
\def\keywordname{{\bfseries \emph Keywords}}%
\def\keywords#1{\par\addvspace\medskipamount{\rightskip=0pt plus1cm
\def\and{\ifhmode\unskip\nobreak\fi\ $\cdot$
}\noindent\keywordname\enspace\ignorespaces#1\par}}
\title{Spatial Analysis Made Easy with Linear Regression and Kernels}
\author{
Philip Milton\\
\small{MRC Centre for Outbreak Analysis and Modelling}\\
\small{Department of Infectious Disease Epidemiology}\\
\small{Imperial College London, London, UK}\\
\small{\texttt{[email protected]}}
\and
Emanuele Giorgi\\
\small{CHICAS, Lancaster Medical School}\\
\small{Lancaster University, Lancaster, UK}\\
\small{\texttt{[email protected]}}
\and
Samir Bhatt\\
\small{MRC Centre for Outbreak Analysis and Modelling}\\
\small{Department of Infectious Disease Epidemiology}\\
\small{Imperial College London, London, UK}\\
\small{\texttt{[email protected]}}
}
\date{\today}
\begin{document}
\maketitle
\input{abstract/abstract.tex}
\keywords{Random Fourier Features \and Kernel Methods \and Kernel Approximation}
\input{draft/draft.tex}
\section*{Supplementary Material}
\input{supplement/Lagrangian.tex}
\input{supplement/Gaussian.tex}
\input{supplement/RFFsteps.tex}
\printbibliography
\end{document}
\subsection*{Supplementary Equations 2 - Feature Expansion of the Gaussian Kernel}
\setcounter{equation}{0}
The Gaussian kernel is a shift-invariant kernel given by:
\begin{equation}
K(\mathbf{x},\mathbf{z}) = \exp \left (-\gamma {\left \| \mathbf{x}-\mathbf{z} \right \|^{2}} \right ), \; \; \; \; \gamma = \frac{1}{2\sigma^{2}}
\end{equation}
To examine the feature space associated with the kernel, let $\mathbf{x}$ and $\mathbf{z}$ be vectors in $\mathbb{R}^{1}$ and $\gamma > 0$. The feature space can be written as:
\begin{equation}
\begin{split}
e^{-\gamma {\left \| \mathbf{x}-\mathbf{z} \right \|^{2}}} & = e^{ -\gamma (\mathbf{x}-\mathbf{z})^{2}} \\
& = e^{-\gamma\mathbf{x}^{2} + 2\gamma\mathbf{xz} -\gamma\mathbf{z}^{2}} \\
& = e^{(-\gamma\mathbf{x}^{2}-\gamma\mathbf{z}^{2})}e^{(2\gamma\mathbf{xz})}
\end{split}
\end{equation}
We can apply a Taylor expansion to the second term:
\begin{equation}
e^{(2\gamma\mathbf{xz})}= \left (1 + \frac{2\gamma \mathbf{xz}}{1!} + \frac{(2\gamma \mathbf{xz})^{2}}{2!} + \frac{(2\gamma \mathbf{xz})^{3}}{3!} + ... \right ) = \sum_{n=0}^{\infty}\frac{(2\gamma \mathbf{xz})^{n}}{n!}
\end{equation}
such that the kernel becomes:
\begin{equation}
K(\mathbf{x},\mathbf{z}) = e^{(-\gamma\mathbf{x}^{2}-\gamma \mathbf{z}^{2})}\sum_{n=0}^{\infty}\frac{(2\gamma \mathbf{xz})^{n}}{n!} = \left \langle \phi (\mathbf{x}),\phi (\mathbf{z}) \right \rangle
\end{equation}
Therefore, the feature mapping of vector $\textbf{x}$ is infinite-dimensional, with $n$-th component given by:
\begin{equation}
\phi_{n}(\mathbf{x})= e^{(-\gamma\mathbf{x}^{2})}\sqrt{\frac{(2\gamma)^{n}}{n!}}\,\mathbf{x}^{n}, \qquad n = 0,1,2,\dots
\end{equation}
It should be noted that similar Taylor expansions can be applied to many kernels that contain exponential functions, such as the Mat{\'e}rn and Laplacian kernels, which also result in infinite feature spaces.
\subsection*{Supplementary Equations 1 - Lagrangian Derivation of the Ridge Regression Dual Problem}
\setcounter{equation}{0}
Rather than solving the ridge regression objective function as a single minimisation problem, it can be considered as a constrained minimisation problem. The solution of a constrained optimisation problem can often be found using the so-called Lagrangian method. Consider the constrained optimisation problem:
\begin{equation}
\begin{split}
& \min f(x) \\
s.t. \; \; \; & g(x) = r
\end{split}
\end{equation}
The constrained optimisation problem can be concatenated into a single equation termed the Lagrangian, given by:
\begin{equation}
L(\mathbf{x},\mathbf{\alpha}, r) = f(\mathbf{x})+\alpha(g(\mathbf{x})-r)
\end{equation}
The Lagrangian requires the introduction of a new variable, $\alpha$, called a Lagrange multiplier. Ridge regression can be converted into a constrained minimisation problem given by:
\begin{equation}
\begin{split}
& \min_{\mathbf{w }\in \mathbb{R}^d} \frac{1}{2}\left \|r\right \|^2 +\frac{\lambda}{2}\left\|\mathbf{w}\right \|^2 \\
& s.t. \; \; \; X\mathbf{w} - \mathbf{y} = r
\end{split}
\end{equation}
with the Lagrangian given by:
\begin{equation}
L(\mathbf{w},r,\alpha) = \frac{1}{2}\left \|r\right \|^2 +\frac{\lambda}{2}\left\|\mathbf{w}\right \|^2 + \alpha^{T}(\mathbf{y} - X\mathbf{w} + r)
\end{equation}
Within this context, $\alpha$ are the dual variables. The aim is to set the derivatives with respect to the primal variables to zero and solve for $\alpha$. The first step is to write the Lagrangian entirely in terms of the dual variables; that is, to find expressions for the primal variables $\mathbf{w}$ and $r$ in terms of $\alpha$. This is achieved by setting the derivatives of the Lagrangian with respect to the primal variables to zero, giving:
\begin{equation}
\begin{split}
\frac{\partial L(\mathbf{w},\alpha,r)}{\partial \mathbf{w}} & = \lambda \mathbf{w} - X^{T}\alpha, \; \; \; \; \mathbf{w} = \frac{1}{\lambda}X^{T}\alpha \\
\frac{\partial L(\mathbf{w},\alpha,r)}{\partial r} & = r + \alpha, \; \; \; \; \; \; \; \; \; \; \; \; r = -\alpha
\end{split}
\end{equation}
Substituting the solutions for $\textbf{w}$ and $r$ back into the original Lagrangian equations gives the dual Lagrangian:
\begin{equation}
\begin{split}
L(\mathbf{w}(\alpha),r(\alpha),\alpha) & = \frac{1}{2}\left \| \alpha\right \|^2 +\frac{1}{2\lambda}\left\|X^{T}\alpha\right \|^2 + \alpha^{T}(\mathbf{y} - \frac{1}{\lambda}XX^{T}\alpha - \alpha) \\
L(\mathbf{w}(\alpha),r(\alpha),\alpha) & = -\frac{1}{2}\left \| \alpha\right \|^2 -\frac{1}{2\lambda}\left\|X^{T}\alpha\right \|^2 + \alpha^{T}\mathbf{y}
\end{split}
\end{equation}
Taking derivatives of the dual Lagrangian and setting it to zero allows the derivation of the optimal solution for the dual variables given by
\begin{equation}
\mathbf{\alpha} = \lambda(XX^{T} + \lambda I_{n})^{-1}\mathbf{y}
\end{equation}
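As a quick numerical check (random data; all variable names are illustrative), the dual solution recovers the familiar primal ridge estimate. Here the factor of $\lambda$ is absorbed into $\alpha$, as is common, so that $\mathbf{w} = X^{T}\alpha$:
\begin{lstlisting}[language=R, caption=Sketch verifying primal--dual equivalence for ridge regression (illustrative)]
set.seed(1); N <- 50; d <- 5; lambda <- 0.1
X <- matrix(rnorm(N * d), N, d); y <- rnorm(N)
w_primal <- solve(t(X) %*% X + lambda * diag(d), t(X) %*% y)
alpha    <- solve(X %*% t(X) + lambda * diag(N), y) # Dual variables
w_dual   <- t(X) %*% alpha
max(abs(w_primal - w_dual)) # Agrees to machine precision
\end{lstlisting}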
\subsection*{Supplementary Equations 3 - Extended Derivation of Random Fourier Features form Bochner's Theorem}
\setcounter{equation}{0}
Bochner's theorem guarantees that an appropriately scaled shift-invariant kernel, where $k(x_{1},x_{2})=k(x_{1}-x_{2})$, is the inverse Fourier transform of a probability measure given by:
\begin{equation}
k(x_{1}-x_{2}) = \int_{R^{d}}e^{i\omega^{T}(x_{1}-x_{2})} \mathbb{P}(\omega ) \; d\omega
\end{equation}
We can write this expectation as:
\begin{equation}
k(x_{1}-x_{2}) = \mathbb{E}_{\omega\sim \mathbb{P}}[e^{i\omega^{T}(x_{1}-x_{2})}]
\end{equation}
Applying Euler's formula to the exponential ($e^{i\theta} = \cos (\theta) + i\sin (\theta)$),
\begin{equation}
\begin{split}
k(x_{1}-x_{2}) & = \mathbb{E}_{\omega\sim \mathbb{P}}[e^{i\omega^{T}(x_{1}-x_{2})}] \\
& = \mathbb{E}_{\omega\sim \mathbb{P}}[\cos(\omega^{T}(x_{1}-x_{2})) + i\sin (\omega^{T}(x_{1}-x_{2}))] \\
\end{split}
\end{equation}
For the purpose of regression we only need to consider the real component:
\begin{equation}
\begin{split}
& = \mathbb{E}_{\omega\sim \mathbb{P}}[\cos(\omega^{T}(x_{1}-x_{2}))] \\
& = \mathbb{E}_{\omega\sim \mathbb{P}}[\cos(\omega^{T}x_{1})\cos(\omega^{T}x_{2}) + \sin(\omega^{T}x_{1})\sin(\omega^{T}x_{2})] \\
\end{split}
\end{equation}
By performing standard Monte Carlo integration on the real component, we can derive a finite-dimensional approximation of the kernel function as the following:
\begin{equation}
\begin{split}
& = \mathbb{E}_{\omega\sim \mathbb{P}}[\cos(\omega^{T}x_{1})\cos(\omega^{T}x_{2}) + \sin(\omega^{T}x_{1})\sin(\omega^{T}x_{2})] \\
& \approx \frac{1}{m}\sum_{j=1}^{m}\cos(\omega^{T}_{j}x_{1})\cos(\omega^{T}_{j}x_{2}) + \sin(\omega^{T}_{j}x_{1})\sin(\omega^{T}_{j}x_{2}) \;\;\;\;\;\;\;,\left \{\omega\right\}_{j=1}^{m} \overset{i.i.d}{\sim } \mathbb{P}(\omega)
\end{split}
\end{equation}
By separating the $x_{1}$ and $x_{2}$ components the approximation to the kernel becomes:
\begin{equation}
k(x_{1}-x_{2}) \approx \frac{1}{m}\sum_{j=1}^{m}
\begin{pmatrix}
\cos(\omega_j^T x_{1})\\
\sin(\omega_j^T x_{1})
\end{pmatrix}^T
\begin{pmatrix}
\cos(\omega_j^T x_{2})\\
\sin(\omega_j^T x_{2})
\end{pmatrix}
= \Phi_{\tiny{\textup{RFF}}}(x_{1})^T\Phi_{\tiny{\textup{RFF}}}(x_{2})
\;\;\;\;\;\;\;,\left \{\omega\right\}_{j=1}^{m} \overset{i.i.d}{\sim } \mathbb{P}(\omega)
\end{equation}
Thus, the random Fourier feature mapping is given by:
\begin{equation}
\Phi_{\tiny{\textup{RFF}}}(x_{1}) = \frac{1}{\sqrt{m}}
\begin{pmatrix}
\cos(\omega_j^T x_{1})\\
\sin(\omega_j^T x_{1})
\end{pmatrix}_{j=1}^{m} \;\;\;\;\;\;\;,\left \{\omega\right\}_{j=1}^{m} \overset{i.i.d}{\sim } \mathbb{P}(\omega)
\end{equation}
with the equivalent mapping for $x_{2}$ shown by replacing the $x_{1}$ in the above equation with $x_{2}$.
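As a concrete numerical illustration (a minimal NumPy sketch, not part of the original derivation; the function name \texttt{rff\_features} is ours), consider the Gaussian kernel $k(x_{1}-x_{2}) = \exp(-\|x_{1}-x_{2}\|^{2}/2)$, whose spectral measure $\mathbb{P}(\omega)$ is the standard normal distribution:

```python
import numpy as np

def rff_features(X, omegas):
    """Map each row x of X to (1/sqrt(m)) [cos(w_j^T x), sin(w_j^T x)]_{j=1..m}."""
    m = omegas.shape[0]
    proj = X @ omegas.T                      # (n, m) array of inner products w_j^T x
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 5, 2000

# For the Gaussian kernel exp(-||x1 - x2||^2 / 2), Bochner's theorem gives
# the spectral measure P(omega) = N(0, I_d), so draw frequencies accordingly.
omegas = rng.standard_normal((m, d))

x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)

Phi = rff_features(np.vstack([x1, x2]), omegas)   # shape (2, 2m)
approx = Phi[0] @ Phi[1]                          # Phi_RFF(x1) Phi_RFF(x2)^T
exact = np.exp(-np.linalg.norm(x1 - x2) ** 2 / 2)
print(abs(exact - approx))  # Monte Carlo error, roughly O(1/sqrt(m))
```

The inner product of the two feature vectors approximates the kernel evaluation with error of order $m^{-1/2}$, so $m$ trades approximation accuracy against the cost of the downstream linear regression.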
\section{Introduction}
This paper is a continuation of our previous work \cite{ShenZhuge2} and generalizes the global uniform Lipschitz estimate in periodic or uniformly almost-periodic homogenization to second-order elliptic operators with coefficients in a broader class of discontinuous almost-periodic functions. Precisely, we will study a family of elliptic operators with rapidly oscillating almost-periodic coefficients of the form
\begin{equation}\label{def_Le}
\mathcal{L}_\varepsilon = - \text{div}(A(\cdot/\varepsilon) \nabla ) = - \frac{\partial}{\partial x_i} \left\{ a^{\alpha\beta}_{ij} \bigg( \frac{\cdot}{\varepsilon}\bigg) \frac{\partial}{\partial x_j}\right\}, \qquad \varepsilon > 0
\end{equation}
where the summation convention is used throughout and $\varepsilon$ is a small parameter. We will assume that the coefficient matrix $A(y) = (a^{\alpha\beta}_{ij}(y))$ with $1\le i,j \le d$ and $1\le \alpha, \beta \le m$ is real, bounded, measurable, and satisfies the following conditions:
(i) Strong ellipticity: for some $\mu>0$, and all $ y\in\mathbb{R}^d $ and $ \xi = (\xi_i^\alpha) \in \mathbb{R}^{d\times m}$,
\begin{equation}\label{def_ellipticity}
\mu|\xi|^2 \le a^{\alpha\beta}_{ij}(y) \xi_i^\alpha \xi_j^\beta \le \mu^{-1}|\xi|^2.
\end{equation}
(ii) Almost-periodicity in the sense of H. Weyl (1927): each entry of $A$ may be approximated by a sequence of trigonometric polynomials with respect to the semi-norm
\begin{equation}\label{def_W2}
\norm{f}_{W^2} = \limsup_{R\to \infty} \sup_{x\in\mathbb{R}^d} \left( \fint_{B(x,R)} |f|^2 \right)^{1/2}.
\end{equation}
In this situation, we also say $A\in APW^2(\mathbb{R}^d)$. We emphasize that this class of almost-periodic functions, which allows discontinuous functions, is much broader than the class of uniformly almost-periodic functions in the sense of H. Bohr (1925) considered in \cite{Shen2, AS, AGK}, namely the closure of the trigonometric polynomials with respect to the $L^\infty$ norm \cite{CC}.
We consider the following Dirichlet problem (DP) in a bounded domain $\Omega$:
\begin{equation}\label{def_DP}
\mathcal{L}_\varepsilon(u_\varepsilon) +\lambda u_\varepsilon= F \quad \text{in } \Omega, \qquad \text{and} \qquad u_\varepsilon =f \quad \text{on } \partial\Omega,
\end{equation}
where $\lambda\ge 0$ is a parameter. The main goal of this paper is to establish large scale uniform boundary Lipschitz estimates for the weak solution of (\ref{def_DP}). The rigorous meaning of the notion of large scale uniform boundary Lipschitz estimate is as follows: for any $x_0\in \partial\Omega$ and any $r\ge \varepsilon$, there exists a constant $C$ independent of $\varepsilon$ or $r$, such that
\begin{equation}\label{ineq_Lip_Intro}
\left( \fint_{B(x_0,r)\cap \Omega} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C.
\end{equation}
It is well-known that elliptic equations or systems (\ref{def_DP}) with discontinuous coefficients may have unbounded $\nabla u_\varepsilon$. But (\ref{ineq_Lip_Intro}) asserts that $\nabla u_\varepsilon$ may be bounded in terms of its average integral at a relatively large scale $r\ge \varepsilon$, uniformly with respect to $\varepsilon$, if the coefficients possess a certain repeated self-similar structure. This phenomenon also occurs in periodic homogenization and random homogenization in the stationary and ergodic setting; see \cite{Shen1, AM, AKM, ASm}. In general, (\ref{ineq_Lip_Intro}) is optimal in the sense that it does not hold uniformly for $r\ll\varepsilon$. However, as long as local smoothness of the coefficients $A$ is imposed, a blow-up argument will send $r\to 0$ in (\ref{ineq_Lip_Intro}) and give us the usual full uniform Lipschitz estimate, i.e., $\norm{\nabla u_\varepsilon}_{L^\infty}$ is uniformly bounded; see Remark \ref{rmk_Lip}. This idea of separating large scale estimates ($r\ge \varepsilon$), related only to the homogenization process, from small scale estimates ($r< \varepsilon$), related only to the smoothness of the coefficients, has been clarified in \cite{Shen1,AM} for periodic and stochastic homogenization. Therefore, in the present paper, we will focus on obtaining the large scale estimate (\ref{ineq_Lip_Intro}), which reflects the essential feature of almost-periodic homogenization and meanwhile avoids any smoothness assumption.
Let us review some background on uniform Lipschitz estimates in homogenization before stating our main theorems. Historically, the uniform Lipschitz estimate has been studied for decades since the late 1980s. The first breakthrough was due to \cite{AL}, in which the authors proved the uniform Lipschitz estimates for Dirichlet problems with periodic coefficients by a compactness argument originating from the regularity theory in the calculus of variations and minimal surfaces. The compactness argument has proved extremely useful and has been extensively applied to all kinds of homogenization problems; see \cite{AL2, GSh, GuS,KP,KLS2} for more references on this topic. However, the Lipschitz estimate for Neumann problems was not known until the recent remarkable work \cite{KLS2}, where the compactness argument was used along with a delicate iteration scheme. On the other hand, in \cite{ASm, AM} the authors developed a new approach in stochastic homogenization, as a replacement for the compactness argument, to establish uniform regularity estimates with a general scheme adapted to different boundary conditions. The advantage of this approach is that it relies only on the rates of convergence instead of the periodic structure or specific boundary correctors. Shortly afterwards, this general method was successfully applied in \cite{AS}, where the coefficients were assumed to be uniformly almost-periodic. In the present paper, we will use a similar approach to establish the uniform boundary Lipschitz estimates, down to scale $\varepsilon$, for the operator $\mathcal{L}_\varepsilon + \lambda$ with coefficients satisfying (i) and (ii).
\subsection{Main results}
To state the main results of this paper, we recall that locally the boundary of a $C^{1,\alpha}$ domain is the graph of a $C^{1,\alpha}$ function. Without loss of generality, we may consider a $C^{1,\alpha}$ function $\phi:\mathbb{R}^{d-1} \to \mathbb{R}$ with $\phi(0) = 0$ and $\norm{\nabla \phi}_{C^\alpha(\mathbb{R}^{d-1})} \le M$. Unless otherwise indicated, in the following main theorems and the rest of our paper, we will define
\begin{equation}\label{def_C1a}
\begin{aligned}
D_r &= \left\{ (x',x_d)\in \mathbb{R}^d: |x'|<r \text{ and } \phi(x') < x_d < \phi(x') + r \right\}, \\
\Delta_r &= \left\{ (x',x_d)\in \mathbb{R}^d: |x'|<r \text{ and } x_d = \phi(x') \right\}. \\
\end{aligned}
\end{equation}
Let $\omega_{k,\sigma}(\varepsilon)$ be the quantity defined in (\ref{def_omega}), which quantifies the rate of convergence. We now state the main theorems of this paper.
\begin{theorem}[Boundary Lipschitz estimate for DP]\label{thm_Lip_DP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $\omega_{k,\sigma}$ satisfies the Dini-type condition:
\begin{equation}\label{ineq_Dini_Intro}
\int_0^1 \frac{\omega_{k,\sigma}(r)^{1/2}}{r} dr < \infty,
\end{equation}
for some $\sigma \in (0,1)$ and $k\ge 1$. Let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon= F$ in $D_2$ with $u_\varepsilon = f$ on $\Delta_2$, where $\lambda \in [0,1]$. Then, for any $\varepsilon \le r\le 1$,
\begin{equation}\label{ineq_DP_Lip}
\left( \fint_{D_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\left\{ \left( \fint_{D_1} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{f}_{C^{1,\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\},
\end{equation}
where $p>d$ and $\tau\in (0,\alpha)$. The constant $C$ depends only on $A, p,\sigma,k, \tau, \alpha$ and $M$.
\end{theorem}
We also introduce the Neumann problem (NP):
\begin{equation}\label{def_NP}
\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon = F \quad \text{in } \Omega, \quad \text{and} \quad \frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =g \quad \text{on } \partial\Omega, \quad \text{and} \quad \int_{\Omega} u_\varepsilon = 0,
\end{equation}
where $\lambda \ge 0$. We use $\partial u_\varepsilon/\partial\nu_\varepsilon$ to denote the co-normal derivative of $u_\varepsilon$ associated with $\mathcal{L}_\varepsilon$.
\begin{theorem}[Boundary Lipschitz estimate for NP]\label{thm_Lip_NP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $\omega_{k,\sigma}$ satisfies the same Dini-type condition (\ref{ineq_Dini_Intro}) for some $\sigma \in (0,1), k\ge 1$. Let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon = F$ in $D_2$ with $\partial u_\varepsilon/\partial \nu_\varepsilon = g$ on $\Delta_2$, where $\lambda \in [0,1]$. Then, for $\varepsilon \le r\le 1$,
\begin{equation}\label{ineq_NP_Lip}
\left( \fint_{D_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\left\{ \left( \fint_{D_1} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{g}_{C^{\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\},
\end{equation}
where $p>d$ and $\tau\in (0,\alpha)$. The constant $C$ depends only on $A, p,\sigma,k, \tau, \alpha$ and $M$.
\end{theorem}
\subsection{Strategy of proof}
We now present the outline of our approach, including some key ideas in the proof. Recall that the homogenized system is
\begin{equation}\label{def_homo}
\mathcal{L}_0 u_0 + \lambda u_0 = F, \qquad \text{subject to a certain boundary condition,}
\end{equation}
where $\mathcal{L}_0 = -\text{div}(\widehat{A} \nabla)$ and $\widehat{A}$ is a constant matrix, known as the homogenized or effective matrix, defined in (\ref{def_Ahat}). The proof of Theorem \ref{thm_Lip_DP} or \ref{thm_Lip_NP} is roughly divided into three steps, which follow the same line as \cite{Shen1}:
(1) Establish the $L^2$ rate of convergence in Lipschitz domains, i.e., the error estimate of $\norm{u_\varepsilon - u_0}_{L^2}$;
(2) Show that $u_\varepsilon$ satisfies the so-called \emph{flatness property} (how well it may be approximated by affine functions) as long as $u_0$ does;
(3) Iterate step (2) down to microscopic scale $\varepsilon$, under the additional Dini-type condition (\ref{ineq_Dini_Intro}).
The rate of convergence in $L^2$ will be shown in Section 3. In fact, if $\Omega$ is a bounded Lipschitz domain, and $u_\varepsilon, u_0$ are the weak solutions of (\ref{def_DP}) and the corresponding homogenized system (\ref{def_homo}), respectively, then
\begin{equation}\label{ineq_L2Lip_Intro}
\norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{ (1+\lambda)^{-1/2}\norm{F}_{L^2(\Omega)} + (1+\lambda)^{1/2} \norm{f}_{H^1(\partial\Omega)} \Big\}.
\end{equation}
The proof of (\ref{ineq_L2Lip_Intro}), in contrast to periodic homogenization, is based on the estimates of the so-called approximate correctors established in \cite{ShenZhuge2}; see Section 2 for more details. We should point out that the proofs of those estimates for approximate correctors are extremely difficult and involve compactness and ergodic arguments. The rate $O(\omega_{k,\sigma}(\varepsilon)^{1/2})$ in (\ref{ineq_L2Lip_Intro}) seems to be suboptimal. But as far as we know, it is the best result derived for almost-periodic homogenization in Lipschitz domains, and it is sufficient for us to proceed with our argument for uniform Lipschitz estimates.
The second and third steps of the proof of Theorems \ref{thm_Lip_DP} and \ref{thm_Lip_NP} are laid out in Section 4. Based on the \emph{flatness property} of weak solutions of $\mathcal{L}_0 + \lambda$, we are able to prove the following \emph{flatness property} of $u_\varepsilon$:
\begin{equation}\label{ineq_flatness_Intro}
H(\theta r; u_\varepsilon) \le \frac{1}{2} H(r;u_\varepsilon) + C [\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi(2r),
\end{equation}
for some fixed $0<\theta<1$ and all $\varepsilon < r<1$, where $H$ and $\Phi$ are defined in (\ref{def_H}) and (\ref{def_Phi}), respectively. Notice that $H(r;u_\varepsilon)$ quantifies the local regularity of $u_\varepsilon$ and the second term on the right-hand side of (\ref{ineq_flatness_Intro}) is the error term between $u_\varepsilon$ and $u_0$. For $r>\varepsilon$, we may expect this error term to be much smaller than the improvement in the flatness property. Then we may iterate (\ref{ineq_flatness_Intro}) down to the microscopic scale $\varepsilon$ to obtain a uniform estimate for $H(r; u_\varepsilon)$ for all $\varepsilon<r<1$. This idea can be carried out under the extra Dini-type condition (\ref{ineq_Dini_Intro}), which is exactly the technical reason why that condition is needed in our proof. Fortunately, this condition, closely related to the almost-periodicity of the coefficients $A$, is easily satisfied in applications; see Lemma \ref{lem_rate} or Table \ref{tab_1} below.
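To make the role of the Dini-type condition explicit, iterating (\ref{ineq_flatness_Intro}) along the scales $r_j = \theta^j$ (in a simplified form that ignores the coupling of $\Phi(2r)$ with $H$) yields
\begin{equation*}
H(\theta^N; u_\varepsilon) \le 2^{-N} H(1;u_\varepsilon) + C \sum_{j=0}^{N-1} 2^{-(N-1-j)} \big[\omega_{k,\sigma}(\varepsilon \theta^{-j})\big]^{1/2} \Phi(2\theta^j).
\end{equation*}
If $\omega_{k,\sigma}$ is essentially monotone, the sum over the geometric grid is controlled by the integral
\begin{equation*}
\sum_{j\,:\, \theta^j \ge \varepsilon} \big[\omega_{k,\sigma}(\varepsilon\theta^{-j})\big]^{1/2} \le \frac{C}{\ln(1/\theta)} \int_\varepsilon^1 \frac{\omega_{k,\sigma}(r)^{1/2}}{r}\, dr,
\end{equation*}
which is bounded uniformly in $\varepsilon$ precisely under (\ref{ineq_Dini_Intro}).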
\subsection{Further results and discussions}
In the last section, we also discuss some further applications of the Lipschitz estimate and its proof. The first application improves the estimate for $\nabla \chi_T$ via the interior Lipschitz estimate for $\mathcal{L}_\varepsilon + \lambda$ with $\lambda = 1$. We show that under (\ref{ineq_Dini_Intro}), $\norm{\nabla \chi_T}_{S_1^2} $ is uniformly bounded, instead of merely being bounded by $CT^\sigma$ for $\sigma>0$. The second application is devoted to the large scale Rellich estimate in $L^2$. More precisely, we will show that
\begin{equation}\label{ineq_RelDP}
\left(\fint_{\Omega_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\norm{\nabla_{\text{tan}} u_\varepsilon}_{L^2(\Omega)},
\end{equation}
and
\begin{equation}\label{ineq_RelNP}
\left(\fint_{\Omega_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\norm{\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} }_{L^2(\Omega)},
\end{equation}
for all $r\ge\omega_{k,\sigma}(\varepsilon)$. These estimates imply the usual Rellich estimate if $\omega_{k,\sigma}(\varepsilon) = O(\varepsilon)$ and $A$ possesses symmetry and certain smoothness; see the remark after Theorem \ref{thm_Rellich_NP}.
The last application is the large scale boundary H\"{o}lder estimates for both Dirichlet and Neumann problems. The argument is similar to, but simpler than, that for the boundary Lipschitz estimate. The main point here is that we do not need any extra condition on the convergence rate $\omega_{k,\sigma}(\varepsilon)$. Indeed, the fact that $\omega_{1,\sigma}(\varepsilon) \to 0$ as $\varepsilon\to 0$ is sufficient to establish the uniform boundary H\"{o}lder estimate. We state the result as follows.
\begin{theorem}[Boundary H\"{o}lder estimate for DP]\label{thm_holder_DP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon= F$ in $D_2$ with $u_\varepsilon = f$ on $\Delta_2$, where $\lambda \in [0,1]$. Then, for any $\varepsilon \le r\le 1$,
\begin{equation}\label{ineq_Holder_Intro}
\left( \fint_{D_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C r^{\gamma-1} \Bigg\{ \left( \fint_{D_1} |\nabla u_\varepsilon |^2 \right)^{1/2} + \left( \fint_{D_1} |F|^p \right)^{1/p}
+ \norm{f}_{C^{0,1}(\Delta_{1})} \Bigg\},
\end{equation}
where $\gamma < 2-d/p$, $p\ge 2$ and $p>d/2$. In particular, if $p = d$, then (\ref{ineq_Holder_Intro}) holds for all $\gamma \in (0,1)$.
\end{theorem}
A similar estimate also works for Neumann problems (\ref{def_NP}) with $\norm{f}_{C^{0,1}(\Delta_{1})}$ replaced by $\norm{g}_{L^\infty(\Delta_{1})}$; see Theorem \ref{thm_holder_NP}.
Overall, we can see from the previous results the close relationship between the rate $\omega_{k,\sigma}(\varepsilon)$ and uniform regularity in different situations. This idea has more or less been shown in \cite{Shen1} for periodic homogenization, where the rate of convergence is always the same, i.e., $\omega_{1,\sigma}(\varepsilon) = O(\varepsilon)$. But it is of particular interest for almost-periodic homogenization since the rate of convergence could be arbitrarily slow. In the following table, we summarize all the uniform regularity results obtained in this paper and \cite{ShenZhuge2}, and clarify how the function $\omega_{k,\sigma}(\varepsilon)$, which quantifies the rate of convergence, is related to each uniform regularity estimate.
\begin{table}[h]
\caption{Relationship between convergence rate and regularity}\label{tab_1}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
\bf Sufficient condition on $A$ & \bf Rate of convergence & \bf Large scale regularity\\
\hline
No extra condition needed & $\omega_{1,\sigma}(\varepsilon) \to 0$ as $\varepsilon \to 0 $ & H\"{o}lder estimates \\
\hline
\multirow{2}{*}{$\rho_k(L,L) \lesssim \ln(1+L)^{-\alpha}, \alpha > 3$} & \multirow{2}{*}{$\displaystyle \int_0^1 \frac{\omega_{k,\sigma}(r)^{1/2}}{r} dr< \infty $ }& Lipschitz estimates;\\
& & and $\norm{\nabla \chi_T}_{S_1^2} \le C$ \\
\hline
$\rho_k(L,L) \lesssim L^{-1-\alpha}, \alpha > 0$; & \multirow{3}{*}{$\omega_{k,\sigma}(\varepsilon ) \lesssim \varepsilon$ }& $L^2$ Rellich estimate; \\
or $A$ is sufficiently smooth and& & existence of true corrector \\
quasi-periodic\cite{AGK} & & $\chi$ and $\norm{\chi}_{S_1^2} \le C$\cite{ShenZhuge2} \\
\hline
\end{tabular}
\end{center}
\end{table}
Throughout this paper we will use $\fint_E f$ to denote the average integral of a function $f$ over a set $E$, and $C$ to denote constants that depend at most on $A,\Omega$
and other scale-independent parameters (e.g., $k, \sigma, p$), but never on $\varepsilon, T$ or other scale-dependent parameters (e.g., $\lambda, L, r$).
\section{Preliminaries for almost-periodic homogenization}
In this section, we briefly review some preliminaries of almost-periodic homogenization, with particular emphasis on the characterization of almost-periodicity and the approximate correctors. Except for some classical content, most of the material was formulated in our recent paper \cite{ShenZhuge2}.
\subsection{Homogenization}
We start with spaces of almost-periodic functions. Let $\text{Trig}(\mathbb{R}^d)$ denote the set of real trigonometric polynomials in $\mathbb{R}^d$. A function $f$ in $L^2_{\text{loc}}(\mathbb{R}^d) $ is said to belong to $B^2(\mathbb{R}^d)$ if $f$ is the limit of a sequence of functions in $\text{Trig}(\mathbb{R}^d)$ with respect to the semi-norm
\begin{equation}\label{cond_B2}
\norm{f}_{B^2} = \limsup_{R\to\infty} \left( \fint_{B(0,R)} |f|^2 \right)^{1/2}.
\end{equation}
Functions in $B^2(\mathbb{R}^d)$ are usually said to be almost-periodic in the sense of Besicovitch (1926). It is not hard to see that if $g\in L^\infty(\mathbb{R}^d) \cap B^2(\mathbb{R}^d)$ and $f\in B^2(\mathbb{R}^d)$, then $fg\in B^2(\mathbb{R}^d)$.
Let $f\in L^1_{\text{loc}}(\mathbb{R}^d)$. A number $\ag{f}$ is called the mean value of $f$ if
\begin{equation}
\lim_{\varepsilon\to 0^+} \int_{\mathbb{R}^d} f(x/\varepsilon) \varphi(x) dx = \ag{f} \int_{\mathbb{R}^d} \varphi,
\end{equation}
for any $\varphi \in C_0^\infty(\mathbb{R}^d)$. It is known that if $f\in B^2(\mathbb{R}^d)$, then $f$ has a mean value. Under the equivalence relation $f\sim g$ if $\norm{f-g}_{B^2} = 0$, the set $B^2(\mathbb{R}^d)$ is a Hilbert space with the inner product defined by $(f,g) = \ag{fg}$. Furthermore, one has the following Weyl orthogonal decomposition:
\begin{equation}
B^2(\mathbb{R}^d;\mathbb{R}^{d\times m}) = V^2_{\text{pot}} \oplus V^2_{\text{sol}} \oplus \mathbb{R}^{d\times m},
\end{equation}
where $V^2_{\text{pot}}$ (resp., $V^2_{\text{sol}}$) denotes the closure of potential (resp., solenoidal) trigonometric polynomials with mean value zero in $B^2(\mathbb{R}^d;\mathbb{R}^{d\times m})$. Assume $A =(a^{\alpha\beta}_{ij}) \in B^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). For each $1\le j\le d$ and $1\le \beta \le m$, let $\psi_j^\beta = (\psi_{ij}^{\alpha\beta})$ be the unique function in $V^2_{\text{pot}}$ satisfying the following auxiliary equations
\begin{equation}\label{def_Vpot}
(a_{ik}^{\alpha\gamma} \psi_{kj}^{\gamma\beta}, \phi_i^\alpha) = - (a_{ij}^{\alpha\beta}, \phi_i^\alpha) \qquad \text{for any } \phi = (\phi_i^\beta) \in V^2_{\text{pot}}.
\end{equation}
It is shown in \cite{JKO} that $A$ admits homogenization with homogenized matrix $\widehat{A} = (\widehat{a}_{ij}^{\alpha\beta})$ defined by
\begin{equation}\label{def_Ahat}
\widehat{a}_{ij}^{\alpha\beta} = \ag{a_{ij}^{\alpha\beta}} + \ag{a_{ik}^{\alpha\gamma} \psi_{kj}^{\gamma\beta}}.
\end{equation}
Moreover, $\widehat{A^*} = (\widehat{A})^*$, where $A^*$ denotes the adjoint of $A$. The following is a statement of the homogenization theorem, whose proof was contained in \cite{ShenZhuge2}.
\begin{theorem}\label{thm_homo}
Suppose $A=(a_{ij}^{\alpha\beta})$ satisfies the ellipticity condition (\ref{def_ellipticity}) and each $a_{ij}^{\alpha\beta} \in B^2(\mathbb{R}^d)$. Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$ and $F\in H^{-1}(\Omega;\mathbb{R}^m)$. Let $u_\varepsilon \in H^1(\Omega;\mathbb{R}^m)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon = F$. Suppose that $u_\varepsilon \to u_0$ weakly in $H^1(\Omega;\mathbb{R}^m)$. Then $A(x/\varepsilon) \nabla u_\varepsilon \to \widehat{A} \nabla u_0$ weakly in $L^2(\Omega;\mathbb{R}^{d\times m})$. Consequently, if $f\in H^{1/2}(\partial \Omega;\mathbb{R}^m)$ and $u_\varepsilon$ is the weak solution to the Dirichlet problem:
\begin{equation}
\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon= F \quad \text{ in } \Omega \quad \text{and} \quad u_\varepsilon=f \quad \text{on } \partial\Omega.
\end{equation}
Then, as $\varepsilon\to 0$, $u_\varepsilon\to u_0$ weakly in $H^1 (\Omega;\mathbb{R}^m)$, where $u_0$ is the weak solution to
\begin{equation}
\mathcal{L}_0(u_0) + \lambda u_0= F \quad \text{ in } \Omega \quad \text{and} \quad u_0=f \quad \text{on } \partial\Omega.
\end{equation}
\end{theorem}
We point out here that $B^2(\mathbb{R}^d)$ is usually the largest space of almost-periodic functions in which the homogenization theorem can be established. However, this space seems unsuitable for a quantitative theory due to the lack of spatial uniformity.
\subsection{Almost-periodicity and approximate correctors}
We define a subspace of $B^2(\mathbb{R}^d)$,
\begin{equation*}
APW^2(\mathbb{R}^d) = \text{the closure of Trig} (\mathbb{R}^d) \text{ with respect to } W^2 \text{ semi-norm},
\end{equation*}
where the $W^2$ semi-norm is defined in (\ref{def_W2}). The functions in $APW^2(\mathbb{R}^d)$ are called almost-periodic in the sense of H. Weyl. Note that in the definition of $APW^2(\mathbb{R}^d)$, the regularity assumption is completely removed and hence this space is much larger than
the classes of uniformly almost-periodic functions considered in \cite{Shen2, AS, AGK}. Earlier work in \cite{ShenZhuge2} also indicates that $APW^2(\mathbb{R}^d)$ is a fairly suitable space for quantitative homogenization. All of the following settings and results concerning the coefficient matrix $A\in APW^2(\mathbb{R}^d)$ were formulated in \cite{ShenZhuge2}, based on the ideas of \cite{Shen2} and \cite{AGK}.
For $g\in L^p_{\text{loc}}(\mathbb{R}^d)$ and $R>0$, we define the norm
\begin{equation}\label{def_Sp}
\| g\|_{S_R^p} =\sup_{x\in \mathbb{R}^d} \left(\fint_{B(x, R)} |g|^p\right)^{1/p}.
\end{equation}
For $y, z\in \mathbb{R}^d$, define the difference operator
\begin{equation}\label{def_Diff}
\Delta_{yz} g (x):= g(x+y)- g(x+z) .
\end{equation}
Let $P=P_k =\big\{ (y_1, z_1), \dots, (y_k, z_k) \big\}$ be a collection of pairs $(y_i, z_i)\in \mathbb{R}^d\times \mathbb{R}^d$, and
\begin{equation*}
Q=\big\{ (y_{i_1}, z_{i_1}), \dots, (y_{i_\ell}, z_{i_\ell}) \big\}
\end{equation*}
be a subset of $P$ with $i_1<i_2<\dots <i_\ell$. Define
\begin{equation*}
\Delta_Q (g)=\Delta_{y_{i_1} z_{i_1}} \cdots \Delta_{y_{i_\ell} z_{i_\ell}} (g).
\end{equation*}
To quantify the almost periodicity of the coefficient matrix $A$,
we introduce
\begin{equation}\label{def_rho_k}
\rho_{k} ( L, R)
=\sup_{y_1\in \mathbb{R}^d}\inf_{|z_1|\le L} \cdots\sup_{y_k\in \mathbb{R}^d} \inf_{|z_k|\le L}
\sum
\|\Delta_{Q_1} (A)\|_{S^p_R}
\cdots \|\Delta_{Q_\ell} (A)\|_{S^p_R},
\end{equation}
where the sum is taken over all partitions of $P=Q_1\cup Q_2 \cup \cdots \cup Q_\ell$ with $1\le \ell\le k$ and $Q_i\cap Q_j = \emptyset$ if $i\neq j$.
The exponent $p$ in (\ref{def_rho_k}) depends on $k$ and is given by
\begin{equation}\label{def_expt_p}
\frac{k}{p} =\frac{1}{2}-\frac{1}{\bar{q}},
\end{equation}
where $\bar{q}>2$ is the exponent related to the reverse H\"older estimate (Meyers' estimate) of solutions of elliptic operators, which depends only on $d,m$ and $\mu$. Note that $\rho_k(L,R) \le C_k \rho_1(L,R)$ and $\rho_1(L,R) \to 0$ as $L,R\to\infty$.
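Unwinding the definition in the simplest case $k=1$, where $P$ consists of a single pair and the only partition is $Q_1 = P$, (\ref{def_rho_k}) reduces to
\begin{equation*}
\rho_1(L,R) = \sup_{y\in\mathbb{R}^d} \inf_{|z|\le L} \big\| A(\cdot+y) - A(\cdot+z) \big\|_{S^p_R},
\end{equation*}
which measures how well an arbitrary translate of $A$ may be approximated, uniformly in $L^p$ at scale $R$, by a translate of length at most $L$.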
\begin{definition}
Let $P_j^\beta(x) = x_j e^\beta$, where $e^\beta = (0,\cdots,1,\cdots,0)$ with $1$ in the $\beta^{\text{th}}$ position. For any $T>0$, let $u = \chi^\beta_{T,j} = (\chi^{1\beta}_{T,j},\cdots,\chi^{m\beta}_{T,j})$ be the weak solution of
\begin{equation}\label{def_corrector}
-\text{div} (A(x) \nabla u) + T^{-2} u = \text{div}(A(x)\nabla P_j^\beta) \quad \text{in } \mathbb{R}^d,
\end{equation}
given by \cite[Lemma 3.1]{ShenZhuge2}. The matrix-valued functions $\chi_T = (\chi_{T,j}^\beta) = (\chi_{T,j}^{\alpha\beta})$ are called the approximate correctors for the family of operators $\{\mathcal{L}_\varepsilon \}$.
\end{definition}
The importance of approximate correctors is due to the fact that
\begin{equation}\label{ineq_chiT2psi}
\norm{\nabla \chi_T - \psi}_{B^2} \to 0, \qquad \text{as } T \to \infty,
\end{equation}
and thus $\chi_T$ could be regarded as an approximation of the usual correctors.
\begin{theorem}\label{thm_chiT_1}
Suppose that $A\in APW^2(\mathbb{R}^d)$ and satisfies the ellipticity condition (\ref{def_ellipticity}).
Fix $k\ge 1$ and $\sigma\in (0,1)$.
Then there exists a constant $c>0$, depending only on $d$ and $k$, such that
for any $T\ge 2$,
\begin{equation}\label{ineq_dchiT_1}
\|\nabla \chi_T\|_{S^2_1} \le C_\sigma T^\sigma,
\end{equation}
and
\begin{equation}\label{ineq_chiT_1}
\|\chi_T\|_{S^2_1}
\le C_\sigma \Theta_{k,\sigma}(T),
\end{equation}
where $C_\sigma$ depends only on $\sigma, k$ and $A$, and $\Theta_{k,\sigma}$ is defined by
\begin{equation}\label{def_Theta}
\Theta_{k,\sigma}(T) = \int_1^T \inf_{1\le L\le t}
\left\{ \rho_k (L, t) +\exp\left(-\frac{c\, t^2}{L^2} \right) \right\}
\left(\frac{T}{t}\right)^\sigma dt.
\end{equation}
\end{theorem}
\begin{definition}
For $1\le i,j\le d$ and $1\le \alpha,\beta \le m$, define
\begin{equation}\label{def_bT}
b^{\alpha\beta}_{T,ij}(y) = a^{\alpha\beta}_{ij}(y) + a^{\alpha\gamma}_{ik} \frac{\partial}{\partial y_k} \big( \chi^{\gamma\beta}_{T,j}(y)\big) - \widehat{a}^{\alpha\beta}_{ij}.
\end{equation}
We call $\phi^{\alpha\beta}_{T,ij} \in H^2_{\text{loc}}(\mathbb{R}^d)$ the dual approximate correctors if they are the solutions of
\begin{equation}\label{def_phiT}
-\Delta \phi^{\alpha\beta}_{T,ij} + T^{-2} \phi^{\alpha\beta}_{T,ij} = b^{\alpha\beta}_{T,ij} - \ag{b^{\alpha\beta}_{T,ij}},
\end{equation}
given by \cite[Lemma 3.1]{ShenZhuge2}.
\end{definition}
\begin{theorem}\label{thm_dualcor}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Then, for any $\sigma\in (0,1), k\ge 1$ and $T\ge 2$,
\begin{align}\label{ineq_phiT_B1}
\begin{aligned}
\norm{T^{-1} \phi_T}_{S_1^2} + \norm{\nabla \phi_T}_{S_1^2} + \norm{ T \nabla \frac{ \partial }{\partial x_i} \phi_{T,i\cdot}}_{S_1^2} \le C_\sigma \Theta_{k,\sigma}(T),
\end{aligned}
\end{align}
where the constant $C_\sigma$ depends only on $\sigma,k$ and $A$.
\end{theorem}
Also throughout this paper we define
\begin{equation}\label{def_omega}
\omega_{k,\sigma}(\varepsilon) = \norm{\psi-\nabla \chi_T}_{B^2} + \norm{\psi^*-\nabla \chi_T^*}_{B^2} + T^{-1}\Theta_{k,\sigma}(T), \qquad \varepsilon = T^{-1},
\end{equation}
where $\psi^*$ and $\chi_T^*$ are the auxiliary functions and approximate correctors for the family of adjoint operators $\{\mathcal{L}_\varepsilon^* \}$. We emphasize that the quantity $\omega_{k,\sigma}(\varepsilon)$ plays an important role in this paper, since it characterizes the almost-periodicity of the coefficients $A$, in the sense of H. Weyl, and quantifies the rate of convergence; see Theorems \ref{thm_L2_C11} and \ref{thm_H1_Lip}. Actually, it is shown in \cite{ShenZhuge2} that if $A$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $A\in APW^2(\mathbb{R}^d)$, then $\omega_{k,\sigma}(\varepsilon) \to 0$ as $\varepsilon \to 0$. In particular, the following lemma gives explicit control on $\omega_{k,\sigma}(\varepsilon)$ in terms of $\rho_k$, which quantifies the almost-periodicity of $A$.
\begin{lemma}\label{lem_rate}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Then
\begin{equation}\label{ineq_omg}
\omega_{k,\sigma}(\varepsilon) \le C\int_{T}^{\infty} \frac{\Theta_{k,\sigma}(t)}{t^2} dt + CT^{-1}\Theta_{k,\sigma}(T),
\end{equation}
provided the integral on the right-hand side is finite, where $T = \varepsilon^{-1}$. In particular,
(i) if $\rho_k(L,L) \le C L^{-1-\alpha}$ for some $\alpha>0,k\ge 1$, then
\begin{equation}
\omega_{k,\sigma'}(\varepsilon) = O(\varepsilon),
\end{equation} for some $\sigma'$ that depends only on $\alpha$;
(ii) if $\rho_k(L,L) \le C L^{-\alpha}$ for some $0<\alpha \le 1,k\ge 1$, then
\begin{equation}
\omega_{k,\sigma'}(\varepsilon) = O(\varepsilon^\beta),
\end{equation} for all $\beta<\alpha$ and some $\sigma'$ that depends only on $\alpha,\beta$;
(iii) if $\rho_k(L,L) \le C \ln(1+L)^{-\alpha}$ for some $\alpha > 1, k\ge 1$, then
\begin{equation}
\omega_{k,\sigma}(\varepsilon) = O((-\ln \varepsilon)^{1-\alpha}).
\end{equation}
\end{lemma}
\begin{proof}
In view of (\ref{def_omega}), to see (\ref{ineq_omg}), it suffices to show
\begin{equation}
\norm{\psi-\nabla \chi_T}_{B^2} \le C\int_{T}^{\infty} \frac{\Theta_{k,\sigma}(t)}{t^2} dt.
\end{equation}
Observe that (\ref{ineq_chiT2psi}) implies
\begin{equation*}
\norm{\psi-\nabla \chi_T}_{B^2} \le \sum_{j=1}^\infty \norm{\nabla \chi_{2^{j-1} T}-\nabla \chi_{2^{j} T}}_{B^2}.
\end{equation*}
Let $v = \chi_{2^{j-1} T}- \chi_{2^{j} T}$. It follows from the definition of $\chi_T$ that $v$ satisfies
\begin{equation*}
-\text{div} (A \nabla v) + (2^{j-1}T)^{-2} v = -3 (2^{j}T)^{-2} \chi_{2^j T}.
\end{equation*}
Therefore, a standard estimate in \cite[Lemma 3.1]{ShenZhuge2} gives
\begin{equation*}
\norm{\nabla v}_{S_{2^{j-1}T}^2} = \norm{\nabla \chi_{2^j T}-\nabla \chi_{2^{j-1} T}}_{S_{2^{j-1} T}^2} \le C (2^{j}T)^{-1}\norm{\chi_{2^jT}}_{S_{2^{j-1} T}^2}.
\end{equation*}
Hence, it follows from (\ref{ineq_chiT_1}) that
\begin{align*}
\norm{\psi-\nabla \chi_T}_{B^2} &\le C\sum_{j=1}^\infty (2^j T)^{-1} \norm{\chi_{2^jT}}_{S_{2^{j-1} T}^2} \\
&\le C\sum_{j=1}^\infty (2^j T)^{-1} \Theta_{k,\sigma}(2^j T) \\
&\le C \sum_{j=1}^\infty \int_{2^{j-1}T}^{2^j T} \frac{\Theta_{k,\sigma}(t)}{t^2} dt \\
&= C\int_{T}^{\infty} \frac{\Theta_{k,\sigma}(t)}{t^2} dt.
\end{align*}
The estimate for $\norm{\psi^*-\nabla \chi_T^*}_{B^2}$ is exactly the same.
Now we recall that (i) was shown in \cite{ShenZhuge2}, so we skip the proof here. Parts (ii) and (iii) are direct corollaries of (\ref{ineq_omg}). Indeed, for (ii), if $\rho_k(L,L) \le C L^{-\alpha}$ for some $0<\alpha \le 1$, then
\begin{align*}
\inf_{1\le L\le t}
\left\{ \rho_k (L, t) +\exp\left(-\frac{c\, t^2}{L^2} \right) \right\} &\le \rho_k (t^{\delta}, t) +\exp\left(-t^{2(1-\delta)} \right) \\
& \le C t^{-\delta \alpha}
\end{align*}
for any $0<\delta<1$, where $C$ depends also on $\delta$ and $\alpha$. Choosing $\sigma$ appropriately so that $\delta \alpha + \sigma < 1$, we obtain
\begin{equation}
\Theta_{k,\sigma}(T) \le C T^\sigma \int_1^T t^{-\delta \alpha - \sigma} dt \le C T^{1-\delta\alpha}.
\end{equation}
As a result,
\begin{equation}
\omega_{k,\sigma}(\varepsilon) \le C \int_T^\infty \frac{t^{1-\delta \alpha}}{t^2} dt + C T^{-1} T^{1-\delta \alpha} \le C T^{-\delta\alpha} = C\varepsilon^\beta,
\end{equation}
where $\beta = \delta\alpha$ could be any number less than $\alpha$ since $0<\delta<1$ is arbitrary. The estimate for (iii) is similar.
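For the reader's convenience, we sketch the computation for (iii), with constants that may change from line to line. Taking $L = t^\delta$ with any fixed $0<\delta<1$, as in (ii),
\begin{equation*}
\inf_{1\le L\le t} \left\{ \rho_k (L, t) +\exp\left(-\frac{c\, t^2}{L^2} \right) \right\} \le C\left[\ln(e+t)\right]^{-\alpha},
\end{equation*}
so that $\Theta_{k,\sigma}(T) \le C T^{\sigma} \int_1^T t^{-\sigma} [\ln(e+t)]^{-\alpha}\, dt \le CT[\ln (e+T)]^{-\alpha}$. Hence, by (\ref{ineq_omg}),
\begin{equation*}
\omega_{k,\sigma}(\varepsilon) \le C\int_T^\infty \frac{[\ln (e+t)]^{-\alpha}}{t}\, dt + C[\ln (e+T)]^{-\alpha} \le C(\ln T)^{1-\alpha} = C(-\ln \varepsilon)^{1-\alpha},
\end{equation*}
since $\alpha>1$ and $T = \varepsilon^{-1}$.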
\end{proof}
\begin{remark}
We point out that part (ii) of the lemma above will not be used in this paper. Part (i) will be used in the Rellich estimate in the last section, while part (iii) provides a sufficient condition on the coefficients for the Dini-type condition (\ref{ineq_Dini_Intro}). Indeed, it is not hard to see that if $\rho_k(L,L) \le C [\ln(1+L)]^{-\alpha}$ for some $k\ge 1$ and $\alpha > 3$, then (\ref{ineq_Dini_Intro}) is satisfied.
We also mention that these decay estimates for $\rho_k$ hold for all periodic coefficients, as well as for sufficiently smooth quasi-periodic coefficients (see \cite{AGK} for example). Thus the work in this paper generalizes the results of periodic homogenization and is genuinely applicable to non-periodic homogenization.
\end{remark}
\subsection{A framework for convergence rates}
In this subsection, we introduce a framework for obtaining rates of convergence in $L^2$. This framework was formulated in \cite{ShenZhuge1} for mixed boundary value problems with periodic coefficients and was motivated by earlier work in \cite{GG,Su1,Su2}. Its advantage is that it handles homogenization problems with different boundary conditions and non-periodic coefficients in a unified and more efficient fashion. We first introduce some notation and lemmas. Let $\zeta \in C_0^\infty(B_1(0))$ be a cut-off function with $\int \zeta = 1$, and set $\zeta_\varepsilon(x) = \varepsilon^{-d} \zeta(x/\varepsilon)$. Define the smoothing operator
\begin{equation}
S_\varepsilon f(x) = \zeta_\varepsilon* f(x) = \int_{\mathbb{R}^d} \zeta_\varepsilon(y) f(x-y) dy.
\end{equation}
Clearly, for $1\le p\le \infty$,
\begin{equation}\label{ineq_S_p}
\norm{S_{\varepsilon} f}_{L^p(\Omega)} \le \norm{f}_{L^p(\Omega)}.
\end{equation}
Let $\delta > 2\varepsilon$ be a small parameter to be determined. Let $\eta_\delta \in C_0^\infty(\Omega)$ be a cut-off function such that $\eta_\delta(x) =0 $ in $\Omega_\delta = \{x\in\Omega: \text{dist}(x,\partial\Omega) < \delta\}$, $\eta_\delta(x) = 1$ in $\Omega \setminus \Omega_{2\delta}$ and $|\nabla \eta_\delta(x)| \le C\delta^{-1}$. Then define the localized smoothing operator
\begin{equation}
K_{\varepsilon,\delta} f(x) = S_\varepsilon (\eta_\delta f)(x).
\end{equation}
Note that $K_{\varepsilon,\delta} f \in C_0^\infty(\Omega)$ since $\delta > 2\varepsilon$.
Lemmas \ref{lem_gxe_B1}--\ref{lem_Oeu_g} are standard, and their proofs may be found in \cite{ShenZhuge1}.
\begin{lemma}\label{lem_gxe_B1}
Let $f\in L^p(\mathbb{R}^d)$ for some $1\le p< \infty$. Then for any $g\in L^p_{\text{loc}}(\mathbb{R}^d)$,
\begin{equation}
\norm{g(\cdot/\varepsilon)S_\varepsilon(f)}_{L^p(\mathbb{R}^d)}\le C \norm{g}_{S_1^p} \norm{f}_{L^p(\mathbb{R}^d)},
\end{equation}
where $C$ depends only on $d$.
\end{lemma}
\begin{lemma}\label{lem_S_e}
Let $f\in W^{1,p}(\mathbb{R}^d)$ for some $1\le p<\infty$. Then
\begin{equation*}
\norm{S_\varepsilon(f) - f}_{L^p(\mathbb{R}^d)} \le C \varepsilon \norm{\nabla f}_{L^p(\mathbb{R}^d)},
\end{equation*}
where $C$ depends only on $d$.
\end{lemma}
\begin{lemma}\label{lem_Oeu_g}
Let $\Omega$ be a bounded Lipschitz domain. Then for any $u\in H^1(\mathbb{R}^d)$ and any $r\ge \varepsilon$,
\begin{equation*}
\int_{\Omega_{r}} |g(\cdot /\varepsilon) S_\varepsilon(u)|^2 \le C r \norm{g}_{S_1^2}^2 \norm{u}_{H^1(\mathbb{R}^d)} \norm{u}_{L^2(\mathbb{R}^d)},
\end{equation*}
where $\Omega_r = \{x\in \Omega: \text{dist}(x,\partial\Omega) < r\}$ and the constant $C$ depends only on the domain $\Omega$.
\end{lemma}
As is well known, the $H^1$ estimate, i.e., the error estimate for the first-order approximation, is usually the first step toward establishing the $L^2$ estimate of $u_\varepsilon - u_0$. We introduce our modified first-order approximation, defined as follows:
\begin{equation}\label{def_1st-appr}
w_\varepsilon = u_\varepsilon - u_0 -\varepsilon \chi_{T,k}^\beta(x/\varepsilon) K_{\varepsilon,\delta} \left(\frac{\partial u_0^\beta}{\partial x_k}\right),
\end{equation}
where $u_\varepsilon, u_0$ are the weak solutions associated with $\mathcal{L}_\varepsilon$ and $\mathcal{L}_0$, respectively. The operator $K_{\varepsilon,\delta}$ in the correction term has two effects: (i) thanks to Lemma \ref{lem_gxe_B1}, the uniform boundedness of the approximate correctors $\chi_T$ is not needed for the $L^2$ estimate of the correction term; (ii) the presence of the cut-off function avoids the extra effects of boundary correctors or boundary regularity.
We now state the theorem on the $L^2$ convergence rate in $C^{1,1}$ domains:
\begin{theorem}\label{thm_L2_C11}
Suppose that $\Omega$ is a bounded $C^{1,1}$ domain and $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Let $u_\varepsilon$ be the weak solution of (\ref{def_DP}) or (\ref{def_NP}), and let $u_0\in H^2(\Omega)$ be the weak solution of the homogenized system with the same boundary data. Then
\begin{equation}\label{ineq_L2DN_C11}
\Norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon) \Big\{ \norm{\nabla^2 u_0}_{L^2(\Omega)} + (1+\lambda \omega_{k,\sigma}(\varepsilon))^{1/2} \norm{\nabla u_0}_{L^2(\Omega)} \Big\},
\end{equation}
where $\omega_{k,\sigma}(\varepsilon)$ is defined in (\ref{def_omega}) and $C$ depends only on $\sigma,k, A$ and the Lipschitz character of $\Omega$.
\end{theorem}
This theorem was essentially proved in \cite{ShenZhuge2} for $\lambda = 0$ and the Dirichlet boundary condition. The cases with positive $\lambda$ or the Neumann boundary condition follow from a similar argument. The novelty of the theorem stated above is that we make explicit how the bound depends on $\lambda$ and on the derivatives of $u_0$. We sketch the proof as follows. The main step is to show that for any test function $\varphi\in H^1(\Omega)$,
\begin{equation}\label{ineq_dwdphi}
\begin{aligned}
&\int_{\Omega} A(x/\varepsilon) \nabla w_\varepsilon \cdot \nabla \varphi + \lambda \int_{\Omega} w_\varepsilon \cdot \varphi \\
&\qquad \le C\lambda \omega_{k,\sigma}(\varepsilon) \norm{\varphi}_{L^2(\Omega)} \norm{\nabla u_0}_{L^2(\Omega)} \\
&\qquad \qquad + C\Big\{ \omega_{k,\sigma}(\varepsilon) \norm{\nabla \varphi}_{L^2(\Omega)}+ \omega_{k,\sigma}(\varepsilon)^{1/2} \norm{\nabla \varphi}_{L^2(\Omega_{4\delta})} \Big\} \norm{\nabla u_0}_{H^1(\Omega)}.
\end{aligned}
\end{equation}
The proof of this inequality is very similar to that of \cite[Lemma 10.4]{ShenZhuge2} and will be skipped here. Observe that (\ref{ineq_dwdphi}) gives exactly the $H^1$ convergence rate if we set $\varphi = w_\varepsilon$ and bound $\norm{\nabla \varphi}_{L^2(\Omega_{4\delta})}$ roughly by $\norm{\nabla \varphi}_{L^2(\Omega)}$, i.e.,
\begin{equation}\label{ineq_H1DN_C11}
\begin{aligned}
&\norm{ w_\varepsilon}_{H^1(\Omega)} + (1+\lambda)^{1/2} \norm{w_\varepsilon}_{L^2(\Omega)} \\
&\qquad \qquad \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{ \norm{\nabla^2 u_0}_{L^2(\Omega)} + (1+\lambda \omega_{k,\sigma}(\varepsilon))^{1/2} \norm{\nabla u_0}_{L^2(\Omega)} \Big\}.
\end{aligned}
\end{equation}
We then use a duality argument, combined with (\ref{ineq_dwdphi}) and (\ref{ineq_H1DN_C11}), to improve the $L^2$ rate of convergence. The duality argument, an indispensable part of our framework, was used in \cite{ShenZhuge1}. Finally, we mention that in the periodic case, the result of Theorem \ref{thm_L2_C11} was also proved in \cite{Su1} and \cite{Su2}, without tracking how the constant depends on $\lambda$.
\section{Convergence rates in Lipschitz domains}
Recently, sharp rates of convergence in $C^{1,1}$ domains were obtained for variational elliptic problems with rough periodic coefficients; see \cite{GG,Su1,Su2,ShenZhuge1,Gu} for example. Theorem \ref{thm_L2_C11} gives a nearly sharp rate of convergence for elliptic systems with coefficients that are almost-periodic in the sense of H. Weyl. These results are restricted to $C^{1,1}$ domains and require $u_0 \in H^2(\Omega)$, which is sufficient for the interior Lipschitz estimate as in \cite{ShenZhuge2}. However, they are insufficient for the boundary Lipschitz estimate in the $C^{1,\alpha}$ domains and with the boundary data considered in this paper. In this section, we extend the rate of convergence from $C^{1,1}$ domains to general Lipschitz domains for elliptic systems with almost-periodic coefficients. Our argument follows the same ideas as \cite{Shen1} and relies in particular on the solvability of $L^2$ elliptic boundary value problems with constant coefficients in Lipschitz domains. The main result of this section is as follows:
\begin{theorem}\label{thm_H1_Lip}
Suppose that $\Omega$ is a bounded Lipschitz domain and $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Let $u_\varepsilon$ be the weak solution of (\ref{def_DP}) and $u_0$ be the weak solution of the homogenized system (\ref{def_homo}) with the same boundary data, and let $w_\varepsilon$ be the first-order approximation defined in (\ref{def_1st-appr}). Then
\begin{equation}\label{ineq_H1_Lip}
\norm{ w_\varepsilon}_{H^1(\Omega)} + (1+\lambda)^{1/2} \norm{w_\varepsilon}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\},
\end{equation}
where $T = \varepsilon^{-1}$, $\delta = T^{-1}\Theta_{k,\sigma}(T)$, and $C$ depends only on $\sigma, k, A$ and the Lipschitz character of $\Omega$.
\end{theorem}
Observe that (\ref{ineq_H1_Lip}) is exactly the generalization of (\ref{ineq_H1DN_C11}) to Lipschitz domains, once the energy estimate for $u_0$ is taken into account. However, the proof of (\ref{ineq_H1_Lip}) is more involved, since $\nabla^2 u_0$ may fail to be an $L^2$ function in general. Note also that, as a corollary, Theorem \ref{thm_H1_Lip} provides an $L^2$ rate of convergence
\begin{equation}\label{ineq_L2_Lip}
\norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{(1+\lambda)^{-1/2}\norm{F}_{L^2(\Omega)} + (1+\lambda)^{1/2} \norm{f}_{H^1(\partial\Omega)}\Big\},
\end{equation}
which is clearly far from sharp. However, as far as we know, this is the only estimate we can derive in Lipschitz domains, since the duality argument does not seem applicable in this case. In other words, we cannot improve the convergence rate from $\omega_{k,\sigma}(\varepsilon)^{1/2}$ to $\omega_{k,\sigma}(\varepsilon)$ as we (and many other authors) have done in $C^{1,1}$ domains.
In fact, the optimal rate of convergence in Lipschitz domains is still an open problem even in the periodic case. The best result so far in the periodic setting is contained in \cite{KLS}, where, under an additional symmetry condition on the coefficients, the authors showed that the rate of convergence in $L^2$ is $O(\varepsilon|\ln \varepsilon|^{1/2+})$. For almost-periodic homogenization, very little is known about convergence rates in Lipschitz domains. Nevertheless, estimate (\ref{ineq_L2_Lip}) still allows us to proceed with our work on uniform regularity.
To prove Theorem \ref{thm_H1_Lip}, we will need the following energy estimate.
\begin{theorem}
Assume $A$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $\Omega$ is a bounded Lipschitz domain. Let $u$ be the weak solution of
\begin{equation}
-\text{div}(A \nabla u) +\lambda u= F \quad \text{in } \Omega, \qquad \text{and} \qquad u =f \quad \text{on } \partial\Omega,
\end{equation}
where $\lambda \ge 0$. Then
\begin{equation}\label{ineq_energy}
\norm{\nabla u}_{L^2(\Omega)} + (1+\lambda)^{1/2}\norm{u}_{L^2(\Omega)} \le C (1+\lambda)^{-1/2}\norm{F}_{L^2(\Omega)} + C (1+\lambda)^{1/2}\norm{f}_{H^{1/2}(\partial \Omega)},
\end{equation}
where $C$ depends only on $d,m,\mu$ and $\Omega$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm_H1_Lip}]
A direct algebraic manipulation shows that
\begin{align}\label{eq_Lewe}
\begin{aligned}
\mathcal{L}_\varepsilon(w_\varepsilon) + \lambda w_\varepsilon &= -\lambda \varepsilon \chi_{T,k}^\beta(x/\varepsilon) K_{\varepsilon,\delta} \left(\frac{\partial u_0^\beta}{\partial x_k}\right) + \frac{\partial}{\partial x_i} \left\{ b^{\alpha\beta}_{T,ij}(x/\varepsilon) K_{\varepsilon, \delta} \bigg( \frac{\partial u^\beta_0}{\partial x_j} \bigg) \right\}\\
&\qquad + \frac{\partial}{\partial x_i} \left\{ \left\{\widehat{a}_{ij}^{\alpha\beta} - a^{\alpha\beta}_{ij}(x/\varepsilon) \right\} \left\{ K_{\varepsilon,\delta} \bigg(\frac{\partial u^\beta_0}{\partial x_j}\bigg) - \frac{\partial u^\beta_0}{\partial x_j} \right\}\right\} \\
&\qquad + \varepsilon \frac{\partial}{\partial x_i} \left\{ a^{\alpha\beta}_{ij}(x/\varepsilon) \chi^{\beta\gamma}_{T,k}(x/\varepsilon) \frac{\partial}{\partial x_j}K_{\varepsilon, \delta}\bigg( \frac{\partial u^\gamma_0}{\partial x_k }\bigg) \right\},
\end{aligned}
\end{align}
and
\begin{align}\label{eq_bT}
\begin{aligned}
&\frac{\partial}{\partial x_i} \left\{ b^{\alpha\beta}_{T,ij}(x/\varepsilon) K_{\varepsilon, \delta} \bigg( \frac{\partial u^\beta_0}{\partial x_j} \bigg) \right\} \\
&= \ag{b_{T,ij}^{\alpha\beta}} \frac{\partial}{\partial x_i}K_{\varepsilon, \delta} \bigg( \frac{\partial u_0^\beta}{ \partial x_j} \bigg) + \frac{\partial}{\partial x_i} \left\{ T^{-2} \phi_{T,ij}^{\alpha\beta}(x/\varepsilon) K_{\varepsilon, \delta} \bigg( \frac{\partial u_0^\beta}{\partial x_j}\bigg)\right\} \\
&- \frac{\partial}{\partial x_i} \left\{ \frac{\partial}{\partial x_i} h_{T,j}^{\alpha\beta}(x/\varepsilon) K_{\varepsilon, \delta} \bigg( \frac{\partial u_0^\beta}{\partial x_j}\bigg)\right\} \\
&- \varepsilon \frac{\partial}{\partial x_i} \left\{ \left[ \frac{\partial}{\partial x_k} (\phi_{T,ij}^{\alpha\beta})(x/\varepsilon) - \frac{\partial}{\partial x_i} (\phi_{T,kj}^{\alpha\beta})(x/\varepsilon) \right] \frac{\partial}{\partial x_k}K_{\varepsilon, \delta} \bigg( \frac{\partial u_0^\beta}{\partial x_j } \bigg) \right\},
\end{aligned}
\end{align}
where $b_T$ and $\phi_T$ are defined by (\ref{def_bT}) and (\ref{def_phiT}), respectively. The proof of (\ref{eq_bT}) is based on the following observation, derived from (\ref{def_phiT}):
\begin{equation}\label{eq_bT_phiT}
b_{T,ij}^{\alpha\beta} = \ag{b_{T,ij}^{\alpha\beta}} - \frac{\partial}{\partial y_k} \left( \frac{\partial}{\partial y_k} \phi_{T,ij}^{\alpha\beta} - \frac{\partial}{\partial y_i} \phi_{T,kj}^{\alpha\beta} \right) - \frac{\partial}{\partial y_i}\left( \frac{\partial}{\partial y_k} \phi_{T,kj}^{\alpha\beta} \right) - T^{-2}\phi_{T,ij}^{\alpha\beta},
\end{equation}
as well as the fact that the second term on the right-hand side of (\ref{eq_bT_phiT}) is skew-symmetric with respect to $(i, k)$.
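The skew-symmetry is exploited through the elementary identity
\begin{equation*}
\frac{\partial^2}{\partial x_i \partial x_k} \big( g_{ik}\, w \big) = 0 \qquad \text{whenever } g_{ik} = -g_{ki},
\end{equation*}
valid for any smooth $w$ (with summation over repeated indices), which follows by relabeling the summation indices $i \leftrightarrow k$.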
Multiplying (\ref{eq_Lewe}) by $w_\varepsilon$ and integrating over $\Omega$, we arrive at
\begin{equation}\label{ineq_dwdw}
\begin{aligned}
&\int_{\Omega} A(x/\varepsilon) \nabla w_\varepsilon \cdot \nabla w_\varepsilon + \lambda \int_{\Omega} w_\varepsilon \cdot w_\varepsilon \\
& \qquad \le C\lambda T^{-1}\Theta_{k,\sigma}(T) \norm{w_\varepsilon}_{L^2(\Omega)} \norm{K_{\varepsilon,\delta}(\nabla u_0)}_{L^2(\Omega)}\\
& \qquad \qquad + C \norm{\nabla w_\varepsilon}_{L^2(\Omega)} \Big\{ \norm{K_{\varepsilon,\delta}(\nabla u_0) - \nabla u_0}_{L^2(\Omega)} + T^{-1}\Theta_{k,\sigma}(T) \norm{\nabla K_{\varepsilon,\delta}(\nabla u_0) }_{L^2(\Omega)} \\
& \qquad \qquad + |\ag{b_T}| \norm{K_{\varepsilon,\delta}(\nabla u_0)}_{L^2(\Omega)} + T^{-1}\Theta_{k,\sigma}(T) \norm{K_{\varepsilon,\delta}(\nabla u_0)}_{L^2(\Omega)} \Big\},
\end{aligned}
\end{equation}
where we also used (\ref{eq_bT}), Theorems \ref{thm_chiT_1} and \ref{thm_dualcor}, and integration by parts. Note that no boundary integral arises, since $w_\varepsilon \in H_0^1(\Omega)$.
By the definition of $b_T$, we see that $|\ag{b_T}| \le C \norm{\nabla \chi_T - \psi}_{B^2}$ where $\psi$ is defined in (\ref{def_Vpot}).
Thus it suffices to estimate $\norm{K_{\varepsilon,\delta}(\nabla u_0)}_{L^2(\Omega)}, \norm{K_{\varepsilon,\delta}(\nabla u_0) - \nabla u_0}_{L^2(\Omega)}$ and $\norm{\nabla K_{\varepsilon,\delta}(\nabla u_0) }_{L^2(\Omega)}$.
First of all, by (\ref{ineq_S_p}) and the energy estimate (\ref{ineq_energy}), one has
\begin{align}\label{ineq_Kdu0}
\begin{aligned}
\norm{K_{\varepsilon,\delta}(\nabla u_0)}_{L^2(\Omega)} &\le \norm{\nabla u_0}_{L^2(\Omega)} \\
&\le \frac{C}{(1+\lambda)^{1/2}}\norm{F}_{L^2(\Omega)} + C (1+\lambda)^{1/2}\norm{f}_{H^{1/2}(\partial \Omega)}.
\end{aligned}
\end{align}
Next, observe that
\begin{align*}
&\norm{K_{\varepsilon,\delta}(\nabla u_0) - \nabla u_0}_{L^2(\Omega)} \\
&\qquad \le \norm{S_{\varepsilon}(\eta_\delta \nabla u_0) - \eta_\delta \nabla u_0}_{L^2(\Omega)} + \norm{\nabla u_0}_{L^2(\Omega_{4\delta})} \\
&\qquad \le C \varepsilon \norm{\nabla^2 u_0}_{L^2(\Omega\setminus \Omega_\delta)} + C(1+\varepsilon \delta^{-1}) \norm{\nabla u_0}_{L^2(\Omega_{4\delta})},
\end{align*}
and
\begin{equation*}
\norm{\nabla K_{\varepsilon,\delta}(\nabla u_0) }_{L^2(\Omega)} \le C \norm{\nabla^2 u_0}_{L^2(\Omega\setminus \Omega_\delta)} + C\delta^{-1} \norm{\nabla u_0}_{L^2(\Omega_{4\delta})}.
\end{equation*}
Therefore, it remains to estimate $\norm{\nabla^2 u_0}_{L^2(\Omega\setminus \Omega_\delta)} $ and $\norm{\nabla u_0}_{L^2(\Omega_{4\delta})}$.
To estimate $\norm{\nabla u_0}_{L^2(\Omega_{4\delta})}$, we write $u_0 = v + h$, where
\begin{equation*}
v(x) = \int_{\Omega} \Gamma_0(x-y) (-\lambda u_0 + F(y))dy,
\end{equation*}
and $\Gamma_0$ denotes the matrix of fundamental solutions for the homogenized operator $\mathcal{L}_0$ in $\mathbb{R}^d$, with pole at the origin. Note that $\mathcal{L}_0 v= -\lambda u_0 + F$, and then by the well-known singular integral and fractional integral estimates,
\begin{align*}
\norm{ v}_{H^2(\Omega)} &\le C\norm{F}_{L^2(\Omega)} + C\lambda \norm{u_0}_{L^2(\Omega)} \\
& \le C(1+\lambda) \norm{f}_{H^1(\partial\Omega)} + C\norm{F}_{L^2(\Omega)}.
\end{align*}
Thus, by Lemma \ref{lem_Oeu_g},
\begin{equation}\label{ineq_dv_delta}
\norm{\nabla v}_{L^2(\Omega_{4\delta})} \le C\delta^{1/2} \norm{\nabla v}_{H^1(\Omega)} \le C\delta^{1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation}
Next, we observe that $\mathcal{L}_0 h = 0$ in $\Omega$. Then
\begin{align*}
\norm{h}_{H^1(\partial\Omega)} &\le \norm{f}_{H^1(\partial\Omega)} + \norm{v}_{H^1(\partial\Omega)} \\
&\le \norm{f}_{H^1(\partial\Omega)} + C\norm{v}_{H^2(\Omega)} \\
& \le C(1+\lambda) \norm{f}_{H^1(\partial\Omega)} + C\norm{F}_{L^2(\Omega)}.
\end{align*}
Hence, it follows from the estimates for solutions of the $L^2$ regularity problem in Lipschitz domains for the operator $\mathcal{L}_0$ in \cite{DKV,Gwj} that
\begin{equation*}
\norm{(\nabla h)^*}_{L^2(\partial\Omega)} \le C(1+\lambda) \norm{f}_{H^1(\partial\Omega)} + C\norm{F}_{L^2(\Omega)},
\end{equation*}
where $(\nabla h)^*$ denotes the non-tangential maximal function of $\nabla h$. This, together with (\ref{ineq_dv_delta}), gives
\begin{equation}\label{ineq_u0_delta}
\norm{\nabla u_0}_{L^2(\Omega_{4\delta})} \le C\delta^{1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation}
It remains to estimate $\norm{\nabla^2 u_0}_{L^2(\Omega\setminus \Omega_\delta)}$. Note that the interior estimate for $\mathcal{L}_0$ gives
\begin{equation*}
|\nabla^2 h(x)| \le \frac{C}{d(x)} \left( \fint_{B(x,\, d(x)/4)} |\nabla h|^2 \right)^{1/2},
\end{equation*}
where $d(x) = \text{dist}(x,\partial\Omega)$. Hence,
\begin{equation*}
\norm{\nabla^2 h}_{L^2(\Omega\setminus \Omega_\delta)} \le C\delta^{-1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation*}
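For the reader's convenience, we indicate how the pointwise interior estimate yields this bound; this is a standard argument, integrating over dyadic layers. Writing $\Sigma_j = \{x\in\Omega : 2^j\delta \le d(x) < 2^{j+1}\delta\}$ and using the non-tangential maximal function to control the averages of $|\nabla h|^2$ on each layer, one obtains
\begin{align*}
\int_{\Omega\setminus \Omega_\delta} |\nabla^2 h|^2 &\le C\sum_{j\ge 0} (2^j \delta)^{-2} \int_{\Sigma_j} \fint_{B(x,\, d(x)/4)} |\nabla h|^2 \, dx \\
&\le C\sum_{j\ge 0} (2^j \delta)^{-1} \norm{(\nabla h)^*}_{L^2(\partial\Omega)}^2 \le C\delta^{-1} \norm{(\nabla h)^*}_{L^2(\partial\Omega)}^2.
\end{align*}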
Combining this with the estimate for $\nabla^2 v$, one obtains
\begin{equation*}
\norm{\nabla^2 u_0}_{L^2(\Omega\setminus \Omega_\delta)} \le C\delta^{-1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation*}
As a result, we have shown
\begin{equation}\label{ineq_Kdu0_du0}
\norm{K_{\varepsilon,\delta}(\nabla u_0) - \nabla u_0}_{L^2(\Omega)} \le C(\varepsilon\delta^{-1/2} + \delta^{1/2}) \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\},
\end{equation}
and
\begin{equation}\label{ineq_dKdu0}
\norm{\nabla K_{\varepsilon,\delta}(\nabla u_0) }_{L^2(\Omega)} \le C\delta^{-1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation}
Now let $\delta = T^{-1}\Theta_{k,\sigma}(T)$ with $T = \varepsilon^{-1}$, as in the statement of the theorem. It follows from (\ref{ineq_dwdw}), (\ref{ineq_Kdu0}), (\ref{ineq_Kdu0_du0}) and (\ref{ineq_dKdu0}) that
\begin{align*}
&\int_{\Omega} A(x/\varepsilon) \nabla w_\varepsilon \cdot \nabla w_\varepsilon + \lambda \int_{\Omega} w_\varepsilon \cdot w_\varepsilon \\
&\quad \le C\big[ T^{-1}\Theta_{k,\sigma}(T) \big]^{1/2} \Big\{\norm{\nabla w_\varepsilon}_{L^2(\Omega)} + \lambda^{1/2} \norm{w_\varepsilon}_{L^2(\Omega)}\Big\} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}\\
&\quad \quad + C(1+\lambda)^{-1/2}\norm{\nabla \chi_T - \psi}_{B^2} \norm{\nabla w_\varepsilon}_{L^2(\Omega)} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{align*}
It then follows that
\begin{equation}
\norm{\nabla w_\varepsilon}_{L^2(\Omega)} + \lambda^{1/2} \norm{w_\varepsilon}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\}.
\end{equation}
Finally, since $w_\varepsilon \in H_0^1(\Omega)$, it follows from the Poincar\'{e} inequality that
\begin{equation}
\norm{ w_\varepsilon}_{H^1(\Omega)} + (1+\lambda)^{1/2} \norm{w_\varepsilon}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^1(\partial\Omega)}\Big\},
\end{equation}
which completes the proof.
\end{proof}
\begin{remark}
A similar result holds for the Neumann problem. Precisely, let $\Omega$ and $A$ be the same as in Theorem \ref{thm_H1_Lip}, and let $u_\varepsilon, u_0$ be the weak solutions of (\ref{def_NP}) and of the corresponding homogenized system with the same data, respectively. Then
\begin{equation}\label{ineq_L2N_Lip}
\norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon)^{1/2} \Big\{(1+\lambda)^{-1/2}\norm{F}_{L^2(\Omega)} + (1+\lambda)^{1/2} \norm{g}_{L^2(\partial\Omega)}\Big\}.
\end{equation}
\end{remark}
Although most of this section has focused on Lipschitz domains with $H^1$ Dirichlet boundary data, we are sometimes also interested in $H^s$ boundary data with $s\neq 1$. The next theorem concerns the rate of convergence in $C^{1,1}$ domains with $H^s(\partial\Omega)$ Dirichlet boundary data, where $1/2\le s\le 3/2$. It is of independent interest, though it will not be used in this paper. Before stating the theorem, we recall that Theorem \ref{thm_L2_C11} actually shows that if $\Omega$ is a $C^{1,1}$ domain, then
\begin{equation}\label{ineq_L2_C11}
\norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon) \bigg\{\norm{F}_{L^2(\Omega)} + (1+\lambda) \norm{f}_{H^{3/2}(\partial\Omega)}\bigg\}.
\end{equation}
This follows from (\ref{ineq_L2DN_C11}) and the energy estimate.
\begin{theorem}\label{thm_L2_C11H1}
Suppose that $\Omega$ is a bounded $C^{1,1}$ domain and $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Let $u_\varepsilon$ be the weak solution of (\ref{def_DP}) and $u_0$ be the weak solution of the homogenized system with the same data $(F,f)$. Then for every $s\in [1/2,3/2]$,
\begin{equation}
\Norm{u_\varepsilon - u_0}_{L^2(\Omega)} \le C\omega_{k,\sigma}(\varepsilon) \norm{F}_{L^2(\Omega)} + C\big\{\omega_{k,\sigma}(\varepsilon)(1+\lambda)\big\}^{s-1/2}\norm{f}_{H^s(\partial\Omega)},
\end{equation}
where $C$ depends only on $d,m,\sigma, A$ and $\Omega$.
\end{theorem}
\begin{proof}
Since $f\in H^s(\partial\Omega)$ with $s\ge 1/2$ and $\Omega$ is $C^{1,1}$, there exists an extension operator $E$ such that $Ef\in H^{s+1/2}(\mathbb{R}^d)$, $\text{Tr} (Ef) = f$ on $\partial\Omega$ and $\norm{Ef}_{H^{s+1/2}(\mathbb{R}^d)} \le C\norm{f}_{H^s(\partial\Omega)}$, where $C$ depends only on $d$ and $\Omega$. Denote $Ef$ by $\tilde{f}$. Let $\phi\in C^\infty_0(B_1(0))$ with $\int \phi = 1$, and set $\phi_\delta(x) = \delta^{-d} \phi(x/\delta)$, where $\delta>0$ is to be determined. Set $\tilde{f}_\delta = \phi_\delta * \tilde{f}$. Clearly, $\tilde{f}_\delta$ is smooth. We claim that
\begin{equation}\label{ineq_Hs_Fourier}
\norm{\tilde{f}_\delta}_{H^2(\mathbb{R}^d)} \le C\delta^{s-3/2} \norm{\tilde{f}}_{H^{s+1/2}(\mathbb{R}^d)} \le C\delta^{s-3/2} \norm{f}_{H^s(\partial\Omega)}.
\end{equation}
Indeed, this is a standard exercise using the equivalent $H^s$ norm defined via the Fourier transform, i.e.,
\begin{equation*}
\norm{g}_{H^s(\mathbb{R}^d)}^2 = \int_{\mathbb{R}^d} (1+|\xi|^2)^{s} |\mathcal{F} g(\xi)|^2 d\xi.
\end{equation*}
The details are left to the reader. Now let $f_\delta = \text{Tr}\, \tilde{f}_\delta$. By the trace theorem and (\ref{ineq_Hs_Fourier}), we have $\norm{f_\delta}_{H^{3/2}(\partial\Omega)} \le C\delta^{s-3/2} \norm{f}_{H^s(\partial\Omega)}$.
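We remark that the key step in this exercise is the pointwise bound: since $\mathcal{F}\tilde{f}_\delta(\xi) = \mathcal{F}\phi(\delta \xi)\, \mathcal{F}\tilde{f}(\xi)$, the first inequality in (\ref{ineq_Hs_Fourier}) reduces to
\begin{equation*}
(1+|\xi|^2)^{2}\, |\mathcal{F}\phi(\delta\xi)|^2 \le C \delta^{2s-3} (1+|\xi|^2)^{s+1/2}, \qquad \xi \in \mathbb{R}^d,\ 0<\delta \le 1,
\end{equation*}
which follows from the rapid decay $|\mathcal{F}\phi(\eta)| \le C_N(1+|\eta|)^{-N}$ by considering the cases $|\xi| \le \delta^{-1}$ and $|\xi| > \delta^{-1}$ separately, using $1/2 \le s\le 3/2$.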
Next, we consider the following Dirichlet problem:
\begin{equation*}
\mathcal{L}_\varepsilon(\tilde{u}_\varepsilon) + \lambda \tilde{u}_\varepsilon = F \quad \text{in } \Omega, \qquad \text{and} \qquad \tilde{u}_\varepsilon =f_\delta \quad \text{on } \partial\Omega.
\end{equation*}
Also, the corresponding homogenized problem is:
\begin{equation*}
\mathcal{L}_0(\tilde{u}_0) + \lambda \tilde{u}_0 = F \quad \text{in } \Omega, \qquad \text{and} \qquad \tilde{u}_0 =f_\delta \quad \text{on } \partial\Omega.
\end{equation*}
Since $\Omega$ is $C^{1,1}$ and $f_\delta \in H^{3/2}(\partial\Omega)$, it follows from (\ref{ineq_L2_C11}) that
\begin{align*}
\norm{\tilde{u}_\varepsilon - \tilde{u}_0}_{L^2(\Omega)} & \le C\omega_{k,\sigma}(\varepsilon) \big\{ \norm{F}_{L^2(\Omega)} + (1+\lambda)\norm{f_\delta}_{H^{3/2}(\partial\Omega)} \big\} \\
& \le C\omega_{k,\sigma}(\varepsilon) \big\{ \norm{F}_{L^2(\Omega)} + \delta^{s-3/2}(1+\lambda) \norm{f}_{H^{s}(\partial\Omega)} \big\}.
\end{align*}
On the other hand, $v_\varepsilon = u_\varepsilon - \tilde{u}_\varepsilon$ satisfies
\begin{equation*}
\mathcal{L}_\varepsilon(v_\varepsilon) + \lambda v_\varepsilon = 0 \quad \text{in } \Omega, \qquad \text{and} \qquad v_\varepsilon =f - f_\delta \quad \text{on } \partial\Omega.
\end{equation*}
Then it follows from the energy estimate and the trace theorem that
\begin{equation*}
\norm{u_\varepsilon - \tilde{u}_\varepsilon}_{L^2(\Omega)} \le C \norm{f -f_\delta}_{H^{1/2}(\partial\Omega)} \le C\norm{\tilde{f} - \tilde{f}_\delta}_{H^1(\mathbb{R}^d)} \le C \delta^{s-1/2} \norm{f}_{H^s(\partial\Omega)},
\end{equation*}
where we have used the fact that $s\ge 1/2$. Similarly, we also have
\begin{equation*}
\norm{u_0 - \tilde{u}_0}_{L^2(\Omega)} \le C\delta^{s-1/2} \norm{f}_{H^s(\partial\Omega)}.
\end{equation*}
As a consequence,
\begin{align*}
&\norm{u_\varepsilon - u_0}_{L^2(\Omega)} \\
&\qquad \le \norm{u_\varepsilon - \tilde{u}_\varepsilon}_{L^2(\Omega)} + \norm{\tilde{u}_\varepsilon - \tilde{u}_0}_{L^2(\Omega)} + \norm{u_0 - \tilde{u}_0}_{L^2(\Omega)} \\
&\qquad \le C \omega_{k,\sigma}(\varepsilon) \norm{F}_{L^2(\Omega)} + C\big\{ \delta^{s-1/2} + \delta^{s-3/2} \omega_{k,\sigma}(\varepsilon) (1+\lambda) \big\} \norm{f}_{H^s(\partial\Omega)} \\
&\qquad \le C \omega_{k,\sigma}(\varepsilon) \norm{F}_{L^2(\Omega)} + C \big\{\omega_{k,\sigma}(\varepsilon)(1+\lambda)\big\}^{s-1/2} \norm{f}_{H^s(\partial\Omega)},
\end{align*}
where in the last inequality we have chosen $\delta = \omega_{k,\sigma}(\varepsilon)(1+\lambda)$.
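We note that this choice of $\delta$ exactly balances the two terms involving $\norm{f}_{H^s(\partial\Omega)}$: with $\delta = \omega_{k,\sigma}(\varepsilon)(1+\lambda)$,
\begin{equation*}
\delta^{s-1/2} = \delta^{s-3/2}\, \omega_{k,\sigma}(\varepsilon)(1+\lambda) = \big\{\omega_{k,\sigma}(\varepsilon)(1+\lambda)\big\}^{s-1/2}.
\end{equation*}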
\end{proof}
\section{Boundary Lipschitz estimate}
In this section, we study the uniform boundary Lipschitz estimates down to the scale $\varepsilon$ in $C^{1,\alpha}$ domains with Dirichlet or Neumann boundary conditions. The Dirichlet and Neumann cases are treated in two separate subsections. We modify the argument in \cite{Shen1} to adapt it to general $\lambda > 0$.
Let $D_r, \Delta_r$ be defined as in (\ref{def_C1a}). Note that $D_r$ plays the role of a subset of $\Omega$ that shares the boundary portion $\Delta_r$ with $\partial\Omega$. Therefore, to establish the boundary estimates, it suffices to consider boundary value problems in $D_r$. Throughout this section, $\alpha \in (0,1)$ is fixed and $\lambda$ is restricted to $[0,1]$, so that it has essentially no influence on our proofs and results. For the case $\lambda >1$, we can use the rescaling $v_\varepsilon(x) = \lambda u_\varepsilon (\lambda^{-1/2}x)$ to reduce to the case $\lambda = 1$; in this case, however, the constant will also depend on $\lambda$.
\subsection{Dirichlet boundary value problems}
Throughout this subsection, we let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon = F$ in $D_2$ with $u_\varepsilon = f$ on $\Delta_2$. Define the following auxiliary quantities, adapted to nonzero $\lambda$:
\begin{equation}\label{def_Phi}
\begin{aligned}
\Phi(t) &= \frac{1}{t} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_t} |u_\varepsilon - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p} \\
&\qquad + t^2\lambda |q| + \norm{f -q}_{L^\infty(\Delta_{t})} + t \norm{\nabla_{\tan} f}_{L^\infty(\Delta_t)}\Bigg\},
\end{aligned}
\end{equation}
and
\begin{equation}\label{def_H}
\begin{aligned}
H(t;u) & = \frac{1}{t} \inf_{\substack{P\in \mathbb{R}^{d\times d} \\q\in \mathbb{R}^d}} \Bigg\{ \left( \fint_{D_t} |u -Px - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p} \\
&\qquad + t^2 \lambda \norm{Px+q}_{L^\infty(D_t)} + \norm{f -Px-q}_{L^\infty(\Delta_{t})} \\
&\qquad + t \norm{\nabla_{\tan} (f - Px) }_{L^\infty(\Delta_t)} + t^{1+\tau} \norm{\nabla_{\tan} (f - Px) }_{C^\tau(\Delta_t)} \Bigg\},
\end{aligned}
\end{equation}
where $p>d$ and $\tau \in (0,\alpha)$.
\begin{lemma}\label{lem_ue_v}
Let $\varepsilon \le r\le 1$. There exists $v\in H^1(D_r;\mathbb{R}^d)$ such that $\mathcal{L}_0 (v) + \lambda v = F$ in $D_r$, $v =f$ on $\Delta_r$ and
\begin{equation}\label{ineq_ue_flat}
\frac{1}{r} \left( \fint_{D_r} |u_\varepsilon - v|^2\right)^{1/2} \le C[\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi(2r),
\end{equation}
where $C$ depends only on $A,\sigma$ and $M$.
\end{lemma}
\begin{proof}
By rescaling, it suffices to prove (\ref{ineq_ue_flat}) for $r = 1$. First, by Caccioppoli's inequality,
\begin{equation*}
\int_{D_{3/2}} |\nabla u_\varepsilon|^2 \le C \left\{ \int_{D_{2}} |u_\varepsilon|^2 + \int_{D_{2}} |F|^2 + \norm{f}^2_{L^\infty(\Delta_2)} + \norm{\nabla_{\tan} f}^2_{L^\infty(\Delta_2)}\right\}.
\end{equation*}
By the co-area formula, this implies that there exists some $t\in [5/4,3/2]$ such that
\begin{align*}
&\int_{\partial D_t\setminus \Delta_2} \left( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \right) \\
& \qquad \le C \left\{ \int_{D_{2}} |u_\varepsilon|^2 + \int_{D_{2}} |F|^2 + \norm{f}^2_{L^\infty(\Delta_2)} + \norm{\nabla_{\tan} f}^2_{L^\infty(\Delta_2)}\right\}.
\end{align*}
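The selection of $t$ here is a standard Chebyshev-type argument based on the co-area formula: since
\begin{equation*}
\int_{5/4}^{3/2} \left( \int_{\partial D_t\setminus \Delta_2} \left( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \right) \right) dt \le C \int_{D_{3/2}} \left( |\nabla u_\varepsilon|^2 + |u_\varepsilon|^2 \right),
\end{equation*}
there exists some $t\in [5/4,3/2]$ at which the inner integral is bounded by $C$ times the right-hand side.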
Let $v$ be the weak solution to the Dirichlet problem:
\begin{equation}
\mathcal{L}_0(v) + \lambda v = F \quad \text{in } D_t \quad \text{and} \quad v = u_\varepsilon \quad \text{on } \partial D_t.
\end{equation}
Note that $v = f$ on $\Delta_1$. Then it follows from (\ref{ineq_L2_Lip}) that
\begin{align*}
&\norm{u_\varepsilon - v}_{L^2(D_1)} \le \norm{u_\varepsilon - v}_{L^2(D_t)} \\
&\qquad \le C [\omega_{k,\sigma}(\varepsilon)]^{1/2} \left\{ \norm{F}_{L^2(D_t)} + \norm{u_\varepsilon}_{H^1(\partial D_t)} \right\}\\
&\qquad \le C [\omega_{k,\sigma}(\varepsilon)]^{1/2} \left\{ \norm{u_\varepsilon}_{L^2(D_2)} + \norm{F}_{L^2(D_2)} + \norm{f}_{L^\infty(\Delta_2)} + \norm{\nabla_{\tan} f}_{L^\infty(\Delta_2)} \right\}.
\end{align*}
This implies
\begin{align*}
\left( \fint_{D_1} |u_\varepsilon - v|^2\right)^{1/2} &\le C[\omega_{k,\sigma}(\varepsilon)]^{1/2} \Bigg\{ \left( \fint_{D_2} |u_\varepsilon|^2 \right)^{1/2} + \left( \fint_{D_2} |F|^p \right)^{1/p} \\
&\qquad + \norm{f}_{L^\infty(\Delta_{2})} + \norm{\nabla_{\tan} f}_{L^\infty(\Delta_2)}\Bigg\}.
\end{align*}
Finally, observe that the last inequality still holds if we subtract a constant $q\in \mathbb{R}^d$ simultaneously from $u_\varepsilon$, $v$ and $f$. This gives the desired estimate with $r =1$ upon taking the infimum over all $q\in \mathbb{R}^d$.
\end{proof}
\begin{lemma}[Flatness property for $\mathcal{L}_0$]\label{lem_Htheta}
Let $v \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_0(v) + \lambda v= F$ in $D_2$ with $ v = f$ on $\Delta_2$. Then there exists $\theta\in (0,1/4)$, depending only on $p,A,\tau,\alpha$ and $M$, such that
\begin{equation}
H(\theta r; v) \le \frac{1}{2} H(r; v).
\end{equation}
\end{lemma}
\begin{proof}
This lemma is similar to \cite[Theorem 7.1]{AS}, and it follows from the boundary $C^{1,\alpha}$ estimate for second-order elliptic systems with constant coefficients. By rescaling, we may assume $r = 1$. By choosing $q = v(0)$ and $P = \nabla v(0)$, we see that
\begin{equation*}
H(\theta;v) \le C \theta^\beta \norm{v}_{C^{1,\beta}(D_\theta)} + C\theta^\beta \left(\fint_{D_1} |F|^p \right)^{1/p},
\end{equation*}
where $\beta = \min\{\tau, (p-d)/p\}$. By the boundary $C^{1,\alpha}$ estimate, we obtain
\begin{align*}
\norm{v}_{C^{1,\beta}(D_\theta)} \le & C \bigg \{ \left( \fint_{D_1} |v|^2 \right)^{1/2} + \left(\fint_{D_1} |F|^p \right)^{1/p} \\
& \qquad + \norm{f}_{L^\infty(\Delta_1)} + \norm{\nabla_{\tan} f}_{L^\infty(\Delta_1) } + \norm{\nabla_{\tan} f}_{C^\beta (\Delta_1) } \bigg\}.
\end{align*}
Then it follows that
\begin{align}
\begin{aligned}\label{ineq_H_thev}
H(\theta;v) \le & C \theta^\beta \bigg \{ \left( \fint_{D_1} |v|^2 \right)^{1/2} + \left(\fint_{D_1} |F|^p \right)^{1/p} \\
& \qquad + \norm{f}_{L^\infty(\Delta_1)} + \norm{\nabla_{\tan} f}_{L^\infty(\Delta_1) } + \norm{\nabla_{\tan} f}_{C^\beta (\Delta_1) } \bigg\}.
\end{aligned}
\end{align}
Now note that $\mathcal{L}_0(Px+q) = 0$ for any $P\in \mathbb{R}^{d\times d}, q\in \mathbb{R}^d$. Let $w = v -Px-q$, then
\begin{equation}
\mathcal{L}_0(w)+\lambda w = F - \lambda(Px+q),
\end{equation}
with Dirichlet boundary data $w = f-Px-q$ on $\Delta_2$. Applying (\ref{ineq_H_thev}) to $w$, we arrive at
\begin{align}
\begin{aligned}\label{ineq_H_thew}
H(\theta;w) \le & C \theta^\beta \bigg \{ \left( \fint_{D_1} |v-Px-q|^2 \right)^{1/2} + \left(\fint_{D_1} |F|^p \right)^{1/p} \\
& \qquad + \lambda \norm{Px+q}_{L^\infty(D_1)} + \norm{f-Px-q}_{L^\infty(\Delta_1)} \\
& \qquad + \norm{\nabla_{\tan} (f-Px)}_{L^\infty(\Delta_1) } + \norm{\nabla_{\tan} (f-Px)}_{C^\beta (\Delta_1) } \bigg\}.
\end{aligned}
\end{align}
Also, it follows from triangle inequality that
\begin{equation}\label{ineq_HvHw}
H(\theta;v) \le H(\theta;w) + \theta \lambda \norm{Px+q}_{L^\infty(D_1)}.
\end{equation}
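Roughly speaking, (\ref{ineq_HvHw}) holds because the competitors in the infima defining $H(\theta;v)$ and $H(\theta;w)$ can be matched through the shift $(P',q')\mapsto (P'+P, q'+q)$; after this matching, the only discrepancy comes from the source term $F - \lambda(Px+q)$ in the equation for $w$, which contributes at most
\begin{equation*}
\frac{1}{\theta}\cdot \theta^2 \lambda \norm{Px+q}_{L^\infty(D_\theta)} \le \theta \lambda \norm{Px+q}_{L^\infty(D_1)}.
\end{equation*}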
Combining (\ref{ineq_H_thew}) and (\ref{ineq_HvHw}) and taking the infimum over all $P\in \mathbb{R}^{d\times d}, q\in \mathbb{R}^d$, we obtain
\begin{equation}
H(\theta;v) \le C\theta^\beta H(1;v).
\end{equation}
The desired estimate follows by fixing a $\theta \in (0,1/4)$ so small that $C\theta^\beta \le 1/2$.
\end{proof}
\begin{lemma}[Flatness property for $\mathcal{L}_\varepsilon$]\label{lem_HPhi}
Let $0<\varepsilon<1/2$. Then there exists a $\theta\in (0,1/4)$ such that for any $r\in [\varepsilon,1/2]$,
\begin{equation}
H(\theta r; u_\varepsilon) \le \frac{1}{2} H(r;u_\varepsilon) + C [\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi(2r),
\end{equation}
where $C$ depends only on $p,A,\alpha,\tau,\sigma$ and $M$.
\end{lemma}
\begin{proof}
Fix $r\in [\varepsilon,1/2]$. Let $v$ be the solution of $\mathcal{L}_0(v) + \lambda v= F$ in $D_r$ with $v = f$ on $\Delta_r$, given by Lemma \ref{lem_ue_v}. Observe that
\begin{align*}
H(\theta r; u_\varepsilon) &\le H(\theta r; v) + \frac{1}{\theta r}\left( \fint_{D_{\theta r}} |u_\varepsilon - v|^2 \right)^{1/2} \\
&\le \frac{1}{2} H(r; v) + \frac{1}{\theta r} \left( \fint_{D_{\theta r}} |u_\varepsilon - v|^2 \right)^{1/2}\\
& \le \frac{1}{2} H(r; v) + \frac{C}{ r} \left( \fint_{D_{r}} |u_\varepsilon - v|^2 \right)^{1/2} \\
& \le \frac{1}{2} H(r; u_\varepsilon) + \frac{C}{ r} \left( \fint_{D_{r}} |u_\varepsilon - v|^2 \right)^{1/2} \\
&\le \frac{1}{2} H(r; u_\varepsilon) + C[\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi(2r),
\end{align*}
where we have used Lemma \ref{lem_Htheta} for the second inequality, the bound $H(r;v)\le H(r;u_\varepsilon) + \frac{1}{r} \left( \fint_{D_r} |u_\varepsilon - v|^2\right)^{1/2}$ for the third, and Lemma \ref{lem_ue_v} for the last inequality.
\end{proof}
\begin{lemma}\label{lem_iter}
Let $H(t)$ and $h(t)$ be two nonnegative continuous functions on the interval $(0,1]$. Suppose that there exists a constant $C_0$ such that
\begin{equation}\label{ineq_iter_c1}
\max_{r\le t\le 2r} H(t) + \max_{r\le t,s \le 2r} |h(t) - h(s)| \le C_0 H(2r)
\end{equation}
for any $r\in [\varepsilon ,1/2]$. We further assume that
\begin{equation}\label{ineq_iter_c2}
H(\theta r) \le \frac{1}{2} H(r) + C_0 \eta(\varepsilon/r) \left\{H(2r) + h(2r)\right\},
\end{equation}
for any $r\in [\varepsilon,1/2]$, where $\theta\in (0,1/4)$ and $\eta$ is a nonnegative increasing function on $[0,1]$ satisfying $\eta(0) = 0$ and
\begin{equation}\label{ineq_iter_c3}
\int_0^1 \frac{\eta(t)}{t} dt < \infty.
\end{equation}
Then
\begin{equation}
\max_{\varepsilon \le r\le 1} \left\{ H(r) + h(r) \right\} \le C \left\{ H(1) + h(1) \right\},
\end{equation}
where $C$ depends only on $C_0,\theta$ and $\eta$.
\end{lemma}
This lemma was proved in \cite[Lemma 8.5]{Shen1}, where the Dini-type condition (\ref{ineq_Dini_Intro}) is involved.
\begin{proof}[Proof of Theorem \ref{thm_Lip_DP}]
We assume that $0<\varepsilon < 1/4$ and let $u_\varepsilon$ be defined on $D_2$ as before. For $r\in (0,1)$ and $t\in (r,2r)$, it is easy to see that $H(t;u_\varepsilon) \le C H(2r;u_\varepsilon)$.
Next, we let $h(r) = |P_r|$, where $P_r$ is the $d\times d$ matrix such that
\begin{equation}
\begin{aligned}
H(r;u_\varepsilon) & = \frac{1}{r} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_r} |u_\varepsilon -P_r x - q|^2 \right)^{1/2} + r^2 \left( \fint_{D_r} |F|^p \right)^{1/p} \\
&\qquad + r^2 \lambda \norm{P_r x+ q}_{L^\infty(D_{r})} + \norm{f -P_r x-q}_{L^\infty(\Delta_{r})} \\
&\qquad + r \norm{\nabla_{\tan} (f - P_r x) }_{L^\infty(\Delta_r)} + r^{1+\tau} \norm{\nabla_{\tan} (f - P_r x) }_{C^\tau(\Delta_r)} \Bigg\}.
\end{aligned}
\end{equation}
Let $t,s\in [r,2r]$. Note that
\begin{align*}
|P_t - P_s| &\le \frac{C}{r} \inf_{q\in \mathbb{R}^d} \left( \fint_{D_r} |(P_t-P_s)x - q|^2 \right)^{1/2} \\
& \le \frac{C}{t} \inf_{q\in \mathbb{R}^d} \left( \fint_{D_t} |u_\varepsilon - P_tx - q|^2 \right)^{1/2} + \frac{C}{s} \inf_{q\in \mathbb{R}^d} \left( \fint_{D_s} |u_\varepsilon - P_sx - q|^2 \right)^{1/2} \\
& \le CH(t;u_\varepsilon) + CH(s;u_\varepsilon)\\
& \le CH(2r;u_\varepsilon).
\end{align*}
Thus, we obtain
\begin{equation}
\max_{r\le t,s \le 2r} |h(t) - h(s)| \le C H(2r;u_\varepsilon).
\end{equation}
Furthermore, by the definition of $\Phi$ and $H$,
\begin{equation}
\Phi(2r) \le H(2r;u_\varepsilon) + h(2r).
\end{equation}
In view of Lemma \ref{lem_HPhi}, we have
\begin{equation}
H(\theta r; u_\varepsilon) \le \frac{1}{2} H(r;u_\varepsilon) + C [\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \{ H(2r;u_\varepsilon) + h(2r) \},
\end{equation}
for all $r\in [\varepsilon ,1/2]$. Note that the functions $H(r) = H(r;u_\varepsilon)$ and $h(r)$ satisfy the conditions (\ref{ineq_iter_c1}), (\ref{ineq_iter_c2}) and (\ref{ineq_iter_c3}). Then, by Lemma \ref{lem_iter}, we obtain that for all $r \in [\varepsilon,1/2]$,
\begin{align*}
\inf_{q\in \mathbb{R}^d} \frac{1}{r} \left( \fint_{D_r} |u_\varepsilon - q|^2 \right)^{1/2} &\le C\{ H(r;u_\varepsilon) + h(r) \} \\
& \le C\{ H(1;u_\varepsilon) + h(1)\}\\
& \le C\left\{ \left( \fint_{D_1} |u_\varepsilon|^2 \right)^{1/2} + \norm{f}_{C^{1,\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\}\\
& \le C\left\{ \left( \fint_{D_1} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{f}_{C^{1,\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\},
\end{align*}
where we have used the Poincar\'{e} inequality and the fact $u_\varepsilon = f$ on $\Delta_1$ in the last inequality. This, together with the Caccioppoli inequality, gives the estimate (\ref{ineq_DP_Lip}).
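More precisely, for $\varepsilon \le r\le 1/2$, applying Caccioppoli's inequality to $u_\varepsilon - q$ and using $u_\varepsilon = f$ on $\Delta_r$, one obtains (up to harmless boundary terms)
\begin{equation*}
\left( \fint_{D_{r/2}} |\nabla u_\varepsilon|^2 \right)^{1/2} \le \frac{C}{r} \inf_{q\in \mathbb{R}^d} \left( \fint_{D_r} |u_\varepsilon - q|^2 \right)^{1/2} + Cr \left( \fint_{D_r} |F|^p \right)^{1/p} + C\norm{f}_{C^{1,\tau}(\Delta_1)},
\end{equation*}
which, combined with the previous chain of inequalities, yields (\ref{ineq_DP_Lip}).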
\end{proof}
\begin{remark}
It is easy to see that the argument above for the large scale boundary Lipschitz estimate also works for the interior Lipschitz estimate; see \cite[Theorem 11.1]{ShenZhuge2} for another proof. Indeed, we are able to establish
\begin{equation}\label{ineq_int_Lip}
\left( \fint_{B_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\left\{ \left( \fint_{B_1} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{F}_{L^p(B_1)} \right\},
\end{equation}
where $u_\varepsilon$ is a solution of $\mathcal{L}_\varepsilon u_\varepsilon + \lambda u_\varepsilon = F$ in $B_2$.
\end{remark}
\begin{remark}\label{rmk_Lip}
As we have mentioned in the Introduction, under the additional condition of smoothness on the coefficients, the full uniform boundary Lipschitz estimate follows from Theorem \ref{thm_Lip_DP} and a blow-up argument. In fact, it is sufficient to assume $A$ is H\"{o}lder continuous, i.e., there exist $\delta>0$ and $C$ such that
\begin{equation}\label{ineq_A_holder}
|A(x) - A(y)| \le C|x-y|^\delta,
\end{equation}
for all $x,y\in \mathbb{R}^d$.
Now we would like to give the details of the blow-up argument. Let $u_\varepsilon$ be as before. Set $\tilde{u}(x) = \varepsilon^{-1} u_\varepsilon(x/\varepsilon)$, then $\tilde{u}$ satisfies
\begin{equation}
- \text{div} (A(x)\nabla \tilde{u}(x)) = F_\varepsilon(x) \quad \text{in } \tilde{D}_1, \qquad \text{and} \qquad \tilde{u} = f_\varepsilon \quad \text{on } \tilde{\Delta}_1,
\end{equation}
where $F_\varepsilon(x) = \varepsilon F(\varepsilon x)$ and $f_\varepsilon(x) = \varepsilon^{-1}f(\varepsilon x)$ and
\begin{equation}
\begin{aligned}
\tilde{D}_1 &= \left\{ (x',x_d)\in \mathbb{R}^d: |x'|<1 \text{ and } \varepsilon^{-1} \phi(\varepsilon x') < x_d < \varepsilon^{-1}\phi(\varepsilon x') + 1 \right\}, \\
\tilde{\Delta}_1 &= \left\{ (x',x_d)\in \mathbb{R}^d: |x'|<1 \text{ and } x_d = \varepsilon^{-1} \phi(\varepsilon x') \right\}. \\
\end{aligned}
\end{equation}
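Note that the equation for $\tilde{u}$ follows from the chain rule: since $\nabla \tilde{u}(x) = (\nabla u_\varepsilon)(\varepsilon x)$, we have
\begin{equation*}
- \text{div} (A(x)\nabla \tilde{u}(x)) = -\varepsilon \left[ \text{div} \left( A(\cdot/\varepsilon)\nabla u_\varepsilon \right) \right](\varepsilon x) = \varepsilon F(\varepsilon x) = F_\varepsilon(x),
\end{equation*}
where we used $A((\varepsilon x)/\varepsilon) = A(x)$ (when $\lambda>0$, the extra zero-order term $\varepsilon^2 \lambda \tilde{u}$ may be absorbed into $F_\varepsilon$).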
Recall that $\phi(0) = 0$ and $\norm{\nabla \phi}_{C^\tau(\mathbb{R}^{d-1})} \le M$. Let $\phi_\varepsilon(x) = \varepsilon^{-1}\phi(\varepsilon x)$. Then we also have $\phi_\varepsilon(0) = 0$ and $\norm{\nabla \phi_\varepsilon}_{C^\tau(B(0,1))} \le M$. Without loss of generality, we may also assume $f(0) = 0$, by subtracting a constant from the solution, since we are only concerned with the magnitude of the gradient. It follows that $\norm{f_\varepsilon}_{C^{1,\tau}(\tilde{\Delta}_1)} \le \norm{f}_{C^{1,\tau}(\Delta_1)}$. Moreover, it is clear that $\norm{F_\varepsilon}_{L^p(\tilde{D}_1)} \le \norm{F}_{L^p(D_1)}$ for $p> d$. If $A$ satisfies (\ref{ineq_A_holder}), then we can apply the Lipschitz estimate (or the $C^{1,\alpha}$ estimate) to $\tilde{u}$ and obtain
\begin{align*}
|\nabla \tilde{u}(0)| &\le C\left\{ \left( \fint_{\tilde{D}_1} |\nabla \tilde{u}|^2 \right)^{1/2} + \norm{f_\varepsilon}_{C^{1,\tau}(\tilde{\Delta}_1)} + \norm{F_\varepsilon}_{L^p(\tilde{D}_1)} \right\} \\
& \le C\left\{ \left( \fint_{D_\varepsilon} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{f}_{C^{1,\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\}.
\end{align*}
Now noting that $\nabla u_\varepsilon(0) = \nabla \tilde{u}(0)$ and combining the last inequality with Theorem \ref{thm_Lip_DP}, we obtain
\begin{equation}
|\nabla u_\varepsilon(0)| \le C \left\{ \left( \fint_{D_1} |\nabla u_\varepsilon|^2 \right)^{1/2} + \norm{f}_{C^{1,\tau}(\Delta_1)} + \norm{F}_{L^p(D_1)} \right\}.
\end{equation}
Observe that this argument works equally well for points whose distance from the boundary is less than $\varepsilon$. On the other hand, for points far away from the boundary, we can combine the large scale interior Lipschitz estimate (\ref{ineq_int_Lip}) with the blow-up argument to obtain the full uniform Lipschitz estimate. As a consequence, we obtain the following.
\end{remark}
\begin{theorem}[Global Lipschitz estimate for DP]
Let $\Omega$ be a bounded $C^{1,\alpha}$ domain. Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and H\"{o}lder continuity (\ref{ineq_A_holder}). Moreover, $\omega_{k,\sigma}$ obeys the Dini-type condition (\ref{ineq_Dini_Intro}) for some $\sigma \in (0,1), k\ge 1$. If $u_\varepsilon$ is the weak solution of (\ref{def_DP}) with $F\in L^p(\Omega), p>d$ and $f\in C^{1,\tau}(\partial\Omega), \tau > 0$, then
\begin{equation*}
\norm{\nabla u_\varepsilon}_{L^\infty(\Omega)} \le C \left\{ \norm{f}_{C^{1,\tau}(\partial\Omega)} + \norm{F}_{L^p(\Omega)} \right\},
\end{equation*}
where the constant is independent of $\varepsilon$.
\end{theorem}
Finally, we mention that we should be able to obtain the full Lipschitz estimate for Neumann problems, as well as full H\"{o}lder estimates (Section 5.3) for both Dirichlet and Neumann problems, by the same blow-up argument. The details are left to the reader. (For H\"{o}lder estimates, it is sufficient to assume that $A$ belongs to the VMO space \cite{Shen1}.)
\subsection{Neumann boundary value problems}
Neumann problems are treated analogously to Dirichlet problems: all the lemmas and results are parallel to those proved in the Dirichlet case. For this reason, we only list the lemmas needed, as a sketch of the proof, and omit the technical details. Throughout this subsection, we let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon= F$ in $D_2$ with $\partial u_\varepsilon/\partial \nu_\varepsilon = g$ on $\Delta_2$. Define the following auxiliary quantities:
\begin{equation}
\begin{aligned}
\Psi(t) = & \frac{1}{t} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_t} |u_\varepsilon - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p} \\
&\qquad + t^2\lambda |q| + t \norm{g}_{L^\infty(\Delta_t)}\Bigg\},
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
J(t;u) & = \frac{1}{t} \inf_{\substack{P\in \mathbb{R}^{d\times d} \\q\in \mathbb{R}^d}} \Bigg\{ \left( \fint_{D_t} |u -Px - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p}\\
&\qquad + t^2 \lambda \norm{Px+q}_{L^\infty(D_t)} + t \Norm{g - \frac{\partial}{\partial \nu_0}(Px) }_{L^\infty(\Delta_t)} \\
&\qquad + t^{1+\tau} \Norm{g - \frac{\partial}{\partial \nu_0}(Px) }_{C^\tau(\Delta_t)} \Bigg\},
\end{aligned}
\end{equation}
where $p>d$ and $\tau\in (0,\alpha)$.
\begin{lemma}\label{lem_Neu_rate}
Let $\varepsilon \le r\le 1$. There exists $w\in H^1(D_r;\mathbb{R}^d)$ such that $\mathcal{L}_0 (w) + \lambda w = F$ in $D_r$, $\partial w/\partial \nu_0 =g$ on $\Delta_r$ and
\begin{equation}\label{ineq_ue_flatN}
\left( \fint_{D_r} |u_\varepsilon - w|^2\right)^{1/2} \le C[\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Psi(2r),
\end{equation}
where $C$ depends only on $A,\sigma$ and $M$.
\end{lemma}
\begin{proof}
The lemma follows from (\ref{ineq_L2N_Lip}) and the same argument as Lemma \ref{lem_ue_v}.
\end{proof}
\begin{lemma}\label{lem_Neu_L0}
Let $w \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_0(w) + \lambda w = F$ in $D_2$ with $\partial w/\partial \nu_0 = g$ on $\Delta_2$. Then there exists $\theta\in (0,1/4)$, depending only on $p,A,\tau,\alpha$ and $M$, such that
\begin{equation}
J(\theta r; w) \le \frac{1}{2} J(r; w).
\end{equation}
\end{lemma}
\begin{proof}
This lemma follows from the boundary $C^{1,\alpha}$ estimate for Neumann problems and an argument similar to that of Lemma \ref{lem_Htheta}.
\end{proof}
\begin{lemma}\label{lem_Neu_Le}
Let $0<\varepsilon<1/2$. Then there exists a $\theta\in (0,1/4)$ such that for any $r\in [\varepsilon,1/2]$,
\begin{equation}
J(\theta r; u_\varepsilon) \le \frac{1}{2} J(r;u_\varepsilon) + C [\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Psi(2r),
\end{equation}
where $C$ depends only on $p,A,\alpha,\tau,\sigma$ and $M$.
\end{lemma}
\begin{proof}
The lemma follows along the same lines as Lemma \ref{lem_HPhi}, by combining Lemmas \ref{lem_Neu_rate} and \ref{lem_Neu_L0}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_Lip_NP}]
With Lemma \ref{lem_Neu_Le} at our disposal, (\ref{ineq_NP_Lip}) follows from Lemma \ref{lem_iter}, as in the case of Dirichlet boundary conditions. We omit the details.
\end{proof}
\section{Applications}
\subsection{Improved estimates of approximate correctors}
In \cite{ShenZhuge2}, we obtained the interior uniform H\"{o}lder continuity for solutions of the system
\begin{equation}
-\text{div}(A(x/\varepsilon) \nabla u_\varepsilon) + \lambda u_\varepsilon = F + \text{div} f,
\end{equation}
down to the scale $\varepsilon$, by a compactness argument. Based on that, we were able to establish the estimates (\ref{ineq_dchiT_1}) for the approximate correctors for any $\sigma>0$. However, to recover the end-point case $\sigma =0$, we have to employ the interior Lipschitz estimate under the Dini-type condition (\ref{ineq_Dini_Intro}). This has been shown in \cite{ShenZhuge2}; here we give a slightly different approach, based on our new version of the Lipschitz estimate with $\lambda = 1$, to obtain the same estimate for $\sigma = 0$.
\begin{theorem}\label{thm_dchiT_S1}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and for some $\sigma \in (0,1)$ and $k\ge 1$, $\omega_{k,\sigma}$ satisfies the Dini-type condition (\ref{ineq_Dini_Intro}). Then,
\begin{equation}\label{ineq_dchiT_imp}
\norm{\nabla \chi_T}_{S_1^2} \le C,
\end{equation}
where the constant $C$ depends only on $\sigma,k$ and $A$.
\end{theorem}
\begin{proof}
Recall that $\chi_{T,j}^\beta$ satisfy the equations for approximate correctors (\ref{def_corrector}). Fix $x_0\in \mathbb{R}^d$, and let
\begin{equation}
u_\varepsilon(x)= T^{-2}\chi_{T,j}^\beta(Tx) + T P_j^\beta(x-x_0), \qquad T = \varepsilon^{-1},
\end{equation}
where $P_j^\beta$ is an affine function. Then $u_\varepsilon$ satisfies
\begin{equation}\label{eq_chiT_var}
-\text{div}(A(x/\varepsilon) \nabla u_\varepsilon) + u_\varepsilon = \varepsilon P_j^\beta(x-x_0), \qquad x\in \mathbb{R}^d.
\end{equation}
Therefore, with the additional Dini-type condition on the convergence rate, we can apply the interior Lipschitz estimate to the system (\ref{eq_chiT_var}). It follows that
\begin{equation*}
\begin{aligned}
\left( \fint_{B(x_0,\varepsilon)} |\nabla u_\varepsilon|^2 \right)^{1/2} & \le C \left( \fint_{B(x_0,1)} |\nabla u_\varepsilon|^2 \right)^{1/2} + C \varepsilon \left( \fint_{B(x_0,1)} |P_j^\beta(\cdot -x_0)|^p \right)^{1/p}\\
& \le CT^{-1} \left( \fint_{B(x_0,T)} |\nabla \chi_{T,j}^\beta |^2 \right)^{1/2} + CT^{-1} \\
& \le CT^{-1},
\end{aligned}
\end{equation*}
where the last inequality follows from (\ref{def_corrector}) and \cite[Lemma 3.1]{ShenZhuge2}. Hence,
\begin{equation}
\left( \fint_{B(x_0,1)} |\nabla \chi_T|^2 \right)^{1/2} \le C.
\end{equation}
This implies (\ref{ineq_dchiT_imp}) since $x_0\in \mathbb{R}^d$ is arbitrary.
\end{proof}
\subsection{Rellich estimate in $L^2$}
The classical Rellich estimate for harmonic functions asserts that $\norm{\nabla u}_{L^p(\partial\Omega)}$, $\norm{\nabla_{\tan} u}_{L^p(\partial\Omega)}$ and $\norm{\frac{\partial}{\partial \nu} u}_{L^p(\partial\Omega)}$ are comparable. These estimates are important since they usually imply the solvability of $L^p$ boundary value problems \cite{KS}. In this subsection, we show the uniform Rellich estimate for $u_\varepsilon$ in $L^2$ at large scale in Lipschitz domains, without any smoothness assumption. For simplicity, we temporarily assume $F = 0$ and $\lambda = 0$.
First, we note that if $\Omega$ is a $C^{1,\alpha}$ domain and the conditions of Theorem \ref{thm_Lip_DP} are satisfied, then (\ref{ineq_DP_Lip}) implies
\begin{equation}\label{ineq_Rel_C11}
\left( \fint_{\Omega_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C \norm{f}_{C^{1,\tau}(\partial\Omega)} ,
\end{equation}
for all $\varepsilon\le r\le \text{diam}(\Omega)$, where we also used a covering argument and the energy estimate for $\norm{\nabla u_\varepsilon}_{L^2(\Omega)}$. One can see that (\ref{ineq_Rel_C11}) gives a Rellich-type estimate, but under stronger conditions, which do not imply (\ref{ineq_RelDP}) in Lipschitz domains. However, by taking advantage of some ideas from the proof of the uniform Lipschitz estimate, we can easily obtain the following.
\begin{theorem}\label{thm_Rellich_DP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $\Omega$ is a bounded Lipschitz domain. Let $u_\varepsilon$ be the weak solution of Dirichlet problem
\begin{equation}
\mathcal{L}_\varepsilon(u_\varepsilon) = 0 \quad \text{in } \Omega, \qquad \text{and} \qquad u_\varepsilon =f \quad \text{on } \partial\Omega,
\end{equation}
where $f\in H^1(\partial\Omega)$. Then for any $\omega_{k,\sigma}(\varepsilon) \le r\le \text{diam}(\Omega)$,
\begin{equation}\label{ineq_RelD}
\left( \fint_{\Omega_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\norm{\nabla_{\tan} f}_{L^2(\partial\Omega)},
\end{equation}
where $C$ is independent of $\varepsilon$ and $r$.
\end{theorem}
\begin{proof}
Recall $\Omega_t = \{x\in \Omega; \text{dist}(x,\partial\Omega) < t \}$. We fix $r\ge \omega_{k,\sigma}(\varepsilon)$ and let
\begin{equation}\label{eq_we_4r}
w_\varepsilon = u_\varepsilon - u_0 -\varepsilon \chi_{T,k}^\beta(x/\varepsilon) K_{\varepsilon,4r} \left(\frac{\partial u_0^\beta}{\partial x_k}\right).
\end{equation}
Now, following the same argument as in Theorem \ref{thm_H1_Lip} and choosing $\delta = 4r$ after (\ref{ineq_dKdu0}), we obtain
\begin{equation}\label{ineq_H1_r}
\norm{ w_\varepsilon}_{H^1(\Omega)} \le Cr^{1/2} \norm{f}_{H^1(\partial\Omega)}.
\end{equation}
Note that this coincides with Theorem \ref{thm_H1_Lip} if $r = \omega_{k,\sigma}(\varepsilon)$.
The point here is that the last term on the right-hand side of (\ref{eq_we_4r}) is supported in $\Omega\setminus\Omega_{2r}$. Thus, by (\ref{ineq_H1_r}) and (\ref{ineq_u0_delta}), we have
\begin{align*}
\norm{\nabla u_\varepsilon}_{L^2(\Omega_r)} &\le \norm{\nabla w_\varepsilon}_{L^2(\Omega)} + \norm{\nabla u_0}_{L^2(\Omega_r)} \\
&\le Cr^{1/2} \norm{f}_{H^1(\partial\Omega)} \\
&\le C |\Omega_r|^{1/2} \norm{f}_{H^1(\partial\Omega)},
\end{align*}
where we used the fact that $|\Omega_r| \simeq r$ for Lipschitz domains. Finally, note that $u_\varepsilon - \fint_{\partial\Omega} f$ is also a solution to the same system. Then the last estimate, together with the Poincar\'{e} inequality, gives the desired estimate.
\end{proof}
It is clear that the proof of Theorem \ref{thm_Rellich_DP} does not depend on the boundary condition. Therefore, a similar estimate holds for the Neumann problem as well.
\begin{theorem}\label{thm_Rellich_NP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}) and $\Omega$ is a bounded Lipschitz domain. Let $u_\varepsilon$ be the weak solution of the Neumann problem
\begin{equation}
\mathcal{L}_\varepsilon(u_\varepsilon) = 0 \quad \text{in } \Omega, \qquad \text{and} \qquad \frac{\partial u_\varepsilon}{\partial \nu_\varepsilon} =g \quad \text{on } \partial\Omega, \qquad \text{and} \qquad \int_{\Omega} u_\varepsilon = 0,
\end{equation}
where $g\in L^2(\partial\Omega)$. Then for any $\omega_{k,\sigma}(\varepsilon) \le r \le \text{diam}(\Omega)$,
\begin{equation}\label{ineq_RelN}
\left( \fint_{\Omega_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C\norm{g}_{L^2(\partial\Omega)},
\end{equation}
where $C$ is independent of $\varepsilon$ and $r$.
\end{theorem}
Strictly speaking, just as with the uniform Lipschitz estimate, (\ref{ineq_RelD}) and (\ref{ineq_RelN}) should be called large scale uniform Rellich estimates, since the left-hand side is the average integral of $\nabla u_\varepsilon$ over a relatively thick boundary layer. To recover the usual Rellich estimates, we must strengthen the conditions in two respects:
(1) a better rate of convergence, i.e., $\omega_{k,\sigma}(\varepsilon)= O(\varepsilon)$ as $\varepsilon \to 0$;
(2) symmetry and smoothness conditions on the coefficients, i.e., $A = A^*$ and $A$ is uniformly H\"{o}lder continuous.
With condition (2) above, we are able to bound $\norm{\nabla u_\varepsilon}_{L^2(\partial\Omega)}$ by the average integral of $\nabla u_\varepsilon$ over the boundary layer $\Omega_{c\varepsilon}$. Indeed, it follows from \cite[Theorem 6.3]{KS} and \cite[Remark 3.1]{Shen1} that if $A$ is symmetric and uniformly H\"{o}lder continuous, then
\begin{equation}\label{ineq_RelDs}
\int_{\partial\Omega} |\nabla u_\varepsilon|^2 \le C\int_{\partial\Omega} |\nabla_{\tan} u_\varepsilon|^2 + \frac{C}{\varepsilon} \int_{\Omega_{c\varepsilon}} |\nabla u_\varepsilon|^2,
\end{equation}
and
\begin{equation}\label{ineq_RelNs}
\int_{\partial\Omega} |\nabla u_\varepsilon|^2 \le C\int_{\partial\Omega} \Abs{\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon}}^2 + \frac{C}{\varepsilon} \int_{\Omega_{c\varepsilon}} |\nabla u_\varepsilon|^2,
\end{equation}
where $C$ and $c$ are independent of $\varepsilon$. Now, using condition (1), setting $r = \omega_{k,\sigma}(\varepsilon) = C\varepsilon$, and combining (\ref{ineq_RelD}), (\ref{ineq_RelN}), (\ref{ineq_RelDs}) and (\ref{ineq_RelNs}), we obtain the well-known Rellich estimates:
\begin{equation}
\norm{\nabla u_\varepsilon}_{L^2(\partial\Omega)} \le C\norm{\nabla_{\tan} f}_{L^2(\partial\Omega)}, \qquad \norm{\nabla u_\varepsilon}_{L^2(\partial\Omega)} \le C\norm{\frac{\partial u_\varepsilon}{\partial \nu_\varepsilon}}_{L^2(\partial\Omega)}.
\end{equation}
\begin{remark}
We should mention that the large scale Rellich estimate in $L^p$ can also be established by using the uniform $W^{1,p}$ estimates and the convergence rate in $W^{1,p}$, as shown in \cite{Shen1} (some smoothness conditions on $A$ and $\Omega$ are required). However, we will not pursue the details here.
\end{remark}
\subsection{Large scale boundary H\"{o}lder estimate} As an easier application of our previous argument for the Lipschitz estimate, we show the uniform H\"{o}lder estimate near the boundary. Let $D_r, \Delta_r$ be defined as before. Let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon = F$ in $D_2$ with $u_\varepsilon = f$ on $\Delta_2$. Here we assume that $F\in L^p(D_2)$ with $p\ge 2$ and $p>d/2$, and that $f$ is Lipschitz continuous on $\Delta_2$. Consider the following auxiliary quantity:
\begin{equation}\label{def_Phibeta}
\begin{aligned}
\Phi_\gamma (t;u) &= \frac{1}{t^\gamma} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_t} |u - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p} \\
&\qquad + t^2\lambda |q| + \norm{f -q}_{L^\infty(\Delta_{t})} + t \norm{\nabla_{\tan} f}_{L^\infty(\Delta_t)}\Bigg\},
\end{aligned}
\end{equation}
where $\gamma < \beta = \min\{2 - d/p,1\}$.
\begin{lemma}\label{lem_uev_holder}
Let $\varepsilon \le r\le 1$. There exists $v\in H^1(D_r;\mathbb{R}^d)$ such that $\mathcal{L}_0 (v) + \lambda v = F$ in $D_r$, $v =f$ on $\Delta_r$ and
\begin{equation}\label{ineq_ueflat_holder}
\frac{1}{r^\gamma} \left( \fint_{D_r} |u_\varepsilon - v|^2\right)^{1/2} \le C[\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi_\gamma(2r;u_\varepsilon),
\end{equation}
where $C$ depends only on $A,\sigma$ and $M$.
\end{lemma}
\begin{proof}
The proof is exactly the same as that of Lemma \ref{lem_ue_v}.
\end{proof}
\begin{lemma}\label{lem_Htheta_holder}
Let $v \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_0(v) + \lambda v= F$ in $D_2$ with $ v = f$ on $\Delta_2$. Then there exists $\theta\in (0,1/4)$, depending only on $p,A,\tau,\alpha$ and $M$, such that
\begin{equation}
\Phi_\gamma(\theta r; v) \le \frac{1}{2} \Phi_\gamma(r; v).
\end{equation}
\end{lemma}
\begin{proof}
The lemma follows from the boundary $C^\alpha$ estimate for the second-order elliptic system with constant coefficients. By rescaling, we can assume that $r = 1$. Let $\gamma < \beta_0 < \beta$ and $q = v(0)$. It is easy to see
\begin{equation}
\Phi_\gamma(\theta;v) \le C\theta^{\beta_0 - \gamma} \norm{v}_{C^{\beta_0}(D_\theta)} + C\theta^{2-\gamma - d/p} \left( \fint_{D_1} |F|^p \right)^{1/p}.
\end{equation}
Note that $\beta_0 < \beta \le 2-d/p$. Using boundary $C^{\beta_0}$ estimate for $v$, we obtain
\begin{equation}\label{ineq_vCbeta}
\norm{v}_{C^{\beta_0}(D_1)} \le C \Bigg\{ \left( \fint_{D_1} |v|^2 \right)^{1/2} + \left( \fint_{D_1} |F|^p \right)^{1/p}
+ \norm{f }_{C^{0,1}(\Delta_1)} \Bigg\}.
\end{equation}
Hence,
\begin{equation}
\Phi_\gamma(\theta;v) \le C\theta^{\beta_0 - \gamma} \Bigg\{ \left( \fint_{D_1} |v|^2 \right)^{1/2} + \left( \fint_{D_1} |F|^p \right)^{1/p}
+ \norm{f }_{C^{0,1}(\Delta_1)} \Bigg\}.
\end{equation}
Now let $w = v - q$ for some $q\in \mathbb{R}^d$. Then
\begin{equation}
\mathcal{L}_0(w) + \lambda w = F - \lambda q.
\end{equation}
Applying (\ref{ineq_vCbeta}) to $w$ with Dirichlet boundary data $w = f -q$, we arrive at
\begin{align}
\begin{aligned}\label{ineq_Phi_w}
\Phi_\gamma(\theta;w) &\le C \theta^{\beta_0 - \gamma} \Bigg\{ \left( \fint_{D_1} |v-q|^2 \right)^{1/2} \\
&\qquad \qquad \qquad + \left( \fint_{D_1} |F|^p \right)^{1/p} + \lambda |q|
+ \norm{f-q }_{C^{0,1}(\Delta_1)} \Bigg\}.
\end{aligned}
\end{align}
Observe that
\begin{equation}\label{ineq_Phi_wv}
\Phi_\gamma(\theta;v) \le \Phi_\gamma(\theta;w) + \lambda \theta^{2-\gamma} |q|.
\end{equation}
Combining (\ref{ineq_Phi_w}) and (\ref{ineq_Phi_wv}) and taking the infimum over all $q\in \mathbb{R}^d$, we obtain
\begin{equation}
\Phi_\gamma(\theta;v) \le C\theta^{\beta_0 -\gamma } \Phi_\gamma(1;v).
\end{equation}
The desired estimate follows by choosing $\theta \in (0,1/4)$ so small that $C \theta^{\beta_0 -\gamma } \le 1/2$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_holder_DP}]
By using Lemmas \ref{lem_uev_holder} and \ref{lem_Htheta_holder} and the same argument as in Lemma \ref{lem_HPhi}, we have
\begin{equation*}
\Phi_\gamma(\theta r; u_\varepsilon) \le \frac{1}{2} \Phi_\gamma(2r;u_\varepsilon) + C [\omega_{k,\sigma}(\varepsilon/r)]^{1/2} \Phi_\gamma(2r;u_\varepsilon).
\end{equation*}
Since $\omega_{k,\sigma}(r) \to 0$ as $r\to 0$, we can choose $N$ sufficiently large such that $C [\omega_{k,\sigma}(K^{-1})]^{1/2} < 1/2$ for any $K\ge N$. In other words, for all $N\varepsilon \le r< 1/2$, $\Phi_\gamma(\theta r; u_\varepsilon) \le \Phi_\gamma(2r; u_\varepsilon)$. Iterating this inequality along the scales $r, (2/\theta) r, (2/\theta)^2 r, \dots$ until a scale comparable to $1$ is reached, we obtain $\Phi_\gamma(r; u_\varepsilon) \le C\Phi_\gamma(1; u_\varepsilon)$ for all $N\varepsilon \le r<1/2$.
Finally, the case $\varepsilon \le r\le N\varepsilon$ follows from $\Phi_\gamma(r; u_\varepsilon) \le C\Phi_\gamma(N\varepsilon; u_\varepsilon)$. As a result,
\begin{equation}
\Phi_\gamma(r; u_\varepsilon) \le C\Phi_\gamma(1; u_\varepsilon),
\end{equation}
for all $\varepsilon < r < 1/2$.
Now by Caccioppoli's inequality,
\begin{equation*}
\begin{aligned}
\left( \fint_{D_{r/2}} |\nabla u_\varepsilon|^2 \right)^{1/2} &\le \frac{C}{r} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_r} |u_\varepsilon - q|^2 \right)^{1/2} + r^2 \left( \fint_{D_r} |F|^p \right)^{1/p} \\
&\qquad + \norm{f -q}_{L^\infty(\Delta_{r})} + r \norm{\nabla_{\tan} f}_{L^\infty(\Delta_r)}\Bigg\} \\
& \le C r^{\gamma-1} \Phi_\gamma(r;u_\varepsilon) \\
& \le C r^{\gamma-1} \Phi_\gamma(1;u_\varepsilon) \\
& \le C r^{\gamma-1} \Bigg\{ \left( \fint_{D_1} |\nabla u_\varepsilon |^2 \right)^{1/2} + \left( \fint_{D_1} |F|^p \right)^{1/p} + \norm{f}_{C^{0,1}(\Delta_{1})} \Bigg\},
\end{aligned}
\end{equation*}
which completes the proof.
\end{proof}
The analogous result holds for Neumann problems. By setting
\begin{equation}\label{def_Psibeta}
\begin{aligned}
\Psi_\gamma (t;u) &= \frac{1}{t^\gamma} \inf_{q\in \mathbb{R}^d} \Bigg\{ \left( \fint_{D_t} |u_\varepsilon - q|^2 \right)^{1/2} + t^2 \left( \fint_{D_t} |F|^p \right)^{1/p} \\
&\qquad + t^2\lambda |q| + t \norm{g}_{L^\infty(\Delta_t)}\Bigg\},
\end{aligned}
\end{equation}
and applying an argument similar to that of Theorem \ref{thm_holder_DP}, we obtain the following.
\begin{theorem}[Boundary H\"{o}lder estimate for NP]\label{thm_holder_NP}
Suppose that $A\in APW^2(\mathbb{R}^d)$ satisfies the ellipticity condition (\ref{def_ellipticity}). Let $u_\varepsilon \in H^1(D_2;\mathbb{R}^d)$ be a weak solution of $\mathcal{L}_\varepsilon(u_\varepsilon) + \lambda u_\varepsilon= F$ in $D_2$ with $\partial u_\varepsilon/\partial \nu_\varepsilon = g$ on $\Delta_2$, where $\lambda \in [0,1]$. Then, for any $\varepsilon \le r\le 1$,
\begin{equation}\label{ineq_Holder_NP}
\left( \fint_{D_r} |\nabla u_\varepsilon|^2 \right)^{1/2} \le C r^{\gamma-1} \Bigg\{ \left( \fint_{D_1} |\nabla u_\varepsilon |^2 \right)^{1/2} + \left( \fint_{D_1} |F|^p \right)^{1/p}
+ \norm{g}_{L^\infty(\Delta_{1})} \Bigg\},
\end{equation}
where $\gamma < 2-d/p$, $p\ge 2$, and $p>d/2$. In particular, if $p = d$, then (\ref{ineq_Holder_NP}) holds for all $\gamma \in (0,1)$.
\end{theorem}
\bibliographystyle{amsplain}
\section{Introduction}
Any investigation of the origin of cosmic rays (CRs) has to address some basic questions: 1) where and how are CRs produced and accelerated? 2) what is their chemical composition at the sources? 3) how do they propagate to us? 4) what is the anisotropy that results from the source distribution (in space and time) and from propagation?
After the pioneering proposal of Baade and Zwicky \cite{Baade:1934p884}, the idea that the bulk of Galactic CRs may be produced in supernova remnants (SNRs) has become increasingly popular. The main reason for this success is energetic: these astrophysical sources are the most plausible candidates, in that an efficiency of $\sim 10\%$ in the form of accelerated particles is able to provide a qualitatively good description of the CR fluxes observed at Earth. The second major reason came several decades after the original idea was put forward, with the proposal of diffusive shock acceleration (DSA) \cite{Bell:1978p1342,Bell:1978p1344,Blandford:1987p881} as a mechanism for converting the kinetic energy of the expanding supernova blast wave into accelerated particles. This mechanism is seen at work in shocks in the solar system and is based on relatively simple principles. These two ingredients, together with a body of observations showing likely hints of efficient CR acceleration in SNRs, have elevated the SNR idea to the rank of a standard paradigm. Yet, the paradigm has not been proven right beyond doubt so far, as discussed recently in Ref. \cite{blasiICATPP}.
The problems with confirming the paradigm originate primarily from addressing all four questions above self-consistently: on one hand, the spectra of CRs accelerated in SNRs according to test-particle DSA are very close to power laws $N(E)\propto E^{-\gamma}$ with slope $\gamma\approx 2$. On the other hand, the spectra measured at Earth then require a diffusion coefficient $D(E)\propto E^{\delta}$ with $\delta\sim 0.7$, so that the equilibrium spectrum is $n(E)\propto N(E)/D(E)\propto E^{-\gamma-\delta}=E^{-2.7}$. However, the strong diffusion that follows from this line of thought seems to lead to exceedingly large anisotropy \cite{Ptuskin:2006p620}. Non-linear effects in DSA make this problem even more severe, in that the predicted spectra are concave and require an even stronger energy dependence of the diffusion coefficient at high energies, as found in Ref. \cite{Berezhko:2007p1010}, where, however, the problem of anisotropy was not discussed.
It has recently been proposed that the non-linear theory of DSA (hereafter NLDSA) may lead to somewhat steeper spectra if the velocity of the scattering centers is taken into account in solving the transport equation for CRs at SNR shocks \cite{Caprioli:2010p133,Caprioli:2010p789,Ptuskin:2010p1025}. In this case the spectra of accelerated particles may be less concave, and have an approximate power law shape with slope $\approx 2.1-2.2$, thereby implying $\delta \approx 0.5-0.6$. The steeper slope of the injection spectrum follows from both the finite velocity of the scattering waves and from the convolution over time of the acceleration history of SNRs throughout their evolution in the interstellar medium (ISM). It is noteworthy, however, that this mechanism introduces a very disappointing element in that the result depends rather strongly on poorly understood details such as the wave helicity, as well as the reflection and transmission of waves at the shock surface (in principle the effect considered here could even lead to harder spectra).
The most serious limitation to the identification of SNRs as the sources of Galactic CRs is the lack of a firm detection of gamma rays that can unequivocally be interpreted as due to production and decay of neutral pions: in the few cases in which this association appears to be easier, the SNR is close to a molecular cloud (a denser target for pp interactions) and typically of old age. In such circumstances the spectrum of accelerated particles is unlikely to be representative of SNRs at their best in terms of CR production. In these few cases, observations show a rather steep spectrum of accelerated particles, quite unlike the ones predicted by NLDSA. Possible exceptions are RX J1713.7-3946 and Tycho, although for the first remnant other problematic aspects may be found \cite{Morlino:2009p140,Ellison:2010p636}. Moreover, a recent analysis of the data on this remnant by the Fermi collaboration \cite{Abdo:2011p1858} suggests that the spectrum of gamma rays is in fact very flat, leading to the conclusion that the emission is likely of leptonic origin. On the other hand, the recent gamma ray detection of the Tycho supernova remnant by the VERITAS Cherenkov telescope \cite{2011ApJ...730L..20A} and by the Fermi telescope \cite{2011arXiv1108.0265G} appears to be best fit by a model in which particle acceleration occurs efficiently \cite{2011arXiv1105.6342M}.
As discussed in \cite{Caprioli:2010p789,Ptuskin:2010p1025}, a qualitatively good fit to the spectra of CRs (including nuclei heavier than hydrogen) can be achieved within the context of NLDSA with finite speed of scattering centers (implying $\delta\approx 0.55$) but some spectral features that have been recently found, the so-called discrepant hardenings \cite{Ahn:2010p624} observed by CREAM, are not explained as yet within the context of the SNR paradigm (for a different, environment related explanation, see \cite{ohiraioka:2011p729}).
From the point of view of propagation in the Galaxy, the standard calculations carried out with GALPROP \cite{GALPROP2011}, or similar propagation codes, suggest that one can obtain a good global fit also with reacceleration models, namely with diffusion coefficient $D(E)\propto E^{1/3}$ and second order Fermi acceleration: the latter is relevant only at low energies, where it allows one to better reproduce the B/C ratio. This scenario would imply a high energy injection spectrum $N(E)\propto E^{-2.4}$, very challenging for NLDSA in SNRs, but certainly preferable from the point of view of anisotropy (see Paper II \cite{paper2}).
In the present work we use the Green function formalism to describe the propagation of CRs to Earth if the sources are SNRs. These are modeled as discrete sources in space and time, with a spatial distribution and a rate deduced from observations. The effect of discreteness on the spectrum and chemical composition of CRs observed at Earth is the goal of this first paper. In a second paper \cite{paper2} we will discuss the more delicate issue of anisotropy. We concentrate here on 1) the investigation of the spectra and chemical fluctuations induced by the discrete nature of the sources; 2) the chemical composition around the knee; 3) the spectral differences between protons and heavier elements; 4) the effect of extragalactic CRs on the end of the Galactic CR spectrum.
Previous attempts at taking into account the random nature of the sources on the CR spectrum observed at Earth were presented in \cite{Busching:2005p627,Ptuskin:2006p620}. The first paper mainly focuses on the issue of anisotropy and will be discussed in more detail in the forthcoming Paper II.
In \cite{Busching:2005p627}, the authors use a series expansion of the propagation equation to show that the stochasticity of source distribution induces large temporal fluctuations in the spectrum of primary CR nuclei. Assuming a diffusion coefficient $D(E)\propto E^{0.6}$, they find fluctuations by 20\% on average, but up to 100\%, which leads them to question the goodness of the B/C ratio as an indicator of CR propagation parameters in the Galaxy. However, a diffusion coefficient scaling as $E^{0.6}$ implies a level of anisotropy well in excess of the observations, as discussed in \cite{Ptuskin:2006p620} and in Paper II. If a weaker energy dependence of the diffusion coefficient is considered, the fluctuations found by \cite{Busching:2005p627} are much reduced, weakening also their conclusion.
The present paper is organized as follows: in \S~\ref{sec:green} we introduce the relevant Green functions; in \S~\ref{sec:simple1} we derive some basic results for the simple case of a homogeneous distribution of sources in the disc of the Galaxy and discuss the limitations of the diffusion formalism for individual sources in \S~\ref{sec:nearby}. In \S~\ref{sec:simple2} we derive simple estimates of the effect of fluctuations on the spectrum for a homogeneous but discrete distribution of sources in the disc. Our results on spectra and chemical composition for a stochastic distribution of the sources in the disc are presented in \S~\ref{sec:results}, where the spiral structure of the Galaxy is first ignored (\S~\ref{sec:res_cyl}) and then taken into account (\S~\ref{sec:res_spiral}). Additional scenarios are discussed in \S~\ref{sec:more}, while we discuss the transition region between galactic and extragalactic CRs in \S~\ref{sec:extragal}. We conclude in \S~\ref{sec:conclusion}.
\section{Relevant Green functions}\label{sec:green}
The diffusive transport of cosmic rays from a point source located at a position $\vec r_{s} = (x_{s},y_{s},z_{s})$ and injecting a spectrum $N(E)$ at a time $t_{s}$ can be written as:
\,\begin{equation}
\frac{\partial n_{k}(E,\vec r,t)}{\partial t}=\nabla\left[D_{k}(E)\nabla n_{k}(E,\vec r,t)\right] - \Gamma_{k}^{sp}(E) n_{k}(E,\vec r,t) + N_{k}(E) \delta(t-t_{s})\delta^{3}(\vec r - \vec r_{s}),
\label{eq:transport}
\,\end{equation}
where $n_{k}(E,\vec r,t)$ is the density of particles of type $k$ (nuclei) with energy $E$ at the location $\vec r$ and time $t$, $D_{k}(E)$ is the diffusion coefficient, assumed to be spatially constant, and $\Gamma_{k}^{sp}(E)$ is the rate of spallation of nuclei of type $k$ into lighter nuclei. Each source is assumed to produce nuclei of H ($k=1$), He ($k=2$), CNO ($k=3$), Mg-Al-Si ($k=4$) and Fe ($k=5$). The cutoff energy of the proton component, $E_{max}^{H}$, is chosen so as to achieve the best fit to the all-particle CR spectrum. For heavier nuclei the maximum energy is chosen to be $Z$ times larger than for H, where $Z=2$ for He, $Z=7$ for CNO, $Z=13$ for Mg-Al-Si and $Z=26$ for Fe nuclei. All injection spectra are therefore of the form:
\,\begin{equation}
N_{k}(E) \propto E^{-\gamma} \exp \left[ -\left( \frac{E}{E_{max,k}} \right)\right],
\,\end{equation}
where the proportionality constant for each of the nuclear species is determined {\it a posteriori} in order to fit the normalization of the spectra of nuclei of type $k$ as measured at Earth by CREAM at the energy of 1 TeV \cite{Ahn:2010p624,Ahn:2009p1595}. It is worth recalling that the scaling of the maximum energy with the charge $Z$ of the nucleus relies upon the assumption of full ionization of the nucleus. In fact, this condition is achieved only during the acceleration process and the actual maximum energy of heavy nuclei might be less than $Z$ times the maximum energy of protons, because of the difficulty in achieving photo-ionization of atoms through extraction of electrons in the inner atomic shells \cite{morlino:2011mnras}.
The diffusion region is assumed to be a cylinder of infinite radius and half height $H$. Inside this cylinder the diffusion coefficient is constant. The fact that the radius is assumed to be potentially infinite is not a limitation to the calculation because we will see that the source distributions that will be adopted are concentrated in a region of $4-10$ kpc radius (the Galactic disc) and for practical purposes the size of the diffusion region can be taken to be much larger than the radius of the disc. The escape of CRs from this model Galaxy occurs through the upper and lower boundaries. The escape is modeled by assuming that $n_{k}$ vanishes at $z=\pm H$, and the escape flux through the surfaces $z=\pm H$ is described by $D_{k}(E) \frac{\partial n_{k}}{\partial z}|_{z=\pm H}$.
In this paper we concentrate our attention on particles with energy above $\sim 1$ TeV; therefore, in Eq. \ref{eq:transport} we ignored advection terms and possible second-order reacceleration terms, which are both potentially important at much lower energies. We also neglect the contribution to the flux of nuclei deriving from secondary nuclei produced in spallation events. From the point of view of diffusive propagation the calculation presented here is not too different from GALPROP or similar propagation codes. The only two relevant differences are the more phenomenological way in which spallation is described and the fact that here we do not impose the free escape boundary condition at some finite radius along the lateral sides of the cylinder. The latter, as discussed above, is not a relevant limitation, although it might lead to some corrections for large values of $H$. The former is not important in the present context since we will limit ourselves to primary nuclei, while it might represent a limitation if we were to describe rare nuclei, such as $B$ and $^{10}Be$ or similar secondary products.
The spallation rate is defined in terms of the gas density in the diffusion region and of the cross section for spallation: $\Gamma_{k}^{sp}(E) = n_{gas} c \,\sigma_{sp,k}(E)$ (the velocity of nuclei has been assumed to equal the speed of light $c$, since all our calculations concern only ultra-relativistic nuclei). The cross section for spallation of a nucleus $k$ (mass $A_{k}$) can be written as \cite{Horandel:2007p877}:
\,\begin{equation}
\sigma_{sp,k}(E) = \alpha_{sp}(E) A_{k}^{\beta_{sp}(E)}~\rm mb,
\,\end{equation}
where
\,\begin{equation}
\alpha_{sp}(E) = 50.44-7.93 \log(E_{eV})+0.61\left[\log(E_{eV})\right]^{2}
\,\end{equation}
and
\,\begin{equation}
\beta_{sp}(E) = 0.97-0.022\log(E_{eV})
\,\end{equation}
where $E_{eV}$ is the energy of the nucleus of type $k$ in units of eV. The gas density is non-trivial to define in a homogeneous model such as this one. We assume here that the gas density in the disc of the Galaxy (with half-thickness $h$) is $n_{disc}=3$ cm$^{-3}$ while it vanishes in the halo. In this case the mean density experienced by CRs during propagation is approximately $n_{gas}\approx n_{disc}(h/H)$.
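As a sanity check on these parametrizations, the sketch below evaluates $\sigma_{sp,k}$ at 1 TeV and the corresponding spallation time for iron; the disc half-thickness $h=0.15$ kpc and halo size $H=4$ kpc are illustrative assumed values (the logarithms in the parametrization are taken as base 10):

```python
import math

def sigma_sp_mb(E_eV, A):
    """Spallation cross section in mb, using the parametrization quoted
    in the text (base-10 logarithms assumed)."""
    lg = math.log10(E_eV)
    alpha = 50.44 - 7.93 * lg + 0.61 * lg**2
    beta = 0.97 - 0.022 * lg
    return alpha * A**beta

# Cross sections at E = 1 TeV (mass numbers are representative choices)
sigma_p = sigma_sp_mb(1e12, 1)    # ~43 mb
sigma_fe = sigma_sp_mb(1e12, 56)  # ~740 mb

# Mean gas density seen by CRs: n_gas ~ n_disc * h / H (assumed h and H)
n_disc, h_kpc, H_kpc = 3.0, 0.15, 4.0    # cm^-3, kpc, kpc
n_gas = n_disc * h_kpc / H_kpc           # ~0.11 cm^-3
c_cm = 3e10                              # cm/s

# Spallation time for Fe: tau = 1 / (n_gas c sigma), converting mb -> cm^2
tau_sp_fe_s = 1.0 / (n_gas * c_cm * sigma_fe * 1e-27)
```

The resulting $\tau_{sp}$ for iron is of order $10^{7}$ yr, comparable to the escape time at TeV energies, which is why spallation matters for the heavy components.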
The free Green function (namely without boundary conditions) of Eq.~\ref{eq:transport} can be written as
\,\begin{equation}
{\cal G}_{k}^{free}(\vec r, t; \vec r_{s},t_{s}) = \frac{N_{k}(E)}{\left[ 4 \pi D_{k} \tau \right]^{3/2}} \exp\left[ -\Gamma_{k}^{sp}(E) \tau \right]
\exp\left[ -\frac{(\vec r -\vec r_{s})^{2}}{4 D_{k} \tau}\right],
\label{eq:greenfree}
\,\end{equation}
where $\tau = t-t_{s}$. The Green function that satisfies the correct boundary condition at $z=\pm H$ can be obtained by using the image charge method and can be written as follows:
$$
{\cal G}_{k}(\vec r, t; \vec r_{s},t_{s}) = \frac{N_{k}(E)}{\left[ 4 \pi D_{k} \tau \right]^{3/2}} \exp\left[ -\Gamma_{k}^{sp}(E) \tau \right]
\exp\left[ -\frac{(x-x_{s})^{2}+(y-y_{s})^{2}}{4 D_{k} \tau}\right] \times
$$
\,\begin{equation}
\sum_{n=-\infty}^{+\infty} (-1)^{n} \exp \left[ -\frac{(z-z'_{n})^{2}}{4 D_{k} \tau}\right],
\label{eq:green}
\,\end{equation}
where $z'_{n}=(-1)^{n} z_{s} + 2 n H$ are the $z$ coordinates of the image sources. It is easy to check that ${\cal G}_{k}(x,y,z=\pm H,t;x_{s},y_{s},z_{s},t_{s})=0$ by simply expanding the sum term. In what follows the Earth will be located at $(x,y,z)\equiv (R_{\odot},0,0)$ with $R_{\odot}=8.5$ kpc the distance of the Sun from the center of the Galaxy. It is however useful to keep Eq.~\ref{eq:green} in its most general form because anisotropies are related to the spatial derivatives of the Green function calculated at the detection location.
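The cancellation at $z=\pm H$ can also be checked numerically. The sketch below evaluates a truncated version of the image sum in Eq.~\ref{eq:green} (the prefactors are irrelevant for the boundary condition); $H$, $z_s$ and the combination $D\tau$ are arbitrary illustrative values:

```python
import math

def image_sum(z, z_s, H, Dtau, N=30):
    """Truncated image-charge sum of Eq. (green):
    sum_n (-1)^n exp(-(z - z'_n)^2 / (4 D tau)), z'_n = (-1)^n z_s + 2 n H."""
    total = 0.0
    for n in range(-N, N + 1):
        z_img = (-1)**n * z_s + 2 * n * H
        total += (-1)**n * math.exp(-(z - z_img)**2 / (4 * Dtau))
    return total

# Illustrative values (arbitrary units)
H, z_s, Dtau = 1.0, 0.3, 0.5

at_top = image_sum(H, z_s, H, Dtau)      # should vanish
at_bottom = image_sum(-H, z_s, H, Dtau)  # should vanish
inside = image_sum(0.0, z_s, H, Dtau)    # generically non-zero
```

At $z=\pm H$ the images pair up ($n$ with $1-n$, and $n$ with $-1-n$, respectively) with opposite signs and identical Gaussian arguments, so the truncated sum vanishes up to an exponentially small remainder from the unpaired boundary term.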
It is important to realize that the Green function formalism outlined here allows us to take into account a completely arbitrary spatial distribution of the sources in the Galaxy, and also an arbitrary temporal evolution of the injection of cosmic rays from each source.
Two instances related to the specific case of SNRs as sources of CRs may clarify the importance of this point. SNRs produce CRs in a rather complicated and not entirely clear way: while the spectrum of CRs accelerated at the external shock is well defined and can be calculated using the theory of NLDSA, the spectrum of particles escaping a remnant to become CRs is much more troublesome in that it depends on how and when CRs escape from the acceleration region. Most accelerated particles in a SNR shock are advected downstream of the shock and are trapped in the expanding shell for the whole duration of the Sedov-Taylor phase, thereby losing energy adiabatically. These particles become CRs in the ISM only after the SNR ends its evolution and the shock dies out. On the other hand, during the Sedov-Taylor phase and in presence of magnetic field amplification and damping, the maximum energy of accelerated particles decreases with time. At any given time the particles with the instantaneous maximum energy may escape the system from upstream. The spectrum of these particles at any given time $t$ is close to a delta function $\delta(E-E_{max}(t))$, where $E_{max}(t)$ is the maximum energy reached at that time. The convolution over time of these peaked spectra leads to a power law injection spectrum \cite{Caprioli:2009p145} which is not directly related to the spectrum of particles at the shock. The global injected spectrum from an individual SNR is likely to be the result of the superposition of the spectrum of particles escaping the SNR from upstream and of those that escape after the end of the Sedov-Taylor phase. The whole process lasts $\sim (3-30)\times 10^{4}$ years, with the low energy particles escaping at later times than the higher energy ones. During the ejecta dominated phase that precedes the adiabatic phase, the maximum energy of accelerated particles grows and escape is not effective.
In our calculations we consider two different models of injection: 1) burst injection (all particles from a SNR are injected at the same time with a spectrum $N(E)$); 2) continuous injection in which at any time the injected spectrum is $\propto \delta\left[E-E_{max}(t)\right]$. This second choice should qualitatively reflect the statement that escape of particles from the accelerator takes place in such a way that higher energy particles escape earlier in the evolution, while lower energy particles leave later.
In the first scenario, the Green function in Eq.~\ref{eq:green} is already the solution that we are seeking. The spectrum injected by an individual SNR can be written in the form
\,\begin{equation}
N_{k}(E) = \frac{(\gamma_{k}-2) \eta_{k}\epsilon_{kin}}{E_{0,k}^{2}} \frac{1}{\left[ 1-\left( \frac{E_{max,k}}{E_{0,k}}\right)^{-\gamma_{k}+2}\right]} \left(\frac{E}{E_{0,k}} \right)^{-\gamma_{k}} \exp\left( - \frac{E}{E_{max,k}}\right),
\label{eq:inj}
\,\end{equation}
where $\eta_{k}$ is the fraction of the kinetic energy of the blast wave, $\epsilon_{kin}$, that goes into accelerated particles of type $k$. The reference energy $E_{0,k}$ is taken to be $1$ GeV for protons, while for heavier nuclei its numerical value is not really important since we simply rescale the injected proton spectrum in order to fit the spectra observed at Earth. In Eq.~\ref{eq:inj}, $E_{max,k}$ is the maximum energy of particles of type $k$. In the expression above and in what follows we always assume that the injection spectrum is steeper than $E^{-2}$, since flatter spectra would result in unreasonable choices of the diffusion coefficient, namely in exceedingly large anisotropy and even in the breaking of the regime of diffusive propagation (see below).
The second model of injection that we consider is introduced, as we already mentioned, in order to take into account, at least in a qualitative way, the fact that escape of particles from the accelerator is expected to occur in a differential way, with higher energy particles escaping first. As discussed in \cite{Caprioli:2009p145,Caprioli:2010p133}, the escape starts at the beginning of the Sedov-Taylor (ST) phase, while during the ejecta dominated (ED) phase of the SNR evolution the particles are confined in the expanding shell, the maximum energy increases with time and the particles lose energy adiabatically. During the ST phase, in the presence of magnetic field amplification, the maximum energy decreases after reaching its maximum value between the end of the ED phase and the beginning of the ST phase.
The ST phase starts when the mass swept up by the supernova shock equals the mass of the ejecta, $M_{ej}$:
\,\begin{equation}
M_{ej} = \frac{4}{3}\pi \rho R_{ST}^{3} \to R_{ST} = \left(\frac{3 M_{ej}}{4\pi \rho} \right)^{1/3}=6.6\times 10^{18}\left(\frac{M_{ej,\odot}}{n_{1}}\right)^{1/3} cm,
\,\end{equation}
where $R_{ST}$ is the radius of the shell at the beginning of the ST phase, $M_{ej,\odot}$ is the mass of the ejecta in units of solar masses and $n_{1}$ is the gas density in the ISM in units of 1 particle per cubic cm. The kinetic energy of the explosion is related to the mass of the ejecta as
\,\begin{equation}
\epsilon_{kin} = \frac{1}{2} M_{ej} u_{ST}^{2}
\,\end{equation}
which leads to
\,\begin{equation}
u_{ST} = 10^{9} \left(\frac{\epsilon_{51}}{M_{ej,\odot}} \right)^{1/2} cm/s,
\label{eq:ust}
\,\end{equation}
where $u_{ST}$ is the velocity of the expanding shell at the beginning of the ST phase. It is interesting to notice that
\,\begin{equation}
\epsilon_{kin} = \frac{1}{2} M_{ej} u_{ST}^{2} = \left(\frac{1}{2}\rho u_{ST}^{3} \right) \left( 4 \pi R_{ST}^{2} \right) T_{ST}\ .
\label{eq:ekin}
\,\end{equation}
This equation, together with Eq.~\ref{eq:ust}, gives an expression for the time $T_{ST}$ at which the ST phase approximately starts:
\,\begin{equation}
T_{ST}=\frac{1}{3}\frac{R_{ST}}{u_{ST}} \approx 70 yr \frac{M_{ej,\odot}^{5/6}}{\epsilon_{51}^{1/2}n_{1}^{1/3}}.
\,\end{equation}
During the ST phase the radius and velocity of the shell change as
\,\begin{equation}
R_{sh}(t) = R_{ST} \left( \frac{t}{T_{ST}}\right)^{2/5} ~~~~~~~ u_{sh}(t) = \frac{6}{5} u_{ST} \left( \frac{t}{T_{ST}}\right)^{-3/5}.
\,\end{equation}
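The reference numbers quoted above are easy to reproduce. The sketch below evaluates $R_{ST}$, $u_{ST}$ and $T_{ST}$ for a fiducial SNR with $M_{ej}=1\,M_{\odot}$, $\epsilon_{kin}=10^{51}$ erg and $n_{1}=1$ cm$^{-3}$ (taking the mean mass per ISM particle as the proton mass, an assumption of this sketch):

```python
import math

M_SUN = 1.989e33   # g
M_P = 1.67e-24     # g, mean mass per ISM particle (assumed)
YEAR = 3.156e7     # s

def sedov_taylor_start(M_ej_sun=1.0, n1=1.0, eps51=1.0):
    """Return (R_ST [cm], u_ST [cm/s], T_ST [yr]) at the start of the
    Sedov-Taylor phase, from the swept-up-mass and energy conditions."""
    M_ej = M_ej_sun * M_SUN
    rho = n1 * M_P
    # Swept-up mass equals ejecta mass: R_ST = (3 M_ej / 4 pi rho)^(1/3)
    R_ST = (3 * M_ej / (4 * math.pi * rho))**(1.0 / 3.0)
    # Kinetic energy condition: u_ST = sqrt(2 eps_kin / M_ej)
    u_ST = math.sqrt(2 * eps51 * 1e51 / M_ej)
    # T_ST = (1/3) R_ST / u_ST
    T_ST = R_ST / (3 * u_ST) / YEAR
    return R_ST, u_ST, T_ST

R_ST, u_ST, T_ST = sedov_taylor_start()
```

With these fiducial values one recovers $R_{ST}\approx 6.6\times 10^{18}$ cm, $u_{ST}\approx 10^{9}$ cm/s and $T_{ST}\approx 70$ yr, as in the text.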
We assume that escape of CRs from the SNR occurs from upstream of the shock, therefore the spectrum of escaping particles at any given time is sharply peaked around $E_{max}(t)$ (we omit here the index $k$ that identifies the type of nucleus) and for simplicity we assume $Q(E,t) = K \delta\left[ E - E_{max}(t)\right]$. In the absence of a fundamental description of the way $E_{max}$ changes in time (it depends on the details of magnetic field amplification and damping) we assume, for $t>T_{ST}$, $E_{max}(t) = E_{M} (t/T_{ST})^{-\alpha}$ ($\alpha>0$), with $E_{M}$ the maximum energy reached at the time $T_{ST}$. The value of $\alpha$ is chosen so that the maximum energy at the time when the SNR dies out, $\tau_{SNR}$, is $E_{max}(\tau_{SNR})=E_{0}$, where we take $E_{0}=1$ GeV (for protons). The normalization constant $K$ is calculated by integrating $Q(E)$ over $E$:
\,\begin{equation}
\int dE Q(E,t) E = \eta(t) \left(\frac{1}{2}\rho u_{sh}^{3} \right) \left( 4 \pi R_{sh}^{2} \right) \to K(t) = \eta(t) \frac{\epsilon_{kin}}{T_{ST}}
\left( \frac{6}{5}\right)^{3} \frac{1}{E_{M}} \left( \frac{t}{T_{ST}} \right)^{\alpha-1}.
\,\end{equation}
For reasons that will be clear in the following, we also assume that $\eta(t)=\eta_{0}(t/T_{ST})^{\beta}$, with $\beta\geq 0$.
It follows that the density of CRs from a SNR at location $\vec r_{s}$ is
$$
n(E,\vec r,t) = \int_{T_{ST}}^{Min[T_{ST}+\tau_{SNR},t-t_s]} dt^{*} \eta(t^{*}) \frac{\epsilon_{kin}}{T_{ST}}
\left( \frac{6}{5}\right)^{3} \frac{1}{E_{M}} \left( \frac{t^{*}}{T_{ST}} \right)^{\alpha-1} \times
$$
\,\begin{equation}
\delta\left[ E - E_{M}\left( \frac{t^{*}}{T_{ST}} \right)^{-\alpha} \right] {\cal G}_{k}(\vec r, t; \vec r_{s},t^{*}) .
\label{eq:sol}
\,\end{equation}
The variable $\tau$ in Eq. \ref{eq:green} is clearly $\tau=t-t^{*}-t_{s}$, and the $\delta$-function in Eq. \ref{eq:sol} selects the time $t^{*}=T_{ST}(E_{M}/E)^{1/\alpha}$. One can then write:
$$
n(E,\vec r,t) = \eta_{0} ~\epsilon_{kin} \left( \frac{6}{5}\right)^{3} \frac{1}{\alpha E_{M}^{2}}
\left( \frac{E_{M}}{E} \right)^{2} \left( \frac{E_{M}}{E} \right)^{\frac{\beta}{\alpha}} \times
$$
$$
\frac{1}{\left[ 4 \pi D_{k}(E) \tau^{*} \right]^{3/2}} \exp\left[ -\Gamma_{k}^{sp}(E) \tau^{*} \right]
\exp\left[ -\frac{(x-x_{s})^{2}+(y-y_{s})^{2}}{4 D_{k} \tau^{*}}\right] \times
$$
\,\begin{equation}
\sum_{n=-\infty}^{+\infty} (-1)^{n} \exp \left[ -\frac{(z-z'_{n})^{2}}{4 D_{k} \tau^{*}}\right],
\label{eq:sol1}
\,\end{equation}
where
$$
\tau^{*}=t-t_{s}-T_{ST}\left( \frac{E_{M}}{E} \right)^{\frac{1}{\alpha}}.
$$
From Eq.~\ref{eq:sol} it is clear that the solution is non-zero only for
$$
1\leq \left( \frac{E_{M}}{E} \right)^{\frac{1}{\alpha}} \leq \rm Min\left[ 1+\frac{\tau_{SNR}}{T_{ST}},\frac{t-t_{s}}{T_{ST}}\right].
$$
The first inequality is satisfied by definition of $E_{M}$. The second inequality reads differently for old and young SNRs. For old SNRs, namely the ones that have completed their evolution before the observation time $t$, the condition reads $E\geq E_{M}\left( 1+\frac{\tau_{SNR}}{T_{ST}} \right)^{-\alpha}$, which is again satisfied by definition for the energies we are interested in, since we required that $E_{M}\left( \frac{\tau_{SNR}}{T_{ST}} \right)^{-\alpha} =E_{0}$. More interesting is the case in which $(t-t_{s})/T_{ST}\leq 1+\tau_{SNR}/T_{ST}$, namely the case of recent SNRs. In this case the spectrum in Eq.~\ref{eq:sol1} is non-zero only for
$$
E>E_{M} \left( \frac{t-t_{s}}{T_{ST}}\right)^{-\alpha}.
$$
For instance, for $\tau_{SNR}=10^{5}$ years, $E_{M}=3\times 10^{6}$ GeV and $T_{ST}\approx 1000$ years, one needs $\alpha\approx 3.2$ in order to satisfy $E_{max}(\tau_{SNR})=E_{0}=1$ GeV. Therefore from a supernova that went off 20,000 years ago, such as Vela, we can only receive particles with energy $E>200$ GeV. It is worth noticing that this fact has nothing to do with the propagation time, which further limits the minimum energy of the particles that can be detected at Earth. The limit discussed here is due instead to the fact that low energy particles cannot leave the source until relatively long times after the beginning of the ST phase. This effect is unlikely to be important for the total spectrum of CRs observed at Earth, which is dominated by the superposition of numerous distant SNRs, but it can potentially be important for the anisotropy signal, as we discuss in Paper II.
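The Vela estimate above follows directly from the assumed scaling $E_{max}(t) = E_{M}(t/T_{ST})^{-\alpha}$, as the following sketch shows (all parameter values are the illustrative ones quoted in the text):

```python
import math

# Parameters quoted in the text (illustrative values)
E_M, E_0 = 3.0e6, 1.0        # GeV: E_max at T_ST and at the SNR death
T_ST, tau_snr = 1.0e3, 1.0e5  # yr
age_vela = 2.0e4              # yr, approximate age of a Vela-like SNR

# E_max(tau_snr) = E_0 fixes the decay index alpha:
# E_M (tau_snr / T_ST)^(-alpha) = E_0  =>  alpha = ln(E_M/E_0) / ln(tau_snr/T_ST)
alpha = math.log(E_M / E_0) / math.log(tau_snr / T_ST)

# Minimum energy of particles already released by a SNR of a given age:
# only E > E_M (age / T_ST)^(-alpha) has escaped so far.
E_min = E_M * (age_vela / T_ST)**(-alpha)
```

One finds $\alpha\approx 3.2$ and $E_{min}\approx 200$ GeV, reproducing the numbers quoted for Vela.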
One last point worth noticing about Eq.~\ref{eq:sol1} is related to the injection spectrum that can be obtained in the approach illustrated above: one can see that if the acceleration efficiency is constant in time ($\beta=0$) the spectrum is always $\propto E^{-2}$ and is totally independent of the spectrum of particles accelerated at the shock. The power law shape of the spectrum is uniquely related to the evolution in time of the SNR and not to the acceleration process at work at the shock. This issue, discussed at length in \cite{Caprioli:2010p133}, is hard to evade and in fact all physical effects that can be added to this approximate picture almost invariably lead to spectra even flatter than $E^{-2}$. Therefore a crucial assumption that is required in order to have injected spectra steeper than $E^{-2}$ is that $\beta>0$. Physically, this implies that the acceleration efficiency is required to increase with time while the SNR gets older. At present it has not been possible to find any physical justification for this requirement.
\section{Simple benchmark results on spectra}\label{sec:simple1}
In this section we provide a benchmark calculation of the spectrum of nuclei expected from discrete sources (SNRs) in the disc of the Galaxy. The full calculation, taking into account the spatial and temporal distribution of SNRs in the Galaxy, will be illustrated in \S~\ref{sec:results}. Here we simply present an analytical calculation that shows the main relevant scaling relations. For this purpose we consider the disc as infinitely thin in the $z$ direction (the sources are all located in a plane with $z_s=0$), and having a radius $R_{d}$. The flux of CRs is calculated at the center of the disc; in a homogeneous model such as this one, this does not induce a large error. The spectrum of protons (no spallation) for bursting sources exploding with a rate $\,\mathrm{{\cal R}}$ is easily written as
\,\begin{equation}
n_{CR}(E) = \int_{0}^{\infty} d\tau \int_{0}^{R_{d}} dr \frac{2\pi r}{\pi R_{d}^{2}} \frac{N(E)\,\mathrm{{\cal R}}}{\left[4\pi D(E) \tau \right]^{3/2}}\exp \left[ -\frac{r^{2}}{4 D(E) \tau} \right]
\sum_{n=-\infty}^{+\infty} (-1)^{n} \exp \left[ -\frac{(2 n H)^{2}}{4 D(E) \tau} \right].
\label{eq:ncr1}
\,\end{equation}
Carrying out the integration on $\tau$ first and then on $r$, one easily obtains
\,\begin{equation}
n_{CR}(E) = \frac{N(E) \,\mathrm{{\cal R}}}{2 \pi D(E) R_{d}} \sum_{n=-\infty}^{+\infty} (-1)^{n} \left[ \sqrt{1+\left( \frac{2nH}{R_{d}}\right)^{2}} -
\sqrt{\left( \frac{2nH}{R_{d}}\right)^{2}} \right].
\,\end{equation}
One can check that the sum over $n$ equals $\sim H/R_{d}$ for $H\ll R_{d}$ so that
\,\begin{equation}
n_{CR}(E) = \frac{N(E) \,\mathrm{{\cal R}}}{2 \pi R_{d}^{2}} \frac{H}{D(E)} \equiv \frac{N(E) \,\mathrm{{\cal R}}}{2 H \pi R_{d}^{2}} \frac{H^{2}}{D(E)} .
\label{eq:ncr}
\,\end{equation}
The first expression clarifies that the flux of CRs scales with $H/D(E)$, a result that remains valid even in more complex versions of the calculation and in fact found also in propagation codes such as GALPROP \cite{GALPROP2011} and DRAGON \cite{dragon2010}. The second expression in Eq.~\ref{eq:ncr} clarifies that the observed density of CRs is simply equal to the total injection rate $N(E)\,\mathrm{{\cal R}}$ divided by the volume of the Galaxy $2H \pi R_{d}^{2}$, and multiplied by the escape time $H^{2}/D(E)$.
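The statement that the image sum reduces to $\sim H/R_{d}$ for $H\ll R_{d}$ can be verified numerically. The sketch below evaluates the alternating sum with a large truncation, averaging the last two partial sums to accelerate the slow alternating-series convergence (the value $H/R_{d}=0.05$ is an arbitrary illustrative choice):

```python
import math

def disc_sum(H_over_Rd, N=200000):
    """sum_n (-1)^n [ sqrt(1 + (2 n h)^2) - |2 n h| ], h = H/R_d.
    The series is alternating and slowly convergent, so the last two
    partial sums are averaged to suppress the truncation oscillation."""
    h = H_over_Rd

    def term(n):
        x = 2 * abs(n) * h
        return (-1)**n * (math.sqrt(1 + x * x) - x)

    # symmetric sum over n = -N+1 .. N-1, folded onto n >= 0
    s = term(0) + 2 * sum(term(n) for n in range(1, N))
    s_next = s + 2 * term(N)
    return 0.5 * (s + s_next)

h = 0.05          # H / R_d << 1
val = disc_sum(h) # expected to be close to h
```

The numerical value agrees with $H/R_{d}$ to well within a few percent for $H/R_{d}\ll 1$, confirming the leaky-box scaling $n_{CR}\propto H/D(E)$.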
It is immediately apparent that the diffusion coefficient plays a crucial role in this type of calculations. Since in this paper we do not calculate the $B/C$ ratio or the abundance of radioactive isotopes, we choose to borrow the normalization of $D(E)$ from the results of propagation codes such as GALPROP and DRAGON. They agree on the conclusion that, if the diffusion coefficient is chosen to be in the form
\,\begin{equation}
D(E) = 10^{28} D_{28} \left( \frac{R}{3 GV}\right)^{\delta} \rm cm^{2} s^{-1}
\label{eq:diff}
\,\end{equation}
for rigidity $R>3$ GV, the secondary to primary ratios and the abundances of unstable isotopes can be best fit by choosing $D_{28}/H_{kpc}= 1.33$ for $\delta=1/3$ and $D_{28}/H_{kpc}= 0.55$ for $\delta=0.6$, where $H_{kpc}$ is the height of the halo in units of kpc.
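For reference, Eq.~\ref{eq:diff} with these two normalizations can be encoded in a few lines (a sketch of ours; the printed escape time assumes $H=4$ kpc):

```python
import math

kpc, yr = 3.086e21, 3.156e7   # cm, s

def D_cm2_s(R_GV, delta=1/3, H_kpc=4.0):
    # Eq. (diff) with D28/H_kpc = 1.33 for delta = 1/3 and 0.55 for delta = 0.6
    D28_over_H = 1.33 if abs(delta - 1/3) < 1e-6 else 0.55
    return 1e28 * D28_over_H * H_kpc * (R_GV / 3.0) ** delta

# escape time H^2/D at 3 GV for H = 4 kpc: of order 1e8 yr
print((4.0 * kpc) ** 2 / D_cm2_s(3.0, 1/3, 4.0) / yr)
```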
Let us now discuss the case of nuclei, which is somewhat more interesting due to the effect of spallation. The density of CR nuclei of type $k$ in our simple model can be written as
$$
n_{k}(E) = \int_{0}^{\infty} d\tau \int_{0}^{R_{d}} dr \frac{2\pi r}{\pi R_{d}^{2}} \frac{N_{k}(E)\,\mathrm{{\cal R}}}{\left[4\pi D_{k}(E) \tau \right]^{3/2}}\exp\left[-\frac{\tau}{\tau_{sp,k}}\right]\times
$$
\,\begin{equation}
\exp \left[ -\frac{r^{2}}{4 D_{k}(E) \tau} \right]\sum_{n=-\infty}^{+\infty} (-1)^{n} \exp \left[ -\frac{(2 n H)^{2}}{4 D_{k}(E) \tau} \right],
\,\end{equation}
where we defined $\tau_{sp,k}=1/\Gamma_{sp,k}$. The integrals over $\tau$ and $r$ are both carried out analytically in this order, leading to:
$$
n_{k}(E) = \frac{N_{k}(E) \,\mathrm{{\cal R}}}{2 \pi D_{k}(E) R_{d}^{2}} \sqrt{D_{k}(E)\tau_{sp,k}} \times
$$
\,\begin{equation}
\sum_{n=-\infty}^{+\infty} (-1)^{n}\left\{
\exp\left[ -\left( 4 n^{2} \frac{\tau_{esc,k}}{\tau_{sp,k}} \right)^{1/2}\right]-
\exp\left[ - \left(\frac{\tau_{esc,k}}{\tau_{sp,k}}\right)^{1/2} \left( 4 n^{2} + \frac{R_{d}^{2}}{H^{2}}\right)^{1/2}\right]
\right\},
\label{eq:nk}
\,\end{equation}
where $\tau_{esc,k}(E) = H^{2}/D_{k}(E)$ is the escape time of nuclei of type $k$. The sum in Eq.~\ref{eq:nk} tends to $\sqrt{\tau_{esc,k}/\tau_{sp,k}}$ for $\tau_{esc,k}/\tau_{sp,k}\ll 1$ and to 1 for $\tau_{esc,k}/\tau_{sp,k}\gg 1$. It follows that when spallation is negligible
\,\begin{equation}
n_{k}(E) = \frac{N_{k}(E) \,\mathrm{{\cal R}}}{2 \pi R_{d}^{2}} \frac{H}{D_{k}(E)}, ~~~~~ \frac{\tau_{esc,k}}{\tau_{sp,k}} \ll 1
\label{eq:high}
\,\end{equation}
which is the same solution as for protons (Eq.~\ref{eq:ncr}), showing an energy dependence $n_{k}(E) \propto E^{-\gamma-\delta}$. In the opposite limit (spallation dominant over escape) one has to be careful in defining the effective gas density experienced by the propagating CRs, which now becomes energy dependent and is given by
\,\begin{equation}
n_{gas}(E)=n_{disc}\frac{h}{\sqrt{D(E)\tau_{sp}}}=\frac{n_{disc}^2 h^2 c \sigma_{sp}}{D(E)}\ .
\label{eq:spdens}
\,\end{equation}
For the CR spectrum one obtains:
\,\begin{equation}
n_{k}(E) = \frac{N_{k}(E) \,\mathrm{{\cal R}}}{2 H \pi R_{d}^{2}} \sqrt{\tau_{sp,k}\tau_{esc,k}}~, ~~~~~ \frac{\tau_{esc,k}}{\tau_{sp,k}} \gg 1,
\label{eq:low}
\,\end{equation}
which leads to $n_{k}(E)\propto E^{-\gamma}$: with $n_{gas}$ given by Eq.~\ref{eq:spdens}, $\tau_{sp,k}\propto D(E)$, so that the dependence of the spectrum on the diffusion coefficient cancels out. This implies that at the transition between the two regimes one expects a progressive steepening of the spectrum of nuclei, with a change of slope that asymptotically tends to $\delta$ in the spallation dominated regime. This is particularly relevant for heavy nuclei, for which the timescales for spallation and escape become comparable in the TeV range. However, the transition region between the two regimes is rather broad, and in general a low energy hardening of the nuclear spectra (with respect to the naive $E^{-\gamma-\delta}$) may be expected even if spallation is not dominant over escape. In the case of He, with our choice of parameters ($n_{disc}=3~\rm cm^{-3}$), $\tau_{sp}=\tau_{esc}$ occurs at $E_{crit}\approx 100$ GeV/nucleon, but the spectrum is noticeably flatter than that of H up to around the maximum energy.
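The two limits of the sum in Eq.~\ref{eq:nk} can be verified numerically. The sketch below (ours) uses a deliberately large $R_{d}/H$, not the Galactic value, so that the asymptotics are clean:

```python
import numpy as np

def nk_sum(r, Rd_over_H=100.0, nmax=5000):
    # alternating sum in Eq. (nk), with r = tau_esc / tau_sp
    n = np.arange(-nmax, nmax + 1, dtype=float)
    sign = np.where(n % 2 == 0, 1.0, -1.0)
    t1 = np.exp(-np.sqrt(4.0 * n**2 * r))
    t2 = np.exp(-np.sqrt(r) * np.sqrt(4.0 * n**2 + Rd_over_H**2))
    return float(np.sum(sign * (t1 - t2)))

print(nk_sum(0.01))    # ≈ sqrt(r) = 0.1  (escape faster than spallation)
print(nk_sum(100.0))   # ≈ 1              (spallation-dominated)
```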
Notice that the disk-like structure of the spatial distribution of sources changes the shape of the spectrum of nuclei compared with the simple leaky-box model predictions used in Ref.~\cite{Horandel:2007p877}.
\subsection{Contribution of a nearby source}\label{sec:nearby}
Here we discuss the relative contribution of an individual source with respect to the diffuse CR spectrum in the Galaxy. For simplicity we limit ourselves to the case of protons, while the generalization to nuclei of arbitrary charge $Z$ is straightforward. The spectrum of the source is assumed to be $\lambda_{N} N(E)$, where $N(E)$ is the average spectrum injected by SNRs, appearing in Eq.~\ref{eq:ncr}. The CR density contributed by the (bursting) source is
\,\begin{equation}
n_{CR}^{s}(E) \approx \frac{\lambda_{N} N(E)}{\left( 4\pi \lambda_{D} D(E) \tau \right)^{3/2}},
\,\end{equation}
where the coefficients $\lambda_{N}$ and $\lambda_{D}$ have been introduced to allow for the possibility that the luminosity of the source be somewhat different from the mean luminosity of the CR sources, and that the local diffusion coefficient may be different from that averaged over the entire volume of the Galaxy.
This density needs to be compared with the diffuse density as given by Eq.~\ref{eq:ncr}. The condition that the flux from the source dominates over the diffuse CR flux leads to the requirement:
\,\begin{equation}
\frac{\lambda_{N}}{\left( 4\pi \lambda_{D} D(E) \tau \right)^{3/2}} \geq \frac{\,\mathrm{{\cal R}}}{2\pi R_{d}^{2}}\frac{H}{D(E)}.
\,\end{equation}
With this prescription, the condition that a local source dominates the CR flux reads
\,\begin{equation}
\tau \leq 3\times 10^{4} H_{kpc}^{-1} \lambda_{N}^{2/3} \lambda_{D}^{-1} \left( \frac{R}{3 GV}\right)^{-\delta/3} ~ \rm years\ ,
\,\end{equation}
where we assumed $\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$ and $R_d=15$ kpc.
This result holds within a distance
\,\begin{equation}
r \leq \sqrt{4 \lambda_{D} D(E) \tau} \approx 70 \lambda_{N}^{1/3} \left( \frac{R}{3 GV}\right)^{\delta/3} \rm pc
\,\end{equation}
from the source. Within this distance the density of CRs stays roughly constant.
The two $\lambda$ parameters have been introduced here with a well defined purpose: the luminosity in the form of CRs of a nearby source may be somewhat different from the average one, so that $\lambda_{N}$ allows us to consider the possibility that the local source is less powerful ($\lambda_{N}<1$) or more powerful ($\lambda_{N}>1$) than average. For simplicity we keep the shape of the accelerated particle spectrum fixed and allow only the normalization to change.
The meaning of $\lambda_{D}$ is more interesting: the diffusion of CRs in the source vicinity can be modified by the propagating particles themselves as a result of the self-generation of turbulence. This effect lowers the diffusion coefficient with respect to the Galactic average, so that $\lambda_{D}<1$ (again we assume for simplicity that only the absolute normalization of $D(E)$ changes, while the energy dependence is retained). This effect is all the more plausible since, as discussed above, the number density of CRs within a distance $r$ from the source is larger than the Galactic average for a time $\tau$, so that self-generation of waves during this time may reduce the diffusivity of the particles trying to leave the source. For instance, if $\lambda_{D}\sim 0.1$ the source contribution dominates over the diffuse CR flux for $\sim 100,000$ years (for $H=3$ kpc). Since the energy dependence of $\tau$ is very weak, this excess is present for a wide range of particle rigidities. The distance $r$ is obviously independent of the value of $\lambda_{D}$.
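The numerical coefficients quoted above can be cross-checked with a few lines of code (a sketch of ours, with $D_{28}/H_{kpc}=1$ assumed; the case $\lambda_{D}=0.1$, $H=3$ kpc reproduces the $\sim 10^{5}$ yr dominance time):

```python
import math

kpc, pc, yr = 3.086e21, 3.086e18, 3.156e7   # cm, cm, s

def dominance_time_yr(lam_N=1.0, lam_D=1.0, H_kpc=1.0, R_GV=3.0, delta=1/3,
                      rate_per_yr=1.0/30, Rd_kpc=15.0):
    # solve lam_N / (4 pi lam_D D tau)^{3/2} = (rate H) / (2 pi Rd^2 D) for tau
    D = 1e28 * H_kpc * (R_GV / 3.0) ** delta          # D28/H_kpc = 1 assumed
    H, Rd, rate = H_kpc * kpc, Rd_kpc * kpc, rate_per_yr / yr
    rhs = lam_N * 2.0 * math.pi * Rd**2 * D / (rate * H)
    return rhs ** (2.0 / 3.0) / (4.0 * math.pi * lam_D * D) / yr

def dominance_radius_pc(tau_yr, lam_D=1.0, H_kpc=1.0, R_GV=3.0, delta=1/3):
    D = 1e28 * H_kpc * (R_GV / 3.0) ** delta
    return math.sqrt(4.0 * lam_D * D * tau_yr * yr) / pc

tau = dominance_time_yr()                        # ~3e4 yr for lam_N = lam_D = 1
print(tau, dominance_radius_pc(tau))             # radius ~60-70 pc
print(dominance_time_yr(lam_D=0.1, H_kpc=3.0))   # ~1e5 yr
```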
\section{Estimate of the fluctuations in the spectrum}\label{sec:simple2}
One of the goals of the present paper is to discuss the implications of diffusive CR propagation from discrete sources for the spectrum measured at Earth. In this section we discuss some basic concepts which help clarify the role played by fluctuations in the source distribution in shaping the spectrum. In order to quantify these effects we follow the formalism of Ref.~\cite{Lee:1979p1621}.
Based on this formalism, also adopted by \cite{Ptuskin:2006p620}, given $\,\mathrm{{\cal N}}$ independent sources, each producing $N_{0}$ particles, the average density they produce at location $\vec r$ and time $t$ is:
\,\begin{equation}
\langle n_{CR} (\vec r,t) \rangle = N_{0} \,\mathrm{{\cal N}} \int d^{3}\vec r' dt' P(\vec r',t') \,\mathrm{{\cal G}} (\vec r,t;\vec r',t'),
\,\end{equation}
where $P(\vec r', t')$ is the probability of having a source with space-time coordinates $(\vec r', t')$ and $\,\mathrm{{\cal G}}(\vec r, t; \vec r', t')$ is the Green function for transport of particles from $(\vec r', t')$ to $(\vec r, t)$.
For the fluctuations in the measured spectrum one finds \cite{Lee:1979p1621}:
$$
\langle \delta n_{CR}(\vec r,t) \delta n_{CR} (\vec r',t') \rangle =
$$
\,\begin{equation}
= N_{0}^{2} \,\mathrm{{\cal N}} \int d^{3}\vec r'' dt'' P(\vec r'',t'') \,\mathrm{{\cal G}} (\vec r,t;\vec r'',t'') \,\mathrm{{\cal G}} (\vec r',t';\vec r'',t'') -
\frac{1}{\,\mathrm{{\cal N}}} \langle n_{CR}(\vec r,t)\rangle \langle n_{CR}(\vec r',t')\rangle.
\,\end{equation}
In our case, the probability distribution, as already used in Eq.~\ref{eq:ncr1}, is
\,\begin{equation}
P(\vec r,t) = \frac{\,\mathrm{{\cal R}}}{\pi R_{d}^{2}}\ ,
\,\end{equation}
where, by definition, $\,\mathrm{{\cal R}}=\,\mathrm{{\cal N}}/\Delta t$, with $\Delta t\to\infty$ the time over which the averages are calculated. For simplicity, we carry out the calculation for the simple case in which the sources are uniformly distributed in an infinitely thin disc with radius $R_{d}$, and assume for our estimates that the spectrum and its fluctuations are measured at the center of the disc, $r=0$. Moreover, we limit our simple estimate to protons, hence ignoring spallation terms in the Green function. With these assumptions, the integral over the spatial coordinates reduces to an integral over cylindrical radius. The double integral over space and time diverges unless a lower bound is imposed on one of the two coordinates, either a minimum distance from the closest source, $R_{min}$, or a minimum time for receiving its contribution, $T_{min}$. The two are obviously correlated, with $R_{min}\sim\sqrt{4 D T_{min}}$, and we choose to impose a $T_{min}$ (see \cite{mertsch2011} for a more extended discussion of this effect). The correlation function evaluated at the same spatial location is then:
\,\begin{equation}
\langle \delta n_{CR}(\vec r,t) \delta n_{CR} (\vec r,t) \rangle \approx \frac{\,\mathrm{{\cal R}} N_{0}^{2}}{32 \pi^{3} R_{d}^{2} D(E)^{2} T_{min}},
\,\end{equation}
where we assumed that $H^{2}/(D(E)T_{min})\gg 1$. The strength of the fluctuations is therefore:
\,\begin{equation}
\delta_{spec} (E) = \frac{\langle \delta n_{CR}\delta n_{CR}\rangle^{1/2}}{n_{CR}(E)} = \frac{1}{2^{3/2} \pi \left( \frac{\,\mathrm{{\cal R}}}{\pi R_{d}^{2}}\right)^{1/2} H T_{min}^{1/2}}.
\,\end{equation}
This result shows the strong dependence of $\delta_{spec}$ (both in normalization and in energy dependence) on the cutoff time $T_{min}$, which we imposed in order to avoid the divergence of the integral over $\tau$. One can see that if $T_{min}$ is naively chosen as a fixed number, then $\delta_{spec}$ turns out to be independent of energy. On the other hand, from the physical point of view, the time $T_{min}$ should carry information about the closest, most recent sources around the observer. For instance, $T_{min}$ could be interpreted as the typical time within which one source explodes close enough to Earth that its CRs reach us within $T_{min}$. This condition is expressed as
\,\begin{equation}
\frac{\,\mathrm{{\cal R}} T_{min}}{\pi R_{d}^{2}} \left[4 \pi D(E)T_{min}\right] = 1 \to T_{min} = \left[\frac{4 \,\mathrm{{\cal R}} D(E)}{R_{d}^{2}}\right]^{-1/2}.
\,\end{equation}
With this prescription one obtains \cite{Ptuskin:2006p620}:
\,\begin{equation}
\delta_{spec} (E) = \frac{D(E)^{1/4}}{2 \pi^{3/4} H \left( \frac{\,\mathrm{{\cal R}}}{\pi R_{d}^{2}}\right)^{1/4}}\propto E^{\frac{\delta}{4}} .
\,\end{equation}
Since the diffusion coefficient is chosen as in Eq.~\ref{eq:diff}, the expression above can be rewritten as:
\,\begin{equation}
\delta_{spec} (E) \approx 0.03 H_{kpc}^{-3/4} \,\mathrm{{\cal R}}_{30}^{-1/4}R_{d,15kpc}^{1/2} \left( \frac{E}{3 GeV} \right)^{\delta/4},
\label{eq:deltasp}
\,\end{equation}
where we have used $D_{28}/H_{kpc}\sim 1$ (the exact value being irrelevant since the dependence of $\delta_{spec}$ on this ratio is very weak).
The energy dependence of the spectral fluctuations is very weak for the standard values of $\delta$. At energies $E\sim 10^{5}$ GeV one can expect fluctuations at the level of $\sim 5\%$ for $\delta=1/3$ and at the level of $10\%$ for $\delta=0.6$. In practice, as we show below, the fluctuations may be somewhat larger because of the combined effect of all nuclei and the occasional dominance of some local source contributing most of the CR flux at Earth for some time (see discussion in \S \ref{sec:nearby}). This latter scenario is most likely to happen for $\delta\sim 0.6$.
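As a sanity check of Eq.~\ref{eq:deltasp}, the following sketch (ours) evaluates $\delta_{spec}$ directly from its definition; with $H\simeq 2$ kpc it recovers both the 0.03 coefficient and the few-percent fluctuation levels at $10^{5}$ GeV discussed above:

```python
import math

kpc, yr = 3.086e21, 3.156e7   # cm, s

def delta_spec(E_GeV, delta, H_kpc, rate_per_yr=1.0/30, Rd_kpc=15.0):
    # delta_spec = D^{1/4} sqrt(Rd) / (2 sqrt(pi) H rate^{1/4}), with
    # D28/H_kpc = 1 (the text notes the exact value matters little here)
    D = 1e28 * H_kpc * (E_GeV / 3.0) ** delta
    H, Rd, rate = H_kpc * kpc, Rd_kpc * kpc, rate_per_yr / yr
    return D**0.25 * math.sqrt(Rd) / (2.0 * math.sqrt(math.pi) * H * rate**0.25)

print(delta_spec(3.0, 1/3, 1.0))   # ~0.03: the coefficient in Eq. (deltasp)
print(delta_spec(1e5, 1/3, 2.0))   # ~5% for delta = 1/3
print(delta_spec(1e5, 0.6, 2.0))   # ~10% for delta = 0.6
```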
\section{Results for realistic distributions of SNRs}\label{sec:results}
In this section we describe our more realistic calculations of spectra and chemical composition, taking into account the discreteness in space and time of sources in the Galaxy. First of all let us describe our model of CR diffusion in the Galaxy. We assume that the diffusion region has a height $H$ above and below the disc, with the latter assumed to have half-width $h$. In the radial direction the diffusion region is not bounded. As we discuss below, this assumption is not a serious limitation thanks to the form of the SNR spatial distribution. The diffusion coefficient in the whole region is assumed to be spatially constant (the same assumption as in propagation codes such as GALPROP) and to depend on rigidity alone, as in Eq.~\ref{eq:diff}. As suggested by the results of GALPROP, as well as of other propagation codes, a good fit to the whole set of available CR data is obtained by taking $D_{28}/H_{kpc}\simeq 1$ (more precisely $D_{28}/H_{kpc}=1.33$ for $\delta=1/3$ and $D_{28}/H_{kpc}=0.55$ for $\delta=0.6$), therefore we adopt this relative normalization of diffusion coefficient and size of the halo.
We adopt two different models of source distributions. The first one (cylindrical model hereafter) simply accounts for the average radial and azimuthal distribution of sources in the Galaxy, as discussed for instance in \cite{Case:1996p1635}. The other model is instead an attempt at taking into account the spiral distribution of sources (hereafter spiral model), for which we adopt the formalism of \cite{FaucherGiguere:2006p1609}.
The sources are assumed to be distributed in the Galaxy according to the following function \cite{Case:1996p1635}:
\,\begin{equation}
f(r)=\frac{A}{R_{\odot}^{2}}\left( \frac{r}{R_{\odot}}\right)^{2} \exp \left[ -\beta\frac{r-R_{\odot}}{R_{\odot}}\right],
\label{eq:radial}
\,\end{equation}
where $\beta=3.53$ for $R_{\odot}=8.5$ kpc.
The constant $A$ is determined from the normalization condition:
\,\begin{equation}
\int_{0}^{\infty} dr~2\pi r f(r) = 1 \to A=\frac{\beta^{4}\exp(-\beta)}{12 \pi}.
\,\end{equation}
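The value of $A$ can be cross-checked by direct numerical integration (a quick sketch of ours, in units of $R_{\odot}$):

```python
import math

beta = 3.53
A = beta**4 * math.exp(-beta) / (12.0 * math.pi)

# midpoint rule for the normalization integral, with x = r/R_sun:
# integral_0^infty 2 pi A x^3 exp(-beta (x - 1)) dx  should equal 1
N, xmax = 200_000, 40.0
dx = xmax / N
total = sum(2.0 * math.pi * A * ((i + 0.5) * dx) ** 3
            * math.exp(-beta * ((i + 0.5) * dx - 1.0)) * dx
            for i in range(N))
print(total)   # ≈ 1
```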
The SNR distribution in the $z$ direction is assumed to be
\,\begin{equation}
f(z) = \frac{A_{z}}{z_g}\exp\left[ -\frac{|z|}{z_g}\right],
\,\end{equation}
where $A_{z}=1/2$ is again derived from the normalization condition $\int_{-\infty}^{+\infty} dz~f(z)=1$.
In the cylindrical model, the positions of the sources are chosen by drawing values of $r$ and $z$ at random from the distributions above. The $x$ and $y$ coordinates are obtained by generating a random angle $0\leq \phi \leq 2\pi$ at the given value of $r$. Supernovae are generated at a rate $\,\mathrm{{\cal R}}=(30-100~ \rm yr)^{-1}$, and the generation of new sources is continued until the time span covered is much longer than $H^{2}/D(E)$, in order to make sure that the stationary solution has been reached at the lowest rigidities of interest for us ($\sim 1$ TV).
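In practice the radial draw does not require rejection sampling: since $2\pi r f(r)\propto r^{3}e^{-\beta r/R_{\odot}}$, the radius follows a Gamma distribution with shape 4 and scale $R_{\odot}/\beta$. A minimal sketch of the sampling (ours; the scale height $z_{g}=0.1$ kpc is an assumed value, since the text leaves $z_g$ unspecified):

```python
import numpy as np

def sample_snr_positions(n, beta=3.53, Rsun_kpc=8.5, zg_kpc=0.1, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.gamma(4.0, Rsun_kpc / beta, n)    # kpc; pdf ∝ r^3 exp(-beta r/Rsun)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.exponential(zg_kpc, n) * rng.choice([-1.0, 1.0], n)
    return r * np.cos(phi), r * np.sin(phi), z

x, y, z = sample_snr_positions(200_000)
print(np.hypot(x, y).mean())   # mean radius = 4 Rsun/beta ≈ 9.6 kpc
```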
In the {\it spiral model}, the procedure we adopt is similar to that introduced in \cite{FaucherGiguere:2006p1609}. The generation of the position in the $z$ direction is the same as above, therefore we will not discuss it any further. A radial coordinate $\tilde r$ is drawn at random from the average distribution in Eq.~\ref{eq:radial}; then a random integer between 1 and 4 is chosen with a flat distribution. This number identifies the arm in which the supernova is located (Norma, Carina-Sagittarius, Perseus, Crux-Scutum, as in Table~\ref{table1}). An angular position along the arm is then associated to the SNR, following the relation:
\,\begin{equation}
\theta(r) = K \log \left(\frac{r}{r_{0}}\right) + \theta_{0}.
\,\end{equation}
The parameters $K$, $r_{0}$ and $\theta_{0}$ are reported in Table~\ref{table1} (notice that the values of $\theta_{0}$ differ from those in Table 2 of \cite{FaucherGiguere:2006p1609} simply because the axes are rotated by $\pi/2$ with respect to their choice). The Sun is located at $(x,y,z)=(8.5~{\rm kpc},0,0)$.
\begin{table}
\centering
\caption{\label{table1} Parameters of Galactic arms.}
\vspace{1ex}
\begin{tabular}{c|*{3}{|@{\hspace{1em}}c@{\hspace{1em}}}}
\hline \hline
\rule{0pt}{4ex}
arm number/name & $K(rad)$ & $r_{0}(kpc)$ & $\theta_{0}(rad)$ \\
\hline \hline
\rule[-2ex]{0pt}{6ex}
1: Norma & 4.25 & 3.48 & 0 \\\hline
\rule[-2ex]{0pt}{6ex}
2: Carina - Sagittarius & 4.25 & 3.48 & $\pi$ \\\hline
\rule[-2ex]{0pt}{6ex}
3: Perseus & 4.89 & 4.90 & 2.52 \\\hline
\rule[-2ex]{0pt}{6ex}
4: Crux - Scutum & 4.89 & 4.90 & -0.62 \\\hline
\hline \hline
\end{tabular}
\end{table}
Following the prescription of Ref. \cite{FaucherGiguere:2006p1609}, we blur the angle $\theta(r)$ by $\theta_{corr}\exp(-0.35 \tilde r/kpc)$, where $\theta_{corr}$ is chosen from a flat random distribution between 0 and $2\pi$. Similarly the radial coordinate is also blurred by choosing a {\it new} value from a normal random distribution with mean $\tilde r$ and variance $0.07\tilde r$. A pictorial illustration of the two scenarios is shown in Fig. \ref{fig:space} where we show the distribution of $\sim 30,000$ SNRs in the cylindrical model (left) and the spiral model (right). The position of the Sun is illustrated by the thick (red) symbol.
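Putting the pieces together, the spiral sampling can be sketched as follows (ours; we interpret the quoted $0.07\tilde r$ as the standard deviation of the radial blur):

```python
import numpy as np

# arm parameters (K, r0 [kpc], theta0) from Table 1
ARMS = np.array([[4.25, 3.48, 0.0],       # Norma
                 [4.25, 3.48, np.pi],     # Carina-Sagittarius
                 [4.89, 4.90, 2.52],      # Perseus
                 [4.89, 4.90, -0.62]])    # Crux-Scutum

def sample_spiral(n, beta=3.53, Rsun_kpc=8.5, seed=1):
    rng = np.random.default_rng(seed)
    r_t = rng.gamma(4.0, Rsun_kpc / beta, n)      # radial draw from Eq. (radial)
    K, r0, th0 = ARMS[rng.integers(0, 4, n)].T    # arm chosen with flat probability
    theta = K * np.log(r_t / r0) + th0
    theta += rng.uniform(0.0, 2.0 * np.pi, n) * np.exp(-0.35 * r_t)  # angular blur
    r = rng.normal(r_t, 0.07 * r_t)               # radial blur
    return r * np.cos(theta), r * np.sin(theta)

x, y = sample_spiral(100_000)
print(np.hypot(x, y).mean())   # mean radius ≈ 9.6 kpc, as in the cylindrical model
```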
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.9in,angle=0]{cyl.ps}
\includegraphics[width=2.9in,angle=0]{spiral.ps}
\caption{A face on view of the spatial distribution of SNRs in the Galaxy in the two models: cylindrical in the left panel and spiral on the right. About $3 \times 10^4$ sources are shown in each panel. Units are in $kpc$ and the position of the Sun is marked by a thick (red) symbol.}
\label{fig:space}
\end{figure}
For each source the spectrum of CRs (protons, He nuclei, CNO nuclei, Mg-Al-Si nuclei and Fe nuclei) at the Earth is calculated using the appropriate Green functions, as described in \S~\ref{sec:green}. For each realization of the source distribution in the Galaxy we also compute the chemical composition of CRs, as derived from the superposition of the fluxes of the different chemicals. The efficiencies of acceleration of nuclei are determined {\it a posteriori} by requiring a fit to the available spectra in the $TeV$ region. The calculations are carried out for different choices of the propagation parameters. We account for spallation as discussed in \S~\ref{sec:green}, with the average gas density in the propagation volume taken as $n_{gas} = n_{disc}(h/H)$. Notice that with this set of prescriptions the grammage traversed by CRs with rigidity $R$ reads
\,\begin{equation}
x(R) = n_{gas} c m_{p} \frac{H^{2}}{D(R)} = n_{disc} h c m_{p} \frac{H}{D(R)} \approx x_{0} \left( \frac{n_{disc}}{1~\rm cm^{-3}}\right) \left( \frac{R}{3~GV}\right)^{-\delta} \rm g~cm^{-2},
\label{eq:grammage}
\,\end{equation}
which only depends on the density of gas in the Galactic disc and not on the spatial extent of the halo, once $H_{kpc}/D_{28}\sim 1$ is used. In our calculations we adopt $n_{disc}=3~cm^{-3}$, which leads to $x_{0}\simeq 9~\rm g~cm^{-2}$ for $\delta=1/3$ and $x_{0}\simeq 4.3 ~\rm g~cm^{-2}$ for $\delta=0.6$. It is worth stressing that although Eq.~\ref{eq:grammage} is normalized at rigidity $3$ GV, all of our calculations refer to much higher energies, in the TeV range.
\subsection{Spectra and chemical composition for the cylindrical model}\label{sec:res_cyl}
In this section we discuss the dependence of the spectra and chemical composition on the diffusion coefficient, on the discreteness of the sources in space and time, and on the size of the halo, for the cylindrical model of the source distribution.
In Fig.~\ref{fig:varyR} we illustrate the all-particle spectrum obtained in 10 realizations of the source distribution in the cylindrical model, using $\delta=1/3$, $H=4$ kpc and a rate of supernova explosions in the Galaxy $\,\mathrm{{\cal R}}=(100~{\rm yr})^{-1}$ (left) and $\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$ (right). In both cases we impose that the slope $\gamma$ of the injection spectrum is related to $\delta$ through $\gamma+\delta=2.67$, where 2.67 turns out to be the slope of the data provided in Ref.~\cite{Horandel:2004p1543} as the {\it average} among all experiments (dots with error bars in Fig.~\ref{fig:varyR}). The red staircase line represents the average spectrum resulting from the 10 random realizations.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.8in,angle=0]{H4delta0.33R100.ps}
\includegraphics[width=2.8in,angle=0]{H4delta0.33R30.ps}
\caption{All-particle CR spectrum for ten random realizations of sources in the cylindrical model, assuming $\delta=1/3$ and a SN rate $\,\mathrm{{\cal R}}=(100~{\rm yr})^{-1}$ on the left ($\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$ on the right). The halo size is $H=4$ kpc. The injection spectrum of all chemicals is assumed to have a slope (below the cutoff) such that $\gamma+\delta=2.67$.}
\label{fig:varyR}
\end{figure}
As expected, the random distribution of SNRs in space and time leads to fluctuations in the all-particle spectrum. One aspect that can be immediately noticed is that the strength of the fluctuations is considerably reduced in the case with the higher SN rate ($\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$), as could be expected based on the simple estimate in Eq.~\ref{eq:deltasp}. Though relatively small, the effect of fluctuations is sufficient to induce a visible wiggle in the global shape of the all-particle spectrum.
All spectra present a pronounced knee which compares well with the data and is the consequence of the rigidity dependent nature of the acceleration in the sources. In order to provide a reasonably good fit to the data one has to assume a maximum energy in the proton spectrum $E_{max}=6\times 10^{6}$ GeV, not far from that obtained in non-linear theories of particle acceleration at SNR shocks \cite{Blasi:2007p144}. This number can however reasonably be smaller in more realistic models since the spectrum of escaping particles in general does not have an exponential cutoff, due to the superposition of advected particles and particles escaping from upstream at all times throughout the history of the remnant \cite{Caprioli:2008p123}. The fact that our model spectrum underpredicts the data at the highest energies is most likely related to the effect of extragalactic cosmic rays, as we discuss below.
The fluctuations are much more pronounced in the case of faster diffusion ($\delta=0.6$), as illustrated in Fig.~\ref{fig:H2delta06R30}, an effect that could be expected based on Eq.~\ref{eq:deltasp}. Independent of the shape of the average spectrum, one can clearly see that the spectra of individual realizations are in general rather wiggly and can fit the observations only in a rather loose sense. The bumpiness of the spectrum is the result of the importance of nearby and recent supernova events. These realizations also lead to exceedingly large anisotropy, as we discuss in Paper 2. Both aspects, however, bumpiness and anisotropy, likely hint at the breakdown of the diffusion approximation for CRs coming from nearby sources.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=4.8in,angle=0]{H4delta0.6R30.ps}
\caption{All-particle CR spectrum for ten random realizations of sources in the cylindrical model, assuming $\delta=0.6$, a SN rate $\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$ and a halo with size $H=4$ kpc. The injection spectrum of all chemicals is assumed to have a slope (below the cutoff) such that $\gamma+\delta=2.67$.}
\label{fig:H2delta06R30}
\end{figure}
Using Eq.~\ref{eq:diff} for the diffusion coefficient and recalling that $D(E)$ can also be written in terms of the scattering length $\lambda(R)$ as $D(R)=(1/3) c \lambda (R)$, one easily obtains
\,\begin{equation}
\lambda(R) \approx 10^{18} H_{kpc} \left( \frac{R}{3 GV}\right)^{\delta} \rm cm,
\,\end{equation}
where for simplicity of presentation we assumed $D_{28}/H_{kpc}=1$, independent of $\delta$.
The diffusion approximation for an individual source is valid only if the source is located at a distance $d\gg \lambda$, but at $R=10^{5}$ GV and with $H=4$ kpc one has $\lambda\approx 380$ pc for $\delta=0.6$ (using the normalization $D_{28}/H_{kpc}=0.55$ appropriate for this value of $\delta$). In other words, at high energy one should be very careful in treating the propagation from nearby sources, in order to avoid incorrect or artificial conclusions about the spectrum and anisotropy.
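These numbers can be checked quickly (a sketch of ours; the 380 pc figure follows when the $\delta=0.6$ normalization $D_{28}/H_{kpc}=0.55$ is used):

```python
pc = 3.086e18   # cm
c = 3.0e10      # cm/s

def mfp_pc(R_GV, delta, H_kpc, D28_over_H=1.0):
    # scattering length lambda = 3 D / c, with D from Eq. (diff)
    D = 1e28 * D28_over_H * H_kpc * (R_GV / 3.0) ** delta
    return 3.0 * D / c / pc

print(mfp_pc(3.0, 0.6, 1.0))                    # ~0.32 pc, i.e. lambda ~ 1e18 cm
print(mfp_pc(1e5, 0.6, 4.0, D28_over_H=0.55))   # ~370 pc
```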
It is interesting to discuss the spectra of individual elements for the different realizations. We illustrate our results for $\delta=1/3$ for four realizations in Fig.~\ref{fig:H2delta033chems}. The different lines refer to protons (solid/orange), He (dashed/blue), CNO (dot-dashed/cyan), MgAlSi (dot-dot-dot-dashed/yellow) and Fe (dotted/green). The red solid line is the all-particle spectrum compared with data.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=4.8in,angle=0]{H4delta0.33g267Rate30chems.ps}
\caption{Spectra of chemicals in the cylindrical model, for $\delta=1/3$, a SN rate $\,\mathrm{{\cal R}}=1/30 yr^{-1}$ and a halo with size $H=4$ kpc. The injection spectrum of all chemicals is assumed to have slope (below the cutoff) such that $\gamma+\delta=2.67$. The different curves are: solid (orange) for protons, dashed (blue) for He, dot-dashed (cyan) for CNO, dot-dot-dot-dashed (yellow) for Mg-Al-Si, and, finally, dotted (green) for Fe. Each of the four different panels shows the results for a different random realization of the sources' distribution.}
\label{fig:H2delta033chems}
\end{figure}
For $\delta=1/3$ the fluctuations are rather small and a good fit to the all-particle spectrum can always be obtained. The most important feature shown by these curves is the flattening of the spectra of chemicals at lower energies, which results from spallation. In particular the spectrum of He appears to be flatter than the spectrum of protons up to $\sim 10^{5}$ GeV. This prediction is observationally consistent with recent CREAM \cite{Ahn:2010p624,Ahn:2009p1595} and PAMELA data, and implies that at the knee He dominates over protons, even if by a small amount. It is very important to realize that the spectral flattening does not imply that spallation dominates over escape from the Galaxy. The latter condition occurs at lower energies when the escape times become larger. For the energies we are interested in, the flattening is simply induced by the secular action of spallation during the propagation time, and is certainly more pronounced for elements heavier than He, as Fig.~\ref{fig:H2delta033chems} clearly shows. In our calculations the slope of the proton spectrum is $\sim 2.67$ but that of the He spectrum is $\sim 2.6$ (the difference in slope is $\sim 0.05-0.07$ depending on realization). As shown in Eq.~\ref{eq:low}, if spallation were dominant the change of slope would be $\delta\sim 0.33$. These numbers should be compared with the slopes of the proton and He spectra as observed by CREAM, that are $2.66\pm 0.02$ and $2.58\pm 0.02$ \cite{Ahn:2010p624} respectively.
The case $\delta=0.6$ is somewhat different, as shown in Fig.~\ref{fig:H2delta06chems}. As stressed above, the fit to the all-particle spectrum is less robust and depends on the specific realization. Moreover, the hardening in the He spectrum is not evident; in fact the He and proton spectra are mostly parallel in the realizations shown here. Nearby sources induce wiggles and occasionally give rise to concave spectra. These are not to be confused with those predicted by non-linear diffusive shock acceleration.
It is important to keep in mind that the spallation-based explanation of the flatter He spectrum compared with the H spectrum relies on the requirement of a large enough grammage at energies above $200$ GeV/nucleon or so. At these energies the B/C ratio is not very constraining. On the other hand, trying to obtain a comprehensive fit of B/C and particle spectra at lower energies is also problematic, since the spectra of protons and He both show a sharp hardening at $200$ GeV/nucleon, that can, equally well, be the result of local sources or be due to a reduction of the scattering rate of particles (larger diffusion coefficient). The first explanation would not affect the global B/C ratio, while the latter would indeed affect it, since it results in a decreased grammage at low energies. In this sense the origin of the harder He spectrum should be investigated together with the origin of the change of spectral slope seen for all species at $200$ GeV/nucleon.
From the point of view of energetics the two scenarios with $\delta=1/3$ and $\delta=0.6$ are appreciably different. On average, the case $\delta=1/3$ requires an efficiency of acceleration (for protons) of order $\sim 6-7\%$ for $\,\mathrm{{\cal R}}=(30~{\rm yr})^{-1}$. For $\delta=0.6$ the efficiency of proton acceleration must be $\sim 10\%$. The fact that (especially for $\delta=1/3$) the acceleration efficiency of individual SNRs is so small might be one reason why it is very difficult to detect the gamma ray emission from pion production and decay. It also provides a useful hint for the development of non-linear theories of shock acceleration: on one hand the acceleration efficiency is not required to be high, on the other it has to be large enough to induce the magnetic field amplification necessary for particle acceleration up to the knee region.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=4.8in,angle=0]{H4delta0.6g267Rate30chems.ps}
\caption{Same as for Fig.~\ref{fig:H2delta033chems} but for $\delta=0.6$.}
\label{fig:H2delta06chems}
\end{figure}
A summary of the results on chemical composition is provided by the mean value of the logarithmic mass as a function of energy. We plot this function in Fig.~\ref{fig:H2compo} for $\delta=1/3$ (left) and $\delta=0.6$ (right). The results of our calculations are compared with the data compiled in Ref.~\cite{Horandel:2007p877}. Unfortunately the large error bars in the data do not allow us to add much to what we have already pointed out above. The behaviour of the data in the highest energy bins might be interpreted as suggestive of the need for a lighter component, possibly contributed by extragalactic cosmic rays.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.8in,angle=0]{H4delta0.33Rate30compo.ps}
\includegraphics[width=2.8in,angle=0]{H4delta0.6Rate30compo.ps}
\caption{Mean Log of the mass of CRs for ten random realizations of the distribution of SNRs in the cylindrical model, assuming $\delta=1/3$ ($\delta=0.6$) on the left (right) and a halo with size $H=4$ kpc. The red staircase line is the average over the random realizations.}
\label{fig:H2compo}
\end{figure}
\subsection{Spectra and chemical composition for the spiral model}\label{sec:res_spiral}
In this section we discuss the implications of the spiral distribution of SNRs in the Galaxy for the spectrum and chemical composition of cosmic rays observed at Earth. As shown in Fig.~\ref{fig:SpH2delta033g267chems}, again a very good fit to the measured all-particle spectrum can be obtained adopting the same conditions as for the cylindrical model, namely $\gamma+\delta=2.67$. In this case, however, an interesting dependence of the results on the size of the magnetized halo arises.
From Eq.~\ref{eq:grammage} we see that the grammage depends very weakly on $H$ because of the condition $H_{kpc}/D_{28}\sim 1$. This implies that changing (decreasing or increasing) $H$ has no appreciable consequences in the case of the cylindrical model, where the source density around the Sun is roughly constant (within a distance of order $\sim H$). This conclusion remains approximately true also in the spiral model, although here changing $H$ changes the volume (and the number of arms) contributing a flux at Earth. Now, a look at Fig.~\ref{fig:space} immediately shows that the Sun sits right in between two Galactic arms, so that most of the sources are some distance away. Nuclei that have to be transported from the arm to the Sun suffer spallation, and their depletion at low energy is not compensated by the contribution of nearby sources. This is exactly the situation in which changing the volume of contributing sources might make a difference.
This fact is illustrated in Fig.~\ref{fig:H2vs4}, where we show the all-particle spectrum for $H=2$ (left) and $H=4$ kpc (right). For $H=4$ kpc, we have already seen that the spiral model reproduces the data with a spectral index at injection such that $\gamma_{obs}=2.67$: the same result that we obtained for the cylindrical model, independent of $H$. On the other hand, for $H=2$ kpc, $\gamma_{obs}=2.67$ is only marginally compatible with the data, while $\gamma+\delta=2.7$ is favored. This is perfectly consistent with the fact that in the spiral model, for a given value of $\gamma+\delta$, heavy nuclei have a slightly harder spectrum. The difference in slope between the two cases, even if small, makes an appreciable difference in the all-particle spectrum because it sums up over many decades in energy.
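The accumulation of a small slope difference over many decades can be made quantitative: the ratio of two power laws with indices differing by $\Delta\gamma$ grows as $(E/E_0)^{\Delta\gamma}$. A one-line check, with $\Delta\gamma=0.03$ (the difference between 2.7 and 2.67) over six decades:

```python
# How a small difference in spectral slope accumulates over a wide
# energy range: the flux ratio of two power laws E^-g1 and E^-g2
# changes as (E/E0)^(g1-g2).
dgamma = 0.03     # slope difference between the two cases above
decades = 6       # decades in energy over which the spectra are summed
ratio = 10.0 ** (decades * dgamma)
print(f"relative change after {decades} decades: {ratio:.2f}")
```

A 50\% relative change in normalization is easily visible in an all-particle spectrum.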
In perfect analogy with the cylindrical case, the secular action of spallation leads also here to a flattening of the spectra of heavy elements for $\delta=1/3$. The He spectrum is again flatter than that of protons by the same amount as in the previous model, and again up to an energy of $\sim 10^5$ GeV.
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=4.8in,angle=0]{SpH4delta0.33g267Rate30chems.ps}
\caption{Spectra of chemicals in the spiral model, for $\delta=1/3$, a SN rate $\,\mathrm{{\cal R}}=1/30\ yr^{-1}$ and a halo with size $H=4$ kpc. The injection spectrum of all chemicals is assumed to have a slope (below the cutoff) such that $\gamma+\delta=2.67$.}
\label{fig:SpH2delta033g267chems}
\end{figure}
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.8in,angle=0]{SpH2delta0.33g270R30.ps}
\includegraphics[width=2.8in,angle=0]{SpH4delta0.33g267R30.ps}
\caption{Dependence of the all-particle spectrum on the realization of sources in the spiral model, for $H=2$ (left) and $H=4$ (right) kpc. In the left (right) panel we used $\gamma+\delta=2.7$ ($\gamma+\delta=2.67$).}
\label{fig:H2vs4}
\end{figure}
\subsection{Comments on the He hardening}
Two important observational findings recently appeared in the literature on cosmic rays: first, the sharp hardening found by ATIC-2 \cite{wefel08}, CREAM \cite{Ahn:2010p624} and PAMELA \cite{Adriani:2011p69} in both the proton and He spectra at $\sim 200$ GeV/nucleon; second, the fact that the He spectrum above this energy appears to be harder than the proton spectrum. The latter effect is observed at a very high level of significance by all three experiments mentioned above.
Taken at face value, the first finding alone represents a serious challenge to our attempts at achieving a comprehensive fit to all cosmic ray observables, an attempt that is routinely made by using GALPROP or similar propagation codes. Two possibilities come to mind: that the steep, low energy part of the spectrum may be contributed by a so far unknown nearby, probably old, cosmic ray source; or that at low energy cosmic rays experience a scattering rate that decreases with decreasing energy faster than expected from an extrapolation of the high energy behaviour, namely the diffusion coefficient has a stronger energy dependence, with a consequent steepening of the equilibrium spectra.
The first scenario effectively decouples the B/C ratio from the measured spectral shape of primary cosmic rays that produce B through spallation reactions: the production of B depends on the mean Galactic CR spectrum, which in this case would be different, at low energies, from the spectrum at Earth. In this scenario it is virtually impossible to know the actual spectrum of CRs in the Galaxy, though some clues can be gathered from diffuse backgrounds, such as gamma rays and radio emission. It is worth stressing that in this case, propagation codes such as GALPROP cannot be easily used since they do not include the possibility of taking into account discrete sources.
In the second scenario one expects the dependence on energy of the grammage to be stronger at low energies than it is at high energies, thereby leading to a steepening of the equilibrium spectra of all cosmic ray components. The B/C ratio would reflect this complex situation, but would not provide any strong constraint on the grammage experienced by high energy particles.
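The second scenario can be sketched with a toy leaky-box model: injection $Q(E)\propto E^{-\gamma}$ and an escape time $\tau_{esc}\propto E^{-\delta}$ with a stronger index below a break give an equilibrium spectrum $N(E)\propto Q(E)\,\tau_{esc}(E)$ that steepens at low energy. The break energy is motivated by the $\sim 200$ GeV/nucleon feature mentioned above; the low-energy index $\delta_{lo}$ is purely illustrative.

```python
# Toy leaky-box sketch (illustrative numbers, not the paper's fit):
# Q(E) ~ E^-gamma, tau_esc ~ E^-delta with a stronger index delta_lo
# below an assumed break E_b ~ 200 GeV. The equilibrium spectrum
# N(E) ~ Q(E) * tau_esc(E) then steepens below the break.
gamma, delta_hi, delta_lo, E_b = 2.34, 1 / 3, 0.7, 200.0

def local_slope(E):
    """Slope of N(E) ~ E^-(gamma + delta(E))."""
    delta = delta_lo if E < E_b else delta_hi
    return gamma + delta

print("slope below the break:", round(local_slope(50.0), 2))  # -> 3.04
print("slope above the break:", round(local_slope(1e3), 2))   # -> 2.67
```

The B/C ratio would then trace the low-energy grammage without constraining the high-energy one, as stated in the text.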
In both situations, extrapolating functional forms from high to low energies appears unjustified and potentially dangerous, in that it may lead to flawed conclusions. In a recent paper \cite{vladimirov2011}, among other things, the authors also comment on the results presented in a preliminary version of the present paper (which had been made publicly available as an arXiv preprint): they extrapolate our curves and conclusions to very low energies in order to compare predictions with B/C data where they are available. They conclude that if one interprets the harder He spectrum as a consequence of spallation, the grammage traversed by particles at lower energies leads to an overproduction of antiprotons and of the B/C ratio. However, antiproton measurements are only available at energies lower than 200 GeV \cite{Collaboration:2010p785}, namely outside the range of applicability of our model, which at those energies cannot even reproduce the CR spectrum, as already stressed above. On the other hand, for the B/C ratio there are two measurements, by CREAM \cite{Ahn:2008p1594} and TRACER \cite{TRACER2011}, at energies higher than 1 TeV, but both have large error bars. If these two points are taken with the error bars shown in \cite{TRACER2011}, no inconsistency can be claimed between our model and the relevant data. The obvious conclusion is then the same as could be reached from the spectral steepening suffered by all chemicals below $\sim$200 GeV/nucleon: the grammage cannot be extrapolated down in energy with the same slope that we assume at TeV energies.
As far as the spallation-related explanation of the He hardening is concerned, we think that there is still enough uncertainty in the B/C ratio in the TeV range to warrant the benefit of the doubt on whether this may be the correct explanation. In our view this scenario is certainly less {\it ad hoc} and more physically motivated than assuming artificial breaks in the source spectra or unknown source populations. To our knowledge this is the only attempt so far at understanding the new data on differential hardening in terms of well established propagation physics, rather than simply fitting them.
Finally, a very important feature of this scenario is that it has the potential to be disproved in the not too distant future, through more accurate measurements of secondaries in the high energy range, with TRACER and AMS-2. While extrapolation to energy regions where the initial assumptions are known not to hold might lead to misleading conclusions, we think that every attempt should be made to test whether the spallation of He at TeV energies may lead to observables that are in contradiction with data in the same energy range.
\section{Additional scenarios}\label{sec:more}
Other physically relevant scenarios have been tested but did not induce particularly interesting effects on the spectrum and chemical composition: one is the continuous leakage of CRs from SNRs, as discussed in \S~\ref{sec:green}, and the other is the scenario in which supernova explosions occur dominantly in OB associations. Both these assumptions have negligible consequences in terms of changes induced in the spectrum and chemical composition of CRs observed at Earth.
\section{The effect of extragalactic cosmic rays}\label{sec:extragal}
The spectrum and the chemical composition in the energy region above the knee may be substantially affected by a contribution from CRs of extragalactic origin. This is true in both descriptions of the transition from galactic to extragalactic that are presently considered as most likely, namely the electron-positron dip model \cite{Berezinsky:2005p1708} and the mixed composition scenario \cite{Allard:2005p1728}. Since the investigation of the transition is not among the purposes of the present paper, here, for simplicity, we illustrate this effect only for the case of the dip. We calculate the spectrum of extragalactic CRs as discussed in detail in \cite{Aloisio:2007p131} and normalize the absolute flux to the HiRes data (\cite{Sokolsky:2007p1741} and references therein). The injection spectrum of extragalactic CRs is assumed to be a power law with slope $-2.7$. No evolution of the source luminosity with redshift is taken into account since the only purpose of the calculation is to illustrate the effect, and not to explore the region of allowed parameters. As discussed in \cite{Aloisio:2007p131}, the dip model provides a suitable explanation of the observations provided that no more than $\sim 15\%$ of the extragalactic flux is contributed by He nuclei. Therefore we assume a pure proton composition of extragalactic CRs.
Even a small magnetic field in the intergalactic medium can suppress the flux of CRs coming from extragalactic sources, thereby introducing a low energy cutoff in the spectrum. Such low energy cutoff can also be intrinsic to the source if the sources harbor relativistic shocks: in this case the minimum energy of the accelerated particles is $\sim 4 \Gamma^{2} m_{p} c^{2}$, with $\Gamma$ the Lorentz factor of the shock.
We artificially model this low energy suppression with an exponential cutoff at $E_{cut}=10^{7}$ GeV and $10^{8}$ GeV. The all-particle spectrum for these two values of $E_{cut}$ is plotted in Fig.~\ref{fig:specEcut}: one can see that the superposition of the Galactic and extragalactic CR spectra provides a smooth fit to the available data in both cases. In Fig.~\ref{fig:logAEcut} we plot the mean logarithmic mass, $\langle \log A \rangle$, as a function of energy for the two values of $E_{cut}$: the goodness of fit is negatively affected by an extragalactic proton spectrum extending too low in energy; therefore, if we are to believe this type of interpretation of the transition and take the data on the mean logarithmic mass strictly, then the low energy cutoff of the extragalactic CR spectrum has to be at $\gtrsim 10^{8}$ GeV. At very high energy, Fig.~\ref{fig:logAEcut} suggests that the transition has to occur through mixing of different chemicals. However, one has to keep in mind that the measurements of the composition in the transition region are still affected by large systematic errors, which do not allow very firm conclusions in this important energy range, as also discussed in \cite{Aloisio:2008p126}. Recent data from HiRes \cite{Sokolsky:2007p1741} and Telescope Array \cite{Thomson:2010p1545} seem to find a chemical composition at $\sim 10^{18}$ eV dominated by protons, which is not reflected in the data used in Fig.~\ref{fig:logAEcut}. The most recent data on the chemical composition observed with the Pierre Auger Observatory also show a proton dominated composition at $\sim 10^{18}$ eV, gradually shifting to a heavier composition at higher energies (see for instance \cite{Roulet:2011p1892}).
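The superposition described above can be sketched numerically. The low-energy suppression is modelled here as $\exp(-E_{cut}/E)$, one simple choice and not necessarily the exact parametrization used in the calculation; all normalizations and the Galactic cutoff are illustrative.

```python
import numpy as np

# Toy superposition of a Galactic CR tail and an extragalactic
# power-law component of slope -2.7 suppressed below E_cut.
# Normalizations and the Galactic cutoff energy are assumptions.
E = np.logspace(6, 10, 200)                    # energy grid in GeV
E_cut = 1e8                                    # GeV, as in the text
galactic = (E / 1e6) ** -3.1 * np.exp(-E / 6e6)        # toy Galactic tail
extragal = 5e-4 * (E / 1e6) ** -2.7 * np.exp(-E_cut / E)
total = galactic + extragal

i_lo = np.argmin(np.abs(E - 2e6))              # below the knee region
i_hi = np.argmin(np.abs(E - 5e9))              # well above E_cut
print("Galactic dominates at low E:", galactic[i_lo] > extragal[i_lo])
print("extragalactic at high E    :", extragal[i_hi] > galactic[i_hi])
```

The crossover of the two components is what produces the smooth total spectrum across the transition region.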
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.8in,angle=0]{Ecut7.ps}
\includegraphics[width=2.8in,angle=0]{Ecut8.ps}
\caption{All particle spectrum of CRs including the extragalactic component with $E_{cut}=10^{7}$ (left) and $E_{cut}=10^{8}$ GeV (right).}
\label{fig:specEcut}
\end{figure}
\begin{figure}[t]
\centering\leavevmode
\includegraphics[width=2.8in,angle=0]{H4delta0.33compoEcut7.ps}
\includegraphics[width=2.8in,angle=0]{H4delta0.33compoEcut8.ps}
\caption{Mean value of $\log A$ including the extragalactic component with $E_{cut}=10^{7}$ (left) and $E_{cut}=10^{8}$ GeV (right).}
\label{fig:logAEcut}
\end{figure}
Interestingly enough, our results do not require an additional component (such as the super-heavy Galactic nuclei invoked by \cite{Hillas:2005p171} and \cite{Horandel:2004p1543}, see also \cite{deDonato:2009p1547}) to fit the shape of the all-particle spectrum. The need for such an extra component probably arose from the assumption that the maximum energy of protons accelerated in SNRs equals the energy where the knee appears. In our calculations, as discussed above, the maximum energy of protons that best fits the all-particle spectrum is $\sim 6\times 10^{6}$ GeV, well above the knee, and the knee itself appears to be mainly contributed by Helium. The shape of the cutoff in the spectra of individual chemicals also affects the need for extra components.
\section{Summary and Conclusions}\label{sec:conclusion}
In this paper we have tried to make progress in the quest for the origin of CRs, trying to clarify the requirements on propagation and source spectra that come from observations of CRs at Earth.
We computed the spectrum and chemical composition of CRs at Earth assuming that SNRs are the primary sources of galactic CRs and taking into account the discrete distribution in both space and time of these sources. Each source is assumed to inject protons and nuclei with a power law spectrum up to some maximum energy, above which the spectrum is exponentially cut off. The power law index and maximum energy are assumed to be the same for all sources and are determined from observations, with $E_{max}$ simply scaling with rigidity for nuclei heavier than H. The propagation of CRs from their sources to Earth is treated through a Green function approach allowing for a completely general dependence on space and time of the injected spectrum, as well as taking into account spallation. The diffusion coefficient has been taken constant in space within the galactic halo and has a power-law dependence on rigidity alone with index $\delta=1/3$ or $\delta=0.6$. In terms of its normalization, the diffusion coefficient is assumed to scale with the size of the halo, following the findings of other propagation codes, such as GALPROP or DRAGON.
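As a rough illustration of the Green function approach for discrete sources, the free diffusion solution for a burst-like point source, $n(E,r,t)\propto E^{-\gamma}\,e^{-r^2/4Dt}/(4\pi D t)^{3/2}$, already captures why a given source contributes only above a certain energy. The sketch below neglects halo boundaries and spallation, and the normalization of the diffusion coefficient is an assumed fiducial value.

```python
import numpy as np

# Minimal sketch of burst-like propagation from one discrete source
# (assumptions: free diffusion, no halo boundaries, no spallation).
# D(E) = D0*(E/E0)^delta with delta = 1/3 as in the text; D0 and E0
# are assumed fiducial values.
KPC_CM, YEAR_S = 3.086e21, 3.156e7
D0, E0, delta = 1e28, 3.0, 1 / 3   # D0 in cm^2/s at E0 = 3 GeV (assumed)

def diffusion_coefficient(E):
    return D0 * (E / E0) ** delta

def burst_density(E, r_kpc, t_yr, gamma=2.34):
    """n(E) from Q(E) = E^-gamma injected at distance r, a time t ago."""
    rd2 = 4.0 * diffusion_coefficient(E) * t_yr * YEAR_S
    r = r_kpc * KPC_CM
    return E ** -gamma * np.exp(-r * r / rd2) / (np.pi * rd2) ** 1.5

def diff_length_kpc(E, t_yr):
    """Diffusion length sqrt(4*D*t); a source only contributes where
    this length approaches its distance."""
    return np.sqrt(4.0 * diffusion_coefficient(E) * t_yr * YEAR_S) / KPC_CM

print(f"l_d(10 GeV,  1e5 yr) = {diff_length_kpc(1e1, 1e5):.2f} kpc")
print(f"l_d(1e6 GeV, 1e5 yr) = {diff_length_kpc(1e6, 1e5):.2f} kpc")
```

Since the diffusion length grows with energy, nearby young sources shape the high energy end of the local spectrum, which is the origin of the wiggles discussed below for $\delta=0.6$.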
We have considered a few different scenarios for the distribution of sources. As far as their spatial distribution is concerned, we have carried out the calculations both for a smooth distribution of SNRs within the galactic disk (cylindrical model) and for the case in which SNRs follow the spiral structure of the Galaxy (spiral model). We have also considered the possibility that SNRs are clustered in superbubbles but we find that in terms of spectrum and chemical composition this assumption makes negligible difference.
Concerning the time dependence of the injected spectrum, we have considered two different scenarios: one in which all the CRs a SNR has ever accelerated are released at the end of its evolution, and one in which, at all times through the Sedov phase of the SNR evolution, there is continuous leakage of CRs at the maximum energy from upstream of the decelerating blast wave, with the maximum energy decreasing with time down to $\sim 1\ GeV$. These two models for the time dependence of the injection lead to very similar outcomes in terms of spectrum and chemical composition, once the spatial distribution of the sources and the energy dependence of the diffusion coefficient are fixed.
Our main findings in the present paper are summarized in the following.
First of all, the ability of our approach to take into account the discreteness of the source distribution allows us to illustrate that the all-particle CR spectrum at Earth is not in general guaranteed to be a power law. In particular, if the diffusion coefficient depends on rigidity as $E^\delta$ with $\delta=0.6$, a number of wiggles are likely to appear in the spectrum, due to the contribution of nearby sources. This fact also suggests that this value of $\delta$ will lead to large levels of anisotropy.
For a diffusion coefficient scaling with $E^{1/3}$, the CR spectrum and chemical composition observed at Earth can be reproduced in a satisfactory way in both the cylindrical and spiral model of source distribution in the Galaxy. The required maximum energy of the accelerated particles is $E_{max}=6\times 10^6$ GeV for protons, reasonably close to (but somewhat higher than) what can be expected from a SNR based on the non-linear theory of diffusive shock acceleration. The knee in the all-particle spectrum results from the scaling of the maximum particle energy with rigidity. The required power-law index of the injected particles is the same in both scenarios for a halo size $H=4\ kpc$, namely $\gamma=2.34$. A slightly steeper power law is required instead in the spiral case if the magnetized halo has size $H=2\ kpc$. In this latter case one finds $\gamma=2.37$; the small difference is induced by the need to compensate for the effects of spallation, which lead to a hardening of the spectrum at low energy, with a deficiency of low energy particles that in this scenario cannot be supplied by nearby sources due to the inter-arm position of the solar system.
These spectral indices represent at present the most problematic aspect of the supernova remnant paradigm for the origin of CRs. The required acceleration efficiency is relatively low, being only of order $6-7\%$ for $D\propto E^{1/3}$ and $\,\mathrm{{\cal R}}=1/30\ yr^{-1}$.
Proper account of the effects of spallation leads to a hardening of the spectrum of heavy elements at low energy, even when this process is not dominant over escape. This is the key element that leads us to expect that, for $\delta=1/3$, He nuclei have a flatter spectrum than protons at low energies and dominate the all-particle spectrum at the knee, in qualitative agreement with the recent findings of CREAM \cite{Ahn:2010p624,Ahn:2009p1595} and PAMELA \cite{Adriani:2011p69}. The slopes of the proton and Helium spectra that we find are $\sim 2.67$ and $\sim 2.6$ respectively, to be compared with the measured values of $2.66\pm 0.02$ and $2.58\pm 0.02$ \cite{Ahn:2010p624}. Our calculations show that for $\delta=0.6$ this effect disappears, which might indeed suggest that Galactic diffusion is characterized by a low value of $\delta$.
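The spallation-induced hardening can be illustrated with a toy leaky-box model in which the spallation time is roughly energy independent while the escape time decreases with energy; below a critical energy the loss term of a heavy nucleus loses its energy dependence and the equilibrium spectrum hardens toward the injection slope. All timescales below are arbitrary units, chosen only to make the effect visible.

```python
import numpy as np

# Toy leaky-box with spallation (an illustration, not the paper's
# full calculation): N(E) = Q(E) / (1/tau_esc + 1/tau_sp), with
# tau_esc ~ E^-delta and an energy-independent spallation time.
gamma, delta = 2.34, 1 / 3
tau_sp = 1.0                         # spallation time, arbitrary units

def equilibrium(E, tau_esc0=10.0):
    tau_esc = tau_esc0 * E ** -delta
    return E ** -gamma / (1.0 / tau_esc + 1.0 / tau_sp)

def slope(E, f=equilibrium):
    """Local logarithmic slope -dlogN/dlogE (finite difference)."""
    return -(np.log(f(E * 1.01)) - np.log(f(E))) / np.log(1.01)

print(f"slope at low E : {slope(1.0):.2f}")   # close to gamma
print(f"slope at high E: {slope(1e6):.2f}")   # approaches gamma + delta
```

Protons, whose spallation time is much longer, stay close to the escape-dominated slope $\gamma+\delta$ everywhere, which is why the heavy spectra end up flatter at low energy.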
The prediction of the hardening of the spectra of nuclei with respect to that of protons is one of the most important results of our calculations. As noted by \cite{vladimirov2011}, if extrapolated down to much lower energies this finding may cause excessive production of B and antiprotons. However, a more straightforward consequence of such a naive extrapolation would be that the spectra of H and He would not show the observed steepening of both components at rigidities $<200$ GeV/nucleon, which we make no attempt at explaining. In other words, this simply suggests that at low energies something new is happening. Currently this is not explained in physical terms: the data can only be reproduced by introducing artificial breaks in the injection spectra or in the diffusion coefficient, or both. We limited ourselves to considering the high energy range and the simplest possible physical scenario precisely because we did not want to introduce additional parameters before having an idea of the nature of the underlying phenomena. After all, if one applied the same approach to the knee in the CR spectrum, assuming breaks in the injection spectra and/or in the diffusion coefficient could certainly lead to a good fit to the data, but would contribute little to our understanding of the origin of this spectral feature.
Currently we cannot explain the fact that the spectra of all species flatten rather abruptly at $\sim 200$ GeV/n, as found by the PAMELA experiment \cite{Adriani:2011p69}. However, we do not share the opinion of the authors claiming the failure of the SNR paradigm or the existence of an unknown accelerator. We rather think that the SNR paradigm is in fact more complex than usually assumed in making these claims, and that its consequences are not yet as well understood as is sometimes believed.
At high energies, above the knee, we underpredict the all-particle spectrum and slightly overpredict the mean logarithmic mass of CRs. This seems to suggest that a lighter CR component is probably contributed at those energies by extragalactic sources. Since fitting the details of the observed CR spectrum at high energy was not our main purpose in this paper, we modeled this extragalactic component only within the scenario of the Dip \cite{Berezinsky:2005p1708}, in which the extragalactic CR composition is dominated by protons. In this case we find that the spectrum can be fitted very well, while the chemical composition suggests that the extragalactic component should have a low energy cutoff at energies larger than $10^8$ GeV. Fitting the all-particle spectrum does not require any additional heavy component as previously postulated in Refs. \cite{Hillas:2005p171} and \cite{Horandel:2004p1543}.
Our general conclusion is that, if the diffusion coefficient in the Galaxy scales with energy as $E^{1/3}$, SNRs can account for the all-particle spectrum and chemical composition of CRs detected at Earth, provided that they are able to accelerate protons up to $\sim 6 \times 10^6$ GeV and inject spectra as steep as $E^{-\gamma}$ with $\gamma=2.3-2.4$. This is indeed a bit of a challenge for the non-linear theory of diffusive shock acceleration (NLDSA), which is believed to describe the acceleration process in this context. On the one hand, efficient acceleration leads to hard (and even concave) particle spectra, at odds with the relatively large values of $\gamma$ obtained above. On the other hand, the inferred maximum energy can only be achieved if the magnetic field is strongly amplified with respect to its average value in the ISM. Efficient amplification is usually associated with efficient particle acceleration, whence the difficulty of reconciling steep spectra and high maximum energies. These difficulties might be overcome in the context of NLDSA if the velocity of the scattering centers that enters the transport equation is that computed in the amplified magnetic field, rather than the unperturbed Alfv\`en velocity \cite{Caprioli:2011p1899}. In fact, the use of the unperturbed velocity derives from a strict application of quasi-linear theory, which is questionable in the presence of the large levels of field amplification usually found appropriate for young SNRs. In addition, as soon as the Alfv\`en velocity in the self-generated magnetic field is considered, the acceleration efficiency is much reduced, while the maximum proton energy may still reach the knee.
This recent result is very interesting, especially in the light of our finding in this paper that the required acceleration efficiency is only about $5\%$ in our best fit scenario, suggesting that the acceleration process has to guarantee a decoupling between a large maximum energy and efficient particle acceleration.
Another possibility to reconcile steep spectra of the accelerated particles with a high maximum energy might be related to the shock obliquity. The theory of NLDSA has so far been developed only for parallel shocks, while particle acceleration at perpendicular and, more generally, oblique shocks has not received the same amount of attention. Perpendicular shocks are fast accelerators and in principle allow particles to reach higher energies than parallel shocks. This is especially interesting in that the value we require for the maximum energy of protons is close to the upper limit of what can be achieved within NLDSA at parallel shocks. Whether the required combination of steep spectra and high maximum energies can be achieved at highly oblique shocks is an issue that deserves further investigation.
\bibliographystyle{JHEP}
\section{Introduction}
The PLANCK satellite, currently in flight, should provide the most
accurate measurement of the anisotropies of the CMB in temperature and
polarization, with a sensitivity of $2\ \mu K$ and an angular resolution
of 5 arcmin~\cite{bluebook}. In particular its estimation of the
BB-modes should set an upper limit on the tensor-to-scalar ratio
(expected at 0.1~\cite{efstathiou}). The knowledge of this ratio
would confirm the existence of primordial gravitational waves
generated during inflation, set the energy scale of
inflation~\cite{lyth} and provide constraints on inflationary models~\cite{baumann}. In order to reach this optimal sensitivity it is necessary to estimate the foreground emissions and
their residual contamination of the
CMB signal. Indeed, over the full sky these emissions have the same order
of magnitude as the CMB in temperature and dominate it by a factor of 10 in
polarization~\cite{bluebook}. The main polarized Galactic microwave emissions come
from two mechanisms: thermal dust emission and synchrotron emission. The
synchrotron emission has already been measured by the 408 MHz all-sky continuum survey~\cite{haslam}, by Leiden between 408 MHz and 1.4 GHz~\cite{wolleben}, by Parkes at 2.4 GHz~\cite{duncan1999}, by the MGLS {\it Medium Galactic Latitude Survey} at
1.4 GHz~\cite{uyaniker} and by the WMAP {\it Wilkinson Microwave Anisotropy Probe} satellite (see e.g.~\cite{hinshaw}). It is due to ultrarelativistic
electrons spiraling in the large-scale Galactic magnetic field and dominates at
low frequencies. The thermal dust
emission, which has already been well constrained by IRAS~\cite{schlegel}, COBE-FIRAS~\cite{boulanger} and
Archeops~\cite{macias,benoit}, is due to dust grains
which align with the Galactic magnetic field and emit polarized
submillimetric radiation~\cite{boulanger}; it dominates at high frequencies. The polarization of both
radiations is orthogonal to the field lines. To obtain a realistic description of
these emissions we propose models based on a 3D modelling of the
Galactic magnetic field and of the matter density in the
Galaxy. The models are optimized using preexisting data and are then used to estimate the bias induced by
these emissions on the CMB measurement.
\section{3D modelling of the Galaxy}
\label{sec:model}
\indent A polarized emission is described by the Stokes
parameters I, Q and U~\cite{kosowsky}. For the polarized foreground
emissions, integrating along the line of sight, we obtain for the synchrotron~\cite{ribicki}:
\begin{eqnarray}
\label{eq:map_sync}
\centering
I_s &=& I_{\mathrm{Has}} \left(\frac{\nu_s}{0.408}\right)^{\beta_s},\\
Q_s &=& I_{\mathrm{Has}}
\left(\frac{\nu_s}{0.408}\right)^{\beta_s}\frac{\int \cos(2\gamma)\, p_s\, n_e\left(B_l^2 + B_t^2 \right)}{\int n_e\left(B_l^2 + B_t^2 \right)} ,\\
U_s &=& I_{\mathrm{Has}}
\left(\frac{\nu_s}{0.408}\right)^{\beta_s}\frac{\int \sin(2\gamma)\, p_s\, n_e\left(B_l^2 + B_t^2 \right)}{\int n_e\left(B_l^2 + B_t^2 \right)},
\end{eqnarray}
\noindent where $B_n$, $B_l$ and $B_t$ are the magnetic field components
normal, longitudinal and transverse to the line of sight, respectively. $p_s$ is the
polarization fraction, set to 75\%~\cite{ribicki}. $I_{Has}$ is a
template temperature map obtained from the 408 MHz all-sky continuum survey~\cite{haslam}. The maps are extrapolated to the Planck
frequencies using the spectral index $\beta_s$, which is a free parameter of the model.
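A discretized evaluation of the line-of-sight integrals above might look as follows; the density, field and angle profiles are toy inputs, not the 3D model of this paper.

```python
import numpy as np

# Discretized line-of-sight version of the Q_s/U_s expressions above.
# The electron density, field components and polarization angle are
# illustrative profiles, not the paper's 3D Galactic model.
n_steps = 100
ne = np.exp(-np.linspace(0.0, 3.0, n_steps))    # toy electron density
Bl = 2e-6 * np.ones(n_steps)                    # G, longitudinal field
Bt = 1e-6 * np.ones(n_steps)                    # G, transverse field
gamma_ang = np.linspace(0.0, 0.4, n_steps)      # polarization angle (rad)
p_s = 0.75                                      # intrinsic polarization fraction

w = ne * (Bl ** 2 + Bt ** 2)                    # weight in the integrals
q_frac = np.sum(np.cos(2.0 * gamma_ang) * p_s * w) / np.sum(w)
u_frac = np.sum(np.sin(2.0 * gamma_ang) * p_s * w) / np.sum(w)

# Q_s and U_s follow by scaling the extrapolated Haslam template;
# angle dispersion along the line of sight depolarizes the emission,
# so the effective fraction is below the intrinsic p_s.
pol_frac = np.hypot(q_frac, u_frac)
print(f"effective polarization fraction: {pol_frac:.3f}")
```

This depolarization by angle dispersion along the line of sight is why the observed polarization fraction is always below the intrinsic 75\%.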
For the thermal dust emission : \\
\begin{eqnarray}
\centering
I_d &=& I_{sfd} \left( \frac{\nu_d}{353} \right)^{\beta_d},\\
Q_d &=& I_{sfd} \left( \frac{\nu_d}{353}\right)^{\beta_d} \frac{\int n_d \cos(2 \gamma) \sin^2(\alpha) f_{\mathrm{norm}}\, p_d}{\int n_d} ,\\
U_d &=& I_{sfd} \left(\frac{\nu_d}{353}\right)^{\beta_d} \frac{\int n_d \sin(2 \gamma) \sin^2(\alpha) f_{\mathrm{norm}}\, p_d}{\int n_d} ,
\end{eqnarray}
\noindent where the polarization fraction $p_d$ is set to 10\%~\cite{ponthieu2005},
$\beta_d$ is the spectral index (set to 2.0) and $f_{\mathrm{norm}}$ is an
empirical factor fitted to the Archeops data. The
$I_{sfd}$ map is model 8 of~\cite{finkbeiner}.\\
\indent The models are based on an exponential distribution of
relativistic electrons in the Galactic disk
following~\cite{drimmel}, where the radial scale $h_r$ is a free
parameter. The distribution of dust grains $n_d$ is
also chosen to be exponential~\cite{benoit}. The Galactic magnetic field is composed of two
parts: a regular component and a turbulent component. The regular
component is based on the WMAP team model~\cite{page}, which is close to a
logarithmic spiral, in order to reproduce the shape of the spiral
arms~\cite{han2006,sofue}. The pitch angle $p$ of the arms is a free
parameter of the model. The turbulent component is described by
a Kolmogorov~\cite{han2006,han2004} spectrum of relative amplitude $A_{turb}$.
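A minimal sketch of a logarithmic-spiral regular field in the Galactic plane is given below: the field is tilted away from the azimuthal direction by the pitch angle $p$. The amplitude is an assumed value, and the bisymmetric radial modulation (field reversals between arms) of a full BSS model is deliberately omitted.

```python
import numpy as np

# Minimal logarithmic-spiral (BSS-like) regular field in the plane:
# the field direction is the azimuthal unit vector rotated by the
# pitch angle. B0 is illustrative; the radial modulation and field
# reversals of a complete BSS model are omitted for brevity.
B0 = 2e-6                     # G, field amplitude (assumed)
pitch = np.deg2rad(-30.0)     # pitch angle p, e.g. the WMAP-data best fit

def bss_field(x_kpc, y_kpc):
    """Regular field vector (Bx, By) at position (x, y) in the plane."""
    phi = np.arctan2(y_kpc, x_kpc)          # galactocentric azimuth
    ang = phi + np.pi / 2 + pitch           # azimuthal direction + pitch
    return B0 * np.cos(ang), B0 * np.sin(ang)

bx, by = bss_field(8.5, 0.0)  # near the solar circle (R_sun ~ 8.5 kpc)
print(f"|B| = {np.hypot(bx, by):.1e} G")
```

Such a field model, combined with the electron and dust distributions above, is what enters the line-of-sight integrals for Q and U.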
\section{Comparison to data}
\label{sec:test}
\indent We computed Galactic profiles in temperature and polarization
for various bands of longitude and latitude and various values of the free
parameters. In order to optimize these 3D models we compare them to
Galactic profiles computed from preexisting data using a $\chi^2$
test. For the synchrotron emission in temperature, we use the 408 MHz all-sky
continuum survey~\cite{haslam}, as shown in
Figure~\ref{fig:gal_has}. In polarization we compare to the
WMAP 5-year K-band data. The thermal dust emission model is optimized using the
polarized Archeops data~\cite{benoit} at 353 GHz.
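The grid-based $\chi^2$ optimization can be sketched as follows, here for a single toy parameter and synthetic profiles; the real fit runs over $p$, $A_{turb}$, $h_r$ and $\beta_s$ against the data profiles.

```python
import numpy as np

# Sketch of a grid chi^2 fit of a model profile to a data profile.
# Data, errors and the one-parameter model are all toy inputs.
rng = np.random.default_rng(1)
data_true = np.sin(np.linspace(0.0, np.pi, 36))      # toy latitude profile
sigma = 0.1 * np.ones_like(data_true)                # toy error bars
data = data_true + sigma * rng.standard_normal(data_true.size)

def model_profile(amplitude):
    # one free parameter standing in for e.g. A_turb
    return amplitude * np.sin(np.linspace(0.0, np.pi, 36))

grid = np.linspace(0.5, 1.5, 101)
chi2 = [np.sum(((data - model_profile(a)) / sigma) ** 2) for a in grid]
best = grid[np.argmin(chi2)]
print(f"best fit parameter: {best:.2f}")
```

Confidence intervals like those quoted in Table~\ref{tab:param} then follow from the $\Delta\chi^2$ contours around the minimum.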
\begin{figure}
\centering
\includegraphics[height=8cm,width=6cm]{profil_has_Isync_BSS_ref__hr_1_Aturb_0_cresun_quadrafit_nocste_nside32_muKRJ.ps}\caption{Galactic profiles in temperature at 408 MHz built using the Haslam data in black and our synchrotron emission model for various values of the pitch angle $p$ {\it (from green to red)}.\label{fig:gal_has}}
\end{figure}
\indent The best fit parameters for the 3D model in polarization
are given in Table~\ref{tab:param}. The results are consistent for
the three sets of data; in particular we obtain compatible results
for the synchrotron and thermal dust emission models. $A_{turb}$ is
not strongly constrained, but its range of best fit values is compatible
with previous results~\cite{sun,dusta,han2004}.
$h_r$ is poorly constrained, as was already the case in Sun {\it et al.}~\cite{sun}. The
best fit value of the pitch angle $p$ is compatible with the results
obtained by other studies~\cite{sun,page}.
The best fit value of the spectral index of the synchrotron
emission is lower than the values found by~\cite{sun,page}, but this is
probably due to the choice of normalisation using the 408 MHz template. \\
\begin{table}[h]
\begin{center}
\caption{Best fit parameters for synchrotron and thermal dust emission models and $3\sigma$ confidence levels for the best fitting model.\label{tab:param}}
\vspace{0.3cm}
\begin{tabular}{|c|c|c|c|c|c|} \hline
$$ & $ p (deg)$& $A_{turb} $ & $h_r$ & $\beta_s$ & $\chi^2_{min}$ \\\hline
$WMAP$ & $ -30.0^{+40.0}_{-30.0}$ & $< 1.25$ (95.4 \% CL) & $ <20$ (95.4 \%
CL) & $-3.4^{+0.1}_{-0.8}$ & $5.72$ \\\hline
$HASLAM$ & $ -20.0^{+60.0}_{-50.0}$ & $< 1.0$ (95.4 \% CL) & $ 4.0^{+16.0}_{-3.0} $ & $\emptyset$ & $5.81$ \\\hline
$ARCHEOPS$ & $ -20^{+80}_{-50}$ & $ < 2.25 (95.4 \% CL)$ & $\emptyset$ & $\emptyset$ & $ 1.98$ \\\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[height=9cm,width=13cm]{./cl_wmap_model_galcut5_mukrj_bis.ps}\caption{Clockwise from top left : power spectra $C^{TT}_l$, $C^{EE}_l$, $C^{BB}_l$, $C^{TE}_l$, $C^{TB}_l$, $C^{EB}_l$ at 23 GHz built with the WMAP 5 years data \emph{(black)} and the model of synchrotron emission with BSS magnetic field for the best fit model parameters \emph{(green to red for a spectral index between -3.2 and -3.4)}, applying a Galactic cut $|b|<5^{\circ}$.\label{spec_wmap_gp}}
\end{figure}
\indent Using the best fit parameters obtained for the Galactic
emission models we computed maps and power spectra in temperature and polarization for synchrotron and thermal dust
emission. We compare them to maps and power spectra built using the polarized
WMAP and Archeops data, respectively. As shown in Figure~\ref{spec_wmap_gp}, the synchrotron emission model
reproduces the global features of the data in polarization. We show in Figure~\ref{spec_dust_gp} the angular power spectra computed from the Archeops data at 353 GHz and the
thermal dust model, using the method presented in~\cite{ponthieu2005}. Our model
reproduces the features of the spectra at all scales.
\begin{figure}
\centering
\includegraphics[angle=90,height=9cm,width=13cm]{./cl_archdata_rmodel_galcut_0_nside32_muKRJ2.ps}\caption{Clockwise from top left : power spectra $C^{TT}_l$, $C^{EE}_l$, $C^{BB}_l$, $C^{TE}_l$, $C^{TB}_l$, $C^{EB}_l$ at 353 GHz computed from Archeops data \emph{(black)} and the model of thermal dust emission with BSS magnetic field for the best fit model parameters \emph{(red)} without applying Galactic cut.\label{spec_dust_gp}}
\end{figure}
\indent From the above best fit parameters we estimate the contamination of the PLANCK CMB data by the
polarized Galactic emissions. Figure~\ref{fig:spect_cmb_for}
shows the temperature and polarization power spectra at 143 GHz for
the CMB\footnote{We simulate the CMB assuming cosmological parameters for a $\Lambda$CDM model as proposed in~\cite{komatsu}, with a tensor-to-scalar ratio of 0.03.} (red)
and the Galactic foreground emissions, applying a Galactic cut of
$|b|<15^{\circ}$. The residual foreground contamination appears to be
weak except for the BB-modes, for which an accurate foreground subtraction is extremely important for
the detection of the primordial gravitational waves.
\begin{figure}
\centering
\includegraphics[height=9cm,width=13cm]{cl_planck_dust_ff_sync_cresun_hr_1.00000_CRELo_1_betas_-3.3000000_BSS_Aturb_0.00000_p_-30.000000_galcut_15_nside128_143GHz_mukrj2.ps}
\caption{Clockwise from top left : power spectra $C^{TT}_l$, $C^{EE}_l$, $C^{BB}_l$, $C^{TE}_l$, $C^{TB}_l$, $C^{EB}_l$ at 143 GHz for $|b|<15^{\circ}$ (see text for details).\label{fig:spect_cmb_for}}
\end{figure}
\section{Conclusions}
\indent We propose in this study consistent models of the main
Galactic polarized emissions based on a 3D modeling of the
Galaxy. By comparison with preexisting data we are able to set
consistent constraints
on the parameters of the synchrotron and thermal dust emission models,
compatible with those that appear in the literature. From this we build maps
and power spectra able to reproduce the features of the data at
various frequencies. Using a rough mask we then estimate the residual
contamination due to these foregrounds on the expected CMB PLANCK data.
\section{Introduction}
Obstacle problems form an important class of problems in analysis and applied
mathematics as they appear, in particular, in the mathematical study of
variational inequalities and free boundary problems. The classical obstacle
problem involving the Laplace operator is to find the equilibrium position of
an elastic membrane, whose boundary is held fixed, and which is constrained to
lie above a given obstacle. This problem is closely related to the study of
minimal surfaces and to inverse problems in potential theory. Other
applications where obstacle problems occur, involving the Laplace operator or
more general operators, include control theory and optimal stopping, financial
mathematics, fluid filtration in porous media, constrained heating and
elasto-plasticity. As classical references for obstacle problems and
variational inequalities, as well as their applications, we mention Frehse
\cite{Fr72}, Kinderlehrer-Stampacchia \cite{KS80}, \cite{KS} and Friedman
\cite{FA82}. For an outline of the modern approach to the regularity theory of
the free boundary, in the context of the obstacle problem, we refer to
Caffarelli \cite{C98}.
In this paper we continue to develop a theory for the obstacle problem for a
general class of second order subelliptic partial differential equations in
non-divergence form modeled on a system of vector fields satisfying
H{\"{o}}rmander's finite rank condition. In particular, we consider operators
\begin{equation}
\mathcal{H}=\sum_{i,j=1}^{q}a_{ij}(x)X_{i}X_{j}+\sum_{i=1}^{q}b_{i}(x)X_{i}-a_{0}(x)X_{0},\ \ x\in\mathbf{R}^{n},\ n\geq3, \label{operator}
\end{equation}
where $q\leq n$ is a positive integer. In Section \ref{assumptions}, we will
state the assumptions in detail. To formulate the obstacle problem, let
$\mathcal{H}$ be as in (\ref{operator}), and let $f,g,\varphi,\gamma
:\overline{\Omega}\rightarrow\mathbf{R}$ be continuous and bounded
functions such that $g\geq\varphi$ on $\overline{\Omega}$. We consider the
problem
\begin{equation}
\begin{cases}
\max\{\mathcal{H}u(x)+\gamma(x)u(x)-f(x),\varphi(x)-u(x)\}=0, & \text{in}\ \Omega,\\
u(x)=g(x), & \text{on}\ \partial\Omega.
\end{cases}
\label{e-obs}
\end{equation}
We say that $u$ is a strong solution to (\ref{e-obs}) if $u\in\mathcal{S}_{X,loc}^{1}(\Omega)\cap C(\overline{\Omega})$ satisfies the differential
inequality (\ref{e-obs}) almost everywhere in $\Omega$, while the boundary
datum is attained at all points of $\partial\Omega.$ Here $\mathcal{S}_{X,loc}^{1}$ denotes certain intrinsic Sobolev spaces, defined in Subsection
\ref{fspace}. The main result is the following.
\begin{theorem}
\label{obstacle} Under the assumptions in Subsection \ref{assumptions}, there
exists a unique strong solution to the obstacle problem in \eqref{e-obs}.
Furthermore, given $p$, $1\leq p<\infty$, and an open subset $\Omega^{\prime
}\subset\subset\Omega$ there exists a positive constant $c$, depending on
$\mathcal{H}$, $\Omega^{\prime}$, $\Omega$, $p$, $||f||_{L^{\infty}(\Omega)}$,
$||\gamma||_{L^{\infty}(\Omega)}$, $||g||_{L^{\infty}(\Omega)}$ and
$||\varphi||_{L^{\infty}(\Omega)}$, such that
\begin{equation}
||u||_{\mathcal{S}_{X}^{p}(\Omega^{\prime})}\leq c. \label{Sp bound}
\end{equation}
\end{theorem}
To briefly put Theorem \ref{obstacle} into context we first consider the
parabolic case, that is when $q=n-1$ and $X=\{X_{0},X_{1},...,X_{q}\}$ is
identical to $\{\partial_{t},\partial_{x_{1}},...,\partial_{x_{n-1}}\}$. Then
there is an extensive literature on the existence of generalized solutions to
the obstacle problem in \eqref{e-obs} in Sobolev spaces, starting with the
pioneering papers \cite{FA75}, \cite{vM}, \cite{vM1} and \cite{McK}. The most
extensive and complete treatment of the obstacle problem for the heat equation
is due to Caffarelli, Petrosyan and Shahgholian \cite{CPS} and we refer to
\cite{CPS} for further references.
When $q<n,$ that is in the subelliptic case, with a general lower order term
$X_{0}$ there are, to our knowledge, no results concerning the problem in
\eqref{e-obs}. However, \cite{DGP07} and \cite{DGS03} treat the subelliptic
case when $X_{0}$ is absent. Existence, uniqueness and regularity results for
solutions in the special case when $X_{0}=\partial_{t}$ are contained
in \cite{F11} and \cite{FGN12}. Similar results, but in the case of second
order differential operators of Kolmogorov type, are contained in \cite{FPP},
\cite{FNPP} and \cite{P08}. We note that the results presented here are new
due to the presence of the general lower order term $X_{0},$ and none of
the above mentioned results covers the class of operators studied here, as
demonstrated in Section \ref{exempel}.
The proof of Theorem \ref{obstacle} is based on the classical penalization
technique introduced by Lewy and Stampacchia in \cite{LS69}. In particular, we
consider a family $(\beta_{\varepsilon})_{\varepsilon\in(0,1)}$ of smooth
functions such that, for fixed $\varepsilon\in(0,1),$ $\beta_{\varepsilon}$ is
an increasing function with
\begin{equation}
\beta_{\varepsilon}(0)=0,\ \ \ \beta_{\varepsilon}(s)\leq\varepsilon,\mbox{ whenever }s>0, \label{beta1}
\end{equation}
and such that
\begin{equation}
\lim_{\varepsilon\rightarrow0}\beta_{\varepsilon}(s)=-\infty,\mbox{ whenever }s<0. \label{beta2}
\end{equation}
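For concreteness, one admissible family (a sketch only; any family satisfying \eqref{beta1} and \eqref{beta2} works) can be obtained by rescaling a single smooth profile $\theta$:

```latex
% fix \theta \in C^{\infty}(\mathbf{R}) increasing, with \theta(0)=0,
% \theta \leq 1 on (0,\infty) and \theta(s)=s for s \leq -1, and set
\begin{equation*}
\beta_{\varepsilon}(s)=\varepsilon\,\theta\!\left(\frac{s}{\varepsilon^{2}}\right).
\end{equation*}
```

Then $\beta_{\varepsilon}(0)=0$ and $\beta_{\varepsilon}(s)\leq\varepsilon$ for $s>0$, while for fixed $s<0$ and $\varepsilon$ small enough $s/\varepsilon^{2}\leq-1$, so $\beta_{\varepsilon}(s)=s/\varepsilon\rightarrow-\infty$ as $\varepsilon\rightarrow0$.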
A key step in the proof of Theorem \ref{obstacle} is to consider the penalized
problem
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\mathcal{H}^{\delta}u_{\varepsilon,\delta}+\gamma^{\delta}u_{\varepsilon,\delta}=f^{\delta}+\beta_{\varepsilon}(u_{\varepsilon,\delta}-\varphi^{\delta})\ \ \ \ \ & \text{in }\Omega,\\
u_{\varepsilon,\delta}=g^{\delta} & \text{on }\partial\Omega,
\end{array}
\right. \label{penalized2}
\end{equation}
where the superscript $\delta$, $\delta\in(0,1)$, indicates certain mollified
versions of the objects at hand. The subscripts $\varepsilon,\delta$ in
$u_{\varepsilon,\delta}$ indicate that the solution depends on $\varepsilon$
through the penalizing function $\beta_{\varepsilon}$ and on $\delta$ through
the mollifier. We first prove that a classical solution to the problem in
(\ref{penalized2}) exists. By a classical solution we mean that
$u_{\varepsilon,\delta}\in C_{X}^{2,\alpha}(\Omega)\cap C(\overline{\Omega}),$
where the H{\"{o}}lder space $C_{X}^{2,\alpha}(\Omega)$ is defined in terms of
the intrinsic distance induced by the vector fields. In particular, this implies
that (\ref{penalized2}) is in fact satisfied pointwise.
Thereafter, a monotone iterative method is used to prove that $u_{\varepsilon,\delta}$ is the limit of a sequence $\{u_{\varepsilon,\delta}^{j}\}_{j=1}^{\infty}$ where $u_{\varepsilon,\delta}^{j}\in C_{X}^{2,\alpha}(\Omega)\cap C(\overline{\Omega})$. A key step in the argument is to ensure
compactness in $C_{X,loc}^{2,\alpha}(\Omega)\cap C(\overline{\Omega})$ of the
sequence constructed, which requires the use of certain a priori estimates. In
particular, we use interior Schauder estimates to conclude that there exists a
solution $u_{\varepsilon,\delta}$ to the problem in (\ref{penalized2}) such
that $u_{\varepsilon,\delta}\in C_{X,loc}^{2,\alpha}(\Omega)\cap
C(\overline{\Omega})$.
The final step is then to consider limits as $\varepsilon$ and $\delta$ tend
to $0$ and to prove that $u_{\varepsilon,\delta}\rightarrow u$ where $u$ is a
strong solution to the obstacle problem in \eqref{e-obs}. However, the
penalization technique only allows us to establish quite weak bounds on
$u_{\varepsilon,\delta}$ given that those bounds should be independent of
$\varepsilon$ and $\delta$. Hence, to prove that as $\varepsilon$,
$\delta\searrow0$ the function $u_{\varepsilon,\delta}\rightarrow u$ weakly in
$\mathcal{S}_{X,loc}^{p}$, $p\in\lbrack1,\infty)$, we use a priori interior
$\mathcal{S}_{X}^{p}$ estimates. To be able to subsequently conclude that in
fact $u_{\varepsilon,\delta}\rightarrow u$ in $C_{X,loc}^{1,\alpha}(\Omega)\cap
C(\overline{\Omega})$, we also prove the following theorem.
\begin{theorem}
\label{Embedding}Under the assumptions in Subsection \ref{assumptions}, let
$\Omega^{\prime}\subset\subset\Omega$. If $u\in\mathcal{S}^{p}(\Omega)$, for
some $p\in(Q/2,Q)$, then
\begin{equation}
||u||_{C^{1,\alpha}(\Omega^{\prime})}\leq c||u||_{\mathcal{S}^{p}(\Omega)}
\label{badda}
\end{equation}
for $\alpha=(p-Q)/p$. Moreover, the constant $c$ only depends on $\mathcal{G}$, $\mu$, $p$, $s$, $\Omega$ and $\Omega^{\prime}.$
\end{theorem}
In the context of the circle of techniques and ideas used in this paper it is
also fair to mention \cite{BB00b}, \cite{BZ11}, \cite{FGN12} and \cite{FPP}.
Throughout the paper, when we write that a constant $c$ depends on the
operator $\mathcal{H}$, $c=c(\mathcal{H})$, we mean that the constant $c$
depends on $n$, $q$, $X=\{X_{0},X_{1},...,X_{q}\}$, $\{a_{ij}\}_{i,j=1}^{q}$,
$\{b_{i}\}_{i=1}^{q}$ and$\ \lambda$. Furthermore, if $\alpha$ and $\Omega$
are given, then $c$ only depends on $||a_{ij}||_{C_{X}^{0,\alpha}(\Omega)}$,
$||b_{i}||_{C_{X}^{0,\alpha}(\Omega)}$, and not on any other properties of
these coefficients.
The remainder of this paper is organized as follows. Subsection
\ref{assumptions} contains assumptions on the vector fields, the operator, and
the domain for which our results hold. In Section \ref{prel}, which is of
preliminary nature, we introduce some notation as well as some basic facts
about homogeneous groups and subelliptic metrics, in particular, we account
for the proper function spaces. In Section \ref{estm} we present some
estimates for subelliptic operators. Section \ref{SecProof} is devoted to the
proof of the main theorem, and in Section \ref{SecEm} we prove the embedding
theorem. Finally, in Section \ref{exempel}, we give some examples of operators
to which our results apply. In particular, we demonstrate when and how our
results overlap with known ones and provide the reader with examples for which
our results are new, and not previously considered in the literature.
\section{Assumptions\label{assumptions}}
Here we present the assumptions made to be able to prove Theorem
\ref{obstacle}.
\textbf{The vector fields.} $X=\{X_{0},X_{1},...,X_{q}\}$ is a system of
smooth vector fields in $\mathbf{R}^{n}$ satisfying two main conditions, the
first of which is H\"{o}rmander's finite rank condition. To further explain
this, recall that the Lie-bracket between two vector fields $X_{i}$ and
$X_{j}$ is defined as $[X]_{i,j}=[X_{i},X_{j}]=X_{i}X_{j}-X_{j}X_{i}.$ For an
arbitrary multiindex $\alpha=(\alpha_{1},..,\alpha_{\ell})$, $\alpha_{k}\in\{0,1,..,q\}$, we define weights
\[
w_{0}=2\text{ and }w_{i}=1\text{ for }i=1,...,q.
\]
Using this we set
\begin{equation}
|\alpha|=\sum_{i=1}^{\ell}w_{\alpha_{i}} \label{multiindex}
\end{equation}
and define the commutator $[X]_{\alpha}$ of length $|\alpha|$ by
\[
\lbrack X]_{\alpha}=[X_{\alpha_{\ell}},[X_{\alpha_{\ell-1}},...,[X_{\alpha_{2}},X_{\alpha_{1}}]]].
\]
$X=\{X_{0},X_{1},...,X_{q}\}$ is said to satisfy H{\"{o}}rmander's finite rank
condition, introduced in \cite{H67}, if there exists an integer $s$,
$s<\infty$, such that
\begin{equation}
\text{Lie}(X_{0},X_{1},\ldots,X_{q})=\{[X]_{\alpha}:\alpha_{i}\in\{0,1,...,q\},\ |\alpha|\leq s\}\text{ spans }\mathbf{R}^{n}\text{ at every
point}. \label{1.1x}
\end{equation}
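As a concrete illustration of the finite rank condition \eqref{1.1x}, consider the model fields $X_{1}=\partial_{x_{1}}-x_{2}\partial_{x_{3}}$ and $X_{2}=\partial_{x_{2}}+x_{1}\partial_{x_{3}}$ in $\mathbf{R}^{3}$ (a Heisenberg-type example chosen here purely for illustration, not necessarily one of the operators of Section \ref{exempel}). The following minimal sketch computes their Lie bracket numerically; central differences are exact up to rounding because the coefficients are affine:

```python
# Vector fields as callables R^3 -> R^3 (their coefficient vectors).
def X1(p):
    x1, x2, x3 = p
    return (1.0, 0.0, -x2)   # X1 = d/dx1 - x2 d/dx3

def X2(p):
    x1, x2, x3 = p
    return (0.0, 1.0, x1)    # X2 = d/dx2 + x1 d/dx3

def partial(F, p, j, k, h=1e-6):
    # central difference of the k-th coefficient of F in direction j
    q_plus = list(p); q_plus[j] += h
    q_minus = list(p); q_minus[j] -= h
    return (F(q_plus)[k] - F(q_minus)[k]) / (2.0 * h)

def bracket(X, Y, p):
    # [X, Y]_k = sum_j ( X_j dY_k/dx_j - Y_j dX_k/dx_j )
    return tuple(
        sum(X(p)[j] * partial(Y, p, j, k) - Y(p)[j] * partial(X, p, j, k)
            for j in range(3))
        for k in range(3))

p = (0.3, -1.2, 0.7)     # arbitrary base point
B = bracket(X1, X2, p)
print(B)                 # approximately (0, 0, 2)
```

The result $[X_{1},X_{2}]=2\partial_{x_{3}}$ means that $X_{1}$, $X_{2}$ and $[X_{1},X_{2}]$ span $\mathbf{R}^{3}$ at every point, so \eqref{1.1x} holds with $s=2$.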
Moreover, we assume that there exists a family of dilations $\{D_{\lambda}\}_{\lambda>0}$ in $\mathbf{R}^{n},$ such that $X_{1}$, $X_{2}$,
$\ldots,X_{q}$ are of $D_{\lambda}$-homogeneous degree one, and $X_{0}$ is of
$D_{\lambda}$-homogeneous degree 2.
These two conditions are enough to ensure the existence of a composition law
$\circ$ such that the triplet $(\mathbf{R}^{n},\circ,D_{\lambda})$ is a
homogeneous Lie group where the $X_{i}$'s are left invariant, see Subsection
\ref{homogen}. However, this homogeneity assumption is \textbf{only} essential
to proving the embedding theorem, Theorem \ref{Embedding}. This means that,
should the embedding theorem become available for general H\"{o}rmander vector
fields, then this proof carries over directly to this more general case.
\textbf{The coefficients. }Concerning the $q\times q$ matrix-valued function
$A=A(x)=\{a_{ij}(x)\}=\{a_{ij}\}$ and $a_{0}$ we assume that $A=\{a_{ij}\}$ is
real symmetric, with bounded and measurable entries and that there exists
$\lambda\in\lbrack1,\infty)$ such that
\begin{equation}
\lambda^{-1}|\xi|^{2}\leq\sum_{i,j=1}^{q}a_{ij}(x)\xi_{i}\xi_{j}\leq
\lambda|\xi|^{2},\ \ \lambda^{-1}\leq a_{0}(x)\leq\lambda,\ \ \ \mbox{ whenever }x\in\mathbf{R}^{n},\ \xi\in\mathbf{R}^{q}. \label{1.2}
\end{equation}
Concerning the regularity of $a_{ij}$ and $b_{i}$, we assume that they have
further regularity beyond being merely bounded and measurable. In fact, we
assume that
\begin{equation}
a_{ij},\ b_{i}\in C_{X,loc}^{0,\alpha}(\mathbf{R}^{n})\mbox{ whenever }i,j\in
\{1,..,q\}, \label{1.2+}
\end{equation}
for $\alpha\in(0,1),$ where $C_{X,loc}^{0,\alpha}(\mathbf{R}^{n})$ is the
space of functions which are bounded and H{\"{o}}lder continuous on every
compact subset of $\mathbf{R}^{n}$. Here the subscript $X$ indicates that
H{\"{o}}lder continuity is defined in terms of the Carnot-Carath\'{e}odory
distance induced by the set of vector fields $X$, see Section \ref{fspace}. In
particular, by (\ref{1.2}) we may divide the entire equation by $a_{0},$ and
hence consider (\ref{operator}) with $a_{0}=1$.
\textbf{The domain. }$\Omega$ is assumed to be a bounded domain such that
there exists, for all $\varsigma\in\partial\Omega$ and in the sense of Definition
\ref{eter}, an exterior normal $v$ to $\overline{\Omega}$ relative to
$\widetilde{\Omega},$ such that $C(\varsigma)v\neq0.$ Here $\widetilde{\Omega}$
is a neighborhood of $\overline{\Omega}$ and $C(\cdot)$ is the matrix-valued
function given by $(X_{1},...,X_{q})^{T}=C(x)\cdot(\partial_{x_{1}},...,\partial_{x_{n}})^{T}$. The assumption $C(\varsigma)v\neq0$ ensures that
(\ref{fafa}) holds, and thus, that we can use Theorem \ref{Bony}.
\textbf{The equation.} Let $f,\gamma,g,\varphi:\overline{\Omega}\rightarrow\mathbf{R}$ be such that $g\geq\varphi$ on $\overline{\Omega}$
and assume that $f,\gamma,g,\varphi$ are continuous and bounded on
$\overline{\Omega}$, with $\gamma\leq\gamma_{0}<0$.
Concerning the obstacle $\varphi$ we assume that $\varphi$ is Lipschitz
continuous on $\overline{\Omega}$, where Lipschitz continuity is defined in
terms of the intrinsic homogeneous distance. We also assume that there exists
a constant $c\in\mathbf{R}^{+}$ such that
\begin{equation}
\sum_{i,j=1}^{q}\zeta_{i}\zeta_{j}\int_{\Omega}X_{i}X_{j}\psi(x)\varphi(x)dx\geq c|\zeta|^{2}\int_{\Omega}\psi(x)dx \label{dobs}
\end{equation}
for all $\zeta\in\mathbf{R}^{q}$ and for all nonnegative test functions
$\psi\in C_{0}^{\infty}(\Omega)$. The reader might want to think of this as a
convexity assumption.
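For smooth $\varphi$ and divergence-free horizontal fields $X_{i}$, integrating by parts in \eqref{dobs} shows that the condition amounts to uniform positivity of the symmetrized horizontal Hessian of $\varphi$. As an illustration only, with the model fields $X_{1}=\partial_{x_{1}}-x_{2}\partial_{x_{3}}$ and $X_{2}=\partial_{x_{2}}+x_{1}\partial_{x_{3}}$ (not assumed elsewhere in this paper), one computes

```latex
\varphi(x)=\tfrac{1}{2}\left(x_{1}^{2}+x_{2}^{2}\right)
\quad\Longrightarrow\quad
X_{1}X_{1}\varphi=X_{2}X_{2}\varphi=1,\ \ X_{1}X_{2}\varphi=X_{2}X_{1}\varphi=0,
\quad\text{so}\quad
\sum_{i,j=1}^{2}\zeta_{i}\zeta_{j}X_{i}X_{j}\varphi=|\zeta|^{2},
```

and \eqref{dobs} holds for this $\varphi$ with $c=1$.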
\section{Preliminaries\label{prel}}
In this section we introduce notations and concepts to be used throughout the
paper. For a more detailed exposition we refer to the monograph \cite{BLU}
written by Bonfiglioli, Lanconelli and Uguzzoni.
In the following we assume that $X=\{X_{0},X_{1},...,X_{q}\}$ satisfies
\eqref{1.1x}. From now on we will write $Xf$ when a vector field $X$ acts on a
function $f$ as a differential operator. We begin by defining the
Carnot-Carath\'{e}odory distance, also known as the control distance, see
\cite{BBP} and \cite{NSW85}.
\begin{definition}
For any $\delta>0,$ let $\Gamma(\delta)$ be the class of all absolutely
continuous mappings $\gamma:[0,1]\rightarrow\Omega$ such that
\[
\gamma^{\prime}(t)=\sum_{i=0}^{q}\lambda_{i}(t)X_{i}(\gamma(t))\ \ \text{a.e. }t\in(0,1)
\]
with $|\lambda_{0}(t)|\leq\delta^{2}$ and $|\lambda_{i}(t)|\leq\delta$ for
$i=1,...,q$. Then we define the Carnot-Carath\'{e}odory distance between two
points $x,y\in\Omega$ to be
\[
d(x,y)=\inf\{\delta:\exists\gamma\in\Gamma(\delta)\text{ with }\gamma(0)=x\text{ and }\gamma(1)=y\}.
\]
\end{definition}
It is a non-trivial result that any two points in $\Omega$ can be connected by
such a curve, and the proof relies on a connectivity result of Chow,
\cite{Cho40}. We remark that the Carnot-Carath\'{e}odory distance $d$ is in
fact a quasi-distance because the triangle inequality does not hold. Instead,
the inequality has the form
\[
d(x,z)\leq C(d(x,y)+d(y,z))
\]
where the constant $C$ depends on the vector fields. Moreover, there exist
constants $c_{1},c_{2}$, depending on $\Omega$, such that
\begin{equation}
c_{1}|x-y|\leq d(x,y)\leq c_{2}|x-y|^{1/s}\ \ \text{for all }x,y\in\Omega,
\label{Carnot-Euclid}
\end{equation}
where $s$ is the rank in the H\"{o}rmander condition, see Proposition 1.1 in
\cite{NSW85}. This is not immediate, but follows from \cite[Section 5]{BBP}.
\subsection{Homogeneous groups\label{homogen}}
Let $\circ$ be a given group law on $\mathbf{R}^{n}$ and suppose that the map
$(x,y)\mapsto y^{-1}\circ x$ is smooth. Then $\mathbf{G}=(\mathbf{R}^{n},\circ)$ is called a \emph{Lie group}. $\mathbf{G}$ is said to be \emph{homogeneous}
if there exists a family of \emph{dilations} $\left( D_{\lambda}\right)_{\lambda>0}$ on $\mathbf{G},$ which are also automorphisms, of the form
\begin{equation}
D_{\lambda}(x)=D_{\lambda}(x^{(1)},...,x^{(l)})=(\lambda x^{(1)},...,\lambda^{l}x^{(l)})=(\lambda^{\sigma_{1}}x_{1},...,\lambda^{\sigma_{n}}x_{n}), \label{str1}
\end{equation}
where $1\leq\sigma_{1}\leq...\leq\sigma_{n}.$ Note that in \eqref{str1} we
have that $x^{(i)}\in\mathbf{R}^{n_{i}}$ for $i\in\{1,...,l\}$ and
$n_{1}+...+n_{l}=n$. On $\mathbf{G}$ we define a homogeneous norm $||\cdot||$
as follows: for $x\in\mathbf{R}^{n}$, $x\neq0,$ set
\[
||x||=\rho\text{ \ \ if and only if \ \ }|D_{1/\rho}(x)|=1,
\]
where $|\cdot|$ denotes the standard Euclidean norm, and set $||0||=0.$ This
norm satisfies the following:
\begin{description}
\item[i)] $||D_{\lambda}(x)||=\lambda||x||$ for all $x\in\mathbf{R}^{n},\ \lambda>0.$
\item[ii)] The set $\{x\in\mathbf{R}^{n}:||x||=1\}$ coincides with the
Euclidean unit sphere.
\item[iii)] There exists $c(\mathbf{G})\geq1$ such that for every
$x,y\in\mathbf{R}^{n}$
\[
||x\circ y||\leq c(||x||+||y||)\text{ \ \ and \ \ }||x^{-1}||\leq c||x||.
\]
\end{description}
\noindent We also define a quasidistance $d$ on $\mathbf{R}^{n}$ through
\[
d(x,y)=||y^{-1}\circ x||.
\]
For this quasidistance there exists $c=c(\mathbf{G})$ such that for all
$x,y,z\in\mathbf{R}^{n}$ the following holds;
\begin{description}
\item[iv)] $d(x,y)\geq0$ and $d(x,y)=0$ if and only if $x=y$.
\item[v)] $c^{-1}d(y,x)\leq d(x,y)\leq cd(y,x).$
\item[vi)] $d(x,y)\leq c(d(x,z)+d(z,y)).$
\end{description}
The previously mentioned Carnot-Carath\'{e}odory distance is one example of an
appropriate distance function. Alternatively, one could begin by defining
$||x||=\sum_{j=1}^{n}|x_{j}|^{1/\sigma_{j}}$, with the induced distance
$d(x,y)=||x^{-1}\circ y||$ satisfying the properties above as well.
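Property i) can be checked directly for this explicit norm, since $|\lambda^{\sigma_{j}}x_{j}|^{1/\sigma_{j}}=\lambda|x_{j}|^{1/\sigma_{j}}$ for $\lambda>0$. A minimal numeric sketch (the exponents $\sigma=(1,1,2)$ are an assumed model choice, not taken from the text):

```python
def dilate(lam, x, sigma):
    # D_lambda(x)_j = lambda**sigma_j * x_j
    return [lam ** s * xi for s, xi in zip(sigma, x)]

def hom_norm(x, sigma):
    # ||x|| = sum_j |x_j|**(1/sigma_j)
    return sum(abs(xi) ** (1.0 / s) for s, xi in zip(sigma, x))

sigma = (1, 1, 2)          # assumed Heisenberg-type exponents
x = [0.5, -1.3, 0.7]
lam = 2.5

lhs = hom_norm(dilate(lam, x, sigma), sigma)
rhs = lam * hom_norm(x, sigma)
print(abs(lhs - rhs))      # ~0 up to rounding: ||D_lambda(x)|| = lambda ||x||
```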
\noindent We define balls with respect to $d$ by
\[
B(x,r)=\{y\in\mathbf{R}^{n}:d(x,y)<r\}.
\]
In particular, we note that $D_{r}(B(0,1))=B(0,r).$ Moreover, in \cite[p.
619]{S} it is proved that the Lebesgue measure in $\mathbf{R}^{n}$ is the Haar
measure of $\mathbf{G}$ and that
\begin{equation}
|B(x,r)|=|B(0,1)|r^{Q}, \label{bollar}
\end{equation}
where $Q$ is the natural number
\begin{equation}
Q:=n_{1}+2n_{2}+...+ln_{l}, \label{e-Q}
\end{equation}
also called the \emph{homogeneous dimension} of $\mathbf{G}$.
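The identity \eqref{bollar} reflects the Jacobian of the dilation: $D_{\lambda}$ is diagonal with determinant $\lambda^{\sigma_{1}+\cdots+\sigma_{n}}=\lambda^{Q}$, since each exponent $i$ occurs with multiplicity $n_{i}$, and $B(0,r)=D_{r}(B(0,1))$. A small sketch, again with the assumed model exponents $\sigma=(1,1,2)$:

```python
sigma = (1, 1, 2)        # assumed model exponents; here Q = 1 + 1 + 2 = 4

Q = sum(sigma)           # homogeneous dimension via Q = sum_j sigma_j

def jacobian(lam, sigma):
    # determinant of the diagonal map D_lam(x)_j = lam**sigma_j * x_j
    det = 1.0
    for s in sigma:
        det *= lam ** s
    return det

lam = 3.0
print(jacobian(lam, sigma), lam ** Q)   # 81.0 81.0
```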
The {convolution} of two functions $f,g,$ defined on $\mathbf{G},$ is defined
as
\[
(f\ast g)(\zeta)=\int_{\mathbf{R}^{n}}f(\zeta\circ\xi^{-1})g(\xi)d\xi
\]
whenever the integral is well defined. Let $P$ be a differential operator and
let $\tau_{\xi}$ be the {left translation operator}, i.e., $(\tau_{\xi
}f)(\zeta)=f(\xi\circ\zeta)$ whenever $f$ is a function on $\mathbf{G}$. A
differential operator $P$ is said to be {left invariant} if
\[
P(\tau_{\xi}f)=\tau_{\xi}(Pf).
\]
Further, we say that the differential operator $P$ is {homogeneous of degree
}$\delta$ if, for every test function $f,\ \lambda>0$ and $\xi\in\mathbf{R}^{n}$,
\[
P(f(D(\lambda)\xi))=\lambda^{\delta}(Pf)(D(\lambda)\xi).
\]
Similarly, a function $f$ is {homogeneous of degree }$\delta$ if
\[
f(D(\lambda)\xi)=\lambda^{\delta}f(\xi)\mbox{ whenever
$\lambda >0$, $\ \xi \in \mathbf{R}^{n}$}.
\]
Note that if $P$ is a differential operator homogeneous of degree $\delta_{1}$
and if $f$ is a function homogeneous of degree $\delta_{2}$ then $fP$ is a
differential operator homogeneous of degree $\delta_{1}-\delta_{2}$ and $Pf$
is a function homogeneous of degree $\delta_{2}-\delta_{1}.$ We conclude this
section with a proposition which will be used to prove the embedding theorem,
see \cite[Proposition 1.15]{F75}.
\begin{proposition}
\label{MMM}Let $f\in C^{1}(\mathbf{R}^{n}\backslash\{0\})$ be homogeneous of
degree $\delta.$ Then there exist $c=c(\mathcal{G},f)>0$ and $M=M(\mathcal{G})>1$ such that
\[
|f(x\circ y)-f(x)|+|f(y\circ x)-f(x)|\leq c||y||\cdot||x||^{\delta-1},
\]
for every $x,y$ such that $||x||\geq M||y||$.
\end{proposition}
\subsection{Function spaces\label{fspace}}
Let $U\subset\mathbf{R}^{n}$ be a bounded domain and let $\alpha\in(0,1]$.
Given $U$ and $\alpha$ we define the H\"{o}lder space $C_{X}^{0,\alpha}(U)$
as $C_{X}^{0,\alpha}(U)=\{u:U\rightarrow\mathbf{R}:\ ||u||_{C_{X}^{0,\alpha}(U)}<\infty\}$, where
\begin{align*}
||u||_{C_{X}^{0,\alpha}(U)} & =|u|_{C_{X}^{0,\alpha}(U)}+||u||_{L^{\infty}(U)},\\
|u|_{C_{X}^{0,\alpha}(U)} & =\sup\left\{ \frac{|u(x)-u(y)|}{d(x,y)^{\alpha}}:x,y\in U\text{ and }x\neq y\right\} .
\end{align*}
Given a multiindex $I=(i_{1},i_{2},...,i_{m})$, with $0\leq i_{j}\leq q$,
$1\leq j\leq m$, we define the weighted length of the multiindex, $|I|$,
as in (\ref{multiindex}) and we set $X^{I}u=X_{i_{1}}X_{i_{2}}\cdots
X_{i_{m}}u$. Now, given a domain $U$, an exponent $\alpha$ and an arbitrary
non-negative integer $k$ we define $C_{X}^{k,\alpha}(U):=\{u:U\rightarrow
\mathbf{R}:\ ||u||_{C_{X}^{k,\alpha}(U)}<\infty\}$, where
\[
||u||_{C_{X}^{k,\alpha}(U)}=\sum_{|I|\leq k}||X^{I}u||_{C_{X}^{0,\alpha}(U)}.
\]
Sobolev spaces are defined as
\[
\mathcal{S}_{X}^{p}(U)=\left\{ u\in L^{p}(U):X_{0}u,\ X_{i}u,\ X_{i}X_{j}u\in
L^{p}(U)\text{ for }i,j=1,...,q\right\}
\]
and we define the Sobolev norm of a function $u$ by
\[
||u||_{\mathcal{S}_{X}^{p}(U)}=||u||_{L^{p}(U)}+\sum\limits_{i=0}^{q}||X_{i}u||_{L^{p}(U)}+\sum\limits_{i,j=1}^{q}||X_{i}X_{j}u||_{L^{p}(U)}.
\]
Above, the $L^{p}$-norms are taken with respect to the standard Euclidean
metric, in particular, we integrate with respect to the Lebesgue measure. Let
$U\subset\mathbf{R}^{n}$ be a domain, not necessarily bounded. If $u\in
C_{X}^{k,\alpha}(V)$ for every compact subset $V$ of $U$, then we say that
$u\in C_{X,loc}^{k,\alpha}(U).$ Similarly, if $u\in\mathcal{S}_{X}^{p}(V)$ for
every compact subset $V$ of $U$, then we say that $u\in\mathcal{S}_{X,loc}^{p}(U).$
An important result about compactly supported test functions multiplied by
Sobolev functions is the following lemma \cite[Corollary 1]{BB00b}.
\begin{lemma}
\label{cutoff}If $u\in\mathcal{S}^{p}(\Omega),\ 1\leq p<\infty,$ and $\phi\in
C_{0}^{\infty}(\Omega),$ then $u\phi\in\mathcal{S}_{0}^{p}(\Omega).$
\end{lemma}
This lemma will be used when $\phi$ is a cutoff function. The existence of
smooth cutoff functions is not immediate, but by \cite[Lemma 5]{BB00b}, we
have the following.
\begin{lemma}
\label{cutoff2}For any $\sigma\in(0,1),r>0,k\in\mathbf{Z}_{+},$ there exists
$\phi\in C_{0}^{\infty}(\mathbf{R}^{n})$ with the following properties:
\[
B_{\sigma r}\prec\phi\prec B_{\sigma^{\prime}r}\ \ \ \text{with }\sigma^{\prime}=(1+\sigma)/2;
\]
\[
|X^{\alpha}\phi|\leq\frac{c(\mathcal{G},j)}{\sigma^{j-1}(1-\sigma)^{j}r^{j}}\text{ \ \ for all multiindices }|\alpha|=j\in\{1,...,k\}.
\]
\end{lemma}
\section{Estimates for subelliptic operators with drift\label{estm}}
Here we collect a number of theorems which concern subelliptic operators with
drift, all of which are important tools in the proof of the obstacle problem.
We begin with a result of Bony \cite[Theoreme 5.2]{B69} which is both a
comparison principle and a result on solvability of the Dirichlet problem.
Before we state the theorem we introduce the notion of an exterior normal.
\begin{definition}
\label{eter} A vector $v$ in $\mathbf{R}^{n}$ is an exterior normal to a
closed set $S\subset\mathbf{R}^{n}$ relative to an open set $U$ at a point
$x_{0}$ if there exists an open standard Euclidean ball $B_{E}$ in
$U\backslash S$ centered at $x_{1}$ such that $x_{0}\in\overline{B_{E}}$ and
$v=\lambda(x_{1}-x_{0})$ for some $\lambda>0.$
\end{definition}
\begin{theorem}
\label{Bony}(Bony) Let $U\subset\mathbf{R}^{n}$ be a bounded domain and let
$H:=\sum_{i=1}^{r}Y_{i}^{2}+Y_{0}+\gamma=$ $\sum_{i,j=1}^{n}a_{ij}^{\ast
}\partial_{x_{i}x_{j}}+\sum_{i=1}^{n}a_{i}^{\ast}\partial_{x_{i}}+\gamma$.
Assume that the set of vector fields $Y=\{Y_{0},Y_{1},...,Y_{r}\}$ satisfies
H\"{o}rmander's finite rank condition, that $\gamma(x)\leq\gamma_{0}<0$ for all $x\in U$ and
that $a_{ij}^{\ast},\ a_{i}^{\ast},\ \gamma\in C^{\infty}(U).$ In addition,
assume that for all $x\in U$ and for all $\xi\in\mathbf{R}^{n}$ the quadratic
form $\sum_{i,j=1}^{n}a_{ij}^{\ast}(x)\xi_{i}\xi_{j}\geq0.\ $Further, assume
that $D$ is a relatively compact subset of $U$ and that at every point
$x_{0}\in\partial D$ there exists an exterior normal $v$ such that
\begin{equation}
\sum_{i,j=1}^{n}a_{ij}^{\ast}(x_{0})v_{i}v_{j}>0. \label{fafa}
\end{equation}
Then, for all $g\in C(\partial D)$ and $f\in C(\overline{D})$, the Dirichlet
problem
\[
\left\{
\begin{array}
[c]{cc}
Hu=-f, & \mbox{in $D$},\\
u=g, & \mbox{on $\partial D$},
\end{array}
\right.
\]
has a unique solution $u\in C(\overline{D})$. Furthermore, if $f\in C^{\infty
}(D)$, then $u\in C^{\infty}(D)$ and if $f$ and $g$ are both positive then so
is $u$.
\end{theorem}
We remark that we cannot use this theorem directly since we only assume that
our coefficients $a_{ij}$ and $b_{i}$ are H\"{o}lder continuous. However, for
smooth coefficients and using linear algebra, our operator $\mathcal{H}$ in
(\ref{operator}) can be rewritten as a H\"{o}rmander operator in accordance
with Bony's assumptions. We will also use a Schauder type estimate, the
particular one we use can be found in \cite[Theorem 2.1]{BZ11}.
\begin{theorem}
\label{Schauder} (Schauder estimate) Assume that the operator $\mathcal{H}$ is
structured on a set of smooth H\"{o}rmander vector fields and that the
coefficients $a_{ij},b_{i}\in C_{X}^{0,\alpha}(\Omega)$ for some $\alpha
\in(0,1),\ a_{0}\in L^{\infty}(\Omega).$ Then for every domain $\Omega
^{\prime}\subset\subset\Omega$ there exists a constant $c,$ depending on
$\Omega^{\prime}$, $\Omega$, $X$, $\alpha$, $\lambda$, $||a_{ij}||_{C_{X}^{0,\alpha}(\Omega)}$, $||b_{i}||_{C_{X}^{0,\alpha}(\Omega)}$ and
$||a_{0}||_{C_{X}^{0,\alpha}(\Omega)}$ such that for every $u\in
C_{X}^{2,\alpha}(\Omega)$ one has
\[
||u||_{C_{X}^{2,\alpha}(\Omega^{\prime})}\leq c\left\{ ||\mathcal{H}u||_{C_{X}^{0,\alpha}(\Omega)}+||u||_{L^{\infty}(\Omega)}\right\} .
\]
\end{theorem}
We emphasize that in \cite{BZ11} this is only proved when the lower order
terms $b_{i}\equiv0.$ However, by arguing as in the proof of Theorem 10.1 in
\cite{BB07} this also holds for $b_{i}\in C_{X}^{0,\alpha}(\Omega).$ This
Schauder estimate will be used together with an a priori $\mathcal{S}^{p}$
interior estimate to ensure proper convergence of a constructed sequence,
converging to a solution to the obstacle problem. The proof is to be found in
\cite[Theorem 2.2]{BZ11}.
\begin{theorem}
\label{a priori} (A priori $\mathcal{S}^{p}$ interior estimate) Assume that
the operator $\mathcal{H}$ is structured on a set of smooth H\"{o}rmander
vector fields and that the coefficients $a_{ij}\in C_{X}^{0,\alpha}$ for some
$\alpha\in(0,1).$ Then for every domain $\Omega^{\prime}\subset\subset\Omega$
there exists a constant $c,$ depending on $\Omega^{\prime}$, $\Omega$, $X$,
$\alpha$, $\lambda$, $||a_{ij}||_{C_{X}^{0,\alpha}(\Omega)}$, $||b_{i}||_{C_{X}^{0,\alpha}(\Omega)}$ and $||a_{0}||_{C_{X}^{0,\alpha}(\Omega)}$ such
that for every $u\in\mathcal{S}_{X}^{p}(\Omega)$ one has
\[
||u||_{\mathcal{S}_{X}^{p}(\Omega^{\prime})}\leq c\left\{ ||\mathcal{H}u||_{L^{p}(\Omega)}+||u||_{L^{\infty}(\Omega)}\right\} .
\]
\end{theorem}
Here too, we can generalize the results in \cite{BZ11} to hold for $b_{i}\in
C_{X}^{0,\alpha}$, this time arguing as in Section 5.5 in \cite{FGN12}.
\section{Proof of Theorem \ref{obstacle}\label{SecProof}}
To prove Theorem \ref{obstacle} we will, as outlined in the introduction, use
the classical penalization technique and we let $(\beta_{\varepsilon
})_{\varepsilon\in(0,1)}$ be a family of smooth functions satisfying
\eqref{beta1} and \eqref{beta2}. For $\delta\in(0,1)$ we let $\mathcal{H}^{\delta}$ denote the operator obtained from $\mathcal{H}$ by regularization
of the coefficients $a_{ij},\ b_{i},\ i,j=1,...,q,$ using a smooth mollifier,
\[
\mathcal{H}^{\delta}=\sum_{i,j=1}^{q}a_{ij}^{\delta}(x)X_{i}X_{j}+\sum
_{i=1}^{q}b_{i}^{\delta}(x)X_{i}-X_{0},\ \ x\in\mathbb{R}^{n}.
\]
We also regularize $\varphi$, $\gamma$ and $f$ and denote the regularizations
$\varphi^{\delta}$, $\gamma^{\delta}$ and $f^{\delta}$ respectively.
Especially, we are able to extend these functions by continuity to a
neighborhood of $\Omega$. As stated in the introduction, see the discussion
above \eqref{dobs}, we assume that $\varphi$ is Lipschitz continuous on
$\overline{\Omega}$ and we denote its Lipschitz norm on $\overline{\Omega}$ by $\mu.$ Then, since $g\geq\varphi$ on $\partial\Omega$ we see that
\[
g^{\delta}:=g+\mu\delta\geq\varphi^{\delta}\mbox{ on }\partial\Omega.
\]
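Indeed, assuming (as is standard, and this is our convention here) that the mollifier is supported in a Euclidean ball of radius $\delta$, the Lipschitz bound gives, for $x\in\partial\Omega$,
\[
\varphi^{\delta}(x)\leq\sup_{|y|\leq\delta}\varphi(x-y)\leq\varphi(x)+\mu\delta\leq g(x)+\mu\delta=g^{\delta}(x).
\]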
Note that since $g$ is continuous, $g^{\delta}$ is also continuous and can
thus be used as a boundary value function. As a first step we consider the
penalized problem
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\mathcal{H}^{\delta}u+\gamma^{\delta}u=f^{\delta}+\beta_{\varepsilon
}(u-\varphi^{\delta})\ \ \ \ \ & \text{in }\Omega,\\
u=g^{\delta} & \text{on }\partial\Omega
\end{array}
\right. \label{penalized}
\end{equation}
and we prove that there exists a classical solution to this problem. This is
achieved in two steps, the first being:
\begin{theorem}
\label{thm2} Assume that $\mathcal{H}$ satisfies (\ref{1.1x}), (\ref{1.2}) and
(\ref{1.2+}), let $\Omega\subset\mathbf{R}^{n}$ be a bounded domain. Assume
that at every point $x_{0}\in\partial\Omega$ there exists an exterior normal
satisfying condition \eqref{fafa} in Theorem \ref{Bony}. Let $g\in
C(\partial\Omega)$ and let $h=h(x,u)$ be a smooth Lipschitz continuous
function, in the standard Euclidean sense, on $\overline{\Omega}.$ Then there
exists a classical solution $u\in C^{2,\alpha}(\Omega)\cap C(\overline{\Omega})$ to the problem
\[
\left\{
\begin{array}
[c]{ll}
\mathcal{H}^{\delta}u=h(\cdot,u)\ \ \ \ & \text{in }\Omega,\\
u=g & \text{on }\partial\Omega.
\end{array}
\right.
\]
Furthermore, there exists a positive constant $c$, only depending on $h$ and
$\Omega$, such that
\begin{equation}
\sup_{\Omega}|u|\leq c\left( 1+||g||_{L^{\infty}(\partial\Omega)}\right) .
\label{hha}
\end{equation}
\end{theorem}
\begin{proof}
To prove Theorem \ref{thm2} we will use the same technique as in the proof of
Theorem 3.2 in \cite{FPP}, i.e., a monotone iterative method. To start the
proof we note that, since $h=h(x,u)$ is a Lipschitz continuous function in the
standard Euclidean sense, there exists a constant $\mu$ such that
$|h(x,u)|\leq\mu(1+|u|)$ for $x\in\overline{\Omega}$. We let
\begin{equation}
\label{init}u_{0}(x)=c(1+||g||_{L^{\infty}(\partial\Omega)})-1,
\end{equation}
for some constant $c$ to be chosen later, and we recursively define, for
$j=1,2,...,$
\begin{equation}
\left\{
\begin{array}
[c]{ll}
\mathcal{H}^{\delta}u_{j}-\mu u_{j}=h(\cdot,u_{j-1})-\mu u_{j-1}\ \ \ \ \ &
\text{in }\Omega,\\
u_{j}=g & \text{on }\partial\Omega.
\end{array}
\right. \label{monotone pde}
\end{equation}
The linear Dirichlet problem in (\ref{monotone pde}) has been studied by Bony
in \cite{B69} and since the coefficients of the operator $\mathcal{H}^{\delta
}$ are smooth in a neighborhood of $\Omega$ it follows that $\mathcal{H}^{\delta}$ can be rewritten as a H\"{o}rmander operator in line with Theorem
\ref{Bony}. Hence, using Theorem \ref{Bony} we can conclude that a classical
solution $u_{j}\in C^{\infty}(\Omega)$ exists. In particular $u_{j}\in
C(\overline{\Omega})$ and combining Theorem \ref{Bony} with
(\ref{Carnot-Euclid})\ it follows that $u_{j}\in C_{loc}^{2,\alpha}(\Omega).$
We prove, by induction, that $\{u_{j}\}_{j=1}^{\infty}$ is a decreasing
sequence. By definition $u_{1}<u_{0}$ on $\partial\Omega$ and we can choose
the constant $c$ appearing in the definition of $u_{0}$, depending on $h$, so
that
\[
\mathcal{H}^{\delta}(u_{1}-u_{0})-\mu(u_{1}-u_{0})=h(\cdot,u_{0})-\mathcal{H}^{\delta}u_{0}=h(\cdot,u_{0})+c(1+u_{0})\geq0
\]
holds. Thus, by the maximum principle, stated at the end of Theorem
\ref{Bony}, we conclude that $u_{1}<u_{0}$ on $\overline{\Omega}.$ Assume, for
fixed $j\in\mathbb{N}$, that $u_{j}<u_{j-1}.$ Then by the inductive hypothesis we see that
\begin{align*}
\mathcal{H}^{\delta}(u_{j+1}-u_{j})-\mu(u_{j+1}-u_{j}) & =h(\cdot
,u_{j})-h(\cdot,u_{j-1})-\mu(u_{j}-u_{j-1})\\
& =h(\cdot,u_{j})-h(\cdot,u_{j-1})+\mu|u_{j}-u_{j-1}|\geq0.
\end{align*}
Hence, by the maximum principle $u_{j+1}<u_{j}$ which proves that
$\{u_{j}\}_{j=1}^{\infty}$ is a decreasing sequence. By repeating this
calculation for $u_{j}+u_{0}$, we get the following bounds
\begin{equation}
-u_{0}\leq u_{j+1}\leq u_{j}\leq u_{0}.\label{uj-bound}
\end{equation}
As $u_{j}\in C_{loc}^{2,\alpha}(\Omega)\cap C(\overline{\Omega})$ we can now
use Theorem \ref{Schauder} to conclude that
\begin{align}
||u_{j}||_{C^{2,\alpha}(U)} & \leq c\left( \sup_{\Omega}|u_{j}|+||\mathcal{H}^{\delta}u_{j}||_{C^{0,\alpha}(\Omega)}\right) \nonumber\\
& \leq c\left( u_{0}+||h(\cdot,u_{j-1})||_{C^{0,\alpha}(\Omega)}+||\mu
(u_{j}-u_{j-1})||_{C^{0,\alpha}(\Omega)}\right) ,\label{uj-c2alfa}
\end{align}
whenever $U$ is a compact subset of $\Omega$. Thus $||u_{j}||_{C^{2,\alpha
}(U)}$ is clearly bounded by some constant $c$ independent of $j$ due to
(\ref{uj-bound})-(\ref{uj-c2alfa}) and the fact that $h$ is Lipschitz. Thus
$\{u_{j}\}_{j=1}^{\infty}$ has a convergent subsequence in $C_{loc}^{2,\alpha
}(\Omega)$ and in the following we will denote the convergent subsequence by
$\{u_{j}\}_{j=1}^{\infty}$. Letting $j\rightarrow\infty$ in (\ref{monotone pde}) we
see that
\[
\left\{
\begin{array}
[c]{ll
\mathcal{H}^{\delta}u=h(\cdot,u)\ \ \ \ \ & \text{in }\Omega,\\
u=g & \text{on }\partial\Omega.
\end{array}
\right.
\]
We next prove that $u\in C(\overline{\Omega})$ by a barrier argument. For
fixed $\varsigma\in\partial\Omega$ and $\varepsilon>0,$ let $V$ be an open
neighborhood of $\varsigma$ such that
\[
|g(x)-g(\varsigma)|\leq\varepsilon\mbox{ whenever }x\in V\cap\partial\Omega.
\]
Let $w:V\cap\overline{\Omega}\rightarrow\mathbb{R}$ be a function with the following properties:
\begin{align}
(i) & \mbox{$\mathcal{H}^{\delta }w\leq-1$ in $V\cap \Omega$},\nonumber\\
(ii) &
\mbox{$w>0$ in $V\cap \overline{\Omega}\backslash \{\varsigma\}$ and $w(\varsigma)=0.$}\nonumber
\end{align}
That such a function $w$ exists follows from the assumption that there exists
an exterior normal for all points on $\partial\Omega$, see Definition
\ref{eter} and Remark \ref{remark} below. We define
\[
v^{\pm}(x)=g(\varsigma)\pm(\varepsilon+kw(x))\mbox{ whenever }x\in
V\cap\overline{\Omega}
\]
for some constant $k>0$ large enough to ensure that
\[
\mathcal{H}^{\delta}(u_{j}-v^{+})\geq h(\cdot,u_{j-1})-\mu(u_{j-1}-u_{j})+k\geq0
\]
and that $u_{j}\leq v^{+}$ on $\partial(V\cap\Omega).$ Thus, the maximum
principle asserts that $u_{j}\leq v^{+}$ on $V\cap\Omega$ and likewise
$u_{j}\geq v^{-}$ on $V\cap\Omega.$ Note that $k$ can be chosen to depend on
the Lipschitz constant of $h$, $\mu$ and $u_{0}$ only and, in particular, $k$
can be chosen independent of $j.$ Passing to the limit we see that
\[
g(\varsigma)-\varepsilon-kw(x)\leq u(x)\leq g(\varsigma)+\varepsilon
+kw(x),\ \ \ \ x\in V\cap\Omega,
\]
and hence
\[
g(\varsigma)-\varepsilon\leq\underset{x\rightarrow\varsigma}{\lim\inf
}\ u(x)\leq\underset{x\rightarrow\varsigma}{\lim\sup}\ u(x)\leq g(\varsigma
)+\varepsilon
\]
where the limit $x\rightarrow\varsigma$ is taken through $x\in V\cap\Omega$.
Since $\varepsilon$ can be chosen arbitrarily we can conclude that $u\in
C(\overline{\Omega}).$ Finally, \eqref{hha} follows from an application of the
maximum principle.
\end{proof}
\begin{remark}
\label{remark} In the proof above we used barrier functions, plainly stating
that proper barrier functions exist. To see that this is actually the case,
let $\varsigma\in\partial\Omega$, then using our assumption on the domain
$\Omega$, see also Definition \ref{eter}, we see that there exists a standard
Euclidean ball in $\mathbf{R}^{n},$ $B_{E}(x_{0},\rho),$ with center $x_{0}\in\tilde{\Omega}\backslash\Omega$ and radius $\rho$, such that $B_{E}(x_{0},\rho)\subset\tilde{\Omega}$ and $\overline{B_{E}(x_{0},\rho)}\cap\overline{\Omega}=\{\varsigma\}.$ Using $x_{0}$ we define, for $K\gg1$,
\[
w(x)=e^{-K|\varsigma-x_{0}|^{2}}-e^{-K|x-x_{0}|^{2}}.
\]
Then, $w(\varsigma)=0$ and $w(x)>0$ for $x\in V\cap\overline{\Omega
}\backslash\{\varsigma\}.$ To see that $\mathcal{H}^{\delta}w\leq-1,$ we note
that since the coefficients of the operator $\mathcal{H}^{\delta}$ are smooth
in a neighborhood of $V\cap\overline{\Omega}$, $\mathcal{H}^{\delta}$ can be
rewritten as a H\"{o}rmander operator in line with Theorem \ref{Bony}. In
particular, using the notation of Theorem \ref{Bony} we have
\begin{align*}
\mathcal{H}^{\delta}w(x) & =-e^{-K|x-x_{0}|^{2}}\left( 4K^{2}\sum
_{i,j=1}^{n}a_{ij}^{\ast}(x)(x^{i}-x_{0}^{i})(x^{j}-x_{0}^{j})\right. \\
& \qquad\left. -2K\sum_{i=1}^{n}\left( a_{ii}^{\ast}(x)+a_{i}^{\ast}(x)(x^{i}-x_{0}^{i})\right) +\gamma(x)w(x)\right) ,
\end{align*}
where $a_{ij}^{\ast},a_{i}^{\ast}$ and $\gamma$ denote the coefficients of the
H\"{o}rmander operator $\mathcal{H}^{\delta}$\ as stated in Theorem
\ref{Bony}. Hence, for $V$ small and choosing $K$ large enough, $\mathcal{H}^{\delta}w(x)\leq-1$ on $V\cap\overline{\Omega}.$ Thus, $w$ is indeed a proper
barrier function.
\end{remark}
\noindent\textbf{Proof of Theorem \ref{obstacle}.} We first note, using
Theorem \ref{thm2}, that the problem in (\ref{penalized}) has a classical
solution $u_{\varepsilon,\delta}\in C^{2,\alpha}(\Omega)\cap C(\overline
{\Omega})$. The assumption $\gamma<0$ enables us to use the maximum principle.
To proceed we first prove that
\begin{equation}
|\beta_{\varepsilon}(u_{\varepsilon,\delta}-\varphi^{\delta})|\leq c
\label{step1}
\end{equation}
for some constant $c$ independent of $\varepsilon$ and $\delta.$ By definition
$\beta_{\varepsilon}\leq\varepsilon$ and hence we only need to prove the
estimate from below. Since $\beta_{\varepsilon}(u_{\varepsilon,\delta}-\varphi^{\delta})\in C(\overline{\Omega})$ this function achieves a minimum
at a point $\varsigma\in\overline{\Omega}.$ Assume that $\beta_{\varepsilon}(u_{\varepsilon,\delta}(\varsigma)-\varphi^{\delta}(\varsigma))\leq0,$ otherwise we are done. If $\varsigma\in\partial\Omega$,
then, since $g^{\delta}\geq\varphi^{\delta}$ on $\partial\Omega$,
\[
\beta_{\varepsilon}(u_{\varepsilon,\delta}(\varsigma)-\varphi^{\delta
}(\varsigma))=\beta_{\varepsilon}(g^{\delta}(\varsigma)-\varphi^{\delta
}(\varsigma))\geq0.
\]
On the other hand, if $\varsigma\in\Omega$, then the function $u_{\varepsilon
,\delta}-\varphi^{\delta}$ also reaches its (negative) minimum at $\varsigma$
since $\beta_{\varepsilon}$ is increasing. Now, due to the maximum principle
\begin{equation}
\mathcal{H}^{\delta}u_{\varepsilon,\delta}(\varsigma)-\mathcal{H}^{\delta
}\varphi^{\delta}(\varsigma)\geq0\geq-\gamma^{\delta}(\varsigma
)(u_{\varepsilon,\delta}(\varsigma)-\varphi^{\delta}(\varsigma)). \label{lse}
\end{equation}
Because of (\ref{dobs}) and the assumption that $a_{0},b_{i}\in L^{\infty
}(\Omega)$ we conclude that $\mathcal{H}^{\delta}\varphi^{\delta}\geq\eta$ for
some constant $\eta$ independent of $\delta.$ Now, since $\gamma,\ f\in
L^{\infty}(\Omega)$ and using (\ref{lse}), we obtain
\begin{align*}
\beta_{\varepsilon}(u_{\varepsilon,\delta}(\varsigma)-\varphi^{\delta}(\varsigma)) &
=\mathcal{H}^{\delta}u_{\varepsilon,\delta}(\varsigma)+\gamma^{\delta}(\varsigma)u_{\varepsilon,\delta}(\varsigma)-f^{\delta}(\varsigma)\\
& \geq\mathcal{H}^{\delta}\varphi^{\delta}(\varsigma)+\gamma^{\delta}(\varsigma)\varphi^{\delta}(\varsigma)-f^{\delta}(\varsigma)\geq c,
\end{align*}
for some constant $c$ independent of $\varepsilon$ and $\delta$ and hence
(\ref{step1}) holds. We next use (\ref{step1}) to prove that $u_{\varepsilon
,\delta}\rightarrow u$ for some function $u\in C^{2,\alpha}(\Omega)\cap
C(\overline{\Omega})$ and that $u$ is a solution to the obstacle problem
(\ref{e-obs}). To do this we first prove that there exist constants $c_{1}$
and $c_{2}$ such that
\begin{equation}
||u_{\varepsilon,\delta}||_{L^{\infty}(\Omega)}\leq c_{2}\left(
||g||_{L^{\infty}(\Omega)}+||f||_{L^{\infty}(\Omega)}+c_{1}\right) .
\label{u sup}
\end{equation}
In fact, this follows by considering solutions to
\[
\left\{
\begin{array}
[c]{ll}
\mathcal{H}^{\delta}v_{\varepsilon,\delta}-||\gamma^{\delta}||_{L^{\infty
}(\Omega)}v_{\varepsilon,\delta}=-2(||f^{\delta}||_{L^{\infty}(\Omega
)}+||\beta_{\varepsilon}(u_{\varepsilon,\delta}-\varphi^{\delta})||_{L^{\infty}(\Omega)})\ \ \ \ \ & \text{in }\Omega,\\
v_{\varepsilon,\delta}=||g^{\delta}||_{L^{\infty}(\Omega)} & \text{on }\partial\Omega.
\end{array}
\right.
\]
Using the maximum principle on $v_{\varepsilon,\delta}-u_{\varepsilon,\delta
},$ we see that $u_{\varepsilon,\delta}<v_{\varepsilon,\delta}.$ Moreover,
since $||\beta_{\varepsilon}(u_{\varepsilon,\delta}-\varphi^{\delta
})||_{L^{\infty}(\Omega)}$ is bounded uniformly for $\varepsilon,\delta,$ and
since the $L^{\infty}$-norm of the regularized version of a function is
bounded by the $L^{\infty}$-norm of the function itself, (\ref{u sup})
follows. Then we use (\ref{step1}) and (\ref{u sup}) together with
Theorem \ref{a priori} to conclude that for every $U\subset\subset\Omega$ and
$p\geq1$ the norm $||u_{\varepsilon,\delta}||_{\mathcal{S}^{p}(U)}$ is bounded
uniformly in $\varepsilon$ and $\delta.$ Consequently $\{u_{\varepsilon,\delta
}\}$ converges weakly in $\mathcal{S}^{p}$ to a function $u$ on compact subsets of $\Omega$ as
$\varepsilon,\delta\rightarrow0$, and, by Theorem \ref{Embedding}, in $C^{1,\alpha}$. Also, by construction
\[
\underset{\varepsilon,\delta\rightarrow0}{\lim\sup}\ \beta_{\varepsilon
}(u_{\varepsilon,\delta}-\varphi^{\delta})\leq0
\]
and therefore $\mathcal{H}u+\gamma u\leq f$ a.e. in $\Omega.$ In the set
$\{u\geq\varphi\}\cap\Omega$ equality holds. Together with the estimate
(\ref{step1}) this shows that $\max\{\mathcal{H}u+\gamma u-f,\varphi-u\}=0$ on
$\Omega.$ Proceeding as in the end of the proof of Theorem \ref{thm2}, using
barrier functions, we conclude that $u\in C(\overline{\Omega})$ and $u=g$ on
$\partial\Omega$, hence $u$ is a strong solution to the obstacle problem
(\ref{e-obs}). The bound (\ref{Sp bound}) is a direct consequence of the above
calculations. Altogether, this completes the proof. \hfill$\Box$
\section{Proof of Theorem \ref{Embedding}\label{SecEm}}
The embedding theorem we aim to prove is not as general as we would have
hoped; in fact, when we began working on this paper we believed that
the proof was already out there. Despite several attempts at finding a proper
reference we were unable to find one, and in the end, we decided to add the
assumption that we are working on a homogeneous group and that the vector
fields $X_{1},...,X_{q}$ are left invariant and homogeneous of degree one
while $X_{0}$ is left invariant and homogeneous of degree two. This enables us
to prove the necessary embedding, that is, that the $C^{1,\alpha}$-norm of
solutions is bounded by the $\mathcal{S}^{p}$-norm. In the case of stratified
groups this was proved by Folland in \cite[Theorem 5.15]{F75}, and no
assumption on $u$ solving a particular equation had to be made. In the pure
subelliptic context, that is, when there is no lower order term, this has been
extensively investigated, see for instance Lu \cite[Theorem 1.1]{Lu96} and the
references therein. In the subelliptic parabolic case, that is, when
$X_{0}=\partial_{t},$ this was proved in \cite[Theorem 1.4]{FGN12}. The
approach used in the case $X_{0}=\partial_{t}$ cannot be applied here
since we lack enough information about the fundamental solution. Finally, a
slightly less general formulation of the embedding theorem was proved in
\cite[Theorem 7]{BB00b}, where the $C^{0,\alpha}$-norm is bounded by the
$\mathcal{S}^{p}$-norm.
\noindent\textbf{Proof of Theorem \ref{Embedding}. }First, we note that by
\cite[Theorem 4]{BB00b} we have, for $\alpha=2-Q/p$,
\[
||u||_{C^{0,\alpha}(\Omega^{\prime})}\leq c\left( ||\mathcal{H}u||_{L^{r}(\Omega)}+||u||_{L^{p}}\right) ,
\]
for some $c$ depending only on $\mathbf{G}$, $\mu$, $p$, $s$, $\Omega$ and
$\Omega^{\prime}$ (it is stated in a slightly different way, but restricted
to our choice of $p$ and $s$ this is what is actually proved). It remains to
show that the same holds when $u$ on the left hand side is replaced by
$X_{i}u$ for $i=1,...,q.$ Let $H=\sum_{i=1}^{q}X_{i}^{2}+X_{0}$ and let
$\Gamma$ be the corresponding fundamental solution. Such a fundamental
solution exists by a classical result of Folland \cite[Theorem 2.1]{F75}.
Moreover, $\Gamma$ is homogeneous of degree $2-Q.$ This means that, for $u\in
C_{0}^{\infty}(B_{R}),$ we can write
\[
u=Hu\ast\Gamma.
\]
Let $\phi$ be a cutoff function with $B_{R/2}(x_{0})\prec\phi\prec B_{R}(x_{0})$, for some $x_{0}\in\Omega,R\in\mathbf{R}$ such that $B_{2R}(x_{0})\subset\Omega$. That such a cutoff function exists follows from Lemma
\ref{cutoff2}. By Lemma \ref{cutoff}, $u\phi\in\mathcal{S}_{0}^{p}(B_{R}).$
Since H\"{o}lder continuity is a local property, we can restrict ourselves to
balls, and by a density argument we can look at smooth functions $u$.
Therefore, assume that $u\in C_{0}^{\infty}(\Omega)$ and let $M$ be as in
Proposition \ref{MMM}, then
\[
X_{i}u(x)=X_{i}\ \int_{\mathbf{R}^{n}}\Gamma(y^{-1}\circ x)Hu(y)dy.
\]
Since $u$ is smooth with compact support we may differentiate inside the integral,
and we obtain
\begin{align}
|X_{i}u(x)-X_{i}u(y)| & \leq\int_{\mathbf{R}^{n}}|X_{i}\Gamma(z^{-1}\circ
x)-X_{i}\Gamma(z^{-1}\circ y)|\ |Hu(z)|dz\nonumber\\
& \leq\int_{||z^{-1}\circ x||\geq M||y^{-1}\circ x||}...dz+\int
_{||z^{-1}\circ x||<M||y^{-1}\circ x||}...dz=I+II. \label{hund}
\end{align}
Above, it is implicitly understood that the vector fields act on $\Gamma$ as a
function of $x$ respectively $y$ (hence, do not differentiate with respect to
the $z$-variable). Since $\Gamma$ is homogeneous of degree $2-Q$ and
$X_{i},\ i=1,...,q,$ are homogeneous of degree $1,$ $X_{i}\Gamma$ is
homogeneous of degree $1-Q.$ By Proposition \ref{MMM} we get
\[
I\leq c(\mathbf{G},p)\ ||y^{-1}\circ x||\ \int_{||z^{-1}\circ x||\geq
M||y^{-1}\circ x||}\frac{|Hu(z)|}{||z^{-1}\circ x||^{Q}}dz.
\]
Further, we introduce the set
\[
\sigma_{k}=\{z\in\mathbf{R}^{n}:2^{k}M||y^{-1}\circ x||\leq||z^{-1}\circ
x||\leq2^{k+1}M||y^{-1}\circ x||\},
\]
for $k=0,1,...,$ and note that the Euclidean volume of the set $\sigma_{k},$
by (\ref{bollar}), is equal to
\begin{align}
& |B(0,2^{k+1}M||y^{-1}\circ x||)|-|B(0,2^{k}M||y^{-1}\circ x||)|\nonumber\\
& =|B(0,1)|\ \left( \left( 2^{k+1}M||y^{-1}\circ x||\right) ^{Q}-\left(
2^{k}M||y^{-1}\circ x||\right) ^{Q}\right) \nonumber\\
& =|B(0,1)|\ (2^{Q}-1)\ 2^{Qk}M^{Q}||y^{-1}\circ x||^{Q}. \label{bolldiff}
\end{align}
By assumption $u\in\mathcal{S}^{p}(\Omega)$ for some $p.$ Let $q$ be such that
$\frac{1}{p}+\frac{1}{q}=1.$ Then, we obtain
\begin{align*}
I & \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||\ \int_{||z^{-1}\circ x||\geq
M||y^{-1}\circ x||}\frac{|Hu(z)|}{||z^{-1}\circ x||^{Q}}dz\\
& \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||\ \sum_{k=0}^{\infty}\left(
2^{k}M||y^{-1}\circ x||\right) ^{-Q}\int_{\sigma_{k}}|Hu(z)|dz\\
& \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{1-Q}\ \sum_{k=0}^{\infty}2^{-kQ}\left[ \left( 2^{Q}-1\right) 2^{kQ}M^{Q}||y^{-1}\circ x||^{Q}
\right] ^{1/q}\ ||Hu||_{L^{p}(\sigma_{k})}\\
& \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{1-Q+Q/q}\ ||Hu||_{L^{p}(\Omega
)}\sum_{k=0}^{\infty}2^{-k(Q-Q/q)}.
\end{align*}
This sum converges, and for $Q<p$ we have that the exponent of $||y^{-1}\circ
x||$ is larger than zero.
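Indeed, since $\frac{1}{p}+\frac{1}{q}=1$, the exponent equals
\[
1-Q+\frac{Q}{q}=1-\frac{Q}{p}=\frac{p-Q}{p}>0\quad\text{for }Q<p,
\]
which is the same exponent $(p-Q)/p$ that appears in the bounds for $II_{1}$ and $II_{2}$ below.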
The next step is to look at $II$ in (\ref{hund}). In a similar way we define the
set
\[
\widetilde{\sigma}_{k}=\{z\in\mathbf{R}^{n}:2^{-(k+1)}M||y^{-1}\circ
x||\leq||z^{-1}\circ x||\leq2^{-k}M||y^{-1}\circ x||\},
\]
for $k=0,1,...,$ and in this case we get
\[
II\leq\underset{II_{1}}{\underbrace{\int_{||z^{-1}\circ x||<M||y^{-1}\circ
x||}\frac{|Hu(z)|}{||z^{-1}\circ x||^{Q-1}}dz}}+\underset{II_{2}}{\underbrace{\int_{||z^{-1}\circ x||<M||y^{-1}\circ x||}\frac{|Hu(z)|}{||z^{-1}\circ y||^{Q-1}}dz}}.
\]
To begin with, we deal with the first term above, which, by the compact support
of $u,$ is bounded by
\begin{align*}
II_{1} & \leq c(\mathbf{G},p)\sum_{k=0}^{\infty}\ \int_{\widetilde{\sigma
}_{k}}\frac{|Hu(z)|}{\left( 2^{-(k+1)}M\ ||y^{-1}\circ x||\right) ^{Q-1
}dz\\
& \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{-(Q-1)}\sum_{k=0}^{\infty
}2^{(k+1)(Q-1)}\left( \int_{\widetilde{\sigma}_{k}}1dz\right) ^{1/q}
||Hu||_{L^{p}(\Omega)}\\
& \leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{-(Q-1)}||Hu||_{L^{p}(\Omega)}
\sum_{k=0}^{\infty}2^{(k+1)(Q-1)}\left[ \left( 2^{-k}M\ ||y^{-1}\circ
x||\right) ^{Q}\right] ^{1/q}\\
& =c(\mathbf{G},p)\ ||y^{-1}\circ x||^{-(Q-1-Q/q)}||Hu||_{L^{p}(\Omega)}
\sum_{k=0}^{\infty}2^{k(Q-1-Q/q)}.
\end{align*}
The sum converges for $Q<p,$ and in that case
\[
II_{1}\leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{(p-Q)/p}||Hu||_{L^{p}(\Omega
)}.
\]
To bound $II_{2}$, note that if $||z^{-1}\circ x||<M||y^{-1}\circ x||,$ then
$||z^{-1}\circ y||\leq c(||z^{-1}\circ x||+||y^{-1}\circ x||)\leq
c(1+M)||y^{-1}\circ x||$. This means that we can argue as for $II_{1}$, to
find that $II_{2}\leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{(p-Q)/p}||Hu||_{L^{p}(\Omega)}.$ Put together, we have shown that
\[
|X_{i}u(x)-X_{i}u(y)|\leq c(\mathbf{G},p)\ ||y^{-1}\circ x||^{(p-Q)/p}
||Hu||_{L^{p}(\Omega)}.
\]
That is, (\ref{badda}) holds for functions $u\in\mathcal{S}^{p}(\Omega)\cap
C_{0}^{\infty}(\Omega).$ The general case follows, as previously mentioned, by
using a density argument and cutoff functions. Note that we proved this for
H\"{o}lder spaces defined by means of the distance $d_{h};$ however, this
carries over directly to our case. \hfill$\square$
\section{Homogeneous H\"{o}rmander operators\label{exempel}}
We will now give some examples of when our results apply. The first two
examples show operators for which our results overlap with the existing
literature, while the third and fourth examples show that our results cover
equations not previously considered for obstacle problems.
\begin{example}
(Subelliptic parabolic equations) When we replace $a_{0}X_{0}$ with
$\partial_{t}$ we get a subelliptic parabolic operator
\[
\mathcal{H}=\sum_{i,j=1}^{q}a_{ij}(x,t)X_{i}X_{j}+\sum_{i=1}^{q}b_{i}(x,t)X_{i}-\partial_{t},\ \ \ x\in\mathbf{R}^{n},t\in(0,T),n\geq3.
\]
In this case, by \cite{FGN12}, we need not assume that we have a homogeneous group.
\end{example}
\begin{example}
(Kolmogorov equations) Let
\begin{equation}
\mathcal{H}=\sum_{i,j=1}^{q}a_{ij}(x,t)\frac{\partial^{2}}{\partial
x_{i}\partial x_{j}}+\sum_{i=1}^{q}b_{i}(x,t)\frac{\partial}{\partial x_{i}}+\sum_{i,j=1}^{n}c_{ij}x_{i}\frac{\partial}{\partial x_{j}}+\partial_{t},
\label{KE}
\end{equation}
where $(x,t)\in\mathbf{R}^{n}\times\mathbf{R}$, $q<n$, with the usual
assumptions on $a_{ij}$ and $b_{i}$, while $C=\{c_{ij}\}$ is a matrix of
constant real numbers. For $(x_{0},t_{0}),$ fixed but arbitrary, we introduce
the vector fields
\begin{equation}
X_{0}=\sum_{i,j=1}^{n}c_{ij}x_{i}\frac{\partial}{\partial x_{j}}+\partial
_{t},\ \ \ X_{i}=\frac{1}{\sqrt{2}}\sum_{j=1}^{q}a_{ij}(x_{0},t_{0})\frac{\partial}{\partial x_{j}},\ \ \ i\in\{1,...,q\}. \label{XKE}
\end{equation}
A condition which assures that $\mathcal{H}$ in (\ref{KE}) is a H\"{o}rmander
operator is that $\{X_{0},X_{1},...,X_{q}\}$ in (\ref{XKE}) satisfy the
H\"{o}rmander condition. An equivalent condition is that the matrix $C$ has
the following block structure
\[
\begin{pmatrix}
\ast & C_{1} & 0 & \cdots & 0\\
\ast & \ast & C_{2} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\ast & \ast & \ast & \cdots & C_{k}\\
\ast & \ast & \ast & \cdots & \ast
\end{pmatrix}
\]
where $C_{j}$, for $j\in\{1,...,k\}$, is a $q_{j-1}\times q_{j}$ matrix of
rank $q_{j},$ $1\leq q_{k}\leq...\leq q_{1}\leq q=q_{0}.$ Further,
$q+q_{1}+...+q_{k}=n$, while $\ast$ represents arbitrary matrices with
constant entries. In the case of Kolmogorov equations, results on existence of
solutions were proved in \cite{FPP}.
\end{example}
\begin{example}
For $(x,y,z,w,t)\in\mathbf{R}^{5}$, consider the vector fields
\[
X=\partial_{x}-xy\partial_{t},\ \ \ Y=\partial_{y}+x\partial_{w},\ \ \ Z=\partial_{z}+x\partial_{t}.
\]
These vector fields satisfy H\"{o}rmander's condition since
\[
W=[X,Y]=\partial_{w}+x\partial_{t},\ \ \ T=[X,Z]=\partial_{t}.
\]
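For the reader's convenience, the first bracket can be computed directly: since $X=\partial_{x}-xy\partial_{t}$ and $Y=\partial_{y}+x\partial_{w}$,
\[
[X,Y]=\bigl(X(x)\bigr)\partial_{w}-\bigl(Y(-xy)\bigr)\partial_{t}=\partial_{w}-(-x)\partial_{t}=\partial_{w}+x\partial_{t}=W.
\]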
We note that the Lie algebra generated by these vector fields is nilpotent of
step 3, but we do not have a stratified group since $\partial_{t}=[X,Z]=[X,W]$. Moreover, the group law $\circ$ is given by
\[
(x,y,z,w,t)\circ(\xi,\eta,\zeta,\omega,\tau)=(x+\xi,y+\eta,z+\zeta
,w+\omega+x\eta,t+\tau-\frac{1}{2}y\xi^{2}-x\xi\eta+x\zeta+x\omega),
\]
and we can define (non-unique) dilations
\[
D_{\lambda}(x,y,z,w,t)=(\lambda x,\lambda y,\lambda^{2}z,\lambda^{2}w,\lambda^{3}t).
\]
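As a quick consistency check (our computation, not spelled out above), $X$ is $D_{\lambda}$-homogeneous of degree one: for smooth $f$,
\[
X(f\circ D_{\lambda})=\partial_{x}(f\circ D_{\lambda})-xy\,\partial_{t}(f\circ D_{\lambda})=\lambda\,(\partial_{x}f)\circ D_{\lambda}-\lambda^{3}xy\,(\partial_{t}f)\circ D_{\lambda}=\lambda\,(Xf)\circ D_{\lambda},
\]
since the coefficient $-xy$ of $\partial_{t}$ becomes $-(\lambda x)(\lambda y)=-\lambda^{2}xy$ at the dilated point. A similar computation shows that $Y$ is homogeneous of degree one, while $Z$ is homogeneous of degree two.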
This is neither a subelliptic parabolic equation, nor a Kolmogorov type
equation, and the results presented here are therefore new.
\end{example}
\begin{example}
(Link of groups) Following \cite{LK}, we can link groups together. The
simplest example is obtained if we define the vector fields
\[
X_{0}=x\partial_{w}-\partial_{t},\ \ \ X_{1}=\partial_{x}+y\partial
_{s},\ \ \ X_{2}=\partial_{y}-x\partial_{s},
\]
for $(x,y,s,w,t)\in\mathbf{R}^{5}$. Then, in the variables $(x,y,s,t)$ we get
the heat operator on the Heisenberg group, while in the variables $(x,y,s,w)$
we get a Kolmogorov operator. This again defines a homogeneous H\"{o}rmander
operator, which has not previously been studied in the setting of obstacle problems.
\end{example}
\section{Introduction}
\label{sec:intro}
\input{body/01-introduction}
\section{Related Work}
\label{sec:related-work}
\input{body/02-related-work}
\section{Design Goals for Visualizing Thing Constellation}
\label{sec:thing-constellation}
\input{body/03-thing-constellation}
\section{Thing Constellation Visualizer: Visualizing Thing Constellation through Objects Co-occurrence}
\label{sec:tool-design}
\input{body/04-tool-design}
\section{Design Workshops for Exploring Thing Constellation}
\label{sec:design-workshops}
\input{body/05-design-workshops}
\section{Findings}
\label{sec:findings}
\input{body/06-findings}
\section{Discussion}
\label{sec:discussion}
\input{body/07-discussion}
\section{Design Implications}
\label{sec:implications}
\input{body/07-implications}
\section{Limitations and Future Work}
\label{sec:limitations-future-work}
\input{body/08-limitations-future-work}
\section{Conclusion}
\label{sec:conclusion}
\input{body/09-conclusion}
\begin{acks}
We thank all our participants for investing their time and effort in this project. We also thank Nanyi Bi and Yen-Ling Kuo for their proofreading feedback and the anonymous reviewers for their constructive feedback, which has helped improve this paper. This research was supported in part by the Ministry of Science and Technology of Taiwan (MOST 108-2911-I-011-505, MOST 108-2633-E-002-001), National Taiwan University and Intel Corporation.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\subsection{Towards Computational Thing Ethnography}
While prior design research uses sensor-equipped objects as co-ethnographers to collect data in situ from a thing perspective and then relies on experts or design researchers to make sense of empirical data to understand humans' social practices~\cite{Giaccardi:DIS2016,Chang:DIS2017}, we apply a very different approach: we reuse a large-scale public dataset containing millions of photos shared by people voluntarily. Those public photos carry rich information about everyday practices with fewer privacy concerns. However, we acknowledge that analyzing public data without permission still raises certain ethical concerns. To reduce these concerns, we decided to remove the actual photos and keep only co-occurrence patterns for further investigation and exploration by designers. We then provide only an abstract network representation with nodes and links as a defamiliarized structure to stimulate designers to use their own experiences or imaginations to interpret the emerging patterns extracted from a large amount of data. Our tool thus provides a mask that prevents the actual data from being seen, interpreted, or judged directly in the design practice. Furthermore, by using a computational approach (e.g., co-occurrence similarity, community detection, spreading activation) to analyze and visualize the data, we are able to observe emerging data-driven patterns through a statistical and computational lens.
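As a rough, hypothetical sketch of the kind of computation this approach relies on (the function name, toy data, and the Jaccard-style measure are illustrative assumptions, not the paper's actual pipeline), pairwise object co-occurrence over per-photo label sets can be scored as follows:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_similarity(photos):
    """Jaccard-style co-occurrence score for every pair of object labels.

    photos: iterable of sets, each holding the object labels detected in
    one photo. Returns {(label_a, label_b): joint / either} with the
    labels of each pair in sorted order.
    """
    label_counts = Counter()  # number of photos containing each label
    pair_counts = Counter()   # number of photos containing both labels
    for labels in photos:
        label_counts.update(labels)
        pair_counts.update(combinations(sorted(labels), 2))
    return {
        (a, b): joint / (label_counts[a] + label_counts[b] - joint)
        for (a, b), joint in pair_counts.items()
    }

# Toy data: three photos with detected object labels.
photos = [{"cup", "table"}, {"cup", "table", "laptop"}, {"laptop", "table"}]
sim = cooccurrence_similarity(photos)
print(sim[("cup", "table")])  # joint photos = 2, photos with either = 3
```

In a visualizer of this kind, such pairwise scores would form the weighted links of the constellation graph, with an interactive threshold pruning weak links.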
{\color{change}
Data need human interpretation to be meaningful. While a large collection of empirical data can reveal emergent patterns of everyday practice, data do not speak for themselves and they require humans to give them rich meaning. As Dourish and G\'{o}mez Cruz argue, ``Data must be narrated—put to work in particular contexts, sunk into narratives that give them shape and meaning, and mobilized as part of broader processes of interpretation and meaning-making~\cite{Dourish:2018}.'' In this work, we see the potential benefits of providing designers the freedom to interact with data by using interactive threshold and perspective-changing functionality. Such interactivity enables designers to add their own interpretations upon data and facilitates active discussions within a group. Particularly, they are able to use different snapshots or angles to discuss a specific object or community, and then discover interesting phenomena that are linked to their prior experiences.
As the first step, this work has successfully demonstrated the benefits of combining data-driven patterns with human interpretations. For the next step, we plan to relax certain constraints to allow designers to modify the thing constellation by bringing their own datasets or specifying their relationships of interest (e.g., similarity of context usage or ownership). Designers can use our tool to explore alternative thing constellations in a more flexible way. For example, designers can incrementally add data into the dataset or replace the entire dataset with their own data collected through sensors or by participants from a specific location or context (e.g., home, hospital, train station, factory, school). By doing so, they can observe and discover different objects, object ecosystems and social patterns that emerge in a specific context, rather than general patterns from collective data (as is the case for the current dataset).
Moreover, we plan to explore the interactive dialog between designers and design materials (i.e., data and their social patterns) by enabling designers to directly modify the graph visualization (e.g., add, delete, or modify nodes and links). The algorithms or parameters used in algorithms can also be changed based on users' needs or the characteristics of the data. By opening up for human modification on the graph, designers can bring their own thoughts into shaping the thing constellation and observe the changing patterns under their adjustments. Such interactive dialog could empower designers to try multiple experiments and gain in-depth insights during the exploration process.
}
We argue that our computational thing ethnographic approach is not meant to replace experts' unique perspectives but to provide opportunities for people with diverse backgrounds to contribute their ideas, experience, and interpretations of the emerging patterns extracted from empirical data. However, we are aware of several challenges of our computational approach. The major one is that data-driven constellations represent only part of everyday patterns based on the collected dataset. Data are naturally biased, and thus data-driven insights cannot fully represent the actual picture of global constellations among everyday things. Therefore, we believe that data-driven insights should be combined with designerly interpretations. The ultimate goal of computational thing ethnography is to complement the meaningful qualitative insights elicited from existing ethnography and design research. In particular, computational thing ethnography encourages researchers to let data perform themselves and to collaborate with data to investigate our everyday practices towards a more-than human-centred understanding.
{\color{contribution}
\subsection{How Computational Thing Ethnography Contributes to CSCW}
CSCW is a research field firmly grounded in ethnographic studies of collaborative activities. Blomberg and Karasti reflect on the important role of ethnography in CSCW and suggest an alternative way of repositioning ``ethnography not as a tool for design but as deeply integrated into the doing of design in CSCW~\cite{Blomberg:CSCW2013}.'' In particular, they emphasize that ethnography can contribute to new understandings of the sociality and materiality of work. The goal of this work is to provide a new approach and tool that empowers people to use alternative perspectives to revisit everyday practice and gain an in-depth understanding of emergent relationships among objects. While most ethnographic studies help us see the ``here and now'' and identify temporal and spatial connections among activities, this work contributes a new approach and tool that supports designers in exploring the possible future by playing with empirical data.
Our work expands the CSCW literature by adding a new concept---computational thing ethnography. We see our tool, thingCV, as a facilitator that enables designers to actively discuss their experiences and generate insights in collaboration with other people in a group. Our tool also inspires designers to speculate on diverse future IoT scenarios that are grounded in practical data. Moreover, our tool enables designers not only to reflect on their own experiences but also to critically compare and contrast them with others'. They can use alternative perspectives to rethink the relationship between people and objects. In this way, it is possible to use our tool to explore cooperative IoT scenarios.
}
\subsection{The Role of Domestic Things for IoT design}
The vision of the Internet of Things (IoT) suggests that computing can be embedded in anything, even in the most mundane objects (e.g., cups, forks, bottles)~\cite{Ashton:1999}. While IoT research has largely focused on designing new devices, exploring new interactions between humans and objects could provide insights and inspire future IoT design. Recent studies have investigated everyday objects at home to understand daily mundane routines between family members and objects~\cite{Crabtree:CSCW2016} and to rethink the possibilities of reconfiguring objects to meet a household's specific needs~\cite{Williams:CHI2020}. Crabtree and Tolmie offer empirical insights into a day in the life of things in the home and unpack distinct categories of everyday things and social patterns of human-thing interaction~\cite{Crabtree:CSCW2016}. Williams et al. contribute empirical insights about how family members imagine a future smart home that incorporates their existing everyday objects by reconfiguring an object's role in the home and rethinking the evolving relationships of humans and objects~\cite{Williams:CHI2020}. In this paper, we aim to explore the emergent relationships between people and objects from a different angle, by making use of object co-occurrence patterns and human interpretations together.
\subsection{From Things to Social Things}
Giaccardi et al. proposed Thing Ethnography, in which everyday objects (e.g., kettles and mugs) equipped with sensors and cameras capture social practices and patterns of use on a daily basis~\cite{Giaccardi:DIS2016,Giaccardi_Cila:2016}. The aim was to understand the evolving use and applications of things in everyday life by introducing a new object-centred perspective on the world. They suggested that things' perspectives could help designers discover unexpected and invisible relationships among objects that could not be discovered through (human-centred) observations and interviews~\cite{Giaccardi:DIS2016}. In another work, Chang et al. applied a thing's perspective to smart mobility design by equipping a motorcycle with cameras to understand the ``life'' of a motorcycle in a specific cultural context from its perspective~\cite{Chang:DIS2017}. Cheng et al. built a camera to capture the relationships among multiple things~\cite{Cheng:DIS2019}.
Envisioning a day when everyday objects and systems can access the Internet to share data and interact with each other, some researchers have developed research prototypes to understand the complex relationships between things and people for IoT design~\cite{Wakkary:DIS2017,Nicenboim:2018}. One representative example is Morse Things~\cite{Wakkary:DIS2017}, sets of ceramic bowls and cups that can communicate with each other over a home's network. With this connection capability, Morse Things communicate not only with other things but also with their human roommates. These things can send dots and dashes as Morse code to each other to know whether the other thing is there. While the bowls and cups can be used by humans to eat or drink, they can still have a ``social life'' with other things that does not need to be shared with their human owners. Morse Things extends thing ethnography~\cite{Giaccardi:DIS2016} in a different direction and prompts people to rethink the social relationships between things and people from a broader networked perspective.
\subsection{Entangled Ethnography and Constellation}
Another line of research seeks to expand thing-centred design towards a broader view and emphasizes the need for constellation design, in which people, objects, data, algorithms, and the environment are entangled~\cite{Frauenberger:TOCHI2019, Murray-Rust:HTTF2019, Coulton_Lindley:2019}. Recent work puts much effort into shaping the notion of constellations through theory-driven exploration~\cite{Frauenberger:TOCHI2019,Murray-Rust:HTTF2019,Coulton_Lindley:2019}. Coulton and Lindley used a constellation metaphor to encompass the interdependent and independent relationships between humans, non-human actants, and environments, and demonstrated how the design could be put into practice through a speculative design~\cite{Coulton_Lindley:2019}. Furthermore, Murray-Rust and Burnett developed Entangled Ethnography, which uses a theory-driven perspective to illustrate a vision in which humans, objects, and data are entangled and computational intelligence collaboratively makes sense of data with researchers and networks of people and objects~\cite{Murray-Rust:HTTF2019}. Frauenberger further develops Entanglement HCI, grounded in entanglement theories, to reframe knowledge production practices in HCI, focusing on the performative relationship between humans and technology and its ethical challenges~\cite{Frauenberger:TOCHI2019}. The common goal of these studies is to push the design paradigm from human-centred toward more-than human-centred design.
\subsection{The Role of Co-occurrence in Investigating Social Relationships of Things}
Recent studies often capture object co-occurrence to investigate the social relationships of things in practice. Object co-occurrence, the pattern of two or more things being present in the same place, allows design researchers to capture the practical aspect of changing relationships in the mundane everyday. Such co-occurrence is not only easily captured by tools (e.g., cameras) but also provides an open space for people to interpret the possible meanings related to social relationships. For example, Giaccardi et al.~\cite{Giaccardi:DIS2016} captured the pattern of any things (people, objects) that are nearby or in front of an object and interpreted the thing-centred relationships in practice. Cheng~\cite{Cheng:DIS2019} also captured object co-occurrence patterns in a home and discussed the meaning behind such co-occurrence with the participants. Desjardins et al.~\cite{Desjardins:CHI2020} took a step further by not only capturing the patterns but also speculating about the possible data streams between them, as a way to inspire people to envision future IoT data design. In these examples, object co-occurrence enables complex social relationships to be investigated with limited data input, and the results are easily observed and understood by people. Therefore, we propose object co-occurrence as a potential anchor to stimulate people's curiosity to interpret the possible contexts and meanings around everyday things.
{\color{contribution}
\subsection{IoT as Cooperative Work between Objects}
While IoT research has envisioned that objects, people, and spaces will be intertwined and rely on some kind of cooperation to achieve common goals, we cannot ignore the socio-technical aspects between objects, people, and spaces. Atzori et al. emphasize that large numbers of objects are ``able to interact with each other and cooperate with their neighbors to reach common goals~\cite{Atzori:ComputerNetworks2010}.'' These activities may or may not be directly linked to cooperative work between people. Therefore, we need to take the cooperation between objects more seriously and regard it as a kind of cooperative work. This is an important perspective informed by insights from CSCW. Although we learned from Actor Network Theory (ANT) the value of taking non-human actors into account in socio-technical networks, the notion of objects and other non-human actors cooperating with each other to achieve common goals is not new in CSCW~\cite{Robertson:ECSCW2015}. However, while prior work focuses on exploring theory-driven conceptual frameworks, our work aims to bridge the gap between conceptual, theoretical visions and design practice. Insights into possible interactions and cooperation among objects can contribute to the CSCW community and bring new socio-technical aspects into IoT cooperative scenarios.
\subsection{Computer-Supported Creativity in CSCW}
The HCI and CSCW communities have established great interest in understanding various approaches and tools that facilitate human innovation and creativity, both individual and group~\cite{Frich:CHI2019,Shi:CSCW2017,Koch:CSCW2020,Wang:CSCW2011}. Prior work has used computational visual stimuli to facilitate group creativity~\cite{Shi:CSCW2017} and explored factors such as group dynamics in enhancing the ideation process~\cite{Wang:CSCW2011}. Researchers found that appropriately incorporating intelligent tools and algorithms can support diverse human-computer partnerships in collaborative ideation~\cite{Koch:CSCW2020}. While these studies contribute various strategies to the general ideation process, our work focuses on the unique challenges of IoT ecosystem design. We extend the prior literature on computer-supported creativity and contribute a novel tool that enables designers to play with social patterns extracted from large-scale empirical data. In particular, this tool aims to help a group of designers discover familiar but unnoticed experiences and generate diverse future IoT scenarios.
The strength of our tool is that it allows people to explore data-driven patterns flexibly and layer their own interpretations onto data. In line with what Gaver et al. suggested~\cite{Gaver:CHI2007}, a potential benefit of combining automatic inferencing with ambiguous output is to encourage user interpretation. They emphasize that human interpretation is needed to make data more meaningful. Dourish and G\'{o}mez Cruz also argue that data ``do not speak for themselves'' and that narratives are necessary to contextualize data and give them shape and meaning~\cite{Dourish:2018}. Our work demonstrates the potential of combining data-driven patterns with human interpretations and the ability of the tool to enable people to discover more interesting everyday phenomena. The new insights generated from this exploratory process will contribute to IoT ecosystem design.
}
\subsection{Social-Centric Constellation}
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/constellation-46.jpg}
\caption{{\color{change}The social-centric network is a network of objects based on their co-occurrence in the same images.}}
\label{fig:sample-photos-MSCOCO}
\end{figure}
We build a network of objects based on their co-occurrence in the same images. The main assumption is that two objects are more likely to have a close relationship if they frequently appear or are used together. For example, forks are more likely to co-occur with a dining table or a bottle because they are frequently used by people while eating (see Figure~\ref{fig:sample-photos-MSCOCO}). We use \textit{Jaccard similarity} to measure the relative co-occurrence between objects. Let $A$ and $B$ be the sets of images containing $object_{A}$ and $object_{B}$, respectively. The relative co-occurrence is defined as the size of the intersection of sets $A$ and $B$ (i.e., the number of common images) over the size of their union (i.e., the number of unique images) (see Equation~\ref{eq:similarity}). $\textit{Relation}(A, B)$ is the relative co-occurrence of $A$ and $B$, $|A \cap B|$ is the number of images in which the two objects co-occur, and $|A \cup B|$ is the number of images in which either of the two objects appears. In other words, we compute the proportion of overlapping images as object similarity. Finally, we build the entire network structure by calculating the similarity of every pair of nodes and assigning the similarity score to the corresponding link.
\begin{equation}
\label{eq:similarity}
\textit{Relation}(A,B) = \frac{|A \cap B|}{|A \cup B|}
\end{equation}
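To make the computation concrete, the following sketch shows how Equation~\ref{eq:similarity} can be evaluated for a pair of objects; the image-ID sets are hypothetical illustrations, not actual MS-COCO data.

```python
# Minimal sketch of Equation (1): relative co-occurrence as Jaccard
# similarity over the sets of image IDs that contain each object.
# The sets below are hypothetical illustrations, not MS-COCO data.
def relation(images_a, images_b):
    """Relation(A, B) = |A intersect B| / |A union B|."""
    union = images_a | images_b
    if not union:          # neither object appears in any image
        return 0.0
    return len(images_a & images_b) / len(union)

fork = {1, 2, 3, 5}            # images containing a fork
dining_table = {2, 3, 5, 8}    # images containing a dining table

print(relation(fork, dining_table))  # 3 shared / 5 unique images = 0.6
```

Since Jaccard similarity is symmetric, $\textit{Relation}(A, B) = \textit{Relation}(B, A)$, which matches assigning a single weight to each undirected link.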
\subsection{Ego-Centric Constellation}
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/constellation-47.jpg}
\caption{{\color{change}The ego-centric network is retrieved from the social-centric network by spreading energy from a target node to neighboring nodes.}}
\label{fig:spreading-activition}
\end{figure}
To understand the social structure around a specific object, we use the spreading activation technique to expand an ego-centric network~\cite{Crestani:1997}. Spreading activation is a method of searching associative networks. The process is initiated by giving an activation value (i.e., $E=1.0$) to a single node; the value is then propagated through the network, gradually decaying (e.g., $d = 0.8$) until it falls below a threshold (e.g., $t=0.05$). It is similar to a breadth-first traversal of the graph in which activation spreads to all neighbouring nodes, except that our method only spreads activation along links whose resulting energy is above the threshold (see Figure~\ref{fig:spreading-activition}). The energy for each node is defined in Equation~\ref{eq:energy-spreading}. The energy is propagated to neighbouring nodes only if the remaining energy is above the threshold.
\begin{equation}
\label{eq:energy-spreading}
\textit{Energy}(o_j) = \textit{Energy}(o_i) * \textit{Relation}(o_i, o_j) * \textit{Decay}
\end{equation}
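As an illustration, the propagation rule in Equation~\ref{eq:energy-spreading} can be sketched as follows, over a small hypothetical weighted graph; the decay and firing threshold match the values above, and the edge weights stand in for $\textit{Relation}(o_i, o_j)$.

```python
# Sketch of spreading activation (Equation 2) on a toy weighted graph.
# The graph and its weights are hypothetical, not computed from data.
DECAY = 0.8       # decay factor d
THRESHOLD = 0.05  # firing threshold t

def spread(graph, start):
    """Return the activation energy of every node reached from `start`."""
    energies = {start: 1.0}     # the target node starts with E = 1.0
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph[node].items():
            e = energies[node] * weight * DECAY   # Energy(o_j)
            # Fire only if the remaining energy is above the threshold
            # (and improves on any energy the neighbor already holds).
            if e >= THRESHOLD and e > energies.get(neighbor, 0.0):
                energies[neighbor] = e
                frontier.append(neighbor)
    return energies

graph = {
    "fork":         {"dining table": 0.5, "bottle": 0.2},
    "dining table": {"fork": 0.5, "chair": 0.4},
    "bottle":       {"fork": 0.2},
    "chair":        {"dining table": 0.4},
}
energies = spread(graph, "fork")  # e.g., chair gets 0.4 * 0.4 * 0.8 = 0.128
```

Nodes two hops away receive less energy than direct neighbours, which is what places them on outer rings of the ego-centric view.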
\subsubsection{Community detection}
To understand the network structure, we use the \textit{Louvain} algorithm to detect communities~\cite{Blondel:2008}. The Louvain algorithm is a hierarchical clustering method that recursively merges each community into a single node and executes modularity clustering on the condensed graph. The algorithm separates the network into communities by optimizing modularity (a measure of the strength of the network's division into groups) after trying various grouping operations on the network. It is computationally efficient for detecting communities in large and complex networks.
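The quantity the Louvain algorithm optimizes can be illustrated directly. The sketch below is not the full Louvain procedure; it only computes Newman's modularity for a given partition of a toy graph, showing why splitting two loosely connected triangles into two communities scores higher than lumping all nodes together.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q = sum_c (L_c / m - (d_c / 2m)^2)."""
    m = len(edges)                   # total number of (unweighted) edges
    internal = defaultdict(int)      # L_c: edges inside community c
    degree = defaultdict(int)        # d_c: summed degree of community c
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            internal[community[u]] += 1
    return sum(internal[c] / m - (degree[c] / (2 * m)) ** 2
               for c in degree)

# Toy graph: two triangles joined by a single bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "d")]
split = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
lumped = {n: 0 for n in "abcdef"}

print(modularity(edges, split) > modularity(edges, lumped))  # True
```

Louvain greedily moves nodes between communities, keeping moves that raise this score, and then condenses the graph and repeats.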
\subsection{Dataset}
\label{subsec:dataset}
To validate our idea, we chose an existing open-source dataset rather than collecting our own, because we wanted to focus on designing rich interactive experiences and interactions for our tool rather than spend too much time on the early stages of the data collection process. We are also aware of the potential ethical issues of using open-source data for research exploration; the biases it carries may affect the resulting patterns. We discuss these concerns at the end of this paper.
In this work, we chose MS-COCO as our testing dataset for three major reasons. First, MS-COCO is one of the most notable benchmarking datasets for object detection, scene understanding, and visual reasoning. Second, the 80 object categories were selected by experts with thorough consideration~\cite{Lin:ECCV2014}. The object categories are specific, and all of them are entry-level categories, i.e., category labels commonly used by humans when describing objects. The categories also form a representative set, relevant to practical applications and occurring with high enough frequency, and every image captures everyday objects in diverse contexts. Third, every object category contains a large amount of data: the average number of objects per category is 27,472.5, making it one of the richest datasets for objects in context. In this experiment, we used the MS-COCO 2017 dataset, which contains 123,287 images with 896,782 annotations across 80 object categories. The 80 object categories are grouped into 11 super-categories: Person \& Accessory, Vehicle, Outdoor Object, Animal, Sports, Kitchenware, Food, Furniture, Electronics, Appliance, and Indoor Object (see Figure~\ref{fig:mscoco-data}).
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/MSCOCO-data.pdf}
\caption{There are 80 object categories classified into 11 super-categories in the MS-COCO dataset~\cite{Lin:ECCV2014}. All object annotations were generated by crowd workers from Amazon Mechanical Turk (AMT). The whole dataset was downloaded from the MS-COCO website (https://cocodataset.org).}
\label{fig:mscoco-data}
\end{figure}
\subsection{Interactive Interface for Thing Constellation}
We design Thing Constellation Visualiser (thingCV), a tool that visualizes a constellation intertwined with 80 common objects (see Figure~\ref{fig:thing-constellation-tool}). With thingCV, designers can use the threshold slider to observe the changing patterns of the two types of constellations (i.e., social-centric and ego-centric views). By clicking the switch button, the tool zooms in to an ego-centric view or zooms out to a social-centric view. The social-centric view provides an overview of the constellation's structure: objects with higher co-occurrence scores are grouped into the same community and highlighted in the same color. The ego-centric view provides a detailed look at an ego-centric constellation: by clicking any object on the panel, users can jump into different ego-centric constellations to investigate specific objects.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/thing-constellation-tool.jpg}
\caption{The thingCV consists of (a) object panel, (b) threshold slider, (c) switch view button, and (d) two types of Thing Constellation visualization (i.e., social-centric view and ego-centric view). The tool and source code are released to the public (https://thingconstellation.github.io).}
\label{fig:thing-constellation-tool}
\end{figure}
\subsubsection{Object panel}
The object panel contains a list of objects in the Thing Constellation. Each object is filled with one color from the spectral diverging color scheme; objects in the same community, calculated by the community detection algorithm, are filled with the same color. Users can click an object to trigger an ego-centric constellation with a focus node (i.e., the clicked object).
\subsubsection{Threshold slider}
Thing Constellation is a weighted social network of objects in which the links among nodes (i.e., objects) have weights (i.e., co-occurrence scores) assigned to them. The threshold slider is designed to filter out links whose weights are smaller than a threshold value. Users can adjust the threshold value with the slider to determine the density (i.e., the number of links between nodes) of the social network. The threshold value ranges from 0.5 to 0, a range determined by the actual weight distribution calculated from the dataset. The constellation with the highest threshold value (0.5) is a fully disconnected network; by contrast, the constellation with the lowest value (0) shows the original social network with all links (i.e., 2686 links). Community detection is triggered immediately whenever the threshold value changes, and every object is assigned to its community; objects in the same community are filled with the same color. Through threshold adjustment, users can see dynamic changes in the network structure and communities---groups emerge or disappear, and multiple groups merge together or break into small pieces.
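The slider's filtering step amounts to dropping weak links, which can be sketched as follows; the links and weights here are hypothetical.

```python
# Sketch of the threshold slider: keep only links whose co-occurrence
# weight reaches the chosen threshold. The links below are hypothetical.
def filter_links(links, threshold):
    """links: list of (object_a, object_b, weight) tuples."""
    return [(a, b, w) for a, b, w in links if w >= threshold]

links = [("fork", "dining table", 0.32),
         ("fork", "bottle", 0.07),
         ("cup", "dining table", 0.18)]

print(len(filter_links(links, 0.0)))   # 3: the full network, all links
print(len(filter_links(links, 0.2)))   # 1: only the strongest link survives
```

Community detection then re-runs on the filtered link set, which is why groups merge or split as the slider moves.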
\subsubsection{Switch button}
The switch button changes between the social-centric and the ego-centric view under the same threshold value. With this feature, users can flexibly decide which aspect to focus on and use the slider to see the changing patterns. The interface presents only one type of constellation at a time, with the goal of allowing users to focus on one perspective for deep investigation.
\subsubsection{Social-centric view}
The social-centric view presents the social-centric network using a force-directed graph layout~\cite{Bannister:GD2012}, where nodes within the same community share the same color. Each node is clickable, and users can click a node to enter an ego-centric view.
\subsubsection{Ego-centric view}
The ego-centric view presents an ego-centric social network using a Tidy Tree layout~\cite{Bostock:IEEE2011}. The network consists of a central node representing the target object, with its neighbouring nodes presented in surrounding circles. Neighbouring objects are shown at different circle levels based on their distance from the target object. The ego-centric network is expanded by the spreading activation technique with a decay factor of 0.8 and a firing threshold of 0.05; these values were determined by iterative experiments.
\subsection{Implementation}
To build a social network of objects, we used the COCO API to retrieve all images for each category and calculated a co-occurrence score between every pair of objects using our proposed method. In the end, we built a social-centric network from 123,287 images. The network has 80 nodes and 2686 edges, and the average degree of the graph is 67.15. Each edge has its own weight, ranging from 0 to 1. We built a web-based visualization tool that uses jLouvain.js\footnote{https://github.com/upphiminn/jLouvain} to detect object communities and D3.js\footnote{https://d3js.org} to visualize the constructed network, allowing design researchers to explore complex relationships among everyday objects from global (i.e., social-centric constellations) and local (i.e., ego-centric constellations) perspectives. The interactive tool and source code are both released to the public\footnote{https://thingconstellation.github.io}.
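As a sanity check on the reported statistics, the average degree of an undirected graph is $2|E|/|V|$, since each edge contributes to the degree of two nodes; the sketch below reproduces the stated value from the node and edge counts.

```python
# Average degree of an undirected graph: each edge contributes to the
# degree of two nodes, so the mean degree is 2|E| / |V|.
def average_degree(num_nodes, num_edges):
    return 2 * num_edges / num_nodes

print(average_degree(80, 2686))  # 67.15, matching the reported value
```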
\subsection{Object Co-occurrence as Design Materials for Shaping Thing Constellation}
The thing constellation is invisible and can be imperceptible. We regard object co-occurrence as a design material for embodying the possible constellation situated in everyday contexts. Object co-occurrence can represent one form of our social interactions with objects. To organize their daily routines and activities, people usually arrange commonly used objects in the same place~\cite{Koren:2003}. These objects, whether situated backstage or used by us front stage, can all work at the same time. For example, watching TV while eating snacks involves a group of front-stage objects that people directly use in the activity, including the TV, a remote control, and snacks; at the same time, there are backstage objects that people may not notice during that activity, such as tables, curtains, and speakers. Object co-occurrence patterns can capture these objects and represent `the way people order their lives~\cite{Csikszentmihalyi:1981}'. While co-occurrence is a simplified relation and may not completely represent complex social relations among objects, this work marks a starting point for exploring the design possibilities of the thing constellation. Our workshops suggested that the designers were able to use this simple form of co-occurrence object network to revisit their everyday practices and uncover meaningful insights.
\subsection{A Platform Empowers Designers to Explore Thing Constellation in Their Ways}
Our thingCV makes the thing constellation not only visible but also interactable. Such interactivity (e.g., the interactive threshold and zooming in and out between perspectives) empowers a designer to explore the constellation and to identify interesting objects to discuss with other designers. For example, the interactive threshold allowed our participants to adjust and select the constellations that were most meaningful to them, and the ability to switch perspectives allowed them to identify interesting objects and communities and to further speculate on them. Therefore, our interactive interface empowers designers to interpret the constellation in their own ways, to identify objects meaningful to themselves, and to share their interpretations with others.
While prior work has investigated constellations intertwined with various things' perspectives, such as the bowl and cup~\cite{Wakkary:DIS2017}, toaster~\cite{Rebaudengo:2012}, and water kettle~\cite{Cila:2015}, these objects are usually pre-selected by the design researchers before the study, whereas everyday life contains many more diverse objects. As different objects can be meaningful to different people, our tool embraces such diversity and empowers designers to explore the constellation based on their own interests, preferences, or needs. Even though only 80 objects are currently shown in the constellation, this number is not a hard constraint and can be expanded with additional annotations. The objective of developing this tool is to demonstrate the possibility for designers to investigate a relatively large set of things and their constellations. This work thus makes a small but important first step towards a ``tangible'' constellation that designers can play with, exploring these common objects digitally and interactively.
This work also echoes the constellation metaphor that ``different cultures observing the same constellations of stars interpret them variably too~\cite{Lindley:2020}.'' Our thingCV allows every object to have its own place on the stage and invites diverse users of the tool to observe and interpret an object's meanings differently. As Nansen et al. note, every object has its own social life~\cite{Nansen:OzCHI2014}; thingCV empowers every object to be shown equally as a star, for someone to pass its stories forward.
\subsection{Abstract Social Network as a Defamiliarized Narrative}
Our thingCV visualizes an abstract social network of everyday practice, which serves as a `defamiliarized narrative'~\cite{Bell:ToCHI2005}. The `defamiliarized narrative' is a common strategy in design research whose goal is to create strangeness that encourages people to revisit familiar practice from an unfamiliar perspective, facilitating creative decision-making~\cite{Carlson:CC2013} as well as reflections on and speculations about the past, present, and future~\cite{Sterling:2009,Sterling:2005}. In our tool, this familiar strangeness comes from the network structure (i.e., nodes and links). Every node represents a familiar object, yet it is also a general and ambiguous entity remixed from various living styles, habits, and contexts. As such, thingCV shows familiar objects with strange links among them, and these connections can conflict with participants' prior experiences and understanding. For example, some of our participants found a strange network around the `sink': several objects that were usually not used in the same context were connected to the sink all at the same time. While such a network structure looked strange, it prompted participants to reflect on and speculate about its possible contexts. These familiar but strange links stimulate participants' imaginations, open up an interpretation space~\cite{Gaver:CHI2003}, and further lead to discovering new possibilities in their own practices. We see that thingCV enables such a defamiliarized narrative for participants to revisit everyday relationships from a fresh perspective.
{\color{change}
\subsection{The Role of the Underlying Dataset}
The dataset plays a pivotal role in helping designers dive into the practical world and make sense of what is happening via empirical data. In this work, we chose MS-COCO, an open image dataset, as a starting point to study the relationships between objects and people. By doing so, we could focus on designing rich interactive experiences and interactions for our tool instead of spending too much time on data collection. The selected dataset is a collective dataset in which data are collected by different people and cover diverse contexts. The goal is to understand whether designers with no specific interest in a particular context can also resonate with these common objects and social patterns. In this work, we indeed found that the collective dataset can successfully capture familiar and easily understandable relationships known to our participants. The data-driven patterns can facilitate active discussions among people and enable them to compare the detailed nuances with their own understandings. Thus, we see our use of the current dataset as a good starting point for researchers to observe and experience a thing constellation that is often imperceptible in their daily contexts.
\subsubsection{Limitations from the underlying dataset}
The chosen dataset affects the experiences of designers (i.e., tool users) and the insights obtained in the exploration process. First, the images in the current dataset were collected only by people. Different ways of collecting data can provide distinct entry points for understanding different constellations, because the data might contain distinct events and social interactions in specific contexts. For instance, our participants speculated about a new constellation shaped by cats' perspectives. Second, our current dataset was collected from the Internet and designed for training machines to recognize common objects, rather than for understanding emergent relationships among objects in the field. There might therefore be a gap between the co-occurrence relationships detectable in the dataset and those in real situations. For example, some objects might be occluded, and some objects might be too intimate for people to share online due to privacy concerns. As a result, some relationships may be missing from the current constellation. Third, our dataset consists of images that present snapshots in time and do not contain dynamic patterns of the relationships. The way people organize and interact with objects can change over time, depending on their preferences, habits, and even emotions. Thus, such a collective dataset might also miss long-term interactions with an object.
\subsubsection{Possible dataset for capturing emergent relationships}
There is no perfect, fixed dataset for understanding emergent patterns of everyday practice. However, we can articulate what kind of dataset could be used to investigate such relationships between people and everyday objects. First, the dataset can be collected not only by people but also by other non-human actors (e.g., cats, dogs, objects). Such a dataset can provide richer perspectives on the role of a particular object and its relationships with other objects. Second, any dataset that can be represented as nodes and links can be analyzed by our approach and tool. A node can be a person, an object, or a concept; a link can be one type of relationship between two things. The relationship needs to be defined by researchers, such as object co-use frequency or object ownership. To enhance the quality of the data, other considerations, such as capturing perspectives and privacy concerns, can be explored in the future. Last but not least, the dataset can be dynamic and evolve over time. To do so, we can allow users to add, delete, and modify the data in the existing dataset and directly modify the graph (i.e., nodes and links) to enrich important information that might be missing from the data collection or analysis process.
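The node-and-link representation and the user edits described above can be sketched minimally as follows. This is an illustrative sketch only, not thingCV's actual implementation; all class and relationship names here are our own assumptions.

```python
from collections import defaultdict


class ThingGraph:
    """A minimal editable graph of things: nodes are people, objects,
    or concepts; links carry a researcher-defined relationship weight
    (e.g., co-use frequency). Illustrative sketch, not the tool's code."""

    def __init__(self):
        # node -> {neighbor: weight}
        self.links = defaultdict(dict)

    def add_link(self, a, b, weight=1):
        # Undirected: a relationship holds in both directions.
        self.links[a][b] = weight
        self.links[b][a] = weight

    def remove_node(self, node):
        # Users may delete a thing together with all of its relationships.
        for neighbor in self.links.pop(node, {}):
            self.links[neighbor].pop(node, None)

    def neighbors(self, node):
        return dict(self.links.get(node, {}))


g = ThingGraph()
g.add_link("person", "chair", weight=5)  # hypothetical co-use frequency
g.add_link("chair", "book", weight=2)
g.remove_node("chair")                   # user edits the graph directly
print(g.neighbors("person"))             # {}
```

Allowing `add_link` and `remove_node` at runtime is what would let users enrich or correct information missing from the original data collection.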
}
\subsection{From Constellation Visualizer to IoT Context Inspirer}
Our workshops showed that the designers can identify social things, discover hidden instances and contexts, and change perspectives, which contributes new insights to IoT design.
The social qualities (e.g., sociable and lonely objects) suggest that IoT design could assign different social capabilities to things. While researchers have envisioned that the future IoT can enable a new community consisting of many social agents~\cite{Nansen:OzCHI2014}, this does not mean that every object needs to be sociable and connected. In our findings, some objects can be isolated whereas others are connected only to a specific object (e.g., a bridging object). These various social qualities encourage designers to redefine the social links between objects and be aware of the diversity of social things. In addition, the emerging contexts stimulate possible contexts for IoT design. For example, our results showed that the participants, inspired by the unfamiliar social network, actively discussed the possible implications for their own practice. Our tool not only visualizes the constellation but also inspires different contexts. Finally, the changing perspective encourages designers to see every node (e.g., people, food, objects, and pets) as equally important. Our results also suggested that new design opportunities could be generated (e.g., designing for cats) through the exploration process. Therefore, our tool makes designers consider other things, beyond humans, when designing the future IoT.
\subsection{Workshop Procedure}
The workshop was composed of three sections: 1) introduction, 2) observation through the tool, and 3) group discussion (see Figure~\ref{fig:workshop}). In the introduction section, the researchers first welcomed every voluntary participant and asked them to introduce themselves. Then we introduced the purpose of this workshop---inviting participants to test and play with our developing tool, thingCV. We introduced the tool, which provides designers with a new way of investigating their everyday objects from the field for IoT design inspiration. We also explained that the tool is a data-driven visualization tool presenting the co-occurrences of things (e.g., living and nonliving objects) from millions of online photos shared by the general public. Then, the researchers demonstrated the functionalities of the tool on a laptop, such as a threshold slider to adjust the number of links in the network, a switch button to change between the two thing-constellation views (i.e., social-centric and object-centric), and a hotkey to search for specific objects. After the introduction, the participants were given a link to access thingCV and used the tool on their own laptops (they had been told to prepare their own laptops before the workshop). Participants were given 15 to 20 minutes for this observation section, during which we asked them to play with the tool and capture any interesting findings via screenshots or notes on paper. Finally, one of the researchers, acting as a facilitator, hosted a group discussion with the participants.
During the group discussion, the facilitator gave three topics to guide the participants through one by one. First, the participants were asked to individually share their first impressions and experiences of using thingCV. Second, the participants took turns presenting their interesting findings and discussed them with the other participants. In the end, all participants were asked to summarize the objects they found most interesting during their discussions. The participants were also asked to reflect on these findings and share new understandings or ideas inspired by the tool. Note that the participants did not always follow the topics given by the facilitator. Instead, the facilitator encouraged the participants to actively share, debate, or speculate on other topics related to the tool with the other participants. The facilitator only controlled the time and sometimes cued the participants to make sure each of them responded to the three main topics in roughly equal time. In total, two design workshops were successfully conducted, and we gathered rich feedback on the tool and findings from eight experienced designers. Each workshop lasted 90 minutes, and all of the discussions were audio-recorded, transcribed, reviewed, and summarized into transcripts for data analysis.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/workshop_procedure-12.jpg}
\caption{The workshop consists of three sections. 1) Introduction: the facilitator (i.e., one of the authors) introduces the tool and task. 2) Observation: the facilitator asks participants to play with the tool and capture interesting screenshots of it. 3) Discussion: the facilitator hosts a discussion in which participants share their findings and reflections.}
\label{fig:workshop}
\end{figure}
\subsection{Data Gathering and Analysis}
After the workshops, we analyzed the collected data, including transcripts of the audio recordings of the workshops, screenshots captured by participants, and paper drawings and notes taken by the participants while observing with the tool. The researchers used thematic analysis~\cite{Braun_Clarke:2006} to analyze the collected data collaboratively, obtaining four main themes with related findings.
{\color{change}The coding and analytical procedure included three stages. First, two researchers (i.e., two of the authors) read through all the interview data, including audio transcripts, screenshots, paper drawings, and observation notes; they then identified common features in the data for further analysis. The two researchers annotated and discussed the data, scaffolded by the group-discussion structure of the workshop: tool usage process and experiences (i.e., how participants used and felt about the tool), data interpretation (i.e., what they found in the tool), and general reflection (i.e., what new understanding they obtained while using the tool). Second, the researchers prepared and presented all annotated data to a group of experts with mixed backgrounds in design and computer science (i.e., all authors). The researchers carefully examined the data, compared the initial themes with the raw data, and developed sub-themes. Last, the researchers (i.e., all authors) combined and refined the sub-themes in an iterative, dialogic process until everyone agreed, and the final themes were then generated. The final themes synthesize how thingCV engages participants in investigating the emergent relationships among everyday objects.}
\subsection{Observing Thing Constellation Flexibly}
The results showed that all participants were engaged in making sense of the thing constellation with our tool through a flexible exploratory process. They discovered their own interesting objects and gained new understandings of everyday social practices by revisiting familiar objects from an alternative perspective. The participants' first impressions of using the tool were very positive. For example, ``\textit{This is so cool.} (P1)'', ``\textit{Cool! I like the dynamic effect.} (P5)'', ``\textit{I don't know they can be shown at the same time that often. Amazing.} (P3)'' To further present the benefits of this tool, we show how its three key features support the exploratory design process.
\subsubsection{Perspective-switching button allows zooming in \& out of the thing constellation}
The two perspectives (social-centric and ego-centric) enabled the participants to zoom in and out to investigate different angles of the thing constellation. The social-centric perspective allowed them to `zoom out' on the network to interpret how objects were connected with each other: ``\textit{I firstly use the social-centric perspective to observe an overview of the network and identify how social and lonely an object can be.}'' (P2). The ego-centric perspective, in contrast, allowed them to `zoom in' on the network to look into the details of the connections to a specific object: ``\textit{then I use the object-centric perspective to look into the details of how objects link to each other.}'' (P2). By switching back and forth between the two perspectives, participants not only looked into the details to unpack the social life from a single object's perspective but also examined the entire social network intertwined by various objects.
\subsubsection{Abstract network representations promote open interpretations of the thing constellation}
While the object instances presented by our tool were collective objects extracted from millions of everyday photos, the participants were able to use such an abstract form of a social network of collective objects to discuss interesting findings with the other participants in their group. They speculated about various potential scenarios and gained new understandings of everyday objects. Such abstraction creates an ambiguity about `what these objects exactly are', which sparked active discussions among participants. For example, `person' was interpreted differently by participants. P5 recognized `person' as users who interact with diverse types of objects; however, P8 interpreted `person' as non-users such as pedestrians or other family members because ``\textit{`person' can be recognized because they are non-users who is taken photos by `the user'---the one who controls the camera.}'' Taking `bear' as another example, participants interpreted `bear' as different instances, such as `the real bear living in the zoo (P5)', `the toy bear sitting on the bed (P6)', or `the bear from the painting (P8)', so that participants speculated about diverse contexts and had active discussions with others. Every participant had their own interpretations of these abstract nodes and links. Therefore, the tool provides a platform for every participant to share, discuss, and even debate with each other. In a way, the ambiguity of such data representation opens a space for every participant to explore potential possibilities happening in the thing constellation.
\subsubsection{Interactive threshold enables dynamic constellation observations}
Participants were engaged in observing the dynamic changes between nodes and links by using the interactive threshold. For example, ``\textit{I am pretty engaged in seeing their dynamic changes. It is very cool to see which object is the latest one that `joins' the group.} (P1)'' By adjusting the threshold, participants were able to explore and identify their own interesting segments of the constellation. Although they used the same tool and data visualization, participants discovered very different things about objects and object communities. As a result, such diversity sparked interesting discussions among participants. Participants were inspired by others, interpreted more diverse contexts, and developed new understandings by revisiting their everyday practice with objects. For example, P5 initially pointed out `book' and further shared his/her interpretation that ``\textit{`chair' is the key object to connect `book' and `person'.}'' Inspired by P5, P7 took a further look at the community that `book' was in and found objects other than the `chair': ``\textit{Really? That is interesting! Let me check, too (...) but I found that `bottle' is the real key object to connect `book' and `person'.}'' Furthermore, P7 found more possible links around the book. Because the two of them observed the book with different thresholds, they saw distinct structures of the network and were engaged in discussing various contexts for the target object (i.e., book). For example, P5 speculated a context where the book is used to lift the chair. Then, P7 shared another context where an IoT system asks the bottle to look for the book. In sum, the interactive threshold allows participants to explore their own interesting segments and discover more contexts with other people.
\subsection{Projecting Social Quality onto Things}
During the workshop, we found that participants projected their social experiences onto things. When they observed the dynamic changes of the thing constellation, they used ``social roles'' from people's networks to describe the social characteristics of identified objects and groups of objects. For example, the participants identified levels of sociability for each object based on its number of social links. An object connecting to many other objects is a sociable object, indicating that it has many friends; an object connecting to no object is an isolated object, indicating that it has no friends. The participants also looked into the network structure and identified direct and indirect links for a group of objects. They interpreted the objects as having different social distances and further identified bridging objects that connect two different objects or clusters. Finally, the participants observed a dynamic sequence in which objects connected to each other at different points in time. They interpreted this sequence as a joining time for each object: ``\textit{I firstly noticed `chair' is the first one that `joins' the group.} (P6)'' The following examples show how the participants identified different social levels of an object, from sociable to isolated, and different joining times, from active to passive.
\subsubsection{Person and couch are sociable and busy.}
Participants described the most-linked objects, such as `person' and `couch' (P7), as sociable and busy objects (see Figure~\ref{fig:sociable}-1 \& 2). These objects are linked with multiple object clusters. Participants saw such an object as a sociable object that is involved in various social activities. As such, P7 was inspired by these sociable objects and found a new design opportunity for future IoT design. For example, P7 shared that ``\textit{couch is connected with various objects to support diverse activities in people's everyday lives. Maybe the couch can be the perfect IoT object or the interface to control various activities or do something smart.}'' However, not all of the participants were interested in sociable objects. For example, P4 was less interested in identifying sociable objects because ``\textit{these objects can work and be placed in any context. It is a bit difficult for me to be immediately inspired by any special context.}'' Nevertheless, sociable objects were still discussed within the group and played an important role for many participants as a point of comparison with their opposites, isolated objects.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/sociable-01.jpg}
\caption{The sociable object is the most connected object in the network. Two selected examples show that 1) Person (social-centric view) and 2) Couch (ego-centric view) are sociable.}
\label{fig:sociable}
\end{figure}
\subsubsection{Hair dryer is the loneliest object.}
Participants were interested in isolated objects, which have no links with other objects. The `hair dryer' was one of the most discussed isolated objects in the two groups. When adjusting the threshold from high to low, participants (P1, P4, P6, P7) found that the hair dryer was always floating alone (see Figure~\ref{fig:isolated}). P1 and P4 actively discussed how it could be so lonely given that they used this object almost every day. They kept asking why; P1 even compared the hair dryer with other unfamiliar objects (e.g., snowboards and animals), which are barely seen in his/her everyday life, to find possible reasons. P1 was surprised: ``\textit{how could the hair dryer be as lonely as animals? The hair dryer is a common object that appears every day but it is more isolated than the snowboard.}'' P7 found that the hair dryer is less active than the elephant: ``\textit{the elephant joins the group earlier than the hair dryer, how comes?}'' At the end of the discussion, P6 felt sorry for the hair dryer, which has no friends, and said ``\textit{what a poor little thing!}'' Participants further discussed possible contexts or reasons that might make the hair dryer alone. For example, ``\textit{the hair dryer was usually used alone} (P4)'' or ``\textit{maybe we seldom put them on the surface so that the hair dryer could possibly link with other objects. We stored them in the cabinet immediately after using the hair dryer.} (P8)'' The hair dryer is one of the most common everyday objects and is seldom discussed by designers; however, through the use of the tool, it became the most special object to discuss.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/isolated-02.jpg}
\caption{The isolated object is the least connected object in the network. One selected example shows `hair drier' floating alone (social-centric view).}
\label{fig:isolated}
\end{figure}
\subsubsection{Camera is the outlier (invisible tool guy)}
Besides identifying objects presented in the tool, P6 also identified a missing object, the camera, which was not shown in the tool. P6 said, ``\textit{these objects were all taken by cameras, but the camera is not linked to anyone and is even invisible on the network (...) The camera is left behind and working as an invisible tool guy to document others' lives without his/hers.}'' P6 interpreted that the camera always watched the other objects' activities but never joined them. This finding led the other participants to continue the discussion and agree that the camera is the truly isolated object, the outlier of the social network.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/bridge-03.jpg}
\caption{The bridging object connects two object clusters with different qualities. One selected example shows `bowl' as the bridge connecting edible (red boxes) and inedible objects (blue boxes).}
\label{fig:bridge}
\end{figure}
\subsubsection{Bowl bridges `food' with the outside (inedible) world}
P8 found the `bowl' interesting because ``\textit{it bridges two different qualities of objects, edible and inedible.}'' (see Figure~\ref{fig:bridge}) P8 reflected on everyday practice with food: ``\textit{without bowl, the food can never be exposed in the outside world (socially interact with other types of objects, inedible objects).}'' P8 further gave the example of a fruit basket: ``\textit{even the fruit, if they want to be placed on the table, they still need a bowl or a basket, and even need to be placed beautifully.}'' In this case, we see participants projecting onto objects a ritual reflecting their own social norms.
\subsubsection{Cat joins the community earlier than dog.}
Participants observed and interpreted different joining times for each object when adjusting the threshold of the thing constellation from high to low. For example, P6 compared two objects, cat and dog (both usually pets for people), and was surprised by the gap in their joining times: ``\textit{cat joins the cluster earlier than the dog, why!?}'' This finding elicited active discussion among the participants (see the two different joining times of cat and dog in Figure~\ref{fig:join}). For example, P5 explained that cats are usually indoor pets and dogs are outdoor pets, so ``\textit{cats can make friends with everyday objects earlier than dogs.}'' Since the tool visualized photos taken by people, P6 also reflected on the photo-taking preferences of the general public: ``\textit{perhaps, people like to take photos of cats more than dogs.}''
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/joining-04.jpg}
\caption{Participants compared the joining times of two objects, `cat' and `dog': 1) `cat' joins the big community earlier than `dog'; 2) `dog' joins the big community later than `cat'.}
\label{fig:join}
\end{figure}
\subsection{Discovering Emerging Diverse Contexts via Object Clusters}
The desire to map contexts (e.g., scenarios composed of who, where, what, and when) onto object clusters (or communities) also emerged: ``\textit{when seeing `person', `wearables' and `transportations' forming into the same cluster, I immediately have an image about the hustle and bustle city where peoples and cars are crossing by.} (P3)'' Diverse contexts emerged because participants could imagine a space occupied by all of these objects from the same cluster, with the objects together supporting a certain activity. For example, P5 even drew a space situated with objects from the same cluster: ``\textit{I imagined how they would be placed in the same space. And I found that all of them [the objects] played reasonable roles to support various contexts in the space without feeling out of place.}'' (see Figure~\ref{fig:drawing})
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/drawing-05.jpg}
\caption{One participant (P5) drew every object she found in the same network and speculated a possible context around these objects.}
\label{fig:drawing}
\end{figure}
Moreover, a context could change from one to another as links to different objects dynamically increased or decreased with threshold adjustment. For example, P8 interpreted a context transition from a pedestrian walking on the street to a family member interacting at home. Different context mappings could also be found even within the same object cluster. For example, P7 found various contexts for the cluster around the sink: ``\textit{I found different contexts for using the sink such as after using the toilet, cooking in the kitchen, and brushing teeth. These contexts are happening in different spaces. However, it is also possible that this is a mini apartment where all the activities have happened in the same sink.}'' (see Figure~\ref{fig:sink}) Similar context mapping was done by P2, who interpreted various contexts for `cup' when observing its different links.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/sinkcontext-06.jpg}
\caption{Participants interpreted multiple contexts even when observing the same object clusters. This example shows two contexts for the sink-related object clusters: 1) people live in a big house with multiple sinks, each supporting a different use (i.e., washing hands, food, or clothing); 2) people live in a mini apartment with a single all-purpose sink.}
\label{fig:sink}
\end{figure}
Finally, participants found such `emerging contexts' from the tool useful and potentially applicable to exploring contexts for IoT design. For example, P1 said, ``\textit{without using the tool, I may not be highly aware of so many possible links with a single object. This tool can inspire designers to think about new communications or IoT systems between objects and clusters.}'' Furthermore, the emerging contexts can also be inspiring because participants not only derived common contexts from familiar everyday settings but also discovered or even speculated about unexpected or unfamiliar contexts. Participants even reflected on missing objects that were not shown in the tool. Thus, hidden contexts and even contexts from different cultures emerged. To better illustrate these examples, the following presents emerging contexts from familiar to unfamiliar, hidden contexts reflected in invisible objects, and contexts in different cultures.
\subsubsection{Familiar but unnoticed contexts: using a book to lift up the chair}
A context familiar to participants but usually unnoticed in their daily practice emerged---`using a book to lift up the chair.' When observing the book, P5 was surprised by the social distance between a book and a person: ``\textit{why doesn't the book link to the person directly? Instead, there is a chair in-between.}'' P5 was curious about this social distance because the first context that came to P5's mind was a scenario where someone was reading. Such indirect links between book and person inspired P5 to come up with another possible context that usually goes unnoticed in daily practice: ``\textit{could it be the book used for lifting up the chair by placing it underneath?}'' Sometimes a chair cannot stand stably on uneven, rough ground. P5 drew on his/her experiences to project a new context for the book---it is not for reading but for lifting up the chair!
\subsubsection{Speculative contexts: bottle! please help me call book.}
A speculative context also emerged when participants identified unfamiliar links between objects, such as the links between `bottle', `book', and `person': ``\textit{`person' is indirectly communicating with `book'. `Bottle' is the object which negotiates in-between.} (P7)'' P7 was inspired by this observation and further speculated a fictional scenario with animated objects (see their speculations in Figure~\ref{fig:book}). P7 shared that a bottle, which might have an intimate relationship with the book, could act as the book's agent. If someone wants to find a specific book, they can only call the bottle for help. This context seemed fictional, but it enabled the participants to identify a ``human-decentralized'' everyday practice, which suggests that participants changed their perspectives during the observations and explorations.
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/bookcontext-07.jpg}
\caption{ 1) Participants speculated a context between `bottle', `person' and `book' and imagined a new IoT system for them. 2) Participants reflected on the social links between `person', `chair' and `book', and discovered a familiar but unnoticed context in their prior experiences: ``the book can be used to lift up the chair.''}
\label{fig:book}
\end{figure}
\subsubsection{Hidden contexts: my intimate cosmetic products}
We also found that participants speculated contexts around hidden but everyday-used objects that were not shown in the tool. For example, P5 observed that cosmetics and skin-care products were hidden in the social network. P5 further explained that ``\textit{This person doesn't want to let anyone find out he/she will make up. This person wants to make people think he/she is born with beautiful skin naturally. Therefore, these objects (cosmetics and skin-care products) might be secret and intimate objects which cannot be taken by a camera and shared in public.}'' While the current tool presents only 80 objects, participants were able to speculate about more objects and even about hidden contexts such as privacy and use preferences.
\subsubsection{Cultural contexts: who plays with the dog \& where are the chopsticks?}
Contexts in different cultures were also discussed by the participants. For example, P4 found that the cluster of frisbee and dog could be immediately mapped to the context `people playing with dogs in the park'. However, P4 found it strange and unfamiliar in his/her own everyday practice with dogs. Although this context is quite common in movies, P4 explained that ``\textit{playing frisbee with dogs is not very common in my country.}'' Instead, P1 added that ``\textit{dogs should be linked with bikes or cars}.'' Additionally, P2 found a living and eating style different from his/hers: ``\textit{the cluster of food (broccoli, carrot, wine), tablewares (fork, knife), snow sports equipment, and home supplements presented a very different lifestyle than mine. The tool presents an exotic western diet and a house with big yards in the snowy north. However, mine is from the eastern culture, which definitely includes `chopsticks' (it is not shown in the tool).}'' When contexts emerged instinctively for participants through the tool, these contexts also prompted them to reflect on and discover the various cultures that object clusters can depict.
\subsection{Changing Perspectives to Revisit Everyday Practice}
The participants changed their perspectives to revisit and reflect on their everyday practice with objects when switching between the social-centric and ego-centric perspectives. The following presents how the tool elicited these changing perspectives, leading participants to empathize with different characters and thus engage with a more-than-human-centred perspective.
\subsubsection{Projecting myself into that `object'}
The participants projected themselves onto one of the objects to imagine the world it lives in. For example, P5 shared that ``\textit{I cannot help projecting myself into one of the objects, the `person'. I have noticed any object linked to that `person', and imagine that if I am that `person' what kinds of living I will have, and what objects will contact me.}'' Imagining themselves as this person switched their perspective from revisiting their own familiar everyday experiences to engaging with a speculative or fictional context shaped by all of the objects.
\subsubsection{Discovering a human-decentralized everyday}
The participants changed their perspectives from human-centred to more-than-human-centred. For example, P7 reflected that ``\textit{before using the tool, I thought every object would be definitely linked to `person', because objects were only used when people touched them based on my intuition. However, it is not a fact at all. During the observation, I found that there are some objects that are already connected as a community, a community without any people involved. And the `person' is just one of the busy objects like `chair' and `sofa' in the Thing Constellation.}'' P7 was surprised by this finding and described how the everyday setting presented by the tool showed him/her a human-decentralized everyday practice. P7 interpreted that ``\textit{It perfectly and truly illustrates the nature of IoT because not every object is fully controlled and surrounded by humans (users). In the background, there will also be some autonomous objects working and communicating on their own without humans.} (P7)'' (see the human-decentralized network and the possible IoT scenario in Figure~\ref{fig:decentralised})
\begin{figure}[h!]
\includegraphics[width=\textwidth]{figures/keyfinding/humandecentralised-10.jpg}
\caption{Many objects were connected without being connected to `person'. This picture made participants change their perspectives, seeing their everyday relationships with things as a `human-decentralised' structure. This structure also echoes IoT-related scenarios. For example, the right part shows a scenario in which many IoT devices autonomously work and communicate in the background when people are not present.}
\label{fig:decentralised}
\end{figure}
\subsubsection{Imagining different constellations to be shaped by a cat}
By playing with our tool, participants were inspired to speculate about a different version of the Thing Constellation (e.g., a constellation from a cat-centric perspective) by asking a ``what if'' question. For example, P6 speculated, `what if this visualization is based on the data captured by a cat?' P6 said, ``\textit{the current tool only visualizes the photos captured by humans. If these photos are taken by cats, what kinds of Thing Constellation would be shaped? Maybe, the most sociable object would be the `cat food' or `my face' because my cat always slaps my face every morning. }(P6)'' While the current social network of object co-occurrence is derived from photos taken by humans, participants were inspired to change their perspectives and speculate about possible Thing Constellations shaped by different species.
\subsection{Use Co-occurrence as a Starting Point for Exploring Social Links among Objects}
To shape the Thing Constellation, the fundamental step is to define the meaning of the `social links' among objects. A link can be defined in different ways~\cite{Atzori:ComputerNetwork2012,Wuest:2012,Giaccardi:DIS2016}, depending on the focus. Giaccardi et al. identified social links based on the use frequencies of objects~\cite{Giaccardi:DIS2016,Wakkary:DIS2017}. Their focus was to explore not only the relationships between human owners and objects but also objects' own relationships with other objects. Nansen et al. emphasized that an object can be not only a social actor but also an object user~\cite{Nansen:OzCHI2014}.
In this work, we aim to construct a social network of objects from real-world data for designers, so that they are able to actually play with networked objects digitally. Thus, practicality and ease of implementation are important considerations. We use object co-occurrence as a starting point for exploring social links among objects. The resulting co-occurrence network is a basic form of social network, commonly used to capture potential relationships between people, concepts, or other entities. In addition, the co-occurrence technique is easily implementable, so it is an appropriate first step for exploring the possibilities of the Thing Constellation.
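As a sketch of how such a co-occurrence network can be built (the function and the photo annotations below are hypothetical, for illustration only): each unordered pair of object labels appearing in the same photo adds one count to the corresponding social link.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(photos):
    """Count how often each pair of object labels appears in the same photo.

    `photos` is a list of sets of object labels (one set per photo); the
    returned Counter maps an unordered label pair to its co-occurrence count.
    """
    edges = Counter()
    for labels in photos:
        for a, b in combinations(sorted(labels), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical photo annotations, for illustration only.
photos = [{"person", "chair", "cup"},
          {"chair", "table"},
          {"person", "chair"}]
network = cooccurrence_network(photos)
```

In this toy input, `chair` and `person` co-occur twice, so their social link is the strongest edge in the network.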
\subsection{Look for a Large-Scale Dataset Containing Objects in Diverse Contexts}
To visualize an object co-occurrence social network, we need a large amount of data to capture general emergent patterns in practice. Prior research has focused on collecting images or sensor data in the wild; the common challenges are scaling up data collection and addressing privacy concerns. Considering both the amount of data and privacy concerns, we decided to look for a large-scale dataset of everyday objects in a variety of contexts, voluntarily contributed by the general public.
\subsection{Provide an Interactive Tool to Enable Flexible Sensemaking on Data}
To support flexible sense-making, we aim to provide an interactive tool that presents diverse perspectives on a social network of objects. The interface should be flexible, enabling people to manipulate parameters to observe, interpret, and reflect on object co-occurrences and their own personal experiences.
\subsection{Provide Data-Driven Patterns for Encouraging Open Discussions}
To help designers interpret the everyday in alternative ways, we present only the computational patterns driven by data and provide an interactive interface for people to contribute their subjective interpretations. By doing so, designers are not influenced by others' biases and can make interpretations freely; rather than being dominated by other people's views, they can share their intuitive thoughts and explore ideas grounded in the data.
\section*{\label{sec:Atlas}Experimental setup}
\newcommand{\AtlasCoordFootnote}{%
ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP)
in the centre of the detector and the $z$-axis along the beam pipe.
The $x$-axis points from the IP to the centre of the LHC ring,
and the $y$-axis points upwards.
Cylindrical coordinates $(r,\phi)$ are used in the transverse plane,
$\phi$ being the azimuthal angle around the $z$-axis.
The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln \tan(\theta/2)$.
Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}}$.
The photon or electron transverse energy is $\et= E \sin(\theta)$, where $E$ is its energy.}
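As a minimal numerical sketch of these coordinate conventions, the following evaluates the pseudorapidity from the polar angle and the angular distance $\Delta R$, folding the azimuthal difference into $[-\pi,\pi]$ (the function names are ours, for illustration):

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle in radians."""
    return -math.log(math.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, with the azimuthal difference folded into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

# A particle emitted at 90 degrees to the beam axis has eta = 0.
```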
The ATLAS detector is a cylindrical particle detector\footnote{\AtlasCoordFootnote} composed of several sub-detectors~\cite{ATLAS_detector_paper}.
The inner tracking detector (ID) consists of a silicon pixel system, a silicon microstrip detector and a straw-tube tracker immersed in a 2\,T magnetic field provided by a superconducting solenoid.
The ID track reconstruction efficiency is estimated in Ref.~\cite{Aaboud:2016itf} for minimum-bias $pp$ events that, like UPC Pb+Pb events, have a low average track multiplicity.
For charged hadrons in the transverse momentum range $100<\pt<200$~\MeV~the efficiency is about 50\% and grows to 80\% for $\pt>200$~\MeV.
Around the tracker there is a system of EM and hadronic calorimeters, which use liquid argon and lead, copper, or tungsten absorbers for the EM and forward ($|\eta| > 1.7$) hadronic components of the detector, and scintillator-tile active material and steel absorbers for the central ($|\eta|< 1.7$) hadronic component.
The muon spectrometer (MS) consists of separate trigger and
high-precision tracking chambers measuring the trajectory of muons in a magnetic field generated by superconducting air-core toroids.
The ATLAS minimum-bias trigger scintillators (MBTS) consist of scintillator slabs positioned between the ID and the endcap calorimeters with each side having an outer ring of four slabs segmented in azimuthal angle, covering $2.07 < |\eta| < 2.76$ and
an inner ring of eight slabs, covering $2.76 < |\eta| < 3.86$.
The ATLAS zero-degree calorimeters (ZDC), located along the beam axis at 140 m from the IP on both sides, detect neutral particles (including neutrons emitted from the nucleus).
The ATLAS trigger system~\cite{Aad:1388600} consists of a Level-1 trigger implemented using a combination of dedicated electronics and programmable logic, and a software-based high-level trigger (HLT).
\section*{\label{sec:DataMC}Monte Carlo simulation and theoretical predictions}
Several Monte Carlo (MC)
samples are produced to estimate background contributions and corrections to the fiducial measurement.
The detector response is modelled using a simulation based on GEANT4~\cite{Agostinelli:2002hh,Aad:2010ah}.
The data and MC simulated events are passed through the same reconstruction and analysis procedures.
Light-by-light signal events are generated taking into account box diagrams with charged leptons and quarks in the loops, as detailed in Ref.~\cite{Klusek-Gawenda:2016euz}.
The contributions from $W\textrm{-boson}$ loops are omitted in the calculations since they are mostly important for diphoton masses $m_{\gamma\gamma} > 2m_{W}$~\cite{Fichet:2014uka}.
The calculations are then convolved with the \PbPb\ EPA spectrum from the \starlight 1.1 MC generator~\cite{Klein:2016yzr}.
Next, various diphoton kinematic distributions are cross-checked with predictions from Ref.~\cite{Enterria:2013yra} and good agreement is found.
The theoretical uncertainty on the cross section is mainly due to limited knowledge of the nuclear electromagnetic form-factors
and the related initial photon fluxes. This is studied in Ref.~\cite{Enterria:2013yra} and the relevant uncertainty is conservatively estimated to be $20\%$.
Higher-order corrections (not included in the calculations) are also part of the theoretical uncertainty and are of the order of a few percent for diphoton invariant masses below 100~\GeV~\cite{Bern:2001dg, Klusek-Gawenda:2016nuo}.
The sources of background considered in this analysis are:
\ggee, central exclusive production (CEP) of photon pairs, exclusive production of quark--antiquark pairs ($\gamma\gamma\rightarrow q\bar{q}$) and other backgrounds that could mimic the diphoton event signatures.
The \ggee{} background is modelled with \starlight 1.1~\cite{Klein:2016yzr},
in which the cross section is computed by combining the \PbPb\ EPA with the leading-order formula for \ggee.
This process has recently been measured by the ALICE Collaboration, and good agreement with \starlight is found~\cite{Abbas:2013oua}.
The exclusive diphoton final state can also be produced via the strong interaction, through a quark loop in the exchange of two gluons in a colour-singlet state.
This CEP process, $gg\rightarrow\gamma\gamma$, is modelled using \superchic 2.03~\cite{Harland-Lang:2015cta}, in which the $pp$ cross section has been scaled by $A^2R_g^4$ as suggested in Ref.~\cite{Enterria:2013yra}, where $A=208$ and $R_g\approx0.7$ is a gluon shadowing correction~\cite{Eskola:2009uj}.
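As an arithmetic aside, the $A^2R_g^4$ scaling quoted above, with $A=208$ and $R_g\approx0.7$, corresponds to a factor of roughly $10^4$:

```python
A = 208     # lead mass number
R_g = 0.7   # approximate gluon shadowing correction quoted in the text

# Naive scaling of the pp CEP gamma-gamma cross section to Pb+Pb.
scale = A**2 * R_g**4
```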
This process has a large theoretical uncertainty, of $\mathcal{O}(100\%)$, mostly related to incomplete knowledge of gluon densities~\cite{Aaltonen:2011hi}.
The $\gamma\gamma\rightarrow q\bar{q}$ contribution is estimated using \textsc{Herwig}++ 2.7.1~\cite{Bahr:2008pv} where the EPA formalism in $pp$ collisions is implemented.
The $\gamma\gamma\rightarrow q\bar{q}$ sample is then normalised to the corresponding cross section in \PbPb\ collisions~\cite{Klein:2016yzr}.
\section*{\label{sec:Selection}Event selection}
Candidate diphoton events were recorded in the \PbPb\ run in 2015 using a dedicated trigger for events with moderate activity in the calorimeter but little additional activity in the entire detector.
At Level-1 the total \et\ registered in the calorimeter after noise suppression was required to be between $5$ and $200\gev$.
Then at the HLT, events were rejected if more than one hit was found in the inner ring of the MBTS (MBTS veto) or if more than ten hits were found in the pixel detector.
The efficiency of the Level-1 trigger is estimated with \ggee\ events passing an independent supporting trigger.
This trigger is designed to select events with mutual dissociation of Pb nuclei and small activity in the ID.
It is based on a coincidence of signals in both ZDC sides and a requirement on the total \et\ in the calorimeter below 50~\GeV.
Event candidates are required to have only two reconstructed tracks and two EM energy clusters.
Furthermore, to reduce possible backgrounds, each pair of clusters (cl1, cl2) is required to have a small acoplanarity ($1-\Delta\phi_{\textrm{cl1,~cl2}}/\pi<0.2$).
The extracted Level-1 trigger efficiency is provided as a function of the sum of cluster transverse energies (\etclone+\etcltwo).
The efficiency grows from about $70\%$ at $(\etclone+\etcltwo)=6\gev$ to $100\%$ at $(\etclone+\etcltwo)>9\gev$.
The efficiency is parameterised using an error function fit which is then used to reweight the simulation.
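A sketch of such an error-function turn-on curve is given below; $\mu$ and $\sigma$ are illustrative parameters chosen to reproduce the quoted behaviour (about 70\% at 6 \GeV, near 100\% above 9 \GeV), not the fitted values from the analysis.

```python
import math

def trigger_efficiency(et_sum, mu=5.3, sigma=1.3):
    """Error-function turn-on versus ET(cl1)+ET(cl2) in GeV.

    mu and sigma are illustrative parameters, not the fitted values
    from the analysis.
    """
    return 0.5 * (1.0 + math.erf((et_sum - mu) / (sigma * math.sqrt(2.0))))
```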
Due to the extremely low noise, very high hit reconstruction efficiency and low conversion probability of signal photons in the pixel detector (around 10\%), the uncertainty due to the requirement for minimal activity in the ID is negligible.
The MBTS veto efficiency was studied using \ggll~events ($\ell=e,~\mu$) passing a supporting trigger and it is estimated to be $(98\pm2)\%$.
Photons are reconstructed from EM clusters in the calorimeter and tracking information provided by the ID, which allows the identification of photon conversions.
Selection requirements are applied to remove EM clusters with a large amount of energy from poorly functioning calorimeter cells, and a timing requirement is made to reject out-of-time candidates. An energy calibration specifically optimised for photons~\cite{Aad:2014nim} is applied to the candidates to account for upstream energy loss and both lateral and longitudinal shower leakage.
A dedicated correction~\cite{Aaboud:2016yuq} is applied for photons in MC samples to correct for potential mismodelling of quantities which describe the properties (``shapes'') of the associated EM showers.
The photon particle-identification~(PID) in this analysis is based on three shower-shape variables: the lateral width of the
shower in the middle layer of the EM calorimeter, the ratio of the energy difference associated with the largest and second
largest energy deposits to the sum of these energies in the first layer, and the fraction of energy reconstructed in the first layer relative to the total energy of the cluster.
Only photons with $\et>3\gev$ and $|\eta|<2.37$, excluding the calorimeter transition region $1.37<|\eta|<1.52$, are considered. The pseudorapidity requirement ensures that the photon candidates pass through regions of the EM calorimeter where the first layer is segmented into narrow strips, allowing for good separation between genuine prompt photons and photons coming from the decay of neutral hadrons.
A constant photon PID efficiency of 95\% as a function of $\eta$ with respect to reconstructed photon candidates is maintained.
This is optimised using multivariate analysis techniques~\cite{Hocker:2007ht}, such that EM energy clusters induced by cosmic-ray muons are rejected with 95\% efficiency.
Preselected events are required to have exactly two photons satisfying the above selection criteria, with a diphoton invariant mass greater than 6~\GeV.
In order to reduce the dielectron background, a veto on the presence of any charged-particle tracks (with $\pt>100$~\MeV{}, $|\eta|<2.5$ and at least one hit in the pixel detector) is imposed.
This requirement further reduces the fake-photon background from the dielectron final state by a factor of 25, according to simulation.
It has almost no impact on \gggg\ signal events, since the probability of photon conversion in the pixel detector is relatively small and converted photons are suppressed at low \et\ (3--6~\GeV) by the photon selection requirements.
According to MC studies, the photon selection requirements remove about 10\% of low-\et\ photons.
To reduce other fake-photon backgrounds (for example, cosmic-ray muons), the transverse momentum of the diphoton system~(\ptgg) is required to be below 2~\GeV.
To reduce background from CEP $gg\rightarrow\gamma\gamma$ reactions, an additional requirement on diphoton acoplanarity, $\textrm{Aco}=1-\Delta\phi_{\gamma\gamma}/\pi<0.01$, is imposed.
This requirement is optimised to retain a high signal efficiency and reduce the CEP background significantly, since the transverse momentum transferred by the photon exchange is usually much smaller than that due to the colour-singlet-state gluons~\cite{Baur:2001jj}.
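The acoplanarity used in both the cluster-pair and diphoton selections can be computed as follows, folding the azimuthal opening angle into $[0,\pi]$:

```python
import math

def acoplanarity(phi1, phi2):
    """Aco = 1 - |Delta phi|/pi, with |Delta phi| folded into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return 1.0 - dphi / math.pi

# Perfectly back-to-back photons give Aco = 0 and pass the Aco < 0.01 cut.
```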
\section*{\label{sec:DetCalib}Performance and validation of photon reconstruction}
Since the analysis requires the presence of low-energy
photons, which are not typically used in ATLAS analyses,
detailed studies of photon reconstruction and calibration are performed.
High-\pt\ \ggll production with a final-state radiation~(FSR) photon is used for the measurement of the photon PID
efficiency. Events with a photon and two tracks corresponding to oppositely charged particles with $\pt>1\gev$ are required to pass the same trigger as in the diphoton selection or the supporting trigger.
The $\Delta R$ between a photon candidate and a track is required to be greater than 0.2 in order to avoid leakage of the electron clusters from the \ggee\ process to the photon cluster.
The FSR event candidates are identified using a $\pTttg<1\gev$ requirement, where \pTttg\ is the transverse momentum of the three-body system consisting of two charged-particle tracks and a photon.
The FSR photons are then used to extract the photon PID efficiency, which is defined as the probability for a reconstructed photon to satisfy the identification criteria.
Figure~\ref{fig:pt-fsr-eff}(a) shows the photon PID efficiencies in data and simulation as a function of reconstructed photon \et.
Within their statistical precision the two results agree.
The photon reconstruction efficiency is extracted from data using \ggee{} events where one of the electrons emits a hard-bremsstrahlung photon due to interaction with the material of the detector.
Events with exactly one identified electron, two reconstructed charged-particle tracks and exactly one photon are studied.
The electron \eT is required to be above 5 \GeV{} and the \pT of the track that is unmatched with the electron (trk2) is required to be below $2$~\GeV{}.
The additional hard-bremsstrahlung photon is expected to have $\eT^{\gamma}\approx (\eT^e-\pT^\textrm{trk2})$.
The $\pT^\textrm{trk2}<2$ \GeV{} requirement ensures a sufficient $\Delta R$ separation between the expected photon and the second electron, extrapolated to the first layer of the EM calorimeter.
The data sample contains 247 $\ggee$ events that are used to extract the photon reconstruction efficiency, which is presented in Figure~\ref{fig:pt-fsr-eff}(b).
Good agreement between data and \ggee~MC simulation is observed and the photon reconstruction efficiency is measured with a 5--10\% relative uncertainty at low \eT (3--6~\GeV).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.4\textwidth]{FSRPhotonEff.pdf}\put(-95,-8){{\footnotesize(a)}}
\includegraphics[width=0.4\textwidth]{photon_reco_tap.pdf}\put(-95,-8){{\footnotesize(b)}}
\caption{Photon identification and reconstruction efficiencies. (a) Photon PID efficiency as a function of photon \et\ extracted from FSR event candidates. (b) Photon reconstruction efficiency as a function of photon \eT\ (approximated with $\eT^e - \pT^\textrm{trk2}$) extracted from \ggee events with a hard-bremsstrahlung photon.
Data (closed markers) are compared with MC simulations (open markers).
The statistical uncertainties arising from the finite size of the data and simulation samples are indicated by vertical bars.}
\label{fig:pt-fsr-eff}
\end{center}
\end{figure}
In addition, a cross-check is performed on $Z\rightarrow\mu^+\mu^-\gamma$ events identified in \pp\ collision data from 2015 corresponding to an integrated luminosity of $1.6\,\text{fb}^{-1}$.
The results, obtained in an independent sample of low-\et\ photons, support (in a similar way to Ref.~\cite{ATLAS-CONF-2012-143}) the choice of the three shower-shape variables used in the photon PID selection.
The photon cluster energy resolution is extracted from data using \ggee\ events.
The electrons from the \ggee\ reaction are well balanced in their transverse momenta, with very small standard deviation, $\sigma_{\pt^{e+}-\pt^{e-}}<30 $~\MeV{}, much smaller than the expected EM calorimeter energy resolution.
Therefore, by measuring ($\etclone-\etcltwo$) distributions in \ggee\ events, one can extract the cluster energy resolution, $\sigma_{\et^\textrm{cl}}$.
For electrons with $\et <10$~\GeV{} the $\sigma_{\et^\textrm{cl}}/\et^\textrm{cl}$ is observed to be approximately $8\%$ both in data and simulation.
An uncertainty of $\delta \sigma_{\et^{\gamma}}/\sigma_{\et^{\gamma}} = 15\%$ is assigned to the simulated photon energy resolution and takes into account differences between $\sigma_{\et^\textrm{cl}}$ in data and $\sigma_{\et^{\gamma}}$ in simulation.
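The logic of this extraction can be illustrated with a toy simulation: because the electrons are \pt-balanced, the width of the $\etclone-\etcltwo$ distribution is $\sqrt{2}$ times the per-cluster resolution. The true \et\ and the 8\% resolution below are illustrative inputs, not analysis values.

```python
import math
import random

random.seed(1)

# Toy illustration: the pair-produced electrons are pT-balanced, so the
# spread of ET(cl1) - ET(cl2) comes from the calorimeter resolution of
# each cluster.
true_et = 6.0     # GeV; illustrative common true transverse energy
rel_res = 0.08    # assumed relative cluster energy resolution (as quoted)

diffs = []
for _ in range(200_000):
    et1 = random.gauss(true_et, rel_res * true_et)
    et2 = random.gauss(true_et, rel_res * true_et)
    diffs.append(et1 - et2)

mean = sum(diffs) / len(diffs)
sigma_diff = math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

# Width of the difference is sqrt(2) times the per-cluster resolution.
sigma_cluster = sigma_diff / math.sqrt(2.0)
```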
Similarly, the EM cluster energy scale can be studied using the ($\etclone+\etcltwo$) distribution.
It is observed that the simulation provides a good description of this distribution, within the relative uncertainty of 5\% that is assigned to the EM cluster energy-scale modelling.
\section*{\label{sec:Background}Background estimation}
Due to its relatively high rate, the exclusive production of electron pairs ($\ggee$) can be a source of fake diphoton events.
The contribution from the dielectron background is estimated using \ggee\ MC simulation (which gives 1.3 events) and is verified using the following data-driven technique.
Two control regions are defined that are expected to be dominated by \ggee\ backgrounds.
The first control region is defined by requiring events with exactly one reconstructed charged-particle track
and two identified photons that satisfy the same preselection criteria as for the signal definition.
The second control region is defined similarly to the first one, except exactly two tracks are required ($\Ntrk=2$).
Good agreement is observed between data and MC simulation in both control regions, but the precision is limited by the number of events in data. A conservative uncertainty of 25\% is therefore assigned to the \ggee background estimation, which reflects the statistical uncertainty of data in the $\Ntrk=1$ control region.
The contribution from a related QED process, $\gamma\gamma\rightarrow e^+e^-\gamma\gamma$, is evaluated using \textsc{MadGraph5$\_$aMC$@$NLO} MC generator~\cite{Alwall:2014hca} and is found to be negligible.
The Aco $<0.01$ requirement significantly reduces the CEP $gg\rightarrow\gamma\gamma$ background.
However, the MC prediction for this process has a large theoretical uncertainty, hence an additional data-driven normalisation is performed in the region Aco $>b$, where $b$ is a value greater than 0.01 which can be varied.
Three values of $b$ (0.01,~0.02,~0.03) are used, where the central value $b=0.02$ is chosen to derive the nominal background prediction and the values $b=0.01$ and $b=0.03$ to define the systematic uncertainty.
The normalisation is performed using the condition:
$f_{gg\rightarrow\gamma\gamma}^{\textrm{norm}, b} = \bigl( N_\textrm{data}(\textrm{Aco} > b) - N_\textrm{sig}(\textrm{Aco} > b) - N_{\gamma\gamma\rightarrow e^+e^-}(\textrm{Aco} > b)\bigr) / N_{gg\rightarrow\gamma\gamma}(\textrm{Aco} > b)$,
for each value of $b$, where $N_\textrm{data}$ is the number of observed events, $N_\textrm{sig}$ is the expected number of signal events,
$N_{\gamma\gamma\rightarrow e^+e^-}$ is the expected background from $\gamma\gamma\rightarrow e^+e^-$ events and $N_{gg\rightarrow\gamma\gamma}$ is the MC estimate of the expected background from CEP $gg\rightarrow\gamma\gamma$ events.
The normalisation factor is found to be $f_{gg\rightarrow\gamma\gamma}^{\textrm{norm}}= 0.5\pm 0.3$
and the background due to CEP $gg\rightarrow\gamma\gamma$ is estimated to be $f_{gg\rightarrow\gamma\gamma}^{\textrm{norm}} \times N_{gg\rightarrow\gamma\gamma}(\textrm{Aco} <0.01) = 0.9 \pm 0.5$ events.
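The normalisation condition can be sketched as follows; the sideband counts used here are purely hypothetical (the actual event counts are not quoted in the text) and are chosen only so that the result matches the published $f_{gg\rightarrow\gamma\gamma}^{\textrm{norm}}=0.5$.

```python
def cep_normalisation(n_data, n_sig, n_ee, n_cep_mc):
    """f_norm = (N_data - N_sig - N_ee) / N_CEP in the Aco > b sideband."""
    return (n_data - n_sig - n_ee) / n_cep_mc

# Purely hypothetical sideband counts, for illustration only.
f_norm = cep_normalisation(n_data=7.0, n_sig=2.0, n_ee=1.5, n_cep_mc=7.0)

# The CEP background in the signal region is f_norm * N_CEP(Aco < 0.01).
```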
To verify the CEP $gg\rightarrow\gamma\gamma$ background estimation method, energy deposits in the ZDC are studied for events before the Aco selection.
It is expected that the outgoing ions in CEP events predominantly dissociate, which results in the emission of neutrons detectable in the ZDC~\cite{Enterria:2013yra}.
Good agreement between the normalised CEP $gg\rightarrow\gamma\gamma$ MC expectation and the observed events with a ZDC signal corresponding to at least 1 neutron is observed in the full Aco range.
Low-\pt dijet events can produce multiple $\pi^0$ mesons which could potentially mimic diphoton events.
The event selection requirements are efficient in rejecting such events, and based on studies performed with a supporting trigger,
the background from hadronic processes is estimated to be $0.3\pm0.3$ events.
MC studies show the background from \ggqq\ processes is negligible.
Exclusive neutral two-meson production can be a potential source of background for LbyL events, mainly due to their back-to-back topology being similar to that of the CEP $gg\rightarrow\gamma\gamma$ process.
The cross section for this process is calculated to be below 10\% of the CEP $gg\rightarrow\gamma\gamma$ cross section~\cite{HarlandLang:2011qd,Harland-Lang:2013ncy} and it is therefore considered to give a negligible contribution to the signal region.
The contribution from other fake diphoton events (for example those induced by cosmic-ray muons) is estimated using
photons that fail to satisfy the longitudinal shower-shape requirement.
The total background due to other fake photons is found to be $0.1 \pm 0.1$~events.
As a further cross-check, additional activity in the MS is studied. It is observed that out of 18 events satisfying the inverted $\pt^{\gamma\gamma}$ requirement, 13 have at least one additional reconstructed muon.
In the region $\pt^{\gamma\gamma} < 2 \GeV$, no events with muon activity are found, which is compatible with the above-mentioned estimate of $0.1 \pm 0.1$.
\section*{\label{sec:Results}Results}
\begin{table}[t!]
\small
\begin{center}
\begin{tabular}{l|S[table-format=2.1]cS[table-format=1.1]S[table-format=2.1]|S[table-format=3.1]|c|S[table-format=3.0]}
\hline
Selection & \text{\ggee} & CEP $gg\rightarrow\gamma\gamma$ & \text{Hadronic} & \text{Other} & \text{Total} & Signal & \text{Data} \\
[-0.2em]
& & & \text{fakes} & \text{fakes} & \text{background} & & \\
\hline \hline
Preselection & 74 & 4.7 & 6 & 19 & 104 & 9.1 & 105 \\
$N_\textrm{trk} = 0$ & 4.0 & 4.5 & 6 & 19 & 33 & 8.7 & 39 \\
$\ptgg < 2$ \GeV{} & 3.5 & 4.4 & 3 & 1.3 & 12.2 & 8.5 & 21 \\
Aco $<0.01$ & 1.3 & 0.9 & 0.3 & 0.1 & 2.6 & 7.3 & 13 \\
\hline
Uncertainty & 0.3 & 0.5 & 0.3 & 0.1 & 0.7 & 1.5 & \\
\end{tabular}
\caption{The number of events accepted by the sequential selection requirements for data, compared to the number of background and signal events expected from the simulation. The signal simulation is based on calculations from Ref.~\cite{Klusek-Gawenda:2016euz}. In addition, the uncertainties on the expected number of events passing all selection requirements are given. }
\label{tab:Cutflow_opt}
\end{center}
\end{table}
Photon kinematic distributions for events satisfying the selection criteria are shown in Figure~\ref{fig:LbL-control_opt}.
The shape of the diphoton acoplanarity distribution for \ggee events in Figure~\ref{fig:LbL-control_opt}(a) reflects the trajectories of the electron and positron in the detector magnetic field, before they emit hard-bremsstrahlung photons in interactions with the ID material.
In total, 13 events are observed in data whereas 7.3 signal events and 2.6 background events are expected.
In general, good agreement between data and MC simulation is observed. The effect of sequential selection requirements on the number of events selected is shown in Table~\ref{tab:Cutflow_opt}, for each of the data, signal and background samples.
To quantify an excess of events over the background expectation, a test statistic based on the profile likelihood ratio~\cite{Cowan:2010js} is used.
The $p$-value for the background-only hypothesis, defined as the probability for the background to fluctuate and give an excess of events as large or larger than that observed in the data, is found to be $5\times 10^{-6}$.
The $p$-value can be expressed in terms of Gaussian tail probabilities, which, given in units of standard deviation ($\sigma$), corresponds to a significance of $4.4\sigma$.
The expected $p$-value and significance (obtained before the fit of the signal-plus-background hypothesis to the data and using SM predictions from Ref.~\cite{Klusek-Gawenda:2016euz}) are $8\times 10^{-5}$ and $3.8\sigma$, respectively.
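Both conversions between a one-sided $p$-value and a Gaussian significance can be reproduced with the standard normal quantile function:

```python
from statistics import NormalDist

# One-sided Gaussian tail: find Z such that P(X > Z) equals the p-value.
significance = NormalDist().inv_cdf(1.0 - 5e-6)   # observed, about 4.4 sigma
sig_expected = NormalDist().inv_cdf(1.0 - 8e-5)   # expected, about 3.8 sigma
```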
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.43\textwidth]{gg_aco_dataMC_noaco.pdf}\put(-95,-8){{\footnotesize(a)}}
\includegraphics[width=0.43\textwidth]{gg_mass_dataMC_opt.pdf}\put(-95,-8){{\footnotesize(b)}}
\caption{Kinematic distributions for \gggg\ event candidates. (a) Diphoton acoplanarity before applying $\textrm{Aco}<0.01$ requirement. (b) Diphoton invariant mass after applying $\textrm{Aco}<0.01$ requirement.
Data (points) are compared to MC predictions (histograms). The statistical uncertainties on the data are shown as vertical bars.}
\label{fig:LbL-control_opt}
\end{center}
\end{figure}
The cross section for the $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,\gamma\gamma$ process is measured in a fiducial phase space defined by the photon
transverse energy $\et>3\gev$, photon absolute pseudorapidity $|\eta|<2.4$, diphoton invariant mass greater than 6~\GeV, diphoton
transverse momentum lower than 2~\GeV{} and diphoton acoplanarity below 0.01.
Experimentally, the fiducial cross section is given by
\begin{equation}
\label{EQN:CrossSectionFid}
\sigmafid = \frac{N_{\textrm{data}}-N_{\textrm{bkg}}}{C \times \int \! L \text{d} t}\,,
\end{equation}
where $N_{\textrm{data}}$ is the number of selected events in data, $N_{\textrm{bkg}}$ is the expected number of background events and $\int \! L \text{d} t$ is the integrated luminosity.
The factor $C$ is used to correct for the net effect of the trigger efficiency, the diphoton reconstruction and PID efficiencies, as well as the impact of photon energy and angular resolution.
It is defined as the ratio of the number of generated signal events satisfying the selection criteria after particle reconstruction and detector simulation to the number of generated events satisfying the fiducial criteria before reconstruction.
The value of $C$ and its total uncertainty is determined to be $0.31 \pm 0.07$.
The dominant systematic uncertainties come from the uncertainties on the photon reconstruction and identification efficiencies. Other minor sources of uncertainty are the photon energy scale and resolution uncertainties and trigger efficiency uncertainty.
In order to check for a potential model dependence, calculations from Ref.~\cite{Klusek-Gawenda:2016euz} are compared with predictions from Ref.~\cite{Enterria:2013yra}, and a negligible impact on the $C$-factor uncertainty is found.
Table~\ref{tab:AandCFactor} lists the separate contributions to the systematic uncertainty.
\begin{table}[t!]
\small
\begin{center}
\begin{tabular}{l|c}
\hline
Source of uncertainty & Relative uncertainty \\
\hline \hline
Trigger & \ 5\% \\
Photon reco efficiency & 12\% \\
Photon PID efficiency & 16\% \\
Photon energy scale & \ 7\% \\
Photon energy resolution & 11\% \\
\hline
Total & 24\% \\
\end{tabular}
\caption{Summary of systematic uncertainties. The table shows the relative systematic uncertainty on the detector correction factor $C$, broken down into its individual contributions. The total is obtained by adding them in quadrature.}
\label{tab:AandCFactor}
\end{center}
\end{table}
The uncertainty on the integrated luminosity is $6\%$.
It is derived following a methodology similar to that detailed in Refs.~\cite{Aad:2013ucp,Aaboud:2016hhf}, from a calibration of the luminosity scale using \textit{x}--\textit{y} beam-separation scans performed in December 2015.
The measured fiducial cross section is $\sigmafid = 70 \pm 24~\textrm{(stat.)} \pm 17~\textrm{(syst.)}$ nb, which is in agreement with the predicted values of $45 \pm 9$ nb~\cite{Enterria:2013yra} and $49 \pm 10$ nb~\cite{Klusek-Gawenda:2016euz} within uncertainties.
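As a numerical cross-check, inserting the quoted inputs ($N_\textrm{data}=13$, $N_\textrm{bkg}=2.6$, $C=0.31$, and the integrated luminosity of 480~$\mu\textrm{b}^{-1}=0.48$~nb$^{-1}$) into the fiducial cross-section formula reproduces the measured 70 nb:

```python
n_data = 13        # selected events in data
n_bkg = 2.6        # expected background events
c_factor = 0.31    # detector correction factor C
lumi = 0.48        # integrated luminosity in nb^-1 (480 ub^-1)

sigma_fid = (n_data - n_bkg) / (c_factor * lumi)   # fiducial cross section, nb
```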
\section*{\label{sec:Conclusion}Conclusion}
In summary, this article presents evidence for the scattering of light by light in quasi-real photon interactions from 480 $\mu{\mathrm{b}}^{-1}$ of ultra-peripheral \PbPb\ collisions at \sqrtNN$=5.02\TeV$ by the ATLAS experiment at the LHC.
The statistical significance against the background-only hypothesis is found to be $4.4$ standard deviations.
After background subtraction and analysis corrections, the fiducial cross section for the $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,\gamma\gamma$ process
was measured and is compatible with SM predictions.
\section*{Acknowledgements}
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF, I-CORE and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, the Canada Council, CANARIE, CRC, Compute Canada, FQRNT, and the Ontario Innovation Trust, Canada; EPLANET, ERC, ERDF, FP7, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, R{\'e}gion Auvergne and Fondation Partager le Savoir, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; BSF, GIF and Minerva, Israel; BRF, Norway; CERCA Programme Generalitat de Catalunya, Generalitat Valenciana, Spain; the Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-GEN-PUB-2016-002}.
\printbibliography
\newpage
\input{atlas_authlist}
\end{document}
\section*{\bibname}}
\usepackage{amssymb, amsmath, amsthm, latexsym}
\usepackage{url}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{tabularx}
\usepackage{paralist}
\usepackage{mathtools}
\usepackage{bbm}
\usepackage{makecell}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{nicefrac}
\usepackage[flushleft]{threeparttable}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{colortbl}
\definecolor{bgcolor}{rgb}{0.8,1,1}
\definecolor{bgcolor2}{rgb}{0.8,1,0.8}
\definecolor{niceblue}{rgb}{0.0,0.19,0.56}
\hypersetup{colorlinks,linkcolor={blue},citecolor={blue},urlcolor={blue}}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\usepackage[dvipsnames]{xcolor}
\usepackage{pifont}
\newcommand{\cmark}{{\color{PineGreen}\ding{51}}}%
\newcommand{\xmark}{{\color{BrickRed}\ding{55}}}%
\newcommand{\ve}[2]{\left\langle #1 , #2 \right\rangle}
\def\<#1,#2>{\left\langle #1,#2\right\rangle}
\newcommand{\oldstuff}[1]{ {\small \color{blue} #1}}
\newcommand{\redstuff}[1]{ {\small \color{red} #1}}
\newtheorem{lemma}{Lemma}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{example}{Example}[section]
\newtheorem{definition}{Definition}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{assumption}{Assumption}
\newtheorem{corollary}{Corollary}
\newtheorem{remark}{Remark}[section]
\theoremstyle{plain}
\usepackage{xspace}
\newcommand{\squeeze}{}
\newcommand{\algname}[1]{{\sf #1}\xspace}
\newcommand{\algnamex}[1]{{\sf #1}\xspace}
\newcommand\tagthis{\addtocounter{equation}{1}\tag{\theequation}}
\usepackage[colorinlistoftodos,bordercolor=orange,backgroundcolor=orange!20,linecolor=orange,textsize=scriptsize]{todonotes}
\newcommand{\adrien}[1]{\todo[inline]{{\textbf{Adrien:} \emph{#1}}}}
\newcommand{\ggi}[1]{\todo[inline]{{\textbf{Gauthier:} \emph{#1}}}}
\newcommand{\eduard}[1]{\todo[inline]{{\textbf{Eduard:} \emph{#1}}}}
\renewcommand{\Re}{\mathrm{Re}}
\renewcommand{\Im}{\mathrm{Im}}
\makeatletter
\newtheorem*{rep@theorem}{\rep@title}
\newcommand{\newreptheorem}[2]{%
\newenvironment{rep#1}[1]{%
\def\rep@title{#2 \ref{##1}}%
\begin{rep@theorem}}%
{\end{rep@theorem}}}
\makeatother
\newreptheorem{theorem}{Theorem}
\newreptheorem{lemma}{Lemma}
\graphicspath{{plots/}}
\usepackage{accents}
\newlength{\dhatheight}
\newcommand{\doublehat}[1]{%
\settoheight{\dhatheight}{\ensuremath{\hat{#1}}}%
\addtolength{\dhatheight}{-0.35ex}%
\hat{\vphantom{\rule{1pt}{\dhatheight}}%
\smash{\hat{#1}}}}
\usepackage{pgfplotstable}
\usetikzlibrary{automata, positioning, arrows, shapes, fit, calc, intersections}
\usepgfplotslibrary{statistics}
\pgfplotsset{
compat=1.15
}
\include{figure}
\title{\bf Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity:\\ the Case of Negative Comonotonicity}
\author{\begin{tabular}{c}
Eduard Gorbunov$^{1}$ \\
{\small [email protected]}
\end{tabular}\quad \begin{tabular}{c}
Adrien Taylor$^{2}$\\
{\small [email protected] }
\end{tabular} \quad \begin{tabular}{c}
Samuel Horv\'ath$^{1}$\\
{\small [email protected]}
\end{tabular} \\ \\
\begin{tabular}{c}
Gauthier Gidel$^{3,4}$ \\
{\small [email protected]}
\end{tabular}}
\date{$^1$ Mohamed bin Zayed University of Artificial Intelligence, UAE\\
$^2$ INRIA \& D.I.\ \'Ecole Normale Sup\'erieure, CNRS \& PSL Research University, France\\
$^3$ Universit\'e de Montr\'eal and Mila, Canada\\
$^4$ Canada CIFAR AI Chair}
\renewcommand{\leq}{\leqslant}
\renewcommand{\geq}{\geqslant}
\begin{document}
\maketitle
\begin{abstract}
Algorithms for min-max optimization and variational inequalities are often studied under monotonicity assumptions. Motivated by non-monotone machine learning applications, we follow the line of works \citep{diakonikolas2021efficient,lee2021fast,pethick2022escaping} aiming at going beyond monotonicity by considering the weaker \emph{negative comonotonicity} assumption. In particular, we provide tight complexity analyses for the Proximal Point, Extragradient, and Optimistic Gradient methods in this setup, closing some questions on their working guarantees beyond monotonicity.
\end{abstract}
\section{Introduction}
\looseness=-1
The study of efficient first-order methods for solving variational inequality problems~(VIP) has seen a surge of interest due to the development of recent machine learning~(ML) formulations involving multiple objectives. VIPs appear in various ML tasks such as robust learning~\citep{ben2009robust}, adversarial training~\citep{madry2018towards}, Generative Adversarial Networks~\citep{goodfellow2014generative}, or games with decision-dependent data~\citep{narang2022multiplayer}.
In this work we focus on unconstrained VIPs, which we state formally in the slightly more general form of an \emph{inclusion problem}:
\begin{equation}
\text{find } x^* \in \mathbb{R}^d \text{ such that } 0\in F(x^*), \tag{IP} \label{eq:MI}
\end{equation}
where $F:\mathbb{R}^d \rightrightarrows \mathbb{R}^d$ is some (possibly set-valued) mapping. In the sequel, we use the slightly abusive shorthand notation $F(x)$ to denote any particular image of $x$ by the mapping $F$, independently of $F$ being single-valued or not.
Among the main simple first-order methods under consideration for such problems, the extragradient method~(\ref{eq:EG})~\citep{korpelevich1976extragradient} and the optimistic gradient method~(\ref{eq:OG})~\citep{popov1980modification} occupy an important place. These two algorithms have traditionally been analyzed under the assumption that the considered operator is monotone and Lipschitz~\citep{korpelevich1976extragradient,popov1980modification} and are often interpreted as approximations to the proximal point method (\ref{eq:PP})~\citep{nemirovski2004prox,mokhtari2019proximal}. \ref{eq:PP} can be formally stated as an implicit iterative method generating a sequence $x^1,x^2,\ldots \in\mathbb{R}^d$ when initiated at some $x^0\in\mathbb{R}^d$:
\[ x^{k+1} = x^k - \gamma F(x^{k+1}), \tag{\algname{PP}} \]
for some well-chosen stepsize $\gamma\in\mathbb{R}$. When $F$ is single-valued, one can instead use explicit methods such as~\ref{eq:EG}:
\begin{equation}\tag{\algname{EG}}
\begin{aligned}
\widetilde{x}^{k} &= x^k - \gamma_1 F(x^k),\\
x^{k+1} &= x^k - \gamma_2 F(\widetilde{x}^k),
\end{aligned}\quad \forall k \geq 0,
\end{equation}
or~\ref{eq:OG} with the additional initialization $\widetilde{x}^{0}=x^0$:
\begin{equation}\tag{\algname{OG}}
\begin{aligned}
\widetilde{x}^{k} &= x^k - \gamma_1 F(\widetilde{x}^{k-1}),\quad \forall k > 0,\\
x^{k+1} &= x^k - \gamma_2 F(\widetilde{x}^k),\quad \forall k \geq 0,
\end{aligned}
\end{equation}
where $\gamma_1,\gamma_2\in\mathbb{R}$ are some well-chosen stepsizes.
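For concreteness, the updates of \ref{eq:EG} and \ref{eq:OG} can be sketched in code for a single-valued $F$. The following is a minimal NumPy sketch; the bilinear test operator, stepsizes, and iteration counts in the demo are arbitrary illustrative choices, not recommendations:

```python
import numpy as np

def extragradient(F, x0, gamma1, gamma2, num_iters):
    """EG: extrapolate with stepsize gamma1, then update with gamma2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x_tilde = x - gamma1 * F(x)     # extrapolation step
        x = x - gamma2 * F(x_tilde)     # update step
    return x

def optimistic_gradient(F, x0, gamma1, gamma2, num_iters):
    """OG: reuse the operator value at the previous extrapolated point,
    so only one evaluation of F is needed per iteration."""
    x = np.asarray(x0, dtype=float)
    F_tilde = F(x)                      # x~^0 = x^0
    for _ in range(num_iters):
        x = x - gamma2 * F_tilde        # x^{k+1} = x^k - gamma2 F(x~^k)
        x_tilde = x - gamma1 * F_tilde  # x~^{k+1} = x^{k+1} - gamma1 F(x~^k)
        F_tilde = F(x_tilde)
    return x

# demo on the monotone bilinear operator F(x) = A x (a 90-degree rotation);
# both methods drive ||x^k|| to zero here, while plain gradient steps diverge
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
print(np.linalg.norm(extragradient(F, [1.0, 1.0], 0.3, 0.3, 200)))
```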
Interestingly, until recently, the last-iterate convergence rate of neither \ref{eq:EG} nor \ref{eq:OG} was known. The first results in this direction were obtained by~\citet{golowich2020last,golowich2020tight} under some additional assumptions (namely, the Jacobian of $F$ being Lipschitz). Later, \citet{gorbunov2021extragradient,gorbunov2022last,cai2022tight} closed this question by proving the tight worst-case last-iterate convergence rate of these methods under monotonicity and Lipschitzness of~$F$.
As some important motivating applications involve deep neural networks, the operator $F$ under consideration is typically not monotone. Recently~\citet{diakonikolas2021efficient} proposed to analyse \ref{eq:EG} using a weaker assumption than the traditional monotonicity, which they coined as ``structured non-monotonicity''. In the sequel, this assumption is referred to as \emph{$\rho$-negative comonotonicity} (with $\rho \geq 0$). That is, for all $x,y \in \mathbb{R}^d$, the operator $F$ satisfies:
\begin{equation}
\langle F(x) - F(y), x- y \rangle \geq -\rho \|F(x) - F(y)\|^2. \label{eq:rho_neg_comon}
\end{equation}
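For intuition, when $F(x) = Ax$ is linear, condition~\eqref{eq:rho_neg_comon} reduces to $\langle A z, z\rangle \geq -\rho\|A z\|^2$ for all $z = x - y$. The smallest admissible $\rho$ can be estimated by sampling; the following is a minimal NumPy sketch (the scaled-rotation test matrix, for which the exact answer is $\max\{0, -\nicefrac{\cos\theta}{c}\}$, is an arbitrary illustrative choice):

```python
import numpy as np

def smallest_rho(A, num_samples=100_000, seed=0):
    """Estimate the smallest rho >= 0 such that F(x) = A x satisfies
    <A z, z> >= -rho ||A z||^2 for all z (here z plays the role of x - y)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((num_samples, A.shape[0]))
    Az = z @ A.T
    ratios = -np.einsum("ij,ij->i", Az, z) / np.einsum("ij,ij->i", Az, Az)
    return max(0.0, float(ratios.max()))

# scaled rotation A = c * R(theta): <A z, z> = c cos(theta) ||z||^2 and
# ||A z|| = c ||z||, so the exact smallest rho is max(0, -cos(theta) / c)
theta, c = 2 * np.pi / 3, 1.0
A = c * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(smallest_rho(A))  # 0.5 up to floating-point error
```

Monotone operators (e.g., $A = \mathrm{Id}$) yield $\rho = 0$, consistent with negative comonotonicity being a relaxation of monotonicity.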
A number of works have followed the idea of~\citet{diakonikolas2021efficient} and considered~\eqref{eq:rho_neg_comon} as their working assumption, see, e.g.,~\citep{yoon2021accelerated,lee2021fast,luo2022last,cai2022accelerated,gorbunov2022clipped}. Albeit a reasonable first step toward understanding the behavior of algorithms for~\eqref{eq:MI} beyond $F$ being monotone, it remains unclear to what extent the $\rho$-negative comonotonicity assumption is general enough to capture complex non-monotone operators. This question is crucial for developing a clean optimization theory that can fully encompass ML applications involving neural networks.
To the best of our knowledge, \emph{$\rho$-negative comonotonicity} is the weakest known assumption under which extragradient-type methods can be analyzed for solving~\eqref{eq:MI}. The first part of this work is devoted to providing simple interpretations of this assumption. Then, we close the problem of studying the convergence rate of the \ref{eq:PP} method in this setting, the base ingredient underlying most algorithms for solving~\eqref{eq:MI} (which are traditionally interpreted as approximations to \ref{eq:PP}, see~\citep{nemirovski2004prox}). That is, we provide upper and lower convergence bounds as well as a tight condition on its stepsize for~\ref{eq:PP} under negative comonotonicity. We eventually consider the last-iterate convergence of \ref{eq:EG} and \ref{eq:OG} and provide an almost complete picture in that case, listing the remaining open questions.
Before moving to the next sections, let us mention that many of our results were discovered using the performance estimation approach, first coined by~\cite{drori2012performance} and formalized by~\cite{taylor2017smooth,2017taylor}. The operator version of the framework is due to~\citep{ryu2020operator}. We used the framework through the packages PESTO~\citep{taylor2017performance} and PEPit~\citep{goujaud2022pepit}, thereby providing a simple way to validate our results numerically.
\subsection{Preliminaries}
In the context of~\eqref{eq:MI}, we refer to $F$ as being $\rho$-star-negative comonotone ($\rho \geq 0$) if for all $x \in \mathbb{R}^d$ and any solution $x^*$ of~\eqref{eq:MI}, we have:
\begin{equation}
\langle F(x), x- x^* \rangle \geq -\rho \|F(x)\|^2. \label{eq:rho_star_neg_comon}
\end{equation}
Furthermore, similarly to monotone operators (see~\cite{bauschke2011convex} or~\cite{ryu2020large} for details), we assume that the mapping $F$ is \emph{maximal} in the sense that its graph is not strictly contained in the graph of any other $\rho$-negative comonotone (resp., $\rho$-star-negative comonotone) operator, which ensures that the proximal operator used in the sequel is well-defined.
Finally, for the analysis of the \ref{eq:EG} and \ref{eq:OG}, we further assume $F$ to be $L$-Lipschitz, meaning that for all $x,y \in \mathbb{R}^d$:
\begin{equation}
\|F(x) - F(y)\| \leq L\|x - y\|. \label{eq:Lipschitzness}
\end{equation}
Note that in this case $F$ is a single-valued mapping, and \ref{eq:MI} reduces to a variational inequality:
\begin{equation}
\text{find } x^* \in \mathbb{R}^d \text{ such that } F(x^*) = 0. \tag{VIP} \label{eq:VIP}
\end{equation}
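As a sanity check of the notions above, for a linear operator $F(x) = Ax$ the smallest Lipschitz constant in~\eqref{eq:Lipschitzness} is the spectral norm $\|A\|_2$, which can be cross-checked numerically (the test matrix is an arbitrary illustrative choice):

```python
import numpy as np

def lipschitz_estimate(A, num_samples=50_000, seed=0):
    """Estimate the Lipschitz constant of F(x) = A x by sampling
    differences z = x - y; the exact value is the spectral norm ||A||_2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((num_samples, A.shape[1]))
    ratios = np.linalg.norm(z @ A.T, axis=1) / np.linalg.norm(z, axis=1)
    return float(ratios.max())

A = np.array([[0.0, 2.0], [-2.0, 0.0]])  # scaled rotation with ||A||_2 = 2
print(lipschitz_estimate(A), np.linalg.norm(A, 2))  # both equal 2
```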
\subsection{Related Work}
\paragraph{Last-iterate convergence rates in the monotone case.} Several recent theoretical advances focus on the last-iterate convergence of the methods for solving \ref{eq:MI}/\ref{eq:VIP} with \emph{monotone} operator $F$. In particular, \citet{he2015convergence} derive the last-iterate ${\cal O}(\nicefrac{1}{N})$ rate\footnote{Here and below we mean the rates of convergence in terms of the squared residual $\|x^N - x^{N-1}\|^2$ in the case of set-valued operators and $\|F(x^N)\|^2$ in the case of single-valued ones.} for \ref{eq:PP} and \citet{gu2020tight} show its tightness. Under the additional assumption of Lipschitzness of $F$ and of its Jacobian, \citet{golowich2020last,golowich2020tight} obtain last-iterate ${\cal O}(\nicefrac{1}{N})$ convergence for \ref{eq:EG} and \ref{eq:OG} and prove matching lower bounds for them. Next, \citet{gorbunov2021extragradient,gorbunov2022last,cai2022tight} prove similar upper bounds for \ref{eq:EG}/\ref{eq:OG} without relying on the Lipschitzness (and even existence) of the Jacobian of $F$. Finally, for this class of problems one can design (accelerated) methods with provable ${\cal O}(\nicefrac{1}{N^2})$ last-iterate convergence rate \citep{yoon2021accelerated, bot2022fast, tran2021halpern, tran2022connection}. Although ${\cal O}(\nicefrac{1}{N^2})$ is much better than ${\cal O}(\nicefrac{1}{N})$, \ref{eq:EG}/\ref{eq:OG} are still more popular due to their higher flexibility. Moreover, when applied to non-monotone problems the mentioned accelerated methods may be attracted to ``bad'' stationary points, see, e.g.,~\citep[Example 1.1]{gorbunov2022last}.
\paragraph{Best-iterate convergence under $\rho$-star-negative comonotonicity.} The convergence of \ref{eq:EG} is also studied under $\rho$-star-negative comonotonicity (and $L$-Lipschitzness): \citet{diakonikolas2021efficient} prove best-iterate ${\cal O}(\nicefrac{1}{N})$ convergence of \ref{eq:EG} with $\gamma_2 < \gamma_1$ for any $\rho < \nicefrac{1}{8L}$ and \citet{pethick2022escaping} derive a similar result for any $\rho < \nicefrac{1}{2L}$. Moreover, \citet{pethick2022escaping} show that \ref{eq:EG} does not necessarily converge when $\gamma_1 = \nicefrac{1}{L}$ and $\rho \geq \nicefrac{(1-L\gamma_2)}{2L}$.
\paragraph{Last-iterate convergence under $\rho$-negative comonotonicity.} In a very recent work, \citet{luo2022last} prove the first last-iterate ${\cal O}(\nicefrac{1}{N})$ convergence results for \ref{eq:EG} and \ref{eq:OG} applied to solve \ref{eq:VIP} with $\rho$-negative comonotone $L$-Lipschitz operator. Both results rely on the usage of $\gamma_1 = \gamma_2$. Next, for \ref{eq:EG} the result from \citep{luo2022last} requires $\rho < \nicefrac{1}{16L}$ and for \ref{eq:OG} the corresponding result is proven for $\rho < \nicefrac{4}{(27\sqrt{6}L)}$. In contrast, for the accelerated (anchored) version of \ref{eq:EG} \citet{lee2021fast} prove ${\cal O}(\nicefrac{1}{N^2})$ last-iterate convergence rate for any $\rho < \nicefrac{1}{2L}$, which is a larger range of $\rho$ than in the known results for \ref{eq:EG}/\ref{eq:OG} from \citep{luo2022last}.
\subsection{Contributions}\label{s:contrib}
We summarize the contributions as follows. First, Section~\ref{sec:neg_comon} introduces a spectral viewpoint on negative comonotonicity. Then, Section~\ref{sec:prox_point} copes with the last-iterate convergence of the proximal point method with large enough stepsizes (namely, larger than $2\rho$) under $\rho$-negative comonotonicity, as well as its best-iterate convergence under $\rho$-star-negative comonotonicity. Finally, Section~\ref{sec:eg_peg} deals with improved best-iterate and last-iterate convergence bounds for the Extragradient and the Optimistic Gradient methods. Namely, we obtain best-iterate convergence for any $\rho < \nicefrac{1}{2L}$ under star-negative comonotonicity (this is not novel for \ref{eq:EG} due to \citet{pethick2022escaping}, but it is for \ref{eq:OG}), together with counter-examples for $\rho \geq \nicefrac{1}{2L}$ showing that the methods cannot be guaranteed to converge in this setting. Similarly, we obtain last-iterate convergence under negative comonotonicity for $\rho \leq \nicefrac{1}{8L}$ for \ref{eq:EG} and for $\rho \leq \nicefrac{5}{62L}$ for \ref{eq:OG}, which improves upon the other recent works; however, we are able to show neither counter-examples nor convergence guarantees for $\nicefrac{1}{8L} < \rho < \nicefrac{1}{2L}$ in the case of \ref{eq:EG} and for $\nicefrac{5}{62L} < \rho < \nicefrac{1}{2L}$ in the case of \ref{eq:OG} (hence the gap is not entirely filled in this case). The code for generating worst-case examples for \ref{eq:PP}, for the numerical verification of the derived results, and for the symbolic verification of certain technical derivations is provided in the following repository: \url{https://github.com/eduardgorbunov/Proximal_Point_and_Extragradient_based_methods_negative_comonotonicity}.
\section{Closer Look at Negative Comonotonicity}\label{sec:neg_comon}
Negative comonotonicity (also known as cohypomonotonicity) was originally introduced as a relaxation of monotonicity that is sufficient for the convergence of \ref{eq:PP} \citep{pennanen2002local}. This assumption is relatively weak: one can show that $F$ is $\rho$-negative comonotone in a neighborhood of a solution $x^*$ for large enough $\rho$ if the (possibly set-valued) operator $F^{-1}:\mathbb{R}^d \rightrightarrows \mathbb{R}^d$ has a Lipschitz localization around $(0,x^*) \in G_{F^{-1}}$, where $G_{F^{-1}}$ denotes the graph of $F^{-1}$ \citep[Proposition 7]{pennanen2002local}. The next lemma characterizes negative comonotone operators; it is technically very close to~\citep[Proposition 4.2]{bauschke2011convex} (on cocoercive operators).
\begin{lemma}\label{lem:expansiveness_of_neg_comon_operator}
$F:\mathbb{R}^d \rightrightarrows \mathbb{R}^d$ is $\rho$-negative comonotone ($\rho\geq 0$) if and only if the operator $\mathrm{Id} + 2\rho F$ is expansive.
\end{lemma}
The proof of this lemma follows directly from the definition of negative comonotonicity. Among other things, it implies the following result about the spectral properties of the Jacobian of a negative comonotone operator (when it exists).
\begin{theorem}\label{thm:spectral_viewpoint_on_neg_comon}
Let $F:\mathbb{R}^d \to \mathbb{R}^d$ be continuously differentiable. Then, the following statements are equivalent:
\begin{itemize}
\item $F$ is $\rho$-negative comonotone,
\item $\Re(\nicefrac{1}{\lambda}) \geq -\rho$ for all $\lambda \in \mathrm{Sp}(\nabla F(x))$, $\forall x \in \mathbb{R}^d$.
\end{itemize}
\end{theorem}
The condition $\Re(\nicefrac{1}{\lambda}) \geq -\rho$ means that $\lambda$ lies \emph{outside} the open disc in $\mathbb{C}$ centered at $-\nicefrac{1}{2\rho}$ and having radius $\nicefrac{1}{2\rho}$, see Figure~\ref{fig:spectral_neg_comon}. In particular, in the case of twice differentiable functions, $\rho$-negative comonotonicity forbids the Hessian to have eigenvalues in $(-\nicefrac{1}{\rho}, 0)$, i.e., the eigenvalues of the Hessian have to be either negative with sufficiently large absolute value or non-negative. An alternate interpretation of Figure~\ref{fig:spectral_neg_comon} can be formally made in terms of scaled relative graphs, see~\citep{ryu2022scaled}; see also older references using such illustrations~\citep{eckstein1989splitting,eckstein1992douglas}, or~\citep[arXiv versions $1$ to $3$]{giselsson2016linear}.
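Theorem~\ref{thm:spectral_viewpoint_on_neg_comon} is easy to check numerically for linear operators, whose Jacobian is constant. A minimal NumPy sketch follows (the scaled-rotation test case is an arbitrary illustrative choice; its eigenvalues are $c e^{\pm i\theta}$, so $\Re(\nicefrac{1}{\lambda}) = \nicefrac{\cos\theta}{c}$):

```python
import numpy as np

def spectral_rho(A):
    """For linear F(x) = A x (constant Jacobian A), return the smallest
    rho >= 0 compatible with the condition Re(1/lambda) >= -rho for all
    eigenvalues lambda of A."""
    lam = np.linalg.eigvals(A)
    return max(0.0, float(-np.min((1.0 / lam).real)))

theta, c = 2 * np.pi / 3, 1.0
A = c * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(spectral_rho(A))  # 0.5 = -cos(theta)/c, matching the direct definition
```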
\begin{figure}[t]
\centering
\newcounter{t}
\newcounter{t2}
\newcounter{L}
\newcounter{mu}
\setcounter{t}{90}
\setcounter{L}{18}
\setcounter{mu}{5}
\setcounter{t2}{60}
\begin{tikzpicture}[scale=1]
\begin{axis}[
grid=major,
axis equal image,
ytick = {-\value{L}/2,0,\value{L}/2},
yticklabels={-$\tfrac{i}{2\rho}$,$0i$,$\tfrac{i}{2\rho}$},
xtick = {-\value{L},-\value{L}/2,0},
xticklabels={-$\tfrac{1}{\rho}$,-$\tfrac{1}{2\rho}$,$0$},
xmin={-\value{L} - 3}, xmax=6,
ymin={-\value{L}/2 - 2}, ymax={\value{L}/2 + 2},
]
\draw [domain=-180:180, draw=white, fill=red!50!white, thick, fill opacity=0.5, samples=65] plot (axis cs: -{\value{L}*(1 + cos(\value{t}))/2 + cos(\x) * \value{L}*(1 - cos(\value{t}))/2}, {sin(\x) * \value{L}*(1 - cos(\value{t}))/2});
\node[anchor=north west] (mu) at (axis cs: {\value{L} * cos(\value{t})}, 0) {$0$};
\draw [fill=black] (axis cs: {\value{L} * cos(\value{t})}, 0) circle (1pt);
\draw [fill=black] (axis cs: {-\value{L}}, 0) circle (1pt);
\node [circle, anchor=north east] (L) at (axis cs: -{\value{L}}, 0) {};
\draw [fill=black] (axis cs: {-\value{L}/2}, 0) circle (1pt);
\node [circle, anchor=north] (L) at (axis cs: -{\value{L}/2}, 0) {};
\node [circle, anchor=south] at (axis cs: {-\value{L}/2}, -9 ) {\small No $\rho$-negative comonotonicity};
\end{axis}
\end{tikzpicture}
\caption{Visualization of Theorem~\ref{thm:spectral_viewpoint_on_neg_comon}. The red open disc corresponds to the constraint $\Re(\nicefrac{1}{\lambda}) < -\rho$; all eigenvalues of the Jacobian of a $\rho$-negative comonotone operator must lie outside this set.
}\label{fig:spectral_neg_comon}
\end{figure}
Finally, we touch upon the following informal question: \emph{to what extent are negative comonotone operators non-monotone?} To formalize it a bit, we consider a much simpler question: \emph{can a negative comonotone operator have isolated zeros/solutions of \ref{eq:VIP}?} Unfortunately, the answer is no.
\begin{theorem}\label{thm:no_isolated_minima}
If $F:\mathbb{R}^d \rightrightarrows \mathbb{R}^d$ is $\rho$-negative comonotone, then the solution set $X^* = F^{-1}(0)$ is convex.
\end{theorem}
\begin{proof}
The proof follows from the observations provided by \citet{pennanen2002local}. First, notice that $F$ and its Yosida regularization $(F^{-1} + \rho\cdot \mathrm{Id})^{-1}$ have the same solution set: $((F^{-1} + \rho\cdot \mathrm{Id})^{-1})^{-1}(0) = (F^{-1} + \rho\cdot \mathrm{Id})(0) = F^{-1}(0)$. Next, by definition \eqref{eq:rho_neg_comon}, maximal $\rho$-negative comonotonicity of $F$ implies maximal monotonicity of $F^{-1} + \rho\cdot \mathrm{Id}$, which is equivalent to maximal monotonicity of $(F^{-1} + \rho\cdot \mathrm{Id})^{-1}$. Since the set of zeros of a maximal monotone operator is convex \citep[Proposition 23.39]{bauschke2011convex}, the result follows.
\end{proof}
Therefore, despite its apparent generality, negative comonotonicity is not satisfied (globally) for the many practical tasks that have several isolated optima. Nevertheless, studying the convergence of traditional methods under negative comonotonicity can be seen as a natural step towards understanding their behavior in more complicated non-monotone cases.
\section{Proximal Point Method}\label{sec:prox_point}
\begin{figure}[ht!]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/PP_worst_case_norm.pdf}
\caption{Worst-case $\|F(x^{N+1})\|^2$}\label{fig:1a}
\end{subfigure}\vspace{.1cm}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/Worst-case_trajectores_PP.pdf}
\caption{Worst-case trajectories}\label{fig:1b}
\end{subfigure}\vspace{.1cm}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/PP_worst_case_characteristics.pdf}
\caption{Study of the norms and angles}\label{fig:1c}
\end{subfigure}
\caption{\small In~(a), we report the solution of \eqref{eq:PEP_for_PP} for different values of $\gamma$ and $N$. The plot illustrates that, for the considered range of $N$ and values of $\gamma$, \ref{eq:PP} converges as ${\cal O}(\nicefrac{1}{N})$ in terms of $\|F(x^{N})\|^2$. In~(b), we show the worst-case trajectories of \ref{eq:PP} for $N = 40$. The form of the trajectories hints that the worst-case operator is a rotation operator. For each particular choice of $N$ and $\gamma > 2\rho$, we observed numerically that the quantities $\nicefrac{\rho\|F(x^k)\|^2}{\|x^k - x^*\|}$ and $\nicefrac{-\langle F(x^k) , x^k - x^* \rangle}{(\|F(x^k)\|\cdot \|x^k - x^*\|)}$ remain constant during the run of the method (the standard deviation of the arrays $\{\nicefrac{\rho\|F(x^k)\|^2}{\|x^k - x^*\|}\}_{k=1}^N$ and $\{\nicefrac{-\langle F(x^k) , x^k - x^* \rangle}{(\|F(x^k)\|\cdot \|x^k - x^*\|)}\}_{k=1}^N$ is of the order $10^{-6}$--$10^{-7}$). Finally, in~(c), we illustrate that these characteristics coincide with $\nicefrac{\rho}{\sqrt{N\gamma(\gamma - 2\rho)}}$ as long as the total number of steps $N$ is sufficiently large ($N \geq \max\{\nicefrac{\rho^2}{\gamma(\gamma-2\rho)},1\}$).}
\label{fig:1}
\end{figure}
In this section, we consider the Proximal Point method \citep{martinet1970regularisation, rockafellar1976monotone}, which is usually written as $x^{k+1}=(\mathrm{Id}+\gamma F)^{-1}\,x^k$ (where we assume that $\gamma>0$ is large enough so that the iteration is well and uniquely defined) or, equivalently:
\begin{equation}
x^{k+1} = x^k - \gamma F(x^{k+1}). \tag{\algname{PP}} \label{eq:PP}
\end{equation}
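When $F(x) = Ax$ is linear, the implicit \ref{eq:PP} update can be carried out exactly: $x^{k+1} = x^k - \gamma A x^{k+1}$ rearranges to $x^{k+1} = (\mathrm{Id} + \gamma A)^{-1} x^k$. The following is a minimal NumPy sketch (the matrix, stepsize, and horizon in the demo are arbitrary illustrative choices):

```python
import numpy as np

def proximal_point_linear(A, x0, gamma, num_iters):
    """PP for the linear operator F(x) = A x: the implicit update
    x^{k+1} = x^k - gamma * A x^{k+1} has the closed form
    x^{k+1} = (Id + gamma * A)^{-1} x^k (resolvent of gamma * F)."""
    M = np.linalg.inv(np.eye(A.shape[0]) + gamma * A)
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(num_iters):
        xs.append(M @ xs[-1])
    return xs

# demo: rotation by 2*pi/3 is 0.5-negative comonotone (and non-monotone),
# and PP with gamma = 2 > 2 * rho contracts towards the solution x^* = 0
theta = 2 * np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
xs = proximal_point_linear(A, [1.0, 0.0], 2.0, 20)
print(np.linalg.norm(xs[-1]))  # small: the iterates converge to x^* = 0
```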
In particular, for given values of $N\in\mathbb{N}$, $R > 0$, $\rho > 0$, and $\gamma > 0$ we focus on the following question: \emph{what guarantees can we prove on $\|x^N - x^{N-1}\|^2$ (in particular: as a function of $N$), where $\{x^k\}_{k=0}^{N}$ is generated by \ref{eq:PP} with stepsize $\gamma$ after $N\geq 1$ iterations of solving \ref{eq:MI} with $F: \mathbb{R}^d \rightrightarrows \mathbb{R}^d$ being $\rho$-negative comonotone and $\|x^0 - x^*\|^2 \leq R^2$?} This kind of question can naturally be reformulated as an explicit optimization problem looking for worst-case problem instances, often referred to as \emph{performance estimation problems} (PEPs), as introduced and formalized in~\citep{drori2012performance,taylor2017smooth,2017taylor}:
\begin{eqnarray}
\max\limits_{F, x^0}&&\|x^N - x^{N-1}\|^2 \label{eq:PEP_for_PP}\\
\text{s.t.}&& F \text{ satisfies } \eqref{eq:rho_neg_comon},\notag\\
&&\|x^0 - x^*\|^2 \leq R^2,\; 0 \in F(x^*),\notag\\
&& x^{k+1} = x^k - \gamma F(x^{k+1}),\quad k=0,1,\ldots,N-1.\notag
\end{eqnarray}
As we show in Appendix~\ref{appendix:PEP},~\eqref{eq:PEP_for_PP} can be \emph{formulated} as a semidefinite program (SDP). To construct and solve the SDP corresponding to \eqref{eq:PEP_for_PP} numerically, one can use the PEPit package \citep{goujaud2022pepit} (after adding the class of $\rho$-negative comonotone operators to it), which thereby allows constructing worst-case guarantees and examples numerically. Figure~\ref{fig:1a} shows the numerical results obtained by solving~\eqref{eq:PEP_for_PP} for different values of $N$. We observe that the worst-case value of \eqref{eq:PEP_for_PP} behaves as ${\cal O}(\nicefrac{1}{N})$, similarly to the monotone case.
Motivated by these numerical results, we derive the following convergence rates for \ref{eq:PP}.
\begin{theorem}\label{thm:PP_convergence}
Let $F: \mathbb{R}^d \rightrightarrows \mathbb{R}^d$ be $\rho$-star-negative comonotone. Then, for any $\gamma > 2\rho$ the iterates produced by \ref{eq:PP} are well-defined and satisfy $\forall N \geq 1$:
\begin{equation}
\frac{1}{N}\sum\limits_{k=1}^N \|x^k - x^{k-1}\|^2 \leq \frac{\gamma\|x^0 - x^*\|^2}{(\gamma - 2\rho)N}. \label{eq:PP_convergence_star_neg_comon}
\end{equation}
If $F: \mathbb{R}^d \rightrightarrows \mathbb{R}^d$ is $\rho$-negative comonotone, then for any $\gamma > 2\rho$ and any $k \geq 1$ the iterates produced by \ref{eq:PP} satisfy
\begin{equation*}
\|x^{k+1} - x^k\| \leq \|x^k - x^{k-1}\|
\end{equation*}
and for any $N \geq 1$:
\begin{equation}
\|x^N - x^{N-1}\|^2 \leq \frac{\gamma\|x^0 - x^*\|^2}{(\gamma - 2\rho)N}.\label{eq:PP_convergence_neg_comon}
\end{equation}
\end{theorem}
\begin{proof}
We start with $\rho$-star-negative comonotone case. From the update rule of \ref{eq:PP} we have
\begin{align*}
\|x^{k+1} - x^*\|^2 &= \|x^{k} - x^* - (x^k - x^{k+1}) \|^2\\
&= \|x^k - x^*\|^2 - 2\langle x^k - x^*, x^k - x^{k+1}\rangle + \|x^k - x^{k+1}\|^2\\
&= \|x^k - x^*\|^2 - 2\langle x^{k+1} - x^*, x^k - x^{k+1}\rangle - \|x^k - x^{k+1}\|^2.
\end{align*}
Since $x^k - x^{k+1} = \gamma F(x^{k+1})$, where $F(x^{k+1})$ is some value of the operator $F$ at the point $x^{k+1}$, we can apply $\rho$-star-negative comonotonicity and get
\begin{align*}
\|x^{k+1} - x^*\|^2 &\leq \|x^k - x^*\|^2 - \left(1 - \frac{2\rho}{\gamma}\right) \|x^k - x^{k+1}\|^2.
\end{align*}
Telescoping the above inequality for $k=0,\ldots,N-1$ and changing the index in the summation, we obtain \eqref{eq:PP_convergence_star_neg_comon}. Next, to get the last-iterate convergence we use $\rho$-negative comonotonicity \eqref{eq:rho_neg_comon} inequality written for~$x^k$ and~$x^{k+1}$:
\begin{align*}
\frac{1}{\gamma}\langle x^{k-1} - x^k - &(x^k - x^{k+1}), x^k - x^{k+1} \rangle \geq -\frac{\rho}{\gamma^2}\|x^{k-1} - x^k - (x^k - x^{k+1})\|^2,
\end{align*}
where we use that $\nicefrac{(x^{k-1} - x^k)}{\gamma}$ and $\nicefrac{(x^{k} - x^{k+1})}{\gamma}$ belong to the values of $F$ at the points $x^k$ and $x^{k+1}$, respectively. Multiplying both sides by $\gamma^2$ and rearranging the terms, we get
\begin{align*}
\gamma \|x^k - x^{k+1}\|^2 &\leq \gamma\langle x^{k-1} - x^k, x^k - x^{k+1} \rangle + \rho \|x^{k-1} + x^{k+1} - 2x^k\|^2.
\end{align*}
Finally, using $2\langle a,b \rangle = \|a\|^2 + \|b\|^2 - \|a - b\|^2$, which holds for any $a,b \in \mathbb{R}^d$, and rearranging the terms, we derive
\begin{align*}
\frac{\gamma}{2}\|x^k - x^{k+1}\|^2 &\leq \frac{\gamma}{2}\|x^{k-1} - x^k\|^2 - \left(\frac{\gamma}{2} - \rho\right)\|x^{k-1} + x^{k+1} - 2x^k\|^2.
\end{align*}
Taking into account $\gamma > 2\rho$, we obtain $\|x^{k+1} - x^k\| \leq \|x^k - x^{k-1}\|$. Together with \eqref{eq:PP_convergence_star_neg_comon} it implies \eqref{eq:PP_convergence_neg_comon}.
\end{proof}
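As a quick numerical sanity check of Theorem~\ref{thm:PP_convergence}, one can simulate \ref{eq:PP} on a scaled rotation operator $F(x) = \alpha A x$ with $\alpha = \nicefrac{|\cos\theta|}{\rho}$ and $\theta \in (\nicefrac{\pi}{2}, \pi)$, which is $\rho$-negative comonotone (this is the class of worst-case examples discussed below). Identifying $\mathbb{R}^2$ with $\mathbb{C}$, rotation by $\theta$ is multiplication by $e^{i\theta}$, and the proximal step has the closed form $x^{k+1} = \nicefrac{x^k}{(1+\gamma\alpha e^{i\theta})}$. The sketch below, with illustrative constants, checks both the monotone decrease of $\|x^{k+1}-x^k\|$ and the best-iterate bound~\eqref{eq:PP_convergence_star_neg_comon}:

```python
import math

rho, gamma = 0.5, 2.0                        # requires gamma > 2*rho
theta = 2.0                                  # any angle in (pi/2, pi)
alpha = abs(math.cos(theta)) / rho           # makes F rho-negative comonotone
m = alpha * complex(math.cos(theta), math.sin(theta))   # F(x) = m*x on R^2 ~ C

x = complex(1.0, 0.0)                        # x^0; the solution is x^* = 0
N = 20
diffs = []
for _ in range(N):
    x_next = x / (1 + gamma * m)             # resolvent of the linear operator
    diffs.append(abs(x - x_next) ** 2)
    x = x_next

# ||x^{k+1} - x^k|| <= ||x^k - x^{k-1}|| (negative comonotone case)
assert all(diffs[k + 1] <= diffs[k] + 1e-12 for k in range(N - 1))
# best-iterate bound eq:PP_convergence_star_neg_comon with ||x^0 - x^*|| = 1
assert sum(diffs) / N <= gamma / ((gamma - 2 * rho) * N)
```

For this operator the star-negative comonotonicity inequality holds with equality, so the telescoped bound is nearly tight: the infinite sum of $\|x^k - x^{k-1}\|^2$ equals exactly $\nicefrac{\gamma\|x^0-x^*\|^2}{(\gamma-2\rho)}$.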
First, the result from \eqref{eq:PP_convergence_star_neg_comon} implies only a best-iterate ${\cal O}(\nicefrac{1}{N})$ rate. Such guarantees are weaker than last-iterate ones, but they hold under the more general star-negative comonotonicity assumption. Note that the guarantee~\eqref{eq:PP_convergence_neg_comon} matches the best known guarantee for the monotone case (up to the factor $\nicefrac{\gamma}{(\gamma - 2\rho)}$) from~\citep{he2015convergence, gu2020tight}, and it is therefore natural to ask whether {it is possible to improve the factor $\nicefrac{\gamma}{(\gamma - 2\rho)}$ in the convergence guarantee of \ref{eq:PP} for the $\rho$-negative comonotone case.}
To answer this question, one can use performance estimation again. In particular, using the trace heuristic to find low-dimensional worst-case examples for~\eqref{eq:PEP_for_PP}, we obtain $2$-dimensional worst-case examples for different values of~$\gamma$ and~$N$, see Figure~\ref{fig:1b} and Figure~\ref{fig:1c}. These figures illustrate that the worst-case examples found numerically correspond to scaled rotation operators (similar to~\citet{gu2020tight} but with different angles). Moreover, the rotation angle and scaling parameter have non-trivial dependencies on the number of iterations. These observations lead to the following result, which shows that the multiplicative factor cannot be removed asymptotically as $N$ grows.
\begin{theorem}\label{thm:PP_worst_case}
For any $\rho > 0, \gamma > 2\rho$, and $N \geq \max\{\nicefrac{\rho^2}{\gamma(\gamma-2\rho)},1\}$ there exists a $\rho$-negative comonotone single-valued operator $F: \mathbb{R}^d \to \mathbb{R}^d$ such that after $N$ iterations \ref{eq:PP} with stepsize $\gamma$ produces $x^{N+1}$ satisfying
\begin{equation}
\|F(x^{N+1})\|^2 \geq \frac{\|x^0 - x^*\|^2}{\gamma(\gamma - 2\rho)N\left(1 + \frac{1}{N}\right)^{N+1}}.\label{eq:PP_worst_case}
\end{equation}
Indeed, one can pick the two-dimensional $F:\mathbb{R}^2\to \mathbb{R}^2$: $F(x) = \alpha A x$ with $$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},\quad \alpha = \frac{|\cos \theta|}{\rho}$$ for $\theta \in (\nicefrac{\pi}{2}, \pi)$ such that $\cos \theta = - \frac{\rho}{\sqrt{N\gamma(\gamma - 2\rho)}}$.
\end{theorem}
\begin{proof}
Consider the linear operator $F(x) = \alpha A x$ described above. First, we verify its $\rho$-negative comonotonicity: for any $x, y \in \mathbb{R}^d$
\begin{align*}
\langle F(x) - F(y), x - y \rangle &= \alpha \langle A(x-y), x-y \rangle\\
&= \alpha \|A(x-y)\|\cdot \|x-y\|\cdot \cos\theta\\
&= - \frac{\cos^2 \theta}{\rho} \|A(x-y)\|^2\\
&= -\rho \|F(x) - F(y)\|^2,
\end{align*}
where we use $\|A(x-y)\| = \|x-y\|$, since $A$ is a rotation matrix. Next, one can check that
\begin{equation*}
(I + \gamma\alpha A)^{-1} = \frac{1}{1 + \gamma\alpha^2(\gamma - 2\rho)} \begin{pmatrix} 1 + \gamma\alpha \cos\theta & \gamma\alpha\sin\theta \\ -\gamma\alpha\sin\theta & 1 + \gamma\alpha \cos\theta \end{pmatrix}.
\end{equation*}
Since $x^{k+1} = (I + \gamma\alpha A)^{-1}x^k$, one can verify via direct computations that
\begin{align*}
\|x^{k+1}\|^2 = \frac{1}{1 + \gamma\alpha^2(\gamma - 2\rho)}\|x^k\|^2.
\end{align*}
Unrolling this identity for $k = N, N-1, \ldots, 0$ and using $x^* = 0$, $\|F(x^{k+1})\| = \alpha\|Ax^{k+1}\| = \alpha\|x^{k+1}\|$, we get
\begin{equation*}
\|F(x^{N+1})\|^2 = \alpha^2 \left(\frac{1}{1 + \gamma\alpha^2(\gamma - 2\rho)}\right)^{N+1} \|x^0 - x^*\|^2.
\end{equation*}
Maximizing the right-hand side over $\alpha$, we find that the optimal choice is $\alpha = \nicefrac{1}{\sqrt{N\gamma(\gamma-2\rho)}}$. Since $\alpha\rho = |\cos\theta| \leq 1$, we should assume that $N \geq \nicefrac{\rho^2}{\gamma(\gamma - 2\rho)}$. Plugging $\alpha = \nicefrac{1}{\sqrt{N\gamma(\gamma-2\rho)}}$ in the above formula for $\|F(x^{N+1})\|^2$, we get the result.
\end{proof}
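The construction of Theorem~\ref{thm:PP_worst_case} is easy to check numerically: with the optimal $\alpha = \nicefrac{1}{\sqrt{N\gamma(\gamma-2\rho)}}$, the bound \eqref{eq:PP_worst_case} holds with equality. A sketch using complex arithmetic for the $2$-dimensional rotation (the constants are illustrative):

```python
import math

rho, gamma, N = 1.0, 3.0, 10    # gamma > 2*rho and N >= rho^2/(gamma*(gamma-2*rho))
alpha = 1.0 / math.sqrt(N * gamma * (gamma - 2 * rho))
theta = math.acos(-alpha * rho)              # so that alpha = |cos(theta)|/rho
m = alpha * complex(math.cos(theta), math.sin(theta))   # F(x) = m*x on R^2 ~ C

x = complex(1.0, 0.0)                        # ||x^0 - x^*|| = 1 with x^* = 0
for _ in range(N + 1):                       # N+1 proximal steps yield x^{N+1}
    x = x / (1 + gamma * m)

lhs = abs(m * x) ** 2                        # ||F(x^{N+1})||^2
rhs = 1.0 / (gamma * (gamma - 2 * rho) * N * (1 + 1.0 / N) ** (N + 1))
assert lhs >= rhs * (1 - 1e-9)               # eq:PP_worst_case (here with equality)
assert abs(lhs - rhs) <= 1e-9 * rhs
```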
Since $\exp(1) \leq (1+ \nicefrac{1}{N})^{N+1} \leq 4$, the above result implies the tightness (up to a multiplicative constant) of Theorem~\ref{thm:PP_convergence}. One should note again that both Theorem~\ref{thm:PP_convergence} and \ref{thm:PP_worst_case} rely on the assumption that $\gamma > 2\rho$ for the proximal operation to be well-defined. That is, these results are valid only for \emph{large enough stepsizes}. This is a relatively rare phenomenon in optimization and variational inequalities. As the next theorem states, \emph{\ref{eq:PP} is not guaranteed to converge if the stepsize is too small}, even if the proximal operation is well-defined.
\begin{theorem}\label{thm:PP_counter_example}
For any $\rho > 0$ there exists a $\rho$-negative comonotone single-valued operator $F: \mathbb{R}^d \to \mathbb{R}^d$ such that \ref{eq:PP} does not converge to the solution of \ref{eq:VIP} for any $0 < \gamma \leq 2\rho$. In particular, one can take $F(x) = - \nicefrac{x}{\rho}$.
\end{theorem}
\begin{proof}
First, $F(x) = - \nicefrac{x}{\rho}$ is $\rho$-negative comonotone: for any $x,y \in \mathbb{R}^d$
\begin{align*}
\langle F(x) - F(y), x- y \rangle &= -\frac{1}{\rho}\|x-y\|^2 = -\rho\|F(x) - F(y)\|^2.
\end{align*}
Next, the iterates of \ref{eq:PP} satisfy $x^{k+1} = x^k + \gamma\nicefrac{x^{k+1}}{\rho}$, i.e., $x^{k+1} = \nicefrac{x^k}{(1 - \nicefrac{\gamma}{\rho})}$ whenever $\gamma \neq \rho$. If $\gamma = \rho$, the next iterate is undefined. If $\gamma = 2\rho$, then $x^{k+1} = -x^k$, i.e., the iterates oscillate and do not converge. Finally, when $\gamma \in (0,\rho)\cup (\rho, 2\rho)$ we have $|1 - \nicefrac{\gamma}{\rho}| < 1$, implying $\|x^{k+1}\| > \|x^k\|$, i.e., \ref{eq:PP} diverges.
\end{proof}
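The case analysis above amounts to the scalar recursion $x^{k+1} = \nicefrac{x^k}{(1-\nicefrac{\gamma}{\rho})}$ and can be confirmed in a few lines of arithmetic:

```python
rho, x0 = 0.5, 1.0
# For F(x) = -x/rho the proximal step reads x^{k+1} = x^k / (1 - gamma/rho).
for gamma in [0.1, 0.3, 0.7, 0.9]:           # gamma in (0, rho) or (rho, 2*rho)
    x1 = x0 / (1 - gamma / rho)
    assert abs(x1) > abs(x0)                 # the iterates move away from x^* = 0
# gamma = 2*rho gives x^{k+1} = -x^k: the iterates oscillate and never converge
assert x0 / (1 - 2 * rho / rho) == -x0
```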
As a summary, Theorem~\ref{thm:PP_convergence} and Theorem~\ref{thm:PP_counter_example} provide a complete picture of the convergence of \ref{eq:PP} under negative comonotonicity, including upper bounds, worst-case examples, and counter-examples justifying the need for large enough stepsizes for \algname{PP} applied to $\rho$-negative comonotone \ref{eq:MI}/\ref{eq:VIP}.
\section{Extragradient-Based Methods}\label{sec:eg_peg}
\paragraph{Extragradient.} The update rule of Extragradient method \citep{korpelevich1976extragradient} is defined as follows:
\begin{equation}\tag{\algname{EG}}\label{eq:EG}
\begin{aligned}
\widetilde{x}^{k} &= x^k - \gamma_1 F(x^k),\\
x^{k+1} &= x^k - \gamma_2 F(\widetilde{x}^k),
\end{aligned}\quad \forall k \geq 0.
\end{equation}
In its pure form, \ref{eq:EG} has the same extrapolation ($\gamma_1$) and update ($\gamma_2$) stepsizes, i.e., $\gamma_1 = \gamma_2$. However, the existing analyses of \ref{eq:EG} under $\rho$-(star-)negative comonotonicity rely on using $\gamma_2 < \gamma_1$ \citep{diakonikolas2021efficient, pethick2022escaping}. The following lemma sheds some light on this phenomenon.
\begin{lemma}\label{lem:EG_main_lemma}
Let $F$ be $L$-Lipschitz and $\rho$-star-negative comonotone. Then, for any $k\geq 0$ the iterates produced by \ref{eq:EG} satisfy
\begin{eqnarray}
\|x^{k+1} - x^*\|^2 &\leq& \|x^k - x^*\|^2\notag\\
&& - \gamma_2\left(\gamma_1 - 2\rho - \gamma_2\right)\|F(\widetilde{x}^k)\|^2 \label{eq:EG_key_inequality_main_1}\\
&& - \gamma_1\gamma_2(1 - L^2\gamma_1^2)\|F(x^k)\|^2. \label{eq:EG_key_inequality_main_2}
\end{eqnarray}
\end{lemma}
\begin{proof}[Proof sketch]
The proof follows a quite standard pattern: we start with expanding the square $\|x^{k+1} - x^*\|^2$ and then rearrange the terms to get $\|x^k - x^*\|^2 - 2\gamma_2 \langle \widetilde{x}^k - x^*, F(\widetilde{x}^k) \rangle - 2\gamma_1\gamma_2 \langle F(x^k), F(\widetilde{x}^k) \rangle + \gamma_2^2 \|F(\widetilde{x}^k)\|^2$ in the right-hand side. After that, it remains to estimate inner products. From $\rho$-star-negative comonotonicity we have $- 2\gamma_2 \langle \widetilde{x}^k - x^*, F(\widetilde{x}^k) \rangle \leq 2\rho\gamma_2 \|F(\widetilde{x}^k)\|^2$. For the second inner product $- 2\gamma_1\gamma_2 \langle F(x^k), F(\widetilde{x}^k) \rangle$ we use $2\langle a,b \rangle = \|a\|^2 + \|b\|^2 - \|a - b\|^2$, which holds for any $a,b \in \mathbb{R}^d$, and then apply $L$-Lipschitzness to upper bound the term $\gamma_1\gamma_2\|F(x^k) - F(\widetilde{x}^k)\|^2$. Finally, we rearrange the terms, see the full proof in Appendix~\ref{appendix:eg}.
\end{proof}
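As an illustration, the inequality \eqref{eq:EG_key_inequality_main_1}--\eqref{eq:EG_key_inequality_main_2} can be verified numerically at random points for a scaled rotation operator, which is $\rho$-negative comonotone and $L$-Lipschitz (for this operator all estimates in the proof hold with equality). The constants below are illustrative:

```python
import cmath, math, random

L = 1.0
theta = math.acos(-0.1)
rho = abs(math.cos(theta)) / L    # the operator below is rho-negative comonotone
m = L * cmath.exp(1j * theta)     # F(x) = m*x, L-Lipschitz, x^* = 0
g1, g2 = 0.8, 0.5                 # 2*rho < g1 < 1/L and 0 < g2 <= g1 - 2*rho

random.seed(0)
for _ in range(100):
    x = complex(random.gauss(0, 1), random.gauss(0, 1))   # x^k
    xt = x - g1 * m * x                       # extrapolation step of EG
    xp = x - g2 * m * xt                      # update step of EG
    rhs = (abs(x) ** 2
           - g2 * (g1 - 2 * rho - g2) * abs(m * xt) ** 2
           - g1 * g2 * (1 - L ** 2 * g1 ** 2) * abs(m * x) ** 2)
    assert abs(xp) ** 2 <= rhs + 1e-12        # the lemma's inequality holds
```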
From the above result one can easily notice that the choice of $\gamma_2 \leq \gamma_1 - 2\rho$ and $\gamma_1 < \nicefrac{1}{L}$ implies best-iterate convergence in terms of the squared norm of the operator. However, in this proof, $\gamma_2$ should be positive, i.e., this proof is valid only for $\gamma_1 > 2\rho$. In other words, one can derive best-iterate ${\cal O}(\nicefrac{1}{N})$ rate for \ref{eq:EG} whenever $\rho < \nicefrac{1}{2L}$, which is also known from \citet{pethick2022escaping} (though \citet{pethick2022escaping} do not provide analogs of Lemma~\ref{lem:EG_main_lemma}).
Next, to get the last-iterate convergence of \ref{eq:EG} we assume $\rho$-negative comonotonicity, since we do so even for \ref{eq:PP}, a simpler algorithm. Moreover, even in the monotone case, the existing proofs of the last-iterate convergence of \ref{eq:EG} rely on using the same stepsizes $\gamma_1 = \gamma_2 = \gamma$ \citep{gorbunov2021extragradient, cai2022tight}. This can be partially explained by the following fact: $\|F(x^{k+1})\|$ can be larger than $\|F(x^k)\|$ if $\gamma_1 \neq \gamma_2$ \citep{gorbunov2021extragradient}. Therefore, we also assume that $\gamma_1 = \gamma_2 = \gamma$ to derive the last-iterate convergence rate.
However, as Lemma~\ref{lem:EG_main_lemma} indicates, the choice $\gamma_1 = \gamma_2 = \gamma$ may complicate the proof because the term from \eqref{eq:EG_key_inequality_main_1} becomes non-negative. Moreover, it is natural to expect that the proof will work for a smaller range of $\rho$. Nevertheless, using a computer-assisted approach, we derive that for any $\rho \leq \nicefrac{1}{8L}$ and $4\rho\leq \gamma \leq \nicefrac{1}{2L}$ the iterates of \ref{eq:EG} satisfy $\|F(x^{k+1})\| \leq \|F(x^k)\|$, which is the main building block of the obtained proof.
We summarize the derived upper bounds for \ref{eq:EG} in the following result.
\begin{theorem}\label{thm:EG_convergence}
Let $F$ be $L$-Lipschitz and $\rho$-star-negative comonotone with $\rho < \nicefrac{1}{2L}$. Then, for any $2\rho < \gamma_1 < \nicefrac{1}{L}$ and $0 < \gamma_2 \leq \gamma_1 - 2\rho$ the iterates produced by \ref{eq:EG} after $N\geq 0$ iterations satisfy
\begin{equation}
\frac{1}{N+1}\sum\limits_{k=0}^N \|F(x^k)\|^2 \leq \frac{\|x^0 - x^*\|^2}{\gamma_1\gamma_2(1 - L^2\gamma_1^2)(N+1)}. \label{eq:EG_best_iterate}
\end{equation}
If, in addition, $F$ is $\rho$-negative comonotone with $\rho \leq \nicefrac{1}{8L}$ and $\gamma_1 = \gamma_2 = \gamma$ such that $4\rho \leq \gamma \leq \nicefrac{1}{2L}$, then for any $k \geq 0$ the iterates produced by \ref{eq:EG} satisfy $\|F(x^{k+1})\| \leq \|F(x^k)\|$ and for any $N\geq 1$
\begin{equation}
\|F(x^N)\|^2 \leq \frac{28 \|x^0 - x^*\|^2}{N\gamma^2 + 320\gamma \rho}. \label{eq:EG_last_iterate}
\end{equation}
\end{theorem}
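Both guarantees of Theorem~\ref{thm:EG_convergence} can be illustrated numerically on a rotation-type operator: with the illustrative choice $\rho = 0.1 \leq \nicefrac{1}{8L}$ and $\gamma = 0.45 \in [4\rho, \nicefrac{1}{2L}]$ (for $L = 1$), the operator norm decreases monotonically along the iterates and the last-iterate bound \eqref{eq:EG_last_iterate} holds:

```python
import cmath, math

L = 1.0
theta = math.acos(-0.1)                      # angle slightly past pi/2
rho = abs(math.cos(theta)) / L               # = 0.1 <= 1/(8L) = 0.125
m = L * cmath.exp(1j * theta)                # F(x) = m*x, x^* = 0
gamma = 0.45                                 # 4*rho = 0.4 <= gamma <= 1/(2L) = 0.5

x = complex(1.0, 0.0)                        # ||x^0 - x^*|| = 1
norms = [abs(m * x)]
for N in range(1, 101):
    xt = x - gamma * m * x                   # extrapolation step
    x = x - gamma * m * xt                   # update step
    norms.append(abs(m * x))
    # last-iterate bound eq:EG_last_iterate
    assert norms[-1] ** 2 <= 28.0 / (N * gamma ** 2 + 320 * gamma * rho)
# monotone decrease of the operator norm, as stated in the theorem
assert all(norms[k + 1] <= norms[k] + 1e-12 for k in range(len(norms) - 1))
```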
Results similar to \eqref{eq:EG_best_iterate} are known in the literature: \citet{diakonikolas2021efficient} derives best-iterate ${\cal O}(\nicefrac{1}{N})$ convergence for $\rho < \nicefrac{1}{8L}$ and \citet{pethick2022escaping} generalizes it to the case of any $\rho < \nicefrac{1}{2L}$. In this sense, \eqref{eq:EG_best_iterate} recovers the result of \citet{pethick2022escaping}, though the proof is different.
Next, the last-iterate convergence result from \eqref{eq:EG_last_iterate} holds for any $\rho \leq \nicefrac{1}{8L}$, which is much smaller than the range $\rho < \nicefrac{1}{2L}$ allowed for the best-iterate result. Nevertheless, the previous best-known last-iterate rate requires $\rho$ to be smaller than $\nicefrac{1}{16L}$ \citep{luo2022last}, which is $2$ times smaller than what is allowed for \eqref{eq:EG_last_iterate}.
This discussion naturally leads us to the following question: \emph{for a given $L > 0$, what is the maximal possible $\rho$ for which there exists a choice of stepsizes in \ref{eq:EG} such that it converges for any $\rho$-negative comonotone $L$-Lipschitz operator $F$?} This question is partially addressed by \citet{pethick2022escaping}, who prove that if $\gamma_1 = \nicefrac{1}{L}$, then for $\rho \geq \nicefrac{(1 - L\gamma_2)}{2L}$ \ref{eq:EG} does not necessarily converge. Guided by the results obtained for \ref{eq:PP}, we make a further step and derive the following statement.
\begin{theorem}\label{thm:EG_counter_example}
For any $L > 0$, $\rho \geq \nicefrac{1}{2L}$, and any choice of stepsizes $\gamma_1, \gamma_2 > 0$ there exists a $\rho$-negative comonotone $L$-Lipschitz operator $F$ such that \ref{eq:EG} does not necessarily converge when solving \ref{eq:VIP} with this operator $F$. In particular, for $\gamma_1 > \nicefrac{1}{L}$ it is sufficient to take $F(x) = L x$, and for $0 < \gamma_1 \leq \nicefrac{1}{L}$ one can take $F(x) = L A x$, where $x \in \mathbb{R}^2$, $$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},\quad \theta = \frac{2\pi}{3}.$$
\end{theorem}
This result corroborates Theorem~\ref{thm:PP_counter_example} and the known relationship between \ref{eq:EG} and \ref{eq:PP}. On the one hand, it is known that \ref{eq:EG} can be seen as an approximation of \ref{eq:PP} \citep[Theorem 1]{mishchenko2020revisiting}. Since \ref{eq:PP} converges only for stepsizes larger than $2\rho$, it is natural to expect that \ref{eq:EG} also needs to have at least one stepsize larger than $2\rho$ (otherwise, it can be seen as an approximation of \ref{eq:PP} with stepsize not larger than $2\rho$, which is known to be non-convergent). On the other hand, unlike \ref{eq:PP}, \ref{eq:EG} does not converge for arbitrarily large stepsizes, which is a standard phenomenon for explicit methods in optimization. In particular, one has to take $\gamma_1 \leq \nicefrac{1}{L}$ (otherwise there exists a very good -- $L$-cocoercive -- operator such that \ref{eq:EG} diverges). These two observations explain the intuition behind Theorem~\ref{thm:EG_counter_example}.
\paragraph{Optimistic gradient.} Optimistic gradient \citep{popov1980modification} is a single-call version of \ref{eq:EG} having the following form:
\begin{equation}\tag{\algname{OG}}\label{eq:OG}
\begin{aligned}
\widetilde{x}^{k} &= x^k - \gamma_1 F(\widetilde{x}^{k-1}),\quad \forall k > 0,\\
x^{k+1} &= x^k - \gamma_2 F(\widetilde{x}^k),\quad \forall k \geq 0,
\end{aligned}
\end{equation}
where $\widetilde{x}^0 = x^0$. Guided by the results and intuition developed for \ref{eq:EG}, here we also deviate from the original form of \ref{eq:OG}, which has $\gamma_1 = \gamma_2$, and allow $\gamma_1$ and $\gamma_2$ to be different. The main goal of the rest of this section is to obtain convergence results for \ref{eq:OG} similar to those derived for \ref{eq:EG} earlier in this section.
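For concreteness, one \ref{eq:OG} iteration can be sketched as follows (a minimal, generic implementation; the helper name, operator, and stepsizes are purely illustrative). Note that each iteration issues a single operator call, reusing $F(\widetilde{x}^{k-1})$ from the previous step:

```python
def og_step(x, Ft_prev, F, g1, g2):
    """One optimistic-gradient iteration (k >= 1).

    x: current iterate x^k; Ft_prev: F(x~^{k-1}) computed at the previous step;
    F: the operator; g1, g2: extrapolation and update stepsizes."""
    xt = x - g1 * Ft_prev          # x~^k reuses the previous operator value
    Ft = F(xt)                     # the single operator call of this iteration
    return x - g2 * Ft, Ft         # x^{k+1} and F(x~^k) for the next iteration

# Toy run on the monotone operator F(x) = x (solution x^* = 0):
F = lambda x: x
x0 = 1.0
Ft = F(x0)                         # F(x~^0) with x~^0 = x^0
x = x0 - 0.3 * Ft                  # k = 0: x^1 = x^0 - g2*F(x~^0)
for _ in range(100):               # k = 1, 2, ...
    x, Ft = og_step(x, Ft, F, 0.3, 0.3)
assert abs(x) < 1e-3               # the iterates approach x^* = 0
```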
Before we move on, we would like to highlight the challenges in the analysis of \ref{eq:OG}. Although \ref{eq:EG} and \ref{eq:OG} can both be seen as approximations of \ref{eq:PP} \citep{mokhtari2020unified}, they have some noticeable theoretical differences going beyond algorithmic ones. For example, even for a monotone $L$-Lipschitz operator $F$, the iterates produced by \ref{eq:OG} do not satisfy $\|F(x^{k+1})\| \leq \|F(x^k)\|$ or $\|F(\widetilde{x}^k)\| \leq \|F(x^k)\|$ in general \citep{gorbunov2022last}, while for \ref{eq:EG} $\|F(x^{k+1})\| \leq \|F(x^k)\|$ holds \citep{gorbunov2021extragradient}. This fact makes the analysis of \ref{eq:OG} more complicated than that of \ref{eq:EG}. Moreover, the known convergence results in the monotone case for \ref{eq:OG} require smaller stepsizes than for \ref{eq:EG} \citep{gorbunov2022last, cai2022tight}. In view of the obtained results for \ref{eq:PP} and \ref{eq:EG}, this fact highlights the non-triviality of obtaining convergence results for \ref{eq:OG} under $\rho$-negative comonotonicity for the same range of allowed values of $\rho$ as for \ref{eq:EG}.
Nevertheless, we obtain the best-iterate ${\cal O}(\nicefrac{1}{N})$ convergence of \ref{eq:OG} for any $\rho < \nicefrac{1}{2L}$, i.e., for the same range of $\rho$ as for \ref{eq:EG}. We also derive last-iterate ${\cal O}(\nicefrac{1}{N})$ convergence of \ref{eq:OG} but for $\rho \leq \nicefrac{5}{62L}$, which is a smaller range than we have for \ref{eq:EG}. The results are summarized below.
\begin{theorem}\label{thm:OG_convergence}
Let $F$ be $L$-Lipschitz and $\rho$-star-negative comonotone with $\rho < \nicefrac{1}{2L}$. Then, for any $2\rho < \gamma_1 < \nicefrac{1}{L}$ and $0 < \gamma_2 \leq \min\{\nicefrac{1}{L} - \gamma_1, \gamma_1 - 2\rho\}$ the iterates produced by \ref{eq:OG} after $N\geq 0$ iterations satisfy
\begin{equation}
\frac{1}{N+1}\sum\limits_{k=0}^N \|F(x^k)\|^2 \leq \frac{\|x^0 - x^*\|^2}{\gamma_1\gamma_2(1 - L^2(\gamma_1 + \gamma_2)^2)(N+1)}.
\label{eq:OG_best_iterate}
\end{equation}
If, in addition, $F$ is $\rho$-negative comonotone with $\rho \leq \nicefrac{5}{62L}$ and $\gamma_1 = \gamma_2 = \gamma$ such that $4\rho \leq \gamma \leq \nicefrac{10}{31L}$, then for any $N \geq 1$ the iterates produced by \ref{eq:OG} satisfy
\begin{equation}
\|F(x^N)\|^2 \leq \frac{717\|x^0 - x^*\|^2}{N \gamma(\gamma - 3\rho) + 800\gamma^2}. \label{eq:OG_last_iterate}
\end{equation}
\end{theorem}
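As with \ref{eq:EG}, the last-iterate guarantee can be illustrated numerically on a scaled rotation operator: with the illustrative choice $\rho = 0.07 \leq \nicefrac{5}{62L}$ and $\gamma = 0.3 \in [4\rho, \nicefrac{10}{31L}]$ (for $L = 1$), the iterates of \ref{eq:OG} converge and the bound \eqref{eq:OG_last_iterate} holds along the trajectory:

```python
import cmath, math

L = 1.0
rho = 0.07                                   # <= 5/(62 L) ~ 0.0806
theta = math.acos(-rho * L)                  # rho = |cos(theta)|/L for this operator
m = L * cmath.exp(1j * theta)                # F(x) = m*x on R^2 ~ C; x^* = 0
gamma = 0.3                                  # 4*rho = 0.28 <= gamma <= 10/(31 L)

x = complex(1.0, 0.0)                        # x^0 (= x~^0), ||x^0 - x^*|| = 1
Ft = m * x                                   # F(x~^0)
x = x - gamma * Ft                           # x^1
for N in range(1, 201):
    # last-iterate bound eq:OG_last_iterate at iterate x^N
    assert abs(m * x) ** 2 <= 717.0 / (N * gamma * (gamma - 3 * rho) + 800 * gamma ** 2)
    xt = x - gamma * Ft                      # x~^N
    Ft = m * xt
    x = x - gamma * Ft                       # x^{N+1}
assert abs(x) < 0.1                          # the iterates converge towards x^* = 0
```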
The derived best-iterate rate \eqref{eq:OG_best_iterate} is new\footnote{In the final stage of writing our paper, we became aware of the recent work by \citet{cai2022accelerated_single} who also prove a best-iterate ${\cal O}(\nicefrac{1}{N})$ convergence rate for \ref{eq:OG}. However, their result requires $\rho < \nicefrac{1}{12\sqrt{3}L}$, which is an approximately $10$ times smaller range for $\rho$ than Theorem~\ref{thm:OG_convergence} allows for the best-iterate convergence.} in the literature on \ref{eq:OG}. Similarly to the case of \ref{eq:EG}, it is valid for any $\rho < \nicefrac{1}{2L}$. Next, a last-iterate ${\cal O}(\nicefrac{1}{N})$ rate was recently obtained for \ref{eq:OG} by \citet{luo2022last}. It holds for any $\rho < \nicefrac{8}{(27\sqrt{6}L)}$, while the rate that we obtain is valid for any $\rho \leq \nicefrac{5}{62L}$, which is an approximately $1.33$ times larger range.
Finally, as for \ref{eq:EG}, we derive the following result about the largest possible range for $\rho$ in the case of \ref{eq:OG}.
\begin{theorem}\label{thm:OG_counter_example}
For any $L > 0$, $\rho \geq \nicefrac{1}{2L}$, and any choice of stepsizes $\gamma_1, \gamma_2 > 0$ there exists a $\rho$-negative comonotone $L$-Lipschitz operator $F$ such that \ref{eq:OG} does not necessarily converge when solving \ref{eq:VIP} with this operator $F$. In particular, for $\gamma_1 > \nicefrac{1}{L}$ it is sufficient to take $F(x) = L x$, and for $0 < \gamma_1 \leq \nicefrac{1}{L}$ one can take $F(x) = L A x$, where $x \in \mathbb{R}^2$, $$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},\quad \theta = \frac{2\pi}{3}.$$
\end{theorem}
Note that the counter-examples are exactly the same as for \ref{eq:EG}. Moreover, since \ref{eq:OG} can also be seen as an approximation of \ref{eq:PP}, this result is expected and has the same intuition behind it as Theorem~\ref{thm:EG_counter_example}.
\section{Discussion}
In this work, we studied the worst-case convergence of methods for solving \ref{eq:MI}/\ref{eq:VIP} with (star-)negative comonotone operators, which we believe is an important first step towards going beyond the very popular monotonicity assumption, which is usually not satisfied in practice.
Namely, we study the proximal point~\eqref{eq:PP}, the extragradient~\eqref{eq:EG}, and the optimistic gradient~\eqref{eq:OG} methods. Although the basic understanding of the convergence of \ref{eq:PP} and the best-iterate convergence of \ref{eq:EG} and \ref{eq:OG} is relatively complete, several open questions about the last-iterate convergence of \ref{eq:EG} and \ref{eq:OG} remain. In particular, it is unclear what the largest possible range of $\rho$ is for which one can guarantee last-iterate ${\cal O}(\nicefrac{1}{N})$ convergence of \ref{eq:EG}/\ref{eq:OG} under $\rho$-negative comonotonicity and $L$-Lipschitzness.
Moreover, another important direction for future research is identifying weaker assumptions that allow proving non-asymptotic convergence rates for \ref{eq:PP}/\ref{eq:EG}/\ref{eq:OG} while still permitting isolated optima or non-convex solution sets, as discussed in Section~\ref{sec:neg_comon}.
\section{Introduction}
Convenience for large-scale data processing, privacy preservation, and parallel algorithm execution rendered the design of distributed optimization algorithms an attractive field for scholars \cite{yu2017distributed,yang2019survey,mertikopoulos2017distributed,tekin2015distributed,chouvardas2015greedy,marano2013nearest,swenson2015empirical}. However, the distributed nature of such methods, for example, physically separated servers connected over a network, exposes the system to vulnerabilities not faced by their traditional centralized counterparts \cite{chen2018internet}. The robustness and security of distributed methods need to be taken into account when assessing algorithm performance \cite{yang2019survey}.
In a centralized system, data can be cleaned, faultless computation can be established by reliable hardware, and communication requirements are minimal. On the other hand, typical distributed algorithms assume trustworthy data, faultless computation, and reliable communication. Also, privacy constraints might not allow for data corruption checks, while distributed computing infrastructure increases the likelihood of faulty hardware, e.g., personal devices~\cite{konevcny2016federated}.
Lastly, unreliable communication might occur due to noisy wireless communication, or more importantly, due to man-in-the-middle adversarial attacks.
In man-in-the-middle attacks, an adversary can take over network sub-systems and arbitrarily alter the information stored in and communicated between the machines to prevent convergence to the optimal solution, i.e., Byzantine attacks~\cite{vempaty2013distributed}.
Robust distributed optimization under adversarial manipulation has been studied for various corruption models, see~\cite{kairouz2019advances,yang2020adversary} for comprehensive reviews. For example, gradients communicated over a network are usually modeled as corrupted by: non-malicious noise~\cite{mnih2012learning},
adversarial noise~\cite{tramer2019adversarial},
quantization~\cite{alistarh2016qsgd},
or because the gradients are inexact oracles~\cite{dvurechensky2017gradient}.
Although robust optimization methods with strong theoretical guarantees are well established~\cite{natarajan2013learning,tramer2019adversarial}, a drawback of these approaches is that the corrupt gradients are assumed to be within a bounded neighborhood of the trustworthy ones, i.e., corruption can be modeled as a bounded additive noise to the trustworthy gradients.
On the other hand, an adversarial corruption model, which can be unbounded and arbitrary, has been extensively studied in the distributed learning literature under categories of data poisoning \cite{steinhardt2017certified}
and model update poisoning attacks \cite{wu2020federated,bagdasaryan2020backdoor}.
This line of work models corruption as an arbitrary manipulation on the information sent by the agents or on the data samples stored at the agents. However, the adversary is often assumed to have limited capability, i.e., the adversary is only able to manipulate a certain fraction of agents or data samples. Although successful defense mechanisms based on robust aggregation methods \cite{chen2017distributed,chen2018draco,cao2020distributed,blanchard2017machine,yin2019defending,yin2018byzantine,yang2019byrdie}
and data sanitation using robust statistics \cite{steinhardt2017certified}
are shown to be robust to these types of manipulation, robust estimation techniques rely on a bounded $\alpha$ fraction of agents/data points being corrupt at all times. Therefore, they are not applicable if there exist iterations with more than an $\alpha$ fraction of corrupted agents. For instance, if at any iteration more than half of the agents behave unpredictably and send arbitrarily corrupt information, then the aggregate will be arbitrarily corrupted. In fact, it was recently shown that even more benign-looking manipulations are able to get through these defense mechanisms with corruption rates as low as $0.5$--$1\%$~\cite{wang2020attack}.
In this paper, we study another corruption model where existing defense mechanisms are prone to failure. In particular, we adopt a distributed optimization framework where a group of agents communicates local gradient information to a central machine that aggregates and distributes information back to the agents. By modeling the temporal dynamics of the agents' states (either trustworthy or corrupted/Byzantine) via a two-state Markov chain, we allow \emph{all the agents} to be susceptible to \emph{arbitrary corruption}. This means that, even with low corruption rates, there could exist iterations at which the majority of the agents send corrupt gradients to the central machine. For example, the setting where an adversary is able to hack into the communication channels for a finite duration of time, but also is able to take over any communication channel with some success rate fits into the scope of this model. Besides adversarial manipulation, applications that rely on user data and choices (e.g., text data) can encounter this type of corruption since users can make (un)intentional mistakes (e.g., typing in multiple languages) at random times.
For this setting, we develop a robust distributed optimization algorithm with provable convergence guarantees for a number of function classes.
\noindent\textbf{Contributions:}
Our main contribution is a distributed stochastic optimization algorithm, named Robust Averaging Normalized Gradient Method (RANGE), that achieves order-optimal statistical error rates and convergence rates while being robust to a new proposed Markovian gradient corruption model.
\begin{itemize}[leftmargin=5mm, noitemsep, topsep = .5mm]
\item We propose a novel Markovian Byzantine agent model that models dynamically changing sets of Byzantine agents with no assumptions on the maximum fraction of Byzantine agents at a particular iteration.
\item We study two settings for stochastic optimization for RANGE, namely Sample Average Approximation (SAA) and Stochastic Approximation (SA). We prove that for both SAA and SA, when the parameters are tuned appropriately according to the spectral gap of the Markov chain, RANGE converges to a neighborhood of the optimal solution at a linear rate for strongly convex cost functions.
\item We prove that for smooth (possibly non-convex) cost functions, RANGE converges to a neighborhood of a stationary point at a rate of ${\cal O}(1/\sqrt{T})$, where $T$ is the number of iterations.
\item For the SAA setting, we show that RANGE achieves lower error rates in the Markovian Byzantine agent setup with expected $\alpha$ fraction of Byzantine agents than state-of-the-art algorithms in the setup with a bounded $\alpha$ fraction of Byzantine agents for all iterations.
\item We show that RANGE achieves lower statistical error rates in the SA setting than the SAA setting for sufficiently low corruption rates, i.e., the expected fraction of Byzantine agents. We provide explicit characterization of such bound.
\item We provide numerical evidence demonstrating efficacy and robustness of RANGE in the proposed setting.
\end{itemize}
RANGE is designed with three ingredients: (1) temporal gradient averaging, (2) robust aggregation, and (3) gradient normalization. The temporal gradient averaging step estimates the robust mean of each agent's historical gradient data over a finite window to compute a robustified gradient for each agent. Informally, the received gradients over a period of time contain a fraction of trustworthy information that can be extracted by the algorithm to perform faithful computations rather than applying potentially corrupt gradients directly. In case the robustified gradient produced by temporal averaging becomes contaminated by corruption, another layer of defense mechanism is implemented via robust aggregation of all the agents' robustified gradients. Lastly, normalization preserves only the directional information and thus prevents large updates that corrupt gradients might cause in case the temporal averaging and robust aggregation steps do not sufficiently eliminate corruptions.
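To make the three ingredients concrete, the sketch below implements a hypothetical RANGE-style update, using coordinate-wise medians as stand-in robust-mean estimators for both the temporal and the cross-agent aggregation steps; the actual estimators, window length, and stepsize rule used by RANGE are specified later in the paper, so this is purely illustrative:

```python
import statistics

def range_style_update(x, grad_history, lr, window=5):
    """One illustrative RANGE-style update (hypothetical helper, not the paper's spec):
    (1) per-agent temporal robust mean over a finite window,
    (2) robust aggregation across agents,
    (3) normalization of the aggregate before stepping.
    grad_history[i] is the list of d-dimensional gradients received from agent i."""
    d = len(x)
    # (1) temporal averaging: coordinate-wise median of each agent's recent gradients
    robustified = [[statistics.median(g[k] for g in hist[-window:]) for k in range(d)]
                   for hist in grad_history]
    # (2) robust aggregation: coordinate-wise median across agents
    agg = [statistics.median(g[k] for g in robustified) for k in range(d)]
    # (3) normalization: keep only the direction, bounding the size of every update
    norm = sum(v * v for v in agg) ** 0.5
    direction = [v / norm for v in agg] if norm > 0 else [0.0] * d
    return [xi - lr * di for xi, di in zip(x, direction)]

# Toy check: two honest agents, one currently-Byzantine agent sending garbage.
x = [1.0, -1.0]
history = [[[1.0, -1.0]] * 5, [[1.1, -0.9]] * 5, [[1e6, 1e6]] * 5]
x_new = range_style_update(x, history, lr=0.1)
step_norm = sum((a - b) ** 2 for a, b in zip(x, x_new)) ** 0.5
assert abs(step_norm - 0.1) < 1e-9           # normalization bounds the step to lr
assert x_new[0] < x[0] and x_new[1] > x[1]   # the Byzantine agent did not hijack the step
```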
\noindent\textbf{Related work:} Our work has connections to the literature on (i) normalized gradient method, (ii) gradient clipping, and (iii) delayed gradient descent.
\begin{itemize}[wide, labelindent=0pt,topsep=.5mm]
\item \emph{Normalized Gradient Method: } Normalized gradient method is a well-studied algorithm for optimization and is supported by theoretical convergence guarantees for convex~\cite{nesterov2004introductory}
and quasi-convex optimization~\cite{hazan2015beyond}. Using normalized updates is gaining popularity, especially for non-convex optimization~\cite{you2017large},
since for non-convex objectives, unlike the convex ones, the magnitude of the gradient provides less information about the value of the function, while the direction still indicates the direction of steepest descent.
An important benefit of this is the fast evasion of saddle points~\cite{levy2016power}.
Seeing the need for large batch sizes for variance reduction of stochastic gradients as a drawback of normalized updates, a recent work \cite{cutkosky2020momentum} proves that adding momentum removes the need for large batch sizes on non-convex objectives while matching the best-known convergence rates. In a preliminary conference report~\cite{turan2020robustness}, we investigated the robustness properties of the normalized subgradient method for solving deterministic optimization problems in a centralized fashion. In the current work, we expand \cite{turan2020robustness} into a distributed setup with a stochastic objective function, additionally study non-convex objectives both theoretically and numerically, and employ two additional layers of defense by means of robust mean estimation before applying normalization to improve our algorithm.
\item \emph{Gradient Clipping: }
As a similar method to normalization, gradient clipping is a common technique in optimization used for privacy \cite{abadi2016deep}.
Recent studies demonstrate that gradient clipping can be applied for robustness to model update poisoning attacks \cite{sun2019can} and label noise~\cite{menon2020can}.
However, similar to robust distributed optimization literature, due to the limitations on the amount of corruption and adversarial agents, their methods are inapplicable in our setting and can be outperformed, as we will show numerically in Section~\ref{sec:numerical}.
\item \emph{Delayed Gradient Descent: }Gradient averaging step of our method is in principle similar to a delayed gradient descent method \cite{nedic2001distributed}, since averaging is a linear combination of the past gradients. Motivated by applications to distributed optimization over networks, researchers have established convergence guarantees for deterministic \cite{gurbuzbalaban2017convergence}
and stochastic delayed gradient methods \cite{agarwal2012distributed}.
Given these strong theoretical results, we integrate the delayed gradient method into our algorithm via averaging and show that it improves robustness.
\end{itemize}
\noindent
\textbf{Paper Organization:}
The remainder of the paper is organized as follows. In Section~\ref{sec:problem}, we define the problem setting. In Section~\ref{sec:range}, we describe our algorithm called RANGE and discuss how it can solve the proposed problem. In Sections~\ref{sec:SAA} and \ref{sec:SA}, we present the convergence properties of RANGE for the SAA and SA settings, respectively. In Section~\ref{sec:special}, we discuss two special cases of RANGE, one without temporal averaging and one with independent random corruption. In Section~\ref{sec:numerical}, we provide numerical results for RANGE.
\noindent \textbf{Notations and conventions: } Unless otherwise specified, $\| \cdot \|$ denotes the standard Euclidean norm. For any $N \in \mathbb N$, $[N]$ denotes the finite set $\{1,...,N\}$. Given a vector $v$, if $\|v\|=0$, then we use the convention $v/\|v\|=0$. The ${\cal O}(\cdot)$ notation hides constants and logarithmic terms and includes only the dominant terms. Given a function $f(x,z)$, $\partial_k f(x,z)$ denotes the partial derivative of $f(x,z)$ with respect to the $k$'th coordinate of $x$.
\section{Problem Setup}\label{sec:problem}
In this section, we formally set up our problem and introduce key concepts and definitions that will be used in the paper. We are interested in the stochastic optimization problem
\begin{equation}\label{eq:mainproblem}
x^\star=\underset{x\in{\cal X}}{\arg\min}~ F(x)=\underset{x\in{\cal X}}{\arg\min}~\underset{z\sim {\cal D}}{\mathbb{E}}[f(x,z)],
\end{equation}
where $f(x,z)$ is a cost function of a parameter vector $x\in {\cal X}\subseteq \mathbb{R}^d$ associated with a data point $z\in {\cal Z}$ and the data points are sampled from some unknown distribution ${\cal D}$. To solve \eqref{eq:mainproblem}, we study two settings for stochastic optimization, namely Sample Average Approximation (SAA) \cite{kleywegt2002sample} and Stochastic Approximation (SA) \cite{wasan2004stochastic}, in a distributed setup with one central machine and $N$ agents that compute stochastic gradients at a point $x$
via $\nabla f(x, z)$ based on independent samples $z\sim {\cal D}$.
In iterative distributed first-order methods, given the parameter vector $x_t$ at iteration $t$, the central machine receives the feedback $\nabla F_{i,t}(x_t)$ from all the agents, aggregates by computing the average, and applies a descent step to get the updated parameter $x_{t+1}$. Here,
\begin{equation}
F_{i,t}(x_t)=\frac{1}{b}\sum_{j=1}^b f(x_t,z_{i,t}^{j})
\end{equation}
is the empirical risk function and $\{z_{i,t}^{j}\}_{j\in[b]}$ are the $b$ data points used for gradient computation at agent $i$ and iteration $t$. In SAA, each agent uses a fixed set of data samples to estimate the gradient at all iterations, i.e., $\{z_{i,\tau}^j\}_{j\in[b]}=\{z_{i,\tau'}^{j}\}_{j\in[b]}$ and $F_{i,\tau}(x)=F_{i,\tau'}(x)$ $\forall i\in[N],x\in{\cal X},\tau,\tau'\in{\mathbb N}_0$ \cite{kim2015guide}. In SA, the agents sample $b$ new data points from ${\cal D}$ at each iteration and therefore $F_{i,\tau}(x)$ and $F_{i,\tau'}(x)$ are independent random variables $\forall i\in [N]$, $x\in{\cal X}$, $\tau,\tau'\in{\mathbb N}_0$ such that $\tau\neq \tau'$ \cite{kim2015guide}.
Such methods, however, rely on the feedback received from each agent being trustworthy gradient information and might fail to converge when the feedback becomes corrupt, as one single corrupt feedback can have an arbitrarily large effect. Denote the set of agents communicating a corrupt gradient information, i.e., Byzantine agents, at iteration $t$ by ${\cal B}^t$, and the set of agents communicating a trustworthy gradient information, i.e., trustworthy agents, at iteration $t$ by ${\cal T}^t$. At each iteration $t$, the feedback is determined as:
\begin{equation}\label{eq:minibatchgradients}
g_{i,t}=\left\{\begin{array}{cl}
\nabla F_{i,t}(x_t) & \text{if }i\in {\cal T}^t, \\
\star & \text{if } i\in{\cal B}^t,
\end{array}\right.
\end{equation}
where the corrupt feedback $\star$ is arbitrary and is possibly chosen by an adversary, who may have full knowledge of the problem. We note that this model encompasses a large class of scenarios where the feedback can become corrupt (e.g., errors in communication or computation, corrupt data, adversarial manipulation) since we set no restrictions on $\star$.
Contrary to existing literature, we study dynamically changing sets of Byzantine agents ${\cal B}^t$ and trustworthy agents ${\cal T}^t$, where the transition of each agent from Byzantine/trustworthy state to trustworthy/Byzantine state happens probabilistically at each iteration. In particular, we define
\begin{align}
&p_b=\mathbb{P}(i\in{\cal B}^{t+1}|i\in{\cal T}^t),\quad\forall i\in[N],\forall t,\label{eq:pb}\\
&p_t=\mathbb{P}(i\in{\cal T}^{t+1}|i\in{\cal B}^t),\quad\forall i\in[N],\forall t,\label{eq:pt}
\end{align}
where $0<p_b<p_t<1$. Accordingly, each agent's state transition over time is governed by a two-state Markov chain with transition matrix
\begin{equation}\label{eq:transitionmatrix}
M=\begin{bmatrix}
1-p_b & p_b\\
p_t & 1-p_t
\end{bmatrix},
\end{equation}
and stationary distribution
\begin{equation}
\pi^\star=\left[\frac{p_t}{p_t+p_b}~\frac{p_b}{p_t+p_b}\right],
\end{equation}
where state $0$ corresponds to the trustworthy state and state $1$ corresponds to the Byzantine state. We note that the exact knowledge of the transition probabilities is not necessary. We can take $p_b$ as an upper bound on the trustworthy to Byzantine transition probability, and $p_t$ as a lower bound on the Byzantine to trustworthy transition probability.
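As a quick sanity check of the transition matrix \eqref{eq:transitionmatrix} and its stationary distribution, the snippet below (a minimal sketch with hypothetical values $p_b=0.1$, $p_t=0.3$ of our own choosing) verifies numerically that $\pi^\star$ is invariant under $M$.

```python
import numpy as np

# Hypothetical transition probabilities satisfying 0 < p_b < p_t < 1.
p_b, p_t = 0.1, 0.3

# Two-state chain: state 0 = trustworthy, state 1 = Byzantine.
M = np.array([[1.0 - p_b, p_b],
              [p_t, 1.0 - p_t]])

# Closed-form stationary distribution from the text.
pi_star = np.array([p_t, p_b]) / (p_b + p_t)

# Invariance check: pi_star M = pi_star.
assert np.allclose(pi_star @ M, pi_star)
```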
In the next section, we explain the first-order method we propose to obtain a near-optimal solution to~\eqref{eq:mainproblem} in the setting defined by \eqref{eq:minibatchgradients}-\eqref{eq:pt}. In addition to achieving order-optimal convergence rates, our goal is to converge to a neighborhood with order-optimal statistical error rates.
For completeness, we end this section with a couple of standard definitions from convex analysis regarding a differentiable function $f:{\mathbb R^d\rightarrow{\mathbb R}}$.
\begin{definition}\label{def:smooth}
A differentiable function $f$ is said to be \mbox{\textbf{$\boldsymbol{L}$-smooth}} if there exists $L>0$ such that
\begin{equation}
\|\nabla f(x_1)-\nabla f(x_2)\|\leq L \|x_1-x_2\|,
\end{equation}
for all $x_1,x_2\in \cal X$.
\end{definition}
\begin{definition}\label{def:strong}
A differentiable function $f$ is said to be \mbox{\textbf{$\boldsymbol{\mu}$-strongly convex}} if there exists $\mu>0$ such that
\begin{equation}
\langle \nabla f(x_1)-\nabla f(x_2),x_1-x_2 \rangle\geq \mu\|x_1-x_2\|^2,
\end{equation}
for all $x_1,x_2\in \cal X$.
\end{definition}
\section{Robust Averaging Normalized Gradient Method (RANGE)}\label{sec:range}
\begin{algorithm}[tb]
\caption{Robust Averaging Normalized Gradient Method (RANGE)}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Initialize $x_1\in {\cal X}$, step size $\gamma$, window size $m$, $m_0\in{\mathbb N}_0$, $T$, and $\alpha_1,\alpha_2<0.5$ s.t. $\alpha_1 m,\alpha_2 N \in \mathbb{N}_0$.
\FOR{$t=1$ {\bfseries to} $T+m-1+m_0$}
\STATE Broadcast $x_t$ to the agents.
\STATE Receive $g_{i,t}$, defined in~\eqref{eq:minibatchgradients}, for $i\in[N]$.
\IF{$t\leq m-1$}
\STATE Set $\hat{g}_{i,t}=g_{i,t}$.
\ELSE
\STATE Compute the robust mean $\hat{g}_{i,t}$ of $\{g_{i,t-\tau}\}_{\tau=0}^{m-1}$ using~\eqref{eq:median} with parameters $\alpha_1$ and $m$, for $i\in[N]$.\label{avgstep}
\ENDIF
\STATE Compute the robust mean $\hat{\hat{g}}_t$ of $\{\hat{g}_{i,t}\}_{i\in [N]}$ using~\eqref{eq:median} with parameters $\alpha_2$ and $N$.\label{aggregationstep}
\STATE Compute $x_{t+1}=\Pi_{\cal X}\left(x_t-\gamma \hat{\hat{g}}_t/\|\hat{\hat{g}}_t\|\right)$.\label{updatestep}
\ENDFOR
\end{algorithmic}
\label{alg:distriutedavgnormalizedgd}
\end{algorithm}
To solve Problem~\eqref{eq:mainproblem} in the Byzantine setting defined by \eqref{eq:minibatchgradients}-\eqref{eq:pt}, we propose an algorithm called Robust Averaging Normalized Gradient MEthod (RANGE), which is summarized in Algorithm~\ref{alg:distriutedavgnormalizedgd}. There are three main interacting ideas behind Algorithm~\ref{alg:distriutedavgnormalizedgd} to guarantee convergence and robustness: 1) temporal gradient averaging, 2) robust aggregation, and 3) gradient normalization. Temporal gradient averaging in Step~\ref{avgstep} of Algorithm~\ref{alg:distriutedavgnormalizedgd} aims to compute a robustified gradient for all agents by estimating
a robust mean of a window of past gradients. The intuition behind this is that, despite corruptions, the feedback received over a long period from every single agent contains a fraction of trustworthy information that the algorithm can extract to perform accurate computations. To defend against scenarios where the robustified gradient produced by the temporal averaging step is corrupted for some of the agents (e.g., if the window contains only corrupted gradients), in Step~\ref{aggregationstep} of Algorithm~\ref{alg:distriutedavgnormalizedgd} we implement a second layer of robust mean estimation when aggregating all the agents' robustified gradients in order to eliminate those corrupted gradients. Lastly, by gradient normalization in Step~\ref{updatestep} of Algorithm~\ref{alg:distriutedavgnormalizedgd}, we restrict the aggregate gradient to contain only directional information. This prevents arbitrarily large updates in case the temporal averaging and robust aggregation steps do not sufficiently eliminate the corruptions.
Let us provide a summary of Algorithm~\ref{alg:distriutedavgnormalizedgd}. At each iteration $t$, the central node receives the feedback $g_{i,t}$ according to~\eqref{eq:minibatchgradients} from all the agents. If $t\geq m$, it estimates a robustified gradient $\hat{g}_{i,t}$ for each agent $i\in[N]$ by performing robust temporal averaging over a window of gradients $\{g_{i,t-\tau}\}_{\tau=0}^{m-1}$ using the median-based mean estimator that will be described later. If $t<m$, it simply sets $\hat{g}_{i,t}=g_{i,t}$. Then, it aggregates the robustified gradients $\{\hat{g}_{i,t}\}_{i\in[N]}$ using the same median-based mean estimator to get the robust aggregate $\hat{\hat{g}}_t$ and takes a step of size $\gamma$ along $-\hat{\hat{g}}_t/\|\hat{\hat{g}}_t\|$. Finally, the algorithm projects the iterate back onto the decision set $\cal X$.
To get a good grasp of why RANGE works, let us discuss how the mechanics of each step assist the convergence of the algorithm, starting with the robust mean estimator.
\noindent\textbf{Robust mean estimator:}
Suppose that we have a set of $k$ vectors $\{v_i\in \mathbb{R}^d\}_{i=0}^{k-1}$ that may contain corrupted values, whose identities are not known. We wish to estimate the mean of the trustworthy vectors robustly by minimizing the impact of the corrupt gradients on the mean estimate, potentially by filtering the corrupt gradients out. We consider a simple median-based estimator applied to each coordinate $j=1,\dots,d$. First, define the coordinate-wise median as $\left[ { v}_{\sf med} \right]_j = {\sf med}\left( \{ [ v_{i}]_j \}_{i=0}^{k-1} \right)$,
where ${\sf med}(\cdot)$ denotes the scalar median. Then, our estimator is computed as the mean of the nearest \mbox{$(1-\alpha)k$} neighbors of $\left[ { v}_{\sf med} \right]_j$, where $\alpha$ is a chosen threshold parameter such that $\alpha k\in\mathbb{N}_0$.
We propose the estimator
\begin{equation}\label{eq:median}
[\widehat{v} ]_j = \frac{1}{(1-\alpha)k} \sum_{i \in {\cal N}_j} [{ v}_i ]_j,
\end{equation}
where $ {\cal N}_j = \{ i \in \{0,1,\dots,k-1\}: \big| \big[ { v}_{i} - { v}_{\sf med} \big]_j \big| \leq r_j \}$,
such that $r_j$ is chosen to satisfy $|{\cal N}_j| = (1-\alpha)k$.
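For illustration, the estimator \eqref{eq:median} admits a direct implementation. The sketch below is a minimal NumPy version (the function name \texttt{robust\_mean} is our own): for each coordinate it keeps the $(1-\alpha)k$ entries closest to the coordinate-wise median and averages them.

```python
import numpy as np

def robust_mean(V, alpha):
    """Median-based robust mean estimator of eq. (median).

    V is a (k, d) array of k vectors; alpha*k is assumed to be an integer.
    For each coordinate j, averages the (1-alpha)k entries closest to the
    coordinate-wise median, i.e., the set N_j in the text.
    """
    k, d = V.shape
    keep = k - round(alpha * k)                # |N_j| = (1 - alpha) k
    v_med = np.median(V, axis=0)               # coordinate-wise median
    dist = np.abs(V - v_med)                   # |[v_i - v_med]_j|
    idx = np.argsort(dist, axis=0)[:keep, :]   # nearest neighbors per coordinate
    return np.take_along_axis(V, idx, axis=0).mean(axis=0)
```

With three honest vectors near $(1,1)$ and one gross outlier, `robust_mean(V, 0.25)` stays close to the honest mean, whereas the plain mean is dragged arbitrarily far away.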
The outcome of this estimator depends on the threshold parameter $\alpha$. If $\alpha$ is chosen such that the number of trustworthy vectors is less than $(1-\alpha)k$, then ${\cal N}_j$ will contain arbitrarily corrupted gradients and the estimate will be arbitrarily corrupted. However, if $\alpha$ is chosen such that the number of trustworthy vectors is at least $(1-\alpha)k$, we have the following theoretical guarantees for the performance of this estimator:
\begin{proposition}\cite[Proposition 2]{turan2021resilient}\label{prop:median}
Let ${\cal H}$ be the set of trustworthy vectors and $|{\cal H}|\geq (1-\alpha)k$. Let $\overline{v}_{\cal H}$ be the mean of the trustworthy vectors, and suppose that $\max_{i \in {\cal H}} \| v_{i} - \overline{ v}_{\cal H} \|_\infty \leq r$.
Then, for any $\alpha \in [0, {1}/{2})$, it holds that:
\begin{equation}
\label{eq:median_bound}
\|\widehat{v} - \overline{v}_{\cal H} \| \leq \frac{2\alpha}{1-\alpha} \left( 1 + \sqrt{\frac{(1-\alpha)^2}{1-2\alpha}} \right) r \sqrt{d}.
\end{equation}
\end{proposition}
We note that the right-hand side of~\eqref{eq:median_bound} can be approximated as ${\cal O}(\alpha r\sqrt{d})$ for small $\alpha$.
\noindent \textbf{Temporal averaging:}
Following the mechanics of the robust mean estimator, two scenarios can occur every time robust temporal averaging is applied in Step~\ref{avgstep} of Algorithm~\ref{alg:distriutedavgnormalizedgd} to a window of the $m$ latest gradients from an agent: (i) there are fewer than $(1-\alpha_1)m$ trustworthy gradients in the window of size $m$; (ii) there are at least $(1-\alpha_1)m$ trustworthy gradients in the window of size $m$. Under scenario (i), ${\cal N}_j$ contains arbitrarily corrupted gradients, and therefore we have to assume that the estimated mean is arbitrarily corrupted. We say that the temporal averaging \emph{fails} in this scenario. Under scenario (ii), the estimated mean is close to the true mean of the trustworthy gradients, and the error is bounded by \eqref{eq:median_bound}. Therefore, if scenario (ii) occurs at an iteration, instead of using a possibly corrupt gradient that can be adversarial, the robust temporal averaging step computes a robustified gradient close to the mean of past trustworthy gradients. We say that the temporal averaging \emph{succeeds} in this scenario. Accordingly, we can view scenario (ii) as a perturbed version of the delayed gradient method, whose convergence properties have been well studied \cite{gurbuzbalaban2017convergence}.
Note that both scenarios (i) and (ii) happen with some probability determined by $p_t$, $p_b$, $\alpha_1$ and $m$.
\noindent \textbf{Robust aggregation:}
Similar to the temporal averaging step, two scenarios can occur every time robust aggregation is applied in Step~\ref{aggregationstep} of Algorithm~\ref{alg:distriutedavgnormalizedgd} to $N$ robustified gradients: (I) there are fewer than $(1-\alpha_2)N$ agents for which the temporal averaging step succeeds; (II) there are at least $(1-\alpha_2)N$ agents for which the temporal averaging step succeeds. By similar arguments as above, scenario (I) results in an arbitrarily corrupted estimate, whereas scenario (II) results in an estimate that is close to the true mean of the successfully robustified gradients, with the error bounded by \eqref{eq:median_bound}. We note that both scenarios (I) and (II) happen with some probability determined by $\alpha_2$, $N$, and the probabilities of scenarios (i) and (ii).
\noindent \textbf{Gradient normalization:}
The main idea behind the normalization step is to prevent the large updates that corrupt gradients might cause. Since in scenario (I) the temporal averaging and robust aggregation steps fail to produce an aggregate gradient estimate for which the theoretical error bounds hold, we have to assume that the aggregate gradient estimate becomes arbitrarily corrupted. When this happens and corruptions get past the two layers of defense, normalization limits the amount of damage caused by the corrupted aggregate gradient estimate.
In the next section, we state the convergence guarantees of RANGE for strongly convex and smooth (possibly non-convex) cost functions for the SAA setting.
\section{Convergence Properties of RANGE \\ for the SAA Setting} \label{sec:SAA}
Before presenting the convergence results, we need to state some technical assumptions.
\begin{assumption}\label{ass:subgamma}
For all $k\in [d]$ and $x\in{\cal X}$, define the random variable $f_k(x,z)\vcentcolon=\partial_k f(x,z)-\partial_k F(x)$. We assume that for all $k\in [d]$ and $x\in{\cal X}$, $f_k(x,z)$ is a sub-gamma random variable with variance factor $\sigma$ and scale parameter $a$ for some $a\geq 0$, i.e.:
\begin{equation}
\ \log{\underset{z\sim{\cal D}}{\mathbb{E}}[e^{\lambda f_k(x,z)}]}\leq \frac{\lambda^2\sigma^2}{2(1-a|\lambda|)},~\forall x, k, |\lambda|<\frac{1}{a}.
\end{equation}
\end{assumption}
Assumption~\ref{ass:subgamma} imposes a bound on the moments of the gradient noise with respect to the data distribution. Note that sub-Gaussian/sub-exponential random variables satisfy Assumption~\ref{ass:subgamma} with $a=0$/$a=\sigma$, respectively. Therefore, Assumption~\ref{ass:subgamma} is less restrictive than the sub-Gaussian/sub-exponential assumptions in the literature \cite{yin2018byzantine,yin2019defending}.
\begin{assumption}\label{ass:smooth}
The function $f(\cdot,z)$ is $L$-smooth, $\forall z\in {\cal Z}$.
\end{assumption}
In addition, when $F(\cdot)$ is strongly convex, we have the following assumption on ${\cal X}$ and the minimizer of $F(\cdot)$:
\begin{assumption}\label{ass:minimizergradient}
The parameter set ${\cal X}$ is assumed to be convex and compact with diameter $R$. Furthermore, $F(x)$ has a unique minimizer $x^\star \in {\cal X}$ satisfying $\nabla F(x^\star)=0$.
\end{assumption}
Together with the convexity of $F$, the above assumption implies that the minimizer of $F(\cdot)$ in ${\cal X}$ is also the minimizer of $F(\cdot)$ in ${\mathbb R}^d$. We note that the assumption can be satisfied by selecting ${\cal X}$ as the Euclidean norm ball of a sufficiently large radius $R$.
Recall from Proposition~\ref{prop:median} that in order for the error bound \eqref{eq:median_bound} to hold in Algorithm~\ref{alg:distriutedavgnormalizedgd} Step~\ref{avgstep}, the robust mean estimator \eqref{eq:median} requires that at least $(1-\alpha_1)m$ vectors in $\{g_{i,t-\tau}\}_{\tau=0}^{m-1}$ are trustworthy gradients (scenario (ii) in Sec.~\ref{sec:range}). Similarly, in order for the error bound \eqref{eq:median_bound} to hold in Algorithm~\ref{alg:distriutedavgnormalizedgd} Step~\ref{aggregationstep}, the robust mean estimator \eqref{eq:median} requires that at least $(1-\alpha_2)N$ vectors in $\{\hat{g}_{i,t}\}_{i\in[N]}$ are successfully robustified (scenario (II) in Sec.~\ref{sec:range}). In order to mathematically formalize these scenarios, we define the following random variables:
\begin{itemize}
\item $W_{i,t}=1$ if $i\in {\cal B}^t$, $0$ otherwise,
\item $Y_{i,t}=1$ if for agent $i$, $\sum_{\tau\in[t-m+1,t]}W_{i,\tau}>\alpha_1 m$ (scenario (i)), $Y_{i,t}=0$ otherwise (scenario (ii)),
\item $Z_t=1$ if $\sum_{i\in[N]}Y_{i,t}>\alpha_2 N$ (scenario (I)), $Z_t=0$ otherwise (scenario (II)).
\end{itemize}
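These indicator variables are straightforward to simulate, which is useful for sanity-checking candidate parameters before invoking any theory. The sketch below (the function name and the parameter values in the note are our own) simulates $W_{i,t}$ under the Markov chain \eqref{eq:transitionmatrix} and returns the empirical frequency of $Z_t=1$.

```python
import numpy as np

def simulate_Z(T, N, m, alpha1, alpha2, p_b, p_t, seed=0):
    """Monte Carlo estimate of the frequency of Z_t = 1.

    Simulates the per-agent states W_{i,t} under the two-state Markov chain
    (all agents start trustworthy), forms Y_{i,t} and Z_t as defined above,
    and returns the empirical frequency of Z_t = 1 over t >= m - 1.
    """
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)            # 0 = trustworthy, 1 = Byzantine
    W = np.zeros((T, N), dtype=int)
    for t in range(T):
        r = rng.random(N)
        to_byz = (state == 0) & (r < p_b)     # trustworthy -> Byzantine
        to_trust = (state == 1) & (r < p_t)   # Byzantine -> trustworthy
        state = np.where(to_byz, 1, np.where(to_trust, 0, state))
        W[t] = state
    Z = []
    for t in range(m - 1, T):
        Y = W[t - m + 1:t + 1].sum(axis=0) > alpha1 * m  # scenario (i) per agent
        Z.append(int(Y.sum() > alpha2 * N))              # scenario (I)
    return float(np.mean(Z))
```

For instance, with hypothetical values $p_b=0.05$, $p_t=0.45$, $N=10$, $m=20$, $\alpha_1=0.45$, and $\alpha_2=0.2$, the event $Z_t=1$ is empirically rare, consistent with the bound of Lemma~\ref{lem:practicalbound} below.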
Using the above definitions, when $Y_{i,t}=1$, the temporal averaging step fails to produce a robustified gradient for agent $i$. Therefore, when $Z_t=1$, the algorithm fails to produce a robustified update direction, as the robust aggregation step becomes contaminated. The challenge in the convergence analysis of Algorithm~\ref{alg:distriutedavgnormalizedgd} arises from studying both scenarios $Z_t=0$ and $Z_t=1$, along with their probabilities. However, due to the Markovian property of $\{W_{i,t}\}_{\forall t}$, $Z_t$ is not independent of the past, which presents an obstacle in the analysis. To overcome this, we state the next lemma, which establishes a uniform bound on the probability that $Z_t=1$ given the system state at an earlier time instant:
\begin{lemma}\label{lem:practicalbound}
Let ${\cal S}_{t}=\{x_{t},\{\pi_{t}^i\}_{i\in[N]}\}$ denote the system state at iteration $t$, where $\pi_{t}^i$ is the distribution of the state of agent $i$ at iteration $t$. Given the algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ with $\alpha_1>p_b/(p_b+p_t)$, for all $m_0\in {\mathbb N}_0$ and $t\geq m+m_0$, there exists a uniform bound on $\mathbb{P}(Z_t=1|{\cal S}_{t-m+1-m_0})=\mathbb{E}[Z_t|{\cal S}_{t-m+1-m_0}]$ independent of ${\cal S}_{t-m+1-m_0}$ such that:
\begin{equation}
\mathbb{E}[Z_t|{\cal S}_{t-m+1-m_0}]\leq P_Z(m_0,m,N,\alpha_1,\alpha_2,M).
\end{equation}
Furthermore, let $P_Z^m(m_0)\vcentcolon= P_Z(m_0,m,N,\alpha_1,\alpha_2,M)$. Then, the following holds for all $m_0\in{\mathbb N}_0$:
\begin{align}\label{eq:PZpractical}
P_Z^m(m_0){\leq}\hspace{-.4cm}\sum_{k=\alpha_2 N+1}^N\hspace{-.2cm} \binom{N}{k}(P_Y^m(m_0))^k(1{-}P_Y^m(m_0))^{(N{-}k)},
\end{align}
where
\begin{equation}\label{eq:PYpractical}
P_Y^m(m_0){=}K(m_0) \exp{\left(\frac{{-}(p_b{+}p_t)m(\alpha_1{-}\frac{p_b}{p_b+p_t})^2}{12}{+}\frac{p_b{+}p_t}{5}\right)},
\end{equation}
and
\begin{equation}\label{eq:Kpractical}
K(m_0)=\sqrt{1+(1-p_b-p_t)^{2m_0}{p_t}/{p_b}}.
\end{equation}
\end{lemma}
The proof of Lemma~\ref{lem:practicalbound} can be found in Appendix~\ref{app:practicalbound}. Given a non-negative integer $m_0\in{\mathbb N}_0$, Lemma~\ref{lem:practicalbound} provides an upper bound on $\mathbb{E}[Z_t|{\cal S}_{t-m+1-m_0}]$ that is independent of the system state at time $t-m+1-m_0$ and depends only on the algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ and $m_0$. Although Lemma~\ref{lem:practicalbound} provides a practical closed-form bound, it is derived using a Chernoff-type bound for Markov chains \cite{lezaud1998chernoff}; it facilitates exposition of the method, but it is not tight. In Appendix~\ref{app:tighterbound}, we provide a tighter bound on $P_Z^m(m_0)$. Note that by Hoeffding's inequality, we have $P_Z^m(m_0) \leq e^{-2(\alpha_2-P_Y^m(m_0))^2 N}$.
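The bounds \eqref{eq:PZpractical}--\eqref{eq:Kpractical} are simple to evaluate numerically; the sketch below (function names are our own) can be used to check conditions of the form $P_Z^m(m_0)<c$ for candidate parameters.

```python
import math

def P_Y(m, m0, alpha1, p_b, p_t):
    # Eqs. (PYpractical)-(Kpractical): Chernoff-type bound on the probability
    # that temporal averaging fails for a single agent.
    s = p_b + p_t
    K = math.sqrt(1.0 + (1.0 - s) ** (2 * m0) * p_t / p_b)
    return K * math.exp(-s * m * (alpha1 - p_b / s) ** 2 / 12.0 + s / 5.0)

def P_Z(m, m0, N, alpha1, alpha2, p_b, p_t):
    # Eq. (PZpractical): binomial tail over the agents whose temporal
    # averaging failed; the Chernoff bound may exceed 1 for small m, so cap it.
    py = min(P_Y(m, m0, alpha1, p_b, p_t), 1.0)
    k0 = round(alpha2 * N)                    # alpha_2 N is assumed integral
    return sum(math.comb(N, k) * py ** k * (1.0 - py) ** (N - k)
               for k in range(k0 + 1, N + 1))
```

For instance, with hypothetical values $p_b=0.05$, $p_t=0.45$, $\alpha_1=0.45$, $\alpha_2=0.2$, $N=10$, and $m_0=5$, the bound drops by several orders of magnitude as $m$ grows from $100$ to $1000$, illustrating that such conditions can always be met by enlarging the window, at the cost of a large $m$, since this Chernoff-type bound is loose.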
\subsection{Strongly Convex and Smooth Functions}
We are now ready to present the main technical result on convergence guarantees of RANGE for a strongly convex cost function $F(\cdot)$. For the following result, we do not require strong convexity of individual cost functions $f(\cdot,z)$.
\begin{theorem}\label{thm:stronglycvx}
Let $F(\cdot)$ be $\mu$-strongly convex and Assumptions~\ref{ass:subgamma},~\ref{ass:smooth}, and~\ref{ass:minimizergradient} hold. Define the condition number as $\kappa\vcentcolon= L/\mu$. Let $m_0\in {{\mathbb N}_0}$ be a non-negative integer. If the algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ satisfy
\begin{equation}\label{eq:stronglycvxcondition}
P_Z^m(m_0)< \frac{1}{1+\kappa},
\end{equation}
then for any $T\geq 1$, the iterates produced by Algorithm~\ref{alg:distriutedavgnormalizedgd} in the SAA setting, with
\begin{equation}
\gamma\leq\frac{4\sigma}{\overline{C}(m_0)\mu\sqrt{(1-\alpha_2)Nb}},
\end{equation}
have the following property:
\begin{equation}\label{eq:stronglycvxresult}
\begin{split}
\mathbb{E}&[\|x_{T+m+m_0}-x^\star\|^2]\leq\\
&\left(\|x_{1}-x^\star\|+\gamma(m+m_0-1)\right)^2(1-c_0(m_0)\gamma)^T\\
&+ {\cal O}\left(\frac{1}{\sqrt{N b}}+\frac{C_{\alpha_2}}{\sqrt{b}}\right),
\end{split}
\end{equation}
where
\begin{align}
\label{eq:c0}c_0(m_0)=&\frac{2}{\kappa R}(1-P_Z^m(m_0)(1+\kappa)),\\
\begin{split}\label{eq:cbar}
\overline{C}(m_0)=&1+4P_Z^m(m_0)(1+1/\kappa)(m-1+m_0)\\
&+4\kappa(m-1)(1+C_{\alpha_1}+2C_{\alpha_2}(C_{\alpha_1}+1)),
\end{split}\\
\label{eq:calpha}C_{\alpha_i}=&\frac{2\alpha_i}{1-\alpha_i} \left( 1 + \sqrt{\frac{(1-\alpha_i)^2}{1-2\alpha_i}}\right)\sqrt{d},~i=1,2.
\end{align}
\end{theorem}
The proof of Theorem~\ref{thm:stronglycvx} and the explicit constants of~\eqref{eq:stronglycvxresult} can be found in Appendix~\ref{app:stronglycvx}. According to Theorem~\ref{thm:stronglycvx}, RANGE provides convergence to a neighborhood of the optimal solution at a linear rate as long as~\eqref{eq:stronglycvxcondition} is satisfied, where the neighborhood of convergence is
\begin{equation}\label{eq:strongcvxneighborhood}
{\cal O}\left(\frac{1}{\sqrt{N b}}+\frac{C_{\alpha_2}}{\sqrt{b}}\right).
\end{equation}
\begin{remark}
In Appendix~\ref{app:stronglycvx}, we show that the only dependence of the neighborhood of convergence in~\eqref{eq:strongcvxneighborhood} on $m_0$ is through $P_Z^m(m_0)$ given by~\eqref{eq:PZpractical}. By taking $m_0\gg 1$, we minimize~\eqref{eq:Kpractical} to get $K(m_0)\approx 1$, which in turn minimizes $P_Y^m(m_0)$ and $P_Z^m(m_0)$. Hence, the bound becomes tight with respect to $m_0$ as $m_0\rightarrow\infty$.
\end{remark}
\noindent\textbf{Impact of temporal averaging:}
The temporal averaging step helps reduce the neighborhood of convergence by reducing the effective fraction of Byzantine agents at each iteration. The convergence neighborhood given by \eqref{eq:strongcvxneighborhood} consists of two terms: the first due to the variance of the stochastic gradients and the second due to the Byzantine agents. In \cite{yin2018byzantine}, it is shown that in the setting with a bounded $\alpha$ fraction of Byzantine agents, no algorithm can achieve an error lower than
\begin{equation}\label{eq:lowerbound}
\tilde{{\Omega}}\left(\frac{1}{\sqrt{Nb}}+\frac{\alpha}{\sqrt{b}}\right).
\end{equation}
In the stationary distribution of the Markov chain, the probability that an agent is Byzantine is $p_b/(p_b+p_t)$, and hence the expected fraction of Byzantine agents at an iteration is also $p_b/(p_b+p_t)$. Therefore, it is reasonable to identify $\alpha$ in~\eqref{eq:lowerbound} with $p_b/(p_b+p_t)$. For Lemma~\ref{lem:practicalbound} to hold, we need $\alpha_1>{p_b}/({p_b+p_t})$, and thus $\alpha_1$ is of the order of $\alpha$ in~\eqref{eq:lowerbound}. On the other hand, our error bound in \eqref{eq:strongcvxneighborhood} is a function of $C_{\alpha_2}$, where $C_{\alpha_2}={\cal O}(\alpha_2)$ for small $\alpha_2$. Because RANGE aims to eliminate an $\alpha_2$ fraction of the agents' robustified gradients via robust aggregation, $\alpha_2$ can be viewed as the effective fraction of Byzantine agents, and hence our bound is consistent with \eqref{eq:lowerbound}. Interestingly, we can set $\alpha_2$ arbitrarily small. To see this, note that convergence requires \eqref{eq:stronglycvxcondition}, and by~\eqref{eq:PZpractical} we can select $\alpha_2$ arbitrarily small provided that $P_Y^m(m_0)$ is small. Since $P_Y^m(m_0)$, given by \eqref{eq:PYpractical}, decreases in $m$ for fixed $\alpha_1$ and $m_0$, we can always set $m$ large enough that $P_Y^m(m_0)$ is arbitrarily small for all $m_0\in{\mathbb N}_0$, and hence select $\alpha_2$ arbitrarily small while satisfying \eqref{eq:stronglycvxcondition}. As a result, by employing the robust temporal averaging step before robust aggregation, which is beneficial only when the agents' states change over time, we reduce the effective fraction of Byzantine agents from $p_b/(p_b+p_t)$ to $\alpha_2$. This shows that the lower bound~\eqref{eq:lowerbound} does not hold in the proposed Markovian setting, as a result of temporal averaging.
Observe that the choice of $m$, $\alpha_1$, and $\alpha_2$ plays an important role, as it determines whether~\eqref{eq:stronglycvxcondition} holds. Next, we discuss how to pick $m$, $\alpha_1$, and $\alpha_2$ in practice.
\noindent \textbf{Choices of $m$, $\alpha_1$, and $\alpha_2$:}
For a given transition matrix $M$, there always exists a set of parameters $m$, $\alpha_1$, and $\alpha_2$ such that \eqref{eq:stronglycvxcondition} is satisfied. Finding a principled way to select the optimal parameters is currently an open question. Instead, we next derive an implementable closed-form expression for the minimum window size as a function of $\alpha_1$, $\alpha_2$, $p_t$, and $p_b$. Using Hoeffding's inequality on \eqref{eq:PZpractical}, we rewrite \eqref{eq:stronglycvxcondition} as:
\begin{equation}\label{eq:hoeffdingpz}
P_Z^m(m_0)\leq \exp (-2(\alpha_2-P_Y^m(m_0))^2N)<\frac{1}{1+\kappa}.
\end{equation}
This gives us the condition on $\alpha_2$\footnote{\label{ft:alpha2}This is the condition on $\alpha_2$ for the Hoeffding bound \eqref{eq:hoeffdingpz} to hold, rather than the exact inequality in \eqref{eq:PZpractical}. Consequently, it results in an additive $\sqrt{\log(1+\kappa)/(2N)}$ term that is independent of $P_Y^m(m_0)$. However, this does not contradict our statement that we can set $\alpha_2$ arbitrarily small by reducing $P_Y^m(m_0)$ with a large window $m$, since that statement is based on the exact form in \eqref{eq:PZpractical} rather than the Hoeffding bound \eqref{eq:hoeffdingpz}.}:
\begin{equation}
\alpha_2>P_Y^m(m_0)+\sqrt{\frac{\log(1+\kappa)}{2N}}.
\end{equation}
Next, observe from \eqref{eq:PYpractical} that $P_Y^m(m_0)\leq P_Y^m(0)$, and therefore the above condition is met if
\begin{equation}
\alpha_2> P_Y^m(0)+\sqrt{\frac{\log(1+\kappa)}{2N}}.
\end{equation}
We plug \eqref{eq:PYpractical} with $m_0=0$ into the above inequality and rearrange to get the minimum window size that is sufficient for convergence as a function of $\alpha_1$, $\alpha_2$, $p_t$, and $p_b$:
\begin{corollary}\label{cor:minwindow}
If $\alpha_2>\sqrt{\log(1+\kappa)/(2N)}$, $\alpha_1>p_b/(p_b+p_t)$, and
\begin{equation}\label{eq:minwindow}
m>\frac{12\log\left(\frac{(p_b+p_t)e^{\frac{p_t+p_b}{5}}}{p_b\left(\alpha_2-\sqrt{{\log(1+\kappa)}/{(2N)}}\right)}\right)}{(p_t+p_b)(\alpha_1-\frac{p_b}{p_t+p_b})^2},
\end{equation}
then \eqref{eq:stronglycvxcondition} holds for all $m_0\in{\mathbb N}_0$.
\end{corollary}
Corollary~\ref{cor:minwindow} is convenient in practice for selecting $\alpha_1$, $\alpha_2$, and $m$. Given $p_b$, $p_t$, and $\kappa$, one picks $\alpha_1$ such that $p_b/(p_b+p_t)<\alpha_1<0.5$ and $\alpha_2$ such that $\sqrt{\log(1+\kappa)/(2N)}<\alpha_2<0.5^{\ref{ft:alpha2}}$ and $\alpha_2 N\in{\mathbb N}_0$. Then, the window size $m$ is picked such that $\alpha_1 m \in\mathbb{N}_0$ and~\eqref{eq:minwindow} holds. With these parameter choices, \eqref{eq:stronglycvxcondition} is satisfied for all $m_0\in{\mathbb N}_0$. Note that the $(p_t+p_b)$ term in the denominator of~\eqref{eq:minwindow} corresponds to the spectral gap of the Markov chain. Therefore, a smaller spectral gap, which implies larger relaxation and mixing times~\cite{levin2017markov}, results in a larger minimum window size. Intuitively, we select the window size at the order of the mixing time so that the Markov chain gets close to its stationary distribution and the Byzantine agents transition into the trustworthy state. This allows the temporal averaging step to successfully produce a robustified gradient by extracting the trustworthy information.
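Corollary~\ref{cor:minwindow} translates directly into code. A minimal sketch (the function name is our own) that also illustrates the spectral-gap discussion above:

```python
import math

def min_window(alpha1, alpha2, p_b, p_t, kappa, N):
    # Eq. (minwindow): a window size strictly larger than this value is
    # sufficient for eq. (stronglycvxcondition) to hold for all m_0 >= 0.
    s = p_b + p_t                                  # spectral gap of the chain
    slack = alpha2 - math.sqrt(math.log(1.0 + kappa) / (2.0 * N))
    assert slack > 0.0 and alpha1 > p_b / s, "corollary's parameter conditions"
    num = 12.0 * math.log(s * math.exp(s / 5.0) / (p_b * slack))
    return num / (s * (alpha1 - p_b / s) ** 2)
```

For example, halving $p_b$ and $p_t$ while keeping their ratio fixed halves the spectral gap and (roughly) doubles the minimum window size, consistent with the mixing-time intuition above.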
\subsection{Smooth (Possibly Non-Convex) Functions }
Next, we study the convergence of RANGE for smooth (possibly non-convex) cost functions. For this problem class, we need the following assumption on ${\cal X}$:
\begin{assumption}\label{ass:noncvxball}
Problem~\eqref{eq:mainproblem} is unconstrained, i.e., ${\cal X}={\mathbb R}^d$.
\end{assumption}
The next theorem states the convergence guarantees of RANGE for smooth $F(\cdot)$:
\begin{theorem}\label{thm:noncvx}
Let Assumptions \ref{ass:subgamma}, \ref{ass:smooth}, and \ref{ass:noncvxball} hold. Choose step-size $\gamma={\gamma_0}/{\sqrt{T}}$ with $\gamma_0>0$. Let $m_0\in {{\mathbb N}_0}$ be a non-negative integer. If the algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ satisfy
\begin{equation}\label{eq:noncvxcondition}
P_Z^m(m_0) < 1/2,
\end{equation}
then for any $T \geq 1$, the iterates $\{ x_t \}_{t=m+m_0}^{T+m-1+m_0}$ produced by Algorithm~\ref{alg:distriutedavgnormalizedgd} in the SAA setting satisfy
\begin{equation}\label{eq:thmnoncvxfinalresult}
\begin{split}
&\frac{1}{T}\sum_{t=m+m_0}^{T+m-1+m_0}\mathbb{E}[\|\nabla F(x_t)\|]\\
&\leq\frac{F(x_1)-F(x^\star)}{\sqrt{T}\gamma_0(1-2P_Z^m(m_0))}+\frac{\overline{C}(m_0)\gamma_0}{\sqrt{T}}+{\cal O}\left(\frac{1}{T}\right)\\
&+{\cal O}\left(\frac{1}{\sqrt{N b}}+\frac{C_{\alpha_2}}{\sqrt{b}}\right),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}\label{eq:cbarnoncvx}
&\overline{C}(m_0)=L\Big(1/2+4(m-1+m_0)P_Z^m(m_0)\\
&+2(m-1)(1+C_{\alpha_1}+2C_{\alpha_2}(C_{\alpha_1}+1))\Big),
\end{split}
\end{equation}
and $C_{\alpha_i}$ for $i=1,2$, are given by \eqref{eq:calpha}.
\end{theorem}
Proof of Theorem~\ref{thm:noncvx} and the full expression of~\eqref{eq:thmnoncvxfinalresult} can be found in Appendix~\ref{app:noncvx}. According to Theorem~\ref{thm:noncvx}, there exists a point $\tilde{x}\in\{x_{m+m_0},\dots,x_{T+m-1+m_0}\}$ produced by RANGE such that
\begin{equation}\label{eq:noncvxneighborhood}
\begin{split}
\mathbb{E}\|\nabla F(\tilde{x})\|\leq&{\cal O}\left(\frac{1}{\sqrt{N b}}+\frac{C_{\alpha_2}}{\sqrt{b}}+\frac{1}{\sqrt{T}}\right).
\end{split}
\end{equation}
Note that when $T\rightarrow\infty$, the right hand side of the above inequality is the same as \eqref{eq:strongcvxneighborhood}.
In the next section, we state the convergence guarantees of RANGE for strongly convex and smooth (possibly non-convex) cost functions for the SA setting.
\section{Convergence Properties of RANGE \\
for the SA Setting}\label{sec:SA}
Theorems~\ref{thm:stronglycvx} and \ref{thm:noncvx} state convergence guarantees of RANGE for the SAA setting. The next theorem states a similar convergence result for the SA setting for strongly convex cost functions.
\begin{theorem}\label{thm:stronglycvxiid}
Let $F(\cdot)$ be $\mu$-strongly convex and Assumptions~\ref{ass:subgamma}, \ref{ass:smooth}, and \ref{ass:minimizergradient} hold. Define the condition number as $\kappa\vcentcolon= L/\mu$. Let $m_0\in {{\mathbb N}_0}$ be a non-negative integer. If the algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ satisfy
\begin{equation}\label{eq:stronglycvxiidcondition}
P_Z^m(m_0)< \frac{1}{1+\kappa},
\end{equation}
then for any $T\geq 1$, the iterates produced by Algorithm~\ref{alg:distriutedavgnormalizedgd} in the SA setting, with
\begin{equation}
\gamma\leq\frac{4\sigma}{\overline{C}(m_0)\mu\sqrt{(1-\alpha_2)N(1-\alpha_1)mb}},
\end{equation}
have the following property:
\begin{equation}\label{eq:stronglycvxresultiiddata}
\begin{split}
\mathbb{E}&[\|x_{T+m+m_0}-x^\star\|^2]\leq\\
&\left(\|x_{1}-x^\star\|+\gamma(m+m_0-1)\right)^2(1-c_0(m_0)\gamma)^T\\
&+{\cal O}\left(\frac{1}{\sqrt{(1{-}\alpha_1)mNb}}{+}\frac{C_{\alpha_2}{+}C_{\alpha_1}(1{+}C_{\alpha_2})}{\sqrt{b}}\right),
\end{split}
\end{equation}
where $c_0(m_0)$, $\overline{C}(m_0)$ and $C_{\alpha_i}$ for $i=1,2$, are given by \eqref{eq:c0}, \eqref{eq:cbar}, and \eqref{eq:calpha}.
\end{theorem}
Proof of Theorem~\ref{thm:stronglycvxiid} and the full expression of \eqref{eq:stronglycvxresultiiddata} can be found in Appendix~\ref{app:stronglycvxiid}. According to Theorem~\ref{thm:stronglycvxiid}, RANGE provides convergence to a neighborhood of the optimal solution at a linear rate as long as \eqref{eq:stronglycvxiidcondition} is satisfied, where the neighborhood of convergence is
\begin{equation}\label{eq:stronglycvxerrororderiid}
{\cal O}\left(\frac{1}{\sqrt{(1{-}\alpha_1)mNb}}{+}\frac{C_{\alpha_2}{+}C_{\alpha_1}(1{+}C_{\alpha_2})}{\sqrt{b}}\right).
\end{equation}
Comparing the above result to \eqref{eq:strongcvxneighborhood}, there are two impacts of using the SA setting instead of the SAA setting:
\begin{enumerate}[wide, labelindent=0pt,topsep=.5mm]
\item The error due to variance of the stochastic gradients reduces by a factor of ${\cal O}(\sqrt{(1-\alpha_1)m})$,
\item The error due to Byzantine agents increases by a factor of ${\cal O}(1+C_{\alpha_1}+C_{\alpha_1}/C_{\alpha_2})$.
\end{enumerate}
When agents use new samples at each iteration in the SA setting, temporal gradient averaging results in a variance reduction, since it estimates the mean of $(1-\alpha_1)m$ independent minibatch gradients. However, this comes at the cost of higher error caused by the Byzantine agents.
Given these two counteracting impacts of the SA setting on the error, the order of error in \eqref{eq:stronglycvxerrororderiid} is less than \eqref{eq:strongcvxneighborhood} if
\begin{equation}
C_{\alpha_1}<\frac{1}{\sqrt{N}(1+C_{\alpha_2})}.
\end{equation}
Therefore, RANGE performs better in the SA setting compared to the SAA setting if $\alpha_1\ll 1$, which is possible if $p_b\ll p_t$. Otherwise, the benefit of variance reduction provided by temporal gradient averaging is dominated by the damage caused by the Byzantine agents.
The next theorem states the convergence result of RANGE for the SA setting for non-convex cost functions.
\begin{theorem}\label{thm:noncvxiid}
Let Assumptions \ref{ass:subgamma}, \ref{ass:smooth}, and \ref{ass:noncvxball} hold. Choose step-size $\gamma={\gamma_0}/{\sqrt{T}}$ with $\gamma_0>0$. Let $m_0\in {{\mathbb N}_0}$ be a non-negative integer. If the network and algorithm parameters $(m,N,\alpha_1,\alpha_2,M)$ satisfy
\begin{equation}
P_Z^m(m_0) < 1/2,
\end{equation}
then for any $T \geq 1$, the iterates $\{ x_t \}_{t=m+m_0}^{T+m-1+m_0}$ produced by Algorithm~\ref{alg:distriutedavgnormalizedgd} in the SA setting satisfy
\begin{equation}\label{eq:thmnoncvxfinalresultiiddata}
\begin{split}
&\frac{1}{T}\sum_{t=m+m_0}^{T+m-1+m_0}\mathbb{E}[\|\nabla F(x_t)\|]\\
&\leq\frac{F(x_1)-F(x^\star)}{\sqrt{T}\gamma_0(1-2P_Z^m(m_0))}+\frac{\overline{C}(m_0)\gamma_0}{\sqrt{T}}+{\cal O}\left(\frac{1}{T}\right)\\
&+{\cal O}\left(\frac{1}{\sqrt{(1{-}\alpha_1)mNb}}{+}\frac{C_{\alpha_2}{+}C_{\alpha_1}(1{+}C_{\alpha_2})}{\sqrt{b}}\right),
\end{split}
\end{equation}
where $\overline{C}(m_0)$ and $C_{\alpha_i}$ for $i=1,2$, are given by \eqref{eq:cbarnoncvx} and~\eqref{eq:calpha}.
\end{theorem}
Proof of Theorem~\ref{thm:noncvxiid} and the full expression of \eqref{eq:thmnoncvxfinalresultiiddata} can be found in Appendix~\ref{app:noncvxiid}. Note that when $T\rightarrow\infty$, the right hand side of \eqref{eq:thmnoncvxfinalresultiiddata} is the same as \eqref{eq:stronglycvxerrororderiid}.
\section{Special Cases}\label{sec:special}
\subsection{Window Size $m=1$}
When we select the window size $m$ to be 1, we have to set $\alpha_1=0$ since $\alpha_1 m \in{\mathbb N}_0$ and $\alpha_1<0.5$. This means that we skip the robust temporal averaging step
and set the robustified gradient $\hat{g}_{i,t}=g_{i,t}$. In this case, the counterpart of Lemma~\ref{lem:practicalbound} with $m=1$ gives:
\begin{align}\label{eq:pzm1}
P_Z^1(m_0){\leq}\hspace{-.4cm}\sum_{k=\alpha_2 N+1}^N\hspace{-.2cm} \binom{N}{k}(P_Y^1(m_0))^k(1{-}P_Y^1(m_0))^{(N{-}k)},
\end{align}
where
\begin{align}
\nonumber P_Y^1(m_0)&{=}\underset{i\in[N],t}{\max}\mathbb{P}(Y_{i,t}{=}1|{\cal S}_{t-m_0}){=}\underset{i\in[N],t}{\max}\mathbb{P}(W_{i,t}{=}1|{\cal S}_{t-m_0})\\
&=\frac{p_b+p_t(1-p_b-p_t)^{m_0}}{p_b+p_t}.\label{eq:pym1}
\end{align}
Accordingly, Theorems~\ref{thm:stronglycvx}, \ref{thm:noncvx}, \ref{thm:stronglycvxiid}, and \ref{thm:noncvxiid} hold with $m=1$ and $P_Z^1(m_0)$ given by \eqref{eq:pzm1} and \eqref{eq:pym1}. Note that $P_Y^1(m_0)\rightarrow p_b/(p_b+p_t)$ as $m_0\rightarrow \infty$.
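For concreteness, \eqref{eq:pym1} is straightforward to evaluate numerically (a small sketch of ours):

```python
def P_Y1(p_b, p_t, m0):
    """Worst-case corruption probability P_Y^1(m_0) from eq. (pym1)."""
    return (p_b + p_t * (1 - p_b - p_t) ** m0) / (p_b + p_t)
```

E.g.\ for $p_b=0.025$, $p_t=0.1$ one has $P_Y^1(0)=1$, and $P_Y^1(m_0)$ decreases toward $0.2$ as $m_0$ grows, matching the limit $p_b/(p_b+p_t)$ noted above.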
\subsection{Independent Random Corruption}
A special case of the agents' state transition occurs when $p_b+p_t=1$. We get $M=[{\pi^\star}^T~{\pi^\star}^T]^T$, and hence the state of an agent at $t+1$ is independent of the state at $t$. In this case, an agent becomes Byzantine and sends corrupted gradient information randomly with probability $p_b$ at all iterations. Hence, we can state the counterpart of Lemma~\ref{lem:practicalbound} without the need to condition on a previous time instant, i.e.,
\begin{equation}\label{eq:pznonmarkovian}
P_Z^m\vcentcolon=\mathbb{E}[Z_t] =\hspace{-.3cm}\sum_{k=\alpha_2N +1}^{N}\hspace{-.1cm}\binom{N}{k}(P_Y^m)^k(1-P_Y^m)^{(N-k)},
\end{equation}
where
\begin{equation}\label{eq:pynonmarkovian}
P_Y^m=\sum_{j=\alpha_1m+1}^{m}\binom{m}{j}p_b^jp_t^{m-j}.
\end{equation}
Accordingly, Theorems~\ref{thm:stronglycvx}, \ref{thm:noncvx}, \ref{thm:stronglycvxiid}, and \ref{thm:noncvxiid} hold with $m_0=0$ and $P_Z^m(m_0)$ replaced by $P_Z^m$ given by \eqref{eq:pznonmarkovian} and \eqref{eq:pynonmarkovian}.
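The binomial sums \eqref{eq:pznonmarkovian} and \eqref{eq:pynonmarkovian} are easy to evaluate directly; the following sketch (ours) computes both tails:

```python
from math import comb

def corruption_probs(p_b, m, alpha_1, N, alpha_2):
    """P_Y^m and P_Z^m for the independent-corruption case (p_b + p_t = 1)."""
    p_t = 1.0 - p_b
    # tail of Bin(m, p_b): more than alpha_1*m corrupt gradients in a window
    P_Y = sum(comb(m, j) * p_b**j * p_t**(m - j)
              for j in range(int(alpha_1 * m) + 1, m + 1))
    # tail of Bin(N, P_Y): more than alpha_2*N corrupt robustified gradients
    P_Z = sum(comb(N, k) * P_Y**k * (1 - P_Y)**(N - k)
              for k in range(int(alpha_2 * N) + 1, N + 1))
    return P_Y, P_Z
```

The effect of temporal averaging is visible here: for $p_b=0.1$, $m=10$, $\alpha_1=0.3$, $N=10$, $\alpha_2=0.2$, one gets $P_Y^m\approx 0.013$ and $P_Z^m\approx 2\times 10^{-4}$, comfortably below the $1/2$ threshold of Theorem~\ref{thm:noncvx}.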
\section{Numerical Experiments}\label{sec:numerical}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{linear_regression_2.pdf}
\caption{Convergence performance of RANGE in linear regression with four configurations of $(m,\alpha_1,\alpha_2)$ for $p_t=0.1$, $p_b=0.025$.}
\label{fig:linear_regression}
\end{figure}
In this section, we present numerical evidence supporting our theoretical results and demonstrating the efficacy of RANGE. The first experiment is a simple linear regression with synthetic data to illustrate the benefits of the temporal averaging step of RANGE and to compare the SAA and the SA settings. The second experiment is an image classification task on the EMNIST dataset~\cite{cohen2017emnist} using a neural network to compare the performance of RANGE to existing distributed optimization algorithms in practical non-convex tasks for the SAA setting. Both experiments were performed on a laptop computer with Intel$^\textnormal{\textregistered}$ Core$^\textnormal{TM}$ i7-8750H CPU (6$\times$2.20 GHz) and 16 GB DDR4 2666MHz RAM.
\subsection{Linear Regression with Synthetic Data}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{linear_regression_saavssa.pdf}
\caption{Comparison of the SA and the SAA settings in linear regression.}
\label{fig:savssa}
\end{figure}
We consider the following stochastic optimization problem:
\begin{equation}
x^\star=\underset{x\in{\cal X}}{\arg\min}~\underset{V,Y}{\mathbb{E}}[|Y-V^Tx|],
\end{equation}
where $V\in {\mathbb R}^d$ is the random vector corresponding to the data points and $Y\in {\mathbb R}$ is the random variable corresponding to the associated label values or outputs. We let $d=100$ and constructed the solution vector $x^\star$ by sampling a random point from the interior of the $d$-ball with radius $R=10$. In the SAA setting, the goal is to solve the following deterministic optimization problem
\begin{equation}
\underset{x\in{\cal X}}{\min}~\|y-vx\|^2,
\end{equation}
where $v\in\mathbb{R}^{B\times d}$ is a matrix containing the $B$ data vectors in its rows and $y\in\mathbb{R}^B$ is the vector containing the $B$ associated label values or outputs.
We let $B=1000$ and randomly generated the entries of $v$ from ${\cal N}(0,1)$. We distributed the data points equally among $N=10$ agents. For all $i\in[B]$, we generated the outputs $y$ according to $y_i=v_i x^\star +\xi_i$, where $\xi_i\sim{\cal N}(0,R^2)$ is the noise and $v_i$ is the $i$'th row of~$v$.
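The synthetic data above can be generated as follows (a sketch of ours; the seed and library choices are incidental):

```python
import numpy as np

rng = np.random.default_rng(0)
d, B, N, R = 100, 1000, 10, 10.0

# solution: a random point from the interior of the d-ball of radius R
u = rng.standard_normal(d)
x_star = R * rng.uniform() ** (1.0 / d) * u / np.linalg.norm(u)

v = rng.standard_normal((B, d))            # data matrix with N(0,1) entries
y = v @ x_star + rng.normal(0.0, R, B)     # outputs y_i = v_i x* + noise

# distribute the B data points equally among the N agents
v_agents, y_agents = np.split(v, N), np.split(y, N)
```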
We ran RANGE for 20k iterations with $p_t=0.1$ and $p_b=0.025$ using four configurations of $(m,\alpha_1,\alpha_2)$. At each iteration, we picked the corrupt gradient as $2\|\nabla F(x_t)\|({x^\star-x_t})/{\|x^\star-x_t\|}$. In Figure~\ref{fig:linear_regression}, we plot the convergence behaviour of RANGE for all four configurations. When $m=1$ and $\alpha_2=0.1$, we observe that the iterates diverge. Since $0.1<p_b/(p_b+p_t)=0.2$, the expected value of the fraction of Byzantine agents at each iteration is larger than $\alpha_2$, and hence the aggregate gradient estimate becomes corrupted most of the time. On the contrary, setting $\alpha_2=0.3$ or $\alpha_2=0.4$ provides robustness and RANGE converges. However, we observe that the configuration with $\alpha_2=0.3$ performs slightly better than the one with $\alpha_2=0.4$. This is because a smaller $\alpha_2$ aggregates more agents' gradients, which results in a larger variance reduction. All in all, while selecting a smaller $\alpha_2$ provides variance reduction, it reduces the robustness of RANGE by including more agents at the robust aggregation step.
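The corrupt gradient used in this experiment (a vector of twice the true gradient's norm, pointing from $x_t$ toward $x^\star$, so that the descent step moves away from the solution) can be written as:

```python
import math

def corrupt_gradient(grad, x_t, x_star):
    """Return 2*||grad|| * (x_star - x_t) / ||x_star - x_t||."""
    diff = [a - b for a, b in zip(x_star, x_t)]
    norm_diff = math.sqrt(sum(c * c for c in diff))
    norm_grad = math.sqrt(sum(c * c for c in grad))
    return [2.0 * norm_grad * c / norm_diff for c in diff]
```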
On the other hand, the configuration with $m=100$, $\alpha_1=0.3$ and $\alpha_2=0.1$ outperforms the rest. When we utilize the robust temporal averaging step, it effectively reduces the expected value of the fraction of Byzantine agents at each iteration. Consequently, we can select a smaller $\alpha_2$ in order to benefit from larger variance reduction while still being robust thanks to robust temporal averaging.
Next, we study the SA setting by re-sampling $B$ new data points at each iteration. Keeping $p_t=0.1$ constant, we simulate a high corruption rate with $p_b=0.025$ and a low corruption rate with $p_b=0.01$. We set $\alpha_1=0.3$ when $p_b=0.025$ and $\alpha_1=0.1$ when $p_b=0.01$. We fixed the window size $m=100$ and $\alpha_2=0.1$ for both cases. In Figure~\ref{fig:savssa} we plot the convergence behaviour of RANGE in the SA and the SAA settings for both corruption rates. We observe that when the corruption rate is low, RANGE performs better in the SA setting due to variance reduction provided by temporal averaging. However, when the corruption rate is high, RANGE performs worse in the SA setting as the damage caused by the Byzantine agents dominates.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{emnist_horizontal.pdf}
\caption{Training neural networks for image classification on the EMNIST dataset. Four distributed optimization algorithms are compared under $p_t=0.2$ and $p_b=0$ (left), $p_b=0.05$ (middle), and $p_b=0.15$ (right) using test accuracy of the trained network as metric. The legend is shared among all plots.}
\label{fig:mnist}
\end{figure*}
\subsection{Image Classification with EMNIST Dataset}
In this study, we experiment with the image classification task on the EMNIST dataset \cite{cohen2017emnist} in the SAA setting. We train a feed-forward neural network with two hidden layers and 64 neurons at each hidden layer. We partition the data into $N=200$ equal parts, representing the data at the $N$ agents. The batch size for gradient computation is set to $b=300$.
We train the neural network using (a) RANGE, (b) vanilla SGD, (c) median aggregation \cite{yin2018byzantine}, and (d) norm clipping \cite{sun2019can}. We note that although the median aggregation method in \cite{yin2018byzantine} and the norm clipping method in \cite{sun2019can} are developed for the setting with a bounded fraction of Byzantine agents, we implement them in the Markovian Byzantine agent setting because no existing work studies the same setup as ours. For transition probabilities $p_t=0.2$ and $p_b\in\{0,~0.05,~0.15\}$, we train three networks with learning rates $0.1$, $0.01$, and $0.001$, and pick the best-performing one. We let $m=50$, $\alpha_1=0.25$, $\alpha_2=0.2$ for RANGE when $p_b=0$ and $p_b=0.05$; $m=50$, $\alpha_1=0.45$, $\alpha_2=0.3$ when $p_b=0.15$. We set the threshold for norm clipping to be 10 as it is shown to perform well in \cite{sun2019can}. We simulate corruption by simply inverting and boosting the magnitude of the gradient, i.e., by setting $\star=-c\nabla F_{i,t}(x_t)$ in \eqref{eq:minibatchgradients}, where $c$ is sampled uniformly from $[5,15]$ at each iteration.
In Figure~\ref{fig:mnist} we plot the test accuracy of the models during training. For $p_b=0$, all algorithms successfully train the neural network as expected. For $p_b=0.05$, vanilla SGD has negligible performance as it is not robust to corruption. On the other hand, norm clipping and median aggregation methods have satisfactory performance. Nonetheless, RANGE outperforms norm clipping and median aggregation algorithms by margins of $3.7\%$ and $6.3\%$, respectively. For $p_b=0.15$, we observe that the median aggregation method also fails. The median is no longer robust since it is corrupted if more than half of the agents become Byzantine at any iteration, which frequently happens when $p_b/(p_b+p_t)$ is high. Norm clipping is still robust, however, RANGE beats it by a margin of $8.3\%$.
\section{Conclusions and Future Work}
We introduced a distributed optimization algorithm, named RANGE, that is provably robust to Byzantine failures. By modeling each agent's state transition trajectory over time, namely from trustworthy to Byzantine and vice versa, as a two-state Markov chain, we allow all the agents to be prone to failure. RANGE is based on three ideas: 1) temporal gradient averaging, which computes a robustified gradient for each agent by estimating a robust mean of a window of past gradients, 2) robust aggregation, which computes a robust mean of all the agents' robustified gradients to estimate the aggregate gradient, and 3) gradient normalization, which restricts the aggregate gradient to only contain directional information and therefore prevents the arbitrarily large updates that corrupt gradients might cause. We prove that for strongly convex and smooth (possibly non-convex) cost functions, RANGE achieves order-optimal statistical error rates with order-optimal convergence rates. Numerical experiments on linear regression and image classification on the EMNIST dataset demonstrate the robustness and efficacy of RANGE.
Future work should study accelerated formulations of RANGE. As shown in~\cite{cutkosky2020momentum}, momentum-based approaches accelerate normalized SGD. Whether RANGE benefits from momentum terms while maintaining its robustness properties is an open question. Additionally, future work should study the robustness properties of statistical preconditioning approaches for large-scale distributed optimization methods~\cite{hendrikx2020statistically,yuan2020convergence,dvurechensky2021hyperfast}.
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsection{Main result}
\label{sec:intro:mainresult}
Throughout $k$ is an algebraically closed base field of characteristic~$0$. All schemes are $k$-schemes.
If $\Lambda$ is a right noetherian
ring then we write ${\cal D}(\Lambda)$ for $D^b_f(\Lambda)\subset D(\Lambda)$, the bounded derived category
of right $\Lambda$-modules with finitely generated cohomology. Similarly for a noetherian scheme/stack ${\cal X}$ we write
${\cal D}({\cal X}):=D^b_{\mathop{\text{\upshape{coh}}}}({\cal X})$ and if ${\cal A}$ is a quasi-coherent sheaf of noetherian algebras on a stack ${\cal X}$ then we write ${\cal D}({\cal A})$ for $D^b_{\mathop{\text{\upshape{coh}}}}({\cal A})$.
\begin{definitions}
Let ${\cal D}$ be a triangulated category. A \emph{semi-orthogonal decomposition}
${\cal D}=\langle \Dscr_i\mid i\in I\rangle$ is a list of triangulated subcategories $(\Dscr_i)_{i\in I}$ of ${\cal D}$
indexed by a totally ordered set $I$ such that
\begin{enumerate}
\item $\Dscr$ is \emph{generated} by $\Dscr_{i}$, $i\in I$. I.e. it is
the smallest triangulated subcategory of $\Dscr$ containing $\Dscr_{i}$, $i\in I$.
\item $\operatorname {Hom}(\Dscr_{i},\Dscr_{j})=0$ for $j<i$.
\end{enumerate}
\end{definitions}
Let $X$ be a scheme. A \emph{presheaf of triangulated categories} $\widetilde{{\cal E}}$ on $X$ consists
of triangulated categories $\widetilde{{\cal E}}(U)$ for all open subschemes
$U\subset X$
together with exact restriction
functors $\widetilde{{\cal E}}(U)\rightarrow \widetilde{{\cal E}}(V)$ for $V\subset U$ satisfying the usual compatibilities. A triangulated
subpresheaf $\widetilde{{\cal F}}$ of $\widetilde{{\cal E}}$ is a collection of triangulated subcategories
$\widetilde{{\cal F}}(U)\subset \widetilde{{\cal E}}(U)$ compatible with restriction.
A semi-orthogonal decomposition $\widetilde{{\cal E}}=\langle \widetilde{{\cal E}}_i\mid i\in I\rangle$ is a list of triangulated
subpresheaves $(\widetilde{{\cal E}}_i)_{i\in I}$ of $\widetilde{{\cal E}}$
indexed by a totally ordered set $I$ such that for each open $U\subset X$
we have a semi-orthogonal decomposition $\widetilde{{\cal E}}(U)=\langle \widetilde{{\cal E}}_i(U)\mid i\in I\rangle$.
If $X$ is noetherian then we write $\widetilde{{\cal D}}_X$ for the presheaf of triangulated categories
$U\mapsto {\cal D}(U)$ on $X$. For a quasi-coherent sheaf of noetherian algebras ${\cal A}$ on $X$ we similarly
put $\widetilde{{\cal D}}_{\cal A}(U)={\cal D}({\cal A}|_U)$.
Let $G$ be a reductive group acting on a $k$-scheme $X$ such
that a ``good quotient'' $\pi:X\rightarrow X/\!\!/ G$ exists (see \S\ref{sec:goodq}
below). Then we define a presheaf of triangulated categories $\widetilde{{\cal D}}_{X/G}$ on $X/\!\!/ G$
as follows: if $U\subset X/\!\!/ G$ is open then we put $\widetilde{{\cal D}}_{X/G}(U)={\cal D}((U\times_{X/\!\!/ G} X)/G)$.
\begin{theorems}\label{nonl}
Let $G$ be a reductive group acting on a smooth variety\footnote{Variety here means an integral separated noetherian $k$-scheme.} $X$ such
that a good quotient $\pi:X\rightarrow X/\!\!/ G$ exists
and put $\tilde{{\cal D}}=\tilde{\cal D}_{X/G}$.
There exist
one-parameter subgroups $\lambda_i:G_m\rightarrow G$ of $G$, open subgroups $H^{\lambda_i}$
of $G^{\lambda_i}$
and finite dimensional
$H^{\lambda_i}$-representations $U_i$, such that $\widetilde{{\cal D}}=\langle
\dots ,\widetilde{{\cal D}}_{-2},\widetilde{{\cal D}}_{-1},\widetilde{{\cal D}}_0\rangle$ with
$\widetilde{\Dscr}_{-i}\cong\widetilde{{\cal D}}_{\Lambda_i}$ for sheaves of
${\cal O}_{X^{\lambda_i}/\!\!/ H^{\lambda_i}}$-algebras (viewed as
${\cal O}_{X/\!\!/ G}$-algebras) defined by $\Lambda_i\cong
(\operatorname {End}(U_i)\otimes_k \pi_\ast{\cal O}_{X^{\lambda_i}})^{H^{\lambda_i}}$. The restrictions of $\Lambda_i$ to affine opens
have finite global dimension.
\end{theorems}
In this theorem the notation $(-)^{\lambda_i}$ was used for the fixed points
under $\lambda_i$ (see \S\ref{sec:goodq} below). Note that $G^{\lambda_i}$ is a reductive subgroup of $G$
(see \S\ref{sec:red}) acting on $X^{\lambda_i}$.
\medskip
Theorem \ref{nonl} applies in particular to GIT stack quotients of the
form $X^{ss}/G$ where~$X$ is a smooth projective variety over an
affine variety equipped with an ample linearization. In that way Theorem
\ref{nonl} complements \cite[Theorem 2.10]{HL} (and similar results in \cite{BFK,SegalDonovan}) which constructs a
semi-orthogonal decomposition of ${\cal D}(X/G)$ in which one of the
parts is ${\cal D}(X^{ss}/G)$.
\subsection{The linear case}\label{linear}
The proof of Theorem \ref{nonl} will be reduced ultimately to the case
that $G$ is connected and $X$ is a representation.
In this section we give a more precise description of the
semi-orthogonal decomposition in this case.
We first need to
introduce more notation. Let $T\subset B$ be a maximal torus and
a Borel subgroup in $G$. Let $X(T)$ and $Y(T)=X(T)^\vee$ be
respectively the character group of $T$ and the group of one-parameter
subgroups of $T$. Let the roots of $B$ be the negative roots and let
$X(T)^{\pm}$, $Y(T)^{\pm}$ be the \hbox{(anti-)}dominant cones in $X(T)$ and $Y(T)$.
Let $\bar{\rho}\in X(T)_{\blb R}$ be half the sum of the positive roots.
Let $W$ be a finite dimensional $G$-representation
of dimension $d$ such that $X=W^\vee$ and let $R=k[X]=\operatorname{Sym}(W)$. Let
$(\beta_i)_{i=1}^d\in X(T)$ be the $T$-weights of $W$. For $\lambda\in
Y(T)^-$ define the following subsets of $X(T)_{\blb R}$
\begin{align*}
\Sigma_\lambda&=\left\{\sum_i a_i \beta_i\mid a_i\in ]-1,0],\, \langle\lambda,\beta_i\rangle=0\right\},\qquad
\Sigma:=\Sigma_0.
\end{align*}
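For instance (a toy illustration of ours): if $G=T=G_m$ acts on $W=k^2$ with weights $\beta_1=1$ and $\beta_2=-1$, then
\[
\Sigma=\left\{a_1\beta_1+a_2\beta_2\mid a_1,a_2\in\, ]-1,0]\right\}=\,]-1,1[\,\subset X(T)_{\blb R},
\]
while for a one-parameter subgroup $\lambda$ with $\langle\lambda,\beta_i\rangle\neq 0$ for all $i$ the defining sum is empty, so $\Sigma_\lambda=\{0\}$.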
We denote $\Sigma_\lambda^0={\rm relint}\Sigma_\lambda=\{\sum_i a_i
\beta_i\mid a_i\in ]-1,0[,\, \langle\lambda,\beta_i\rangle=0\}$. With $W_\lambda$ we denote the coinvariants for the action of $\lambda$ (i.e. the quotient
space of $W$ obtained by dividing out the weight vectors $w_i$ such that $\langle\lambda,\beta_i\rangle\neq
0$). We further denote (see also \S\ref{sec:red} below):
\begin{align*}
G^{\lambda,+}&=\{g\in G\mid \lim_{t\rightarrow 0} \lambda(t)g\lambda(t)^{-1}\text{ exists }\},
\end{align*}
(Note that $G^\lambda=G^{\lambda,+}/\mathrm {rad}\,G^{\lambda,+}$ is
the reductive Levi factor of $G^{\lambda,+}$ containing $T$.) Let
${\mathcal W}=N(T)/T$ be the Weyl group of $G$, and let ${\mathcal W}_{G^\lambda}\subset
{\mathcal W}$ be the Weyl group of~$G^\lambda$. We write
$\bar\rho_\lambda\in X(T)_{\blb R}$ for half the sum of the positive roots of $G^\lambda$
and $X(T)^\lambda$ for the
$G^\lambda$-dominant weights inside $X(T)$. For $\chi\in X(T)^\lambda$ we write
$V_{G^\lambda}(\chi)=\Ind^{G^\lambda}_{G^\lambda\cap B} \chi$. I.e.\ $V_{G^\lambda}(\chi)$ is the irreducible $G^\lambda$-representation with highest weight
$\chi$ (or sometimes also a $G^{\lambda,+}$-representation with the
unipotent radical acting trivially). Note that $G^\lambda$ acts on
$W_\lambda$ (when we consider it as $G^{\lambda,+}$ representation we
let $\rm{rad}\, G^{\lambda,+}$ act trivially). For a
${\mathcal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ we put
\begin{align*}
{{\cal L}_{r,\lambda,\nu}}&=X(T)^\lambda\cap (\nu-\bar{\rho}_{\lambda}+r\Sigma_\lambda^0),\\
U_{r,\lambda,\nu}&=\bigoplus_{\mu\in {\cal L}_{r,\lambda,\nu}} V_{G^\lambda}(\mu),\\
\Lambda_{r,\lambda,\nu}&=(\operatorname {End}(U_{r,\lambda,\nu})\otimes_k \operatorname{Sym}(W_\lambda))^{G^\lambda}.
\end{align*}
\begin{propositions}\label{sigma}\cite[Theorem 1.4.1]{SVdB} Assume $r\ge 1$. Then
one has $\operatorname {gl\,dim} \Lambda_{r,\lambda,\nu}<\infty$.
\end{propositions}
\begin{proof}
As $\nu$ is ${\mathcal W}_{G^\lambda}$-invariant, this
follows from \cite[Theorem 1.4.1]{SVdB} in the case
$\Sigma_\lambda^0=\Sigma_\lambda$.
Let $\Gamma=-\bar{\rho}+\left\{\sum_i a_i \beta_i\mid a_i\le 0\right\}$, $\Gamma^0={\rm relint}\,\Gamma$.
It is easy to modify the proof
to hold also if $\Sigma_\lambda^0\subsetneq \Sigma_\lambda$. To
replace $\Sigma$ in \cite[Theorem 1.4.1]{SVdB} with $\Sigma^0$ one
only needs to show that $\operatorname {Hom}(P_{{\mathcal L}},P_\chi)=0$ if $\chi\not \in
\Gamma^0$, and this follows by the same proof as \cite[Lemma
11.3.1]{SVdB}. To replace $\Sigma_\lambda^0$ by
$r\Sigma_\lambda^0$ is also easy.
\end{proof}
We say that $x\in X$ is $T$-stable if $x$ has finite stabilizer and closed $T$-orbit. In the case $X=W^\vee$
the existence of a $T$-stable point is equivalent to the cone spanned by the weights $(\beta_i)_i$ of $W$ being equal to $X(T)_{\blb R}$.
\begin{propositions}\label{ref-1.3}
Let ${\cal D}={\cal D}(X/G)$ and assume that $X$ has a $T$-stable point. Then there exist $r_i\ge 1$, $\lambda_i\in Y(T)^-$
and ${\mathcal W}_{G^{\lambda_i}}$-invariant $\nu_i\in X(T)_{\blb R}$
such that
${\cal D}=\langle \dots ,{\cal D}_{-2},{\cal D}_{-1},{\cal D}_0\rangle$, with
$\Dscr_{-i}\cong{\cal D}(\Lambda_{r_i,\lambda_i,\nu_i})$,
is a semi-orthogonal decomposition of $\Dscr$. Moreover we may assume ${\cal D}_0\cong{\cal D}(\Lambda_{1,0,0})$.
\end{propositions}
\begin{remarks} The reader will note that (under suitable genericity conditions) $\Lambda_{1,0,0}$ is the non-commutative
resolution of $X/\!\!/ G$ constructed in \cite[Cor.\ 1.5.2]{SVdB}.
\end{remarks}
\begin{remarks}
\label{rem:notTstable}
In Proposition \ref{ref-1.3} we assume that
$X$ has a $T$-stable point. This hypothesis is not
very restrictive in view of \S\ref{sec:nonstable} below.
Roughly speaking if $X$ does not have a $T$-stable point then one may easily obtain a semi-orthogonal decomposition of ${\cal D}(X/G)$ involving a set of ${\cal D}(X'/G')$ such that $X'$ has a $T'$-stable point
for $T'$ a maximal torus of $G'$.
\end{remarks}
\subsection{Refined decompositions}
The semi-orthogonal decompositions in Theorem \ref{nonl} and Proposition \ref{sigma} are not optimal.
At the cost of extra technicalities one may decompose the parts ${\cal D}_{-i}$ further by essentially repeating
the procedure which is used to obtain the decomposition of ${\cal D}$ itself. This leads to the natural
problem to produce a decomposition which cannot be refined further in the sense that the parts admit no non-trivial
semi-orthogonal decompositions. The latter is in particular the case if the parts are all of the form ${\cal D}(\Lambda)$
where $\Lambda$ is a (possibly twisted) \emph{non-commutative crepant resolution (NCCR)} of its center \cite{Leuschke,VdB32,Wemyss1}.\footnote{The indecomposability of (twisted) NCCR's seems to be well known to experts. It may
be easily proved in a similar way as \cite[Lemma A.4]{SVdB5}.}
In \S\ref{appA} we will discuss this problem in the case that $W$ is ``quasi-symmetric'', i.e. the sum of the weights
of $W$ on each line through the origin is zero.
It is shown in
\cite[\S1.6]{SVdB} that in this case, by replacing $\Sigma$ by a polygon of
roughly half the size, one obtains smaller non-commutative
resolutions for $X/\!\!/ G$. Under favourable conditions one may even
obtain NCCRs.\footnote{See also \cite{HLSam} where, again under appropriate
conditions, it is shown that these NCCRs are of geometric origin in
the sense that they are derived equivalent to suitable $X^{ss}/G$.}
Likewise in \S\ref{appA} we will show that if~$W$ is quasi-symmetric one may obtain a corresponding
more refined semi-orthogonal decomposition of ${\cal D}(X/G)$ which, again under favourable conditions,
consists entirely of (twisted) NCCR parts, so that it cannot be refined further. See \S\ref{appA}, in particular Corollary \ref{quasicor}.
As an example we mention the following explicit result which refines our construction of NCCRs for odd Pfaffians in \cite{SVdB}.
\begin{proposition}[see Proposition \ref{prop:pfaffians}]
\label{prop:intro}
Let $2n<h$, $W=V^h$, where $V$ is a $2n$-dimensional vector space equipped with a non-degenerate skew-symmetric bilinear form, $G={\rm Sp}_{2n}(k)$, and let $Y_{2n,h}^-=W/\!\!/ G$ be the variety of skew-symmetric $h\times h$-matrices of rank $\leq 2n$.
We denote by $\Lambda_j$ the NCCR of $Y_{2j,h}$ given by \cite[Proposition 6.1.2]{SVdB} (by convention we put $\Lambda_0=k$).
If $h$ is odd then ${\cal D}(X/G)$ has a semi-orthogonal decomposition $\langle \dots, \Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_0\cong \Dscr(\Lambda_n)$ such that each $\Dscr_{-i}$ for $i>0$ is of the form $\Dscr(\Lambda_j)$ for some $j<n$.
\end{proposition}
Other examples we discuss are quasi-symmetric toric representations, representations of $\rm SL_2$ and the analogue of Proposition \ref{prop:intro} for ordinary determinantal varieties.
\section{Acknowledgement}
The second author thanks J\o rgen Vold Rennemo and Ed Segal for interesting discussions regarding this paper. The authors thank Agnieszka Bodzenta and Alexey Bondal for their interest in this work and
for useful comments on the first version of this paper.
\section{Preliminaries}
\subsection{Strongly \'etale morphisms}
Let $G$ be a reductive group. If $G$ acts on an affine~$k$-scheme $X$ then we put $X/\!\!/ G=\operatorname {Spec} k[X]^G$. This
is a special case of a ``good quotient'' (see \S\ref{sec:goodq} below).
Let $f:X\rightarrow Y$ be a $G$-equivariant morphism between affine $G$-schemes. Following \cite{Luna}\cite[App.\ D]{Mumford} we
say that $f$ is \emph{strongly \'etale} if $X/\!\!/ G\rightarrow Y/\!\!/ G$ is \'etale and the induced morphism $X\rightarrow Y\times_{Y/\!\!/ G} X/\!\!/ G$
is an isomorphism. This implies in particular that $X\rightarrow Y$ is \'etale, which is a special case of the following lemma with $H$ being trivial.
\begin{lemmas} Assume that $f:X\rightarrow Y$ is a strongly \'etale $G$-equivariant morphism of affine
schemes and let $H$ be a reductive subgroup of $G$. Then
$f$ is strongly \'etale as $H$-equivariant morphism.
\end{lemmas}
\begin{proof} From the fact that $H$ is reductive we easily obtain
\[
k[X]^H=(k[Y]\otimes_{k[Y]^G} k[X]^G)^H=k[Y]^H\otimes_{k[Y]^G} k[X]^G.
\]
Thus
\[
k[Y]\otimes_{k[Y]^H} k[X]^H=k[Y]\otimes_{k[Y]^H}k[Y]^H\otimes_{k[Y]^G} k[X]^G=k[Y]\otimes_{k[Y]^G} k[X]^G=k[X].
\]
\end{proof}
\subsection{The Bia{\l}ynicki-Birula decomposition in the affine case.}
\label{ref-1.2-0}
We use
\cite{Drinfeld2} as a reference for some facts about the Bia{\l}ynicki-Birula decomposition~\cite{Bia}.
Let $R$ be a commutative $k$-algebra equipped with a rational $G_m$-action $\lambda:G_m\rightarrow \Aut_k(R)$. This $G_m$-action
induces a grading $R=\bigoplus_n R_n$ where $z\in G_m$ acts on $r\in R_n$ by
$z\cdot r=z^n r$. Let $I^+$, $I^-$ be the ideals in $R$ respectively generated by $(R_n)_{n>0}$ and $(R_n)_{n<0}$
and put $I=I^++I^-$. We define $R^\lambda:=R/I$ and $R^{\lambda,\pm}:=R/I^{\pm}$. Note
\begin{equation}
\label{eq:part0}
(R^{\lambda,\pm})_0=R^\lambda.
\end{equation}
If $X=\operatorname {Spec} R$ then we also write $X^\lambda=\operatorname {Spec} R^\lambda$, $X^{\lambda,\pm}=\operatorname {Spec} R^{\lambda,\pm}$. It follows from
\cite[\S1.3.4]{Drinfeld2} that $X^\lambda$ is the subscheme of fixed points of $X$ and $X^{\lambda,+}$, $X^{\lambda,-}$
are respectively the attractor and repeller subschemes of $X$.
According to \cite[Prop 1.4.20]{Drinfeld2}, $X^\lambda$, $X^{\lambda,\pm}$ are smooth if this is the case
for $X$.
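To illustrate the definitions, consider $R=k[x,y]$ with the $G_m$-action for which $x\in R_1$ and $y\in R_{-1}$. Then $I^+=(x)$, $I^-=(y)$ and
\[
R^{\lambda,+}=k[y],\qquad R^{\lambda,-}=k[x],\qquad R^{\lambda}=k,
\]
so that for $X=\operatorname {Spec} R$ the fixed point scheme $X^\lambda$ is the origin and $X^{\lambda,+}$, $X^{\lambda,-}$ are the closed subschemes $\{x=0\}$, $\{y=0\}$. Note that indeed $(R^{\lambda,+})_0=k=R^\lambda$, in accordance with \eqref{eq:part0}.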
\begin{lemmas}
\label{ref-1.2.1-1} Assume that $f:X\rightarrow Y$ is a strongly \'etale
$G_m$-equivariant morphism of affine schemes, with the action denoted by $\lambda$. Then
\begin{align*}
X^\lambda&= X\times_{Y} Y^\lambda\\
X^{\lambda,+}&= X \times_{Y} Y^{\lambda,+}\\
X^{\lambda,-}&= X\times_{Y} Y^{\lambda,-}\,.
\end{align*}
\end{lemmas}
\begin{proof} We have
\[
k[X]= k[Y]\otimes_{k[Y]^{G_m}} k[X]^{G_m}
\]
since $f$ is a strongly \'etale
$G_m$-equivariant morphism,
and this isomorphism is clearly compatible with the grading on both sides. Thus
\[
k[X]_n=k[Y]_n\otimes_{k[Y]_0} k[X]_{0}.
\]
The lemma now follows easily from the definitions.
\end{proof}
\subsection{The Bia{\l}ynicki-Birula decomposition when there is a good quotient.}
\label{sec:goodq}
We use the following definition from \cite{Brion} (see the discussion after Prop.\ 1.29 in loc.\ cit.).
\begin{definitions}
Let $G$ be a reductive group and let $\pi:X\rightarrow Y$ be a $G$-equivariant morphism of $k$-schemes. Then
$\pi$ is a \emph{good quotient} if the following holds
\begin{enumerate}
\item $\pi$ is affine.
\item ${\cal O}_Y=(\pi_\ast {\cal O}_X)^G$.
\end{enumerate}
\end{definitions}
It is easy to see that a good quotient is unique, if it
exists. Therefore following tradition we will usually write $Y=X/\!\!/
G$. We have already used this notation in the case that $X$ is affine.
Note the following
\begin{lemmas} Assume that $G$ is a reductive group acting on a $k$-scheme $X$ such that $X/\!\!/ G$ exists.
Let $H$ be a reductive subgroup of $G$. Then $X/\!\!/ H$ also exists.
\end{lemmas}
\begin{proof} Let $\pi:X\rightarrow X/\!\!/ G$ be the good quotient. It is easy to verify that $X/\!\!/ H=\underline{\operatorname {Spec}} (\pi_\ast {\cal O}_X)^H$.
\end{proof}
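For example, let $G_m$ act on $X=\operatorname {Spec} k[x,y]$ with $x$, $y$ of weights $1$, $-1$. Then $k[x,y]^{G_m}=k[xy]$ and the map $\pi:X\rightarrow X/\!\!/ G_m=\operatorname {Spec} k[xy]$ is a good quotient. Note that $\pi$ does not separate orbits: the three orbits contained in $\{xy=0\}$ (the two punctured coordinate axes and the origin) are all sent to the same point.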
Assume now that $X$ is a $k$-scheme on which $G_m$ acts via
$\lambda:G_m\rightarrow \Aut(X)$. Assume that a good quotient $\pi:X\rightarrow X/\!\!/
G_m$ exists. If $U\subset X/\!\!/ G_m$ is an open affine subvariety
then we may define closed subvarieties $\pi^{-1}(U)^\lambda$,
$\pi^{-1}(U)^{\lambda,\pm}$ of $\pi^{-1}(U)$ as in \S\ref{ref-1.2-0}
and according to Lemma
\ref{ref-1.2.1-1} these are compatible with restrictions for
$U'\subset U$. Hence we may glue these closed subvarieties to obtain
$X^\lambda$, $X^{\lambda,\pm}\subset X$. One may verify that $X^\lambda$, $X^{\lambda,\pm}$ are still
the fixed points and the attractor/repeller subschemes for~$\lambda$.
\subsection{Good quotients and geometric invariant theory}
One way to obtain good quotients is via the machinery of geometric
invariant theory \cite{Mumford}. Let~$G$ be a reductive group and let $X$ be
a~$G$-equivariant $k$-scheme which is projective over an affine scheme, equipped with a~$G$-equivariant ample line bundle ${\cal M}$. If $f\in
\Gamma(X,{\cal M}^{\otimes_k n})^G$, $n>0$, then $X_f:=\{f\neq 0\}\subset X$ is
affine and $G$-equivariant. The semi-stable locus in~$X$ is defined as $X^{ss}=\bigcup_f X_f$. This is an open
subvariety of~$X$ which has a good quotient $X^{ss}/\!\!/ G$ which may
be obtained by gluing $X_f/\!\!/ G=\operatorname {Spec} k[X_f]^G$ for varying $f$.
Another way to
obtain $X^{ss}/\!\!/ G$ is as follows: let $\Gamma_\ast(X)=\bigoplus_n
\Gamma(X,{\cal M}^{\otimes_k n})$. Then $X^{ss}/\!\!/ G=\Proj \Gamma_\ast(X)^G$.
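As a basic example, let $G=G_m$ act on $X=\operatorname {Spec} k[x_0,\dots,x_n]$ by scaling and let ${\cal M}={\cal O}_X$, linearized by the weight one character of $G_m$. Then $\Gamma(X,{\cal M}^{\otimes_k m})^G$ consists of the homogeneous polynomials of degree $m$, so that $X^{ss}=X\setminus\{0\}$ and $X^{ss}/\!\!/ G=\Proj k[x_0,\dots,x_n]={\mathbb P}^n$.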
\medskip
The following result, whose proof we omit since we do not use it, gives an alternative description of $X^{ss,\lambda}$,
$X^{ss,\lambda,\pm}$ in the GIT setting.
\begin{propositions}
Let $G$ be a reductive group and let
$X$ be a $G$-equivariant $k$-scheme which is projective over an affine scheme. Let ${\cal M}\in \operatorname {Pic}(X)$ be a
$G$-equivariant ample line bundle on $X$ and let $X^{ss}\subset X$ be
the corresponding semi-stable locus.
Let $R=\Gamma_\ast(X)$ and let $\lambda$ be a one-parameter subgroup of $G$. Then $\lambda$ acts
on~$R$ in a way which is compatible with the grading
and $X^{ss,\lambda}$, $X^{ss,\lambda,\pm}$ are the closed subschemes
of $X^{ss}$ defined by the (graded) quotient rings $R^\lambda$, $R^{\lambda,\pm}$ of $R$ (see \S\ref{ref-1.2-0}).
\end{propositions}
\begin{comment}
\begin{proof} We will give the proof for $X^{ss,\lambda,+}$. The
proofs for $X^{ss,\lambda}$ and $X^{ss,\lambda,-}$ are similar.
For $f\in R^G_n$, $n>0$ we have
$X_f=\operatorname {Spec} (R_f)_0$. By construction $X^{ss,\lambda,+}\cap X_f=(X_f)^{\lambda,+}$.
We have to prove that $(X_f)^{\lambda,+}=\operatorname {Spec} ((R^{\lambda,+})_f)_0$.
Since we have $(X_f)^{\lambda,+}=\operatorname {Spec} ((R_f)_0)^{\lambda,+}$ this amounts
to showing $((R^{\lambda,+})_f)_0=((R_f)_0)^{\lambda,+}$. By Lemma \ref{ref-1.2.1-1} we have
$(R^{\lambda,+})_f=(R_f)^{\lambda,+}$. So ultimately we have to show
$(T^{\lambda,+})_0=(T_0)^{\lambda,+}$ for $T=R_f$. Since ${\cal M}$ is ample, $T$
is strongly graded (i.e.\ $T_aT_b=T_{a+b}$ for $a,b\in {\blb Z}$) and then it is sufficient to invoke
Lemma \ref{lem:stronglygraded} below.
\end{proof}
\begin{lemmas}
\label{lem:stronglygraded} Assume that $T$ is a ${\blb Z}$-graded, strongly graded
commutative $k$-algebra with a unit $f\in T_n$, $n>0$. Assume that
$T$ is in addition equipped with a $G_m$-action $\lambda:G_m\rightarrow
\Aut(T)$ which is compatible with the grading such that $f$ is
$G_m$-invariant. Then we have $(T^\lambda)_0=(T_0)^\lambda$ and
$(T^{\lambda,\pm})_0=(T_0)^{\lambda,\pm}$.
\end{lemmas}
\begin{proof}
We give the proof for $T^{\lambda,+}$. We turn the $G_m$-action on $T$ into a bigrading, i.e.\ $T=\bigoplus_{(n,\alpha)\in {\blb Z}^2}T_{n\alpha}$ with the $G_m$-action on $T_{n\alpha}$ being given by $z\cdot t=z^\alpha t$.
We have
\begin{align*}
(T^{\lambda,+})_0&=T_0/(\sum_{m,\alpha,\beta\in {\blb Z},\beta>0} T_{-m,\alpha} T_{m,\beta}),\\
(T_0)^{\lambda,+}&=T_0/(\sum_{\gamma,\delta\in {\blb Z},\delta>0} T_{0,\gamma} T_{0,\delta}).
\end{align*}
Thus we have to show that $T_{-m,\alpha} T_{m,\beta}$ for $\beta>0$ is
contained in $\sum_{\gamma,\delta\in
{\blb Z},\delta>0}T_{0,\gamma}T_{0,\delta}$. We may do this locally on
$\operatorname {Spec} T_{00}$. So now we assume that $T_{00}$ is local. By Lemma
\ref{lem:gradedlocal} below this implies that $T_{0}$ is graded local and since $T_1$ is an invertible
graded $T_0$-module it is graded free. In other words
$T\cong T_0[t,t^{-1}]$ where $\deg(t)=(1,\sigma)$ for suitable $\sigma\in {\blb Z}$.
By hypothesis $T_{n0}$ contains a unit $f$. Write $f=ht^n$. Then $h\in T_{0,-n\sigma}$ is a unit in
$T_0$. If $\sigma\neq 0$ then it is easy to see that $(T_0)^{\lambda,+}=0$ and there is nothing to prove.
So assume $\sigma=0$. In that case we have $T_{-m,\alpha} T_{m,\beta}=T_{0,\alpha}T_{0,\beta}$ and we are also done.
\end{proof}
\begin{lemmas} \label{lem:gradedlocal}
Let $S$ be a ${\blb Z}$-graded commutative ring such that $S_0$ is local. Then~$S$ is graded local.
\end{lemmas}
\begin{proof} We have to show that the homogeneous non-units form a graded ideal. Let $x,y\in S_t$ be
non-units. This implies $xS_{-t}\subset m$, $yS_{-t}\subset m$ where $m$ is the maximal ideal of $S_0$. But
then also $(x+y)S_{-t}\subset m$. So $x+y$ is not a unit.
\end{proof}
\end{comment}
\subsection{Good quotients and local generation}
\label{sec:localgen}
Let $X/k$ be a quasi-compact, quasi-separated $G$-scheme for a reductive
group $G/k$ such that a good quotient $\pi:X\rightarrow X/\!\!/ G$ exists. It is
easy to see that then $X/\!\!/ G$ is quasi-compact and quasi-separated as
well. Below we write $\pi_s:X/G\rightarrow X/\!\!/ G$ for the corresponding
stack morphism. Note that both $\pi_\ast$ and $\pi_{s\ast}$ are exact.
Below we use some notations and concepts related to derived categories which were introduced in \S\ref{sec:intro:mainresult}.
We recall
some properties of $D_{\operatorname{Qch}}(X/G)$.
\begin{theorems}
\label{th:st}
\begin{enumerate}
\item \label{c1} $D_{\operatorname{Qch}}(X/G)$ is compactly generated.
\item \label{c2} An object in $D_{\operatorname{Qch}}(X/G)$ is compact if and only if it is perfect. I.e.\ if and only if its
image in $D(X)$ is perfect.
\item \label{c3} If $X$ is separated then $D_{\operatorname{Qch}}(X/G)=D(\operatorname{Qch}(X/G))$.
\end{enumerate}
\end{theorems}
\begin{proof}
\eqref{c1} follows from \cite[Thm B]{HallRydh}. It also follows from this result and the proof of \cite[Lemma 2.2]{Neeman3}
that every compact object is perfect. On the other hand it is easy to see that in this case every perfect object
is compact. This proves~\eqref{c2}. Finally \eqref{c3} follows from \cite[Thm 1.2]{HallNeemanRydh}.
\end{proof}
For
an open $U\subset X/\!\!/ G$ we write $\tilde{U}=U\times_{X/\!\!/ G} X\subset X$.
\begin{definitions}
\label{def:locgen} Let $(E_i)_{i\in I}$
be a collection of perfect objects in $D_{\operatorname{Qch}}(X/G)$. The full subcategory of $D_{\operatorname{Qch}}(X/G)$ spanned by all objects ${\cal F}$ such that for every affine open
$U\subset X/\!\!/ G$
the restriction ${\cal F}{|}\tilde{U}$ is in the smallest thick subcategory
of $D_{\operatorname{Qch}}(\tilde{U}/G)$ containing $(E_i{|}\tilde{U})_i$ is said to be \emph{locally classically generated}
by $(E_i)_{i\in I}$.
\end{definitions}
Let us say that $F,G\in D_{\operatorname{Qch}}(X/G)$ are \emph{locally isomorphic} if there exists a covering $X/\!\!/ G=\bigcup_{i\in I} U_i$
such that $F{\mid} \tilde{U}_i\cong G{\mid} \tilde{U}_i$ for all $i$. It is convenient to call a subcategory
of $D_{\operatorname{Qch}}(X/G)$ \emph{locally closed} if it is closed under local isomorphism.
\begin{lemmas} Let $(E_i)_{i\in I}$ be a collection of perfect objects
in $D_{\operatorname{Qch}}(X/G)$ and let ${\cal F}\in \Perf(X/G)$. Let $X/\!\!/
G=\bigcup_{j=1}^n U_j$ be a finite open affine covering of $X/\!\!/ G$. If
for all $j$ one has that ${\cal F}{\mid}\tilde{U}_j$ is in the smallest
thick subcategory of $D_{\operatorname{Qch}}(\tilde{U}_j/G)$ containing
$(E_i{|}\tilde{U}_j)_i$ then ${\cal F}$ is in the subcategory of $D_{\operatorname{Qch}}(X/G)$
locally classically generated
by $(E_i)_{i\in I}$.
\end{lemmas}
\begin{proof} Let $U\subset X/\!\!/ G$ be an affine open. We have to show that
${\cal F}{\mid}\tilde{U}$ is in the smallest
thick subcategory of $D_{\operatorname{Qch}}(\tilde{U}/G)$ containing $(E_i{\mid} \tilde{U})_{i\in I}$.
By
replacing $X/\!\!/ G$ by $U$ and refining the cover $U=\bigcup_{j=1}^n U\cap U_j$ to an affine
one we reduce to the case that $X/\!\!/ G$ is itself affine. In particular by Theorem \ref{th:st}\eqref{c3} (as affine schemes are separated),
$D_{\operatorname{Qch}}(X/G)$ is the derived category of $G$-equivariant $k[X]$-modules. The affine $U_j$
yield $G$-equivariant flat extensions $k[\tilde{U}_j]$ of $k[X]$.
Let ${\cal E}$ be the smallest
cocomplete triangulated subcategory of $D_{\operatorname{Qch}}(X/G)$
containing $(E_i)_{i\in I}$
and similarly let ${\cal E}_j$ be the smallest cocomplete triangulated subcategory of $D_{\operatorname{Qch}}(\tilde{U}_j/G)$ containing
$(E_i{|} \tilde{U}_j)_{i\in I}$.
By the Brown representability theorem \cite[Theorem 4.1]{Neeman}
there is a unique distinguished triangle
\[
{\cal F}_0\rightarrow {\cal F}\rightarrow {\cal F}_1\rightarrow
\]
where ${\cal F}_0\in {\cal E}$ and ${\cal F}_1\in {\cal E}^\perp$. Since $U_j$ is affine it is easy to see
that ${\cal F}_1{\mid} \tilde{U}_j\in {\cal E}_j^\perp$.
But by hypothesis ${\cal F}{\mid} \tilde{U}_j\in {\cal E}_j$ and thus ${\cal F}_1{\mid} \tilde{U}_j=0$.
Since this is true for all $j$ we conclude ${\cal F}_1=0$ and hence ${\cal F}\in {\cal E}$.
Since ${\cal F}$ is compact and ${\cal E}$ is compactly generated by Theorem \ref{th:st}\eqref{c1} the conclusion follows from \cite[Lemma 2.2]{Neeman3}.
\end{proof}
\begin{lemmas}
\label{lem:thick}
The category $\Perf(X/G)$ is locally classically generated by $(V\otimes_k {\cal O}_X)_V$
where $V$ runs through the irreducible representations of $G$.
\end{lemmas}
\begin{proof}
We may assume that $X$ is affine. In that case every $G$-equivariant finitely generated projective $k[X]$-module is generated by a finite dimensional $G$-stable subspace $V$ and hence, $G$ being reductive, it is a direct summand of $V\otimes_k {\cal O}_X$, where $V$ is a direct sum of irreducible representations.
\end{proof}
It will be convenient to pick for every $E\in D_{\operatorname{Qch}}(X/G)$ a $K$-injective resolution
$E\rightarrow I_E$ and to define $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E,F)$ as the complex of sheaves
$U\mapsto \operatorname {Hom}_{\tilde{U}}(I_E{\mid}\tilde{U},I_F{\mid}\tilde{U})^G$ on $X/\!\!/ G$.
With this definition $\Lambda:=\pi_{s\ast}\operatorname {R\mathcal{E}\mathit{nd}}_{X/G}(E):=\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E,E)$ is a sheaf of DG-algebras
on $X/\!\!/ G$
and $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E,F)$ is a sheaf of right $\Lambda$-DG-modules.
\begin{remarks}
Note that if $U\subset X/\!\!/ G$ is affine then $\Lambda{\mid} U$ is the sheaf of DG-algebras
associated to the DG-algebra $\operatorname {REnd}_{\tilde{U}/G}(E)$. We will use this routinely below.
\end{remarks}
\begin{lemmas} \label{lem:ff}
Assume that ${\cal D}\subset D_{\operatorname{Qch}}(X/G)$ is locally classically generated by the perfect complex $E$
and let $\Lambda=\pi_{s\ast}\operatorname {R\mathcal{E}\mathit{nd}}_{X/G}(E)$ be the sheaf of DG-algebras on $X/\!\!/ G$ as defined above.
The functors
\begin{align*}
{\cal D}\rightarrow \Perf(\Lambda)&:F\mapsto \pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E,F),\\
\Perf(\Lambda)\rightarrow {\cal D}&:H\mapsto H\overset{L}{\otimes}_\Lambda E,
\end{align*}
are well-defined (the second functor is computed starting from a $K$-flat resolution\footnote{Such a $K$-flat resolution is constructed in the same way as for DG-algebras
(see \cite[Theorem 3.1.b]{Keller1}). One starts from the observation that for every $M\in D(\Lambda)$ there is a morphism $\bigoplus_{i\in I} j_{i!}(\Lambda{\mid} U_i)\rightarrow M$
with open immersions $(j_i:U_i\rightarrow X/\!\!/ G)_{i\in I}$, which is an epimorphism on the level of cohomology.
}
of $H$) and yield inverse equivalences between ${\cal D}$ and $\Perf(\Lambda)$.
\end{lemmas}
\begin{proof} The two functors are adjoint functors between $D_{\operatorname{Qch}}(X/G)$ and $D(\Lambda)$. The fact that they define functors between
${\cal D}$ and $\Perf(\Lambda)$ can be checked locally. The fact that the unit and counit are invertible can also be checked locally.
\end{proof}
\begin{lemmas} \label{gldimadm} Assume that $X$ is a smooth
$k$-scheme. Let $E\in {\cal D}(X/G)$. If $\Lambda=\pi_{s\ast}\operatorname {R\mathcal{E}\mathit{nd}}_{X/G}(E)$ is a
sheaf of algebras of finite global dimension when restricted to
affine opens in $X/\!\!/ G$ then the induced fully faithful
functor (see Lemma \ref{lem:ff})
\[
I:{\cal D}(\Lambda)\rightarrow {\cal D}(X/G):H\mapsto H\overset{L}{\otimes}_\Lambda E
\]
is admissible (i.e.\ it has a left and a right adjoint).
\end{lemmas}
\begin{proof} The right adjoint to $I$ is given by $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E,-)$. To construct the left adjoint note that there is a duality
${\cal D}(\Lambda^\circ)\rightarrow {\cal D}(\Lambda)$ given by $(-)^\vee:=\operatorname {R\mathcal{H}\mathit{om}}_{\Lambda}(-,\Lambda)$. One checks that the
left adjoint to $I$ is given by $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(-,E)^\vee$.
\end{proof}
The following result shows that semi-orthogonal decompositions can be constructed locally.
\begin{propositions}
\label{th:recognition}
Let $I$ be a totally ordered set. Assume
${\cal D}\subset\Perf(X/G)$ is locally classically generated by
a collection of locally closed subcategories ${\cal D}_i\subset \Perf(X/G)$.
Assume $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}({\cal D}_i,{\cal D}_j)=0$ for $i>j$. Then ${\cal D}$ is generated by $({\cal D}_i)_i$
and in particular we have a semi-orthogonal decomposition ${\cal D}=\langle {\cal D}_i\mid i\in I\rangle$.
\end{propositions}
\begin{proof} It is clear that we may first reduce to the case that $I$ is finite and then to $|I|=2$. Hence
we assume $I=\{1,2\}$. In the same vein we may reduce to the case that the ${\cal D}_i$ are locally classically generated by single perfect complexes $(E_i)_{i=1,2}$. Put $\Lambda_i=\pi_{s\ast} \operatorname {R\mathcal{E}\mathit{nd}}_{X/G}(E_i)$.
Let $F\in {\cal D}$. Then for every affine $U\subset X/\!\!/ G$,
$F{\mid} \tilde{U}$ is in the thick subcategory of $\Perf(\tilde{U}/G)$ generated by $E_1{\mid} \tilde{U}$, $E_2{\mid} \tilde{U}$ (by the definition of local classical generation, cf.\
Definition \ref{def:locgen}).
Put $F_2=\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_2,F)\overset{L}{\otimes}_{\Lambda_2} E_2$.
Since $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_2,E_1)=0$ we deduce (checking locally) that $\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_2,F)\in \Perf(\Lambda_2)$ and hence that $F_2\in {\cal D}_2$.
Put
$F_1=\operatorname{cone}(F_2\rightarrow F)$.
Then we obtain $\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_2,F_1)=0$ (again checking locally). Let $C$ be the cone of $\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_1,F_1)\overset{L}{\otimes}_{\Lambda_1} E_1\rightarrow F_1$.
Since $\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_i,C)=0$ for $i=1,2$ we conclude $C=0$ and thus $F_1=\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/G}(E_1,F_1)\overset{L}{\otimes}_{\Lambda_1} E_1$.
It follows that if $U\subset X/\!\!/ G$ is affine
then $F_1{\mid} \tilde{U}$ is in the cocomplete subcategory of $D_{\operatorname{Qch}}(\tilde{U}/G)$ generated by $E_1{\mid} \tilde{U}$. Since $E_1{|}\tilde{U}$, $F_1{|}\tilde{U}$ are compact
we conclude by \cite[Lemma 2.2]{Neeman3} that
$F_1{\mid} \tilde{U}$ is in the thick subcategory of $\Perf(\tilde{U}/G)$ generated by $E_1{\mid} \tilde{U}$. As this is true for all $U$ we obtain that $F_1\in {\cal D}_1$.
Hence
${\cal D}$ is generated by ${\cal D}_1$,~${\cal D}_2$.
\end{proof}
\subsection{The Bia{\l}ynicki-Birula decomposition for reductive algebraic groups.}
\label{sec:red}
We recall the following.
\begin{propositions} \cite[Proposition 8.4.5, Exercise 8.4.6(5), Theorem 13.4.2]{Springer}
Let $G$ be a connected reductive algebraic group and let
$\lambda:G_m\rightarrow G$ be a one-parameter subgroup of $G$. Then $G^{\lambda}$, $G^{\lambda,\pm}$ are connected
subgroups of $G$. Moreover the $G^{\lambda,\pm}$ are parabolic subgroups of $G$ and $G^\lambda$ is the
Levi subgroup of $G^{\lambda,\pm}$.
\end{propositions}
We recall the following.
\begin{lemmas}
\label{lem:recall} Let $G$ be a connected reductive algebraic group with $T\subset B\subset G$ being
a maximal torus and a Borel subgroup of $G$. Let
$\lambda\in Y(T)^-$ and $\chi\in X(T)^+$ and let $V(\chi)$ be the irreducible $G$-representation with
highest weight $\chi$. Then $\operatorname{Res}^G_{G^\lambda} V(\chi)=V_{G^\lambda}(\chi)\oplus
\bigoplus_i V_{G^{\lambda}}(\mu_i)$
with $\langle \lambda,\mu_i\rangle>\langle \lambda,\chi\rangle$.
\end{lemmas}
\begin{proof} This is similar to the proof that $\chi$ occurs with multiplicity one
among the weights of $V(\chi)$ \cite[Proposition 2.4]{Jantzen}.
All the weights $\mu$ of $V(\chi)$ satisfy $\langle
\lambda,\mu\rangle \ge \langle \lambda ,\chi\rangle$ as $\lambda\in Y(T)^-$. Hence we have
a decomposition $V(\chi)=V(\chi)^\lambda\oplus V(\chi)^+$ where
$V(\chi)^+$ is the span of the weight vectors with weights $\mu$
such that $\langle \lambda,\mu\rangle > \langle \lambda
,\chi\rangle$. It is clear that this is a decomposition as
$G^\lambda$-modules. If $V(\chi)^\lambda$ is decomposable then it is
easy to see that its indecomposable summands generate distinct
$G$-subrepresentations of $V(\chi)$ which is impossible.
Since $V(\chi)^\lambda$ contains the weight vector with weight $\chi$ we must have $V(\chi)^\lambda=
V_{G^\lambda}(\chi)$.
\end{proof}
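For example, let $G=\operatorname{SL}_2$, let $\chi=m\varpi$ with $\varpi$ the fundamental weight and take $\lambda=-\alpha^\vee\in Y(T)^-$. Then $G^\lambda=T$, the weights of $V(m\varpi)$ are $(m-2j)\varpi$ for $0\le j\le m$, and
\[
\operatorname{Res}^G_{T} V(m\varpi)=m\varpi\oplus\bigoplus_{j=1}^{m}\,(m-2j)\varpi
\]
with $\langle\lambda,(m-2j)\varpi\rangle=2j-m>-m=\langle\lambda,m\varpi\rangle$ for $j>0$, as predicted by the lemma.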
\subsection{The $G/G_e$-action on weights}
\label{sec:definition}
Let $G$ be a reductive group such that $T\subset B\subset G_e$ are respectively a
maximal torus and a Borel subgroup of $G_e$.
Let $g\in G$ and $\sigma_g=g\,\cdot\,g^{-1}\in \Aut(G_e)$. Then
$\sigma_g(T)\subset\sigma_g(B)$ are respectively a maximal torus and a
Borel subgroup of $G_e$. Thus there exists $g_0\in G_e$ such that
$g_0\sigma_g(T)g_0^{-1}=T$, $g_0\sigma_g(B)g_0^{-1}=B$.
In the sequel if $\bar{g}\in G/G_e$ then we write $\sigma_{\bar{g}}\in
\Aut(G_e)$ for $\sigma_{g_0g}$ where $g_0g$ is an element of the coset
$\bar{g}$ such that $\sigma_{g_0g}$ preserves $(T,B)$. Since $g_0$ is
unique up to multiplication by an element of $T$,
$\sigma_{\bar{g}}$ is well defined up to conjugation by an element
of $T$. Since $\sigma_{\bar{g}}$ preserves $(T,B)$ it yields a
well defined action on $X(T)$ via $\chi\mapsto \chi\circ \sigma_{\bar{g}}$
which preserves $X(T)^+$. We will write
$\bar{g}(\chi)$ for $\chi\circ \sigma^{-1}_{\bar{g}}$. There is
also an action of $G/G_e$ on $Y(T)$ given by $\lambda\mapsto \sigma_{\bar{g}}\circ \lambda$.
Finally we have
\[
\langle \lambda,\chi\circ \sigma_{\bar{g}}\rangle=\langle \sigma_{\bar{g}}\circ\lambda,\chi\rangle.
\]
If $\lambda\in Y(T)$ then we will write $(G/G_e)^\lambda\subset G/G_e$ for the
stabilizer of $\lambda$ under the $G/G_e$-action on $Y(T)$. Let $\tilde{G}^\lambda$ be the inverse image
of $(G/G_e)^\lambda$ in $G$.
There is an obvious
inclusion $(G/G_e)^\lambda \subset G_eG^\lambda/G_e=G^\lambda/G^\lambda_e$. We will
write $H^\lambda$ for the (open) subgroup of $G^\lambda$ such that
$G^\lambda_e\subset H^\lambda$ and $H^\lambda/G^\lambda_e=(G/G_e)^\lambda $.
So $\tilde{G}^\lambda/G_e=H^\lambda/G_e^\lambda$.
For $\chi\in X(T)^+$ we put
$V_G(\chi):= \Ind^G_{B}\chi$. Note that if $G$ is not connected then
$V_G(\chi)$ will usually not be simple.
We have
\begin{equation}
\label{eq:restriction}
\operatorname{Res}^G_{G_e} V_G(\chi)=\bigoplus_{\bar{g}\in G/G_e} {}_{\sigma_{\bar{g}}} V_{G_e}(\chi)
\end{equation}
and
\begin{equation}
\label{eq:restriction1}
{}_{\sigma_{\bar{g}}}V_{G_e}(\chi)\cong V_{G_e}(\chi\circ \sigma_{\bar{g}}).
\end{equation}
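For example, take $G=\operatorname{O}_2$. Then $G_e=\operatorname{SO}_2\cong G_m$ and $T=B=G_e$, so that $X(T)^+=X(T)\cong {\mathbb Z}$ (there are no roots). The nontrivial element $\bar{g}\in G/G_e$ is represented by a reflection $g$, and conjugation by $g$ preserves $(T,B)$ and inverts $T$, so that $\bar{g}(\chi)=-\chi$. Hence for $\lambda\neq 0$ we have $(G/G_e)^\lambda=\{e\}$ and $H^\lambda=G^\lambda_e=G_m$, whereas for $\lambda=0$ we have $\tilde{G}^\lambda=H^\lambda=G$.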
\section{Reduction settings}\label{secredset}
Now we introduce our main technical tool to obtain semi-orthogonal decompositions of ${\cal D}(X/G)$.
In \S\ref{def:reductionsetting} we introduce the concept of a \emph{reduction setting}.
In \S\ref{sec:mainred} we give our main technical result about such reduction settings.
Subsequent sections are concerned with the construction of reduction settings. Since the definitions
and results are quite technical and not so easy to motivate, the reader is advised to skim this section
on first reading and come back to it afterwards.
\subsection{Definition}
\label{def:reductionsetting}
Let $G$ be a reductive group such that $T\subset B\subset G_e$ are respectively a
maximal torus and a Borel subgroup of $G_e$.
\medskip
Below we consider the
situation where $G$ acts on a variety $X$. In that case we also put for $\chi\in X(T)^+$
\[
P_\chi=V_G(\chi)\otimes_k {\cal O}_X \in \mathop{\text{\upshape{coh}}}(X/G).
\]
To indicate context we may also write $P_{G,\chi}$, $P_{G,X,\chi}$, etc.
If ${\cal L}\subset X(T)^+$
then we put $P_{\cal L}=\bigoplus_{\chi\in {\cal L}} P_\chi$.
We make the following definition (using some notation introduced in \S\ref{linear}).
\begin{definitions}
\label{ref-2.1-2}
A \emph{reduction setting}
is a tuple
$(G,B,T,X,{\cal L},\chi,\lambda)$ with the following properties:
\begin{enumerate}
\item \label{ref-1-3}
$G$ is a reductive group and $T\subset B\subset G_e$ are respectively a maximal
torus and a Borel subgroup of $G_e$.
\item $\chi\in X(T)^+$.
\item $\lambda\in Y(T)^-$.
\item ${\cal L}$ is a finite subset of $X(T)^+$
invariant under $G/G_e$.
\item \label{ref-5-4}
$\forall \mu\in {\cal L}:\langle \lambda,\chi\rangle<\langle\lambda,\mu\rangle$.
\item \label{ref-6-5} $X$ is a smooth $G$-equivariant $k$-scheme such
that a good quotient $\pi:X\rightarrow X/\!\!/ G$ exists (with associated stack morphism $\pi_s:X/G\rightarrow X/\!\!/ G$).
\label{ref-5-4-2}
\item We will show in Lemma \ref{ref-2.2-9} below that (\ref{ref-5-4}) implies
\begin{equation}
\label{ref-2.1-6}
\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}\left(P_{G,{\cal L}},\RInd^{G}_{G^{\lambda,+}_e}
(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}})\right)=0
\end{equation}
where $j$ is the inclusion $X^{\lambda,+}\hookrightarrow X$.
Consider the map
\begin{equation}
\label{ref-2.2-7}
P_{G,\chi}=
\RInd^G_{G^{\lambda,+}_e}\left(V_{G_e^\lambda}(\chi)
\otimes_k{\cal O}_X\right)\rightarrow \RInd^{G}_{G^{\lambda,+}_e} \left(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}}\right)
\end{equation}
obtained by applying $\RInd^G_{G^{\lambda,+}_e}(V_{G_e^\lambda}(\chi) \otimes_k-)$ to the obvious map
$
{\cal O}_X\rightarrow j_\ast {\cal O}_{X^{\lambda,+}}
$. Combining \eqref{ref-2.2-7} with \eqref{ref-2.1-6}
we obtain from the axioms of triangulated categories a canonical map
\begin{multline}
\label{ref-2.3-8}
\operatorname{cone}
\left(
\pi_{s\ast}\operatorname {\mathcal{H}\mathit{om}}_{X/G}(P_{G,{\cal L}},P_{G,\chi})\overset{L}{\otimes}_{\pi_{s\ast}\operatorname {\mathcal{E}\mathit{nd}}_{X/G}(P_{G,{\cal L}})} P_{G,{\cal L}} \rightarrow P_{G,\chi}
\right)\\
\rightarrow
\RInd^{G}_{G^{\lambda,+}_e}
\left(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}}\right).
\end{multline}
We require that \eqref{ref-2.3-8} is an isomorphism.
\end{enumerate}
\end{definitions}
The following lemma is necessary to complete Definition \ref{ref-2.1-2}.
\begin{lemmas} \label{ref-2.2-9}
Assume that ${\cal L}$ is as in Definition \ref{ref-2.1-2}(\ref{ref-5-4}). Then
\eqref{ref-2.1-6} holds.
\end{lemmas}
\begin{proof}
By using an affine covering of $X/\!\!/ G$ we may assume that $X$ is
affine. By adjointness we have
\begin{multline}
\label{ref-2.4-10}
\operatorname {RHom}_{X/G}(P_{G,{\cal L}},\RInd^{G}_{G^{\lambda,+}_e}
(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}}))\\=
\bigoplus_{\mu\in {\cal L}}\operatorname {Hom}_{G_e}(\operatorname{Res}^G_{G_e} \Ind^G_{G_e} V_{G_e}(\mu),
\RInd^{G_e}_{G^{\lambda,+}_e}(V_{G^\lambda_e}(\chi)\otimes_k k[X^{\lambda,+}])).
\end{multline}
By the $G/G_e$-invariance of ${\cal L}$, the simple summands of $(\operatorname{Res}^G_{G_e} \Ind^G_{G_e} V_{G_e}(\mu))_{\mu\in{\cal L}}$
are precisely the $(V_{G_e}(\mu))_{\mu\in{\cal L}}$ (see (\ref{eq:restriction}, \ref{eq:restriction1})). In other words it suffices to prove that for every $\mu$ such that
$\langle \lambda,\mu\rangle>\langle \lambda,\chi\rangle$
one has
\[
\operatorname {Hom}_{G_e}( V_{G_e}(\mu),
\RInd^{G_e}_{G^{\lambda,+}_e}(V_{G^\lambda_e}(\chi)\otimes_k k[X^{\lambda,+}]))=0.
\]
Note
\begin{align}\label{BQ}
\RInd^{G_e}_{G^{\lambda,+}_e}(V_{G^\lambda_e}(\chi)\otimes_k k[X^{\lambda,+}])
&=\RInd^{G_e}_{G^{\lambda,+}_e}(\RInd_{B}^{G_e^{\lambda,+}}(\chi\otimes_k k[X^{\lambda,+}]))\\\nonumber
&=\RInd^{G_e}_{G^{\lambda,+}_e}\RInd_{B}^{G_e^{\lambda,+}}(\chi\otimes_k k[X^{\lambda,+}])\\\nonumber
&=\RInd^{G_e}_{B}(\chi\otimes_k k[X^{\lambda,+}]).
\end{align}
Using the fact that the weights $\mu$ of $k[X^{\lambda,+}]$ all satisfy $\langle \lambda,\mu\rangle \le 0$ (see
\S\ref{ref-1.2-0}) we conclude as in the proof of \cite[Lemma 11.2.1]{SVdB} that the cohomology modules of $\RInd^{G_e}_{B}(\chi\otimes_k k[X^{\lambda,+}])$ are direct sums of $V_{G_e}(\mu)$ with $\langle \lambda,\mu\rangle \le \langle \lambda,\chi\rangle$.
This finishes the proof.
\end{proof}
\subsection{Reduction settings and {\boldmath $\operatorname {R\mathcal{H}\mathit{om}}$}}
\label{sec:mainred}
The following technical result will be our main application of reduction settings.
\begin{propositions} \label{Di}
Assume that we have a reduction setting $(G,B,T,X,{\cal L},\chi,\lambda)$
and assume $\chi'\in X(T)^+$ is such that
$\langle\lambda,\chi\rangle=\langle\lambda,\chi'\rangle$ and
$\langle\lambda,\bar{g}(\chi')\rangle>\langle\lambda,\chi\rangle$ for all $\bar{g}\not\in (G/G_e)^\lambda$.
Let $i_\lambda:X^\lambda/\!\!/H^\lambda \rightarrow X/\!\!/ G$ (see \S\ref{sec:definition} for notation)
be induced from $X^\lambda\rightarrow X$. It is clear
that $i_\lambda$ is affine so $i_{\lambda\ast}$ is exact.
Let $\pi_{s,\lambda}$ be the canonical map $X^\lambda/H^\lambda\rightarrow
X^\lambda/\!\!/ H^\lambda$.
We have isomorphisms
\begin{multline}
\label{eq:rhom}
\pi_{s\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(\RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi) \otimes_k {\cal O}_{X^{\lambda,+}}),
\RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi') \otimes_k {\cal O}_{X^{\lambda,+}}))\\
\cong
i_{\lambda\ast}\pi_{s,\lambda\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X^\lambda/H^\lambda}
(P_{H^\lambda,X^\lambda,\chi},P_{H^\lambda,X^\lambda,\chi'})
\end{multline}
Moreover such isomorphisms are compatible with composition when applicable.
\end{propositions}
\begin{proof} The right-hand side of \eqref{eq:rhom} only has cohomology in degree zero. Hence it is sufficient to construct an isomorphism like \eqref{eq:rhom} in the affine case, in a way which is compatible with
restriction. So we now assume $X$ is affine. Using \eqref{ref-2.1-6} (applied with $\chi$ replaced by $\chi'$) and \eqref{ref-2.3-8} we may replace the first argument
to $\operatorname {R\mathcal{H}\mathit{om}}_{X/G}(-,-)$ in \eqref{eq:rhom} by $P_{G,\chi}$ and as $P_{G,\chi}\cong \Ind_{G_e}^GV_{G_e}(\chi)\otimes_k k[X]$, by adjointness we have to
construct an isomorphism
\begin{multline}\label{firstdisplay}
\operatorname {RHom}_G(\Ind^G_{G_e}V_{G_e}(\chi),
\RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi') \otimes_k k[X^{\lambda,+}]))\\
\cong \operatorname {Hom}_{H^\lambda}(\Ind_{G_e^{\lambda}}^{H^\lambda}V_{G_e^\lambda}(\chi),\Ind_{G_e^{\lambda}}^{H^\lambda}V_{G_e^\lambda}(\chi')\otimes_k k[X^\lambda]).
\end{multline}
We do this next. We have
\begin{multline}\label{7.1.e1}
\operatorname {RHom}_G(\Ind^G_{G_e}V_{G_e}(\chi),
\RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi') \otimes_k k[X^{\lambda,+}]))\\
\cong
\operatorname {Hom}_{G_e^{\lambda,+}}(\operatorname{Res}_{G_e^{\lambda,+}}^{G_e} \operatorname{Res}^G_{G_e}\Ind_{G_e}^{G}V_{G_e}(\chi),
V_{G_e^\lambda}(\chi')\otimes_k k[X^{\lambda,+}])
\\
\overset{\text{\eqref{eq:restriction}}}{\cong}
\operatorname {Hom}_{G_e^{\lambda,+}}(\operatorname{Res}_{G_e^{\lambda,+}}^{G_e} \bigoplus_{\bar{g}\in G/G_e}
{}_{\sigma_{\bar{g}}} V_{G_e}(\chi),V_{G_e^\lambda}(\chi')\otimes_k k[X^{\lambda,+}])
\\\cong
\bigoplus_{\bar g\in \faktor{\tilde{G}^\lambda}{G_e}}\operatorname {Hom}_{G_e^{\lambda,+}}(V_{G_e}(\chi\circ \sigma_{\bar{g}}),V_{G_e^\lambda}(\chi')\otimes_k k[X^{\lambda,+}]),
\end{multline}
where the last isomorphism follows by \eqref{eq:restriction1}, the assumption
$\langle\lambda,\bar{g}(\chi)\rangle>\langle\lambda,\chi'\rangle$ for all $\bar{g}\not\in (G/G_e)^\lambda=
\tilde{G}^{\lambda}/G_e$ and the fact that all the weights $\mu$ of $k[X^{\lambda,+}]$ satisfy $\langle \lambda,\mu\rangle\le 0$ by \S\ref{ref-1.2-0}.
Assume $\bar{g}\in (G/G_e)^\lambda=\tilde{G}^{\lambda}/G_e$. By Lemma \ref{lem:recall}, as a $G_e^{\lambda}$-representation
$V_{G_e}(\chi\circ \sigma_{\bar{g}})$ is a direct sum of
$V_{G_e^\lambda}(\chi\circ \sigma_{\bar{g}})$ and representations of the form
$V_{G_e^{\lambda }}(\mu)$ with $\langle \lambda,\mu\rangle > \langle
\lambda,\chi\circ \sigma_{\bar{g}}\rangle=\langle
\lambda,\chi\rangle$. For reasons similar to those above, such $V_{G^\lambda_e}(\mu)$ cannot contribute to the right-hand side of \eqref{7.1.e1}, so that \eqref{7.1.e1} is isomorphic to
\begin{multline}
\label{eq:step2}
\bigoplus_{\bar g\in \faktor{\tilde{G}^\lambda}{G_e}}\operatorname {Hom}_{G_e^{\lambda,+}}(V_{G^\lambda_e}(\chi\circ \sigma_{\bar{g}}),V_{G_e^\lambda}(\chi')\otimes_k k[X^{\lambda,+}])\\
\cong
\bigoplus_{\bar g\in \faktor{\tilde{G}^\lambda}{G_e}}\operatorname {Hom}_{G_e^{\lambda}}(V_{G^\lambda_e}(\chi\circ \sigma_{\bar{g}}),V_{G_e^\lambda}(\chi')\otimes_k k[X^{\lambda}])
\end{multline}
where the isomorphism in \eqref{eq:step2} follows again by considering $\lambda$-weights and \eqref{eq:part0}. To finish the proof we recall that by definition $\tilde{G}^\lambda/G_e=H^\lambda/G^\lambda_e$ and
by \eqref{eq:restriction}
\[
\bigoplus_{\bar g\in H^\lambda/G^\lambda_e} V_{G^\lambda_e}(\chi\circ \sigma_{\bar{g}})
=\operatorname{Res}^{H^\lambda}_{G^\lambda_e}\Ind^{H^\lambda}_{G^\lambda_e} V_{G_e^\lambda}(\chi).
\]
It now suffices to apply the adjunction $(\operatorname{Res}^{H^\lambda}_{G^\lambda_e},\Ind^{H^\lambda}_{G^\lambda_e})$
to the right-hand side of \eqref{eq:step2} to obtain \eqref{firstdisplay}.
The compatibility with composition is a straightforward but tedious verification.
\end{proof}
\subsection{Reduction to unit components}
We have the following convenient fact.
\begin{propositions}\label{ref-prop-3.4}
Assume that $G$ is a reductive group acting on a smooth variety~$X$.
If
$(G_e,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting then so is $(G,B,T,X,{\cal L},\chi,\lambda)$.
\end{propositions}
\begin{proof}
We only have to verify that \eqref{ref-2.3-8} is an isomorphism and to do so we may assume that $X$ is affine.
We use Lemma \ref{lem:criterion} below.
Since $(G_e,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting, \eqref{eq:2.3-8} holds for $G=G_e$ by Lemma \ref{lem:criterion}. Applying the functor $\Ind^G_{G_e}$
then yields that it holds for $G$ as well.
\end{proof}
\begin{lemmas}
\label{lem:criterion}
Assume that $X$ is affine and assume that conditions (\ref{ref-1-3}-\ref{ref-5-4-2}) in Definition \ref{ref-2.1-2} hold. Let $\langle {\cal P}_{G,{\cal L}}\rangle$ be the smallest cocomplete subcategory of $D_{\operatorname{Qch}}(X/G)$
containing $P_{G,{\cal L}}$.
Then
\eqref{ref-2.3-8} is an isomorphism if and only if
\begin{equation}
\label{eq:2.3-8}
P^\bullet:=\operatorname{cone}(P_{G,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}_e}(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}}))[-1]\in \langle {\cal P}_{G,{\cal L}}\rangle .
\end{equation}
Moreover in that case $P^\bullet\cong \operatorname {Hom}_{X/G}(P_{G,{\cal L}},P_{G,\chi})\overset{L}{\otimes}_{\operatorname {End}_{X/G}(P_{G,{\cal L}})} P_{G,{\cal L}}$. In particular\footnote{Replacing $\operatorname {Hom}_{X/G}(P_{G,{\cal L}},P_{G,\chi})$ with its projective
resolution over $\operatorname {End}_{X/G}(P_{G,{\cal L}})$.}~$P^\bullet$ may be represented by a complex in degrees $\le 0$ whose entries are direct sums of $P_{\mu}$, $\mu\in {\cal L}$.
\end{lemmas}
\begin{proof}
Since $P_{G,{\cal L}}$ is compact by Theorem \ref{th:st}\eqref{c2}
the inclusion functor $\langle {\cal P}_{G,{\cal L}}\rangle\hookrightarrow D_{\operatorname{Qch}}(X/G)$ has a right adjoint by Brown representability \cite{Neeman}. Moreover one checks that it is explicitly given by
\[
R=\operatorname {RHom}_{X/G}(P_{G,{\cal L}},-)\overset{L}{\otimes}_{\operatorname {End}_{X/G}(P_{G,{\cal L}})} P_{G,{\cal L}}.
\]
So \eqref{ref-2.3-8} is an isomorphism
if and only if we have a distinguished triangle\footnote{Here we use that $X$ is affine to identify global and local $\operatorname {Hom}$'s.}
\[
R(P_{G,\chi})\rightarrow P_{G,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}_e}(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}})\rightarrow
\]
where the first arrow is the counit.
This in turn is true if and only if
\begin{equation}
\label{eq:iso1}
R(P_{G,\chi})\rightarrow \operatorname{cone}(P_{G,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}_e}(V_{G_e^\lambda}(\chi) \otimes_k j_\ast{\cal O}_{X^{\lambda,+}}))[-1]
\end{equation}
is an isomorphism (and in that case $P^\bullet\cong R(P_{G,\chi})$). Clearly if \eqref{eq:iso1} is an isomorphism then \eqref{eq:2.3-8} holds.
Conversely assume that \eqref{eq:2.3-8} holds.
Then \eqref{eq:iso1} is a morphism in $\langle {\cal P}_{G,{\cal L}}\rangle$. To test if it is an isomorphism
we may apply $\operatorname {RHom}_{X/G}(P_{G,{\cal L}},-)$. But then \eqref{eq:iso1} becomes the identity by \eqref{ref-2.1-6}.
Hence we are done.
\end{proof}
\subsection{Reduction to closed subschemes}\label{subschemes}
We will create reduction settings first in the linear case ($X$ being a representation) and then
we will restrict them to closed subschemes.
To do this we will use the following theorem:
\begin{theorems}
\label{ref-3.1-13}
Let $G$ be a reductive group and let $Y\subset X$ be a closed embedding
of smooth $G$-varieties. If
$(G,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting then so is
$(G,B,T,Y,{\cal L},\chi,\lambda)$.
\end{theorems}
To prove this we may assume that $X$ is affine. We first discuss a special case.
\begin{lemmas}
\label{ref-3.2-14} Assume that $(G,X,Y)$ are as in the statement of Theorem \ref{ref-3.1-13} but
with $X$ affine. Assume in addition
that there is a $G_m$-action on $X$ which commutes with the $G$-action such that
$k[X]$ has only weights $\ge 0$ and\footnote{If the $G_m$-action is denoted by $\gamma$ then this condition may also be written as
$Y=X^\gamma$, $X=X^{\gamma,-}$.} $k[Y]=k[X]^{G_m}$. Then the conclusion of Theorem \ref{ref-3.1-13} holds.
\end{lemmas}
\begin{proof} Assume that $(G,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting. If we replace the $\operatorname {End}_{X/G}(P_{G,{\cal L}})$-module $\operatorname {Hom}_{X/G}(P_{G,{\cal L}},P_{G,\chi})$ in \eqref{ref-2.3-8} by its projective resolution we obtain a resolution
\begin{equation}
\label{ref-3.1-15a}
P^\bullet_{X}\cong \operatorname{cone}(P_{G,X,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}_e}
(V_{G_e^\lambda}(\chi) \otimes_k k[X^{\lambda,+}]))[-1]
\end{equation}
as in Lemma \ref{lem:criterion} (we have switched to coordinate ring notation). Moreover we may assume
that this resolution is $G_m$-equivariant. In addition since $k[X^{\lambda,+}]$ is a quotient of $k[X]$ it only
has $G_m$-weights $\ge 0$ and this property is not affected by applying $\RInd^{G}_{G^{\lambda,+}_e}
(V_{G_e^\lambda}(\chi) \otimes_k -)$. We conclude that as a $G\times G_m$-equivariant $k[X]$-module $P^n_X$ may be
assumed to be a direct sum of modules $P_{G,X,\mu}\otimes_k \sigma_m$, $m\ge 0$, where $\sigma_m$ is the $G_m$-character $z\mapsto z^m$ and $\mu\in {\cal L}$.
We then have
\[
(P_{G,X,\mu}\otimes_k \sigma_m)^{G_m}=
\begin{cases}
P_{G,Y,\mu}&\text{if $m=0$,}\\
0&\text{if $m>0$.}
\end{cases}
\]
Taking $G_m$-invariants of \eqref{ref-3.1-15a} we now get a similar resolution
\[
P^\bullet_{Y}\cong\operatorname{cone}(P_{G,Y,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}_e}
(V_{G_e^\lambda}(\chi) \otimes_k k[X^{\lambda,+}])^{G_m})[-1].
\]
Furthermore, since the $G$- and $G_m$-actions do not interfere with each other, we have
\[
\RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi) \otimes_k k[X^{\lambda,+}])^{G_m}= \RInd^{G}_{G^{\lambda,+}_e} (V_{G_e^\lambda}(\chi) \otimes_k k[X^{\lambda,+}]^{G_m})
\]
and finally using the description of $k[X^{\lambda,+}]$ in \S\ref{ref-1.2-0} we easily see that
$k[X^{\lambda,+}]^{G_m}=k[Y^{\lambda,+}]$. We conclude that \eqref{eq:2.3-8} holds for $Y$. This finishes the proof by Lemma \ref{lem:criterion}.
\end{proof}
We will now reduce the proof of Theorem \ref{ref-3.1-13} to the special case considered in Lemma \ref{ref-3.2-14} using the Luna slice
theorem.
\begin{lemmas}
\label{ref-3.3-16} Let $G$ be a linear algebraic group acting on an affine variety
$X$ and let $P\in D_{\operatorname{Qch}}(X/G)$. Assume that $P$ is zero in the neighborhood
of any point with closed orbit, i.e.\ any $x\in X$ such that $Gx$ is closed
has an open neighborhood $U_x$ such that $P{\mid} U_x=0$. Then $P=0$.
\end{lemmas}
\begin{proof}
Put $U'=\bigcup_x U_x$ and $U=GU'$. Then $P{\mid} U=0$ so it is
sufficient to prove $U=X$. Assume this is not the case. Since $X-U$
is closed and $G$-invariant it contains a closed orbit (e.g.\ an
orbit of minimal dimension). This is an obvious contradiction.
\end{proof}
\begin{lemmas} \label{ref-3.4-17} Assume that $(G,B,T,X,{\cal L},\chi,\lambda)$ is
such that the conditions (\ref{ref-1-3}-\ref{ref-6-5}) from
Definition \ref{ref-2.1-2} hold and that $X$ is affine, and
write $C_X$ for the cone of \eqref{ref-2.3-8}. If $\alpha:Z\rightarrow X$ is
a strongly \'etale $G$-equivariant morphism
then $\alpha^\ast (C_X)=C_Z$.
\end{lemmas}
\begin{proof} This ultimately boils down to $\alpha^\ast {\cal O}_{X^{\lambda,+}}={\cal O}_{Z^{\lambda,+}}$ which is true thanks to Lemma \ref{ref-1.2.1-1}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{ref-3.1-13}]
We assume that $(G,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting
and $X$ is affine. Thus $C_X=0$
and we have to deduce that $C_Y=0$. According to Lemma
\ref{ref-3.3-16} it suffices to do this in the neighborhood of any
closed orbit. So let $Gy$ be a closed orbit in $Y$ and let $N_Y$ be a
$G_y$-invariant complement to $T_y(Gy)$ in $T_y(Y)$. Then according to the
Luna slice theorem \cite{Luna} there is an affine $G_y$-invariant
``slice'' $y\in S\subset Y$ to the orbit of $y$ and a strongly
$G_y$-equivariant \'etale morphism $S\rightarrow N_Y$ which sends $y$ to $0$
such that the induced maps
\[
Y\xleftarrow{\alpha} G\times^{G_y} S\xrightarrow{\beta} G\times^{G_y} N_Y
\]
are strongly \'etale. Then by Lemma \ref{ref-3.4-17} we have $\alpha^\ast(C_Y)=\beta^\ast(C_{ G\times^{G_y} N_Y})$.
Thus it is sufficient to prove that $C_{G\times^{G_y} N_Y}=0$.
By assumption $Gy$ is closed in $X$. Let $V$ be a $G_y$-invariant complement to $T_y(Y)$ in $T_y(X)$
and put
$N_X:=N_Y\oplus V$. Since $C_X=0$, by the same reasoning as above we conclude
that $C_{ G\times^{G_y} N_X}$ is zero in a neighborhood of the zero section of $G\times^{G_y} N_X\rightarrow G/G_y$. However
note that since $ C_{ G\times^{G_y} N_X}$ is natural, it is in particular equivariant for the scalar $G_m$-action on $N_X$. So in fact $ C_{ G\times^{G_y} N_X}=0$.
Now let $G_m$ act on $N_X=N_Y\oplus V$ by acting trivially on $N_Y$ and with weight $-1$ on~$V$. Then
the inclusion $ G\times^{G_y} N_Y\hookrightarrow G\times^{G_y} N_X$ falls under the setting
considered in Lemma \ref{ref-3.2-14}. We conclude from this lemma that $C_{G\times^{G_y} N_Y}=0$, finishing the proof.
\end{proof}
\subsection{Reduction settings in the connected linear case}
We use the notation and conventions introduced in \S\ref{linear}.
We need the twisted
Weyl group action of ${\cal W}$ on $X(T)$: $w{\ast}\chi:=w(\chi+\bar{\rho})-\bar{\rho}$. If $\chi\in X(T)$
and there is some $w{\ast} \chi$ which is dominant then we write $\chi^+=w{\ast} \chi$. Otherwise
$\chi^+$ is undefined.
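For instance, take $G=\operatorname{SL}_2$, identify $X(T)$ with ${\blb Z}$ and assume (the standard normalization) that $\bar{\rho}$ is the half sum of the positive roots, so that $\bar{\rho}=1$. If $s$ denotes the non-trivial element of ${\cal W}$ then
\[
s{\ast}\chi=s(\chi+1)-1=-\chi-2,
\]
so that $\chi^+=\chi$ for $\chi\ge 0$ and $\chi^+=-\chi-2$ for $\chi\le -2$, while $\chi^+$ is undefined precisely for $\chi=-1=-\bar{\rho}$, the unique fixed point of the ${\ast}$-action.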
\begin{propositions}\label{-iC}
Let $G$ be a connected reductive group and assume
$B,T,\chi,\lambda,{\cal L}$ satisfy (\ref{ref-1-3}-\ref{ref-5-4}) in Definition
\ref{ref-2.1-2}. Let $X=W^\vee$ where $W$ is a $G$-representation
with weights $\beta_1,\ldots,\beta_d$. Assume
$(\chi+\beta_{i_1}+\dots+\beta_{i_{-p}})^+\in {\cal L}$ for all
$\emptyset\neq\{i_1,\ldots,i_{-p}\}\subseteq\{1,\ldots,d\}$,
$i_j\neq i_{j'}$ for $j\neq j'$ such that $(\chi+\beta_{i_1}+\dots+\beta_{i_{-p}})^+$ is defined. Then
$(G,B,T,X,{\cal L},\chi,\lambda)$ is a reduction setting.
\end{propositions}
\begin{proof}
We only have to verify \eqref{ref-2.3-8}.
We denote by
$K_\lambda$ the subspace of $W$ spanned by the weight vectors $w_j$
such that $\langle \lambda,\beta_j\rangle>0$. Note that $\operatorname {Spec}
\operatorname{Sym}(W/K_\lambda)\cong X^{\lambda,+}$.
In \cite[(11.3)]{SVdB} we constructed a quasi-isomorphism
\begin{equation}
\label{quasiiso}
C_{\lambda,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}} (V_{G^\lambda}(\chi) \otimes_k k[X^{\lambda,+}]),
\end{equation}
where $C_{\lambda,\chi}$ is a complex of the form
\begin{align}\label{ref-11.3-98}
C_{\lambda,\chi}\overset{\text{def}}{=}&\left(\bigoplus_{p\le 0,q\ge 0} R^q\Ind^G_B(\chi\otimes_k \wedge^{-p} K_{\lambda})\otimes_{k} k[X][-p-q],d\right).
\end{align}
We
showed that after forgetting the differential $C_{\lambda,\chi}$ is a sum of
$G$-equivariant projective modules of the form ${P}_{\mu}$ where the $\mu$ are among
the weights
\begin{equation}
\label{eq:weights}
(\chi+\beta_{i_1}+\beta_{i_2}+\cdots+\beta_{i_{-p}})^+
\end{equation}
(with each such expression occurring at most once)
where $\{i_1,\ldots,i_{-p}\}\subset\{1,\ldots,d\}$, $i_j\neq i_{j'}$ for
$j\neq j'$ and $\langle\lambda,\beta_{i_j}\rangle>0$ \cite[Lemma 11.2.1]{SVdB}.
Moreover there is a single copy of $P_\chi$ which lives in degree
zero. It is not explicitly stated in loc.\ cit.\ but it follows easily
from the construction that this copy of $P_\chi$ yields an inclusion $P_\chi\rightarrow
C_{\lambda,\chi}$ such that the composition $P_\chi\rightarrow
C_{\lambda,\chi}\rightarrow \RInd^{G}_{G^{\lambda,+}} (V_{G^\lambda}(\chi)
\otimes_k k[X^{\lambda,+}])$ is the canonical morphism exhibited in \eqref{ref-2.2-7}.
Let $C'_{\lambda,\chi}=C_{\lambda,\chi}/P_\chi$. Then by the fact that \eqref{quasiiso} is a quasi-isomorphism we have $C'_{\lambda,\chi}\cong \operatorname{cone}(P_\chi\to \RInd^{G}_{G^{\lambda,+}} (V_{G^\lambda}(\chi)
\otimes_k k[X^{\lambda,+}]))$.
Since by hypothesis the summands $P_\mu$ of $C'_{\lambda,\chi}$ are summands
of $P_{\cal L}$ we find that \eqref{eq:2.3-8} holds and hence
we are done by Lemma \ref{lem:criterion}.
\end{proof}
\section{Partitioning $X(T)^+$}\label{partition}
\subsection{Preliminaries}
\label{sec:prelim}
We assume we are in the setting of \S\ref{linear}. In particular $G$ is
connected and acts on a representation $X=W^\vee$.
We now introduce some extra notation.
We let $\Phi\subset X(T)$ be the roots of $G$. We
write $\Phi^-$ for the negative roots of $G$ (the roots of $B$) and
$\Phi^+$ for the positive roots. We choose a positive definite ${\cal W}$-invariant quadratic form $(-,-)$ on $X(T)_{\blb R}$.
If $\alpha\in \Phi$ then $\check{\alpha}\in Y(T)_{\blb R}$ is the corresponding coroot defined by
$\langle \check{\alpha},\chi\rangle=2(\alpha,\chi)/(\alpha,\alpha)$
and the associated reflection on
$X(T)_{\blb R}$ is defined by $s_\alpha(\chi)=\chi-\langle\check{\alpha},\chi\rangle\alpha$. We put
$\Phi^\vee=\{\check{\alpha}\mid \alpha\in \Phi\}$.
We set $ \Phi_\lambda=\{\alpha\in \Phi\mid
\langle\lambda,\alpha\rangle=0\}. $ We denote by
$\Phi_\lambda^+=\Phi^+\cap\Phi_\lambda$ the set of positive roots of
$G^\lambda$.
Let us recall a slightly extended version of \cite[Corollary D.3.]{SVdB} with the same proof.
\begin{lemmas}\label{ref-A.2}
Let $\lambda\in Y(T)_{\blb R}^-$, $w\in {\cal W}$, $\chi\in X(T)$ such that $w{\ast}\chi$ is dominant. Then
$\langle \lambda, w{\ast}\chi\rangle\le \langle \lambda, \chi\rangle$
with equality if and only if $w \in {\cal W}_{G^\lambda}$.
\end{lemmas}
\begin{proof}
When comparing with \cite[Corollary
D.3.]{SVdB} note that in loc.\ cit. the role of $\lambda$ is played
by $y$, which is assumed to be dominant, rather than anti-dominant
which is the case here. So the inequalities are reversed.
Since $\chi^+:=w{\ast}\chi$ is dominant we may write $\chi^+=s_{\alpha_n}{\ast}\cdots
{\ast}s_{\alpha_1} {\ast}\chi$
in such a way that for each $\chi_i:=s_{\alpha_i}{\ast}\cdots {\ast} s_{\alpha_1}{\ast}\chi$
the inequality $\langle\check{\alpha}_{i+1},\chi_i\rangle\le -2$ holds.
In loc.\ cit.\ it is shown that $\langle\lambda,w{\ast}\chi\rangle\le \langle\lambda ,\chi\rangle$.
Going through the proof we see that the only possibility for equality to occur is when $\langle \lambda,\alpha_i\rangle=0$ for all $i$. But then $w':=s_{\alpha_n}\cdots
s_{\alpha_1}\in {\cal W}_{G^\lambda}$. Since $\chi^+$ is dominant it has trivial stabilizer
for the $\ast$-action. Since $(ww'^{-1})\ast\chi^+=\chi^+$ we conclude $w=w'\in {\cal W}_{G^\lambda}$.
\end{proof}
We will also need the following variant.
\begin{lemmas}\label{ref-A.21}
Let $\lambda\in Y(T)_{\blb R}^-$, $w\in {\cal W}$, $\chi\in X(T)$ such that $w\chi$ is dominant. Then
$\langle \lambda, w\chi\rangle\le \langle \lambda, \chi\rangle$
with equality if and only if $w\chi \in {\cal W}_{G^\lambda}\chi$.
\end{lemmas}
\begin{proof} The proof is along the same lines as the proof of Lemma \ref{ref-A.2} except that $w\chi$ may have a non-trivial stabilizer. This accounts for the slightly weaker conclusion.
\end{proof}
We define
\begin{align*}
T_\lambda^+=\{i\mid \langle\lambda,\beta_i\rangle> 0\},\quad
T_\lambda^0=\{i\mid \langle\lambda,\beta_i\rangle= 0\},\quad
T_\lambda^-=\{i\mid \langle\lambda,\beta_i\rangle< 0\}.
\end{align*}
A point $x\in X$ is {\em stable} if it has closed orbit and finite
stabilizer. $X$ has a $T$-stable point if and only if for every
$\lambda\in Y(T)\setminus\{0\}$ there exists $i$ such that
$\langle\lambda,\beta_i\rangle>0$ (i.e., the weights do not all lie in a closed
half space bounded by a hyperplane through the origin).
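For instance, if $T=G_m$ and the weights of $W$ are $\beta_1=1$, $\beta_2=-1$ then this criterion holds. If instead $\beta_1=1$, $\beta_2=2$ then it fails for $\lambda=-1$; indeed, all the $T$-weights of $X=W^\vee$ then lie strictly on one side of the origin, so every $T$-orbit closure in $X$ contains $0$ and no point of $X$ is $T$-stable.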
\medskip
\emph{In the rest of this section we assume that $X=W^\vee$ has a $T$-stable point.}
\subsection{Expression of $\chi$ in terms of faces of $\Sigma$}
\label{sec:interms}
As $X$ has a $T$-stable point, $0$ lies in the interior of the positive span of
$(\beta_i)_i$ and in particular $-\bar \rho+\bigcup_r
r\Sigma=X(T)_{\blb R}$; thus every $\chi\in X(T)$ with $\chi\neq -\bar{\rho}$ lies in the
relative interior of a unique proper face of $-\bar\rho+r\bar{\Sigma}$ for
a unique $r>0$. We will partition the set $X(T)^+$ according to the
relative interiors of faces of $-\bar{\rho}+r\bar{\Sigma}$ to which its elements belong. However for convenience
we will not use the faces directly but rather some equivalent combinatorial data
associated to them.
\medskip
For a set $S$ let ${\cal P}(S)$ be its power set.
We put a partial ordering $\prec$
on ${\blb R}^+\times {\mathcal P}(\{1,\dots,d\})^3$ by declaring $(r,S^+,S^-,S^0)\preceq (r',S^{\prime +},S^{\prime -},S^{\prime 0})$
if either $r<r'$ or else $r=r'$, $S^+\subset S^{\prime +}$ and $S^-\subset S^{\prime -}$.
If ${\bf S}=(S^+,S^-,S^0)$ then we write $|{\bf S}|=(|S^+|,|S^-|,|S^0|)$.
Let ${\blb R}^+\times {\blb N}^3$ be equipped with the (total) lexicographic ordering. There
is an order preserving map
\begin{equation}
\label{eq:order}
({\blb R}^+\times {\mathcal P}(\{1,\dots,d\})^3,\prec)\rightarrow ({\blb R}^+\times {\blb N}^3,<):(r,{\bf S})\mapsto (r,|{\bf S}|)
\end{equation}
whose fibers consist of mutually incomparable elements.
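For example, let $d=2$. Then
\[
(1,\{1\},\emptyset,\{2\})\prec (1,\{1,2\},\emptyset,\emptyset),
\]
since the first coordinates agree and $\{1\}\subset\{1,2\}$, $\emptyset\subset\emptyset$; correspondingly $(1,(1,0,1))<(1,(2,0,0))$ in ${\blb R}^+\times {\blb N}^3$. On the other hand $(1,\{1\},\emptyset,\{2\})$ and $(1,\{2\},\emptyset,\{1\})$ are incomparable, and both lie in the fiber of \eqref{eq:order} over $(1,(1,0,1))$.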
\begin{lemmadefinitions}\label{ref-1.5}
\begin{enumerate}
\item
For $\chi\in X(T)$ with $\chi\neq -\bar{\rho}$ there exists an expression of the form
\begin{equation}\label{expresschi}
\chi=-\bar\rho-r\sum_{i\in S^+}\beta_i+0\sum_{i\in S^-}\beta_i+\sum_{i\in S^0}b_i\beta_i,
\end{equation}
where $S^+\neq \emptyset$, $r>0$,
$\forall i:-r<b_i< 0$ and $S^+\coprod S^-\coprod S^0=\{1,\dots,d\}$.
There is a unique tuple $(r_\chi,{\bf S}_\chi):=(r_{\chi},S_{\chi}^+,S_{\chi}^-,S_{\chi}^0)$
for which $(r_\chi,|{\bf S}_\chi|)$ is minimal among the tuples attached to expressions of the form \eqref{expresschi}.
If $\chi=-\bar{\rho}$
then we put by convention $r_\chi=0$, $S_\chi^+=S_\chi^-=\emptyset$ (although \eqref{expresschi} is then not true). Below we refer to this situation as the ``trivial case''.
\item\label{tri}
If $\chi\in X(T)^+$ then there exists $\lambda\in Y(T)^-$ with the properties:
$S_{\chi}^{-}=T_\lambda^-$, $S_{\chi}^+=T_\lambda^+$, $S_{\chi}^0=T_\lambda^0$.
\end{enumerate}
\end{lemmadefinitions}
\begin{remarks} If $\chi\in X(T)^+$ then the trivial case is equivalent to $G=T$ and $\chi=0$.
\end{remarks}
\begin{remarks}\label{rmk:faces} It will follow from the proof of Lemma \ref{ref-1.5} as well as
Lemma \ref{eq:supporting} below that
the data $(r_\chi,{\bf S}_\chi)$ identifies which
proper face $F$ of $-\bar{\rho}+r_\chi\Sigma$ contains~$\chi$ in its relative
interior. Reformulating Lemma \ref{eq:supporting} one has
\begin{equation}
\label{eq:relint}
\operatorname {relint} F=\biggl\{ -\bar\rho-r_{\chi}\sum_{i\in S_{\chi}^+}\beta_i+\sum_{i\in S_\chi^0}b_i\beta_i\mid \forall i:-r_\chi<b_i<0\biggr\}
\end{equation}
Furthermore $\lambda$ as in Lemma \ref{ref-1.5}\eqref{tri} defines an appropriately chosen
supporting plane for that face. Finally the $\prec$-ordering is opposite to the ordering given by inclusion of faces.
\end{remarks}
\begin{proof}[Proof of Lemma \ref{ref-1.5}]
The existence of an expression with minimal $(r_\chi,|{\bf
S}_\chi|)$ is obvious. To prove the uniqueness of the associated
tuple $(r_{\chi},{\bf S}_{\chi})$ assume that there are two minimal
expressions with different associated tuples. Taking their average
we obtain an expression which is strictly smaller than both the original
expressions, contradicting the minimality.
We will now prove \eqref{tri}. If we are in the trivial
case
then we take
$\lambda=0$.
So we will now assume we are not in the trivial case.
We take $r$ minimal such that $\chi\in -\bar \rho+r\bar\Sigma$ (and hence $r_{\chi}=r$).
By \cite[Lemma C.2]{SVdB} (and as all $\beta_i\in X(T)$) there exists $0\neq \lambda\in Y(T)$
such that $\langle\lambda,\chi\rangle<\langle\lambda,\mu\rangle$ for all $\mu\in -\bar\rho+r_{\chi}\Sigma$
and $\chi$ can be written as
\begin{equation}
\label{eq:chiexpr}
\chi=-\bar\rho-r_{\chi}\sum_{i\in T_{\lambda}^+}\beta_i+\sum_{i\in T_{\lambda}^0}b_i\beta_i,
\end{equation}
where $-r_{\chi}<b_i<0$. Thus by Lemma \ref{eq:supporting} below $\chi+\bar{\rho}$ is
in the relative interior of the face $F$ of $r_\chi\bar{\Sigma}$ defined
by the supporting half plane
$\langle\lambda,\chi+\bar{\rho}\rangle\le\langle\lambda,-\rangle$. Let $w\in {\cal W}$ be such
that $w{\lambda}\in Y(T)^-$. By the discussion preceding \cite[(11.4)]{SVdB},
$\langle w\lambda,\chi+\bar{\rho}\rangle\le\langle w\lambda,-\rangle$ is still a
supporting half plane for $r_\chi\bar{\Sigma}$. It is easy to see
that the corresponding face must be equal to $wF$.
Since the face still contains $\chi+\bar{\rho}$ we must have $F=wF$.
It follows again
from Lemma \ref{eq:supporting} below that \eqref{eq:chiexpr} remains
true with ${\lambda}$ replaced by $w{\lambda}$. So we now assume ${\lambda}\in Y(T)^-$.
By Observation (3)
in the proof of \cite[Theorem 1.4.1]{SVdB} (applied to both endpoints of the interval $[-r_\chi,0]$)
we find that in any expression of the form \eqref{expresschi} we must have $T_{\lambda}^+\subset S^+$,
$T_{\lambda}^-\subset S^-$. Since $(|S_\chi^+|,|S_\chi^-|)$ is minimal this implies
$S_{\chi}^\pm=T_{{\lambda}}^\pm$, $S_{\chi}^0=T_{\lambda}^0$, establishing \eqref{tri}.
\end{proof}
We have used the following result.
\begin{lemmas} \label{eq:supporting}
Let $\chi\in X(T)_{\blb R}$, $0\neq \lambda\in Y(T)_{\blb R}$ be such that
the equation $\langle {\lambda},\chi\rangle = \langle {\lambda},-\rangle$ defines a supporting half-plane of $\bar{\Sigma}$. Then $\chi$ is in the relative interior of the corresponding face if and only if $\chi$ can be written as
\[
\chi=-\sum_{i\in T_{\lambda}^+}\beta_i+\sum_{i\in T_{\lambda}^0}b_i\beta_i,
\]
with $-1<b_i<0$.
\end{lemmas}
\begin{proof} Let $H$ be the hyperplane $\langle {\lambda},\chi\rangle = \langle {\lambda},-\rangle$. Then
$
F=H\cap \bar{\Sigma}
$
is given by those $\mu=\sum_i c_i\beta_i$ such that $\langle {\lambda},\chi\rangle = \langle {\lambda},\mu\rangle$
and
\begin{align*}
i\in T^+_{\lambda} &\Rightarrow c_i=-1,\\
i\in T^-_{\lambda} &\Rightarrow c_i=0,\\
i\in T^0_{\lambda} &\Rightarrow c_i\in [-1,0].
\end{align*}
This follows in fact from Observation (3)
in the proof of \cite[Theorem 1.4.1]{SVdB} (applied to both endpoints of the interval $[-1,0]$). It
follows that $\operatorname {relint} F$ is given by those $\mu=\sum_i c_i\beta_i$ such that $\langle {\lambda},\chi\rangle = \langle {\lambda},\mu\rangle$
and
\begin{align*}
i\in T^+_{\lambda} &\Rightarrow c_i=-1,\\
i\in T^-_{\lambda} &\Rightarrow c_i=0,\\
i\in T^0_{\lambda} &\Rightarrow c_i\in ]-1,0[.
\end{align*}
Applying this with $\mu=\chi$ yields what we want.
\end{proof}
\begin{lemmas} \label{ref-1.5bis}
Assume that $\lambda\in Y(T)^-$ corresponds to $\chi\in X(T)^+$ as in Lemma \ref{ref-1.5}\eqref{tri}. Then the following properties hold:
\begin{enumerate}
\item\label{ena}
Let $\mu \in X(T)^+$. If $({r}_{\chi},{\bf S}_\chi)\not\preceq({r}_{\mu},{\bf S}_{\mu})$ then
$\langle\lambda,\chi\rangle<\langle\lambda,\mu\rangle$.
Similarly if $({r}_{\chi},{\bf S}_\chi)=({r}_{\mu},{\bf S}_{\mu})$ then
$\langle\lambda,\chi\rangle=\langle \lambda,\mu\rangle$.
\item\label{dva}
The sets $\{\beta_i\mid i\in S_{\chi}^+\}$, $\{\beta_i\mid i\in S_{\chi}^-\}$ are ${\mathcal W}_{G^\lambda}$-invariant.
\end{enumerate}
\end{lemmas}
\begin{proof}
\eqref{dva} is immediate from Lemma \ref{ref-1.5}\eqref{tri}
since $\lambda$ is stabilized by ${\cal W}_{G^\lambda}$.
Now we verify \eqref{ena}. Again it is sufficient to consider the
non-trivial case. The second claim is easy so we
discuss the first one. We have
\[
\mu=-\bar\rho-r_\mu\sum_{i\in S^+_\mu}\beta_i+\sum_{i\in S^0_\mu}b_i\beta_i,
\]
with $-r_\mu<b_i<0$. Write $\ss_i=\langle \lambda,\beta_i\rangle$. Then using Lemma \ref{ref-1.5}\eqref{tri}
\begin{equation}
\label{eq:scalar}
\langle \lambda ,\chi\rangle=-\langle \lambda,\bar{\rho}\rangle
-r_\chi\sum_{i\in T_\lambda^+}\ss_i
\end{equation}
and
\begin{equation}
\label{eq:chain}
\begin{aligned}
\langle \lambda,\mu\rangle&=
-\langle \lambda,\bar{\rho}\rangle-r_\mu\sum_{i\in S^+_\mu}\ss_i+
\sum_{i\in S^0_\mu}b_i\ss_i\\
&=-\langle \lambda,\bar{\rho}\rangle-r_\mu\sum_{i\in S^+_\mu\cap T_\lambda^+}\ss_i
-r_\mu\sum_{i\in S^+_\mu\cap T_\lambda^-}\ss_i+
\sum_{i\in S^0_\mu\cap T^+_\lambda}b_i\ss_i
+\sum_{i\in S^0_\mu\cap T^-_\lambda}b_i\ss_i\\
&\ge -\langle \lambda,\bar{\rho}\rangle-r_\mu\sum_{i\in S^+_\mu\cap T_\lambda^+}\ss_i
-0\sum_{i\in S^+_\mu\cap T_\lambda^-}\ss_i
-r_\mu\sum_{i\in S^0_\mu\cap T^+_\lambda}\ss_i
+0\sum_{i\in S^0_\mu\cap T^-_\lambda}\ss_i\\
&= -\langle \lambda,\bar{\rho}\rangle-r_\mu\sum_{i\in (S^+_\mu\cup S^0_\mu)\cap T_\lambda^+}\ss_i\\
&\ge -\langle \lambda,\bar{\rho}\rangle-r_\mu\sum_{i\in T_\lambda^+}\ss_i.
\end{aligned}
\end{equation}
The total inequality will be strict if any of the following conditions hold:
\begin{align*}
S^+_\mu\cap T_\lambda^-&\neq \emptyset\qquad \text{ or}\\
S^0_\mu\cap (T_\lambda^+\cup T^-_\lambda)&\neq \emptyset\qquad \text{ or}\\
(S^+_\mu\cup S^0_\mu)&\not\supset T_\lambda^+,
\end{align*}
which is equivalent to any of the following conditions holding
\begin{equation}
\label{eq:strict}
\begin{aligned}
S^+_\mu&\not\supset T_\lambda^+\qquad\text{ or}\\
S^-_\mu&\not\supset T_\lambda^-\qquad\text{ or}\\
S^0_\mu&\not\supset T_\lambda^0.
\end{aligned}
\end{equation}
To prove \eqref{ena} we have to show
\[
(r_\chi,{\bold S}_\chi)\not\preceq (r_\mu,{\bold S}_\mu)\Rightarrow \langle \lambda,\chi\rangle<\langle \lambda,\mu\rangle.
\]
The condition on the left hand side is equivalent to any of the following conditions holding
\begin{equation}
\label{eq:partial}
\begin{gathered}
r_\chi>r_\mu\qquad \text{ or}\\
r_\chi=r_\mu\text{ and } T^+_\lambda\not\subset S^+_\mu\qquad \text{ or}\\
r_\chi=r_\mu\text{ and } T^-_\lambda\not\subset S^-_\mu.
\end{gathered}
\end{equation}
If $r_\chi>r_\mu$ then $\langle \lambda,\chi\rangle<\langle \lambda,\mu\rangle$ by comparing
\eqref{eq:chain} to \eqref{eq:scalar} using the fact that $S^+_\chi=T^+_\lambda\neq \emptyset$.
If $r_\chi=r_\mu$ then \eqref{ena} follows by comparing \eqref{eq:strict} with
\eqref{eq:partial}.
\end{proof}
\begin{corollarys}\label{betaplus}
Let $\chi\in X(T)^+$ be such that $r_\chi\ge 1$ and let $\lambda\in Y(T)^-$ be as in Lemma \ref{ref-1.5}(\ref{tri}).
If $p>0$ and $\mu=\chi+\beta_{i_1}+\cdots+\beta_{i_{p}}$, where $\{i_1,\ldots,i_{p}\}\subset\{1,\ldots,d\}$, $i_j\neq i_{j'}$ for
$j\neq j'$ and $\langle\lambda,\beta_{i_j}\rangle>0$,
then $({r}_{{\mu}^+},|{\bf S}_{{\mu}^+}|)<({ r}_\chi,|{\bf S}_\chi|)$.
Moreover, $\langle\lambda,\chi\rangle<\langle\lambda,\mu^+\rangle$.
\end{corollarys}
\begin{proof}
By property \eqref{tri} in Lemma \ref{ref-1.5} every $k$ for which $\langle\lambda,\beta_k\rangle>0$ belongs to $S_{\chi}^+$, and thus either ${r}_{\mu}<{ r}_{\chi}$, or ${r}_{\mu}= {r}_{\chi}$ and $|{\bf S}_{\mu}|<|{\bf S}_{\chi}|$.
In other words $(r_\mu,|{\bf S}_\mu|)<(r_{\chi},|{\bf S}_\chi|)$.
As (${r}_{\mu}$, $|{\bf S}_{\mu}|$) depends only on the ${\mathcal W}$-orbit of $\mu$ for the $*$-action,
we also have $(r_{\mu^+},|{\bf S}_{\mu^+}|)<(r_{\chi},|{\bf S}_\chi|)$ and hence $(r_{\mu^+},{\bf S}_{\mu^+})
\not\succeq(r_{\chi},{\bf S}_\chi)$.
Property \eqref{ena} in Lemma \ref{ref-1.5bis} then implies $\langle \lambda,\mu^+\rangle>\langle\lambda,{\chi}\rangle$.
\end{proof}
We denote
\begin{align*}
\chi_p&=-r_{\chi}\sum_{i\in S_{\chi}^+}\beta_i.
\end{align*}
The following lemma gives a description of the set of $\chi$ with given $(r_\chi,{\bf S}_\chi)$
in terms of objects related to $G^\lambda$.
\begin{lemmas} \label{chi_lambda}
Let $\chi\in X(T)^+$ and let $\lambda$ be as in Lemma \ref{ref-1.5}(\ref{tri}).
Then the set
\begin{equation}
\label{eq:set}
\{\chi'\in X(T)^+\mid (r_{\chi'},{\bf S}_{\chi'})= (r_{\chi},{\bf S}_{\chi})\}
\end{equation}
is equal to
\[
(\nu-\bar{\rho}_\lambda+r_\chi \Sigma^0_\lambda)\cap X(T)^\lambda
\]
where
\[
\nu= -\bar\rho+\bar\rho_\lambda+\chi_p.
\]
Moreover $\nu$ is ${\cal W}_{G^\lambda}$-invariant.
\end{lemmas}
\begin{proof}
The fact that $\nu$ is ${\cal W}_{G^\lambda}$-invariant is a direct
consequence of Lemma \ref{ref-1.5bis} \eqref{dva} and the standard fact that $-\bar{\rho}+\bar{\rho}_\lambda$ is ${\cal W}_{G^\lambda}$-invariant.
By Lemma \ref{ref-1.5} and \eqref{eq:relint} we have
\[
\{\chi'\in X(T)^+\mid (r_{\chi'},{\bf S}_{\chi'})= (r_{\chi},{\bf S}_{\chi})\}=(-\bar{\rho}+\chi_p+r_\chi\Sigma^0_\lambda)\cap X(T)^+.
\]
By Lemma \ref{chi_lambdabis} below we may rewrite this as
\[
\{\chi'\in X(T)^\lambda\mid (r_{\chi'},{\bf S}_{\chi'})= (r_{\chi},{\bf S}_{\chi})\}
=
(-\bar{\rho}+\chi_p+r_\chi\Sigma^0_\lambda)\cap X(T)^\lambda
\]
which is the same as
\[
(\nu-\bar{\rho}_\lambda+r_\chi\Sigma^0_\lambda)\cap X(T)^\lambda.\qedhere
\]
\end{proof}
We have used the following result.
\begin{lemmas}\label{chi_lambdabis}
Let
$\chi\in X(T)^+$ and let $\lambda$ be as in Lemma \ref{ref-1.5}(\ref{tri}).
If $\chi\in X(T)^+$ and $\chi'\in X(T)^\lambda$ are such that
$({r}_{\chi'},{\bf S}_{\chi'})=({r}_{\chi},{\bf S}_{\chi})$ then $\chi'\in X(T)^+$.
\end{lemmas}
\begin{proof}
We assume that we are not in the trivial case; otherwise there is nothing to do.
We first verify that $s_\alpha{*}\chi'\neq\chi'$
for all reflections $s_\alpha\in {\cal W}$. This implies that $\chi^{\prime+}$ exists.
Assume on the contrary that $s_\alpha{*}\chi'=\chi'$ for some
$\alpha$. By the uniqueness of the minimal expression in Lemma
\ref{ref-1.5} we obtain that
$S_{\chi'}^+=S^+_{\chi}=T_\lambda^+$ is $s_\alpha$-invariant. Using
the formula \eqref{eq:scalar} we find $\langle
\lambda,\chi\rangle=\langle \lambda,s_\alpha{\ast}\chi\rangle$.
Then Lemma \ref{ref-A.2} implies
$s_\alpha\in {\cal W}_{G^\lambda}$. However this is excluded by the
fact that $s_\alpha{\ast}\chi'=\chi'$ and $\chi'\in X(T)^\lambda$.
Assume $\chi'\not \in X(T)^+$. By the above discussion there exists $1\neq w\in {\cal W}$ such that
$w{\ast}\chi^{\prime}$ is dominant.
Furthermore $w\not\in {\mathcal W}_{G^\lambda}$ as $\chi'\in X(T)^\lambda$.
Then Lemma \ref{ref-A.2} implies $\langle\lambda, w{\ast}\chi^{\prime}\rangle<\langle\lambda,\chi'\rangle=\langle\lambda,\chi\rangle$.
As
$({r}_{w{\ast}\chi^{\prime}},|{\bf S}_{w{\ast}\chi^{\prime}}|)=({r}_{\chi'},|{\bf S}_{\chi'}|)=({r}_{\chi},|{\bf S}_{\chi}|)$
we deduce $(r_{\chi},{\bf S}_{\chi})\not\prec({ r}_{w{\ast}\chi^{\prime}},{\bf S}_{w{\ast}\chi^{\prime}})$.
Then
property \eqref{ena} in Lemma \ref{ref-1.5bis} implies $\langle\lambda,\chi\rangle\le \langle\lambda, w{\ast}\chi^{\prime}\rangle$.
This is a contradiction.
\end{proof}
\subsection{$G/G_e$-action on $X(T)^+$}
Here we allow $G$ to be non-connected. $W$ is still a
$G$-representation. We apply the previous results with $G$ replaced by
$G_e$. In particular $T\subset B\subset G_e$. As we have seen in
\S\ref{sec:definition} the group $G/G_e$ acts on $X(T)^+$ via
$\bar{g}(\chi)=\chi\circ \sigma^{-1}_{\bar{g}}$.
Since $G/G_e$ also acts on the
weights of $W$, it may be made to act on $\{1,\ldots,d\}$ via $\bar{g}(i)=j$ if $\beta_j=\beta_i\circ \sigma^{-1}_{\bar{g}}$.
This action extends to an action of $G/G_e$ on the partially
ordered set $({\blb R}^+\times {\cal P}(\{1,\dots,d\})^{3},\prec)$ introduced above.
\begin{lemmas}\label{ref-lemma-6.7}
The map
\begin{equation}
\label{eq:equivariant}
X(T)^+\rightarrow {\blb R}^+\times {\cal P}(\{1,\dots,d\})^{3}:\chi\mapsto (r_\chi,{\bf S}_\chi)
\end{equation}
is $G/G_e$-equivariant.
Moreover if $\chi\in X(T)^+$, $\bar{g}\in G/G_e$ and $\lambda\in Y(T)^-$ is as in Lemma \ref{ref-1.5}\eqref{tri}, then $\sigma_{\bar{g}} \circ{\lambda}\in
Y(T)^-$ satisfies property \eqref{tri} in Lemma \ref{ref-1.5} for
${{\chi\circ\sigma_{\bar{g}}}}$. Consequently,
$(T_{\sigma_{\bar{g}}\circ\lambda}^{\pm},T_{\sigma_{\bar{g}}\circ\lambda}^0)=\bar{g}^{-1}(T_{\lambda}^{\pm},T_{\lambda}^0)$.
\end{lemmas}
\begin{proof}
Let $\chi\in X(T)^+$. As usual we may assume we are in the non-trivial case.
We obtain an expression for
$\chi\circ\sigma_{\bar{g}}$ of the form \eqref{expresschi} with $\beta_i$ replaced by
$\beta_i\circ \sigma_{\bar{g}}$. Since this expression is minimal for
$\chi\circ \sigma_{\bar{g}}$ (as otherwise applying $-\circ\sigma_{\bar{g}}^{-1}$ would give a smaller
expression for $\chi$), we obtain $(r_{\chi\circ\sigma_{\bar{g}}},{\bf
S}_{\chi\circ\sigma_{\bar{g}}})=\bar{g}^{-1}(r_{\chi},{\bf S}_{\chi})$.
As $\langle\lambda,\beta\circ
\sigma_{\bar{g}}\rangle=\langle\sigma_{\bar{g}}\circ\lambda,\beta\rangle$ for
all $\beta\in X(T)$, it is easy to verify that $\sigma_{\bar{g}}
\circ{\lambda}\in Y(T)^-_{\blb R}$ satisfies property \eqref{tri} in
Lemma \ref{ref-1.5} for ${{\chi\circ \sigma_{\bar{g}}}}$. This implies in particular
the last assertion of the lemma.
\end{proof}
\subsection{Reduction settings}\label{order}
Here we allow $G$ to be again non-connected.
\begin{lemmas}
\label{lem:total}
There exists a total ordering $<$ on ${\blb R}^+\times {\cal P}(\{1,\dots,d\})^{3}$
such that the following conditions hold.
\begin{enumerate}
\item \label{ena1}
If
$({ r},|{\bf S}|)<({ r}',|{\bf S}'|)$ then $({ r},{\bf S})<({ r}',{\bf S}')$.
\item
\label{dva1} If $({ r},{\bf S})<({ r}',{\bf S}')$ and $({ r},{\bf S})$
and $({ r}',{\bf S}')$ are in different $G/G_e$-orbits then
$\bar{g}({ r},{\bf S})<\bar{h}({ r}',{\bf S}')$ for all $\bar{g},\bar{h}$ in $G/G_e$.
\end{enumerate}
\end{lemmas}
\begin{proof} The map \eqref{eq:order} is $G/G_e$-equivariant. We
choose an arbitrary total ordering on the fibers of \eqref{eq:order} compatible with condition \eqref{dva1}. Combining this with~\eqref{ena1} completely fixes $<$.
\end{proof}
\begin{remarks}
It is clear that any total ordering
$<$ as in Lemma \ref{lem:total}\eqref{ena1}
refines the partial ordering
$\prec$.
\end{remarks}
\begin{lemmadefinitions}
\label{def:lchi}
It is possible to choose
for any $\chi\in X(T)^+$ a $\lambda_\chi\in Y(T)^-$ such that the following conditions are satisfied:
\begin{enumerate}
\item \label{one1} $\lambda=\lambda_\chi$
satisfies the property (\ref{tri}) in Lemma \ref{ref-1.5}.
\item \label{two1}
If $({r}_{\chi'},{\bf
S}_{\chi'})=({r}_{\chi},{\bf S}_\chi)$ then $\lambda_\chi=\lambda_{\chi'}$.
\item \label{three1}
We have
$\lambda_{\chi\circ \sigma_{\bar{g}}}=\sigma_{\bar{g}}\circ\lambda_\chi$ for all $\bar{g}\in G/G_e$.
\item \label{four1}
If $\bar{g}(T_{\lambda_\chi}^{\pm},T_{\lambda_\chi}^0)=(T_{\lambda_{\chi}}^{\pm},T_{\lambda_{\chi}}^0)$ then
$\sigma_{\bar{g}}\circ \lambda_\chi=\lambda_\chi$.
\end{enumerate}
\end{lemmadefinitions}
\begin{proof}
Choose representatives $(r_{\chi_i},{\bf S}_{\chi_i})$ for the orbits of the $G/G_e$-action on the image of
\eqref{eq:equivariant}. For each $i$ choose $\lambda'_i\in Y(T)^-$ such that
$S_{\chi_i}^{-}=T_{\lambda'_i}^-$, $S_{\chi_i}^+=T_{\lambda'_i}^+$, $S_{\chi_i}^0=T_{\lambda'_i}^0$ as in Lemma \ref{ref-1.5}\eqref{tri}.
Let $\bar{G}_i\subset G/G_e$ be the stabilizer of $(r_{\chi_i},{\bf S}_{\chi_i})$ and put
$\lambda_i=\sum_{\bar{g}\in \bar{G}_i} \sigma_{\bar{g}}\circ \lambda'_i$. Then it is easy to see
that we still have $S_{\chi_i}^{-}=T_{\lambda_i}^-$, $S_{\chi_i}^+=T_{\lambda_i}^+$, $S_{\chi_i}^0=T_{\lambda_i}^0$
and moreover $\sigma_{\bar{g}}\circ\lambda_i=\lambda_i$ if $\bar{g}\in \bar{G}_i$.
Now for $\chi\in X(T)^+$ write $(r_\chi,{\bf S}_\chi)=\bar{h}(r_{\chi_i},{\bf S}_{\chi_i})$ for suitable $i$ and
$\bar{h}\in G/G_e$. Then we put $\lambda_\chi=\sigma_{\bar{h}}\circ \lambda_i$. It is clear that
this is well defined and has the requested properties.
\end{proof}
\begin{lemmas}
\label{eq:diff}
Let $(\lambda_\chi)_\chi$ be as in Lemma \ref{def:lchi}. We have for $\bar{g}\in G/G_e$:
\[
\langle \lambda_\chi,\bar{g}(\chi)\rangle \ge \langle \lambda_\chi,\chi\rangle
\]
with equality if and only if $\bar{g}\in (G/G_e)^{\lambda_\chi}$.
\end{lemmas}
\begin{proof} By \eqref{eq:equivariant} we have
\begin{equation}
\label{eq:stab}
(r_{\bar{g}(\chi)},{\bf S}_{\bar{g}(\chi)})=\bar{g}(r_{\chi},{\bf S}_{\chi}).
\end{equation}
Hence in particular
\[
(r_{\bar{g}(\chi)},|{\bf S}_{\bar{g}(\chi)}|)=(r_{\chi},|{\bf S}_{\chi}|)
\]
and so $(r_\chi,{\bf S}_\chi)\not\prec (r_{\bar{g}(\chi)},{\bf S}_{\bar{g}(\chi)})$. It follows from Lemma \ref{ref-1.5bis}\eqref{dva} that
\[
\langle \lambda_\chi,\chi\rangle\le \langle \lambda_\chi,\bar{g}(\chi)\rangle.
\]
Also by Lemma \ref{ref-1.5bis} equality will happen precisely when $(r_{\bar{g}(\chi)},{\bf S}_{\bar{g}(\chi)})=
(r_{\chi},{\bf S}_{\chi})$ which by \eqref{eq:stab} implies that $\bar{g}$ stabilizes
${\bf S}_{\chi}=(T^{+}_{\lambda_\chi},T^-_{\lambda_\chi},T^0_{\lambda_\chi})$. By Lemma \ref{def:lchi}\eqref{four1} this
implies $\bar{g}\in (G/G_e)^{\lambda_\chi}$.
\end{proof}
Below we fix $(\lambda_\chi)_\chi$ as in Lemma \ref{def:lchi} and
we choose a total ordering on
${\blb R}^+\times {\cal P}(\{1,\dots,d\})^{3}$ as in Lemma \ref{lem:total}.
We put the induced ordering on $I:=\{({ r}_\chi,{\bf S}_\chi)\mid \chi\in X(T)^+\}\subset {\blb R}^+\times {\cal P}(\{1,\dots,d\})^{3}$. As a totally
ordered set we have~$I\cong{\blb N}$. For $i\in I$ we put
$F_i=\{\chi\in
X(T)^+\mid ({ r}_\chi,{\bf S}_\chi)=i\}$. This gives a $G/G_e$-equivariant partition
\begin{equation}
\label{eq:partition}
X(T)^+=\coprod_{i\in I} F_i.
\end{equation}
In each $F_i$ we choose one
representative which we denote by $\chi_i$. We write
$\lambda_i=\lambda_{\chi_i}$, $r_i=r_{\chi_i}$, ${\bf S}_i={\bf S}_{\chi_i}$.
By our choice of $\lambda_\chi$ and the definition of $F_i$, $(r_i,\lambda_i,{\bf S}_i)$
depends only on $i\in I$ and not on the choice of $\chi_i\in F_i$.
For $j\in I$ we write
\begin{align*}
{\cal L}_{<j}&=\bigcup_{i\in I,i<j}F_i\subset X(T)^+,\\
{\cal L}_{\le j}&=\bigcup_{i\in I,i\le j}F_i\subset X(T)^+.
\end{align*}
Let $J\subset I$ be the minimal representatives for the orbits of the
action of $G/G_e$ on~$I$. By the choice of $J$ and
property \eqref{dva1} in Lemma \ref{lem:total} the set $\{i\in I\mid
i<j\}$ is $G/G_e$-invariant if $j\in J$ and hence the same is true for ${\cal L}_{<j}$.
\begin{corollarys}\label{linredsec}
Let $j\in J$ with $r_j\ge 1$ and $\chi\in F_j$. Then $(G,B,T,X,{\cal L}_{<j},\chi,\lambda_j)$ is a reduction setting.
\end{corollarys}
\begin{proof}
We may reduce to the case that $X$ is affine. Then by
Theorem \ref{ref-3.1-13} and Proposition \ref{ref-prop-3.4}
we may assume $G=G_e$ and $X=W^\vee$.
Thus we need to verify the assumptions of Proposition
\ref{-iC}. They are satisfied by Corollary \ref{betaplus} due to the
choice of $\lambda_j=\lambda_\chi$ for every $\chi\in F_j$.
\end{proof}
\section{Proofs of the semi-orthogonal decompositions}
\label{sec:proofs}
In this section we will prove Theorem \ref{nonl} and
along the way we will also prove Proposition \ref{ref-1.3} and Corollary \ref{twist}.
We continue to use the notations introduced in the previous section.
For the proof of Theorem
\ref{nonl} we select a finite open affine covering $X/\!\!/ G=\bigcup_i U_i$
and put $X_i=\pi^{-1}(U_i)$. Thus
$X=\bigcup_i X_i$, where the
$X_i$ are affine $G$-varieties. We choose a
$G$-representation $W$ such that $W^\vee$ has a $T$-stable point
together with a closed $G$-equivariant embedding $\coprod_i X_i\hookrightarrow
W^\vee$. We use $W$ to construct a partition \eqref{eq:partition} of
$X(T)^+$ as in \S\ref{order}. The arguments below will be based on ``reduction to the affine case'', i.e.\ to one of the $X_i$.
\medskip
For $j\in I$ let ${\cal D}_{<j}$, ${\cal D}_{\le j}$ be the triangulated subcategories of ${\cal D}({{X}}/G)$
locally classically generated by $P_{{\cal L}_{<j}}$, $P_{{\cal L}_{\le j}}$ as in \S\ref{sec:localgen}
and put $\Lambda_{<j}=\pi_{s\ast}\operatorname {\mathcal{E}\mathit{nd}}_{X/G}(P_{{\cal L}_{<j}})$.
For $j\in J$ let ${\cal D}_j$ be the triangulated subcategory of ${\cal D}({{X}}/G)$ locally classically generated by $\langle
\RInd^{G}_{G^{\lambda_j,+}} (V_{G^{\lambda_j}}(\chi) \otimes
{\cal O}_{X^{\lambda_j,+}})\mid \chi\in F_j\rangle$.
For a
${\mathcal W}_{G_e^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ we put
\begin{align*}
{{\cal L}_{r,\lambda,\nu}}&=X(T)^\lambda\cap (\nu-\bar{\rho}_{\lambda}+r\Sigma^0_\lambda),\\
U_{r,\lambda,\nu}&=\bigoplus_{\mu\in {\cal L}_{r,\lambda,\nu}} \Ind^{H^\lambda}_{G_e^\lambda} V_{G_e^\lambda}(\mu),\\
\Lambda_{r,\lambda,\nu}&=(\operatorname {End}(U_{r,\lambda,\nu})\otimes_k {\cal O}_{X^\lambda})^{H^\lambda}.
\end{align*}
Proposition \ref{ref-1.3} and
Theorem \ref{nonl} will be consequences of the following proposition.
\begin{proposition}\label{affineso}
\begin{enumerate}
\item \label{0}
If $j\not\in J$ then ${\cal D}_{<j}={\cal D}_{\le j}$.
\end{enumerate}
Assume that $j\in J$ is such that $r_j\ge 1$. Then
\begin{enumerate}
\setcounter{enumi}{1}
\item\label{1}
$\Dscr_{<j}\cong{\cal D}(\Lambda_{<j})$ and $\Lambda_{<j}$ has finite global dimension when restricted to affine opens in $X/\!\!/ G$.
\item\label{3} $\Dscr_{j}\cong {\cal D}(\Lambda_{r_j,\lambda_j,\nu_j})$
for $\nu_j=-\bar\rho+\bar\rho_{\lambda_j}+(\chi_j)_p$ (where $(\chi_j)_p$ was introduced in
\S\ref{sec:interms}) and
$\Lambda_{r_j,\lambda_j,\nu_j}$ has finite global dimension when restricted to affine opens in $X/\!\!/ G$.
\item\label{2}
$\Dscr_{\le j}=\langle \Dscr_{j},\Dscr_{<j}\rangle$ is a semi-orthogonal decomposition of $\Dscr_{\le j}$.
\item\label{4} One has ${\cal D}(X/G)=\bigcup_{j\in J}{\cal D}_j$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item[\eqref{0}]
We claim that $\bigoplus_{\chi\in {\cal L}_{<j}}V(\chi)=\bigoplus_{\chi\in {\cal L}_{\le j}}
V(\chi)$. We only
have to prove that if $\chi\in F_j$ then $V(\chi)$ is a summand of
$\bigoplus_{\mu\in {\cal L}_{<j}}
V(\mu)$. Since $j\not\in J$
there is $\bar{g}\in G/G_e$ such that $\bar{g}(j)<j$. Put $\chi'=\bar{g}(\chi)\in F_{\bar{g}(j)}\subset {\cal L}_{<j}$.
Using \eqref{eq:restriction}, \eqref{eq:restriction1} we obtain
\begin{align*}
V(\chi')&=\Ind_{G_e}^G V_{G_e}(\chi')\\
&=\Ind_{G_e}^G {}_{\sigma^{-1}_{\bar{g}}}V_{G_e}(\chi)\\
&\cong \Ind_{G_e}^G V_{G_e}(\chi)\\
&=V(\chi)\,.
\end{align*}
\item[\eqref{1}]
The fact that ${\cal D}_{<j}={\cal D}(\Lambda_{<j})$ follows from Lemma \ref{lem:ff}. To prove
that $\Lambda_{<j}$ is locally of finite global dimension we
may restrict to the case that $X$ is affine.
To prove that $\operatorname {gl\,dim} \Lambda_{<j}<\infty$, by \cite[Thm. 4.3.1, Lem.\ 4.5.1]{SVdB} it suffices to consider the case
${{X}}=W^\vee$ and $G=G_e$. We denote
$\tilde{P}_{{{\cal L}_{<j}},\chi}=\operatorname {Hom}_{{{X/G}}}(P_{{\cal L}_{<j}},P_\chi)$. By
\cite[Lem.\ 11.1.1]{SVdB} it is enough to show that $\operatorname{pdim} \tilde
P_{\cal L_{<j},\chi}<\infty$ for every $\chi\in X(T)^+$. Assume that
there exists~$\chi$ such that $\operatorname{pdim} \tilde P_{\cal
L_{<j},\chi}=\infty$ and take $\chi\in X(T)^+$ with minimal $({
r}_\chi,|{\bf S}_\chi|)$. Then $({ r}_\chi,{\bf S}_\chi)\not\leq({
r}_\mu,{\bf S}_\mu)$ for all $\mu\in {\mathcal L}_{<j}$ (for otherwise $\chi\in
{\mathcal L}_{<j}$ and hence $\operatorname{pdim} \tilde P_{\cal L_{<j},\chi}=0$). Let
$\lambda=\lambda_\chi$. It follows from Lemma \ref{ref-1.5bis}
(\ref{ena}) that $\langle\lambda,\chi\rangle<\langle\lambda,\mu\rangle $ for all
$\mu\in{\mathcal L}_{<j}$. Thus, $C_{{\mathcal L}_{<j},\lambda,\chi}:=\operatorname {Hom}_{X/G}(P_{{\cal L}_{<j}},C_{\lambda,\chi})$ is acyclic by
\eqref{quasiiso} and the fact that the $\lambda$-weights of $k[{{X}}^{\lambda,+}]$ are $\le 0$ (see \S\ref{ref-1.2-0}). We have
$({ r}_{\chi'},|{\bf S}_{\chi'}|)<({
r}_\chi,|{\bf S}_\chi|)$ for all $\tilde P_{\cal L_{<j},\chi'}\neq
\tilde P_{\cal L_{<j},\chi}$ that appear in $C_{{\mathcal L}_{<j},\lambda,\chi}$ by
\eqref{eq:weights} and Corollary \ref{betaplus}. Hence
$\operatorname{pdim}\tilde P_{\cal L_{<j},\chi'}<\infty$ by the minimality
assumption, and therefore $\operatorname{pdim}\tilde P_{\cal L_{<j},\chi}<\infty$, a
contradiction.
\item[\eqref{3}]
Now we use the fact that $(G,B,T,{{X}},{\cal L}_{<j},\chi,\lambda_j)$ is a reduction setting for $\chi\in F_j$
by Corollary \ref{linredsec}.
Let us abbreviate $D_{j,\chi}=\RInd^{G}_{G^{\lambda_j,+}} (V_{G^{\lambda_j}}(\chi) \otimes_k {\cal O}_{X^{\lambda_j,+}})$.
By Proposition \ref{Di} we have
\[
\pi_{s\ast}\operatorname {R\mathcal{E}\mathit{nd}}_{X/G}(\oplus_{\chi\in F_j} D_{{j},\chi})=
\pi_{s\ast}\operatorname {\mathcal{E}\mathit{nd}}_{X^\lambda/H^\lambda}(\oplus_{\chi\in F_j} P_{H^\lambda,\chi})
\]
and the latter is equal to $\Lambda_{r_j,\lambda_j,\nu_j}$
by Lemma \ref{chi_lambda}.
To prove that $\Lambda_{r_j,\lambda_j,\nu_j}$ locally has finite global dimension we may reduce to
the affine case. Then by \cite[Thm. 4.3.1, Lem.\ 4.5.1]{SVdB} we may reduce to the case $G=G_e$ and $X=W^\vee$. Finally we invoke Proposition \ref{sigma}.
\item[\eqref{2}]
The fact that ${\cal D}_{\le j}$ is generated by ${\cal D}_{<j}$ and ${\cal D}_j$ follows from \eqref{ref-2.3-8}. The
fact that $\operatorname {Hom}_{X/G}({\cal D}_{<j},{\cal D}_j)=0$ follows from \eqref{ref-2.1-6}
by a suitable version of the local global spectral
sequence on $X/\!\!/ G$.
\item[\eqref{4}] This follows from Lemma \ref{lem:thick}.
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Theorem \ref{nonl}]
Put
\[
j_0=\min \{j\in J\mid r_j\ge 1\}.
\]
Then we have by Lemma \ref{lem:ff}
$
{\cal D}_{\le j_0}={\cal D}(\Lambda_{1,0,0})
$. Now write
\[
\{j\in J\mid r_j\ge 1\}=\{j_0,j_{1},j_{2},\ldots\}.
\]
Put ${\cal D}_0={\cal D}_{\le j_0}$ and for $i>0$ let ${\cal D}_{-i}$
be the right orthogonal of ${\cal D}_{\le j_{i-1}}
={\cal D}_{< j_{i}}$ in ${\cal D}_{\le j_{i}}$. By Proposition \ref{affineso}
(\ref{0},\ref{2},\ref{4}) we have a semi-orthogonal decomposition
\begin{equation}
\label{eq:semi}
{\cal D}(X/G)=\langle \ldots,{\cal D}_{-2},{\cal D}_{-1},{\cal D}_0\rangle
\end{equation}
and by Proposition \ref{affineso}\eqref{3} each of the ${\cal D}_{-1}$, ${\cal D}_{-2}$,\dots has the required form. The corresponding statement for $\tilde{{\cal D}}_{X/G}$ follows by replacing $X/\!\!/ G$ by open subschemes.
\end{proof}
\begin{remark} It follows from Lemma \ref{gldimadm} that ${\cal D}_{-j}\subset {\cal D}(X/G)$ and
$\Dscr_{\leq j}$ for $j\in J$ are admissible.
\end{remark}
\begin{proof}[Proof of Proposition \ref{ref-1.3}]
This corresponds to the special case $X=W^\vee$ in the proof of Theorem \ref{nonl}.
\end{proof}
\section{The case that $X$ does not have a $T$-stable point}\label{introsec:nonstable}
\label{sec:nonstable}
In this section we will assume throughout that $G$ is connected and
that $X$ is a connected smooth $G$-variety such that a good
quotient $X/\!\!/ G$ exists. We will give an alternative
semi-orthogonal decomposition of $\Dscr(X/G)$ in case $X$ does not have a
$T$-stable point.
\begin{lemma}
\label{prop:two} Assume that $X$ does not have a $T$-stable point. Then
at least one of the following settings holds:
\begin{enumerate}
\item \label{sit1} There is a non-trivial normal connected subgroup $K$ of $G$ acting trivially on $X$.
\item \label{sit2} There is a non-trivial central one-parameter subgroup $\nu:G_m\rightarrow Z(G)$
such that $X=X^{\nu,+}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since
$X$ does not have a $T$-stable point we have
\begin{equation}
\label{eq:union}
X=\bigcup_{\sigma\in X(T)-\{0\}} X^{\sigma,+}\,.
\end{equation}
It is well-known and easy to see that there are only a finite
number of distinct $X^{\sigma,+}$. Indeed: by covering $X/\!\!/ G$ by a finite number
of affines, it suffices to verify this in the affine
case and then it follows by embedding~$X$ into a representation. Hence since
the union in \eqref{eq:union} is finite and $X$ is irreducible we
find that there is some $\sigma\in X(T)-\{0\}$ such that $X=X^{\sigma,+}$.
It follows that $X=X^{w\sigma,+}$ for every $w\in {\cal W}$.
Put $\nu=\sum_{w\in {\cal W}} w\sigma$.
Then $\nu$ is a~$\Wscr$-in\-variant $1$-parameter subgroup of $T$ and in particular its image is contained in the center of $G$.
We claim $X^{\nu,+}=X$, $X^{\nu}\subset X^\sigma$. We may check this in the case that $X$ is affine. Let $C\subset X(T)_{\blb R}$ be the cone
spanned by the weights of $k[X]$. Since $X^{\sigma,+}=X$ we have $\langle \sigma,C\rangle\le 0$. Since $C$ is ${\cal W}$-invariant
we immediately deduce $\langle \nu,C\rangle \le 0$ and hence $X^{\nu,+}=X$.
To prove $X^{\nu}\subset X^\sigma$ we have to verify that if $\chi\in C$ and $\langle \nu,\chi\rangle=0$ then
$\langle \sigma,\chi\rangle=0$. To prove this it suffices to observe that $\langle \nu,\chi\rangle=\sum_{w\in {\cal W}} \langle \sigma,w^{-1}\chi\rangle$
and this can only be zero if $ \langle \sigma,w^{-1}\chi\rangle=0$ for all $w\in {\cal W}$.
If $\nu\neq 0$ then we are in situation \eqref{sit2}. If $\nu=0$ then
$X^{\nu}=X$. Hence also $X^\sigma=X$ by the above discussion. Therefore \eqref{sit1} holds
with $K$ being the identity component of $\operatorname {ker}(G\rightarrow \Aut(X))$. The group $K$ is
not trivial as it contains $\operatorname {im} \sigma$.
\end{proof}
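As a toy illustration of situation \eqref{sit2} (added for orientation only, and not needed in the sequel): let $G=T=G_m$ act on $X={\blb A}^1$ by scaling, $t\cdot x=tx$. Then $X$ has no $T$-stable point, as the only closed orbit $\{0\}$ has stabilizer all of $G_m$, while for the central one-parameter subgroup $\nu=\operatorname{id}$ we have
\[
X^{\nu,+}=\{x\in X\mid \lim_{t\rightarrow 0}\nu(t)\cdot x \text{ exists}\}={\blb A}^1=X,
\]
with $X^\nu=\{0\}$.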
To continue it will be convenient to slightly generalize our setting in a similar way as \cite[Thm 1.6.3]{SVdB}.
We will however use different notations which are more adapted to the current setting.
We will assume that $G$ contains a finite central subgroup~$A$ acting trivially on $X$ with $\bar{G}:=G/A$.
Let $X(A):=\operatorname {Hom}(A,G_m)$ be the character group of $A$.
For $\tau\in X(A)$ let ${\Dscr}(X/G)_{\tau}$
be the triangulated subcategory of ${\Dscr}(X/G)$ consisting of complexes
on which $A$ acts as $\tau$.
We have an orthogonal decomposition
\begin{equation}
\label{eq:orthogonal}
\Dscr(X/G)=\bigoplus_{\tau\in X(A)} {\Dscr}(X/G)_{\tau}
\end{equation}
and moreover $ {\Dscr}(X/G)_{0}=\Dscr(X/\bar{G})$. In general we should think of
${\Dscr}(X/G)_{\tau}$ as a twisted version of $\Dscr(X/\bar{G})$.
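For example (a special case we mention only for orientation): if $G=\operatorname{SL}_2$ and $A=\{\pm 1\}$ is its center, acting trivially on $X$, then $X(A)\cong {\blb Z}/2{\blb Z}$ and \eqref{eq:orthogonal} splits $\Dscr(X/\operatorname{SL}_2)$ into the complexes on which $-1$ acts trivially, i.e.\ $\Dscr(X/\operatorname{PGL}_2)$, and those on which $-1$ acts by $-1$.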
\begin{proposition}\label{twist}
Let $\tau\in X(A)$. Then there exists a semi-orthogonal decomposition of
${\Dscr}(X/G)_{\tau}$ of the form described in Theorem \ref{nonl}.
\end{proposition}
\begin{proof}
Let the notation be as in \S\ref{sec:proofs}. We have $A\subset T$. Let $\bar{T}=T/A$. Then
there is an exact sequence
\begin{equation}\label{eq:barTTA}
0\rightarrow X(\bar{T})\rightarrow X(T)\rightarrow X(A)\rightarrow 0.
\end{equation}
Let $X(\bar{T})_\tau\subset X(T)$ be the inverse image of $\tau\in X(A)$.
Let $\chi\in F_j\cap X(\bar{T})_{\tau}$.
Since~$A$ acts trivially on $X$, it acts with the character $\tau$ on the right-hand side of \eqref{ref-2.3-8} with ${\cal L}={\cal L}_{<j}$.
Since $\pi_{s\ast}\operatorname {\mathcal{H}\mathit{om}}_{X/{G}}(P_{{G},\chi'},P_{{G},\chi})=0$ if $\chi'\not\in X(\bar{T})_{\tau}$, \eqref{ref-2.3-8} is thus still an isomorphism if we replace ${\cal L}_{<j}$ by
${\cal L}_{<j,\tau}={\cal L}_{<j}\cap {X(\bar{T})}_{\tau}$.
Put
\begin{align*}
\Lambda_{<j,{\tau}}&=\pi_{s*}\operatorname {\mathcal{E}\mathit{nd}}_{X/{G}}(P_{{\cal L}_{<j,\tau}}),\\
{\cal L}_{j,{\tau}}&=X(\bar{T})_{\tau}^{\lambda_j}\cap (\nu_j-\bar{\rho}_{\lambda_j}+r_j\Sigma^0_{\lambda_j}),\\
U_{j,{\tau}}&=\bigoplus_{\mu\in {\cal L}_{j,{\tau}}} \Ind^{H^{\lambda_j}}_{{G}_e^{\lambda_j}} V_{{G}_e^{\lambda_j}}(\mu),\\
\Lambda_{j,{\tau}}&=(\operatorname {End}(U_{j,{\tau}})\otimes_k {\cal O}_{X^{\lambda_j}})^{H^{\lambda_j}}.
\end{align*}
We have $\Lambda_{<j}=\oplus_{ \tau\in X(A)} \Lambda_{<j,{\tau}}$,
$\Lambda_{j}=\oplus_{\tau\in X(A)} \Lambda_{j,{\tau}}$.
As $\Lambda_{<j}$ and $\Lambda_j$ have finite global dimension when restricted to affine opens in $X/\!\!/{G}$, the same holds for
$\Lambda_{<j,{\tau}}$, $\Lambda_{j,{\tau}}$.
Arguing as above we thus obtain a semi-orthogonal decomposition $\Dscr_{\leq j,{\tau}}=\langle \Dscr_{j,{\tau}},\Dscr_{<j,{\tau}}\rangle$ (with the obvious definitions for $\Dscr_{j,{\tau}},\Dscr_{<j,{\tau}}$),
$\Dscr_{<j,{\tau}}\cong \Dscr(\Lambda_{<j,{\tau}})$, $\Dscr_{j,{\tau}}\cong \Dscr(\Lambda_{j,{\tau}})$. Then the proof continues as before.
\end{proof}
If $K$ is a connected normal subgroup of $G$ then we will define a
\emph{pseudo-comple\-ment} of $K$ as a connected normal subgroup $Q$ of $G$ such that
$K$ and $Q$ commute, $G=KQ$ and $K\cap Q$ is finite. It follows easily
from \cite[Theorem 8.1.5, Corollary 8.1.6]{Springer} that such a
pseudo-complement always exists.
\begin{proposition}
\label{prop:nonstable1}
Assume $X$ does not have a $T$-stable point and
that we are in the situation of Lemma \ref{prop:two}\eqref{sit1}. Let $Q$ be a pseudo-complement of $K$ in $G$. Then there is a finite central subgroup $A_Q$ of $Q$ acting trivially on $X$ such that there is an orthogonal decomposition
\begin{equation}
\label{eq:Q}
{\cal D}(X/G)_{\tau}\cong \bigoplus_{i\in I} {\cal D}(X/Q)_{\mu_i}
\end{equation}
for a suitable collection $(\mu_i)_{i\in I}$ of characters $\mu_i\in X(A_Q)$.
\end{proposition}
\begin{proof}
Put $\tilde{G}:=K\times Q$, let $\tilde{G}\rightarrow G$ be the multiplication map and let $\tilde{A}\subset K\times Q$ be
the inverse image of $A$. Let $A_K\subset K$, $A_Q\subset Q$ be the images of $\tilde{A}$ under the
projections $K\times Q\rightarrow K,Q$.
It is easy to see that $A_Q$ acts trivially on $X$.
Let $\tilde{\tau}$ be the composition $\tilde{A}\rightarrow A\xrightarrow{\tau} G_m$. In a similar way,
if $\mu\in X(A_K)$ or $\mu\in X(A_Q)$ then we denote by $\tilde{\mu}$ the
element of $X(\tilde{A})$, obtained by composing $\mu$ with the appropriate projections $\tilde{A}\rightarrow A_K,A_Q$.
We have
\begin{equation}
\label{eq:part1}
D(X/G)_{\tau}=D(X/\tilde{G})_{\tilde{\tau}}\,.
\end{equation}
Let $\operatorname{Irr}(K)$ be the set of isomorphism classes of irreducible representations of $K$.
We have an orthogonal decomposition
\begin{equation}
\label{eq:part2}
\bigoplus_{V\in \operatorname{Irr}(K)} D(X/Q) \rightarrow D(X/\tilde{G}):({\cal F}_V)_V\mapsto \oplus_{V\in\operatorname{Irr}(K)} V\otimes_k {\cal F}_V\,.
\end{equation}
If $V\in\operatorname{Irr}(K)$ then $A_K$ acts on $V$ via a character which we
denote by $\chi_V\in X(A_K)$.
Combining \eqref{eq:part1} and \eqref{eq:part2} yields
\[
D(X/G)_{\tau}\cong\bigoplus_{(\mu,V)\in X(A_Q)\times \operatorname{Irr}(K), \tilde{\mu}+\tilde{\chi}_V=\tilde{\tau}} D(X/Q)_{\mu}
\]
which implies \eqref{eq:Q}.
\end{proof}
\begin{proposition}
\label{prop:nonstable2}
Assume $X$ does not have a $T$-stable point and
that we are in the situation of Lemma \ref{prop:two}\eqref{sit2}. Let $Q$ be a pseudo-complement of $\operatorname {im} \nu$ in $G$. Then there is a finite central subgroup $A_Q$ of $Q$ acting trivially on $X^\nu$ such that there is a semi-orthogonal decomposition
\begin{equation}
\label{eq:Q1}
{\cal D}(X/G)_{\tau}= \langle {\cal D}(X^\nu/Q)_{\mu_i}\mid i\in I\rangle
\end{equation}
for a suitable totally ordered set $I$ and a
collection $(\mu_i)_{i\in I}$ of characters $\mu_i\in X(A_Q)$.
\end{proposition}
\begin{proof}
Without loss of generality we may, and we will, assume that $\nu$ is injective. We put $K=\operatorname {im} \nu\cong G_m$ and we borrow the associated notation from the proof of Proposition \ref{prop:nonstable1}. One checks that
in this case $A_Q$ acts indeed trivially on $X^\nu$.
The set $\operatorname{Irr}(K)$ is equal to $\{\chi_n\mid n\in {\blb Z}\}$ where $\chi_n\in X(K)$ is such that $\chi_n(z)=z^n$.
\medskip
We know by Lemma \ref{lem:thick}
that ${\cal D}(X/\tilde{G})$ is locally classically generated by the objects $P_{n,V}:=\chi_n\otimes_k V\otimes_k {\cal O}_X$, with $n\in {\blb Z}$, $V\in \operatorname{Irr}(Q)$.
We also put $P_{V}^\nu:=V\otimes_k {\cal O}_{X^\nu}\in {\cal D}(X^\nu/Q)$.
We claim that for $n\le m$ one has
\begin{equation}
\label{eq:dndef}
\pi_{s\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/\tilde{G}}(P_{n,V},P_{m,V'})=
\begin{cases}
0&\text{if $n<m$}\\
\pi_{s,\ast} j_{s,\ast}\operatorname {R\mathcal{H}\mathit{om}}_{X^\nu/Q}(P^\nu_{V},P^\nu_{V'})&\text{if $n=m$}\\
\end{cases}
\end{equation}
where $j:X^\nu\rightarrow X$ is the embedding and $j_s:X^\nu/Q\rightarrow X/\tilde{G}$ is the corresponding map of quotient stacks.
To prove this we may reduce to the case that $X/\!\!/ \tilde{G}$ is affine. Then as usual $\nu$ induces a grading on $k[X]$
which, as $\nu$ is central, is compatible
with the $G$-action.
In the affine case $\pi_{s,\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X/\tilde{G}}(P_{n,V},P_{m,V'})$ is the
quasi-coherent sheaf on $X/\!\!/ G$ associated to the $k[X]^G$-module given by $(\operatorname {Hom}(V,V')\otimes_k k[X]_{m-n})^{\tilde{G}}$ which is zero if $n<m$
since the hypothesis $X=X^{\nu,+}$ implies that
the grading on $k[X]$ is concentrated in negative degree. Similarly if $n=m$ then we have $(\operatorname {Hom}(V,V')\otimes_k k[X]_{m-n})^{\tilde{G}}=(\operatorname {Hom}(V,V')\otimes_k k[X^\nu])^{Q}$
finishing the proof of \eqref{eq:dndef}.
\medskip
Let ${\cal D}_n\subset {\cal D}(X/\tilde{G})$ be locally classically generated by $(P_{-n,V})_{V\in \operatorname{Irr}(Q)}$. Then using Proposition \ref{th:recognition} and \eqref{eq:dndef}
we see that we have a semi-orthogonal decomposition
\begin{equation}
\label{eq:part11}
{\cal D}(X/\tilde{G})=\langle {\cal D}_n\mid n\in {\blb Z}\rangle\,.
\end{equation}
The next step is to describe the ${\cal D}_n$.
We claim that there is an equivalence of categories
\begin{equation}
\label{eq:part12}
{\cal D}_n\rightarrow D(X^\nu/Q):F\mapsto \chi_{n} \otimes_k Lj_{s}^\ast( F)\,.
\end{equation}
Let $F,F'\in {\cal D}_n$. We have to prove that the natural map
\[
\operatorname {RHom}_{X/\tilde{G}}(F,F')\rightarrow \operatorname {RHom}_{X^\nu/Q}(\chi_{n} \otimes_k Lj_{s}^\ast(F),\chi_{n} \otimes_k Lj_{s}^\ast( F'))
\]
is an isomorphism. Using the local global spectral sequence it suffices to prove that
\[
\pi_{s,\ast }\operatorname {R\mathcal{H}\mathit{om}}_{X/\tilde{G}}(F,F')\rightarrow \pi_{s,\ast} Rj_{s,\ast} \operatorname {R\mathcal{H}\mathit{om}}_{X^\nu/Q}(\chi_{n} \otimes_k Lj_{s}^\ast( F),\chi_{n} \otimes_k Lj_{s}^\ast( F'))
\]
is an isomorphism. To do this we may assume that $X/\!\!/ \tilde{G}$ is
affine. Then we can check it on the generators $P_{-n,V}$ of ${\cal D}_n$
and finally we invoke \eqref{eq:dndef}.
\medskip
Combining the equivalence \eqref{eq:part12} with the semi-orthogonal decomposition \eqref{eq:part11}
we obtain a semi-orthogonal decomposition
\[
{\cal D}(X/\tilde{G})=\langle {\cal D}(X^\nu/Q)\mid n\in {\blb Z}\rangle\,.
\]
Considering suitable subsets of the local generators one obtains
in the same way a semi-orthogonal decomposition of ${\cal D}(X/\tilde{G})_{\tilde{\tau}}$. To be more precise
consider the following set
\[
{\cal S}=\{(n,\mu)\in {\blb Z}\times X(A_Q)\mid -\tilde{\chi}_n+\tilde{\mu}=\tilde{\tau}\}\,.
\]
Let $\prec$ be the partial ordering on ${\cal S}$ induced from the projection ${\cal S}\rightarrow {\blb Z}$, i.e. $(n,\mu)\prec (n',\mu')$
if and only if $n<n'$.
Let $<$ be a total ordering on ${\cal S}$ which refines $\prec$. Then we have a semi-orthogonal
decomposition
\[
{\cal D}(X/\tilde{G})_{\tilde{\tau}}=\langle {\cal D}(X^\nu/Q)_{\mu}\mid (n,\mu)\in {\cal S}\rangle\,.
\]
Combining this with the identification \eqref{eq:part1} yields \eqref{eq:Q1}.
\end{proof}
\begin{remark} Even if $A=0$ (and hence ${\cal D}(X/G)_{\tau}={\cal D}(X/G)$), the group $A_Q$ and the twisting characters $\mu_i$ will generally be non-trivial in Propositions
\ref{prop:nonstable1},\ref{prop:nonstable2}.
\end{remark}
\begin{remark}
Note that in Propositions
\ref{prop:nonstable1},\ref{prop:nonstable2} we have $\dim Q<\dim
G$. Thus we have made genuine progress.
By repeatedly applying
Propositions \ref{prop:nonstable1},\ref{prop:nonstable2} we reduce to a semi-orthogonal decomposition of ${\cal D}(X/G)_\tau$ involving a set of ${\cal D}(X'/G')_{\tau'}$
such that $X'$ has a $T'$-stable point
for $T'$ a maximal torus of $G'$,
thus justifying Remark \ref{rem:notTstable} (and also making it more precise).
\end{remark}
\section{The quasi-symmetric case}\label{appA}
In this section we refine the semi-orthogonal decomposition of $\Dscr(X/G)$ given in Proposition \ref{ref-1.3} in the quasi-symmetric case. In some cases the refined decomposition consists of (twisted) non-commutative crepant resolutions of certain quotient singularities for reductive groups.
\subsection{Main result}
Let $W$ be a finite dimensional $G$-representation of dimension $d$ such that $X=W^\vee$. Let $(\beta_i)_{i=1}^d\in X(T)$ be the $T$-weights of $W$. \emph{Throughout this section we assume that $W$ is quasi-symmetric}; i.e., for every line $\ell\subset X(T)_{\blb R}$ through
the origin we have $\sum_{\beta_i\in\ell}\beta_i=0$.
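For example, any representation of the form $W=V\oplus V^\vee$ is quasi-symmetric: its $T$-weights come in opposite pairs $\beta,-\beta$, so for every line $\ell$ through the origin the terms in $\sum_{\beta_i\in\ell}\beta_i$ cancel pairwise.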
Let $\Delta\subset{\blb R}^n$ be a bounded closed convex polygon.
For $\varepsilon\in {\blb R}^n$ parallel to the linear space spanned by $\Delta$ put
\[
\begin{aligned}
\Delta_\varepsilon&=\bigcup_{r>0} \Delta\cap (r\varepsilon+\Delta),\\
\Delta_{\pm\varepsilon}&=\Delta_\varepsilon\cap \Delta_{-\varepsilon}.
\end{aligned}
\]
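To illustrate these sets in the simplest case (a one-dimensional example added for concreteness): for $\Delta=[-1,1]\subset {\blb R}$ and $\varepsilon=1$ one computes
\[
\Delta_\varepsilon=\bigcup_{r>0}\,[-1,1]\cap[-1+r,1+r]=(-1,1],\qquad
\Delta_{-\varepsilon}=[-1,1),\qquad
\Delta_{\pm\varepsilon}=(-1,1).
\]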
We say that a ${\cal W}$-invariant $\varepsilon\in X(T)_{\blb R}$ is {\em generic} for $\Delta$ if it is parallel to $\Delta$ but not parallel to any face of $\Delta$. Note that such $\varepsilon$ does not necessarily exist; for example there may be no non-zero ${\cal W}$-invariant vectors at all.
We shall say that $\varepsilon$ is {\em weakly generic} for $\Delta$ if it is parallel to $\Delta$ but not parallel to faces of $\Delta$ for which there exist non-parallel ${\cal W}$-invariant vectors.
Let $0\neq \lambda\in Y(T)^-$ and
let $A$ be a finite central subgroup of $G^\lambda$ which acts trivially on $X^\lambda=(W_\lambda)^\vee$.
Fix $\tau\in X(A)$ and ${\cal W}_{G^\lambda}$-invariant $\varepsilon,\nu\in X(T)_{\blb R}$.
We denote $\bar{T}=T/A$ and by $X(\bar{T})_\tau$ the inverse image of $\tau\in X(A)$ under the natural projection map $X(T)\to X(A)$ (see \eqref{eq:barTTA}), and we write $X(\bar{T})^\lambda_\tau$ for the $G^\lambda$-dominant weights inside $X(\bar{T})_\tau$.
Put
\begin{align*}
{\cal L}_{\lambda,\nu,\tau}^{\varepsilon}&= X(\bar T)^\lambda_\tau\cap\left(\nu-\bar{\rho}_\lambda+
(1/2)(\bar{\Sigma}_\lambda)_{\varepsilon}\right),\\
U_{\lambda,\nu,\tau}^{\varepsilon}&=\bigoplus_{\mu\in {\cal L}_{\lambda,\nu,\tau}^{\varepsilon}}V_{G^\lambda}(\mu),\\
\Lambda_{\lambda,\nu,\tau}^{\varepsilon}&=(\operatorname {End} U_{\lambda,\nu,\tau}^{\varepsilon}\otimes_k \operatorname{Sym} W_\lambda)^{G^\lambda}.
\end{align*}
As $\lambda\neq 0$ acts trivially on $X^\lambda$, the latter
does not have a $T=T_{G^\lambda}$-stable point.
Since $W_{\lambda}$
is also quasi-symmetric,
we are in the situation of Lemma \ref{prop:two}\eqref{sit1}.
We denote by $Q_\lambda \subset G^\lambda$ a pseudo-complement of the stabilizer subgroup ${\rm Stab}(X^\lambda)\subset G^\lambda$.
Recall the following definition from \cite{SVdB}:
\begin{definition}\label{def:generic}
We say that $W$ is a {\em generic} $G$-representation if
\begin{enumerate}
\item $X$ contains a point with closed orbit and trivial stabilizer.
\item If $X^{\mathbf{s}}\subset X$ is the locus of points that satisfy (1) then $\operatorname{codim}
(X-X^{\mathbf{s}},X) \ge 2$.
\end{enumerate}
We say that $W$ is a {\em pseudo-generic} $G$-representation if the stabilizing subgroup ${\rm Stab}(X)$ of $G$ is finite, and $W$ is a generic $G/{\rm Stab}(X)$-representation.
\end{definition}
\begin{remark}\label{rmk:pseudo}
Let $W,U$ be finite dimensional $G$-representations. Assume that
$A={\rm Stab}(W) \subset G$ is finite.
We have $(SW)^G=(SW)^{G/A}$, and $(U\otimes SW)^G\cong (U^A\otimes SW)^{G/A}$.
As such the results stated in \cite{SVdB} for generic representations extend trivially to pseudo-generic representations.
\end{remark}
We will need a ``crepant'' version of Proposition \ref{sigma}. For the definition of a {\em twisted non-commutative crepant resolution} (twisted NCCR) we refer to \cite[Definition 3.2]{SVdB}.
\begin{proposition}\cite[Theorems 1.6.3, 1.6.4]{SVdB}\label{nccr}
We have $\operatorname {gl\,dim}\Lambda_{\lambda,\nu,\tau}^{\varepsilon}<\infty$.
Moreover, if $W_\lambda$ is a pseudo-generic
$Q_\lambda$-representation, ${\cal L}_{\lambda,\nu,\tau}^{\varepsilon}\neq\emptyset$ and
\begin{equation}\label{prazno}
X(\bar T)_\tau^\lambda\cap \left(\nu-\bar\rho_\lambda
+(1/2)\left((\bar{\Sigma}_\lambda)_{\pm\varepsilon}- {\Sigma}_\lambda\right)\right)=\emptyset
\end{equation}
then $\Lambda_{\lambda,\nu,\tau}^{\varepsilon}$ is a twisted NCCR of $(\operatorname{Sym} W_\lambda)^{G^\lambda}$.
\end{proposition}
\begin{proof}
The first part follows from the proof of \cite[Theorem 1.6.3]{SVdB}.
To show that $\Lambda_{\lambda,\nu,\tau}^{\varepsilon}$ is Cohen-Macaulay we can proceed as in the proof of \cite[Theorem 1.6.4]{SVdB} since $X^\lambda$ has a stable $Q_\lambda$-point (as $W_\lambda$ is a pseudo-generic $Q_\lambda$-representation).
Then $\Lambda_{\lambda,\nu,\tau}^{\varepsilon}$ is a twisted NCCR by Remark \ref{rmk:pseudo} and \cite[Proposition 4.1.6]{SVdB}.
\end{proof}
We can now state the main result of this appendix (see \S\ref{sec:quasimod} below for the proof).
\begin{proposition}\label{quasisod}
Let $X$ have a $T$-stable point.
There exist $\lambda_i\in Y(T)^-$,
finite central subgroups $A_i$ of $G^{\lambda_i}$ acting trivially on $X^{\lambda_i}$, $\tau_i\in X(A_i)$, $\Wscr_{G^{\lambda_i}}$-invariant $\nu_i,\varepsilon_i\in X(T)_{\blb R}$, such that $\Dscr(X/G)$ has a semi-orthogonal decomposition $\Dscr=\langle\dots,\Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_{-i}\cong \Dscr(\Lambda_{\lambda_i,\nu_i,\tau_i}^{\varepsilon_i})$. Moreover, we may assume $\Dscr_0\cong \Dscr(\Lambda_{0,0,0}^{\varepsilon_0})$, and that $\varepsilon_i$ is weakly generic for $\bar{\Sigma}_{\lambda_i}$.
\end{proposition}
Combining Propositions \ref{nccr} and \ref{quasisod} we thus obtain, under some favourable conditions, a semi-orthogonal decomposition of $\Dscr(X/G)$ consisting of twisted NCCRs.
\begin{corollary}\label{quasicor}
If for every $\lambda\in Y(T)^-$ either $W_\lambda=\{0\}$ or $W_\lambda$ is a pseudo-generic $Q_\lambda$-representation and the condition \eqref{prazno} holds for all ${\cal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ and for all ${\cal W}_{G^\lambda}$-invariant $\varepsilon\in X(T)_{\blb R}$ which are weakly generic for $\bar{\Sigma}_{\lambda}$, and all $\tau\in X(A)$ for an arbitrary finite central subgroup $A$ of $G^\lambda$, then $\Dscr(X/G)$ has a semi-orthogonal decomposition
$\Dscr=\langle\dots,\Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_{-i}\cong \Dscr(\Lambda_{\lambda_i,\nu_i,\tau_i}^{\varepsilon_i})$, and $\Lambda_{\lambda_i,\nu_i,\tau_i}^{\varepsilon_i}$ is a twisted NCCR of $(\operatorname{Sym} W_{\lambda_i})^{G^{\lambda_i}}$.
\end{corollary}
\subsection{Examples}
Here we list some examples of quotient singularities for reductive groups
for which Corollary \ref{quasicor}
gives a semi-orthogonal decomposition whose components are NCCRs of singularities of the same type.
In the cases below one can verify \eqref{prazno} in a similar way as the analogous condition was verified in \cite{SVdB}.
\subsubsection{Torus action}
In the case of $G=T$ the condition \eqref{prazno} holds for every generic $\varepsilon\in X(T)_{\blb R}$, and therefore also for every weakly generic $\varepsilon$, since the two notions coincide in this case. Taking into account also the pseudo-genericity of the $W_\lambda$'s, we deduce the following proposition.
\begin{proposition}
Assume that every line $\ell\subset X(T)_{\blb R}$ through the origin which contains a nonzero $\beta_i$ contains at least two of the $\beta_i$ on each side of the origin. Then the condition in Corollary \ref{quasicor} is satisfied and thus ${\cal D}(X/G)$ admits a semi-orthogonal decomposition consisting of NCCRs.
\end{proposition}
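The condition of the proposition is purely combinatorial and can be checked mechanically. The following small script (our own illustration, not taken from the paper; all function names are ours) tests it for a collection of weights $\beta_i$ given as integer vectors in $X(T)\cong{\blb Z}^n$:

```python
# Hypothetical checker for the line condition above: every line through
# the origin containing a nonzero beta_i must carry at least two beta_i
# on each of its two sides (rays).
from fractions import Fraction
from math import gcd

def primitive_direction(v):
    """Return a canonical primitive vector spanning the line through v."""
    g = 0
    for c in v:
        g = gcd(g, abs(c))
    w = tuple(c // g for c in v)
    # fix a sign so that v and -v give the same line representative
    for c in w:
        if c != 0:
            return w if c > 0 else tuple(-x for x in w)
    return w

def satisfies_line_condition(betas):
    """True iff every line through 0 containing a nonzero beta_i
    carries at least two beta_i on each of its two sides."""
    lines = {}
    for b in betas:
        if any(c != 0 for c in b):
            d = primitive_direction(b)
            # side of the origin: sign of t in b = t*d
            t = next(Fraction(bc, dc) for bc, dc in zip(b, d) if dc != 0)
            lines.setdefault(d, [0, 0])
            lines[d][0 if t > 0 else 1] += 1
    return all(p >= 2 and m >= 2 for p, m in lines.values())
```

For instance, for $G=T={\blb G}_m$ with weights $1,1,-1,-1$ the condition holds, while for weights $1,-1$ it fails.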
\subsubsection{$\rm SL_2$-action}
Let $G={\rm SL}_2={\rm SL}(V)$ for a $2$-dimensional vector space $V$.
Put $W=\bigoplus_{i} S^{d_i}V$,
$c=|\{i\mid d_i=0\}|$,
\[
s^{(n)}=\begin{cases}
n+(n-2)+\cdots+1=\dfrac{(n+1)^2}{4}&\text{if $n$ is odd}\\
n+(n-2)+\cdots+2=\dfrac{n(n+2)}{4}&\text{if $n$ is even}
\end{cases}
\]
and $s=\sum_i s^{(d_i)}$. Set $R=(\operatorname{Sym} W)^G$, $M=\oplus_{0\leq i\leq s/2-1} (\operatorname{Sym}(W) \otimes_k S^iV)^G$.
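As a quick sanity check (ours, not part of the text), the closed forms for $s^{(n)}$ can be verified numerically against the defining sums $n+(n-2)+\cdots$:

```python
# Verify the closed forms for s^(n): the sum n+(n-2)+... down to 1
# (n odd) or 2 (n even) equals (n+1)^2/4 resp. n(n+2)/4.
def s_direct(n):
    return sum(range(n, 0, -2))

def s_closed(n):
    return (n + 1) ** 2 // 4 if n % 2 == 1 else n * (n + 2) // 4

assert all(s_direct(n) == s_closed(n) for n in range(0, 200))
```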
\begin{proposition}
\begin{enumerate}
\item\label{A}
If $W$ is a sum of $k^c$ and one of the following representations
\begin{equation*}
V,S^2V,V\oplus V,V\oplus S^2V, S^2V\oplus S^2V, S^3V, S^4V,
\end{equation*}
then $(\operatorname{Sym} W)^G$ is a polynomial ring, and $\Dscr(X/G)$ has a semi-orthogonal decomposition $\Dscr=\langle\dots,\Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$, with $\Dscr_{-i}\cong \Dscr(k[x_1,\dots,x_{c}])$ for $i>0$,
$\Dscr_{0}\cong\Dscr(R)$.
\item \label{B}
If $W$ is not as in the case \eqref{A}
and if
$s$ is odd then ${\cal D}(X/G)$ has a semi-orthogonal decomposition $\langle \dots, \Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_{-i}\cong \Dscr(k[x_1,\dots,x_{c}])$ for $i>0$,
$\Dscr_0\cong\Dscr(\operatorname {End}_R(M))$.
\end{enumerate}
\end{proposition}
\begin{proof}
We note that every representation of $\rm SL_2$ is quasi-symmetric.
Let us first assume that we are in the case \eqref{B}.
We have $\Sigma=]-s,s[$.
We want to verify the condition \eqref{prazno} for $\lambda=0$.
Note that $\varepsilon=0$ and $A=0$. As $s$ is odd,
$W$ satisfies \eqref{prazno} (cf. \cite[Theorem 1.4.5]{SVdB} and its proof).
Since $s$ is odd it also follows that $W$ is a generic ${\rm SL}_2$-representation.
Moreover if $\lambda\neq 0$ then $W_\lambda$ is a sum of trivial
representations and hence $\operatorname{Stab}(W_\lambda)=G^\lambda$. Thus $Q_\lambda$ is the
trivial group. So in particular $W_\lambda$ is $Q_\lambda$-generic for $\lambda\neq 0$.
Since $Q_\lambda$ is the trivial group for $\lambda\neq 0$ we verify that $\Lambda_{\lambda,\nu,\tau}^\varepsilon\cong \operatorname{Sym} W_\lambda\cong k[x_1,\dots,x_c]$ if $\Lambda_{\lambda,\nu,\tau}^\varepsilon\neq 0$.
If $W$ is as in \eqref{A} then it is well known that $(\operatorname{Sym} W)^G$ is a polynomial ring. Therefore $\langle P_0\rangle\cong {\cal D}((\operatorname{Sym} W)^G)$ is admissible in $\Dscr(X/G)$, and thus we can take it to be ${\cal D}_0$; adjusting the proof of Proposition \ref{affineso} we get, as above, ${\cal D}_{-i}\cong {\cal D}(\operatorname{Sym} W_\lambda)\cong {\cal D}(k[x_1,\dots,x_c])$ for $\lambda\neq 0$.
\end{proof}
\subsubsection{Determinantal varieties}
Let $n<h$, $W=(V^*)^h\oplus V^h$, $\dim V=n$, $G={\rm GL}(V)$, and let $Y_{n,h}=W/\!\!/ G$ be the variety of $h\times h$-matrices of rank $\leq n$. We denote by $\Lambda_j$ the NCCR of $Y_{j,h}$ given by \cite[Proposition 5.2.2]{SVdB} (constructed earlier in \cite{BLVdB2,SegalDonovan}). For convenience we set $\Lambda_0:=k$.
\begin{proposition}
Let ${\cal D}={\cal D}(X/G)$. Then ${\cal D}$ has a semi-orthogonal decomposition $\langle \dots, \Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_0\cong \Dscr(\Lambda_n)$, $\Dscr_{-i}\cong \Dscr(\Lambda_j)$ for some $j<n$.
\end{proposition}
\begin{proof}
Let us recall some relevant data from \cite[\S 5]{SVdB}.
Let $L_i\in X(T)$ be given by $\operatorname {diag}(z_1,\dots,z_n)\mapsto z_i$.
The weights of $W=V^h\oplus (V^*)^h$ are $(\pm L_i)_i$, each weight occurring with multiplicity $h$, and a system of positive roots is given by $(L_i-L_j)_{i>j}$.
The weights of $W_\lambda$ are of the form $(\pm L_i)_{i\in S}$ for some subset $S\subset \{1,\dots,n\}$, each weight occurring with multiplicity $h$, and $Q_\lambda$ is isomorphic to $\operatorname {GL}(V_\lambda)$, where $V_\lambda$ is an $|S|$-dimensional vector space.
Thus, $W_\lambda$ is a $Q_\lambda$-generic representation (see \cite[\S 5.1]{SVdB}).
Note that none of the weights of $W$ belongs to the subspace of $X(T)_{\blb R}$ spanned by the roots. Note also that $\varepsilon=\sum_i L_i\in X(T)_{\blb R}$ is ${\cal W}$-invariant and that the space of ${\cal W}_\lambda$-invariant vectors is spanned by $\sum_{i\in S}L_i$.
Thus, it is enough to see that $\sum_{i\in S}L_i$ satisfies \eqref{prazno} for $\lambda\in Y(T)^-$ in order to apply Corollary \ref{quasicor}. Similarly as in the proof of \cite[Proposition 5.2.2]{SVdB} we have for ${\cal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$
\begin{multline*}
\nu-\bar{\rho}_\lambda+(1/2) (\bar{\Sigma}_{\lambda})_{\pm \varepsilon}=\\
\sum_{i\in T_\lambda^+\cup T_\lambda^-}r_iL_i-\sum_{i\in T_\lambda^0} ((n-2i+1)/2) L_i
+\left\{\sum_{i\in T_\lambda^0} a_iL_i\mid a_i\in\, ]-h/2,h/2[\,\right\}=\\
\nu-\bar{\rho}_\lambda+(1/2) \bar{\Sigma}_{\lambda},
\end{multline*}
which implies \eqref{prazno}.
Moreover, from the previous paragraph it also follows that the components of the semi-orthogonal decomposition from Proposition \ref{quasisod} are in this case isomorphic to the NCCRs of $Y_{j,h}$, $1\leq j\leq n$, or to $\Dscr(k)$.
\end{proof}
\subsubsection{Pfaffian varieties}
Let $2n<h$, $W=V^h$, where $V$ is a $2n$-dimensional vector space equipped with a non-degenerate skew-symmetric bilinear form, $G={\rm Sp}_{2n}(k)$, and let $Y_{2n,h}^-=W/\!\!/ G$ be the variety of skew-symmetric $h\times h$-matrices of rank $\leq 2n$.
We denote by $\Lambda_j$ the NCCR of $Y_{2j,h}^-$ given by \cite[Proposition 6.1.2]{SVdB}. For convenience we set $\Lambda_0:=k$.
\begin{proposition}
\label{prop:pfaffians}
If $h$ is odd then ${\cal D}(X/G)$ has a semi-orthogonal decomposition $\langle \dots, \Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_0\cong \Dscr(\Lambda_n)$, $\Dscr_{-i}\cong \Dscr(\Lambda_j)$ for some $j<n$.
\end{proposition}
\begin{proof}
We recall some fragments of \cite[\S 6.1]{SVdB}. We assume that $(v_i)_i$ is a basis for $V$ such that the skew-symmetric form on $V$ is given by $\langle v_i,v_{i+n}\rangle=1$, $\langle v_i,v_j\rangle=0$ for $j\neq i\pm n$. Let $T\subset \operatorname{Sp}(V)$ be the maximal torus $\{\operatorname {diag}(z_1,\ldots,z_n,z_1^{-1},\ldots,z^{-1}_n)\}$ and let $L_i\in X(T)$ be given by $(z_1,\dots,z_n)\mapsto z_i$.
The weights of $W=V^h$ are $(\pm L_i)_i$, each occurring with multiplicity $h$, and a system of positive roots is given by $(L_i-L_j)_{i>j}$, $(2 L_i)_i$,
$
\bar{\rho}=nL_1+(n-1)L_2+\cdots+L_n,
$
\[
\Sigma=\{\sum_i a_iL_i\mid a_i\in ]-h,h[\}.
\]
For $\lambda\in Y(T)^-$ and ${\cal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ we thus have
\[
\nu-\bar{\rho}_\lambda+(1/2) \bar{\Sigma}_{\lambda}=
\sum_{i\in T_\lambda^+\cup T_\lambda^-}r_iL_i-\sum_{i\in T_\lambda^0} (n-1+i)L_i
+\left\{\sum_{i\in T_\lambda^0} a_iL_i\mid a_i\in [-h/2,h/2]\right\}.
\]
It easily follows that the boundary of this set does not intersect $X(T)$ if $h$ is odd, thus the condition \eqref{prazno} holds.
Similarly as in the case of determinantal varieties, $W_\lambda$ is $Q_\lambda$-generic, and here the semi-orthogonal decomposition consists of the NCCRs of the Pfaffian varieties $Y_{2k,h}^-$, $1\leq k\leq n$, or $\Dscr(k)$.
\end{proof}
\subsection{Proof of Proposition \ref{quasisod}}
\label{sec:quasimod}
\subsubsection{Preliminaries}
We remind the reader of our standing hypothesis that $W$ is quasi-symmetric.
In \S\ref{partition} (see \eqref{eq:partition}, Remark
\ref{rmk:faces}) we partitioned the set $X(T)^+$ according to the
relative interiors of faces of $-\bar\rho+r\bar\Sigma$, $r\geq
1$. However, in order to obtain a decomposition by NCCRs we need a
finer decomposition of $X(T)^+$. As indicated in Proposition
\ref{nccr} the decomposition parts should be roughly given by slightly
shifted faces of $-\bar\rho+(1/2)\bar{\Sigma}$. To obtain such a
decomposition we will inductively refine each face of
$-\bar\rho+r\bar\Sigma$. While in \S\ref{partition} we started with
$r\geq 1$, we need here $r> 1/2$. The following lemma will ensure that
this is indeed possible.
We note first that the properties of the partition in \S\ref{partition}
remain basically unchanged if we replace $-\bar\rho+r\bar\Sigma$ by
$\nu-\bar\rho+r\bar\Sigma$ for a $\Wscr$-invariant $\nu\in
X(T)_{\blb R}$.
By replacing $-\bar{\rho}$ in the right hand side of
\eqref{expresschi} by $\nu-\bar\rho$ we obtain a minimal quadruple
which we denote by $(r_\chi^{\nu},{\bf S}_\chi^\nu)$ and also a
corresponding one-parameter subgroup $\lambda^\nu$ as in Lemma
\ref{ref-1.5}\eqref{tri}. We will omit the extra decoration
$(-)^\nu$ in case no confusion can arise.
The following lemma is an improved and slightly generalized version of Corollary \ref{betaplus}
which holds because $W$ is quasi-symmetric.
\begin{lemma}\label{betaplus+}
Let $\chi\in X(T)^+$ be such that $r^{\nu}_\chi> 1/2$.
If $p>0$ and $\mu=\chi+\beta_{i_1}+\cdots+\beta_{i_{p}}$, where $\{i_1,\ldots,i_{p}\}\subset\{1,\ldots,d\}$, $i_j\neq i_{j'}$ for
$j\neq j'$ and $\langle\lambda^\nu,\beta_{i_j}\rangle>0$,
then $(r^\nu_{\mu^+},|{\bf S}^\nu_{\mu^+}|)<(r^\nu_\chi,|{\bf S}^{\nu}_\chi|)$.
Moreover, $\langle\lambda^\nu,\chi\rangle<\langle\lambda^\nu,\mu^+\rangle$.
\end{lemma}
\begin{proof}
For the first claim we proceed exactly as in the proof of \cite[Theorem 1.6.1]{SVdB}. The second claim follows as the corresponding part in
Corollary \ref{betaplus}.
\end{proof}
Here we use notation introduced in \S\ref{sec:nonstable}.
For $\lambda\in Y(T)^-$, ${\cal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ and $\tau\in X(A)$ we denote
\begin{align*}
{\cal L}_{\lambda,r,\nu,\tau}&= X(\bar T)^\lambda_\tau\cap(\nu-\bar\rho_\lambda+r\Sigma_\lambda),\\
{\cal D}_{\lambda,r,\nu}(X/G)_\tau&=\langle P_{G^\lambda,\chi}\mid \chi\in {\cal L}_{\lambda,r,\nu,\tau}\rangle,\\
U_{\lambda,r,\nu,\tau}&=\bigoplus_{\mu\in {\cal L}_{\lambda,r,\nu,\tau}}V_{G^\lambda}(\mu)
,\\
\Lambda_{\lambda,r,\nu,\tau}&=(\operatorname {End}(U_{\lambda,r,\nu,\tau})\otimes_k \operatorname{Sym} W_\lambda)^{G^\lambda}.
\end{align*}
so that in particular
\begin{equation}
{\cal D}_{\lambda,r,\nu}(X/G)_\tau\cong {\cal D}(\Lambda_{\lambda,r,\nu,\tau}).
\end{equation}
We will also use some specializations of these notations in case
part of the data $\lambda,r,\nu,\tau$ is omitted. If $\lambda$ is omitted
then we assume $\lambda=0$. If $r$ is omitted then we assume that $r=1/2+\epsilon$ where $\epsilon>0$ but arbitrarily small.
For example with this convention we have
\begin{align*}
{\cal L}_{\lambda,\nu,\tau}&= X(\bar T)^\lambda_\tau\cap(\nu-\bar\rho_\lambda+(1/2)\bar{\Sigma}_\lambda),\\
{\cal L}_{\nu,\tau}&= X(\bar T)^+_\tau\cap(\nu-\bar\rho+(1/2)\bar{\Sigma}).
\end{align*}
In the other direction, to indicate context, we may also write
$(G,X,\lambda,r,\nu,\tau)$ instead of $(\lambda,r,\nu,\tau)$,
where we allow again $r$ or $\lambda$ to be omitted.
\begin{proposition}\cite[Theorem 1.6.1]{SVdB}\label{sigmaq}
Assume $r>1/2$. Then $\operatorname {gl\,dim} \Lambda_{r,\nu,\tau}\allowbreak <\infty$.
Consequently, ${\cal D}_{r,\nu}(X/G)_\tau\cong \Dscr(\Lambda_{r,\nu,\tau})$.
In particular, (specializing to $\lambda=0$ and $r=1/2+\epsilon$) $\operatorname {gl\,dim} \Lambda_{\nu,\tau}<\infty$ and ${\cal D}_{\nu}(X/G)_\tau\cong \Dscr(\Lambda_{\nu,\tau})$.
\end{proposition}
\begin{lemma}\label{tired}
Let $\nu$ be a ${\cal W}$-invariant element of $X(T)_{\blb R}$. Using the partition \eqref{eq:partition} of $X(T)^+$, calculated with respect to $\nu-\bar{\rho}+r\bar{\Sigma}$, $r> 1/2$, and using $(r^\nu_\chi,{\bf S}^\nu_\chi)$, the analogues of Propositions \ref{affineso}, \ref{twist} hold.
\end{lemma}
\begin{proof}
Using Lemma \ref{betaplus+} in place of Corollary \ref{betaplus} and Proposition \ref{sigmaq} it is easy to check that the proofs of the semi-orthogonal decompositions in \S\ref{sec:proofs} carry over with the above partition of $X(T)^+$,
replacing
$\Lambda_{<j}$, $\Dscr_{<j}$, $\Dscr_{j}$ by their shifted counterparts $\Lambda_{<j,\nu}$, $\Dscr_{<j,\nu}$, $\Dscr_{j,\nu}$.
Consequently, the same holds true also for $\Lambda_{<j,\nu,\tau}$, $\Dscr_{<j,\nu,\tau}$, $\Dscr_{j,\nu,\tau}$ (see proof of Proposition \ref{twist}).
\end{proof}
If $K$ is a connected normal subgroup of $G$, and $Q$ is a chosen pseudo-comple\-ment, then we denote (as in the proof of Proposition \ref{twist}) by $\tilde T=T_K\times T_Q$ a maximal torus of $\tilde G=K\times Q$ such that $\tilde T$ maps onto $T\subset G$ under the multiplication map. Let us denote by $(\nu_K,\nu_Q)$ the image of $\nu\in X(T)$ in $X(\tilde T)$.
\begin{lemma}\label{faceTstable}
Let $\nu\in X(T)_{\blb R}$ be ${\cal W}$-invariant.
Assume that there is a non-trivial connected subgroup $K$ of $G$ acting trivially on $X$. Let $Q$ be a pseudo-complement of $K$ in $G$.
Then there exist a ${\cal W}_Q$-invariant $\nu_Q\in X(T_Q)_{\blb R}$,
a finite central subgroup $A_Q$ of $Q$ acting trivially on $X$,
and $\tau_{Q}\in X(A_Q)$ such that
\[
{\cal L}_{\lambda,r,\nu,\tau}\cong {\cal L}_{Q_\lambda,X^\lambda,r,\nu_Q,\tau_Q}
\]
via the natural map $X(\bar{T})_\tau\to X(T)\to X(\tilde{T})\to X(T_Q)$.
\end{lemma}
\begin{proof}
The proof is similar to
the proof of Proposition \ref{prop:nonstable1}, additionally noting that $-\bar \rho$ decomposes as $(-\bar\rho_K,-\bar\rho_Q)$, $\beta_i$ as $(0,\beta_i)$, $\nu$ as $(\nu_K,\nu_Q)$, and $\tau_Q\in X(A_Q)$ satisfies
$\tilde\tau_Q+\tilde\nu_K-\tilde{\bar{\rho}}_K=\tilde\tau$.
\end{proof}
\subsubsection{Proof}
The proof is in two steps.
We first decompose $\Dscr(X/G)$ such that its components are isomorphic to $\Dscr(\Lambda_{\lambda,\nu,\tau})$, and then we decompose this further with components isomorphic to $\Dscr(\Lambda_{\lambda,\nu,\tau}^\varepsilon)$.
\begin{lemma}\label{lemmaquasisod}
Let $X$ have a $T$-stable point.
There exist $\lambda_i\in Y(T)^-$,
finite central subgroups $A_i$ of $G^{\lambda_i}$ acting trivially on $X^{\lambda_i}$, $\tau_i\in X(A_i)$, $\Wscr_{G^{\lambda_i}}$-invariant $\nu_i\in X(T)_{\blb R}$ such that ${\cal D}=\Dscr(X/G)$ has a semi-orthogonal decomposition ${\cal D}=\langle\dots,\Dscr_{-2},\Dscr_{-1},\Dscr_0\rangle$ with $\Dscr_{-i}\cong \Dscr(\Lambda_{\lambda_i,\nu_i,\tau_i})$.
\end{lemma}
\begin{proof}
We decompose ${\cal D}$ according to faces of $-\bar\rho+r\bar{\Sigma}$
assuming $r_j>1/2$. By Lemmas \ref{tired}, \ref{faceTstable}
and \eqref{eq:part2}
we have
$\Dscr_{-i}\cong \Dscr_{r_{j_i},(\nu_{j_i})_{Q_{\lambda_{j_i}}}}(X^{\lambda_{j_i}}/Q_{\lambda_{j_i}})_{\tau_{Q_{\lambda_{j_i}}}}$.
As $\dim X^{\lambda_{j_i}}<\dim X$, $\dim Q_{\lambda_{j_i}}<\dim G$, $r_{j_i}>1/2$ we can decompose $\Dscr_{-i}$ further. Note that in finitely many steps (as ${\cal L}_{r,\nu,\tau}$ is finite) we reach the situation when every component of the decomposition is of the form $\langle P_{\chi}\mid \chi\in{\cal L}_{\lambda,\nu,\tau}\rangle$ for some $\lambda\in Y(T)^-$, ${\cal W}_{G^{\lambda}}$-invariant $\nu\in X(T)_{\blb R}$, and $\tau\in X(A)$ for a finite central subgroup $A$ of $G^\lambda$ acting trivially on $X^\lambda$ by Lemma \ref{lem:technical} below (and the fact that for $r=1/2+\epsilon$ we have by definition ${\cal L}_{\lambda,r,\nu,\tau}={\cal L}_{\lambda,\nu,\tau}$). \end{proof}
We now proceed to decompose
${\cal D}_{\lambda,\nu}(X/G)_\tau\cong {\cal D}(\Lambda_{\lambda,\nu,\tau})$ further.
\begin{lemma}\label{0step}
There exist $\lambda_0=\lambda$, $\lambda_i\in Y(T)^-$, ${\cal W}_{G^{\lambda_i}}$-invariant $\nu_i$, ${\cal W}_{G^{\lambda_i}}$-invariant $\varepsilon_i\in X(T)_{\blb R}$ which are weakly generic for $\bar{\Sigma}_{\lambda_i}$,
finite central subgroups $A_i$ of $G^{\lambda_i}$ acting trivially on $X^{\lambda_i}$, $\tau_i\in X(A_i)$ such that $\Dscr={\cal D}_{\lambda,\nu}(X/G)_\tau$ has a semi-orthogonal decomposition $\Dscr=\langle \Dscr_{-k},\dots,\Dscr_{-1},\Dscr_0\rangle$ with
$\Dscr_{-i}\cong \Dscr(\Lambda_{\lambda_i,\nu_i,\tau_i}^{\varepsilon_i})$ for $i\geq 0$.
\end{lemma}
\begin{proof}
We choose a ${\cal W}_{G^\lambda}$-invariant $\varepsilon\in X(T)_{\blb R}$ weakly generic for $\bar{\Sigma}_\lambda$
and small $a>0$ such that
\[
X(\bar T)_\tau^\lambda\cap(\nu-\bar\rho_\lambda+(1/2)(\bar{\Sigma}_\lambda)_\varepsilon)=
X(\bar T)_\tau^\lambda\cap(\nu-\bar\rho_\lambda+a\varepsilon+(1/2)\bar{\Sigma}_\lambda)
\]
and
\begin{equation}\label{objem}
{\cal L}_{\lambda,\nu,\tau}=X(\bar T)_\tau^\lambda\cap(\nu-\bar\rho_\lambda+a\varepsilon+r'{\Sigma}_\lambda)
\end{equation}
for some $r'>1/2$.
We take $\delta=\nu+a\varepsilon$, and let $\nu_Q$, $\tau_Q$ be as in Lemma \ref{faceTstable}, and partition $X(T_{Q_\lambda})^+$ accordingly.
Due to \eqref{objem} and Lemma \ref{tired}, ${\cal L}_{\lambda,\nu,\tau}^\varepsilon\cong \bigcup_{i<j_1} F_i$ and there exists $k\geq 0$ such that ${\cal L}^\varepsilon_{\lambda,\nu,\tau}\cup\bigcup_{1\leq i\leq k} F_{j_i}= {\cal L}_{\lambda,\nu,\tau}$. We set $\varepsilon_0=\varepsilon$.
As in the second paragraph of the proof of Lemma \ref{lemmaquasisod}, we can decompose $\langle P_\chi\mid \chi\in F_{j_i}\rangle$ further, until the components are isomorphic to ${\cal D}_{\lambda,\nu}(X/G)_\tau$ for some $\lambda\in Y(T)^-$, ${\cal W}_{G^\lambda}$-invariant $\nu\in X(T)_{\blb R}$ and $\tau\in X(A)$ for a finite central subgroup $A$ of $G^\lambda$ acting trivially on $X^\lambda$. Now we can repeat the first part of this proof and decompose ${\cal D}_{\lambda,\nu}(X/G)_\tau$ further. We get the desired decomposition by invoking Lemma \ref{lem:technical} below.
\end{proof}
We have used the following technical lemma.
\begin{lemma}\label{lem:technical}
Let $\lambda'\in Y(T)^-$, $\lambda''\in Y(T_{Q_{\lambda'}})^-$,
let $\nu'\in X(T_{Q_{\lambda'}})$ be ${\cal W}_{(Q_{\lambda'})^{\lambda''}}$-invariant, and let $A'$ be a finite central subgroup of $(Q_{\lambda'})^{\lambda''}$ acting trivially on $(X^{\lambda'})^{\lambda''}$, $\tau'\in X(A')$. There exist $\lambda\in Y(T)^-$, a ${\cal W}_{G^{\lambda}}$-invariant $\nu\in X(T)_{\blb R}$, a finite central subgroup $A$ of $G^\lambda$ acting trivially on $X^\lambda$, and $\tau\in X(A)$,
such that
\begin{align*}
{\cal L}_{Q_{\lambda'},X^{\lambda'},\lambda'',r,\nu',\tau'}&\cong {\cal L}_{\lambda,r,\nu,\tau},\\
{\cal D}_{Q_{\lambda'},X^{\lambda'},\lambda'',r,\nu'}(X^{\lambda'}/Q_{\lambda'})_{\tau'}&\cong \Dscr_{\lambda,r,\nu}(X/G)_\tau\cong \Dscr_{r,\nu_Q}(X^\lambda/Q_\lambda)_{\tau_Q}.
\end{align*}
\end{lemma}
\begin{proof}
It is easy to check using Lemma \ref{faceTstable} that we can take $\lambda=a(\lambda'+b\lambda'')$ for small $b\in {\blb Q}$, $a\in {\blb N}$ such that $\lambda\in Y(T)^-$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{quasisod}]
It suffices to combine Lemma \ref{lemmaquasisod} with Lemma \ref{0step}.
\end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
The next generation of gravitational wave ({GW}) detectors, most
notably Advanced LIGO \cite{aligo}, GEO-HF \cite{geohf} and Advanced VIRGO \cite{avirgo},
is expected to achieve the first direct detection of {GW}s
during 2015, when Advanced LIGO begins operation, or soon thereafter,
giving life to the new field of
{GW}~astronomy.
The impact on cosmology, fundamental physics, and of course on
conventional astronomy itself will be enormous; the interest in
{GW}~observation is therefore
growing rapidly in the scientific community. But while public
interest in cosmology and relativity is high, public knowledge of the
principles behind {GW}~detection is still rather limited, and even the
existence of the large {GW}~observatories is little known to the
general public, compared to larger experimental facilities such as,
for instance, the Large Hadron Collider \cite{LHC}. There is a clear
need to better inform and inspire the general public and prospective
students in {GW}~astronomy and related sciences.
Within the LIGO Scientific Collaboration (LSC)
the `Education and Public Outreach' (EPO) group aims to combine ideas and
approaches across the collaboration to successfully communicate the
vision and benefits of GW observation throughout the world.
To contribute to these international efforts in the promotion of GW
astronomy, at the University of Birmingham we have established a
unique program aimed at the development of small educational
computer applications that can be used to illustrate the basics of
{GW}~science and {GW}~detector technology in a playful but informative
way. The aim is to present GW science to younger generations within
one of the environments they are most familiar with, i.e.\ computer
games, and to exploit the communication channels that the {\it new
technologies} offer to bring an even larger international
audience in contact with {GW}~science.
This activity led us to the successful development of a number of
interactive computer applets describing a variety of concepts
connected to {GW}~science and, eventually, to the realisation of
two full-scale computer games based on the subjects of gravity and
{GW}s. In this article we overview our computer-related outreach activity
and present and discuss our {GW}~related games, {Black Hole Pong}~and {Space Time Quest}.
\section{{`Processing'} programs for science outreach}
The idea of developing small computer applications for educational
purposes builds upon the need to introduce new students to the world of
computer programming and modelling of physical systems in a manner
which is enjoyable. This will not only help them learn successfully, but
also engage them with GW research.
During the initial `induction'
phase, undergraduate or summer-students involved in research projects
with our group are encouraged to generate a small computer program on
a {GW}~subject of their interest, which is then developed, as a
conventional student-project, by the student with the supervision of
more senior members of the group.
The small computer programs, called `sketches', are developed using
the open-source programming environment Processing \cite{processing:book,processing:web},
originally developed at the MIT Media Lab in 2001 as a software prototyping environment and to teach
fundamentals of computer programming within a graphical context.
Processing has eventually reached a wide audience
and is now widely used in many professional communities, such
as those of designers, artists and architects, to create graphical
applications, animations and interactive tools, and for visual arts in
general.
Processing offers an intuitive approach to programming for
the beginner or an efficient sketchbook for rapid prototyping by
experienced programmers. Thus Processing allows students with very different
computing backgrounds to collaborate and to successfully produce graphically
impressive sketches in a relatively short period of time.
The successful sketches are published online on our outreach website
\href{http://www.gwoptics.org}{gwoptics.org}~\cite{gwoptics}, on individual webpages where the
student-author can provide instructions and a short description of the
physics illustrated in the sketch. The program is either embedded
within the HTML code to run as an applet in the webpage itself or,
where more appropriate, distributed for download as an application to
install and run on the computer of the interested person.
The collection of sketches developed so far covers a wide spectrum of
subjects related to {GW}~science and technologies. The programs range
from illustrating the most fundamental properties of {GW}s, for
example the deformation of space-time produced by a propagating {GW}~or
the characteristic `sound' of the {GW}~signal of colliding black holes,
to the illustration of the vital technologies and phenomena used in GW
detectors such as lasers, vibration isolation systems or interference
of light. Different combinations of these sketches have been
successfully used as interactive tools during seminars about GW
detection and during more general lectures in schools and
universities, and the sketches' webpages are regularly consulted online
by people interested in learning about the specific subject.
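As a minimal illustration of the physics behind one such sketch (written here in Python rather than Processing, and not one of the actual gwoptics applets), the classic animation of a plus-polarised {GW}~deforming a ring of free test masses boils down to a few lines: to first order in the strain $h$, the $x$-direction is stretched while the $y$-direction is squeezed, with the roles reversed half a period later.

```python
# Hypothetical sketch of the effect animated in the gwoptics applets:
# a plus-polarised gravitational wave deforming a ring of test masses.
import math

def ring_under_gw(n_masses, h, phase):
    """Positions of n_masses free test masses, initially on a unit ring,
    deformed to first order in the strain h by a plus-polarised
    gravitational wave at the given phase."""
    points = []
    for k in range(n_masses):
        a = 2 * math.pi * k / n_masses
        x, y = math.cos(a), math.sin(a)
        s = 0.5 * h * math.cos(phase)  # instantaneous strain factor
        # plus polarisation: stretch along x, squeeze along y (and vice
        # versa half a period later, when cos(phase) changes sign)
        points.append((x * (1 + s), y * (1 - s)))
    return points
```

Animating the phase over time and drawing the points yields the oscillating ellipse familiar from the applets.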
\begin{figure}[th]
\begin{center}
\includegraphics[width=7.8cm]{bhp_screen01.png}
\includegraphics[width=7.8cm]{bhp_screen03.png}
\caption{\label{fig:BHPscreenshots} Screen-shots of the {Black Hole Pong}~game. The
image on the left shows the start-up screen of the game and the
right snapshot has been
taken mid-game, showing one of the
astronomical background images, the two `black holes' controlled
by the players and several bright disks as the stars currently
in play.}
\end{center}
\end{figure}
\section{The gravitational wave games}
The positive feedback we received about the different interactive sketches has
encouraged us to realise two properly defined computer-games
based on subjects related to gravity and GW science.
The motivation behind the development of the two particular games was
as follows:
in one case, the aim was to produce an intuitive and graphically
attractive computer game that could engage and entertain children and
teenagers during science exhibitions; in the other, the goal was,
conversely, to develop an interactive element that could function as an
engaging and playful supplement for illustrating and explaining the
secrets of {GW}~detectors and their technology to a more educated
audience, from high-school students onwards. The two games, {Black Hole Pong}~and
{Space Time Quest}, are presented and discussed in this section.
\subsection{Black Hole Pong}
{Black Hole Pong}~(BHP) is a new arcade-style game with a reference to
one of the very first computer-games, {\it Pong}~\cite{Pong}.
Pong involved two players, each one controlling a paddle
which they would move vertically up and down in order to bounce a
ball back towards their opponent: each
time the ball touched the opponent's far edge of the screen, the player
would score a point. In BHP the idea of the split-screen has been
kept. However each player controls a black hole which can move
horizontally as well as vertically, and the
objective is to make use of the gravitational potential of the black
hole to fling free roaming stars towards the other player.
\begin{figure}[b]
\begin{center}
\includegraphics[height=4.1cm]{BHP_commday_2011.pdf}
\includegraphics[height=4.1cm]{BHP_bsf_2010.pdf}
\includegraphics[height=4.1cm]{BHP_amaldi_2011.pdf}
\caption{\label{fig:BHPexhibition} Children, students and adults enjoying the BHP~game
at the University of Birmingham Community Day 2011 (left), the British
Science Festival 2010 (center) and the 9$^{\rm th}$ Amaldi Conference
(right).}
\end{center}
\end{figure}
BHP~has been designed as a simple, fun game for people of
all ages. At the same time, BHP features several educational elements
that make it a fascinating tool for teaching, learning and discovering
new physical concepts. For example, by learning how to manoeuvre the
incoming stars only using the black hole's gravitational potential,
the player develops an intuitive understanding of concepts such as
gravitational attraction, orbital
mechanics and gravitational slingshot effect.
The background graphics of the game consist of a slideshow containing images of
the night sky as well as astronomical objects taken by both amateur
astronomers and large ground/space based telescopes.
Furthermore, several astrophysical phenomena are graphically featured
in the game, such as `worm holes', `star capture'
and `gravitational lensing', adding to the overall simple but
attractive graphics, see Fig.~\ref{fig:BHPscreenshots}.
When used as part of an exhibition, science fair or similar activity we have
supported the arcade-style attraction with state-of-the-art hardware:
the game is designed to be controlled with the well-known Microsoft
Xbox controllers and whenever possible we use large Apple iMac
computers to run the game, as shown in Fig.~\ref{fig:BHPexhibition}.
\subsection{Space Time Quest}
{Space Time Quest}~(STQ), shown in the screenshots of Fig.~\ref{fig:STQ:screens} and Fig.~\ref{fig:STQ:noise}, is a manager-simulation game:
the player is the `Principal Investigator' (PI) of a
future ground-based
{GW}~interferometer, whose goal is to design
the most sensitive GW detector.
The PI is assigned a limited budget that must be distributed wisely between
the different detector subsystems,
tweaking the instrument parameters to achieve
the best sensitivity. Once the player is satisfied with their design,
they can operate the detector in `Science-Run' mode and see how well it performs:
the final score is determined by how deep into the universe the detector can see,
based on the achieved sensitivity curve,
and it is recorded in the web-based STQ `hall-of-fame'.
\begin{figure}[t]
\includegraphics[height=5.3cm]{STQ_PI.pdf}
\includegraphics[height=5.3cm]{STQ_opt.pdf}
\caption{\label{fig:STQ:screens} Screenshots of STQ. On the left, the `PI's office'
is the game's hub from which the player can access all the detector's subsystems. On the right an image of the `Optics' subsystem screen.}
\end{figure}
STQ is complemented by the \href{http://www.gwoptics.org/ebook}{`Gravitational Waves E-Book'} \cite{ebook},
effectively a collection of webpages with short introductions to a number of topics relevant to GW science.
The E-Book offers a description of the main instrument subsystems that comprise the detector and
illustrates
how each noise source relates to the
different subsystem parameters that the player chooses.
The E-Book text is purposely aimed at
general readers interested in GW science,
and as such it is written in a simple style
to make it accessible
to a broad public of all
ages and backgrounds.
The E-Book is also independently available online as more general reading material on GW science,
and is now also offered in multiple languages.
STQ has many educational merits.
First of all, the game showcases the physics behind a real
GW detector and presents all the main subsystems that comprise it.
Furthermore, it illustrates the most important noise sources
that can limit the detector's sensitivity.
By reading the E-Book and by looking at the changes in the sensitivity curve,
the player can see
how each subsystem is affected by the different noise sources,
and discover some of the ways in which physicists
try to reduce the noise sources in the detector.
The game also presents some of the typical challenges that physicists face when designing
a real physics experiment,
such as making trade-offs between the performance of different
interlinked subsystems and managing the available resources wisely.
Finally, STQ features images of astronomical objects in the background,
similar to BHP. Furthermore,
the STQ graphics are based on photographs of components from real GW detectors, complemented
with realistic cartoons of the detector parts, offering the user a realistic picture of what a GW interferometer looks like.
\begin{figure}[t]
\includegraphics[height=5.3cm]{STQ_sens.pdf}
\includegraphics[height=5.3cm]{STQ_kids.pdf}
\caption{\label{fig:STQ:noise} Left: a screen-shot of the `sensitivity curve' in the STQ game.
Right: school students playing
STQ with a demonstrator during an exhibition at University of Birmingham.}
\end{figure}
STQ~is mainly targeted at science teachers and at A-level, Higher and
Advanced Higher science students, and is best suited for use in
science fairs and exhibitions, initially played with the help of
demonstrators. However, STQ~has also proved to be an entertaining
tool for teaching the basics of {GW}~science to beginners in
{GW}~research projects and PhD schools, or for attracting prospective
research students towards the {GW}~field.
\section{Use in exhibitions and distribution}
\begin{figure}[b]
\includegraphics[width=16cm]{youtube_stats.pdf}
\caption{\label{fig:youtube} Total number of unique views of the online video tutorials for the Black Hole Pong and Space Time Quest games and for one of our processing sketches, the `Augmented Reality Pendulum'. Data from \href{http://www.youtube.com/gwoptics}{youtube.com}.}
\end{figure}
Early prototypes of the games were first used
during the exhibition on {GW}~science `Looking
for Black Holes with Lasers', organised by the Birmingham GW Group
within the `British Science Festival', held in Birmingham in September
2010
\cite{BSF:main}.
The very positive feedback collected with the games
during this first exhibition gained us the attention of our university
and of local schools and associations. This led to our displays being
routinely used in University Open/Admission days and in our
university's outreach events, e.g. the University Community Day 2011
\cite{comday}, and earned members of our group invitations to visit
schools and give public seminars, where the games were used
to complement the seminars.
At the same time, positive feedback has been received from school
teachers concerning our other Processing sketches which have been used
as support material in physics lectures and during science activities.
Since their official release, BHP and STQ have been freely distributed on
our website \href{http://www.gwoptics.org}{gwoptics.org}~and on the outreach pages of the \href{http://www.ligo.org/}{ligo.org} website \cite{ligo.org}, which hosts links and multimedia material of interest for the EPO group. The launch of the two games was also announced online via
social-media networks and with online videos, with the main aim
of raising both the profile of the games and their
visibility.
Indicative figures of merit
for the success of this campaign
can be inferred from
the total number of
downloads
of the two games, the entries in the
high-score `hall of fame', the unique views of the online
video tutorials and, more generally, from the number of visits to
the webpages presenting our online material.
Examples of such data are presented in Fig.\,\ref{fig:youtube} and
Fig.\,\ref{fig:gwoptics}.
The response so far is very positive and
seems to indicate a slow but constant growth of new contacts and an
increasing interest in the products themselves. In particular,
noticeable increases in the number of contacts can be
correlated with our contributions to science events and exhibitions,
with the release of new outreach material and with our communication
campaign via online social networks.
\begin{figure}[t]
\includegraphics[width=16cm]{gwoptics_stats_cumulative_b.pdf}
\caption{\label{fig:gwoptics} Top panel: number of downloads of the Black Hole Pong and Space Time Quest games over the year 2011. Bottom panel: cumulative number of visits
during 2011 to some of the \href{http://www.gwoptics.org}{gwoptics.org}~outreach pages, such as the main page collecting all Processing programs, the games BHP and STQ, the STQ high-score `hall of fame', the E-Book and the pages of two other Processing sketches, the `Augmented Reality Pendulum' and the `Inspiral signal'. }
\end{figure}
STQ and BHP are also accompanied by short questionnaires, handed out at
exhibitions and during visits to schools as well as online,
which collect anonymous feedback from users,
targeting teachers and students in particular.
The aim of the questionnaires is primarily to evaluate
the success of the games among users and in particular the
effectiveness of their educational aspects.
As a further step in the future,
the goal is to develop a proper analysis of the feedback provided in the questionnaires
that will allow us to better link the games to specific elements of education, such as formal education,
and to improve their integration within the syllabus.
\section{Conclusion and future activities}
Our program aimed at the development of small computer applications
for educational purposes. This has been successful, with the realisation of
several interactive sketches and two full-scale games
related to GW science and technology. Thanks to our participation
in
popular science events, and to our online presence and communication
campaign, the sketches and the games are now becoming popular within
schools and science associations and, as shown by the
response gathered from the online audience, the prospects look
promising for the future. In particular,
BHP and STQ
have allowed us to significantly increase the visibility of
our online presence and as such also of
GW
science
within the general public, as well as within the local scientific community.
Encouraged by these positive results, we will continue our
computer-related outreach activity in the coming years, and we plan to
realise new Processing sketches for outreach in the near future.
In parallel, BHP and STQ will be treated as running projects. We will take
advantage of the feedback and advice from teachers, students and other
users to make further modifications and improvements to both games.
Furthermore, we
will continue presenting BHP, STQ and our other interactive sketches
during visits to local schools and in popular science events. We hope
to increase and improve our online communication campaign
on GW subjects to make GW detectors more and more popular among the
general public and to gain GW science the largest audience possible.
\ack
We are very grateful to the Processing community for all the online examples, code libraries and online forums
that helped us in the development of our sketches and games and without whom most of this work would not have been possible.
We thank the astronomers who allowed us to use their impressive photographs of the night sky as background images for the BHP and STQ games (see the credits page in the games for the details) and we are thankful to the GEO600 collaboration, for providing images of components of the detector used in the STQ game and for extensive beta-testing of the initial game prototype. This document has been assigned the LIGO Laboratory document number LIGO-P1100145.
\section*{References}
\section{DA$\Phi$NE and KLOE}
\noindent
The DA$\Phi$NE e$^+$e$^-$ collider operates at a total energy
W = 1020 MeV, the mass of the $\phi$(1020)-meson.
Approximately $3\times10^6$ $\phi$-mesons are produced for
each pb$^{-1}$ of integrated luminosity.
Since 2001, KLOE has collected an integrated luminosity of
about 2.5 fb$^{-1}$.
Results presented below are based on about 450~pb$^{-1}$ of 2001--02 data.
The KLOE detector consists of a large cylindrical drift chamber, DC, surrounded
by a lead/scintillating-fiber electromagnetic calorimeter, EMC.
The drift chamber \cite{bib:dc} is 4~m in diameter and 3.3~m long.
The momentum resolution is $\sigma(p_{T})/p_{T} \sim 0.4\%$.
Two-track vertices are reconstructed with a spatial resolution
of $\sim$ 3 mm.
The calorimeter \cite{bib:emc}, composed of a barrel and two endcaps,
covers 98\% of the solid angle.
Energy and time resolution are $\sigma(E)/E = 5.7\%/\sqrt{E[{\rm GeV}]}$ and
$\sigma(t) = 57 {\rm ps}/ \sqrt{E[{\rm GeV}]} \oplus 100 {\rm ps}$.
A superconducting coil around the detector provides a 0.52~T magnetic
field.
The KLOE trigger \cite{bib:trg} uses calorimeter and drift chamber
information.
For the present analysis only the electromagnetic calorimeter (EMC)
signals have been used. Two local energy deposits above threshold,
$E_{\rm th}>50$ MeV for the barrel and $E_{\rm th}>150$ MeV for the
endcaps, are required.
\section{The tag mechanism}
In its center of mass, the $\phi$-meson decays mostly into
anti-collinear $K\bar{K}$ pairs.
In the laboratory this remains approximately true because of the small
crossing angle of the e$^+$ and e$^-$ beams.
Therefore the detection of a $K(\bar K)$ tags the presence of a
$\bar K (K)$ of given momentum and direction.
The decay products of the kaon pair define two spatially well
separated regions, called the tag and the signal hemispheres.
Identified $K^{\mp}$ decays tag a $K^{\pm}$ beam and provide an absolutely
counted sample, using the total number of tags as normalization.
This procedure is a unique feature of a $\phi$-factory and provides the
means for measuring absolute branching ratios.
Charged kaons are tagged using the two body decays
$K^{\pm}\rightarrow \mu^\pm\rlap{\raise1.2ex\hbox{\scriptsize($-$)}}
\kern.3em\nu_{\,\mu}$ and $K^{\pm}\rightarrow \pi^{\pm} \pi^0$.
Since the two body decays correspond to about 85\% of the charged kaon
decays \cite{bib:pdg} and since $BR(\phi \rightarrow K^+K^-)\simeq 49\%$
\cite{bib:pdg}, there are about $1.5 \times 10^6 K^+K^-$ events/pb$^{-1}$.
The two body decays are identified as peaks in the momentum spectrum of the
secondary tracks, computed in the kaon rest frame assuming the pion mass
$m_\pi$ for the secondary particle (Fig. \ref{fig:tag_spectrum}).
In order to minimize the impact of the trigger efficiency, the tagging kaon
must provide the EMC trigger of the event; these are the so-called self-triggering
tags.
$N_{\rm selftrg\ tag} \approx 2 \times 10^5$ per pb$^{-1}$.
\begin{figure}[htb]
\figboxc tag;7;\vglue-.8cm
\caption{\footnotesize{Momentum spectrum in the kaon rest frame of the negatively charged
decay particle, assuming the particle has the pion mass, for data
(dots) and MC (lines). The distributions are normalized to
unity. The two peaks correspond to pions and muons
from $K^- \rightarrow \pi^- \pi^0$ (205 MeV/c) and $K^-
\rightarrow \mu^- \nu_\mu$ (236 MeV/c). The muon peak is
broadened by the use of the incorrect mass.}}
\label{fig:tag_spectrum}
\end{figure}\vglue-5mm
\section{Measurement of the charged kaon lifetime}
\noindent
The measurement is performed using 230 pb$^{-1}$ collected at the $\phi$ peak.
The data sample has been split into two uncorrelated subsamples:
150 pb$^{-1}$ have been used for the measurement, the remaining
80 pb$^{-1}$ have been used to evaluate the efficiencies.
$K_{\mu 2}$ tags of both charges have been used.
There are two methods available for the measurement: the kaon decay length
and the kaon decay time.
The two methods allow cross checks and studies of systematics; their
resolutions are comparable.
The method relying on the measurement of the charged kaon decay length
requires first the reconstruction of the kaon decay vertex in the
fiducial volume using only DC information: the signal is given by a
$K^\pm$, moving outwards in the DC with momentum
$70 < p_K < 130$ MeV/c and having point of closest approach to the
interaction point (IP) with $0 < \sqrt{x^2_{PCA} +y^2_{PCA}} < 10$ cm
and $|z_{PCA}| < 20$ cm. The kaon decay vertex in the DC fiducial volume
($40 < \sqrt{x^2_V +y^2_V} < 150$ cm, $|z_V| < 150$ cm) is required.
Once the decay vertex has been identified, the kaon track is extrapolated
backward to the interaction point in 2 mm steps, taking into account the
ionization energy loss $dE/dx$ to evaluate its velocity $\beta c$.
Then the proper time is obtained from:
\begin{equation}
t^* = \sum_i \Delta t_i =
\sum_i \frac{\sqrt{1-\beta^2_i}}{\beta_i} \Delta l_i
\end{equation}
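As a minimal numerical sketch of this stepwise sum (not the KLOE reconstruction code: here the velocity at each 2~mm step is supplied as input, whereas in the analysis it follows from the $dE/dx$ correction, and the explicit factor of $c$ is restored):

```python
import math

def proper_time_ns(beta_steps, dl_cm=0.2):
    """Proper time t* accumulated along the back-extrapolated kaon path.

    beta_steps: kaon velocity (v/c) at each 2 mm (0.2 cm) step.
    Returns t* in ns, with c = 29.98 cm/ns restoring the units."""
    c = 29.9792458  # cm/ns
    return sum(math.sqrt(1.0 - b**2) / b * dl_cm / c for b in beta_steps)
```

For a kaon of constant $\beta$ this reduces to the textbook time dilation of the lab-frame flight time.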
The efficiency has been evaluated directly from data.
The control sample has been selected using calorimetric information only,
requiring a neutral vertex: two in-time clusters fired by the photons
coming from the $\pi^0$ decay.
The proper time is fitted between 16 and 30 ns correcting for the
efficiency.
Resolution effects have been taken into account.
The preliminary result we have obtained, which is the weighted mean between
the $K^+$ and the $K^-$ lifetimes, is:
\begin{center}
\begin{equation}
\tau^\pm = (12.367 \pm 0.044 \pm 0.065)\ ns
\end{equation}
\end{center}
The evaluation of systematic uncertainties is still preliminary;
final numbers will be presented at the conference.
\begin{figure}[htb]
\vglue-8mm\figboxc fit_length_new;6.5;\vglue-10mm
\caption{\footnotesize{Charged kaon proper time distribution, obtained with the first method, fitted (red line) with a convolution of an exponential and a resolution function}}
\label{fig:tau_spectrum}
\end{figure}\vglue-5mm
The second method relies on the measurement of the kaon decay time. We consider only events with a $\pi^0$ in the final state:
\begin{equation}
K^\pm \rightarrow X + \pi^0 \rightarrow X + \gamma \gamma
\end{equation}
We can obtain the kaon time of flight using the time of the EMC clusters of the photons from the $\pi^0$ decay. We require the backward extrapolation to the interaction point of the tagging kaon track and the forward extrapolation of the helix of the other kaon on the signal side. Stepping along the helix, we look for the $\pi^0\rightarrow \gamma
\gamma$ decay vertex without using the actual kaon track. For each photon it is possible to measure the kaon proper decay time
\begin{equation}
t^* = (t_\gamma - \frac{r_\gamma}{c} - t_\phi) \cdot \sqrt{1-\beta^2_K}
\end{equation}
The efficiency has been evaluated directly from data.
The control sample has been selected using drift chamber information only,
selecting the kaon decay vertex in the fiducial volume.
The proper time is fitted between 13 and 42 ns correcting for the
efficiency. Resolution effects have been taken into account.
The weighted mean between the $K^+$ and the $K^-$ lifetimes gives as
preliminary result:
\begin{equation}
\tau^\pm = (12.391 \pm 0.049 \pm 0.025)\ ns
\end{equation}
\begin{figure}[htb]
\vglue-8mm\figboxc fit_time_new;6.5;\vglue-10mm
\caption{\footnotesize{Charged kaon proper time distribution, obtained with the second method, fitted (red line) with a convolution of an exponential and a resolution function}}
\label{fig:time_spectrum}
\end{figure}
The evaluation of systematic uncertainties is still preliminary; final numbers will be presented at the conference. In order to evaluate the statistical correlation between the two methods, we divide the data sample into five subsamples. For each subsample, and for each method, we evaluate the proper time distribution and its efficiency. The value of the correlation is
\begin{equation}
\rho = 0.338
\end{equation}
The weighted mean between the two charges and between the two methods is
\begin{center}
\begin{equation}
\tau^\pm = (12.384 \pm 0.048)\ ns
\end{equation}
\end{center}
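The combination of two correlated measurements can be sketched with the standard best-linear-unbiased-estimator (BLUE) formulas; this is an illustrative transcription (with statistical and systematic errors added in quadrature), not the analysis code, so it need not reproduce the quoted result exactly:

```python
import math

def combine_correlated(x1, s1, x2, s2, rho):
    """BLUE combination of two measurements with correlation coefficient rho.

    Returns (weighted mean, combined uncertainty)."""
    cov = rho * s1 * s2
    denom = s1**2 + s2**2 - 2.0 * cov
    w1 = (s2**2 - cov) / denom
    w2 = (s1**2 - cov) / denom
    mean = w1 * x1 + w2 * x2
    var = s1**2 * s2**2 * (1.0 - rho**2) / denom
    return mean, math.sqrt(var)
```

For $\rho=0$ this reduces to the familiar inverse-variance weighted mean.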
\section*{Methods}
\subsubsection{Primer on cQED}
Here we briefly review the basic theory for the transmon/cavity system, introducing all the notation and assumptions used in this work. More detail can be found in ref.\cite{Walter2017RapidQubits,Krantz2019AQubits}.
The transmon is an anharmonic oscillator of frequency $\omega_{q}$ and anharmonicity $\alpha$. To a good approximation the transmon can be treated as a two-level system, forming the qubit. It is coupled to a resonant cavity of frequency $\omega_c$ via the Jaynes-Cummings Hamiltonian,
\begin{equation}
\mathcal{H}_{qed}=
\frac{1}{2}\hbar\omega_{q}\hat{\sigma}_z
+\hbar\omega_{c}\hat{a}^{\dagger}\hat{a}
+ \hbar g\left(\hat{a}\hat{\sigma}_{+} +\hat{a}^{\dagger}\hat{\sigma}_{-}\right)
\label{eq:Hjc}
\end{equation}
where $g$ is the strength of the exchange interaction. We operate in the dispersive regime, where the detuning between the qubit and cavity frequencies is large compared to the coupling strength, $\lvert\Delta\rvert=\lvert\omega_c-\omega_{q}\rvert\gg g$, preventing any direct energy exchange between the two systems. In this regime, the Hamiltonian becomes
\begin{equation}
\mathcal{H}_{qed}\approx
\frac{1}{2}\hbar\omega_{q}\hat{\sigma}_z +\hbar\omega_{c}\hat{a}^{\dagger}\hat{a}-\hbar\chi\hat{\sigma}_z \hat{a}^{\dagger}\hat{a}
\label{eq:Hdis}
\end{equation}
where $\chi= \frac{g^2}{\Delta}\frac{\alpha}{\Delta+\alpha}$ is the so-called dispersive shift. The cavity resonance frequency depends on the qubit state: $\omega_c+\chi$ or $\omega_c-\chi$ for the qubit respectively in the ground state $\lvert g\rangle$ or excited state $\lvert e\rangle$. Conversely the qubit frequency is shifted by $2\chi$ per photon in the cavity and we redefined the qubit frequency as $\omega_{q}\equiv\omega_{q}+\chi$ to absorb the Lamb shift. Importantly, the dispersive approximation is only valid for small photon numbers in the cavity, and spurious qubit transitions occur when approaching the critical photon number $n_{crit}=\Delta^2/4g^2$.
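For orientation, the dispersive parameters can be evaluated numerically; the helper below is a direct transcription of the two expressions above (the example values in the usage are illustrative assumptions, not the parameters of our device):

```python
def dispersive_shift(g, delta, alpha):
    """chi = (g^2 / Delta) * alpha / (Delta + alpha); any consistent frequency units."""
    return g**2 / delta * alpha / (delta + alpha)

def critical_photon_number(g, delta):
    """n_crit = Delta^2 / (4 g^2), above which the dispersive approximation fails."""
    return delta**2 / (4.0 * g**2)
```

For instance, with illustrative values $g/2\pi=100$~MHz, $\Delta/2\pi=1.5$~GHz and $\alpha/2\pi=-250$~MHz, one finds $\chi/2\pi\approx-1$~MHz and $n_{crit}\approx56$.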
In our system the cavity and the qubit are coupled to the environment using a single antenna. The cavity linewidth $\kappa$ is dominated by the coupling to the antenna. Due to the filtering of the cavity, the qubit is only weakly coupled to the antenna, at a rate $\Gamma_{ext}$, much smaller than the intrinsic relaxation rate $\Gamma_{int}$ of the qubit.
\paragraph{Qubit readout} The qubit is read out by driving the cavity with an input microwave field of amplitude $\alpha_{in}$ and frequency $\omega_d$. In the steady state, the resulting coherent state, $\alpha_{g,e}$, depends on the qubit state $\lvert g\rangle$ or $\lvert e\rangle$, following the equation of motion:
\begin{equation}
(\omega_d-\omega_c\mp\chi+i\kappa/2)\alpha_{g,e}=i\sqrt{\kappa}\alpha_{in}
\end{equation}
The output field, $\alpha_{out}=\sqrt{\kappa}\alpha_{g,e}-\alpha_{in}$, acquires a qubit-state-dependent phase shift $\pm2\arctan(2\chi/\kappa)$ that enables qubit state discrimination. One can show that the measurement rate is $\Gamma_{m}=\kappa\lvert\alpha_{e}-\alpha_{g}\rvert^2$ and is maximized by maximizing the distance in phase space between the two coherent states \cite{Gambetta2008QuantumEffect}. The qubit measurement fidelity is defined as $F=1-P(e\lvert g)-P(g\lvert e)$ where $P(x\lvert y)$ is the probability of measuring the qubit state $x$ when prepared in the state $y$. In the absence of preparation errors and qubit transitions during the measurement, and in the steady state, the measurement fidelity after an integration time $\tau$ can be written as $F=\text{erf}(\sqrt{\eta\tau\Gamma_{m}/2})$, where $\eta$ is the microwave measurement efficiency \cite{Walter2017RapidQubits}.
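A minimal numerical sketch of these readout relations, under the stated steady-state assumptions (arbitrary angular-frequency units; not the analysis code of this work):

```python
import math

def cavity_field(delta_d, chi, kappa, alpha_in, excited):
    """Steady-state intra-cavity field alpha_{g,e}; delta_d = omega_d - omega_c.

    The cavity pull is -chi for the ground state, +chi for the excited state."""
    pull = chi if excited else -chi
    return 1j * math.sqrt(kappa) * alpha_in / (delta_d + pull + 1j * kappa / 2.0)

def measurement_fidelity(delta_d, chi, kappa, alpha_in, eta, tau):
    """F = erf(sqrt(eta tau Gamma_m / 2)) with Gamma_m = kappa |alpha_e - alpha_g|^2."""
    a_g = cavity_field(delta_d, chi, kappa, alpha_in, False)
    a_e = cavity_field(delta_d, chi, kappa, alpha_in, True)
    gamma_m = kappa * abs(a_e - a_g) ** 2
    return math.erf(math.sqrt(eta * tau * gamma_m / 2.0))
```

Driving on resonance ($\omega_d=\omega_c$) yields pointer states of equal magnitude that differ only in phase, as described above.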
\paragraph{Photon number fluctuations and qubit dephasing}
Fluctuations of the number of photons in the cavity induce fluctuations of the qubit frequency and therefore dephasing. The Stark-shift and dephasing rate for an average thermal occupancy of the cavity $\Bar{n}$ are respectively
\begin{align}
\begin{split}
\Delta_\text{Stark}^\text{th} = \beta2\chi\Bar{n},\\
\Gamma_\phi^\text{th} = \beta\frac{4\chi^2}{\kappa}\Bar{n},\\
\end{split}
\label{eq:MeasIndDeph}
\end{align}
where $\beta=\kappa^2/(\kappa^2+4\chi^2)$. Note that these expressions are only valid for $\Bar{n}\ll1$ and more general forms can be found in Ref.\cite{Clerk2007}.
Experimentally, we extract the Stark-shift from the frequency of Ramsey oscillations. The qubit dephasing is extracted from the exponential decay of the Ramsey oscillations, $\Gamma_2=\Gamma_1/2+\Gamma_\phi$, and we assume it is dominated by photon number fluctuations, $\Gamma_\phi=\Gamma_\phi^\text{th}$.
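These thermal-photon expressions translate directly into code (a transcription of Eq.~(\ref{eq:MeasIndDeph}), valid only for $\Bar{n}\ll1$):

```python
def thermal_dephasing(chi, kappa, nbar):
    """Thermal-photon Stark shift and dephasing rate, angular-frequency units."""
    beta = kappa**2 / (kappa**2 + 4.0 * chi**2)
    stark_shift = beta * 2.0 * chi * nbar
    gamma_phi = beta * 4.0 * chi**2 / kappa * nbar
    return stark_shift, gamma_phi
```

At $2\chi=\kappa$ the two expressions coincide, $\Delta_\text{Stark}^\text{th}=\Gamma_\phi^\text{th}=\kappa\Bar{n}/2$.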
\paragraph{Qubit control}
Under resonant drive, the qubit undergoes Rabi oscillations between the ground and excited states at the Rabi rate $\Omega_R=2\sqrt{\Dot{n}\Gamma_{ext}}$, where $\Dot{n}$ is the number of photons per second at the antenna. When the Rabi rate approaches the transmon anharmonicity, $\Omega_R\sim\alpha$, the transmon dynamics involve higher excited states, leaving the computational subspace. A hallmark of this regime is the deviation from the linear relation between Rabi rate and drive amplitude\cite{Claudon2004}, as observed in Fig.\ref{fig2}.e. In practice, typical superconducting quantum processors operate in the linear regime, $\Omega_R<\alpha/2$.
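As an illustrative sketch, the relation $\Omega_R=2\sqrt{\Dot{n}\Gamma_{ext}}$ can be inverted to estimate the drive power needed at the antenna for a target Rabi rate (the example values below are assumptions chosen for illustration, with $\Gamma_{ext}$ taken from the calibration quoted later in the Methods):

```python
import math

HBAR = 1.054571817e-34  # J s

def drive_power(rabi_rate, gamma_ext, omega_q):
    """Power (W) at the antenna for a target Rabi rate.

    rabi_rate and omega_q are angular frequencies in rad/s, gamma_ext in 1/s."""
    n_dot = (rabi_rate / 2.0) ** 2 / gamma_ext  # photon flux, photons/s
    return HBAR * omega_q * n_dot
```

With $\Omega_R/2\pi=1$~MHz, $1/\Gamma_{ext}=198~\si{\micro\second}$ and $\omega_q/2\pi=6$~GHz this gives a few femtowatts, of order $-110$~dBm.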
\subsubsection{Primer on photodetection}
Here we briefly explain the basic principle of a photodiode, introducing all the notation and assumptions used in this work. More detail can be found in ref.\cite{Saleh1991FundamentalsPhotonics}.
\paragraph{Photocurrent} The photodiode can be seen as a high impedance current source, with an output current $I$ proportional to the incident optical power $P_{\text{o}}$ such that $I=\mathcal{R}P_{\text{o}}$, where $\mathcal{R}=\eta e / \hbar\omega_{o}$ is the responsivity, $e$ is the electron charge, $\omega_{o}$ is the frequency of the optical photons and $\eta$ is the quantum efficiency (defined as the probability of generating an electron-hole pair per incident photon). A perfectly efficient photodiode ($\eta=1$) operating at a wavelength of $1490~\text{nm}$ ($\omega_\text{o}/2\pi\approx 201~\text{THz}$) has a maximum responsivity $\mathcal{R}\approx1.2~\si{\ampere \watt^{-1}}$. In practice, the quantum efficiency depends on extrinsic effects such as alignment and Fresnel reflections, and on the intrinsic efficiency of the detector. For the photodiode used in this work, we measure a responsivity of $0.7~\si{\ampere \watt^{-1}}$ at room temperature. At $20~\si{\milli \kelvin}$ the responsivity drops to $0.5~\si{\ampere \watt^{-1}}$, probably caused by a change of the optical alignment due to thermal contractions.
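The quoted maximum responsivity can be checked directly from the definition $\mathcal{R}=\eta e/\hbar\omega_{o}$:

```python
def responsivity(wavelength, eta=1.0):
    """Photodiode responsivity R = eta * e * lambda / (h c), in A/W."""
    e = 1.602176634e-19   # C
    h = 6.62607015e-34    # J s
    c = 299792458.0       # m/s
    return eta * e * wavelength / (h * c)
```

At $1490~\text{nm}$ this gives $\mathcal{R}\approx1.2~\si{\ampere \watt^{-1}}$ for $\eta=1$; the measured $0.5~\si{\ampere \watt^{-1}}$ at base temperature then corresponds to $\eta\approx0.4$.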
\paragraph{Microwave generation} Microwaves are generated by modulating the optical power such that $P_{o}(t) = \Bar{P_{o}}(1+m\cos(\omega t+\phi))$ where $\Bar{P_{o}}$ is the average optical power, $m$ is the modulation depth ($m\leq 1$), $\omega$ is the modulation frequency, and $\phi$ is the modulation phase. This induces an oscillating photocurrent $I(t)=h(t)*\mathcal{R}P_{o}(t)$ where $h(t)$ is the impulse response of the photodiode. The corresponding microwave power $P_\mu$ in a load impedance $Z$ is $P_{\mu} =\frac{1}{2} m^2 \bar{I}^2\times \abs{H(\omega)}^2\times Z$ where $\bar{I}=\mathcal{R}\Bar{P_{o}}$ is the average photocurrent and $H(\omega)$ is the transfer function of the photodiode. For the photodiode used here, the response function is limited by the RC time constant, with a $3~\text{dB}$ cutoff frequency set by the capacitance of the diode and the impedance of the load.
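The microwave power delivered to the load follows from the expression above; a one-line transcription with $m$, $|H|$ and $Z$ as explicit parameters:

```python
def microwave_power(i_avg, m=1.0, h_mag=1.0, z=50.0):
    """Microwave power P_mu = 0.5 * (m * Ibar * |H|)^2 * Z, in W."""
    return 0.5 * (m * i_avg * h_mag) ** 2 * z
```

A fully modulated $20~\si{\micro \ampere}$ photocurrent into $50~\si{\ohm}$ yields $10$~nW, i.e. $-50$~dBm.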
\paragraph{Photocurrent shot-noise} The probabilistic nature of creating electron-hole pairs results in photocurrent shot noise with power spectral density $S_I(\omega)=2e\Bar{I}\abs{H(\omega)}^2$ \cite{Boyd1983RadiometryRadiation}.
\paragraph{Other definitions and notation} In the main text we assume $m=1$ and $\abs{H(\omega)}=1$. We define the microwave photon flux $\Dot{n}=P_{\mu}/\hbar\omega$ as the number of photons per second and the microwave photon noise spectral density $\Bar{n} = S_IZ/\hbar\omega$ as the number of photons per second per hertz. Because it simplifies notation, it is convenient to define $\theta=\hbar\omega/Z$, in units of $\si{\joule \ohm^{-1}}$.
\subsubsection{Excess photocurrent noise}
While the photocurrent noise measurements in Fig.\ref{fig3} are consistent with shot noise-limited photodetection for photocurrents up to $20~\si{\micro \ampere}$, we estimate here the possible contributions of two other known sources of excess photocurrent noise: voltage noise at the microwave input of the electro-optic intensity modulator and excess laser intensity noise.
\paragraph{Voltage noise at the EOM input} We consider a lossless EOM with an infinite extinction ratio. The output optical power is $P_\text{o}(t)= \Bar{P_\text{o}}\left(1+\sin (\pi V(t)/V_\pi) \right)$ where $\Bar{P_\text{o}}$ is the average optical power, $V_\pi$ is the voltage required to go from maximum transmission to minimum transmission and $V(t)=V_\mu(t) +V_\text{dc}$ is the input voltage. For a modulator biased at quadrature ($V_\text{dc}=0$) and in the limit of small input voltage ($V_\mu(t)\ll V_\pi$) the output power becomes $P_\text{o}(t)= \Bar{P_\text{o}} \left(1+\pi V_\mu (t)/V_\pi \right)$. The noise variance of the optical power is then $\moy{\delta P_\text{o}^2} =\Bar{P_\text{o}}^2\pi^2 \moy{\delta V_\mu^2}/V_\pi^2$. The photocurrent noise variance is then $\moy{\delta I^2} = \mathcal{R}^2\moy{\delta P_\text{o}^2} = \Bar{I}^2\pi^2 \moy{\delta V_\mu^2}/V_\pi^2 $ where $\Bar{I}=\mathcal{R}\Bar{P_\text{o}}$ is the average photocurrent. In terms of the current noise power spectral density, this becomes $S_I^{\delta V} (\omega)=S_V (\omega)\Bar{I}^2\pi^2/V_\pi^2$ where $S_V (\omega)=4k_BT_NZ_{EOM}$ is the input voltage noise power spectral density set by the noise temperature $T_N$ of the input impedance $Z_{EOM}$ of the EOM.
\paragraph{Excess laser noise} Laser intensity noise is usually given as a fractional variation, termed Relative Intensity Noise (RIN), defined as $\text{RIN}(\omega)=S_\text{P}(\omega)/P_\text{o}^2$ where $S_\text{P}(\omega)$ is the power spectral density of the optical power fluctuations, in units of $\si{\watt^2 \hertz^{-1}}$ \cite{Yariv1997OpticalCommunications}. The linear relationship between optical power and photocurrent leads to a photocurrent noise due to RIN given by $S_I^{\text{RIN}} (\omega)=\Bar{I}^2\text{RIN}(\omega)$.
\paragraph{Total photocurrent noise} The total current noise emitted by the photodiode is then $S_I (\omega)=2eI + S_I^{\delta V}(\omega) + S_I^{\text{RIN}}(\omega)$. At the highest photocurrent used in this work, $\Bar{I}=20~\si{\micro \ampere}$, the photocurrent shot noise is $2e\Bar{I}\approx6\times10^{-24} ~\si{\ampere^2 \hertz^{-1}}$. For the voltage noise on the EOM, we measure $V_\pi=3.5~\text{V}$ and $T_N=2.5\times10^5~\text{K}$, set by a power amplifier at the input of the EOM. This yields $S_I^{\delta V} (\omega)\approx 3\times10^{-25}~\si{\ampere^2 \hertz^{-1}}$, more than an order of magnitude smaller than the photocurrent shot-noise. The RIN of readily available commercial semiconductor distributed feedback (DFB) lasers is below $10^{-14}~\si{\hertz^{-1}}$, and can approach $10^{-16}~\si{\hertz^{-1}}$, leading to a current noise $S_I^{\text{RIN}} <5\times10^{-24}~\si{\ampere^2 \hertz^{-1}}$. As we do not resolve experimentally any deviation from photocurrent shot-noise, we conclude that the laser RIN is below $10^{-15}~\si{ \hertz^{-1}}$.
Finally, we emphasize that our measurement is sensitive only to noise above microwave vacuum fluctuations and that any residual thermal noise is already included in the qubit Stark-shift or qubit population at zero photocurrent.
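The three contributions estimated above can be tallied in a short budget function (a direct transcription of the expressions in this section):

```python
import math

def noise_budget(i_avg, v_pi, t_noise, z_eom, rin):
    """Photocurrent PSD contributions, in A^2/Hz.

    Returns (shot noise, EOM input-voltage noise term, laser RIN term)."""
    e = 1.602176634e-19    # C
    k_b = 1.380649e-23     # J/K
    shot = 2.0 * e * i_avg
    s_v = 4.0 * k_b * t_noise * z_eom              # V^2/Hz at the EOM input
    eom = s_v * i_avg**2 * math.pi**2 / v_pi**2
    rin_term = i_avg**2 * rin
    return shot, eom, rin_term
```

With the values quoted above ($\Bar{I}=20~\si{\micro\ampere}$, $V_\pi=3.5$~V, $T_N=2.5\times10^5$~K, $Z_{EOM}=50~\si{\ohm}$), shot noise dominates the budget.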
\subsubsection{Attenuation between the cavity antenna and photodiode}
Here we discuss the procedure to move the reference plane from the cavity antenna to the photodiode. As the hardware and frequencies differ slightly between the qubit control and readout experiments, they require separate \textit{in-situ} calibrations.
\paragraph{Qubit control}
We start by calibrating the microwave power at the cavity antenna using the coaxial line. From the measurement of the power at room temperature and the calibration of the attenuation from room temperature to the cavity antenna we can calibrate the x-axis in Fig.\ref{fig2}.e. We then compare to the Rabi rate to extract the coupling rate between the qubit and cavity antenna, $1/\Gamma_{ext}=198~\si{\micro \second}$. We define the loss between the photodiode and the cavity antenna, $A$, so that the power at the cavity antenna is $AP_{\mu}=A\frac{1}{2}Z{\Bar{I}}^2$, where $A$ includes the effect of explicit loss and the response function of the photodiode. Comparing the Rabi rate to the average photocurrent, we find $A=0.034$.
We then extract the current noise spectral density of the photocurrent, $S_I$, using the qubit ground state population $P_g$ measured in Fig.\ref{fig3}.b. From detailed balance we find $\left(\Gamma_{int}+\Gamma_{ext}\right)n=\Gamma_{int}n_{int}+\Gamma_{ext}n_{ext}$ where $n=(1-P_g)/P_g$, $n_{int}$ is the average photon number in the internal bath extracted from the equilibrium population at zero photocurrent, and $n_{ext}=AZS_I/\hbar\omega_q$. Finally we get:
\begin{equation}
S_I = \frac{\hbar\omega_q}{AZ\Gamma_{ext}}\left[\left(\Gamma_{int}+\Gamma_{ext}\right)n-\Gamma_{int}n_{int}\right]
\end{equation}
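The extraction above is a direct evaluation of the detailed-balance relation; a minimal sketch, where $A$, $\Gamma_{ext}$ and $\omega_q$ are taken from the text but the remaining inputs are illustrative values, not measured ones:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J*s)

def current_noise(P_g, n_int, Gamma_int, Gamma_ext, A, Z, omega_q):
    """S_I from the ground-state population P_g via detailed balance:
    (Gamma_int + Gamma_ext) n = Gamma_int n_int + Gamma_ext n_ext,
    with n = (1 - P_g)/P_g and n_ext = A*Z*S_I / (hbar*omega_q)."""
    n = (1 - P_g) / P_g
    return hbar * omega_q / (A * Z * Gamma_ext) * (
        (Gamma_int + Gamma_ext) * n - Gamma_int * n_int)

# A, Gamma_ext, omega_q from the text; P_g, n_int, Gamma_int are invented here.
omega_q = 2 * math.pi * 5.088e9
S_I = current_noise(P_g=0.95, n_int=0.01, Gamma_int=1 / 50e-6,
                    Gamma_ext=1 / 198e-6, A=0.034, Z=50.0, omega_q=omega_q)
```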
\paragraph{Qubit readout}
We fix the photocurrent and use the Stark shift to calibrate the intra-cavity photon number \cite{Schuster2005AcField,Gambetta2008QuantumEffect} and therefore extract the power at the cavity antenna $AP_{\mu}=A\frac{1}{2}Z{\Bar{I}}^2$. We find $A=0.065$. As the measurement cavity is overcoupled, we can simply extract the current noise spectral density of the photocurrent from the cavity occupancy, $S_I=n\hbar\omega_c/AZ$.
\subsubsection{Effect of photocurrent shot noise on measurement fidelity and gate errors}
In this section we discuss the effect of the microwave noise induced by the photocurrent shot noise of the photodiode. In the context of qubit readout, extraneous noise at the cavity frequency (1) dephases the qubits coupled to it and (2) reduces the microwave measurement efficiency, which in turn impacts the qubit measurement fidelity. In the context of qubit control, extraneous noise at the qubit frequency induces transitions to the excited states, which reduces gate fidelity. To simplify the discussion, we consider a photodiode with unity quantum efficiency operating well within its bandwidth, and neglect loss between the photodiode and the cavity or the qubit control line.
\paragraph{Qubit readout}
Optimal measurement speed and separation in phase space between $\alpha_{g}$ and $\alpha_{e}$ are obtained for $2\chi=\kappa$ and $\omega_d=\omega_c$\cite{Walter2017RapidQubits,Krantz2019AQubits}, leading to $\lvert\alpha_{g}\rvert^2=\lvert\alpha_{e}\rvert^2=\lvert\alpha\rvert^2=2\Dot{n}/\kappa$. The corresponding average photocurrent is $\Bar{I}=\sqrt{\kappa\theta}\lvert\alpha\rvert$. In turn, the microwave noise is $\Bar{n}=2e\sqrt{\kappa/\theta}\lvert\alpha\rvert$, which induces qubit dephasing according to Eq.\ref{eq:MeasIndDeph} and limits the efficiency of the measurement chain to $\eta=1/(1+2\Bar{n})$. For a typical experiment operating at $\lvert\alpha\rvert^2\approx n_{crit}/5 \approx 10$, with $\kappa/2\pi=10~\si{\mega \hertz}$, $Z=50~\si{\ohm}$ and $\omega/2\pi=6~\si{\giga \hertz}$, one obtains $\Bar{I}\approx7~\si{\nano \ampere}$ and $\Bar{n}\approx0.03$. This leads to a microwave measurement efficiency limited to $\eta\approx94\%$, well above state-of-the-art values. Additionally, qubit measurement infidelity is typically dominated by qubit relaxation events during the measurement, with only a small contribution due to the limited measurement efficiency \cite{Walter2017RapidQubits}. We therefore expect the assignment errors due to the photocurrent shot noise to be negligible.
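These figures can be reproduced numerically; a sketch under the assumption $\theta=\hbar\omega/Z$, which is inferred here from the relations $\Bar{I}=\sqrt{\kappa\theta}\lvert\alpha\rvert$ and $\hbar\omega\Dot{n}=\frac{1}{2}Z\Bar{I}^2$ used in this section ($\theta$ is defined elsewhere in the paper):

```python
import math

hbar = 1.054571817e-34
e = 1.602176634e-19

# Parameters quoted in the text
kappa = 2 * math.pi * 10e6   # cavity linewidth (rad/s)
omega = 2 * math.pi * 6e9    # drive frequency (rad/s)
Z = 50.0                     # line impedance (ohm)
alpha = math.sqrt(10)        # |alpha|, with |alpha|^2 ~ n_crit/5 ~ 10

theta = hbar * omega / Z     # assumed definition of theta (see lead-in)
I_bar = math.sqrt(kappa * theta) * alpha           # average photocurrent
n_bar = 2 * e * math.sqrt(kappa / theta) * alpha   # added microwave noise
eta = 1 / (1 + 2 * n_bar)                          # efficiency limit

print(f"I_bar ~ {I_bar*1e9:.1f} nA, n_bar ~ {n_bar:.3f}, eta ~ {eta:.0%}")
```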
\paragraph{Qubit control}
We assume a qubit gate error rate dominated by the relaxation rate, $\Gamma_\downarrow$, and the excitation rate, $\Gamma_\uparrow$, which are linked by detailed balance $\Gamma_\uparrow=\Bar{n}\Gamma_\downarrow$. The error probability $\epsilon$ for a gate of length $\tau$ is:
\begin{equation}
\epsilon=1-e^{-(\Gamma_\uparrow+\Gamma_\downarrow)\tau}=1-e^{-(1+\Bar{n})\Gamma_\downarrow\tau}
\end{equation}
For a $\pi$-pulse at a Rabi rate $\Omega_R\gg(1+\Bar{n})\Gamma_\downarrow$, the error probability becomes:
\begin{equation}
\epsilon=\frac{\pi\Gamma_\downarrow}{\Omega_R}(1+\Bar{n})
\end{equation}
We decompose the qubit relaxation rate into an external contribution from the coupling to the control line, $\Gamma_{ext}$, and an internal contribution from all other degrees of freedom, $\Gamma_{int}$. The Rabi rate is defined as $\Omega_R=2\sqrt{\Dot{n}\Gamma_{ext}}$ where $\Dot{n}$ is the photon flux in photon/s at the control line. The effective qubit population, $\Bar{n}$, is linked to the population of the internal and external bath, $\Bar{n}_{int}$ and $\Bar{n}_{ext}$, by detailed balance so that $\Gamma_\downarrow\Bar{n}=\Gamma_{int}\Bar{n}_{int}+\Gamma_{ext}\Bar{n}_{ext}$. In the following we will assume the internal bath is cold, $\Bar{n}_{int}=0$.
For a photodiode operating well within its bandwidth driving a control line of impedance $Z$, the photon flux is set by the microwave power generated by the photodiode, $\hbar\omega_{q}\Dot{n}=\frac{1}{2}Z\Bar{I}^2$, and the external bath occupancy is set by the photocurrent shot noise, $\hbar\omega_{q}\Bar{n}_{ext}=2e\Bar{I}Z$, leading to:
\begin{equation}
\epsilon=\frac{\pi}{\sqrt{2}}\left( \frac{\Gamma_\downarrow}{\Bar{I}}\sqrt{\frac{\theta}{\Gamma_{ext}}}+2e\sqrt{\frac{\Gamma_{ext}}{\theta}}\right)
\end{equation}
At low photocurrent, $\Bar{I}\ll\frac{\hbar\omega_{q}\Gamma_\downarrow}{2eZ\Gamma_{ext}}$, the microwave noise generated by the photodiode is negligible, $\Bar{n}\ll1$. In this regime the error probability decreases as the ratio between Rabi rate and relaxation rate increases. In contrast, at high photocurrent, $\Bar{I}\gg\frac{\hbar\omega_{q}\Gamma_\downarrow}{2eZ\Gamma_{ext}}$, the error probability plateaus as the errors induced by the photocurrent shot noise balance the increase in Rabi rate. For a realistic case where $\omega_{q}/2\pi = 6~\si{\giga \hertz}$, $Z=50~\si{\ohm}$ and $1/\Gamma_{ext} = 1~\si{\milli \second}$, the error probability saturates at $\epsilon>4\times10^{-5}$, far below what has been achieved in state-of-the-art systems. Note that as qubit coherence improves, the coupling rate to the control line will decrease, which reduces the minimum error probability.
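The saturation value follows from the second term of the error-probability expression above; a sketch, again assuming $\theta=\hbar\omega_q/Z$ (an inference from the relations used in this section, since $\theta$ is defined elsewhere in the paper):

```python
import math

hbar = 1.054571817e-34
e = 1.602176634e-19

omega_q = 2 * math.pi * 6e9
Z = 50.0
Gamma_ext = 1e3              # 1/Gamma_ext = 1 ms
theta = hbar * omega_q / Z   # assumed definition of theta (see lead-in)

# High-photocurrent plateau: only the shot-noise term of epsilon survives.
eps_floor = (math.pi / math.sqrt(2)) * 2 * e * math.sqrt(Gamma_ext / theta)
print(f"error floor ~ {eps_floor:.0e}")  # consistent with eps > 4e-5
```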
Finally we note that the spectrum of microwave noise induced by the photocurrent shot noise on the photodiode is white up to the bandwidth of the photodiode. When considering an architecture where multiple qubits are addressed using a single photodiode, one would have to take into account that all these qubits are driven by the microwave noise of the photodiode.
\subsubsection{Heat load estimation}
Here we detail the calculations and assumptions used to estimate and compare the heat load of the regular coaxial approach and the photonic link approach. For simplicity, we focus on the heat load associated with the qubit microwave control lines, and neglect here all other heat loads, such as those associated with qubit readout and dc-flux biasing. For both approaches, the heat can be divided into a passive and active heat load. The passive heat load is set by the heat flow through the coaxial cables or optical fibers. The active heat load comes from the Joule heating in the attenuators in the coaxial approach and from the dissipated optical power in the photonic link approach.
\paragraph{Passive heat load}
Previous work has investigated the heat load for the coaxial line approach\cite{Krinner2019EngineeringSystems}. We focus here on the heat load on the mixing chamber of a DR. The heat load from a $0.085"$ diameter stainless steel coaxial cable has been measured to be $P_{coax}=14~\si{\nano \watt}$, slightly larger than the estimated value of $4~\si{\nano \watt}$. Following the same reasoning, we estimate the heat load of an optical fiber to be $P_{link}=3~\si{\pico \watt}$. We assumed a silica core and cladding of $125~\si{\micro \meter}$ diameter with a coating increasing the diameter to $250~\si{\micro \meter}$. In the absence of data on the thermal conductivity of the coating at low temperature, we assume it is the same as that of silica, which was measured down to $100~\si{\milli \kelvin}$ \cite{Smith1978EffectSilica}.
\paragraph{Active heat load}
The microwave power required at the qubit control line, $P(t)=\hbar\omega_{q}\Dot{n}(t)$, depends on the Rabi rate and coupling rate so that $P(t)=\hbar\omega_{q}\Omega_R(t)^2/4\Gamma_{ext}$, where $\Omega_R(t)=\Omega_RS(t)$ and $S(t)$ is the time domain pulse shape. We define the average power of a pulse of duration $\tau$ as $\Bar{P}=\frac{1}{\tau}\int_0^\tau P(t)\,\mathrm{d}t=\hbar\omega_{q}\Omega_R^2\Bar{S}^2/4\Gamma_{ext}$ with $\Bar{S}=\frac{1}{\tau}\int_0^\tau S(t)\,\mathrm{d}t$.
In the coaxial approach, attenuation at the mixing chamber is necessary to reduce the black body radiation from higher temperature stages. This leads to an active heat load per control pulse $P_{coax}^{act}=\Bar{P}\times(1/A-1)$ where $A<1$ is the attenuation.
In the photonic link approach, the optical power is fully dissipated as heat, leading to an active heat load per control pulse $P_{link}^{act}=\sqrt{2\Bar{P}/(Z\mathcal{R}^2)}$, neglecting loss between the photodiode and the control line.
\paragraph{Total heat load}
The total heat load strongly depends on the duty cycle per qubit $D$, $P_{coax,link}=P_{coax,link}^{pass}+D\times P_{coax,link}^{act}$. The total number of qubits that can be addressed in both approaches is $N_{coax,link}=P_{cool}/P_{coax,link}$ where $P_{cool}$ is the cooling power at the mixing chamber.
In figure \ref{fig4} we use the following parameters: $\Gamma_{ext} = 1~\text{ms}^{-1}$, $\Omega_R/2\pi = 40~\si{\mega \hertz}$, $\mathcal{R}=1$, $\omega_{q}/2\pi = 6~\si{\giga \hertz}$, $P_{cool} = 20~\si{\micro \watt}$, $P_{coax} = 14~\si{\nano \watt}$, $P_{link} = 3~\si{\pico \watt}$, $A = 0.01$, and $S(t)$ is a $\cos^2$ pulse shape leading to $\Bar{S}=0.5$.
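The comparison can be sketched numerically with the parameters above ($Z=50~\si{\ohm}$ as elsewhere in the text; the duty cycle value below is an illustrative choice, not taken from the paper):

```python
import math

hbar = 1.054571817e-34

# Parameters listed above for Fig. 4
Gamma_ext = 1e3              # coupling rate to the control line (1/s)
Omega_R = 2 * math.pi * 40e6
R = 1.0                      # photodiode responsivity (A/W)
omega_q = 2 * math.pi * 6e9
P_cool = 20e-6               # mixing-chamber cooling power (W)
P_coax_pass, P_link_pass = 14e-9, 3e-12
A = 0.01                     # attenuation at the mixing chamber
S_bar = 0.5                  # cos^2 pulse shape
Z = 50.0                     # control-line impedance, as elsewhere in the text

P_avg = hbar * omega_q * Omega_R**2 * S_bar**2 / (4 * Gamma_ext)
P_coax_act = P_avg * (1 / A - 1)                # Joule heating in attenuators
P_link_act = math.sqrt(2 * P_avg / (Z * R**2))  # dissipated optical power

def n_qubits(P_pass, P_act, D):
    """Qubits addressable at duty cycle D: P_cool / (P_pass + D * P_act)."""
    return P_cool / (P_pass + D * P_act)

D = 1e-4  # illustrative duty cycle per qubit
```

At such a low duty cycle, the passive load dominates and the photonic link supports far more control lines than the coaxial approach.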
\subsubsection{Experimental setup}
The photodetector used is a commercially available, high-speed photodiode with an InGaAs absorber, packaged with a fiber pigtail. Due to the bandgap shift of InGaAs as the photodiode is cooled \cite{Zielinski1986ExcitonicInGaAs/InP}, it is illuminated at a wavelength of $1490~\text{nm}$. An external bias tee was used to monitor the DC photocurrent and apply a voltage bias. For the qubit readout experiment (Fig.\ref{fig2}b-c and Fig.\ref{fig3}a), no bias voltage was applied. For the qubit control experiment (Fig.\ref{fig2}e-f and Fig.\ref{fig3}b), a voltage of $-2~\text{V}$ was applied. In Extended Data Fig.\ref{fig_IV} we show the photodiode IV curve at $20~\si{\milli \kelvin}$ in the absence of optical power.
The transmon qubit was fabricated using standard optical and electron beam lithography techniques on a sapphire substrate. A sputtered and patterned Nb film forms the capacitor electrodes, and a shadow evaporated Al-AlO$_\text{x}$-Al Josephson junction is produced using the Dolan resist bridge technique. An additional evaporated Al ``patch'' layer connects the junction electrodes to the Nb capacitor pads, which previously had their native oxides removed by an Ar ion mill step. The qubit chip was then placed into a machined three-dimensional Al cavity.
The optical setup at room temperature uses a commercial semiconductor DFB $1490~\text{nm}$ laser. The intensity of the laser was modulated by an external LiNbO$_3$ Mach-Zehnder modulator. Despite the nonlinear response of the electro-optic modulator, no predistortion of the microwave signals for qubit control and readout was necessary. A second intensity modulator, followed by a mechanical attenuator, was used to finely control the average optical power.
A detailed experimental diagram is available in Extended Data Figs.\ref{fig_ExpDR} and \ref{fig_ExpRT}. A summary of the various device parameters is shown in Table \ref{ParamTable}.
\begin{table*}[h]
\begin{tabular}{ |c|c|c| }
\hline
Qubit frequency in Fig.\ref{fig2}.a-c and Fig.\ref{fig3}.a & $\omega_{q}$ & $\omega_{q}/2\pi=5.052~\si{\giga \hertz}$ \\
\hline
Qubit frequency in Fig.\ref{fig2}.d-f and Fig.\ref{fig3}.b & $\omega_{q}$ & $\omega_{q}/2\pi=5.088~\si{\giga \hertz}$ \\
\hline
Cavity frequency & $\omega_{c}$ & $\omega_{c}/2\pi=10.866~\si{\giga \hertz}$ \\
\hline
Cavity linewidth & $\kappa$ & $\kappa/2\pi=3.09~\si{\mega \hertz}$ \\ \hline
Exchange coupling strength & $g$ & $g/2\pi=294~\si{\mega \hertz}$ \\
\hline
Dispersive shift & $\chi$ & $\chi/2\pi=517~\si{\kilo \hertz}$ \\
\hline
Critical photon number & $n_{crit}$ & $n_{crit}=98$ \\
\hline
Photodiode responsivity & $\mathcal{R}$ & $\mathcal{R}=0.55~\text{A/W}$ \\
\hline
\end{tabular}
\caption{\textbf{Device Parameters}
\label{ParamTable}}
\end{table*}
\section{Introduction and results}
Throughout the paper, let $\cH$ denote a hypergraph with point set $V$ and edge set $E$. A strict $N$-coloring $\cC$ of $\cH$ is a coloring of the elements of $V$ using exactly $N$ colors; in other words, $\cC=\{C_1,\ldots,C_N\}$ is a partition of $V$ where each $C_i$ is nonempty ($1\leq i\leq N$). Given a coloring $\cC$, we define the mapping $\varphi_\cC\colon V\to \{1,2,\ldots,N\}$ by $\varphi_\cC(P)=i$ if and only if $P\in C_i$. We call the numbers $1,\ldots, N$ colors and the sets $C_1,\ldots,C_N$ color classes. We call a hyperedge $H\in E$ \emph{rainbow (with respect to $\cC$)} if no two points of $H$ have the same color; that is, $|H\cap C_i|\leq 1$ for all $1\leq i\leq N$. The upper chromatic number (UCN for short) of the hypergraph $\cH$, denoted by $\UCN(\cH)$, is the maximal number $N$ for which $\cH$ admits a strict $N$-coloring without rainbow hyperedges. Let us call such a coloring \emph{proper} or \emph{rainbow-free}. It is easy to see that for an ordinary graph $G$ (that is, a $2$-uniform hypergraph), $\UCN(G)$ is just the number of connected components of $G$.
As one can see, the above defined hypergraph coloring problem is a counterpart of the traditional one, where we seek the least number of colors with which we can color the vertices of a hypergraph while forbidding hyperedges to contain two vertices of the same color. The general mixed hypergraph model, introduced by Voloshin \cite{V1,V2}, combines the above two concepts. This mixed model is better known but here we do not discuss it; the interested reader is referred to \cite{Iljics}.
It is clear that if we find a vertex set $T\subset V$ in $\cH$ which intersects every hyperedge in at least two points, then by coloring the points of $T$ with one color and all the other points of $V$ by mutually distinct colors, we obtain a proper, strict $(|V|-|T|+1)$-coloring.
\begin{definition}\label{transversal}
Let $\cH=(V;E)$ be a hypergraph, $t$ a nonnegative integer. A vertex set $T\subset V$ is called a \emph{$t$-transversal of $\cH$} if $|T\cap H|\geq t$ for all $H\in E$. The size of the smallest $t$-transversal of $\cH$ is denoted by $\tau_t(\cH)$.
\end{definition}
\begin{definition}\label{trivcolor}
We say that a coloring of $\cH$ is \emph{trivial} if it contains a monochromatic $2$-transversal.
\end{definition}
As seen above, the best trivial colorings immediately yield a lower bound for $\UCN(\cH)$.
\begin{proposition}\label{trivbound}
For any hypergraph $\cH$,
\[\UCN(\cH)\geq |V|-\tau_2(\cH)+1.\]
\end{proposition}
Two general problems are to determine whether this bound is sharp (for a particular class of hypergraphs) and to describe the colorings attaining the upper chromatic number. In this paper, the hypergraphs we consider consist of the points of the $n$-dimensional projective space $\PG(n,q)$ over the finite field $\GF(q)$ of $q$ elements with its $k$-dimensional subspaces as hyperedges, $n\geq 2$, $1\leq k\leq n-1$. We denote this hypergraph by $\cH(n,k,q)$; however, we usually take into account the richer structure of $\PG(n,q)$ when working in $\cH(n,k,q)$. The study of this particular case was started in the mid-nineties by Bacs\'o and Tuza \cite{TB}, who established general bounds for the upper chromatic number of arbitrary finite projective planes (considered as a hypergraph whose points and hyperedges are the points and lines of the plane). Let us introduce the notation $\theta_{q,k}=\theta_k=q^k+q^{k-1}+\ldots+q+1=\frac{q^{k+1}-1}{q-1}$ for the number of points in a $k$-dimensional projective space of order $q$. We recall that a projective plane of order $q$ has $\theta_2=q^2+q+1$ points.
\begin{result}[Bacs\'o, Tuza \cite{TB}]
Let $\Pi_q$ be an arbitrary finite projective plane of order $q$, and let $\tau_2(\Pi_q)=2(q+1)+c(\Pi_q)$. Then
\[\UCN(\Pi_q)\leq q^2-q-\frac{c(\Pi_q)}{2}+o(\sqrt{q}).\]
\end{result}
Note that Proposition \ref{trivbound} claims $\UCN(\Pi_q)\geq q^2-q-c(\Pi_q)$. Recently, Bacs\'o, H\'eger, and Sz\H{o}nyi have obtained exact results for the Desarguesian projective plane $\PG(2,q)$.
\begin{result}[Bacs\'o, H\'eger, Sz\H{o}nyi \cite{BHSz}]\label{BHSz}
Let $q=p^h$, $p$ prime. Suppose that either $q>256$ is a square, or $p\geq 29$ and $h\geq3$ odd. Then $\UCN(\PG(2,q))=\theta_2-\tau_2(\PG(2,q))+1$, and equality is reached only by trivial colorings.
\end{result}
In this work, we determine $\UCN(\cH(n,k,q))$ for many parameters and aim not only to characterise trivial colorings as the only ones achieving the upper chromatic number of the hypergraph $\cH(n,k,q)$, but also to obtain results showing that proper colorings of $\cH(n,k,q)$ using slightly fewer colors than $\UCN(\cH(n,k,q))$ are trivial; in other words, to prove that trivial colorings are stable with respect to the number of colors. For the sake of convenience, we will formulate our results in three theorems for the hypergraph $\cH(n,n-k,q)$. Let us note that if $k<\frac{n}{2}$, then $\tau_2(\cH(n,n-k,q))=2\theta_k$, where equality can be reached by the union of two disjoint $k$-spaces, but not much is known if $k\geq \frac{n}{2}$; some details are given in Section \ref{sec:blsets}.
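For concreteness, the quantities in these statements are easy to evaluate; a small sketch computing $\theta_k$ and the trivial lower bound $\theta_n-2\theta_k+1$ of Proposition \ref{trivbound} for $\cH(n,n-k,q)$ with $k<\frac{n}{2}$:

```python
def theta(k, q):
    """Number of points of a k-dimensional projective space of order q:
    q^k + q^(k-1) + ... + q + 1 = (q^(k+1) - 1) / (q - 1)."""
    return (q**(k + 1) - 1) // (q - 1)

def trivial_bound(n, k, q):
    """theta_n - 2*theta_k + 1, the trivial lower bound on UCN(H(n, n-k, q))
    when k < n/2 (tau_2 = 2*theta_k, attained by two disjoint k-spaces)."""
    return theta(n, q) - 2 * theta(k, q) + 1

# e.g. PG(3, 17) with planes as hyperedges (n = 3, k = 1):
print(theta(3, 17), trivial_bound(3, 1, 17))
```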
\begin{theorem}\label{ucnthm}
Let $n\geq 3$, $1\leq k<\frac{n}{2}$, and assume that $q\geq 17$ if $k=1$ and $q\geq 13$ if $k\geq 2$. Then
\[
\UCN(\cH(n,n-k,q))=\theta_n-\tau_2(\cH(n,n-k,q))+1=\theta_n-2\theta_k + 1.
\]
\end{theorem}
\begin{theorem}\label{ucnstabp}
Let $n\geq 2$, $q=p^h$, $p$ prime, $1\leq k\leq n-1$. Suppose that
\begin{itemize}
\item $\delta = \frac12\left( (\sqrt{2}-1)q^k - 3\theta_{k-1} - 8 \right)\geq0$ and $q\geq 11$ if $h=1$,
\item $\delta = \frac12\left(q^{k-1}-\theta_{k-2}-3\right)$, $k\geq 2$ and $q\geq 25$ if $h\geq2$.
\end{itemize}
Under these assumptions the following hold:
\begin{compactenum}
\item[a)] If $k < \frac{n}{2}$, then any rainbow-free coloring of $\cH(n,n-k,q)$ using
\[
N\geq \theta_n-\tau_2(\cH(n,n-k,q))+1-\delta = \theta_n-2\theta_k + 1 - \delta
\]
colors contains a monochromatic pair of disjoint $k$-spaces, and hence is trivial.
\item[b)] If $k \geq \frac{n}{2}$, then
\[
\UCN(\cH(n,n-k,q)) < \theta_n-2\theta_k + 1 - \delta.
\]
\end{compactenum}
\end{theorem}
Note that the stability gap $\delta$ in the above result is much weaker in the non-prime case (in particular, the case $k=1$ is missing). The next theorem gives a much better result at the expense of requiring much stronger assumptions on the order and the characteristic of the field.
\begin{theorem}\label{ucnstabq}
Let $n\geq 2$, $q=p^h$, $p$ prime, $h\geq 2$, $1\leq k\leq n-1$. Suppose that $p\ge11$, $q^k\ge 239$ and $\delta=\frac{q^k}{200} - \theta_{k-1}-\frac32$. Then any rainbow-free coloring of $\cH(n,n-k,q)$ using
\[
N\geq \theta_n-2\theta_k + 1 - \delta
\]
colors contains a monochromatic $2$-transversal, and hence is trivial.
\end{theorem}
The requirements on $q$ and $N$ in the above theorem could be chosen differently, see Remark \ref{ucnthm:remark} for the details. Note that Theorem \ref{ucnstabq} is not phrased in terms of $\tau_2(\cH(n,n-k,q))$, the parameter appearing in the trivial lower bound of Proposition \ref{trivbound}. As noted earlier, $\tau_2(\cH(n,n-k,q))=2\theta_k$ if $k<\frac{n}{2}$. If $k=\frac{n}{2}$, then \cite[Corollary 4.13]{DBHSzVdV} asserts the existence of a $2$-fold $k$-blocking set in $\PG(2k,q)$ (in finite geometrical language, $t$-transversals of $\cH(n,n-k,q)$ are called $t$-fold $k$-blocking sets, see Section \ref{sec:blsets}) of size $2q^k + 2\frac{q^k-1}{p-1}$, where $q=p^h$, $p>5$ prime, $h\geq2$. Thus, if $p \geq 409$, then $\tau_2(\cH(2k,k,q))\leq 2\theta_k + \delta$, whence Theorem \ref{ucnstabq} yields that the trivial bound is again sharp for $\cH(2k,k,q)$, regardless of the exact value of $\tau_2(\cH(2k,k,q))$.
In the proof of the above theorem, we rely on weighted $2$-fold blocking sets as well, so we devote the next section to this topic, and we obtain the following new result which, in fact, follows from the similar Theorem \ref{tmodpsetthm} about $t \pmod p$ sets. The precise definitions are given in the next section.
\begin{theorem}\label{blsetthm}
Let $\cB$ be a minimal weighted $t$-fold $k$-blocking set of $\PG(n,p)$, $p$ prime. Assume that $|\cB| \le (t+\frac{1}{2})p^{k}-\frac12$ and $t\leq \frac38p+1$. Then $\cB$ is the (weighted) union of $t$ not necessarily distinct $k$-dimensional subspaces.
\end{theorem}
\section{Small, weighted multiple $(n-k)$-blocking sets}\label{sec:blsets}
For the sake of convenience, we will refer to $(n-k)$-blocking sets instead of $k$-blocking sets throughout this section.
\subsection{Preliminary notation and results}
\begin{definition}
An \emph{$m$-space} is a subspace of $\PG(n,q)$ of dimension $m$ (in projective sense). A point-set $\cB$ in $\PG(n,q)$ is called a \emph{$t$-fold $(n-k)$-blocking set} if every $k$-space intersects $\cB$ in at least $t$ points. A point $P$ of $\cB$ is \emph{essential} if $\cB\setminus\{P\}$ is not a $t$-fold $(n-k)$-blocking set; in other words, if there is a $k$-space through $P$ that intersects $\cB$ in precisely $t$ points. $\cB$ is called \emph{minimal}, if all of its points are essential; in other words, if $\cB$ does not contain a smaller $t$-fold $(n-k)$-blocking set.
\end{definition}
In $\PG(n,q)$, every $(n-k)$-space intersects every $k$-space non-trivially. If $n-k<\frac{n}{2}$, it is easy to find two (or more, say, $t$) disjoint $(n-k)$-spaces, whose union is clearly a $2$-fold (or $t$-fold) $(n-k)$-blocking set of size $2\theta_{n-k}$. If $n-k\geq \frac{n}{2}$, this does not work and, in fact, not much is known even about the size of a smallest double $(n-k)$-blocking set, let alone its structure. Even for the particular case $n=2k$, the first and, so far, only general construction for small double $(n-k)$-blocking sets appeared in \cite{DBHSzVdV}. Note that, however, \emph{weighted} $t$-fold blocking sets can be obtained easily in this way.
\begin{definition}
A weighted point set of $\PG(n,q)$ is a multiset $\cB$ of the points of $\PG(n,q)$. We may refer to the multiplicities of the points of $\cB$ via a function $w=w_\cB$ mapping the point set of $\PG(n,q)$ to the set of non-negative integers, where $w$ is also called a weight function; points not contained in $\cB$ have weight zero by $w$ and, vice versa, zero weight points are considered to be not in $\cB$. We call a weighted point set $\cB$ of $\PG(n,q)$ a weighted $t$-fold $(n-k)$-blocking set if for every $k$-space $U$, $\sum_{P\in U}w(P)\geq t$, and $\cB$ is called minimal if decreasing the weight of any point results in a $k$-space violating the previous property; in other words, if $\cB$ does not contain a strictly smaller $t$-fold $(n-k)$-blocking set, where the size of a weighted point set is defined as the sum of weights in it. Also, for any point set $S$, $|S\cap\cB|$ is defined as $\sum_{P\in S}w(P)$, and in general, any quantity referring to a number of points of $\cB$ is usually considered with multiplicities. E.g., an $i$-secant line $\ell$ (with respect to $\cB$) is a line such that $|\ell\cap\cB|=i$.
\end{definition}
We also refer to $1$-fold and $2$-fold blocking sets as blocking sets and double blocking sets, respectively; the term multiple blocking set refers to a $t$-fold blocking set with $t\geq 2$. We call a point of weight one simple. It is easy to see that a weighted $t$-fold $k$-blocking set must contain at least $t\theta_k$ points unless $t\geq q+1$. We include this supposedly folklore result with proof for the sake of completeness.
\begin{proposition}\label{folklore}
Let $\cB$ be a $t$-fold $(n-k)$-blocking set in $\PG(n,q)$. If $t\leq q$, then $|\cB|\geq t\theta_{n-k}$.
\end{proposition}
\begin{proof}
We prove by induction on $k$. If $k=1$, we may take a point $P\notin\cB$ (otherwise $|\cB|\geq \theta_n > q\theta_{n-1}$ and there is nothing to prove). There are $\theta_{n-1}$ lines through $P$, each containing at least $t$ points of $\cB$, whence $|\cB|\geq t\theta_{n-1}$. Suppose now $k\geq 2$. If $\cB$ is an $(n-k+1)$-blocking set then, by induction, $|\cB|\geq \theta_{n-k+1}=q\theta_{n-k}+1 > t\theta_{n-k}$ and we are done. If there is a $(k-1)$-space $\Pi$ disjoint from $\cB$, then each of the $\theta_{n-k}$ distinct $k$-spaces containing $\Pi$ intersects $\cB$ in at least $t$ points, whence $|\cB|\geq t\theta_{n-k}$.
\end{proof}
Note that $t\leq q$ is necessary here, as if $\cB$ contains each point of an $(n-k+1)$-space with weight one, then $\cB$ is a $(q+1)$-fold $(n-k)$-blocking set of size $\theta_{n-k+1}=q\theta_{n-k} + 1 < (q+1)\theta_{n-k}$; moreover, adding $s$ further $(n-k)$-spaces to $\cB$ we obtain a weighted $(q+1+s)$-fold $(n-k)$-blocking set of size less than $(q+1+s)\theta_{n-k}$ for any $s\geq 0$.
A stability result for weighted $t$-fold $(n-k)$-blocking sets of size close to this lower bound was proven by Klein and Metsch \cite[Theorem 11]{KM}.
\begin{result}[Klein, Metsch \cite{KM}]\label{KM}
Let $\cB$ be a weighted $t$-fold $(n-k)$-blocking set in $\PG(n,q)$. Suppose that $|\cB|\leq t\theta_{n-k} + r\theta_{n-k-2}$, where $t$ and $r$ satisfy the following:
\begin{enumerate}
\item[a)] {$1\leq t\leq \frac{q+1}{2}$;}
\item[b)] $t+r\leq q$, $r\geq 0$ is an integer;
\item[c)] any blocking set of $\PG(2,q)$ of size at most $q+t$ contains a line.
\end{enumerate}
Then $\cB$ contains the (weighted) union of $t$ not necessarily distinct $(n-k)$-spaces.
\end{result}
Let us remark that for $k=1$ (that is, when $\cB$ is a $t$-fold weighted blocking set with respect to lines), \cite[Theorem 7]{KM} shows that condition \emph{c)} can be omitted in the above result. However, a blocking set of $\PG(2,q)$ not containing a line must contain at least $q+\sqrt{q}+1$ points in general (see \cite{Bruen} by Bruen), and, according to the following result of Blokhuis, at least $\frac32(q+1)$ if $q$ is prime, hence condition \emph{c)} holds accordingly.
\begin{result}[Blokhuis \cite{Husi}]\label{Husi}
Suppose that $\cB$ is a blocking set in $\PG(2,p)$, $p$ prime, not containing a line. Then $|\cB|\geq \frac32(p+1)$.
\end{result}
The following two theorems will be very useful for us.
\begin{result}[Harrach \cite{Harrach}]\label{Harrach}
Suppose that a weighted $t$-fold $k$-blocking set $\cB$ in $\PG(n,q)$ has less than $(t+1)q^k+\theta_{k-1}$ points. Then $\cB$ contains a unique minimal weighted $t$-fold $k$-blocking set $\cB'$.
\end{result}
A theorem of the below type is often called a $t \pmod p$ result.
\begin{result}[Ferret, Storme, Sziklai, Weiner \cite{FSSzW}]\label{FSSzW}
Let $\cB$ be a minimal weighted $t$-fold $(n-k)$-blocking set of $\PG(n,q)$, $q=p^h$, $p$ prime, $h\geq1$, of size $|\cB|=tq^{n-k}+t+k'$, with $t+k'\leq\frac{q^{n-k}-1}{2}$. Then $\cB$ intersects every $k$-space in $t \pmod p$ points \cite[Theorem 4.2]{FSSzW}. Moreover if $e\ge1$ denotes the largest integer for which each $k$-space intersects $\cB$ in $t \pmod{p^e}$ points, then $|\cB| > tq^{n-k} + \frac{q^{n-k}}{p^e+1} - 1$ \cite[Corollary 5.2]{FSSzW}.
\end{result}
Finally, we recall that the number of $(k+1)$-spaces containing a fixed $k$-space in $\PG(n,q)$ is $\theta_{n-k-1}$. This can be seen easily by taking an $(n-k-1)$-space disjoint from the fixed $k$-space and observing that each appropriate $(k+1)$-space intersects it in a unique point.
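This count can be double-checked by double counting incident pairs of a $k$-space and a $(k+1)$-space, using Gaussian binomial coefficients for the subspace counts; a minimal sketch:

```python
def gauss_binom(n, k, q):
    """Gaussian binomial [n choose k]_q: number of k-dim subspaces of GF(q)^n."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(i + 1) - 1
    return num // den  # the ratio of the full products is always an integer

def theta(k, q):
    return (q**(k + 1) - 1) // (q - 1)

# Double count pairs (W, U) with W a k-space, U a (k+1)-space, W < U, in PG(n,q):
# (#k-spaces) * theta_{n-k-1}  ==  (#(k+1)-spaces) * (#k-spaces in a (k+1)-space)
n, k, q = 4, 1, 3
lhs = gauss_binom(n + 1, k + 1, q) * theta(n - k - 1, q)
rhs = gauss_binom(n + 1, k + 2, q) * gauss_binom(k + 2, k + 1, q)
print(lhs == rhs)
```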
\subsection{Proof of Theorem \ref{blsetthm}}
We prove a theorem closely related to Theorem \ref{blsetthm} by considering an analogous problem in a slightly more general setting.
\begin{definition}\label{tmodpset}
Let us call a weighted point set $\cB$ in $\PG(n,q)$ \emph{a $t \pmod p$ set} with respect to the $k$-dimensional subspaces if $\cB$ intersects every $k$-space ($1\leq k\leq n-1$) in $t \pmod p$ points (counted with weights).
\end{definition}
Clearly, $t \pmod p$ sets are $t$-fold blocking sets if $t<p$ and, by Result \ref{FSSzW}, small minimal $t$-fold blocking sets are $t \pmod p$ sets.
\begin{theorem}\label{tmodpsetthm}
Let $\cB$ be a $t \pmod p$ set with respect to the $k$-dimensional subspaces in $\PG(n,p)$, $p$ prime. Suppose that $t\leq \frac38p + 1$ and $|\cB|\leq (t+1)\theta_{n-k} + p - 2$. Then $|\cB|= t\theta_{n-k}$ and $\cB$ consists of the weighted union of $t$ not necessarily distinct $(n-k)$-spaces.
\end{theorem}
\begin{proof}
The proof will use induction on $k$. Clearly, $\cB$ is a $t$-fold $(n-k)$-blocking set. We will need the existence of a point not in $\cB$. This follows if $|\cB|<\theta_n$. If $p-1\geq t+1$, then our assumption gives $|\cB|\leq (p-1)\theta_{n-1}+p-2 = \theta_{n} - 1 - \theta_{n-1} + p - 2 < \theta_n$. If $p\leq t+1$, then from $t\leq \frac38p+1$ it follows that $p\leq 3$ must hold. If $p=2$, then $t=1$ and $|\cB|\leq (t+1)\theta_{n-1} + p - 2 = \theta_n - 1$. If $p=3$, the problematic case is $t=2$, when $|\cB|\leq 3\theta_{n-1} + 1 = \theta_n$. If $|\cB|=\theta_n$ and $\cB$ contains every point of the space, then it is clearly a $1 \pmod p$ set for every subspace, in contradiction with $t=2$. Hence we always find a point not contained in $\cB$.
\noindent\textbf{Case 1: $k=1$ (and $n\geq 2$).}
Notice first that every point of $\cB$ has weight at most $t$. Indeed, by taking the weights of the points modulo $p$, we may assume that no point has weight at least $p$; and if $t+1\leq w(P)\leq p-1$ for a point $P$, then all the $\theta_{n-1}$ lines through $P$ must contain at least $p+t-w(P)$ more weights, whence $|\cB|\geq w(P) + (p+t-w(P))\theta_{n-1}\geq p - 1 + (t+1)\theta_{n-1}$, a contradiction.
It follows from Results \ref{KM} and \ref{Husi} that the assertion holds if $|\cB|=t\theta_{n-1}$, hence we may assume that $|\cB|> t\theta_{n-1}$ and prove by contradiction.
We will call lines that are neither $t$-secants (to $\cB$), nor contained fully in $\cB$ \emph{long} lines; lines contained in $\cB$ will be referred to as \emph{full} lines. Non-$t$-secant lines are, therefore, either full or long. Long lines exist as on any point not in $\cB$ (an \emph{outer} point) we find a line intersecting $\cB$ in more than $t$ points, since $|\cB|=t\theta_{n-1}$ would follow otherwise. Suppose that the minimum weight of a long line is $sp+t$. Clearly, $1\leq s \leq t-1$ (the weight of a long line is at most $tp$). Let $\ell$ be a long line of weight $sp+t$, and let $P\in\ell\setminus\cB$. We want to show that for any $2$-space $\Pi$ containing $\ell$, there is a long line through $P$ in $\Pi$ different from $\ell$. Fix such a plane $\Pi$ (if $n=2$, then this is unique) and suppose to the contrary. Let $\cB'=\cB\cap\Pi$. Then, looking around from $P$ in $\Pi$, $|\cB'|= (p+1)t + sp$. Similarly as before, there must be a non-$t$-secant line on any point $R\in\Pi$; in other words, long and full lines form a blocking set in the dual plane of $\Pi$. It follows that long lines cover each outer point of $\Pi$ exactly once. Moreover, the number of non-$t$-secant lines must be at least $\frac32(p+1)$ for the following reason. By Blokhuis' Result \ref{Husi}, a blocking set of $\PG(2,p)$ of size less than $\frac32(p+1)$ contains a line. In our setting this situation would result in a point $Q$ through which all lines are either long or full. But then $(2t-1)p + t\geq tp+t+sp = |\cB'|\geq w(Q) + (p+1)(p+t-w(Q)) \geq t+(p+1)p$, a contradiction even under $t < \frac12p+1$.
Let $e$ be a $t$-secant to $\cB'$ (such a line exists as seen above). Let $P_1,\ldots,P_r$ be the mutually distinct points of $e\cap\cB'$, $1\leq r \leq t$. Let $h_1(P_i)$ and $h_2(P_i)$ denote the number of full and long lines on $P_i$, respectively, and let $h_1$ and $h_2$ be the total number of full and long lines, respectively; then $h_1+h_2 \geq \frac32(p+1)$. Looking around from $P_i$ we see that
\[
(p+1)t + sp = |\cB'|\geq w(P_i) + (p+1)(t-w(P_i)) + h_1(P_i)p + h_2(P_i)sp,
\]
whence $w(P_i)+s\geq h_1(P_i)+sh_2(P_i)$. Let $h_2':=h_2-(p+1-r)$ be the number of long lines intersecting $e$ in a point of $\cB'$. Then $h_1+h_2'\geq \frac32(p+1) - (p+1-r) \geq \frac12(p+1)+r$, and we obtain that
\[
t+rs = \sum_{i=1}^r (w(P_i) + s) \geq \sum_{i=1}^r (h_1(P_i) + sh_2(P_i)) = h_1 + sh_2' = h_1+s(h_2-(p+1-r)),
\]
whence
\begin{eqnarray*}
t &\!\geq\!& h_1 + s(h_2 -(p+1)) > h_1 + s\left(\left(\frac{3(p+1)}{2}-h_1\right) - (p+1)\right)
= h_1 + s\left(\frac{p+1}{2}-h_1\right)\\
&\!=\!& (s-1)\left(\frac{p+1}{2} -h_1 \right) +\frac{p+1}{2}.
\end{eqnarray*}
As $t < \frac12(p+1)$, it follows that $s\geq 2$ and $h_1 > \frac12(p+1)$.
It is clear that there are at least $p+1-r\geq 2$ long lines. Now take two long lines and then the $h_1\geq \frac12p+1$ full lines one by one. The first line contains at least $sp+t$ of the weight of $\cB'$. The second line may intersect it in a point of weight at most $t$, hence we see at least $sp$ more weight on it. Turning to the full lines, the $i$th full line contains at least $p+1 - 2 - (i-1) = p - i$ points of $\cB'$ not contained in any of the previous lines. Altogether we obtain
\begin{eqnarray*}
(p+1)t + sp = |\cB'| &\geq& 2sp + t + \sum_{i=1}^{\frac{p}{2}+1}(p-i) = 2sp + t + \left(\frac{p}{2}+1\right)p - \binom{\frac{p}{2}+2}{2} \\
&=& 2sp + t + \frac{p^2}{2} + p - \frac{p^2}{8}-\frac{3p}{4} -1,
\end{eqnarray*}
whence
\[
t\geq \frac38p + s + \frac14 -\frac1p > \frac38p + 1,
\]
a contradiction. Thus we see that all planes containing $\ell$ indeed contain at least one other long line through $P$, so we find at least $1+\theta_{n-2}$ long lines through $P$, hence on all the $\theta_{n-1}$ lines through $P$ we find that $|\cB|\geq t\theta_{n-1} + (\theta_{n-2}+1)sp = t\theta_{n-1} + s(\theta_{n-1}-1) + sp \geq (t+1)\theta_{n-1} + p -1$, a contradiction.
\textbf{Case 2: $2\leq k \leq n-1$ (and $n\geq 3$).}
Take a point $P \notin \cB$ in $\PG(n,p)$. Project the points of $\cB$ from $P$ into an arbitrary hyperplane $H$. We get a weighted point set $\tb \subseteq H$ for which $|\tb| = |\cB|$. Let $W$ be a $(k-1)$-space in $H$, and let $U = \langle P, W\rangle$ be the $k$-space spanned by $P$ and $W$. Then $|W\cap \tb| = |U \cap \cB|$, hence $\tb$ is a $t \pmod p$ set with respect to $(k-1)$-spaces in the $(n-1)$-space $H$; thus, by induction on $k$, $|\cB| = |\tb| = t\theta_{n-1 - (k-1)}=t\theta_{n-k}$. Results \ref{KM} and \ref{Husi} finish the proof.
\end{proof}
Theorem \ref{blsetthm} now follows from Theorem \ref{tmodpsetthm} and the $t\pmod p$ Result \ref{FSSzW}.
\section{On the upper chromatic number of $\cH(n,n-k,q)$}
\subsection{Proof of Theorems \ref{ucnthm} and \ref{ucnstabp}}
The steps of the proof have a lot in common with those in \cite{BHSz}. We recall that we want to color the points of $\PG(n,q)$ with as many colors as possible so that each $(n-k)$-space contains two equicolored points. For two points $P$ and $Q$, $PQ$ denotes the line joining them.
\begin{definition
\section{Introduction}
Self-avoiding walk (SAW) on a lattice serves as a paradigm for research
in statistical properties of natural and artificial polymers
\cite{degennes}.
A ``bridge'' is defined as a bond on the lattice that connects two sites
both of which are located on the SAW and are nearest-neighbours on the
lattice but are not nearest neighbours along the contour of the SAW.
Random walks (RWs) on SAWs are an interesting problem in their own right because of the
effects of the hops of the random walker across these bridges.
Many years ago, motivated by the vibrational dynamics of proteins, the
root-mean-square displacement of the random walker on a SAW was studied
both in the absence and presence of hops across bridges
\cite{chowdhury85a,chowdhury85b,yang85,maritan85,bouchaud86,seno89,manna89}.
RW on SAW has also been studied as one of the prototypes of RW in
disordered and fractal media \cite{Bouchaud,havlin,klafter07}.
In this paper we report the effects of the hops of the random walker
across the bridges on the distributions of their first passage times
(FPTs) \cite{redner}, i.e., the time taken by the walker to reach a
target site for the first time.
Moreover, we extend the model even further by allowing the possibility
of detachments and re-attachments (to be described in detail in section
\ref{model}); we also report the effects of these processes of
attachments/detachments of the random walkers on the distributions of
their first passage times. This extension of the model and the
computation of the first passage times are motivated by a biological
process which is discussed in the next section. Therefore,
this work may also be viewed as a biologically motivated extension of
the works reported earlier \cite{chowdhury85a,chowdhury85b,yang85,maritan85,bouchaud86,seno89,manna89}.
This paper is organised as follows: In section {\ref{motivation}}, we discuss
the biological motivation behind this problem. In section {\ref{review}}, we
review some of the earlier works and compare our model to the previous models.
In section {\ref{model}}, we build the model and, thereafter in section
{\ref{result}}, we discuss the results.
\section{Biological motivation}
\label{motivation}
A cell is the structural and functional unit of a living system. DNA, the
device used by nature for storage of genetic information, is essentially
a linear polymer. The genetic information is chemically encoded in the
sequence of the nucleotides, the monomeric subunits of DNA. Some viruses
use RNA, instead of DNA, for storage of genetic information. In almost
all processes involved in the nucleic acid (DNA or RNA) metabolism,
specific proteins (or, more generally, macromolecular complexes) need to
bind to specific sites on the nucleic acid. For example, a transcription
factor must bind at the appropriate site on the DNA to initiate the
process of transcription whereby genetic code is transcribed from the
DNA to the corresponding RNA. Similarly, the processes of
DNA replication, repair and recombination also require binding of the
corresponding appropriate proteins at specific sites on the DNA. Other
processes of similar nature include restriction and modification of DNA
by sequence-specific endonucleases. The typical length of a DNA chain
could be millions of base-pairs, whereas the target site may be a
sequence of just a few basepairs. But, a protein usually succeeds in
reaching the target in an unbelievably short time. One of the most
challenging open questions in molecular cell biology and biophysics is:
how does a protein search such a long strand of DNA in an efficient
manner to reach the target site?
To our knowledge, this question was first formulated clearly by von Hippel
and coworkers \cite{Hippel1,Hippel2} who also pointed out three possible
mechanisms of search for the specific binding sites by the DNA-binding
proteins. These three possible modes of search are as follows: \\
(i) The protein {\it slides } diffusively along an effectively
one-dimensional track formed by covalently-bonded bases of the DNA template,\\
(ii) it not only slides along the DNA chain but, occasionally, also
{\it hops } from one segment of the DNA to a neighbouring segment;
proteins with more than one DNA-binding sites can exploit this mechanism,\\
(iii) in addition to sliding and intersegmental hopping, it also carries
out a three-dimensional search for the specific binding site by first
{\it detaching} from the DNA strand and, then, after executing
three-dimensional diffusion in the solution, {\it re-attaching} at a new
site which is uncorrelated with the site from which it detached (see
Fig.{\ref{rnapcartoon}}). \\
Various aspects of these mechanisms and their relative importance have
been explored by many research groups in subsequent works (see next
section for a brief review and comparison to our model).
\cite{elf07,Busta,Halford1,Holyst,Halford2,Moreau,slutsky04,Mirny,oshanin,salerno,Zhou,Kampmann,metzler05,klafter06,lindenberg07,lindenberg08,Flyvbjerg,murugan07,sokolov05,kolomeisky,mirnyarxiv,rezania,kafri}.
\section{Brief Review of earlier models}
\label{review}
Bustamante et al. {\cite{Busta}} showed experimental evidence of the
intersegmental transfer and hopping movements of {\it E. coli} RNA Polymerase
(RNAP) on nonspecific DNA. They also showed
the effect of Heparin, which disrupts the RNAP-DNA nonspecific complexes. (For a
theoretical review of this phenomenon, see {\cite{Halford1}}.)
Burdzy and Holyst {\cite{Holyst}} address an important question, namely the
number of molecules needed to locate the target of a given size. However, the
theoretical arguments are not supported by any simulations. Also, the arguments
are not in terms of FPTs, which may be more relevant biologically in the
given context.
The effect of sequential inhomogeneity of the DNA was taken into consideration
by Slutsky et al. {\cite{slutsky04}}. However, they focused only on a
combination of one- and three-dimensional search mechanisms,
without considering intersegmental transfers. Also, they modeled the DNA as a one-dimensional strand, which is not completely realistic in the biological
context.
The DNA was modeled as a one-dimensional strip consisting of low and high
affinity sites by Rezania et al. {\cite{rezania}}.
They also considered a two-dimensional strip which, in addition to the above-mentioned
sites, contains zero-affinity water. However, they did not investigate the role of
the bridges explicitly in their simulations.
The model developed by Oshanin et al. {\cite{oshanin}} is similar to our model, in that the search is carried out
in discrete time steps, up to a maximum of $N$ steps, until the immobile target is found. The survival probability
is found in terms of the leakage probability and is optimized to minimize this probability. However, the
calculations are done for a one-dimensional substrate, which may not be
biologically realistic.
Recently, Sheinman et al. {\cite{kafri}} studied the effect of
intersegmental transfers on the search process. The DNA was however modeled by
connecting an ideal gas of rods (of unit persistence length) randomly to
form a small world network. The authors reported a decrease
in the search time by using scaling arguments and numerical verification. They
also found dependence on the length of the DNA, an aspect which we do not
address in great detail here.
Therefore, in spite of the considerable attention that this problem has
received recently, the role of all three mechanisms and, in particular, the
role of intersegmental transfer together with the attachment/detachment
have not been investigated thoroughly. In this paper, we study all the three
mechanisms together, which complements some of the works which have been
reported earlier for elucidating the relative importance of each.
\vspace{1cm}
\begin{figure}[ph]
\centerline{\psfig{file=dcfig1.eps,width=7.7cm}}
\vspace*{8pt}
\caption{A pictorial depiction of the various mechanisms of searching for specific binding sites by a DNA-binding
protein (e.g., searching of the promoter site by a transcription factor).\label{rnapcartoon}}
\end{figure}
\section{The Model}
\label{model}
A DNA can be considered to be a freely jointed chain over length scales
much longer than its persistence length. A freely jointed chain can be
modeled using a SAW \cite{degennes}, where the
length of each of the steps of the SAW is typically of the order of the
persistence length. The persistence length of DNA is roughly 100
base-pairs (bps). Therefore, a SAW of total length $L = 100$ would
correspond, approximately, to $10,000$ base pairs which is comparable,
for example, to the length of a {\it bacteriophage} DNA.
Motivated by the experimental and theoretical works summarized in
sections \ref{motivation} and \ref{review}, in this paper we explore the efficiency of
searching the SAW by a random walker for a specific binding site on the
SAW. We study the efficiency of various search mechanisms that the
random walker may use in order to reach the target site. We have
introduced a new quantitative measure of the efficiencies of the search
mechanisms in terms of the time-scales that are relevant to this problem.
\begin{figure}[ph]
\centerline{\psfig{file=dcfig2.eps,width=8.7cm}}
\vspace*{8pt}
\caption{A pictorial depiction of a Self Avoiding Walk (SAW) generated on a
square lattice. A$\leftrightarrow$B$\leftrightarrow$C represents sliding.
A$\leftrightarrow$D represents hopping across a {\it bridge}. A path from A to
F which does not consist entirely of sliding along the contour and hopping
across bridges is called jumping (It would consist of atleast one detachment
from the SAW and its re-attachment). The probabilities associated with hopping
from one site to the other depends on the local conformation of the SAW.
The boundaries of the underlying lattice are far away from the
SAW.\label{SAWsq}}
\end{figure}
For the sake of simplicity, we consider SAWs in two-dimensions, rather
than three-dimensions. The random walker
is represented by a particle. The particle searches the binding site
by a combination of sliding, intersegment hopping as well as detachments,
two-dimensional diffusion followed by, possibly, re-attachments (see
Fig.\ref{SAWsq}). {\it Sliding} motion of the particle is captured by
its one-dimensional RW where its position at the successive time steps
are nearest-neighbours along the contour of the SAW. In contrast, an
{\it inter-segment hopping} of the particle takes place across a
``bridge'' that connects two sites both of which are located on the
SAW and are nearest-neighbours on the square lattice but are not nearest
neighbours along the contour of the SAW. Finally, upon {\it detachment}
from the SAW, a particle executes an unbiased RW on the square lattice
and, during this process, may {\it re-attach} with the SAW if it hops
onto a site occupied by the SAW.
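The geometric notions above are easy to make concrete. As a purely illustrative sketch (our own Python, not the simulation code used in this paper; the helper name \texttt{find\_bridges} is our choice), the bridges of a SAW stored as a list of lattice coordinates can be located directly from the definition: pairs of sites that are nearest neighbours on the square lattice but not nearest neighbours along the contour.

```python
def find_bridges(saw):
    """Return the bridges of a 2D SAW given as a list of (x, y) lattice sites.

    A bridge joins two sites that are nearest neighbours on the square
    lattice but not nearest neighbours along the contour of the walk.
    """
    index = {site: i for i, site in enumerate(saw)}  # site -> contour position
    bridges = []
    for i, (x, y) in enumerate(saw):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = index.get((x + dx, y + dy))
            if j is not None and j > i + 1:  # lattice neighbour, not contour neighbour
                bridges.append((i, j))
    return bridges

# A walk that folds back on itself: sites 0 and 5, and sites 1 and 4, are
# lattice neighbours but several steps apart along the contour.
walk = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(find_bridges(walk))  # → [(0, 5), (1, 4)]
```

On this folded walk, hops across the reported bridges $(0,5)$ and $(1,4)$ are exactly the inter-segment hopping moves described above.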
In our model we generate SAW configurations, each of length $L = 101$, on a
square lattice (Fig.{\ref{SAWsq}}) using a combination of {\it reptation} and the {\it kink
jump} algorithms \cite{krembind}. Averaging over the configurations thus generated, we have
verified that the {\it mean-square end-to-end Euclidean distance of the SAWs} satisfies the
well-known relation $<r_L^2> {\propto} L^{3/2}$. When the random walker was
constrained to move only along the SAW, it performed, effectively, one-dimensional diffusion.
We can determine the value of the effective diffusion constant
$D$, where $D=<R_{t}^{2}>/(2t)$, $<R_{t}^{2}>$ being the mean square displacement {\it along the contour}
of the SAW. We have also verified that the mean-square {\it Euclidean displacement} of the
random walker, on the SAW, follows $<R_E^2(t)> \propto t^{3/4}$, even when hopping across the bridges is
allowed. This is in agreement with the results reported earlier
\cite{chowdhury85a,chowdhury85b}.
\section{Results and Discussion}
\label{result}
We parametrize the positions along the contour of the SAW by
the symbol $s$; $s = 1$ and $s = L$ correspond to the two end points on the SAW. We designate the two end points,
i.e., $s = 1$ and $s = L$ as the specific binding sites for the particle.
On each SAW of length $L$, we release a particle at the mid-point of the SAW
(i.e., at $s = (L+1)/2$) and allow it to execute a RW for a total of $N$ discrete
time steps. If the particle is unable to reach either of the target sites
(i.e., $s=1$ or $s=L$), then
the search by that particle is aborted and the search by another particle
starts again. $N$ is $5000$ and
$L$ is $101$ in all our simulations. In three different sets
of computer experiments we implemented three different types of RWs of the particle.\\
(i) Mechanism I (M I): The particle is allowed to perform random walk only along the contour of the SAW.\\
(ii) Mechanism II (M II): Hopping across the bridges is allowed, in addition to the process included in mechanism I \cite{allowed}.\\
(iii) Mechanism III (M III): Attachment and detachment of the particle are also allowed, in addition to the processes
included in mechanism II \cite{allowed}.
For the random walkers, we impose absorbing boundary
conditions at $s = 1$ and $s = L$, i.e., a successful search process is terminated
once the walker reaches the target site for the first time. Under these boundary
conditions, the time taken by a random walker to reach one of the two
boundaries (i.e., $s = 1$ or $s = L$) is identified as the corresponding FPT.\\
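For Mechanism I, a search attempt is exactly an unbiased one-dimensional RW on $s\in\{1,\dots,L\}$ with absorbing boundaries. A minimal Python sketch of one such computer experiment (our own illustrative code, not the original simulation program) is:

```python
import random

def first_passage_time(L=101, N=5000, rng=random):
    """One search attempt of Mechanism I: an unbiased RW along the contour,
    released at the mid-point s = (L+1)/2, absorbed at s = 1 or s = L.
    Returns the FPT, or None if the search is aborted after N steps."""
    s = (L + 1) // 2
    for t in range(1, N + 1):
        s += rng.choice((-1, 1))
        if s == 1 or s == L:
            return t
    return None

random.seed(1)
fpts = [t for t in (first_passage_time() for _ in range(2000)) if t is not None]
# Histogramming fpts approximates P(t) for Mechanism I.
```

By construction no FPT can be shorter than the $50$ lattice steps separating the release point from either target, and aborted searches (returning \texttt{None}) are discarded, as in the text.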
\subsection{Distributions of First Passage Times (FPTs)}
The distributions $P(t)$ of the FPTs for the three mechanisms are plotted in Fig.\ref{comparemech}.
Since all three mechanisms are based on diffusive search, the qualitative shape of the curve $P(t)$ is the
same in all the three cases. But, comparing the most probable time for three mechanisms, we conclude that
the mechanism II is more efficient than mechanism I whereas mechanism III is the most efficient of all. This observation
strongly suggests that the search for DNA-binding sites by proteins would be more efficient if, in addition to
sliding, both inter-segment hopping and detachment/re-attachment are also allowed.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{80mm}{!}{\includegraphics{dcfig3a.eps}} \\
\resizebox{80mm}{!}{\includegraphics{dcfig3b.eps}} \\
\resizebox{80mm}{!}{\includegraphics{dcfig3c.eps}} \\
\end{tabular}
\caption{The distribution of the FPTs for
(a) Mechanism I, (b) Mechanism II, and (c) Mechanism III. (See Appendix {\rm I}
for fit parameters). }
\label{comparemech}
\end{center}
\end{figure}
\subsection{Relative importance of detachments/re-attachments}
In order to compare the relative importance of detachment/re-attachment compared to sliding and inter-segment
hopping, we have computed the fraction of the time steps the particle spends unattached from the SAW in each
successful search process. Corresponding to every search time, $t$, we compute
the fraction of the search time that the particle spends unattached from the
SAW. We plot this fraction as a function of the search time in Fig.\ref{fracunat}.
Note that the peak in Fig.{\ref{fracunat}} occurs at $t=191$. Interestingly,
this value is close to the most probable FPT in Fig.{\ref{comparemech}},
corresponding to Mechanism III, namely $t=153$.
Thus, the target site is reached in the shortest possible time if the particle uses
mechanism III,
in which the searching particle spends a fraction of the search time
outside the SAW.
\begin{figure}[ph]
\centerline{\psfig{file=dcfig4.eps,width=7.7cm}}
\vspace*{8pt}
\caption{The fraction of the time steps during which the particle remains unattached from the SAW before reaching
the target binding site in $t$ steps.\label{fracunat}}
\end{figure}
We have also computed the probability of re-attachment of a particle after $t$ time steps, following its
detachment from the SAW; this probability distribution is shown in
Fig.\ref{reattach}. The log-log plot in the inset indicates the
possibility of an initial {\it power law} regime, which is most likely $\sim t^{-1/2}$, crossing over to
another power law regime at long times, which was found to be $\sim t^{-3/2}$.
\begin{figure}
\begin{center}
\psfig{figure=dcfig5a.eps,width=90mm}
\vspace*{-100mm}
\end{center}
\vspace*{45mm}
\hspace*{35mm}
\psfig{figure=dcfig5b.eps,width=45mm}
\vspace*{30mm}
\caption{The re-attachment probability in Mechanism III. The inset shows the same data on a log-log scale.}
\label{reattach}
\end{figure}
\subsection{Mechanism I versus Mechanism II}
In this subsection, we consider a {\it modified version} of Mechanism II (MM II)
which reduces to the mechanism I in a special limit.
In this modified version, we compute the effect of forced hopping across
the bridges, with a given probability. We define a quantity $R$ as follows,
\begin{eqnarray}
R={\dfrac{p_{bridge}}{p_{contour}}} \\
p_{bridge}+p_{contour}=1
\label{R}
\end{eqnarray}
where $p_{bridge}$ is the probability of hopping across the bridge and
$p_{contour}$ is the probability of diffusing along contour. In the
limit $p_{bridge} = 0$ (i.e., $R = 0$), this modified version reduces to
mechanism I.
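For completeness, the two equations above determine the hopping probabilities uniquely: $p_{bridge}=R/(1+R)$ and $p_{contour}=1/(1+R)$. A short illustrative snippet (our own, in Python):

```python
def hop_probabilities(R):
    """Solve p_bridge / p_contour = R together with p_bridge + p_contour = 1."""
    p_bridge = R / (1.0 + R)
    return p_bridge, 1.0 - p_bridge

# R = 100 gives p_bridge = 100/101, i.e. approximately 0.99.
print(round(hop_probabilities(100)[0], 2))  # → 0.99
```

In the limit $R\to 0$ this recovers $p_{bridge}=0$, i.e. Mechanism I, as stated above.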
In Fig.({\ref{Rplots}}), we plot the distribution $P(t)$ of the FPTs for
four different values of $R$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{80mm}{!}{\includegraphics{dcfig6a.eps}} \\
\resizebox{80mm}{!}{\includegraphics{dcfig6b.eps}} \\
\resizebox{80mm}{!}{\includegraphics{dcfig6c.eps}} \\
\resizebox{80mm}{!}{\includegraphics{dcfig6d.eps}} \\
\end{tabular}
\caption{The distribution of the FPTs for the modified Mechanism II for
(a) R=0.1, (b) R=1, (c) R=10, and (d) R=100. To each curve, we fit a Gamma
Distribution and a Difference of Exponentials (see Appendix {\rm I}). }
\label{Rplots}
\end{center}
\end{figure}
A higher value of $R$ indicates a higher probability of hopping across a
bridge. This gives rise to a higher probability of reaching the ends in
roughly the same amount of time. Therefore, if the protein has some bio-chemical
means of hopping across such bridges preferentially, then it can bind to the
specific binding site in a more efficient manner.
However, as we see from Fig.{\ref{Rplots}}, for an extremely high value of
$R$, the walker tends to get trapped in the bridge and hence takes a longer
time to reach the ends. For example, when $R=100$, $p_{bridge}\approx 0.99$.
For this value of $p_{bridge}$, the moment the walker encounters a bridge, it
would tend to get trapped in a bridge between two sites (for example, the
bridge connecting ``A" and ``D" in Fig. {\ref{SAWsq}}).
\subsection{Quantitative estimates of efficiencies of search-times}
We are now in a position to compare the values of the most probable time,
$\tau_{mp}$, and the MFPT ($\tau_{avg}$) for the distribution of the FPTs of
all the mechanisms that we have investigated till now. Let $\tau_{1D}$ be the
most probable/ MFPT for successful search using
Mechanism I while $\tau$ be the corresponding most probable/ MFPT
for the specific mechanism under consideration.
We define
\begin{equation}
{\eta}=|1-\frac{\tau}{\tau_{1D}}|,
\label{eta}
\end{equation}
which we use as a quantitative
measure of the efficiency of the process, relative to purely one-dimensional
diffusion.
The data are summarised in the table below.
\begin{table}[ht]
{\begin{tabular}{@{}ccccc@{}} \toprule
{\bf Mechanism} & {\bf Most Probable} & {\bf $\eta_{mp}$}&{\bf Mean Search}&{\bf $\eta_{avg}$}\\
& {\bf Search Time ($\tau_{mp}$)} & &{\bf Time ($\tau_{avg}$)} &\\ \colrule
Mechanism I&841&0&1931.6&0\\
Mechanism II&494&0.41&1419.8&0.27\\
Mechanism III&153&0.82&1346.5&0.30\\
MM II (R=0.1)&115&0.86&1243.9&0.36\\
MM II (R=1)&117&0.86&553.7&0.71\\
MM II (R=10)&112&0.87&598.3&0.69\\
MM II (R=100)&457&0.46&1243.9&0.36\\ \botrule
\label{tablemp}
\end{tabular}}
\end{table}
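As a check on Eq. (\ref{eta}), the $\eta_{mp}$ entries of the table follow directly from the most probable search times, with Mechanism I as the one-dimensional reference. A small illustrative Python snippet (our own):

```python
def efficiency(tau, tau_1d):
    """Relative efficiency with respect to pure 1D sliding: eta = |1 - tau/tau_1D|."""
    return abs(1.0 - tau / tau_1d)

tau_1d = 841.0  # most probable search time for Mechanism I (the 1D reference)
for name, tau in (("Mechanism II", 494.0), ("Mechanism III", 153.0)):
    print(name, round(efficiency(tau, tau_1d), 2))
# → Mechanism II 0.41
# → Mechanism III 0.82
```

These values reproduce the $\eta_{mp}$ column of the table for Mechanisms II and III.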
We conclude that among the possible mechanisms
considered in this paper, the modified Mechanism II with
$R=10$ turns out to be the most efficient search process, as far as $\eta_{mp}$
is concerned. However, in terms of $\eta_{avg}$, $R=1$ would be the most
efficient search mechanism. Therefore, we conjecture that if both $\eta_{mp}$
and $\eta_{avg}$ play equally important roles in determining the efficiency of
a given mechanism, then the most efficient search mechanism would correspond to
the range $1\leq R\leq 10$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\resizebox{90mm}{!}{\includegraphics{dcfig7a.eps}} \\
\resizebox{90mm}{!}{\includegraphics{dcfig7b.eps}} \\
\end{tabular}
\caption{(a.) The $\tau_{mp}$ as a function of $R$. (b.) The $\tau_{avg}$ as a function of $R$. (See Appendix {\rm I} for fit parameters) }
\label{taumpavg}
\end{center}
\end{figure}
\begin{table}[ht]
{\begin{tabular}{@{}ccc@{}} \toprule
{\bf Mechanism}&$a\approx$&$b\approx$\\ \colrule
M I&1.483&0.001\\
($k\approx 266.725$)&&\\
M II&1.836&0.001\\
MM II (R=0.1) \hphantom{00} &\hphantom{00} 1.146 \hphantom{00} &\hphantom{00} 0.001\\
MM II (R=1) \hphantom{00} &\hphantom{00} 1.442 \hphantom{00} &\hphantom{00} 0.003\\
MM II (R=10) \hphantom{00} &\hphantom{00} 1.346 \hphantom{00} & \hphantom{00} 0.003\\
MM II (R=100) \hphantom{00} &\hphantom{00} 1.461 \hphantom{00} & \hphantom{00} 0.001\\ \botrule
\end{tabular} \label{tablab}}
\end{table}
\begin{table}[ht]
{\begin{tabular}{@{}ccccc@{}} \toprule
{\bf Mechanism}&$c\approx$&$d\approx$&$f\approx$&$h\approx$\\ \colrule
M I&\hphantom{00}-0.001&\hphantom{00}0.003&\hphantom{00}-0.001&\hphantom{00}0.001\\
M II&\hphantom{00}0.001&\hphantom{00}0.001&\hphantom{00}0.001&\hphantom{00}0.004\\
MM II (R=0.1)&\hphantom{00}0.100&\hphantom{00}0.001&\hphantom{00}0.099&\hphantom{00}0.001\\
MM II (R=1)&\hphantom{00}-0.003&\hphantom{00}0.023&\hphantom{00}-0.003&\hphantom{00}0.003\\
MM II (R=10)&\hphantom{00}-0.002&\hphantom{00}0.026&\hphantom{00}-0.002&\hphantom{00}0.002\\
MM II (R=100)&\hphantom{00}0.996&\hphantom{00}0.002&\hphantom{00}0.996&\hphantom{00}0.002\\ \botrule
\end{tabular} \label{tablcdfh}}
\end{table}
We also observe another interesting feature in MM II. We note that $P_{max}=
P(\tau_{mp})$ is largest for $R=1$ and then for $R=10$. This implies that not only are
they the most efficient of all the mechanisms considered, but that they also have the
highest ``success rate'' of reaching the target sites. Therefore, we see that MM II with forced
hopping across the bridges leads to the most efficient and successful search process.
Whether all proteins with multiple DNA-binding sites actually make use of this mechanism to
reach their target sites is something that needs to be tested experimentally under controlled
conditions in the near future.
In Fig. {\ref{taumpavg}}, we plot the most probable search time and the
mean first passage time as functions of $R$. We do not show the point $R=0$
(in the limit $R\to 0$, we recover $\tau_{mp}=841$ and $\tau_{avg}=1931.6$)
on the log-scale. The only quantitative difference between the two is that the
turning point in the curve for the MFPT lies in the range $0.1\leq R\leq 1$,
whereas, the turning point in the curve for the $\tau_{mp}$ lies in the range
$0\leq R\leq 0.1$.
\subsection{Multiple Walkers and Immovable Barriers}
We have also investigated the search of the same binding sites simultaneously by $N$ ($>1$)
interacting particles which are initially distributed randomly along the SAW. The positions
of the particles are updated in parallel subject to the constraint that none of the lattice
sites is occupied by more than one walker at a time. As is suggested by our intuition, the
$<R_t^2>$ decreases with an increasing number of random walkers. In case of pure sliding,
the interaction between the particles, effectively, constrains each one to a shorter region
on the SAW. Consequently, $<R_t^2>$ decreases with increasing $N$. However, in the presence
of {\it Bridges}, there arise some situations in which a particle can {\it bypass} the
other particles on its way by hopping across the bridges and, thereby, increasing $<R_t^2>$. Effects
of mutual hindrance are further weakened by detachment/re-attachment processes.
We also considered the situation when there are immovable barriers placed
randomly along the SAW. This could mimic the effect of various obstacles that
are present {\it in vivo} in the crowded environment of the cell.
The mechanism III is the most efficient search
process in the presence of these barriers.
\section{Summary and Conclusion}
In this paper, we have suggested a biologically motivated extension of random walk on
self-avoiding walks. The results of this investigation provide insight into the
relative importance of different mechanisms of search for specific binding on DNA by
DNA-binding proteins. We studied the effect of preferential bias to hop across the
bridges in the intersegmental transfer and found that, for $1\leq R\leq 10$,
the modified Mechanism II turns out to be the most efficient. Whether this is the mechanism that
proteins actually use in order to find the target sites can be verified only by doing
controlled experiments.
We also suggest experiments that can be performed to test
the efficiency of the various search processes. The value of $\tau_{1D}$ can
be taken as an input from standard known results. The value of $\tau_{mp}$
and $\tau_{avg}$ can be measured using {\it fluorescence spectroscopy}.
The experimentally obtained $\eta$ can then be compared with the above-mentioned results
obtained from simulations, to throw light on the possible mechanism that the
protein uses to search for its target site.
\section*{Acknowledgments}
The idea of this work originated while visiting
the Max-Planck Institute for the Physics of Complex Systems, Dresden during
summer '07.
I thank the visitors programme of MPI-PKS for the hospitality in Dresden.
I thank S.W. Grill, E.A. Galburt, A.B. Kolomeisky, B.K. Chakrabarti, Abhishek Dhar,
Abhishek Chaudhuri, Aditya Sood and Ashok Garai for fruitful
discussions and comments. I also thank Debashish Chowdhury, R. Metzler and
J. Klafter for drawing my attention to some relevant earlier works.
\section{Introduction}
The rise of the top is one of the most impressive phenomena in rigid body rotations.
The rise is explained by friction \cite{Gray}, and this has been known for more than a century.
There are some books on classical mechanics and rigid body rotations considering the rise of the top \cite{BargerOlson, Perry, Jellett}, however, they are not easily understandable by students.
On the other hand, the rise of the top is a useful example of how to include torque in rigid body rotations and can be very helpful in education.
In previous works, the rise of the top is considered in the body reference frame, whose origin is at the center of mass.
In this type of consideration, one needs to consider torque originating from the reaction force of the surface \cite{Jellett, Moffatt, Parkyn, Yogi} that makes the situation more complicated.
We will consider the problem in the body reference frame whose origin is at the radial center of the spherical peg.
This gives simpler resultant equations and makes the effect of the friction easily understandable.
In previous work, we compared Jellett's model and the model applied in this work \cite{Tanriverdi_rise}.
In an earlier version (version 3) of that work, we explained the rise of the top, though this explanation is not available in the latest version.
Here, we will modify that treatment and explain the rise of the top in detail, which can be helpful to students.
We will consider the problem by using Euler equations; their usage is necessary since friction, a non-conservative force, is considered.
In section \ref{two}, we will obtain equations of motion.
In section \ref{three}, we will numerically solve different situations to see how the rise occurs.
Then, we will conclude in section \ref{four}.
\section{Rise of the top}
\label{two}
Let us consider a symmetric top having a spherical peg with radius $R$, shown in figure \ref{fig:hst}.
Assume that its mass is $M$, that the distance between the center of the peg and the center of mass is $\tilde l$, and that its moments of inertia are $I_y=I_x$ and $I_z$ in the body reference frame whose origin is at the peg's radial center.
Due to rotations, there are some changes in the reaction force of the surface.
However, they are small since $\dot \theta$ is small, and we will ignore them and say $N=M g$.
The touchpoint $T$ of the top has a velocity with respect to the peg's center, and it can be found by using $\vec v=\vec w \times \vec r$
where $\vec w$ is the top's angular velocity and $\vec r$ is the vector from the peg's center to the touchpoint.
\begin{figure}[!h]
\begin{center}
\subfigure[]{\includegraphics[width=5.5cm]{symtop_rp7.png}}
\subfigure[]{\includegraphics[width=5.5cm]{symtop_rp7_peg_b.png}}
\caption{ a) Heavy symmetric top with a spherical peg, stationary reference frame ($x',y',z'$), center of peg-body reference frame ($x,y,z$), line of nodes $N$, Euler angles ($\theta, \phi, \psi$), angular velocities ($\dot \theta, \dot \phi, \dot \psi$). b) Peg of the top, center of peg $O$, touchpoint $T$, center of mass $G$, $\vec r$ and $\tilde l$.
}
\label{fig:hst}
\end{center}
\end{figure}
Since friction has to be included to obtain the rise of the top, it is better to use Euler equations which can be written for a symmetric top ($I_y=I_x$) as
\begin{eqnarray}
\tau_x&=&I_x \dot w_x+w_y w_z (I_z-I_x), \nonumber \\
\tau_y&=&I_x \dot w_y+w_x w_z (I_x-I_z), \label{eeq} \\
\tau_z&=&I_z \dot w_z, \nonumber
\end{eqnarray}
where $\vec \tau$ is torque.
These equations are in the body reference frame.
One can write angular velocities in terms of Euler angles as
\begin{eqnarray}
w_x&=&\dot \theta \cos \psi+\dot \phi \sin \theta \sin \psi, \nonumber \\
w_y&=&-\dot \theta \sin \psi+ \dot \phi \sin \theta \cos \psi, \label{ang_vel}\\
w_z&=&\dot \psi+\dot \phi \cos \theta, \nonumber
\end{eqnarray}
where $\theta$ is the angle between the stationary $z'$-axis and body $z$-axis and describes rotation around the line of nodes,
$\phi$ is the angle describing rotations around the stationary $z'$-axis,
and $\psi$ is the angle describing rotations around the body $z$-axis.
These can be seen in figure \ref{fig:hst}.
By using $\vec r=-R \hat z'$ and $\hat z'= \sin \theta \sin \psi \hat x+\sin \theta \cos \psi \hat y+ \cos \theta \hat z$, we can find the velocity of the touchpoint with respect to the peg's center in terms of Euler angles as
\begin{eqnarray}
\vec v=R &[&(\dot \theta \cos \theta \sin \psi+\dot \psi \sin \theta \cos \psi)\hat x+(\dot \theta \cos \theta \cos \psi-\dot \psi \sin \theta \sin \psi)\hat y \nonumber \\
& &+(-\dot \theta \sin \theta)\hat z].
\label{slipping_velocity}
\end{eqnarray}
The magnitude of this velocity can be obtained as $|\vec v|=R \sqrt{\dot \theta^2 +\dot \psi^2 \sin^2 \theta}$.
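As a sanity check, the cross product $\vec v=\vec w \times \vec r$ and the closed form above can be compared numerically. The following minimal sketch (our own; numpy assumed, function names hypothetical) also confirms that $\dot \phi$ drops out of the slipping velocity:

```python
import numpy as np

# Spot-check of v = w x r in the body frame: w from Eq. (ang_vel),
# r = -R z'hat with z'hat expressed in body-frame components.
def slip_velocity(R, theta, psi, dtheta, dphi, dpsi):
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(psi), np.cos(psi)
    w = np.array([dtheta*cp + dphi*st*sp,
                  -dtheta*sp + dphi*st*cp,
                  dpsi + dphi*ct])
    r = -R*np.array([st*sp, st*cp, ct])
    return np.cross(w, r)

def slip_velocity_closed_form(R, theta, psi, dtheta, dpsi):
    # Eq. (slipping_velocity); note it contains no dphi
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(psi), np.cos(psi)
    return R*np.array([dtheta*ct*sp + dpsi*st*cp,
                       dtheta*ct*cp - dpsi*st*sp,
                       -dtheta*st])
```

The check compares the two expressions for random angles and angular velocities and verifies the magnitude $|\vec v|=R \sqrt{\dot \theta^2 +\dot \psi^2 \sin^2 \theta}$.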
In daily life, the top can roll on the surface.
However, considering rolling makes the situation complicated, and the aim of this work is only to show how the top rises by friction.
Therefore, we will ignore the rolling motion and consider that the top slips on the surface.
Hence, friction force can be written as $\vec f=- k N \vec v / |\vec v|$, where $k$ is a positive friction constant.
Now, we can obtain torque due to friction force by using $\vec \tau=\vec r \times \vec f$ as
\begin{eqnarray}
\vec \tau= &\frac{k M g R^2} {|\vec v|}& [(-\dot \theta \cos \psi+\dot \psi \sin \theta \cos \theta \sin \psi)\hat x\nonumber \\
& & \,+(\dot \theta \sin \psi+\dot \psi \sin \theta \cos \theta \cos \psi)\hat y+(-\dot \psi \sin^2 \theta)\hat z],
\label{velocityT}
\end{eqnarray}
and the gravitational torque can be obtained as
\begin{equation}
\vec \tau_g=-Mg \tilde l \sin \theta (-\cos \psi \hat x+\sin \psi \hat y).
\end{equation}
As seen from equations \eqref{slipping_velocity} and \eqref{velocityT}, the velocity (and consequently the friction and its torque) does not depend on $\dot \phi$, which is the natural result of considering the touchpoint as a point.
On the other hand, they depend on the other two angular velocities $\dot \theta$ and $\dot \psi$, and in general $|\dot \psi| \gg |\dot \theta|$.
Considering this, ignoring the dissipation due to the motion related to $\theta$ is a plausible assumption.
Then, by ignoring it, we can write components of torque as
\begin{eqnarray}
\tau_x&=& Mg \tilde l \sin \theta \cos \psi + \frac{k M g R^2 (\dot \psi \sin \theta \cos \theta \sin \psi ) }{|\vec v|}, \nonumber \\
\tau_y&=& -Mg \tilde l \sin \theta \sin \psi + \frac{k M g R^2 (\dot \psi \sin \theta \cos \theta \cos \psi ) }{|\vec v|}, \\
\tau_z&=&- \frac{k M g R^2 \dot \psi \sin^2 \theta }{|\vec v|}. \nonumber
\end{eqnarray}
Now, we can include these components of the torque in the Euler equations and obtain, with some algebra, the following equations:
\begin{eqnarray}
\ddot \theta&=& -\frac{I_z \dot \phi \sin \theta}{I_x}(\dot \psi+\dot \phi \cos \theta)+\dot \phi^2 \sin \theta \cos \theta+ \frac{Mg \tilde l }{I_x} \sin \theta, \nonumber \\
\ddot \phi&=& \frac{I_z \dot \theta}{I_x \sin \theta}( \dot \psi +\dot \phi \cos \theta)- \frac{ 2 \dot \theta \dot \phi \cos \theta}{ \sin \theta} +\frac{k M g R^2 \dot \psi \cos \theta}{I_x |\vec v|}, \label{diffeqns_odp} \\
\ddot \psi&=& - \frac{I_z \dot \theta \cos \theta}{I_x \sin \theta} ( \dot \psi + \dot \phi \cos \theta )+\frac{2 \dot \theta \dot \phi \cos^2 \theta}{\sin \theta} +\dot \theta \dot \phi \sin \theta \nonumber \\
& & -\frac{k M g R^2 \dot \psi }{|\vec v|}\left(\frac{\cos^2 \theta}{I_x}+\frac{\sin^2 \theta}{I_z}\right). \nonumber
\end{eqnarray}
As seen, the considered dissipation appears in the angular accelerations $\ddot \phi$ and $\ddot \psi$.
These three equations of motion are coupled, and any change in one of them affects the others.
Hence, it can be very difficult to comment on them; however, one can still consider some aspects.
Here, we will consider $\ddot \theta$ a bit further since we are studying the rise of the top.
$\ddot \theta$ is a function of $\theta$, $\dot \psi$ and $\dot \phi$, and it depends quadratically on $\dot \phi$.
If one considers fixed $\theta$ and $\dot \psi$ values, then there are, in general, two roots giving regular precession.
Due to its quadratic nature, $\ddot \theta$ keeps a fixed sign when the $\dot \phi$ value is between these two roots.
For ordinary tops, $\theta$ is smaller than $\pi/2$.
Then, when $I_x>I_z$, $\ddot \theta$ is negative between these two roots for ordinary tops, and it is positive between them when $I_x<I_z$.
When $\ddot \theta$ is negative, the top rises.
The dissipative term in $\ddot \phi$, $k M g R^2 \dot \psi \cos \theta/(I_x |\vec v|)$, has a positive sign, and it increases the magnitude of $\dot \phi$.
For the motion of an ordinary top, if $\dot \phi$ is smaller than the smaller root giving regular precession, then an increase of $\dot \phi$ results in the rise of the top provided that $\theta<\pi/2$ and $I_z<I_x$ \cite{Tanriverdi_wdwuc}.
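The two regular-precession roots mentioned above follow from setting $\ddot \theta=0$ in the first of equations \eqref{diffeqns_odp} and dividing by $\sin\theta$, which gives the quadratic $\cos\theta\,(1-I_z/I_x)\,\dot\phi^2-(I_z/I_x)\dot\psi\,\dot\phi+Mg\tilde l/I_x=0$. A small numerical sketch (our own; the helper name is hypothetical) with the parameter values used in the next section:

```python
import numpy as np

# Regular-precession roots of the quadratic in dphi obtained from
# setting ddtheta = 0 in Eqs. (diffeqns_odp).
def precession_roots(Ix, Iz, M, l, theta, dpsi, g=9.81):
    a = np.cos(theta)*(1.0 - Iz/Ix)   # quadratic coefficient
    b = -(Iz/Ix)*dpsi
    c = M*g*l/Ix
    disc = b*b - 4.0*a*c
    if disc < 0.0:                    # no regular precession possible
        return None
    s = np.sqrt(disc)
    return np.sort([(-b - s)/(2.0*a), (-b + s)/(2.0*a)])
```

For the parameters used in section \ref{three}, the slow root is about $1.75 \,rad\,s^{-1}$, which is why the chosen $\dot \phi_0$ is said to be very close to regular precession.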
\section{Numerical solution}
\label{three}
We will numerically solve equations \eqref{diffeqns_odp} to see the rise of the top.
We will consider a top with $I_x=8.52 \times 10^{-5} \,kg\,m^2$, $I_z=7.25 \times 10^{-5} \,kg\,m^2$, $M=110 \,g$, $\tilde l=20 \,mm$ and $R=7 \,mm$.
Initial values will be taken as $\theta_0=0.5 \,rad$, $\dot \theta_0=0$, $\dot \phi_0=1.75 \,rad\,s^{-1}$ and $\dot \psi_0=170 \,rad\,s^{-1}$ ($\phi_0=0$ and $\psi_0=0$).
This configuration is very close to regular precession, and it is considered to reduce nutation, which is not necessary to obtain the top's rise.
Some of the terms in equations \eqref{diffeqns_odp} go to infinity as $\theta$ goes to zero, and to avoid these infinities, the numerical solution is cut when the top comes to the nearly upright position, i.e. $\theta=0.02 \,rad$.
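The system \eqref{diffeqns_odp} can be integrated with an off-the-shelf ODE solver. The following minimal sketch (our own; scipy assumed) uses the parameters above and an assumed friction constant $k=0.01$, since the text does not quote the value used for its figures, and integrates only a short window rather than the full rise:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the text; k is an assumed friction constant.
Ix, Iz = 8.52e-5, 7.25e-5          # kg m^2
M, l, R, g = 0.110, 0.020, 0.007, 9.81

def rhs(t, y, k):
    # y = [theta, dtheta, dphi, dpsi]; Eqs. (diffeqns_odp)
    th, dth, dph, dps = y
    st, ct = np.sin(th), np.cos(th)
    wz = dps + dph*ct
    v = R*np.sqrt(dth**2 + (dps*st)**2)          # slip speed
    ddth = -(Iz*dph*st/Ix)*wz + dph**2*st*ct + (M*g*l/Ix)*st
    ddph = (Iz*dth/(Ix*st))*wz - 2.0*dth*dph*ct/st + k*M*g*R**2*dps*ct/(Ix*v)
    ddps = (-(Iz*dth*ct/(Ix*st))*wz + 2.0*dth*dph*ct**2/st + dth*dph*st
            - (k*M*g*R**2*dps/v)*(ct**2/Ix + st**2/Iz))
    return [dth, ddth, ddph, ddps]

def energy(y):
    # rotational plus potential energy about the (fixed) peg center
    th, dth, dph, dps = y
    st, ct = np.sin(th), np.cos(th)
    return (0.5*Ix*(dth**2 + (dph*st)**2)
            + 0.5*Iz*(dps + dph*ct)**2 + M*g*l*ct)

def simulate(k, t_end):
    y0 = [0.5, 0.0, 1.75, 170.0]                 # initial values of the text
    return solve_ivp(rhs, (0.0, t_end), y0, args=(k,),
                     rtol=1e-9, atol=1e-12)
```

Reproducing the full rise and the cut at $\theta=0.02\,rad$ would additionally require an event function; the sketch only checks that the frictionless solution stays close to regular precession and that friction dissipates energy.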
Results of the numerical solutions for equations \eqref{diffeqns_odp} for $\theta$ can be seen in figure \ref{fig:thetaphipsi_d}.
One can see from figure \ref{fig:thetaphipsi_d}(a) that the top rises to the nearly upright position in around $25$ seconds.
There are some small nutations, figure \ref{fig:thetaphipsi_d}(b), which do not affect the general motion.
It can be seen that the average of $\dot \theta$ is negative, as required for the rise.
From figure \ref{fig:thetaphipsi_d}(c), it can be seen that the average of $\dot \phi$ slightly increases, which is the main reason for the rise.
$\dot \phi$ takes negative values at the end, which means that the top makes the looping motion at the later stages of the motion \cite{Goldstein}.
The average value of $\dot \psi$ decreases as a result of dissipation, figure \ref{fig:thetaphipsi_d}(d).
In real situations, after the rise of the top, dissipation slows down the top, and then it falls.
In the numerical solution, there is some increase in the fluctuations of angular velocities at the end; such fluctuations are not seen in $\theta$, see figure \ref{fig:thetaphipsi_d}(a).
In fact, the observed fluctuations are the results of very small fluctuations in small $\theta$ values.
In real situations, such fluctuations, in general, are damped by air dissipation which we did not consider in this work.
\begin{figure}[h!]
\begin{center}
\subfigure[$\theta$]{
\includegraphics[width=4.2cm]{sym_theta_f.pdf}
}
\subfigure[$\dot \theta$]{
\includegraphics[width=4.2cm]{sym_dt_f.pdf}
}
\subfigure[$\dot \phi$]{
\includegraphics[width=4.2cm]{sym_dph_f.pdf}
}
\subfigure[$\dot \psi$]{
\includegraphics[width=4.2cm]{sym_dps_f.pdf}
}
\caption{Results of the numerical solution for full equations \eqref{diffeqns_odp} for $\theta$ (a), $\dot \theta$ (b), $\dot \phi$ (c) and $\dot \psi$ (d). Continuous (blue) curves show results of numerical solution, which have a bulk structure in some places due to nutation, dashed (black) curves show nutation average. Initial values: $\theta_0=0.5\,rad$, $\dot \theta_0=0$, $\dot \phi_0=1.75 \,rad\,s^{-1}$ and $\dot \psi_0=170 \,rad\,s^{-1}$.
}
\label{fig:thetaphipsi_d}
\end{center}
\end{figure}
One can see the three-dimensional plot of the change of symmetry axis's position, shapes for the locus, and its projection in figure \ref{fig:tt_prj_f}.
It can be seen that the top rises slowly with a spiral-like structure, which is different from the spiralling motion of \cite{Routh, Tanriverdi_abequal}.
We previously mentioned that $\dot \phi$ has negative values at the end of the solution, implying that the top makes the looping motion.
Such a motion is observed; however, it is not resolved in the plot due to the very small structure.
\begin{figure}[h!]
\begin{center}
\subfigure[]{\includegraphics[width=4.2cm]{gr_tt_st7_f.pdf}}
\subfigure[]{\includegraphics[width=4.2cm]{gr_ttprojection_f.pdf}}
\caption{Shapes for the locus (a) and its projection onto the $xy$-plane (b) for the rise of the top.
Initial values are given in figure \ref{fig:thetaphipsi_d}, and the solution is obtained by considering full equations \eqref{diffeqns_odp}.
Animated version is available at \href{https://youtu.be/gcAnamQHmBM}{https://youtu.be/gcAnamQHmBM}.
}
\label{fig:tt_prj_f}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[]{\includegraphics[width=4.2cm]{sym_theta_two_f_s.pdf}}
\subfigure[]{\includegraphics[width=4.2cm]{sym_theta_two_f_t.pdf}}
\caption{ Results of the numerical solution for inclination angle $\theta$ when different dissipative effects are considered.
Continuous (blue) lines show results of the numerical solution for: (a) equations \eqref{diffeqns_odp} without the dissipative term in $\ddot \psi$, (b) equations \eqref{diffeqns_odp} without the dissipative term in $\ddot \phi$.
Dashed (black) lines show nutation average results for the full equations \eqref{diffeqns_odp}.
}
\label{fig:thetaphipsi_e}
\end{center}
\end{figure}
Now, we will consider and numerically solve two more cases to see clearly the reason for the rise: equations \eqref{diffeqns_odp} without the dissipative term in $\ddot \psi$, and equations \eqref{diffeqns_odp} without the dissipative term in $\ddot \phi$.
We will take the parameters of the top and the initial values to be the same as in the previously solved example.
Results of the numerical solution for $\theta$ can be seen in figure \ref{fig:thetaphipsi_e}(a) when only the dissipative term in $\ddot \phi$ is included, and the rise of the top is observed.
This shows that the rise is related to the dissipative term in $\ddot \phi$.
We should note that the rise to the nearly upright position takes somewhat longer in this case.
This occurs because the dissipative term in $\ddot \psi$ is not included: $\dot \psi$ does not decrease, and $\dot \phi$ requires bigger values for the rise.
In the other case, we will set the dissipative term in $\ddot \phi$ to zero and keep the dissipative term in $\ddot \psi$.
Results of the numerical solution for $\theta$ can be seen in figure \ref{fig:thetaphipsi_e}(b).
It can be seen from that figure that the top does not rise but falls, which clearly shows that the dissipative term in $\ddot \phi$ is the cause of the rise, and that the dissipative term in $\ddot \psi$ does not directly contribute to the rise of the top.
Results of the numerical solution show that $\dot \psi$ decreases quickly, and the reason for this quick decrease is the absence of the dissipative term in $\ddot \phi$.
Around $t \approx 24 \,s$, there is a fast fall of the top because $|a|=I_z |\dot \psi+\dot \phi \cos \theta|/I_x$ takes smaller values than $\sqrt{4 Mg \tilde l/I_x}$, which corresponds to the ``weak top'' condition \cite{KleinSommerfeld}.
\section{Conclusion}
\label{four}
The rise of the top with a spherical peg is studied.
We have considered the problem in the body reference frame, whose origin is at the peg's radial center.
Results show that rise is provided by the term $\frac{k N R^2 \dot \psi \cos \theta}{I_x |\vec v|}$ in $\ddot \phi$ which is the result of the friction at the touchpoint.
We have also seen from the results that if the dissipation in $\ddot \psi$ is removed, the rise time becomes longer.
Then, we can say that in real situations, due to the presence of air dissipation, $\dot \psi$ decreases and the top rises faster.
In previous works, this problem was studied in the body reference frame whose origin is at the center of mass.
If the origin is taken at the center of mass, the rise term also depends on $\tilde l$, which is different from the studied model.
The rise of the top deserves a more detailed explanation which holds for both models.
We have seen that the rise of the top is directly related to the term in $\ddot \phi$ originating from the slipping friction at the touchpoint due to the rotation related to the spin angular velocity, as was known previously.
This term causes an increase in the magnitude of $\dot \phi$, and an increase of $\dot \phi$ can result in negative $\ddot \theta$.
This is stated by Perry as ``Hurry on the precession, and the body rises in opposition to gravity.''
We should note that this happens if the precession angular velocity is smaller than the smaller root for the regular precession for tops satisfying $\theta<\pi/2$ and $I_x>I_z$.
Then, as a summary, one can say that \textit{the slipping friction at the touchpoint due to rotation related to the spin angular velocity generates a term in the precession angular acceleration, which increases the precession angular velocity, and this increase results in the rise of the top by making the angular acceleration for inclination angle negative.}
We should note that for the rise of the top, the increase of $\dot \phi$ is not a necessary condition.
If dissipation reduces the magnitude of $\dot \psi$ more quickly, then one can observe the rise of the top without any increase of $\dot \phi$.
This can be understood by studying $\ddot \theta$ considering different situations.
\section{Introduction}
In physics and engineering, models are used as mathematical images of the world we experience around ourselves. Therefore, the actual task defines which elements we include in the model and which phenomena we want to understand, predict, and characterise quantitatively. Throughout this paper, we focus on heat conduction as the primary phenomenon.
Fourier's well-known model, i.e., the heat flux ($\mathbf q$) being proportional to the temperature gradient ($\nabla T$),
\begin{align}
\mathbf q = - \lambda \nabla T \label{fourier}
\end{align}
with $\lambda$ being the thermal conductivity, leads to the simplest heat conduction equation that satisfies the II. law of thermodynamics,
\begin{align}
\partial_t T = \alpha \Delta T, \label{fouuu}
\end{align}
where $\alpha=\lambda/(\rho c)$ is the thermal diffusivity with mass density $\rho$ and specific heat $c$, both assumed constant here; $\partial_t$ denotes the partial derivative with respect to time $t$, and $\Delta$ is the Laplacian. Although it is applicable to most engineering tasks, in recent decades, several situations have been discovered for which Fourier's law loses its validity. Such cases can be a low-temperature environment \cite{Tisza38, Pesh44, McNEta70a}, nanostructures \cite{Zhang07b, LebEtal11, JouCimm16}, heterogeneous materials \cite{Sobolev94, Botetal16, Sobolev16, FehEtal21, FehKov21} and even low-pressure states for fluids \cite{MulRug98, Struc05, StrTah11, RugSug15}. These problems can be modelled with a non-Fourier equation, in which Eq.~\eqref{fourier} is replaced with a (partial) differential equation, and, correspondingly, \eqref{fouuu} gets generalized to a partial differential equation that is higher-order in time. The level of extension depends on the particular situation and the chosen approach, hence, several models exist in the literature.
It seems inevitable to investigate and understand the mathematical properties of non-Fourier models, as one of these may be a new standard model in the future, substituting the Fourier heat equation. In the present study, we want to investigate two of them, the Maxwell-Cattaneo-Vernotte (MCV) \cite{Cattaneo58, Vernotte58} and the Guyer-Krumhansl (GK) \cite{GuyKru66a1} equations, as they are the simplest, thermodynamically compatible extensions of Fourier's law. Recently, both numerical and analytical solution methods have been developed for these models with special attention to the boundary conditions \cite{Kov18gk} in order to avoid unphysical results such as negative absolute temperature \cite{Zhukov16, Zhu16b}. Now, we turn our attention to the initial conditions. Typically, the models are solved with steady initial states and homogeneous field variables. When this is not the case, the situation may be surprisingly more difficult to overcome, and one must take the first step with care to avoid misleading assumptions.
Consequently, here we consider a situation in which the initial temperature distribution is nonhomogeneous, thus having a non-equilibrium initial state. For the Fourier heat equation \eqref{fouuu}, that problem is almost trivial, as it only requires knowledge of the temperature profile at the initial time instant. However, for a non-Fourier equation, the initial time derivatives are also required, and they are restricted by a more complicated constitutive equation. Now, such a constitutive relationship is a differential equation, needing further considerations about the initial conditions. This paper aims to answer this question through analytical solutions of the aforementioned heat conduction
models, which reflect the essential physical aspects.
In what follows, we first present the concept of the Fourier heat equation, forming the basis of the analytical solution technique also used for non-Fourier models. After that,
we can move on to the more complex aspects of non-Fourier models and reveal how the initial temperature state influences the initial time derivatives. We highlight the critical steps by analytically solving the MCV and GK equations.
\section{Demonstration using Fourier's law}
\noindent In one space dimension, Fourier's law \eqref{fourier} reads,
\begin{align}
q=-\lambda \partial_x T, \label{fourier2}
\end{align}
which is a constitutive equation and becomes mathematically and physically complete together with the balance equation of internal energy ($e=cT$),
\begin{align}
\rho c \partial_t T + \partial_x q = 0, \label{ebal}
\end{align}
for rigid conductors without heat sources. The simplicity of Eq.~\eqref{fourier2} suggests to use the temperature $T$ as the only field variable, i.e., substituting \eqref{fourier2} into \eqref{ebal} results in
\begin{align}
\partial_t T = \alpha \partial_{xx} T. \label{FT}
\end{align}
Despite the fact that Eq.~\eqref{FT} would be adequate for certain investigations, we do not intend to eliminate any of the variables as the initial pair of equations \eqref{fourier2}--\eqref{ebal} is more suitable for our
present purposes. Let us consider the following initial and boundary conditions,
\begin{align}
q(x=0,t)=q(x=L,t)=0 \frac{\mathrm{W}}{\mathrm{m}^2}; \quad T(x,t=0)=T_0(x)=T_\textrm{b} + T_f \exp{(-x/z)}, \label{icbc}
\end{align}
which represent a rod of length $L$ with adiabatic ends. That set of initial and boundary conditions \eqref{icbc} prescribes a non-equilibrium situation as an initial state. Regarding the initial temperature distribution, we choose an exponential decay, illustrated in Figure \ref{fig1}.
\begin{figure}[H]
\centering
\includegraphics[width=12cm,height=6cm]{fig_ic.jpg}
\caption{The schematic presentation of the initial and boundary conditions, with $z=0.025$ m, $T_\textrm{b}=15$ $^\circ$C, $T_f=5$ $^\circ$C, $L=0.1$ m.}
\label{fig1}
\end{figure}
The practical importance of such an initial temperature distribution is that radiation (incident from the left) absorbed by the body induces such a temperature profile. For instance, this is realistic in a heat pulse experiment in which some part of the flash energy is absorbed in a semi-transparent body. Accordingly, $z$ represents a penetration depth, while $T_f$ is the temperature rise at the front side; thus $T_\textrm{b} + T_f$ together form the front side temperature, as $T_\textrm{b}$ is the temperature of the body before the flash occurs (e.g., the ambient temperature).
We expect the nonhomogeneous temperature profile to equilibrate in time, regardless of the applied heat conduction model.
As learned from numerical solutions for Fourier and beyond-Fourier
heat conduction problems \cite{RietEtal18}, it is advantageous to compute both $T$ and $q$.
Here, in order to remain consistent with the non-Fourier models and to highlight the critical steps, we determine both of them analytically using Galerkin's method. Let us suppose that
\begin{align}
q(x,t)=\sum_{j=0}^N a_j(t) \phi_j(x), \quad T(x,t)=T_\textrm{b}+\sum_{j=0}^N b_j(t) \varphi_j(x) \label{g1}
\end{align}
where $\phi_j(x)$ and $\varphi_j(x)$ should be compatible with each other due to the spatial derivatives that appear in Eqs.~\eqref{fourier2} and \eqref{ebal}. Suitable functions are found earlier in \cite{FehKov21, Kov18gk}, that is, $\phi_j(x)=\sin(j \pi x/L)$ and $\varphi_j(x)=\cos(j \pi x/L)$. Here, the modes $\phi_j$ fulfil the boundary conditions and form a
complete orthogonal set of functions among the square integrable
functions, and the functions $\varphi_j$ also provide a complete
orthogonal set \cite{Fulop07}. With the help of them, we can transform the partial differential equation to a set of ordinary differential equations. After substituting Eq.~\eqref{g1} into \eqref{fourier2} and \eqref{ebal}, we obtain
\begin{align}
a_j =&\lambda \frac{j \pi}{L} b_j, \label{g2f}\\
\rho c \frac{\textrm{d} b_j}{\textrm{d} t}=& -\frac{j \pi}{L} a_j.\label{g2e}
\end{align}
Furthermore, $a_0=0$, and the cosine series expansion of $T_0(x)$ serves as the initial condition:
\begin{align}
b_0 =& \frac{1}{L} \int\displaylimits_0^L \big(T_0(x)-T_\textrm{b}\big) \textrm{d}x = \frac{T_f z}{L} \Big ( 1 - \exp(-L/z) \Big), \\
b_j (t=0) = b_{j0}=& \frac{2}{L} \int\displaylimits_0^L \big(T_0(x)-T_\textrm{b}\big) \cos\left(\frac{j \pi x}{L} \right) \textrm{d}x = \frac{2 T_f z L e^{-L/z}}{L^2 + j^2 \pi^2 z^2} \left ( e^{L/z} - (-1)^j \right ). \label{f1}
\end{align}
Finally, we found the solution to be
\begin{align}
T(x,t)=T_\textrm{b}+{b_0} + \sum_{j=1}^N b_{j0} e^{-\alpha_j t} \cos\left(\frac{j \pi x}{L} \right), \quad \textrm{with} \quad \alpha_j = \frac{\lambda}{\rho c} \frac{j^2 \pi^2 }{L^2}, \label{thist}
\end{align}
for which Figure \ref{fig2} shows the convergence using the set of the following parameters: $\rho=2000$ kg/m$^3$, $c=500$ J/(kg$\cdot $K), $\lambda=5$ W/(m$\cdot $K), with the initial conditions of Figure \ref{fig1}. Figure \ref{fig3} shows the temperature for various spatial points. While the results followed from a straightforward calculation and seemed basic, it makes the fundamental aspects of non-Fourier models apparent in the next section.
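The series \eqref{thist} can also be evaluated directly. A minimal sketch (our own) checking that it reproduces $T_0(x)$ at $t=0$, conserves the spatial mean (as required by the adiabatic ends), and relaxes to the uniform value $T_\textrm{b}+b_0$:

```python
import numpy as np

# Direct evaluation of the series solution (thist); parameters follow
# Figures 1-3 of the text.
L, z, Tb, Tf = 0.1, 0.025, 15.0, 5.0
rho, c, lam = 2000.0, 500.0, 5.0
alpha = lam/(rho*c)

def b0():
    return Tf*z/L*(1.0 - np.exp(-L/z))

def bj0(j):
    return (2.0*Tf*z*L*np.exp(-L/z))/(L**2 + (j*np.pi*z)**2) \
        * (np.exp(L/z) - (-1.0)**j)

def T(x, t, N=2000):
    j = np.arange(1, N + 1)
    decay = np.exp(-alpha*(j*np.pi/L)**2*t)
    cosines = np.cos(np.outer(np.atleast_1d(x), j*np.pi/L))
    return Tb + b0() + cosines @ (bj0(j)*decay)

def trap_mean(f):
    # trapezoidal mean over a uniform grid including both endpoints
    return (0.5*f[0] + f[1:-1].sum() + 0.5*f[-1])/(f.size - 1)
```

The three checked properties mirror what Figures \ref{fig2} and \ref{fig3} display graphically.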
\begin{figure}[H]
\centering
\includegraphics[width=17cm,height=6.7cm]{fig2b.jpg}
\caption{Demonstrating the convergence of \eqref{thist}, rear side ($x=L=0.1$ m) temperature history. The left side figure magnifies the initial time interval.}
\label{fig2}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=14cm,height=7.5cm]{fig3.jpg}
\caption{Temperature history for different points ($x=\{0, 0.025, 0.05, 0.075, 0.1\}$ m), using $N=50$ terms.}
\label{fig3}
\end{figure}
\section{Initial states vs.~non-Fourier equations}
We start this section with the MCV equation, the first hyperbolic extension of the Fourier heat equation. The MCV model is, unfortunately, restricted to low-temperature heat conduction problems \cite{Naretal75}, and thus it has less practical relevance in engineering, as has been concluded by Auriault \cite{Aur16}, in agreement with our experience \cite{SudEtal21}. However, it serves as an excellent example of how careful we must be when using an unconventional model. Subsequently, this impression is strengthened further by investigating the GK equation, for which neither the initial conditions nor the boundary conditions work as in the Fourier case \cite{FehKov21}.
\subsection*{Maxwell-Cattaneo-Vernotte equation}
The MCV model extends the Fourier equation by the time derivative of the heat flux, i.e.,
\begin{align}
\tau \partial_t q + q = - \lambda \partial_x T, \label{mcv0}
\end{align}
in which a new parameter $\tau$, called relaxation time, appears. As a direct consequence of the II.~law of thermodynamics, $\tau$ and $\lambda$ are not independent of each other due to the Onsagerian relations, which property becomes crucial for nonlinearities, e.g., for temperature-dependent parameters \cite{KovRog20}.
Analogously to the Fourier model, it is possible to eliminate one of the variables ($q$ or $T$, using Eq.~\eqref{ebal}), resulting in
\begin{align}
\tau \partial_{tt} T + \partial_t T &= \alpha \partial_{xx} T, \quad \textrm{or} \label{mcvt}\\
\tau \partial_{tt} q + \partial_t q &= \alpha \partial_{xx} q, \label{mcvq}
\end{align}
depending on which variable we choose in a linear situation. While $q$ is not conventional to use, it could be helpful in certain cases \cite{FehKov21}. Nevertheless, in general, we recommend not eliminating any of the variables. As is visible, the set of initial conditions seemingly depends on our decision about which path we choose. For Eq.~\eqref{mcvq}, one should define the heat flux and its time derivative at the initial time instant; however, neither of them is directly measurable. In parallel, regarding Eq.~\eqref{mcvt}, we have the initial temperature distribution $T_0(x)$, but what can we tell about its time derivative? If one follows the papers of Moosaie \cite{Moosaie07, Moosaie08} or Tung and Fong \cite{TungFong11}, one could conclude that it is possible to safely assume the time derivative to be zero, since the thermodynamic origin of the $T$-representation is entirely hidden in their approach. As it turns out shortly, this would be a seriously misleading assumption.
Applying again the ansatz of \eqref{g1} on \eqref{mcv0}, we obtain
\begin{align}
\tau \frac{\textrm{d} a_j}{\textrm{d} t} + a_j = \lambda \frac{j \pi}{L} b_j,
\end{align}
together with \eqref{g2e}. Let us eliminate the variables again, i.e., eliminating the heat flux ($q$), we have
\begin{align}
\textrm{$T$-representation:}& \quad \quad \tau \frac{\textrm{d}^2 b_j}{\textrm{d} t^2} + \frac{\textrm{d} b_j}{\textrm{d} t} + \alpha_j b_j = 0, \quad \quad \alpha_j= \frac{\lambda}{\rho c} \frac{j^2 \pi^2 }{L^2} \\
\textrm{initial conditions:}& \quad \quad b_j(t=0)=b_{j0}, \quad \textrm{and} \quad \frac{\textrm{d} b_j}{\textrm{d} t}\bigg\rvert_{t=0} = -\frac{j \pi}{\rho c L} a_j(t=0), \nonumber
\end{align}
in which the time derivative of $b_j$ follows from the balance of internal energy \eqref{ebal} in the case of no heat sources; otherwise, the heat source would also appear here. Thus, the MCV equation can be solved if the heat flux is known at the initial time instant, directly \textit{determining} the initial time derivative of the temperature, which is not visible from \eqref{mcvt}. Now, there is an advantageous property of most non-Fourier heat equations: their steady state coincides with that of the Fourier equation. Consequently, if and only if the initial state is close to a steady state, the heat flux can be calculated easily using \eqref{fourier2}, and by having $a_j(t=0)$, the model can be solved. For non-zero $T_0(x)-T_\textrm{b}$, this means a non-zero time derivative of the temperature, too. In parallel, if $T_0(x)$ represents a state far from equilibrium, then we cannot use \eqref{fourier2} for such a purpose anymore, hence how to determine the initial time derivative of the temperature remains an open question. We want to recall that, for our application-motivated present set of initial and boundary conditions \eqref{icbc}, the initial state is not a steady state.
In the case when $q$ is chosen to be the primary field variable, we obtain
\begin{align}
\textrm{$q$-representation:}& \quad \quad \tau \frac{\textrm{d}^2 a_j}{\textrm{d} t^2} + \frac{\textrm{d} a_j}{\textrm{d} t} + \alpha_j a_j = 0, \quad \quad \alpha_j= \frac{\lambda}{\rho c} \frac{j^2 \pi^2 }{L^2} \\
\textrm{initial conditions:}& \quad \quad a_j(t=0)=a_{j0}, \quad \textrm{and} \quad \frac{\textrm{d} a_j}{\textrm{d} t}\bigg\rvert_{t=0} = \frac{1}{\tau}\Big(\lambda\frac{j \pi}{ L} b_{j0} - a_{j0} \Big ), \nonumber
\end{align}
which reveals that the initial time derivative of heat flux is zero if and only if the Fourier law is applied to determine the initial heat flux due to \eqref{g2f}. In both representations, \eqref{f1} is used to determine the solutions, plotted in Figure \ref{fig4} for demonstration.
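The mode system above can be integrated directly. A minimal sketch (our own; scipy assumed, and $\tau=1\,$s is an assumed demonstration value, not a fitted one) showing that a Fourier-consistent initial heat flux enforces a nonzero initial time derivative of the temperature modes:

```python
import numpy as np
from scipy.integrate import solve_ivp

# MCV mode system: da_j/dt from the relaxed constitutive law,
# db_j/dt from the energy balance (g2e).
L, z, Tf = 0.1, 0.025, 5.0
rho, c, lam, tau = 2000.0, 500.0, 5.0, 1.0   # tau assumed for demonstration

def bj0(j):
    return (2.0*Tf*z*L*np.exp(-L/z))/(L**2 + (j*np.pi*z)**2) \
        * (np.exp(L/z) - (-1.0)**j)

def mcv_mode(j, t_end, t_eval=None):
    kj = j*np.pi/L
    def rhs(t, y):
        a, b = y
        return [(lam*kj*b - a)/tau, -kj*a/(rho*c)]
    a0 = lam*kj*bj0(j)        # near-steady start: Fourier-consistent flux
    return solve_ivp(rhs, (0.0, t_end), [a0, bj0(j)], t_eval=t_eval,
                     rtol=1e-10, atol=1e-12)
```

With this initial flux, the initial rate of the $j$-th temperature mode equals $-\alpha_j b_{j0}$, i.e., it is not zero, and the mode still equilibrates at long times.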
A more complicated way to find the proper initial condition is to substitute the function $T_0(x)$ into the constitutive equation \eqref{mcv0}, as is usual with the Fourier equation,
\begin{align}
q(x,t=0) = - \lambda \partial_x T_0(x) + C(x), \label{mcvic0}
\end{align}
where $C(x)$ appears by solving the differential equation \eqref{mcv0} for $q$, and taking $t=0$. This is reasonable since \eqref{mcv0} describes the material, with which the initial condition must be compatible. Moreover, the balance equation \eqref{ebal} leads to
\begin{align}
\partial_t T = - \frac{1}{\rho c} \partial_x q = \frac{1}{\rho c} \Big ( \lambda \partial_{xx} T_0(x) - \partial_x C(x) \Big ).
\end{align}
From that point of view, $C(x)$ is an uncertainty that must be restricted somehow. Exploiting the knowledge that the steady states are identical, one would think that $C(x)$ should be zero. However, this would be misleading: the boundary conditions do have a role at this step. In other words, $q(x=0,t)=-\lambda \partial_x T_0(x=0) + C(x=0)=0$ holds, and the same for $x=L$; therefore, $C(x)$ depends on the initial temperature distribution and on the boundary conditions, too. Moreover, as it could depend on $x$, one has the freedom to choose a suitable function for dynamic situations. However, if $C(x)$ is zero everywhere except on the boundary, then $C(x)$ is not necessarily differentiable at $x=\{0,L\}$, or $\partial_x C(x)$ could be difficult to find, and it definitely is not zero everywhere. Thus, while this approach seemed reasonable, finding a proper initial time derivative along these lines is not straightforward.
Instead of sticking to the conventional approach, Galerkin's method can offer deeper insight, even with nonzero (and time-dependent) boundary conditions. In that case, it is possible to separate the boundaries \cite{FehKov21} and deal with an adiabatic problem. Then, the basis functions ($\phi$ and $\varphi$) automatically satisfy the boundary conditions and provide a more explicit and straightforward approach.
Consequently, realising these aspects was possible due to the chosen solution method. Galerkin's method revealed which modes must be used to obtain a physically compatible set of initial conditions. However, without any prior knowledge of the non-Fourier heat conduction constitutive law, it would be difficult to do so. If one starts by utilising only the $T$-representation of the heat equation, the compatibility between the initial conditions and the heat conduction constitutive relationship can easily be violated.
\subsection*{Guyer-Krumhansl equation}
While the GK equation was incredibly helpful in modelling low-temperature phenomenon of second sound in solids \cite{GK66}, it turned out recently that it is also applicable for macroscale heterogeneous materials at room temperature, when over-diffusion appears, using a continuum background \cite{VanFul12, JozsKov20b}. Therefore, this model may have greater practical importance later, and we feel it necessary to investigate the characteristics of the GK equation.
It introduces a new spatial derivative term into the constitutive equation, compared to \eqref{mcv0},
\begin{align}
\tau \partial_t q + q = - \lambda \partial_x T + l^2 \partial_{xx} q, \label{gk0}
\end{align}
where $l^2$ was originally interpreted as the squared mean free path of kinetic theory. However, as we utilise a continuum thermodynamic approach instead of the kinetic theory, the coefficients $\tau, \lambda, l^2\geq0$ are restricted only by the II.~law of thermodynamics and can be fitted to experiments. That new term makes it more difficult to handle the boundary conditions properly \cite{BallEtal20}: the usual Fourier-type interpretation of the boundaries is no longer valid. The $T$ and $q$ representations are
\begin{align}
\tau \partial_{tt} T + \partial_t T &= \alpha \partial_{xx} T + l^2 \partial_{txx} T, \label{gkt}\\
\tau \partial_{tt} q + \partial_t q &= \alpha \partial_{xx} q + l^2 \partial_{txx} q, \label{gkq}
\end{align}
exploiting \eqref{ebal} again. With \eqref{gkt}, a prescribed $\partial_x T$ is not a valid boundary condition and leads to unphysical results \cite{Zhukov16, Zhu16b}.
In regard to the initial conditions, we aim to investigate the role of the $l^2$ term. Apparently, substituting the initial condition \eqref{icbc} into \eqref{gk0} and solving it as a partial differential equation would offer no advantage, similarly to \eqref{mcvic0}. Instead, we stay with the Galerkin method, using the same basis functions $\phi(x)$ and $\varphi(x)$ as in \eqref{g1}, and thus we obtain
\begin{align}
\tau \frac{\textrm{d} a_j}{\textrm{d} t} + \Big (1 + l^2 \frac{j^2 \pi^2}{L^2} \Big ) a_j &= \lambda \frac{j \pi}{L} b_j, \label{g3gk}\\
\rho c \frac{\textrm{d} b_j}{\textrm{d} t} &= -\frac{j \pi}{L} a_j.\label{g3e}
\end{align}
Interestingly, the spatial derivative term with $l^2$ appears only in a coefficient of $a_j$; therefore the system does not differ significantly from that of the MCV equation. In other words, the $l^2$ term does not place further restrictions on the initial conditions. Furthermore, the same series expansion \eqref{f1} can be applied here as well.
We find the temperature history in the form of
\begin{align}
b_j(t)=\frac{b_{j0}}{2 \beta_j} e^{-\frac{ (\beta_j+\gamma_j)t}{2 \tau }} \left[(\gamma_j-2 \alpha_j \tau) \left(e^{\frac{ \beta_j t}{\tau }}-1\right) +\beta_j \left(e^{\frac{\beta_j t}{\tau }}+1\right)\right]
\end{align}
with
\begin{align}
\alpha_j=\frac{\lambda}{\rho c} \frac{j^2 \pi^2 }{L^2}; \quad \beta_j=\sqrt{\gamma_j^2-4\alpha_j \tau}; \quad \gamma_j=1 + l^2 \frac{j^2 \pi^2}{L^2}.
\end{align}
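To see where this form comes from, eliminating $a_j$ from \eqref{g3gk}--\eqref{g3e} (a step not written out above) yields a damped oscillator equation for each mode,
\begin{align*}
\tau \frac{\textrm{d}^2 b_j}{\textrm{d} t^2} + \gamma_j \frac{\textrm{d} b_j}{\textrm{d} t} + \alpha_j b_j = 0,
\end{align*}
whose characteristic roots are $s_{\pm} = (-\gamma_j \pm \beta_j)/(2\tau)$; the expression for $b_j(t)$ is the combination of $e^{s_+ t}$ and $e^{s_- t}$ matching the initial data. In particular, modes with $\gamma_j^2 > 4 \alpha_j \tau$ are overdamped (diffusive), while $\gamma_j^2 < 4 \alpha_j \tau$ produces damped wave-like behaviour.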
Naturally, $l^2=0$ recovers the solution of the MCV equation. First, we illustrate this special case: Figure \ref{fig4} presents the characteristic solutions of the MCV equation for varying relaxation times. As is visible, an extremely large relaxation time is needed in order to observe a wave propagation effect. Since materials usually have much smaller relaxation times, it is not possible to observe such a phenomenon experimentally under normal conditions on the macroscale, in agreement with \cite{Aur16}.
\begin{figure}[H]
\centering
\includegraphics[width=8cm,height=7cm]{fig4.jpg}
\caption{Temperature history according to the MCV equation, at $x=0.05$ m, for different relaxation times ($\tau=\{0, 0.025, 0.05, 0.075, 0.1\}$ s), using $N=500$ terms.}
\label{fig4}
\end{figure}
Next, Figure \ref{fig5} shows the behaviour of the GK equation, presenting the characteristic temperature curves at the middle. In this case, $l^2$ is varied in order to achieve the corresponding $B=l^2/(\alpha \tau)$ values. Here $B$ is a dimensionless parameter that helps characterise how the solution deviates from that of the Fourier equation \cite{Vanetal17}. Fourier's solution is recovered when $B=1$ (the `resonance' case, see \cite{VanKovFul15, FulEtal18e}). As $B$ increases, the temperature rises faster at the beginning, resulting in better convergence properties, in agreement with \cite{FehKov21, Kov18gk}. After that initial time interval, the temperature rise becomes slower than the one belonging to the Fourier equation. This phenomenon is called over-diffusion and is characteristic of the GK equation only.
\begin{figure}[H]
\centering
\includegraphics[width=12cm,height=7cm]{fig5.jpg}
\caption{Temperature history according to the GK equation, at $x=0.05$ m, for different values of $l^2$ ($l^2=\{0.075, 0.375, 0.75, 1.125\}\cdot 10^{-4}$ m$^2$), using $N=5$ terms.}
\label{fig5}
\end{figure}
\section{Discussion of consequences}
The time will shortly come when the community must decide which non-Fourier equation should be the next standard model of heat conduction after Fourier's law. More and more possibilities are being revealed, widening the application areas. Therefore, understanding these models, their physical consistency, and how to solve them is of increasing importance, to which this paper aimed to contribute.
For non-Fourier equations, the usual approach does not work as it does with the Fourier heat equation. The initial and boundary conditions must be handled with more care, as they do affect the overall outcome of the model. The solution methods must respect these attributes, even in the most straightforward linear situation. For nonlinear problems, such as temperature-dependent coefficients, the material parameters are functionally restricted as a consequence of the II.~law of thermodynamics, one affecting the other, and without a consistent physical background it is not possible to obtain a physically sound solution \cite{KovRog20}.
Throughout the paper, we showed that it is not advantageous to use the $T$-representation of the heat equations, as it hides the essential connections between the field variables and can influence how we think about the initial (and boundary) conditions.
On the example of the MCV and GK equations, we presented the Galerkin-type solution method for nonhomogeneous initial conditions, revealing that one needs to determine the heat flux first in order to take the time derivative of the temperature at the initial state into account correctly. Even for a linear spatial dependence, the time derivative becomes non-zero, which impacts the entire solution. This is the main difference between Fourier's law and non-Fourier models: for the Fourier equation, it does not matter how far the system is from equilibrium, and it seems natural to consider it a static situation as it does not influence the time derivatives. However, if the system is initially in a non-equilibrium state, then it induces an initial non-zero time derivative for a non-Fourier equation. If this initial state is close to equilibrium, i.e., not far from a steady state, then Fourier's law can be applied to find the heat flux field at the initial time instant. For a numerical code, this can be interpreted as taking the $0^{th}$ step with the Fourier equation to determine the initial heat flux and then resuming the calculation with the non-Fourier model.
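This recipe can be illustrated by a minimal one-dimensional sketch (all material parameters and grid settings below are hypothetical placeholders, not the values used in the figures above): the initial heat flux is obtained from Fourier's law, after which the MCV system is advanced with a simple explicit Euler scheme.

```python
import numpy as np

# Minimal 1D sketch of the "0th step" recipe: the initial heat flux is
# computed from Fourier's law, q = -lambda * dT/dx, after which the MCV
# system   tau*dq/dt + q = -lambda*dT/dx,   rho*c*dT/dt = -dq/dx
# is advanced with explicit Euler. All parameter values are hypothetical.
L, N = 0.1, 101                       # rod length [m], grid points
x = np.linspace(0.0, L, N)
lam, rho_c, tau = 10.0, 2.0e6, 0.01   # W/(m K), J/(m^3 K), s
dx = x[1] - x[0]
dt = 1.0e-3                           # s, resolves both tau and diffusion

T = 300.0 + 50.0 * np.sin(np.pi * x / L)  # non-equilibrium initial state
q = -lam * np.gradient(T, dx)             # Fourier "0th step" for q

for _ in range(200):                      # resume with the MCV model
    dTdx = np.gradient(T, dx)
    dqdx = np.gradient(q, dx)
    q = q + (dt / tau) * (-q - lam * dTdx)
    T = T + dt * (-dqdx / rho_c)
```

With this initialisation the first update of $q$ vanishes identically, i.e., the Fourier-initialised flux is consistent with the constitutive equation at $t=0$, which is exactly the compatibility discussed above.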
On the contrary, if the initial state is far from equilibrium, it remains an open question how the heat flux field could be determined reliably.
Connected to this, a more general question also emerges: how can one
determine whether a state is close to or far from equilibrium? These
remain for further investigation.
\section{Acknowledgement}
The research reported in this paper and carried out at BME has been supported by the grants National Research, Development and Innovation Office-NKFIH FK 134277, by the NRDI Fund (TKP2020 NC, Grant No. BME-NC) based on the charter of bolster issued by the NRDI Office under the auspices of the Ministry for Innovation and Technology and the New National Excellence Program of the Ministry for Innovation and Technology project ÚNKP-21-5-BME-368. This paper was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.
\bibliographystyle{unsrt}
\section{Introduction}
Let $(M,\overline{g})$ be a compact $n$-dimensional Riemannian manifold, $n \geq 3$. If $\Lambda \subset M$ is any
closed set, then the `standard' singular Yamabe problem concerns the existence and geometric properties of
complete metrics of the form $g = u^{\frac{4}{n-2}}\overline{g}$ with constant scalar curvature. This corresponds to solving
the partial differential equation
\begin{equation}\label{eq:cL}
\Delta_{\overline{g}} u + \frac{n-2}{4(n-1)} R^{\overline{g}} u = \frac{n-2}{4(n-1)}R^g \, u^{\frac{n+2}{n-2} }, \qquad u > 0,
\end{equation}
where $R^g$ is constant and with a `boundary condition' that $u \to \infty$ sufficiently quickly at $\Lambda$ so that
$g$ is complete. (Note that in our convention, $\Delta_{\overline{g}}$ is an operator with nonnegative spectrum.) It is known
that solutions with $R^g < 0$ exist quite generally if $\Lambda$ is large in a capacitary
sense \cite{Lab}, whereas for $R^g > 0$ existence is only known when $\Lambda$ is a smooth
submanifold (possibly with boundary) of dimension $k < (n-2)/2$, see \cite{MP}, \cite{F}.
There are both analytic and geometric motivations for studying this problem. For example, in the positive case ($R^g > 0$), solutions
to this problem are actually weak solutions across the singular set, so these results fit into the broader investigation of
possible singular sets of weak solutions of semilinear elliptic equations. On the geometric side is a well-known theorem
by Schoen and Yau \cite{SY} stating that if $(M,h)$ is a compact manifold with a locally conformally flat metric $h$ of
positive scalar curvature, then the developing map $D$ from the universal cover $\widetilde{M}$ to $S^n$, which
by definition is conformal, is injective, and moreover, $\Lambda := S^n \setminus D(\widetilde{M})$ has Hausdorff
dimension less than or equal to $(n-2)/2$. Regarding the lifted metric $\tilde{h}$ on $\widetilde{M}$ as a metric
on $\Omega$, this provides an interesting class of solutions of the singular Yamabe problem which are periodic with respect to a
Kleinian group, and for which the singular set $\Lambda$ is typically nonrectifiable. More generally, that paper also
shows that if $\overline{g}$ is the standard round metric on the sphere and if $g = u^{\frac{4}{n-2}}\overline{g}$ is a complete metric
with positive scalar curvature and bounded Ricci curvature on a domain $\Omega = S^n \setminus \Lambda$, then
$\dim \Lambda \leq (n-2)/2$.
In the past two decades it has been realized that the conformal Laplacian, which is the operator appearing as the
linear part of \eqref{eq:cL}, fits into a holomorphic family of conformally covariant elliptic pseudodifferential operators.
The operators in this family of positive even integer order are the GJMS operators, and these have a central role in conformal
geometry. Just as the Yamabe problem is naturally associated to the conformal Laplacian, so too are there higher order
Yamabe-type problems associated to the other GJMS operators, or more generally, also to these other conformally covariant
operators with noninteger order. The higher (integer) order Yamabe problems have proved to be analytically challenging
and provide insight into the GJMS operators themselves. Hence it is reasonable to hope that these fractional order (singular)
Yamabe problems will have a similarly rich development and will bring out interesting features of these
conformally covariant pseudodifferential operators. From a purely analytic perspective, little is known about regularity of
solutions of semilinear pseudodifferential
equations like these, and this family of geometric problems is a natural place to start. In fact, as we explain below,
fractional powers of the Laplacian have also appeared recently in the work of Caffarelli and his collaborators as
generalized Dirichlet to Neumann operators for certain singular divergence form elliptic equations, which further
indicates the worth of studying such operators.
The present paper begins an investigation into these questions. Our goals here are limited: beyond presenting
this set of problems as an interesting area of investigation, we prove a few
results which indicate how certain properties of the fractional singular Yamabe problem extend some well-known
results for the standard Yamabe equation.
To describe this more carefully, we first define the family of fractional conformal powers of the Laplacian. As we have
already indicated, the linear operator which appears as the first two terms on the left in \eqref{eq:cL} is known as
the conformal Laplacian associated to the metric $\overline{g}$, and denoted $P_1^{\overline{g}}$. It is conformally covariant in the sense
that if $f$ is any (smooth) function and $g = u^{\frac{4}{n-2}}\, \overline{g}$ for some $u > 0$, then
\begin{equation}
P_1^{\overline{g}} (uf) = u^{\frac{n+2}{n-2}}P_1^g (f).
\label{eq:cccL}
\end{equation}
Setting $f \equiv 1$ in \eqref{eq:cccL} yields the familiar relationship \eqref{eq:cL} between the scalar curvatures
$R^{\overline{g}}$ and $R^g$. $P_1$ is the first in a sequence of conformally covariant elliptic operators, $P_k$,
which exist for all $k \in {\mathbb N}$ if $n$ is odd, but only for $k \in \{1, \ldots, n/2\}$ if $n$ is even. The first construction
of these operators, by Graham-Jenne-Mason-Sparling \cite{GJMS} (for which reason they are known as the GJMS operators),
proceeded by trying to find lower order geometric correction terms to $\Delta^k$ in order to obtain nice transformation
properties under conformal changes of metric. Beyond the case $k=1$ which we have already discussed, the operator
\[
P_2=\Delta^2 +\delta\left( a_n R\, g+b_n \mbox{Ric}\right) d+\tfrac{n-4}{2}Q_2,
\]
called the Paneitz operator (here $Q_2$ is the standard $Q$-curvature), had also been discovered much earlier than the operators $P_k$ with $k > 2$.
This leads naturally to the question whether there exist any conformally covariant pseudodifferential operators of noninteger
order. A partial result in this direction was given by Peterson \cite{P}, who showed that for any $\gamma$, the
conformal covariance condition determines the full Riemannian symbol of a pseudodifferential operator with
principal symbol $|\xi|^{2\gamma}$. Hence $P_\gamma$ is determined modulo smoothing operators, but it is by no
means clear that one can choose smoothing operators to make the conformal covariance relationships hold exactly.
The breakthrough result, by Graham and Zworski \cite{GZ}, was that if $(M,[\overline{g}])$ is a smooth compact manifold
endowed with a conformal structure, then the operators $P_k$ can be realized as residues at the values $\gamma = k$ of
the meromorphic family $S(n/2 + \gamma)$ of scattering operators associated to the Laplacian on any Poincar\'e-Einstein manifold
$(X,G)$ for which $(M,[\overline{g}])$ is the conformal infinity. These are the `trivial' poles of the scattering operator, so-called
because their location is independent of the interior geometry; $S(s)$ typically has infinitely many other poles, which are
called resonances, the location and asymptotic distribution of which are a matter of considerable interest and ongoing study.
Multiplying this scattering family by some $\Gamma$ factors to regularize these poles, one obtains a holomorphic family of
elliptic pseudodifferential operators $P_\gamma^{\overline{g}}$ (which patently depends on the filling $(X,G)$). An alternate construction of
these operators has been obtained by Juhl, and his monograph \cite{Juhl} describes an intriguing general framework for
studying conformally covariant operators, see also \cite{Juhl2}.
This realization of the GJMS operators has led to important new understanding of them, including for example the basic fact that
$P_\gamma^{\overline{g}}$ is symmetric with respect to $dV_{\overline{g}}$ (something not obvious from the previous, fundamentally algebraic
construction). Hence even though the family $P_\gamma^{\bar g}$ is not entirely canonically associated to $(M,[\overline{g}])$ (as we
explain in some detail below), its study can still illuminate the truly canonical operators which occur as special values at
positive integers, i.e.\ the GJMS operators.
For various technical reasons, we focus here only on the operators $P_\gamma$ when $\gamma \in \mathbb R$, $|\gamma| \le n/2$.
These have the following properties: first, $P_0 = \mathrm{Id}$, and more generally, $P_k$ is the $k^{\mathrm{th}}$ GJMS
operator, $k = 1, \ldots, n/2$; next, $P_\gamma$ is a classical elliptic pseudodifferential operator of order $2\gamma$
with principal symbol $\sigma_{2\gamma}(P_\gamma^{\overline{g}}) = |\xi|^{2\gamma}_{\overline{g}}$, hence (since $M$ is compact), $P_\gamma$
is Fredholm on $L^2$ when $\gamma > 0$; if $P_\gamma$ is invertible, then $P_{-\gamma} = P_{\gamma}^{-1}$; finally,
\begin{equation}
\mbox{if}\ g=u^{\frac{4}{n-2\gamma}}\overline{g}, \qquad \mbox{then}\ P_\gamma^{\overline{g}} (uf) = u^{\frac{n+2\gamma}{n-2\gamma}} P_\gamma^g (f)
\label{eq:ccfcL}
\end{equation}
for any smooth function $f$. Generalizing the formul\ae\ for scalar curvature ($\gamma = 1$) and the Paneitz-Branson
$Q$-curvature ($\gamma = 2$), we make the definition that for any $0 < \gamma \leq n/2$, $Q_\gamma^{\overline{g}}$, the
$Q$-curvature of order $\gamma$ associated to a metric $\overline{g}$, is given by
\begin{equation}
Q_\gamma^{\overline{g}} = P_\gamma^{\overline{g}}(1).
\label{eq:Qgamma}
\end{equation}
Let us comment further on the choices involved in these definitions. First, Poincar\'e-Einstein fillings $(X,G)$ of $(M,[\overline{g}])$ (which
are defined at the beginning of \S 2), may
not always exist, and when they exist, they may not be unique. The existence issue is not serious: the construction of \cite{GZ}
only uses that the metric $G$ satisfy the Einstein equation to sufficiently high order, and one can even take $X = M \times [0,1]$
with the conformal structure $[\overline{g}]$ at $M \times \{0\}$ and with the other boundary $M \times \{1\}$ a regular (incomplete)
boundary for $G$. However, these comments indicate that the issue of lack of uniqueness is far worse, since there are
always infinite dimensional families of asymptotically Poincar\'e-Einstein fillings. Any choice of one of these fixes a
family of operators $P_\gamma^{\overline{g}}$, and for each such choice $P_\gamma$ satisfies all the properties listed above.
As already noted, the complete Riemannian symbol of $P_\gamma^{\overline{g}}$ is determined by the metric $\overline{g}$ and
the conformal covariance; the choice of filling provides a consistent selection of smoothing terms in these
pseudodifferential operators for which the same covariance properties hold. Hence the $Q$-curvatures $Q_\gamma^{\overline{g}}$ for
noninteger values of $\gamma$ are similarly ill-defined. In particular, except in certain special cases where there
are canonical choices of fillings (e.g.\ the sphere), it is not clear that the existence of a metric $\overline{g}$ in a conformal
class such that $Q_\gamma^{\overline{g}} > 0$ depends only on that conformal class. We leave open these significant problems,
and in what follows, always make the tacit assumption that for any given $(M,[\overline{g}])$, we have fixed an approximately
Poincar\'e-Einstein filling $(X,G)$ and used this to define the family $P_\gamma^{\overline{g}}$. In other words, it is perhaps
more sensible to think of $P_\gamma$ and $Q_\gamma$ as quantities determined by the pair $(( M,[\overline{g}]), (X,G))$.
In any case, generalizing \eqref{eq:cL}, consider the ``fractional Yamabe problem": given a metric $\overline{g}$ on a compact
manifold $M$, find $u > 0$ so that if $g = u^{4/(n-2\gamma)}\overline{g}$, then $Q_\gamma^g$ is constant. This amounts to solving
\begin{equation}
P_\gamma^{\overline{g}} u = Q_\gamma^{g} u^{\frac{n+2\gamma}{n-2\gamma}} ,\quad u>0,
\label{eq:ccQ}
\end{equation}
for $Q_\gamma^g = \mbox{const.}$ More generally, we can simply seek metrics $g$ which are conformally related to $\overline{g}$
and such that $Q_\gamma^g \geq 0$ or $Q_\gamma^g < 0$ everywhere.
This fractional Yamabe problem has now been solved in many cases where the positive mass theorem is not needed
\cite{fractional-Yamabe}, and further work on this is in progress.
As described earlier, it is also interesting to construct complete metrics of constant (positive) $Q_\gamma$ curvature on open
subdomains $\Omega = M \setminus \Lambda$, or in other words, to find metrics $g = u^{4/(n-2\gamma)}\overline{g}$ which are complete
on $\Omega$ and such that $u$ satisfies \eqref{eq:ccQ} with $Q_\gamma^g$ a constant. This is the fractional singular Yamabe
problem. In the first few integer cases it is known that the positivity of the curvature places restrictions on $\dim \Lambda$ (see \cite{SY},
\cite{MP} for the case $\gamma=1$, \cite{Chang-Hang-Yang} for $\gamma=2$, and \cite{non-removable} for the analogous problem for the closely related $\sigma_k$ curvature).
Although it is not at all clear how to define $P^g_\gamma$ and $Q^g_\gamma$ on a general complete open manifold,
we can give a reasonable definition when $\Omega$ is an open dense set in a compact manifold $M$ and the metric
$g$ is conformally related to a smooth metric $\overline{g}$ on $M$. Namely, we can define them by demanding that the
relationship \eqref{eq:ccfcL} holds. Note, however, that this too is not as simple as it first appears since, because of
the nonlocal character of $P_\gamma^{\overline{g}}$, we must extend $u$ as a distribution on all of $M$.
We discuss this further below.
The purpose of this note is to clarify some basic features of this fractional singular Yamabe problem and to establish
a few preliminary results about it. Our first result generalizes the Schoen-Yau theorem.
\begin{thm}\label{th:SY}
Suppose that $(M^n,\overline{g})$ is compact and $g = u^{\frac{4}{n-2\gamma}}\overline{g}$ is a complete metric on $\Omega = M \setminus
\Lambda$, where $\Lambda$ is a smooth $k$-dimensional submanifold. Assume furthermore that $u$ is polyhomogeneous
along $\Lambda$ with leading exponent $-n/2 + \gamma$. If $0 < \gamma \leq \frac{n}{2}$, and if $Q_\gamma^g > 0$ everywhere
for any choice of asymptotically Poincar\'e-Einstein extension $(X,G)$ which defines $P_\gamma^{\overline{g}}$ and hence
$Q_\gamma^g$, then $n$, $k$ and $\gamma$ are restricted by the inequality
\begin{equation}
\Gamma\left(\frac{n}{4} - \frac{k}{2} + \frac{\gamma}{2}\right) \Big/ \Gamma\left(\frac{n}{4} - \frac{k}{2} - \frac{\gamma}{2}\right) > 0,
\label{eq:dimrest}
\end{equation}
where $\Gamma$ is the ordinary Gamma function. This inequality holds in particular when $k < (n-2\gamma)/2$,
and in this case there is a unique distributional extension of $u$ to all of $M$ which is still a solution of \eqref{eq:ccQ}.
\label{th:fSY}
\end{thm}
\begin{remark}
Recall that $u$ is said to be polyhomogeneous along $\Lambda$ if in terms of any cylindrical coordinate system $(r,\theta,y)$
in a tubular neighborhood of $\Lambda$, where $r$ and $\theta$ are polar coordinates in disks in the normal bundle and
$y$ is a local coordinate along $\Lambda$, $u$ admits an asymptotic expansion
\[
u \sim \sum a_{jk}(y,\theta) r^{\mu_j} (\log r)^k
\]
where $\mu_j$ is a sequence of complex numbers with real part tending to infinity, for each $j$, $a_{jk}$ is nonzero
for only finitely many nonnegative integers $k$, and such that every coefficient $a_{jk} \in {\mathcal C}^\infty$. The number $\mu_0$
is called the leading exponent if $\Re (\mu_j) > \Re (\mu_0)$ for all $j \neq 0$. We refer to \cite{Mazzeo:edge-operators}
for a more thorough account of polyhomogeneity.
\end{remark}
\begin{remark}
As we have noted, inequality \eqref{eq:dimrest} is satisfied whenever $k < (n-2\gamma)/2$, and in fact is equivalent to this simpler
inequality when $\gamma = 1$. When $\gamma=2$, i.e.\ for the standard $Q$-curvature, this result is already known: it is shown in
\cite{Chang-Hang-Yang} that complete metrics with $Q_2 > 0$ and positive scalar curvature must have singular set with
dimension less than $(n-4)/2$, which again agrees with \eqref{eq:dimrest}.
\end{remark}
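The equivalence at $\gamma = 1$ is a direct computation: the identity $\Gamma(z+1) = z\,\Gamma(z)$ gives
\[
\Gamma\left(\tfrac{n}{4} - \tfrac{k}{2} + \tfrac12\right) \Big/ \Gamma\left(\tfrac{n}{4} - \tfrac{k}{2} - \tfrac12\right) = \tfrac{n}{4} - \tfrac{k}{2} - \tfrac12,
\]
so \eqref{eq:dimrest} is positive precisely when $k < (n-2)/2$, recovering the classical restriction on the singular set.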
We also present a few special existence results. First, the following remark exhibits solutions
coming from Kleinian group theory where $\Lambda$ is nonrectifiable.
\begin{remark}
Suppose that $\gamma\in [1,n/2)$. Let $\Gamma$ be a convex cocompact subgroup of $\mbox{SO}(n+1,1)$ with
Poincar\'e exponent $\delta(\Gamma) \in [1, (n-2\gamma)/2)$. Let $\Lambda\subset S^n$ be the limit set of $\Gamma$.
Then $\Omega = S^n \setminus \Lambda$ admits a complete metric $g$ conformal to the round metric and with
$Q_\gamma^g > 0$.
\label{remark:Klein}
\end{remark}
As we explain below, this follows directly from the work of Qing and Raske \cite{Qing-Raske}.
Finally, one can also obtain existence of solutions when $\gamma$ is sufficiently near $1$ and $\Lambda$ is smooth
by perturbation theory.
\begin{thm}
Let $(M^n, [\overline{g}])$ be compact with nonnegative Yamabe constant and $\Lambda$ a $k$-dimensional submanifold
with $k < \frac12 (n-2)$. Then there exists an $\epsilon > 0$ such that if $\gamma \in (1-\epsilon, 1 + \epsilon)$,
there exists a solution to the fractional singular Yamabe problem \eqref{eq:ccQ} with $Q_\gamma > 0$ which is complete
on $M \setminus \Lambda$.
\label{th:pert}
\end{thm}
Our final result is a growth estimate for weak solutions that are singular on $S^n\backslash\Omega$. Our result is not very strong, in the sense that we need to require that $u$ be a weak solution on the whole of $S^n$. However, it provides a first insight into a general theory of weak solutions on subdomains of $S^n$.
\begin{prop}\label{prop:growth}
Let $g_c$ be the standard round metric on $S^n$, and $(B^{n+1},G)$ the Poincar\'e ball model of hyperbolic space, which
has $(S^n,[g_c])$ as its conformal infinity. Let $g = u^{\frac{4}{n-2\gamma}}g_c$ be a complete metric on a dense subdomain of
the sphere, $\Omega = S^n \setminus \Lambda$, with $Q_\gamma^g$ equal to a positive constant, and such that $u$ is a
distributional solution to
\begin{equation}
\label{problem3}P_\gamma^{g_c} u=u^{\frac{n+2\gamma}{n-2\gamma}}
\end{equation}
on $S^n$ (with $u$ finite only on $\Omega$). Then, for all $z\in\Omega$,
\[
u(z)\leq \frac{C}{d_{g_c}(z,\Lambda)^{\frac{n-2\gamma}{2}}},
\]
where $C$ depends only on $n$ and $\gamma$.
\end{prop}
There are many interesting questions not addressed here. For example, we point out again that there is not yet
a good definition of the family $P_\gamma^g$ on an arbitrary complete manifold $(\Omega,g)$. Provided one is able
to make this definition, it would then be useful to compute the $L^2$-spectrum of $P_\gamma^g$, even for some
specific examples such as $\mathbb H^n$ or $\mathbb H^{k+1} \times S^{n-k-1}$. Finally, it would also be important to obtain
the correct generalization of the Schoen-Yau theorem for the operators $P_\gamma$. We hope to address
these and other problems elsewhere.
\section{Fractional conformal Laplacians}
We now provide a more careful description of the construction of the family of conformally covariant operators $P_\gamma$,
and also give two alternate definitions of these operators in the flat case to provide some perspective.
As we have described in the introduction, Graham and Zworski \cite{GZ} discovered a beautiful connection
between the scattering theory of the Laplacian on an asymptotically hyperbolic Einstein manifold and the
GJMS operators on its conformal infinity. Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold.
Suppose that $X$ is a smooth compact manifold with boundary, with $\partial X = M$, and denote by $x$ a defining
function for the boundary, i.e.\ $x \geq 0$ on $X$, $x = 0$ precisely on $\partial X$ and $dx \neq 0$ there. A metric
$G$ on the interior of $X$ is called conformally compact if $x^2 G = \overline{G}$ extends as a smooth nondegenerate
metric on the closed manifold with boundary. It is not hard to check that $G$ is complete and, provided that
$|dx|_{\overline{G}} = 1$ at $\partial X$, the sectional curvatures of $G$ all tend to $-1$ at `infinity'. The metric
$G$ is called Poincar\'e-Einstein if it is conformally compact and also satisfies the Einstein equation $\mathrm{Ric}^G = - n G$.
As we have explained, it is only necessary to consider asymptotically Poincar\'e-Einstein metrics; by definition,
these are conformally compact metrics which satisfy $\mathrm{Ric}^G = -nG + {\mathcal O}(x^N)$ for some suitably large $N$
(typically, $N > n$ is sufficient).
The conformal infinity of $G$ is the conformal class of $\left. \overline{G} \right|_{T\partial X}$; only the conformal class is
well defined since the defining function $x$ is defined up to a positive smooth multiple. If $g$ is any representative
of this conformal class, then there is a unique defining function $x$ for $M$ such that $G = x^{-2}(dx^2 + g(x))$ where
$g(x)$ is a family of metrics on $M$ (or rather, the level sets of $x$), with $g(0)$ the given initial metric.
We now define the scattering operator $S(s)$ for $(X,G)$. Fix any $f_0 \in {\mathcal C}^\infty(M)$; then for all but a discrete set of
values $s \in \mathbb C$, there exists a unique generalized eigenfunction $u$ of the Laplace operator on $X$ with eigenvalue $s(n-s)$.
In other words, $u$ satisfies
\begin{equation}\label{eigenvalue-problem}
\left\{\begin{split}&(\Delta_G - s(n-s))u = 0\\
&u = f x^{n-s} + \tilde{f} x^{s}, \quad \mbox{for some}\ f, \tilde{f} \in {\mathcal C}^\infty(\overline{X}) \quad
\mbox{with}\ \left. f \right|_{x=0} = f_0.
\end{split}\right.\end{equation}
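The exponents $n-s$ and $s$ appearing in this expansion are the indicial roots of $\Delta_G - s(n-s)$; this is a standard computation, included here for orientation. For the model metric $G = x^{-2}(dx^2 + |dy|^2)$ one has, in our nonnegative convention,
\[
\Delta_G = -(x\partial_x)^2 + n\, x\partial_x + x^2 \Delta_y,
\]
where $\Delta_y$ is the (nonnegative) Euclidean Laplacian in $y$, so that $\Delta_G \, x^\sigma = \sigma(n-\sigma)\, x^\sigma$, which cancels the spectral parameter $s(n-s)$ exactly when $\sigma = s$ or $\sigma = n-s$.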
By definition, $S(s)f_0 = \tilde{f}|_{x=0}$. This is an elliptic pseudodifferential operator of order $2s - n$
which depends meromorphically on $s$; it is known to always have simple poles at the values $s = n/2, n/2 +1 ,
n/2 + 2, \ldots$. These locations are independent of $(X,G)$, hence are called the trivial poles of the
scattering operator. $S(s)$ has infinitely many other poles which are of great interest in other investigations,
but do not concern us here. Letting $s=n/2+\gamma$, we now define
\begin{equation}
P_\gamma^g = 2^{2\gamma} \frac{\Gamma(\gamma)}{\Gamma(-\gamma)} S\left(\frac{n}{2} + \gamma\right);
\end{equation}
because of these prefactors, one has that the principal symbol is
\begin{equation}
\sigma_{2\gamma}(P_\gamma^g) = |\eta|^{2\gamma}_g.
\end{equation}
The scattering operator satisfies a functional equation, $S(s) S(n-s) = \mbox{Id}$, which implies that
\begin{equation}
P_\gamma \circ P_{-\gamma} = \mbox{Id}.
\end{equation}
Finally, it is proved in \cite{GZ} that the operators $P_\gamma^g$ satisfy the conformal covariance equation \eqref{eq:ccfcL}.
This definition of the operators $P_\gamma$ depends crucially on the choice of the Poincar\'e-Einstein filling $(X,G)$.
Graham and Zworski point out that it is only necessary that the metric $G$ satisfy the Einstein equation to sufficiently
high order as $x \to 0$ in order that the properties of the $P_\gamma$ listed above be true (for $\gamma$ in a finite
range which depends on the order to which $G$ satisfies the Einstein equation). As we have discussed in the
introduction, it is always possible to find such metrics, and we suppose that one has been fixed.
Let us now address the issue of how to define $P_\gamma^g$ and $Q_\gamma^g$ when $\Omega$ is a dense open set
in a compact manifold $M$ and $g$ is complete and conformal to a metric $\overline{g}$ which extends to all of $M$. (As usual,
we assume that $(M,\overline{g})$ has an asymptotically Poincar\'e-Einstein filling). There is no difficulty
in using the relationship \eqref{eq:ccfcL} to define $P_\gamma^g f$ when $f \in {\mathcal C}^\infty_0(\Omega)$.
From here one can use an abstract functional analytic argument to extend $P_\gamma^g$ to act on any $f \in L^2(\Omega, dV_g)$.
Indeed, it is straightforward to check that the operator $P_\gamma^g$ defined in this way is essentially self-adjoint on
$L^2(\Omega, dV_g)$ when $\gamma$ is real. To see this, observe that $P_\gamma = \Delta_g^{\gamma} + K$, where
$K$ is a pseudo-differential operator of order $2\gamma-1$. Furthermore, $\Delta_g^{\gamma}$ is self-adjoint by
the functional calculus, so we can appeal to a classical theorem, see \cite{RS}, which states that a lower order
symmetric perturbation of a self-adjoint operator is essentially self-adjoint.
A separate, but also very interesting, issue is whether the fact that $Q_\gamma^g$ is a positive constant implies that the conformal factor
$u$ is a weak solution of \eqref{eq:ccQ} on all of $M$. This is true (with some additional hypotheses) when $\gamma = 1$,
cf. \cite{SY}.
We conclude this section with two alternate definitions of the operators $P_\gamma^{\overline{g}}$ in the special
case where $(M,[\overline{g}]) = \mathbb R^n$ with its standard flat conformal class.
The canonical Poincar\'e-Einstein filling in this case is the hyperbolic space $X = \mathbb R^{n+1}_+ = \mathbb R^+_x \times \mathbb R^n_y$ with
metric $G = x^{-2}(dx^2 + |dy|^2)$.
Since $\overline{g}$ is flat, we have $P_\gamma^{\overline{g}} = \Delta_{\overline{g}}^\gamma$, and this can be written in either of the two equivalent forms:
\begin{eqnarray*}
\Delta^\gamma f(y) & = & (2\pi)^{-n} \int_{\mathbb R^n} e^{iy \eta} |\eta|^{2\gamma} \hat{f}(\eta)\, d\eta \quad \mbox{or} \\
& = & \mbox{P.V.}\int_{\mathbb R^n}\frac{f(y)-f(\tilde{y})}{|y-\tilde{y}|^{n+2\gamma}}\, d\tilde{y}.
\end{eqnarray*}
Both formul\ae\ can be regularized so as to hold for any given $\gamma$.
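As a quick numerical illustration (ours, not part of the paper), the Fourier-multiplier form of $\Delta^\gamma$ can be tested on a periodic grid, where it simply scales each Fourier mode by $|\eta|^{2\gamma}$:

```python
import numpy as np

# Sketch (our illustration): on a periodic grid the Fourier definition of the
# fractional Laplacian acts mode by mode through the multiplier |eta|^(2*gamma).
# The single mode sin(3y) should therefore just be scaled by 3^(2*gamma).
gamma = 0.5                          # any exponent 0 < gamma < n/2 would do
N = 256
y = 2 * np.pi * np.arange(N) / N     # uniform grid on [0, 2*pi)
f = np.sin(3 * y)

eta = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers on this grid
frac_lap_f = np.fft.ifft(np.abs(eta) ** (2 * gamma) * np.fft.fft(f)).real

# sin(3y) is an eigenfunction of Delta^gamma with eigenvalue 3^(2*gamma)
assert np.allclose(frac_lap_f, 3 ** (2 * gamma) * f)
```

The second, singular-integral form defines the same operator but is less convenient to discretize directly.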
One other way that $\Delta^\gamma$ arises is as a generalized Dirichlet to Neumann map; this definition is essentially the
same as the one above involving the scattering operator (indeed, the point of view in geometric scattering theory
is that the scattering operator \emph{is} simply the Dirichlet to Neumann operator at infinity), but as recently
rediscovered by Chang-Gonzalez \cite{CG} in relation to the work on (Euclidean) fractional Laplacians by Caffarelli
and Silvestre \cite{cafS}, it is sometimes helpful to consider the equation in a slightly different form.
In the following result, let $(X,G)$ be an asymptotically Poincar\'e-Einstein filling of the compact manifold $(M^n,[\bar g])$.
Fix a representative $\bar{g}$ of the conformal class on the boundary and let $x$ be the boundary defining function on $X$
such that $G = x^{-2}(dx^2 + \bar{g}_x)$ with $\bar{g}_0 = \bar{g}$. Also, write $\bar{G} = x^2 G$; this is an incomplete
metric on $\overline{X}$ which is smooth (or at least polyhomogeneous) up to the boundary.
\begin{prop}(\cite{CG})\label{prop:extension}
Let $U = x^{\frac{n}{2}-\gamma} u$ and
\[
E :=\Delta_{\bar G}\left( x^{\frac{1-2\gamma}{2}}\right) x^{\frac{1-2\gamma}{2}}+\left(\gamma^2-\tfrac{1}{4}\right) x^{-1-2\gamma}+
\tfrac{n-1}{4n} R_{\bar G} x^{1-2\gamma}.
\]
Then, for any $f_0\in\mathcal C^{\infty}(M)$, the eigenvalue problem \eqref{eigenvalue-problem} is equivalent to
\beta\label{divergence-equation}
\left\{\begin{split}
- \text{div} \left( x^{1-2\gamma} \nabla U\right) + E U &=0\quad \mbox{ on }(X, \bar{G}), \\
\left. U \right|_{x=0}&=f_0\quad \mbox{on }M,
\end{split}\right.
\end{equation}
where the divergence and gradient are taken with respect to $\bar{G}$. Moreover,
\[
P_\gamma^{\bar g} (f_0)=d_\gamma \lim_{x\to 0} x^{1-2\gamma}\partial_x U
\]
for some nonzero constant $d_\gamma$ depending only on $\gamma$ and $n$.
\end{prop}
The Euclidean version of this result (where $(X,G)$ is the hyperbolic upper half-space) was the one studied by
Caffarelli and Silvestre. The main advantage in this reformulation is that certain estimates are more
transparent from this point of view.
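To make the extension characterization concrete, here is a minimal numerical sketch (ours, not from the paper) in the flat model with $\gamma = 1/2$: there the weight $x^{1-2\gamma}$ equals $1$, the lower order term $E$ vanishes, $U$ is the ordinary harmonic extension of $f_0$, and the Neumann derivative at $x=0$ recovers $\Delta^{1/2} f_0$ up to the normalizing constant $d_\gamma$:

```python
import numpy as np

# Sketch (ours): flat model, gamma = 1/2.  The weight x^(1-2*gamma) equals 1
# and the term E vanishes, so U is the harmonic extension of f0 to {x > 0}.
# For the single mode f0(y) = sin(k*y) that extension is exp(-k*x)*sin(k*y),
# and the (inward) Neumann derivative at x = 0 returns k*sin(k*y), i.e. the
# half-Laplacian of f0, up to the normalizing constant d_gamma.
k = 4.0
y = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)

def U(x):
    return np.exp(-k * x) * np.sin(k * y)   # harmonic extension of sin(k*y)

h = 1e-6
neumann = -(U(h) - U(0.0)) / h              # one-sided difference at x = 0

assert np.allclose(neumann, k * np.sin(k * y), atol=1e-4)
```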
\section{Proofs}
We now turn to the proofs of Theorems \ref{th:fSY} and \ref{th:pert}, and Remark \ref{remark:Klein}.
\subsection{Dimensional restrictions on singular sets}
The idea for the proof of Theorem \ref{th:SY} is straightforward: let $u$ be a polyhomogeneous distribution on $M$ with singular
set along the smooth submanifold $\Lambda$. Suppose that the leading term in the expansion of $u$ is $a(y) r^{-n/2 + \gamma}$.
Then by a standard result in microlocal analysis \cite{DH}, the function $P_\gamma^{\overline{g}}u$ is again polyhomogeneous and
has leading term $b(y) r^{-(2\gamma+n)/2}$, where $b(y) = \lambda a(y)$ for some constant $\lambda$. Now, if $u$ is
a conformal factor for which $g = u^{4/(n-2\gamma)}\overline{g}$ has $Q_\gamma^g > 0$, then $P_\gamma^{\overline{g}}u > 0$, which implies
that $\lambda > 0$. So we must compute $\lambda$ to obtain \eqref{eq:dimrest}.
The microlocal result cited above states that if $u$ is polyhomogeneous, then the leading term of $P_\gamma^{\overline{g}}u$ can be computed
using the symbol calculus (for pseudodifferential operators and for polyhomogeneous distributions), and more specifically,
that the principal symbol of $P_\gamma^{\overline{g}}u$ is equal to the product of the principal symbols of $P_\gamma^{\overline{g}}$ and
that of $u$. (Note that the principal symbol of a distribution conormal to a submanifold $\Lambda$ is computed in terms of
the Fourier transform in the fibres of $N\Lambda$.) In the present setting, this implies that the constant $\lambda$
is the same as for the model case when $M = S^n$ and $\Lambda$ is an equatorial $S^k$, so we now focus on this special case.
Transform $S^n$ to $\mathbb R^n$ by stereographic projection, so that $\Lambda$ is mapped to a linear subspace $\mathbb R^k$
and $\overline{g}$ is the flat Euclidean metric (which we henceforth omit from the notation). Write $\mathbb R^n \ni y = (y',y'') \in
\mathbb R^k \times \mathbb R^{n-k}$, so that (in this model case) $u(y) = |y''|^{-n/2 + \gamma}$ for the singular metric $u^{\frac{4}{n-2\gamma}}\bar g$; then
\begin{multline*}
P_\gamma u(y) = \Delta^\gamma u (y) = (2\pi)^{-n} \int_{\mathbb R^n\times \mathbb R^n} e^{i(y-\tilde{y})\cdot \eta} |\eta|^{2\gamma} |\tilde{y}''|^{-n/2 + \gamma}\, d\tilde y d\eta
\\ = (2\pi)^{k-n} \int_{\mathbb R^{n-k}\times\mathbb R^{n-k} } e^{i(y'' - \tilde{y}'')\cdot\eta''} |\tilde{y}''|^{-n/2 + \gamma}\, d\tilde y'' d\eta''.
\end{multline*}
Now recall a well-known formula for the Fourier transform of homogeneous distributions in $\mathbb R^N$:
\[
\int_{\mathbb R^N} e^{-iz \cdot \zeta} |z|^{-N + \alpha} \,dz= c(N,\alpha) |\zeta|^{-\alpha},
\]
where
\[
c(N,\alpha) = \pi^{\alpha - N/2}\frac{\Gamma(\alpha/2)}{\Gamma((N -\alpha)/2)}.
\]
Applying this formula with $N = n-k$ (and replacing $y''$ by $y$ and $\eta''$ by $\eta$, for simplicity) yields first that
\[
\int_{\mathbb R^N} e^{-iy \cdot \eta} |y|^{-n/2 + \gamma}\, dy = c\left( n-k, \frac{n}{2} - k + \gamma\right)\, |\eta|^{-\frac{n}{2} + k - \gamma},
\]
then, multiplying by $|\eta|^{2\gamma}$ and taking inverse Fourier transform we obtain
\[
\frac{1}{(2\pi)^{n-k}} c\left( n-k, \frac{n}{2} - k + \gamma\right) c\left( n-k, \frac{n}{2} + \gamma\right) |y|^{-\frac{n}{2} - \gamma}.
\]
Altogether then, the multiplicative factor $\lambda$ is equal to
\[
2^{k-n} \pi^{k-n + 2\gamma} \frac{\Gamma\left(\frac12\left( \frac{n}{2} - k + \gamma\right)\right)}{\Gamma\left(\frac12 \left(\frac{n}{2} - \gamma\right)\right)}
\frac{ \Gamma\left( \frac12 \left(\frac{n}{2} + \gamma\right)\right)}{\Gamma\left( \frac12 \left(\frac{n}{2} - k - \gamma\right)\right)}.
\]
Discarding the factors which are always positive (which includes $\Gamma(n/4 - \gamma/2)$ since $\gamma < n/2$),
we obtain \eqref{eq:dimrest}.
It is unfortunately slightly messy to write down the entire set of values of $k$ and $\gamma$ for which
\eqref{eq:dimrest} holds. However, if $k < (n-2\gamma)/2$, then both arguments $\frac12\left(\frac{n}{2} - k \pm \gamma\right)$ are positive.
Furthermore, if $\gamma = 1$ and $A := \frac12\left(\frac{n}{2} - k\right) < \frac12$, then the $\Gamma$ function always takes on
values with different signs at $A + \gamma/2$ and $A - \gamma/2$. More generally, if we fix $n$ and $k$
and let $\gamma$ increase from $0$ to $n/2$, then $\Gamma(A + \gamma/2)/\Gamma(A - \gamma/2) = 1$
when $\gamma = 0$; $\Gamma(A - \gamma/2)$ changes sign every time $\gamma$ increases by $2$, whereas
$\Gamma(A + \gamma/2)$ changes sign similarly, but only for $\gamma$ in the range $(0, -2A)$, provided $A < 0$.
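Since the powers of $2$ and $\pi$ in $\lambda$ are positive, the sign of $\lambda$ is that of the quotient of the two $k$-dependent Gamma factors, whose arguments are $\frac12\left(\frac n2 - k\right) \pm \frac\gamma2$. A small numerical scan (our sketch, not from the paper) makes the sign pattern easy to explore:

```python
import math

# Sketch (ours): sign of the constant lambda.  The powers of 2 and pi are
# positive, so sign(lambda) is the sign of the ratio of the two k-dependent
# Gamma factors, whose arguments are (1/2)(n/2 - k) +/- gamma/2.
def sign_lambda(n, k, gamma):
    A = 0.5 * (n / 2.0 - k)
    return math.copysign(1.0, math.gamma(A + gamma / 2.0) /
                              math.gamma(A - gamma / 2.0))

# below the threshold k < (n - 2*gamma)/2 both arguments are positive:
assert sign_lambda(n=5, k=1, gamma=1.0) > 0
# above it, the two Gamma factors can take opposite signs:
assert sign_lambda(n=5, k=2, gamma=1.0) < 0
```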
To prove the final statement of the theorem, note that if $-\gamma - n/2$, the leading exponent of $P_\gamma^{\overline{g}} u$, is
greater than $k-n$, the negative of the codimension of $\Lambda$, then $P^{\bar g}_\gamma u$ cannot have any mass supported on $\Lambda$,
which means that $u$ is a weak solution of $P_\gamma^{\overline{g}} u = Q_\gamma^g u^{(n+2\gamma)/(n-2\gamma)}$ on all of $M$.
\subsection{Kleinian groups}
We now turn to a special case where this problem has a direct relationship to hyperbolic geometry. Let $\Gamma$ be a convex
cocompact group of motions acting on $\mathbb H^{n+1}$. Thus $\Gamma$ acts discretely and properly discontinuously
on hyperbolic space, is geometrically finite and contains no parabolic elements. Its domain of discontinuity
is the maximal open set $\Omega \subset S^n$ on which the action of $\Gamma$ extends to a discrete and
properly discontinuous action; by definition of convex cocompactness, the quotient $\Omega/\Gamma = Y$
is a compact manifold with a locally conformally flat structure. The complement $S^n \setminus \Omega = \Lambda$
is the limit set of $\Gamma$. Furthermore, the manifold $X = \mathbb H^{n+1}/\Gamma$ with its hyperbolic metric
is Poincar\'e-Einstein with conformal infinity $Y$ with the conformal structure induced from $S^n$. We use these
canonical fillings to define $P_\gamma$ and $Q_\gamma$.
In \cite{Qing-Raske}, Qing and Raske attack the problem of finding metrics of constant $Q_k$ curvature (with $k < n/2$
an integer). Their method involves finding metrics of constant $Q_\gamma$ curvature for all $1 \leq \gamma \leq k$.
They rephrase the problem $P_\gamma u = Q_\gamma u^{(n+2\gamma)/(n-2\gamma)}$ in the equivalent form $u =
P_{-\gamma} (Q_\gamma u^{(n+2\gamma)/(n-2\gamma)})$. The advantage of this modification is that $P_{-\gamma}$ is
a pseudodifferential operator of negative order, and its Schwartz kernel can be obtained by summing the
translates of the Schwartz kernel of $P_{-\gamma}$ on $S^n$ over the group $\Gamma$. This sum converges provided
the Poincar\'e exponent of $\Gamma$ is less than $\frac{n-2\gamma}{2}$, and because this is a convergent sum
one directly obtains explicit control and positivity of this operator. Fixing $1 \leq \gamma < n/2$ and
restricting to convex cocompact groups $\Gamma$ with Poincar\'e exponent in this range, they are able to
prove that if $Y = \Omega/\Gamma$ has positive Yamabe type, then it admits a metric of constant positive
$Q_\gamma$ curvature.
The proof of Remark \ref{remark:Klein} follows directly from this by lifting the conformal factor and solution metric to
$\Omega \subset S^n$. Namely, by the theorem of Schoen and Yau, the developing map of $Y$ is injective from the
universal cover $\tilde{Y}$ to $\Omega$, and the solution metric $g$ on $Y$ lifts to a complete metric $\tilde{g}$
on $\Omega$ of the form $u^{4/(n-2\gamma)} \overline{g}$, where $\overline{g}$ is the standard round metric. Using the
bound on the Poincar\'e exponent and the compactness of $Y$, standard lattice point counting arguments
show that $u(p) \leq c\, \mbox{dist}_{\overline{g}}\, (p,\Lambda)^{(2\gamma-n)/2}$. This shows that not only is $u$
a solution of the modified integral equation, but is also a weak solution of \eqref{eq:ccQ} on all of $S^n$
and that $Q_\gamma^g$ is constant. Finally, by Patterson-Sullivan theory, the dimension of the limit set $\Lambda$
is precisely the Poincar\'e exponent $\delta(\Gamma)$. In other words, we have produced a solution to the fractional
singular Yamabe problem with exponent $\gamma \in [1,n/2)$ and with singular set of dimension less than
$n/2 - \gamma$.
The Qing-Raske theorem is not stated for the remaining cases $\gamma \in (0,1)$; it is plausible that their proof
may be adapted to work there, in which case the lifted solution would also give a solution to our problem
for $\gamma$ in this range, but we do not claim this. Note, however, the results in \S 4 below concerning
growth estimates for solutions of this equation for this range of $\gamma$.
\subsection{Perturbation methods}
We come at last to the perturbation result. We deduce existence of solutions for the fractional singular
Yamabe problem for values of $\gamma$ near $1$ from the general existence result
in \cite{MP} for the singular Yamabe problem with $\gamma = 1$. Let $(M,\overline{g})$
and $\Lambda$ be a submanifold of dimension $k$ as in the statement of the theorem. (Slightly more
generally, we could let the different components of $\Lambda$ have different dimensions, but for
simplicity we assume that $\Lambda$ is connected.) Then there is a function
$u$ on $M \setminus \Lambda$ such that $g = u^{4/(n-2)}\overline{g}$ is complete and its scalar curvature
$Q_1^g$ is a positive constant. Moreover, it is known that the linearization of the equation \eqref{eq:cL}
at any one of the solutions $u$ constructed in \cite{MP} is surjective on appropriate weighted H\"older spaces.
In the following we phrase this rigorously and parlay this surjectivity into an existence theorem for $\gamma$
near $1$ using the implicit function theorem. Let ${\mathcal T}^\sigma\Lambda$ denote the tube of radius $\sigma$
(with respect to $\overline{g}$) around $\Lambda$; this is canonically diffeomorphic to the neighbourhood of
radius $\sigma$ in the normal bundle $N\Lambda$ for $\sigma$ sufficiently small, and we use this to transfer
cylindrical coordinates $(r,y,\theta) \in [0,\sigma) \times {\mathcal U}_y \times S^{n-k-1}$ in a local trivialization of
$N\Lambda$ to Fermi coordinates in ${\mathcal T}^\sigma\Lambda$.
We use these coordinates to define weighted H\"older spaces with a certain dilation covariance property.
For $w \in {\mathcal C}^0({\mathcal T}^\sigma\Lambda)$, let
\[
\|w\|_{e, 0,\alpha,0}=\sup_{z \in \mathcal T^\sigma \Lambda} |w| +\sup_{z, \tilde z \in
\mathcal T^\sigma \Lambda} \frac{(r+\tilde r)^\alpha |w(z)-w(\tilde z)|}{|r-\tilde r|^\alpha +
|y-\tilde y|^\alpha+(r+\tilde r)^\alpha |\theta -\tilde \theta|^\alpha}.
\]
We denote by ${\mathcal C}^{0,\alpha}_e(M \backslash \Lambda)$ the space of all functions $w \in {\mathcal C}^0({\mathcal T}^\sigma\Lambda)$
such that this norm is finite. The initial subscript $e$ in the norm signifies that these are `edge' H\"older spaces. Next,
${\mathcal C}^{k,\alpha}_e(M\backslash \Lambda)$ denotes the subspace of ${\mathcal C}^k(M \backslash \Lambda)$ on which the norm
\[
\|w\|_{k,\alpha,0}=\|w\|_{k,\alpha,M_{\sigma/2}} + \sum_{j=0}^k \|\nabla^j w \|^{\mathcal T^\sigma \Lambda}_{e, 0, \alpha}
\]
is finite, where $M_{\sigma/2}= M \backslash \mathcal T^{\sigma/2}\Lambda$. Finally, for $\nu \in \mathbb R$, let
\[
{\mathcal C}_\nu^{k,\alpha}(M \backslash \Lambda)= \left \{ w=r^\nu \overline w : \,\, \overline w \in {\mathcal C}^{k,\alpha}_e
(M \backslash \Lambda) \right \},
\]
with corresponding norm $|| \cdot ||_{e,k,\alpha,\nu}$.
Fixing $Q_\gamma^g = 1$, the linearization of $u \mapsto P_\gamma^{\overline{g}}u - u^{ (n+2\gamma)/ (n-2\gamma)}$
is the operator
\[
v \mapsto L_\gamma v := P_\gamma^{\overline{g}} v - \frac{n+2\gamma}{n-2\gamma} u^{\frac{4\gamma}{n-2\gamma}} v.
\]
Let $u$ be one of the solutions to the singular Yamabe problem ($\gamma = 1$) on $M \setminus \Lambda$
constructed in \cite{MP}. It is proved there that the solution $u$ has the form $u = c_1 r^{1 - n/2}(1 + v)$, where
$v \in {\mathcal C}^{2,\alpha}_\nu$ for any $0 < \nu < k/2$ and $c_1 > 0$ depends only on the dimensions $k$ and $n$;
furthermore, the mapping
\[
L_1: {\mathcal C}^{2,\alpha}_{\nu}(M \setminus \Lambda) \longrightarrow {\mathcal C}^{0,\alpha}_{\nu -2}(M \setminus \Lambda)
\]
is surjective for $\nu$ in this same range.
We claim that for $\gamma$ sufficiently close to $1$, and for $\nu \in (\eta, k/2 - \eta)$, where $\eta > 0$
is some small fixed number, the mapping
\[
L_\gamma: {\mathcal C}^{2,\alpha}_{\nu}(M \setminus \Lambda) \longrightarrow {\mathcal C}^{0,\alpha + 2(1-\gamma)}_{\nu -2\gamma}
(M \setminus \Lambda)
\]
is also bounded and surjective. The boundedness follows by an interpolation argument. Indeed, the spaces ${\mathcal C}^{k,\alpha}_0$
have interpolation properties which are identical to those for the ordinary H\"older spaces since they \emph{are} just
the standard H\"older spaces for the complete metric $\tilde{g} = \overline{g}/r^2$; a minor adjustment shows that the
addition of the weight factor behaves as expected. The assertion about the boundedness of $L_\gamma$
is clearly true for $\gamma = 0, 1, 2$, and hence by interpolation is true for all $\gamma$ close to $1$.
(It is true for the full range of $\gamma \in (0,2)$ if one makes the standard change, replacing the H\"older
space by a Zygmund space, when $\alpha + 2(1-\gamma)$ is an integer.) This also follows from
\cite{Mazzeo:edge-operators} because $r^{2\gamma}L_\gamma$ is a pseudodifferential edge operator of
order $2\gamma$. Similarly, surjectivity follows from the construction of a parametrix for $L_\gamma$
in the edge calculus, from \cite{Mazzeo:edge-operators} again. This proves that $L_\gamma$ is Fredholm,
and since it is surjective at $\gamma = 1$, it must remain surjective for values of $\gamma$ which are
close to $1$. We write its right inverse as $G_\gamma$.
Now consider the mapping
\[
(\gamma, c, v) \longmapsto N(\gamma,c,v) := G_\gamma
(P_\gamma^{\overline{g}} cr^{\gamma - n/2} (1 + v) - (c r^{\gamma - n/2}(1+v))^{\frac{n+2\gamma}{n - 2\gamma}}).
\]
If $u_1 = c_1 r^{1-n/2}(1+v_1)$ is the solution to the singular Yamabe problem from \cite{MP}, then $N(1,c_1,v_1) = 0$.
Let $c\in (c_1 -\varepsilon, c_1 + \varepsilon)$, and let $v - v_1$ lie in a ball of radius $\varepsilon$ about $0$ in
${\mathcal C}^{2,\alpha}_\nu$. Clearly $\left. D_v N\right|_{(1,c_1,v_1)} = G_1 L_1 = \mbox{Id}$. The implicit function theorem now
applies to show that for every $(\gamma,c)$ near to $(1,c_1)$, there exists a unique $v_\gamma \in {\mathcal C}^{2,\alpha}_\nu$
with norm less than $\varepsilon$ such that
$u_\gamma = c r^{\gamma - n/2} (1 + v_\gamma)$ is a solution of the fractional singular Yamabe problem with singular set $\Lambda$.
\section{Growth estimates for weak solutions on $S^n$}
In this final section we furnish the proof of Proposition \ref{prop:growth}: if $\gamma \in (0,1)$ and $\Omega \subset S^n$ is dense, then
any weak solution of the fractional singular Yamabe problem
\beta\label{Yamabe1}
P_\gamma^{g_c}(u)=u^{\frac{n+2\gamma}{n-2\gamma}} \ \ \mbox{ in }S^n,\quad u>0, \quad u \mbox{ singular along }S^n\backslash\Omega
\end{equation}
satisfies a general growth estimate. This is a direct adaptation of Schoen's proof (which is written out in full in
\cite{Pollack:compactness}) for the case $\gamma = 1$.
We first comment on the local regularity for solutions of \eqref{problem3}. There are several ways to deduce the
necessary estimates. The path we follow uses the equivalence, as described in Proposition \ref{prop:extension}, of
\eqref{problem3} with the extension problem \eqref{divergence-equation}:
\beta\label{divergence-equation2}
\left\{\begin{split}
-\text{div} \left( x^{1-2\gamma} \nabla U\right) + E(x) U &=0, \quad \mbox{in } (X,\bar G),\\
-x^{1-2\gamma}\partial_x U&=c_{n,\gamma} U^{\frac{n+2\gamma}{n-2\gamma}}, \quad\mbox{on } x=0;
\end{split}\right.
\end{equation}
here $U=x^{\frac{n}{2}-\gamma} u$, $\bar G=x^2 G$.
From this point of view, we can use the linear regularity theorem \cite[Theorem 7.14]{Mazzeo:edge-operators} to
prove that $U$ is smooth up to $x=0$ away from $\Lambda$. This can also be deduced using standard elliptic
estimates for the pseudodifferential operator $P_\gamma^{g_c}$, but it can also be deduced from more classical sources,
in particular \cite{Fabes-Kenig-Serapioni:local-regularity-degenerate};
see also the more recent references \cite{Cabre-Sire:I} (where many properties of the solution are written down) and
\cite{fractional-Yamabe} (which applies to more general ambient metrics). In particular, from these last papers, one
has that Schauder and local $L^p\to L^\infty$ estimates hold, and that the equation satisfies the standard maximum principles.
Fix $z_0\not\in\Lambda$ and choose $\sigma < \dist_{g_c}(z_0,\Lambda)$. For simplicity, write $\rho(z) :=\dist_{g_c}(z, z_0)$. Now define
\[
f(z):=\left(\sigma-\rho(z)\right)^{\frac{n-2\gamma}{2}}u(z);
\]
note that $f = 0$ on $\partial B_\sigma(z_0)$.
It suffices to show that $f(z)\leq c$ for some $c > 0$ and for all $z\in B_\sigma(z_0)$ since if we choose $\sigma=\dist(z_0,\Lambda)/2$,
then $f(z_0)=\sigma^{\frac{n-2\gamma}{2}} u(z_0)$, and hence
\[
u(z_0)\leq \frac{c}{d(z_0,\Lambda)^{\frac{n-2\gamma}{2}}},
\]
which would finish the proof.
We prove this claim by contradiction. Assume that no such $c$ exists. Then there exists a sequence
$\{u_m,\Lambda_m, \sigma_m, z_{0,m}, z_m\}$ such that for all $m$, $f_m$ attains its maximum
in $B_{\sigma_m}(z_{0,m})$ at $z_m$, and
\[
f_m(z_m):=(\sigma_m-\dist(z_m, z_{0,m}))^{\frac{n-2\gamma}{2}} u_m(z_m) > m.
\]
Since $(\sigma_m-\dist(z_m, z_{0,m}))^{\frac{n-2\gamma}{2}}\leq \sigma_m^{\frac{n-2\gamma}{2}} \leq C$ for all $m$,
we see that necessarily $u_m(z_m)\to\infty$.
Let $z$ be a system of Riemann normal coordinates centered at $z_m$, so that the corresponding metric coefficients
satisfy $(g_c)_{ij}=\delta_{ij}+O(\abs{z}^2)$. (As $m$ varies, these coordinate systems also vary, but there is no
reason to include this in the notation.) Set $\lambda_m=\left( u_m(z_m)\right)^{\frac{2}{n-2\gamma}}$; we consider the dilated
coordinate system $\zeta =\lambda_m z$, the corresponding sequence of metrics $\hat g_m$, where
$$
g_c=\lambda_m^{-2} \sum_{i,j=1}^n (g_c)_{ij}(\zeta/\lambda_m) d\zeta^i d\zeta^j:= \lambda_m^{-2} \hat g_m,
$$
and finally the dilated family of solutions
$$
v_m(\zeta):=\lambda_m^{-\frac{n-2\gamma}{2}} u_m\left( \frac{\zeta}{\lambda_m}\right).
$$
By construction, $v_m(0) = 1$ for all $m$, and
$$
g_m:=u_{m}^{\frac{4}{n-2\gamma}}g_c=v_{m}^{\frac{4}{n-2\gamma}}\hat g_m.
$$
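The normalization of $v_m$ is chosen precisely so that the equation is preserved under this dilation: $\Delta^\gamma$ scales by $\lambda_m^{2\gamma}$, while the nonlinearity scales by the corresponding power of the conformal factor. A one-line bookkeeping check (ours, not from the paper):

```python
from fractions import Fraction

# Sketch (ours): exponent bookkeeping for the dilation zeta = lambda_m * z and
# v_m = lambda_m^{-(n-2*gamma)/2} u(./lambda_m).  Delta^gamma contributes an
# extra factor lambda_m^{-2*gamma}; both sides of the equation must then carry
# the same total power of lambda_m.
n, g = Fraction(7), Fraction(1, 2)                    # sample values, n > 2*gamma
lhs = -(n - 2 * g) / 2 - 2 * g                        # from Delta^gamma v_m
rhs = (-(n - 2 * g) / 2) * (n + 2 * g) / (n - 2 * g)  # from v_m^((n+2g)/(n-2g))
assert lhs == rhs == -(n + 2 * g) / 2
```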
We show below that $\hat{g}_m$ and $v_m$ are defined on an expanding sequence of balls on $\mathbb R^n$, and it is then
clear that $\hat{g}_m$ converges to the Euclidean metric uniformly in ${\mathcal C}^\infty$ on any compact subset.
Let $\rho_m(z) := \dist_{g_c}(z, z_{0,m})$ and $r_m = \frac12 (\sigma_m - \rho_m(z_m))$, or equivalently, $\rho_m(z_m) + 2r_m = \sigma_m$. Then
\[
\sigma_m - \rho_m(z) \geq \sigma_m - \rho_m(z_m) - r_m = r_m
\]
on the ball $\mbox{dist}_{g_c}(z,z_m) < r_m$, and hence on this same ball,
\[
u_m(z) \leq \left(\frac{\sigma_m - \rho_m(z_m)}{\sigma_m - \rho_m(z)}\right)^{\frac{n-2\gamma}{2}} u_m(z_m) \leq c \, u_m(z_m),
\quad c=2^{\frac{n-2\gamma}{2}}.
\]
The corresponding ball in rescaled coordinates contains
$\{\zeta: |\zeta| < m^{\frac{2}{n-2\gamma}}\}$, hence has radius tending to infinity. By construction, $v_m(\zeta)\leq c$ on
this entire ball, and $v_m(0) = 1$. Since these functions are uniformly bounded and satisfy the converging
set of elliptic pseudodifferential equations
\beta
\label{problem1}P_{\gamma}^{\hat g_m} v_m=v_m^{\frac{n+2\gamma}{n-2\gamma}},
\end{equation}
we conclude using the local regularity theory (which is straightforward since $v_m$ is bounded) that $v_m$ is
bounded in ${\mathcal C}^{2,\alpha}$ of every compact set, and hence we can extract a convergent subsequence.
We thus obtain a smooth solution $v$ to the `flat' equation
\beta
\label{problem2}(\Delta_{\mathbb R^n})^\gamma v=v^{\frac{n+2\gamma}{n-2\gamma}}\quad \mbox{in }\mathbb R^n.
\end{equation}
Since each $v_m > 0$, we see that $v \geq 0$, but $v \not\equiv 0$ since $v(0) = 1$. There is a maximum
principle for this equation \cite[Corollary 3.6]{fractional-Yamabe} when $0 < \gamma \leq 1$, so we
conclude that $v > 0$ on all of $\mathbb R^n$.
There is a complete characterization of positive solutions of \eqref{problem2} (see \cite[\S 5]{fractional-Yamabe}).
They are the extremal functions for the
embedding $H^\gamma(\mathbb R^n)\hookrightarrow L^{\frac{2n}{n-2\gamma}}(\mathbb R^n)$, and are necessarily of the form
$$
v(z)=C\left(\frac{\mu}{\abs{z-z_0}^2+\mu^2}\right)^{\frac{n-2\gamma}{2}},
$$
for some $\mu, C>0$ and $z_0\in\mathbb R^n$ (these are the well-known ``bubbles'').
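For $\gamma = 1$ these are the classical Aubin-Talenti bubbles, and the claim can be verified symbolically; the sketch below (ours, not from the paper) uses the normalization $C = (n(n-2))^{(n-2)/4}$, which is specific to $\gamma = 1$, and writes the Laplacian with the analyst's sign, so the equation reads $-\sum_i \partial_i^2 v = v^{(n+2)/(n-2)}$:

```python
import sympy as sp

# Sketch (ours): for gamma = 1, n = 3, check that the radial bubble
# v = C * (mu/(r^2 + mu^2))^((n-2)/2), with C = (n(n-2))^((n-2)/4) and z_0 = 0,
# satisfies -(v'' + (n-1)/r * v') = v^((n+2)/(n-2)).
r, mu = sp.symbols('r mu', positive=True)
n = 3
C = (n * (n - 2)) ** sp.Rational(n - 2, 4)
v = C * (mu / (r**2 + mu**2)) ** sp.Rational(n - 2, 2)

lhs = -(sp.diff(v, r, 2) + (n - 1) / r * sp.diff(v, r))  # -Laplacian of radial v
rhs = v ** sp.Rational(n + 2, n - 2)
assert sp.simplify(lhs - rhs) == 0
```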
The argument is completed using Theorem \ref{thm:convex-boundary} below, which guarantees that any small ball in $\Omega_m=S^n\setminus\Lambda_m$ has geodesically convex boundary with respect to $g_m$; for $m$
sufficiently large this contradicts the already known limiting form of the $v_m$. The proof of Proposition \ref{prop:growth} is thus completed.
\qed\\
Note that the previous arguments do not require that $u$ be a weak solution on all of $S^n$. The only place where this stronger hypothesis is required is in the following convexity claim:
\begin{thm}\label{thm:convex-boundary}
Under the same hypotheses as in Proposition \ref{prop:growth}, any open ball $B$ (with respect to $g_c$) with $\bar B\subset\Omega$ has boundary $\partial B$ which is geodesically convex with respect to $g$.
\end{thm}
This result was proved in \cite{Schoen:number-metrics} for constant scalar curvature metrics, and also in the case
$\gamma\in(1,n/2)$ for locally conformally flat manifolds satisfying some extra conditions by \cite{Qing-Raske}.
The crucial step is the application of the Alexandroff moving plane method. As we show here, the same ideas work
in the fractional case. The moving plane method has been successfully applied to fractional order operators in
\cite{Qing-Raske} and \cite{Chen-Li-Ou:classification-solutions}, at least when the equation is rewritten as an
integral equation. However, the proof in the present setting is simpler because of the equivalent formulation \eqref{divergence-equation2} and the precise asymptotics
\eqref{behavior-infinity}, so we include the details for the reader's convenience. Our proof follows the classical
arguments for the Laplacian by Gidas-Ni-Nirenberg in \cite{Gidas-Ni-Nirenberg},
\cite{Gidas-Ni-Nirenberg:symmetry-max-principle}.
For simplicity, we denote $P_\gamma:=P^{|dx|^2}_\gamma$. Let $v$ be a distributional solution of
\beta\label{equation-euclidean}P_\gamma v=v^{\frac{n+2\gamma}{n-2\gamma}}\quad\mbox{in }\mathbb R^n.\end{equation}
We will apply the Alexandroff reflection with respect to the planes $S_\lambda:=\{x\in\mathbb R^n: x^n=\lambda\}$.
Let $\Sigma_\lambda:=\{x\in\mathbb R^n \,: \,x^n>\lambda\}$ be the half-space lying above $S_\lambda$. Given $x=(x^1,\ldots,x^n)\in\Sigma_\lambda$, define $x^\lambda$ to be the reflection of $x$ with respect to the hyperplane $S_\lambda$, i.e., $x^\lambda:=(x^1,\ldots,x^{n-1},2\lambda-x^n)$. We also define $v_\lambda(x):=v(x^\lambda)$ and
$$w_\lambda(x):=v_\lambda(x)-v(x).$$
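Before entering the proof it may help to see the reflection in action; the following sketch (ours, not from the paper) checks numerically that for a model profile decaying like $|x|^{2\gamma-n}$ and radially decreasing, the difference $w_\lambda$ is positive on the half-space above $S_\lambda$ once $\lambda > 0$:

```python
import numpy as np

# Sketch (ours): Alexandroff reflection for the model profile
# v(x) = (1 + |x|^2)^((2*gamma - n)/2), which is radially decreasing.
# For points with x_n > lambda > 0 the reflected point x^lambda is closer
# to the origin, so w_lambda = v(x^lambda) - v(x) > 0 there.
rng = np.random.default_rng(0)
n, gamma, lam = 4, 0.7, 1.5

x = rng.normal(size=(1000, n))
x[:, -1] = lam + np.abs(x[:, -1]) + 1e-9   # force x_n > lambda strictly

x_refl = x.copy()
x_refl[:, -1] = 2 * lam - x[:, -1]         # reflection across {x_n = lambda}

v = lambda z: (1 + np.sum(z**2, axis=1)) ** ((2 * gamma - n) / 2)
w_lam = v(x_refl) - v(x)
assert np.all(w_lam > 0)
```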
Note that the equation satisfied by $v_\lambda$ is the same as the one satisfied by $v$. Although this fact is not obvious for non-local operators, it is easily seen to be true using the Caffarelli-Silvestre extension \eqref{divergence-equation2}. Then, by linearity,
\beta\label{difference-Alexandroff}P_\gamma w_\lambda= v_\lambda^{\frac{n+2\gamma}{n-2\gamma}}-v^{\frac{n+2\gamma}{n-2\gamma}}, \mbox{ weakly}.\end{equation}
We will need a couple of preliminary results:
\begin{lemma}\label{lemma:start}
Let $v$ be any function with asymptotics
\beta\label{behavior-infinity} v(x)={|x|}^{2\gamma-n}\left( a+\sum_{i=1}^n \frac{b_i x^i}{|x|^2}+O\left(|x|^{-2}\right)\rp \quad\mbox{when } |x|\to\infty,\end{equation}
for some $a>0$. Then there exists $\lambda_0>0$ such that for all $\lambda\geq\lambda_0$,
$$w_\lambda(x)>0\quad\mbox{for all } x\in\Sigma_\lambda.$$
\end{lemma}
\begin{proof}
This is just Lemma 2.2 in \cite{Gidas-Ni-Nirenberg}, and it does not use \eqref{equation-euclidean}.
\end{proof}
\begin{lemma}\label{lemma:claim}
Let $v$ be a weak solution of \eqref{equation-euclidean}.
If for some $\lambda<\lambda_0$ we have that $w_\lambda(x)\geq 0$ but $w_\lambda\not\equiv 0$ in $\Sigma_\lambda$, then
\beta\label{properties}w_\lambda(x)>0 \mbox{ in }\Sigma_\lambda\quad \mbox{ and }\quad\partial_n v(x)<0 \mbox{ on }S_\lambda.\end{equation}
\end{lemma}
\begin{proof}
When $v$ solves the constant scalar curvature equation, this is just Lemma 2.2 and Lemma 4.3 in
\cite{Gidas-Ni-Nirenberg:symmetry-max-principle}. In our case, we need a strong maximum principle and Hopf's lemma
for the operator $P_\gamma$ (see \cite{fractional-Yamabe}, \cite{Cabre-Sire:I}). We know from \eqref{difference-Alexandroff}
that $P_\gamma w_\lambda\geq 0$. Since $w_\lambda\geq 0$ in $\Sigma_\lambda$ (and is not identically zero), and
since $w_\lambda$ vanishes on the boundary $S_\lambda$, the strong maximum principle gives that $w_\lambda>0$
in all of $\Sigma_\lambda$. On the other hand, Hopf's lemma implies that $\partial_n w_\lambda>0$ on $S_\lambda$.
Then, $\partial_n w_\lambda=\partial_n v_\lambda-\partial_n v=-2\partial_n v$, so we immediately have $\partial_n v<0$
along $S_\lambda$.
\end{proof}
\noindent\emph{Proof of Theorem \ref{thm:convex-boundary}: } Let $g$ be a complete metric of constant positive scalar
$Q_\gamma$ curvature on $\Omega\subset S^n$ of the form $g=u^{\frac{4}{n-2\gamma}}g_c$, and $B$ an open ball in $\Omega$.
Let $S=\partial B$ be the boundary sphere. Fix any point $p \in S$. Use stereographic projection to map $\Omega$ into
$\tilde\Omega\subset\mathbb R^n$ so that $p$ is mapped to infinity. Then $S$ is transformed to a hyperplane $\tilde S$,
and the projected $\partial\tilde\Omega$ lies on one side of $\tilde S$, say below. Use linear coordinates
$(x^1,\ldots,x^n) \in \mathbb R^n$ with $\tilde S = \{x^n=0\}$.
By stereographic projection, the metric $g$ transforms to a conformally flat metric on $\mathbb R^n$,
$g_v=v^{\frac{4}{n-2\gamma}}\abs{dx}^2$. Since the $Q_\gamma$ curvature equation is conformally covariant, we also have
$$
\Delta^\gamma v= v^{\frac{n+2\gamma}{n-2\gamma}},
$$
on $\tilde\Omega\subset \mathbb R^n$. Note that $v$ is a weak solution of this equation on all of $\mathbb R^n$.
Since the function $u$ is smooth and strictly positive at $p$, the function $v$ is regular at infinity, i.e.\ has the asymptotics
\eqref{behavior-infinity} for some $a>0$ as $|x|\to\infty$.
\emph{Step 1. Starting the reflection:}
Thanks to Lemma \ref{lemma:start}, we can initiate the reflection argument when $\lambda$ is sufficiently large.
Note that the equation satisfied by $v$ is not needed here since we have the precise behavior \eqref{behavior-infinity}.
\emph{Step 2. Continuation:} We now move the plane $S_\lambda$, so long as it does not touch the singular set.
Suppose that at some $\lambda_1>0$ we have $w_{\lambda_1}(x)\geq 0$ for all $x\in\Sigma_{\lambda_1}$, but $w_{\lambda_1}\not\equiv 0$
in $\Sigma_{\lambda_1}$. Then the plane can be moved further; more precisely, there exists some $\epsilon>0$ not depending
on $\lambda_1$ such that $w_\lambda\geq 0$ in $\Sigma_\lambda$ for all $\lambda\in[\lambda_1-\epsilon,\lambda_1]$.
We observe first that because of Lemma \ref{lemma:claim} we must have
\beta\label{moving0}
\partial_n v<0 \quad \mbox{on }\Sigma_{\lambda_1}.
\end{equation}
Next, the proof of our claim follows as in Lemma 2.3 in \cite{Gidas-Ni-Nirenberg} by contradiction. Thus, assume that
there is a sequence $\lambda_j\to \lambda_1$ and a sequence of points $\{x_j\}$, $x_j\in\Sigma_{\lambda_j}$ such that
\beta\label{moving1}w_{\lambda_j}(x_j)\leq 0.\end{equation}
Either a subsequence, which we again call $\{x_j\}$, converges to $x_\infty\in\Sigma_{\lambda_1}$ or else $x_j\to\infty$.
In the first case, because of \eqref{moving1} we must have $\partial_n v(x_\infty)\leq 0$, thus contradicting \eqref{moving0}.
So $x_j\to\infty$. But in this second case we may use the asymptotics for $v$ from \eqref{behavior-infinity}, which imply
$$
\frac{|x_j|^n}{\lambda_j-x_j^n} w_{\lambda_j}(x_j) \to -(n-2\gamma) a<0.
$$
This is a contradiction to \eqref{moving1}.
Finally, note that in this process we never have $w_\lambda \equiv 0$, since the existence of the singularity of $v$
implies that it has no plane of symmetry. Hence the plane can be moved all the way down to $\lambda=0$.
\emph{Step 3. Conclusions:} We have shown that the plane $S_\lambda$ can be moved all the way to $\lambda = 0$, and then
$w_0(x)>0$ for all $x\in\Sigma_0$ and
$$
\partial_n v(x)<0\quad\mbox{for all }x\in\overline{\Sigma_0}.
$$
Since $g=v^{\frac{4}{n-2\gamma}}|dx|^2$, the second fundamental form of any plane $S_\lambda$, $\lambda\geq 0$,
with respect to $g$ is given by
$$
\left( -\frac{4}{n-2\gamma} v^{-1}\frac{\partial v}{\partial x^n} I\right).$$
The sign of $\partial_n v$ therefore implies that $S_\lambda$ is locally geodesically convex for all $\lambda\geq 0$.
When transferred back to the sphere, this shows that any round ball contained in $\Omega$ has locally geodesically
convex boundary, as claimed.
\qed
\medskip
\textbf{Acknowledgements: }
M.d.M. Gonz\'alez is supported by Spain Government project MTM2008-06349-C03-01 and GenCat
2009SGR345. R. Mazzeo is supported by the NSF grant DMS-0805529.
Y. Sire would like to thank the hospitality of the Department of Mathematics at Stanford University.
\bibliographystyle{abbrv}
\section{Introduction}
The infrared regime of
Lattice Gauge Theories (LGT) in the confining phase displays a large
degree of universality.
The main evidence in favour of this universality
is given by the behaviour of the Wilson loop and of the
dimensionless ratio $T_c^2/\sigma$ (where $T_c$ denotes the
deconfinement temperature), which are roughly independent
of the choice of the gauge group and show a rather
simple dependence on the number of space-time
dimensions.
Both these behaviours are commonly understood as
consequences of the fact that the relevant degrees of freedom in the
confining regime are string-like excitations.
The phenomenological models which take this string-like picture
into account are usually known as ``flux-tube'' models and turn out to give a very
good description of the Wilson loop behaviour
(see for instance~\cite{ip} and
references therein).
The reason for this success is that the interquark potential provides
a natural scale, the string tension, which allows one to define in a
rather precise way a large-distance (``infrared'') regime in which the
adiabatic approximation for the string-like excitations can be trusted.
This regime can be reached by considering interquark distances large in
units of the string tension.
Besides the Wilson loop and the deconfinement temperature, another important set
of physical observables in LGT is represented by the glueball spectrum.
In this case it is less obvious that a string like description could be used to
understand the data.
However, a string-inspired model exists also for the glueball
spectrum: the Isgur--Paton model~\cite{ip} (IP in the following).
For this reason it would be very interesting to test whether the same universality
(which, as mentioned above, should manifest itself as a substantial independence
of the choice of the gauge group) displayed by the Wilson loops also
holds for the glueball spectrum.
In this case we do not have
the equivalent of the interquark distance, {\em i.e.}
a parameter which can be adjusted to select the infrared region:
the role of large Wilson loops is played by the higher states of the
spectrum which, being localized in larger space regions, are expected to
show more clearly a string-like behavior.
A major problem in this respect is the lack of
precise and reliable data for these higher states.
An obvious proposal to overcome this problem is to begin with
the (2+1)
dimensional case, for which some relevant simplifications occur in the spectrum
and a much higher precision can be achieved in the Monte Carlo
simulations.
Following this suggestion we have studied in~\cite{acch} the glueball spectrum
in the case of the (2+1) Ising gauge
model
obtaining high-precision estimates of the first 11 states of the spectrum.
These can be compared with some results for the (2+1) dimensional SU(2) model
obtained with Monte Carlo simulations~\cite{tep} and with
variational techniques~\cite{arisue}.
The comparison between the SU(2) and Ising spectra shows that
not only is the pattern of the states the same in the two models, but also
the values of the masses (except for the lowest state) are in remarkable
agreement.
This is strong evidence in favour of the above mentioned universality, and
suggests that the higher states of the glueball spectrum of any LGT,
(as it happens for the behaviour of large enough Wilson loops)
can be predicted by some relatively simple flux-tube inspired model.
In the next section we give some general information on the model and on
the algorithm that we used (we refer to~\cite{acch} for further details). Sect.~3
is devoted to a discussion of the glueball spectrum and to a comparison with the
SU(2) results and with the IP predictions. Finally, in the last section we
make some concluding remarks on the duality transformation of the glueball
spectrum.
\section{The Ising gauge model}
The Ising gauge model is defined by the action
\begin{equation}
S_{gauge} = - \beta \;\; \sum_{n,\mu<\nu} \;\; g_{n;\mu\nu}
\label{Sgauge}
\end{equation}
where
$g_{n;\mu\nu}$ are the
plaquette variables, defined in terms of the link
fields $g_{n;\mu} \in \{-1,1\}$ as:
\begin{equation}
g_{n;\mu\nu}=g_{n;\mu} \; g_{n + \mu;\nu} \;
g_{n + \nu;\mu} \; g_{n;\nu}~~~.
\label{plaq}
\end{equation}
Here $ n \equiv (\vec x,t)$
denotes the space-time position of the link and $\mu$ its direction.
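For concreteness, the plaquette variable and the action above translate directly into code. The following sketch (the lattice size, the value of $\beta$, and the random link configuration are purely illustrative choices, not the simulation parameters used in the paper) evaluates $S_{gauge}$ on a small periodic $(2+1)$-dimensional lattice:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4  # lattice extent in each of the 3 space-time directions (illustrative)

# Link fields g[n, mu] in {-1, +1}; n = (x, y, t) and mu = 0..2 is the direction.
g = rng.choice([-1, 1], size=(L, L, L, 3))

def plaquette(g, n, mu, nu):
    """g_{n;mu nu} = g_{n;mu} g_{n+mu;nu} g_{n+nu;mu} g_{n;nu}, periodic b.c."""
    n = np.array(n)
    n_mu = n.copy(); n_mu[mu] = (n_mu[mu] + 1) % L
    n_nu = n.copy(); n_nu[nu] = (n_nu[nu] + 1) % L
    return (g[tuple(n)][mu] * g[tuple(n_mu)][nu]
            * g[tuple(n_nu)][mu] * g[tuple(n)][nu])

def action(g, beta):
    """S = -beta * sum over sites n and ordered pairs mu < nu of plaquettes."""
    s = 0
    for n in np.ndindex(L, L, L):
        for mu in range(3):
            for nu in range(mu + 1, 3):
                s += plaquette(g, n, mu, nu)
    return -beta * s

print(action(g, beta=0.75))
```

Each plaquette takes values $\pm 1$; for the ordered configuration with all links equal to $+1$ the action reduces to $-\beta$ times the number of plaquettes, $3L^3$.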
For the Ising model, as for the SU(2) model, we cannot define a charge
conjugation operator. The glueball states are thus labelled only by their
angular momentum $J$ and by their parity eigenvalue $P=\pm$.
The standard notation is $J^{P}$. An important simplification due to the fact
that we are working in (2+1) dimensions is that in this case
all the states with
angular momentum different from zero are degenerate in parity. Namely $J^+$ and
$J^-$ (with $J\neq 0$) must have the same mass. This result holds in the
continuum limit. The lattice discretization breaks this degeneration, since in
this case the symmetry group is only the $D_4$ (dihedral) group. In particular
it can be shown that the degeneration still holds on the lattice
for all the odd $J$ states, and is lifted for all the even $J$ states
(see~\cite{acch} for details). An important test of the whole analysis is to see
if the degeneration of the even part of the spectrum is recovered in the
continuum limit. There are two other important consequences
of the fact the symmetry group is reduced to $D_4$. The first one is that only
operators with angular momentum $J~(mod(4))$ can be
observed and the second is that all the odd $J$ states are grouped together
in the same irreducible representation of $D_4$.
This means that we cannot distinguish among
them on the basis of the lattice symmetries.
Hence in the
following we shall denote the states belonging
to this channel as $J=1/3$ states.
The simulations were performed
with a local demon algorithm using a multi-spin-coding technique.
Variational
techniques were used to
obtain accurate results
for the higher states of the spectrum in each channel.
We studied five different values of $\beta$
in the scaling region, and carefully tested that all the states of
the spectrum followed the expected scaling behaviour. As mentioned above
we obtained reliable estimates for 11 masses.
\section{Results and comparison with SU(2) and with the IP model.}
A first unambiguous result of the simulation is that the parity degeneracy is
indeed recovered in the continuum limit also in the even-$J$ sector. Taking
into account this degeneracy we end up with 8 independent states in the
continuum limit. Their masses in the continuum limit
(measured in units of the string tension
$\sqrt\sigma$) are listed in tab.1, where they are compared with the
corresponding results for SU(2) obtained in~\cite{tep} and with the predictions
of the IP model.
\begin{table}[ht]
\label{ip5f}
\caption{\sl Comparison between the Ising, SU(2) and IP spectra.}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
$J^P$ & Ising & SU(2) & IP \\
\hline
$0^+$ & 3.08(3)& 4.763(31) & 2.00 \\
$(0^+)'$ & 5.80(4)& & 5.94 \\
$(0^+)''$ & 7.97(11)& & 8.35 \\
$2^{\pm}$ & 7.98(8) & 7.78(10)& 6.36 \\
$(2^\pm)'$ & 9.95(20)& & 8.76 \\
$0^-$ & 10.0(5) & 9.90(27) & 13.82 \\
$(0^-)'$ & 13.8(6) & & 15.05 \\
$(1/3)^\pm$ & 12.7(5)& 10.75(50) & 8.04 \\
\hline
\end{tabular}
\end{center}
\end{table}
Looking at tab.1 we see that the biggest discrepancy in the mass values is for
the lowest state, which is predicted to be too light in the IP model, and turns
out to be very different in the Ising and SU(2) cases. This is due first
to the lack of validity of the adiabatic approximation at small scales and
second to the fact that in the IP model an ``ideal'' picture of
the string (without self-repulsion terms) is assumed for the flux tube.
Apart from this state, in the remaining part of the spectrum
we immediately see an impressive agreement between the Ising and SU(2) spectra.
This agreement is further improved by looking at the excited states in the
$(0^+)$ channel for the SU(2) model. In~\cite{arisue} a variational estimate
for these masses can be found (to our knowledge
no Monte Carlo estimate exists for them). In tab.2 we compare these values with
the Ising ones. While the two sets of excited states disagree if measured in
units of $\sqrt\sigma$, they agree if measured in units of the $0^+$ mass.
Moreover a better and better agreement is observed if ratios of
higher masses are considered.
\begin{table}[ht]
\label{ip5g}
\caption{\sl The $0^+$ channel.}
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
ratio & Ising & SU(2) \\
\hline
$(0^+)'/0^+$ & 1.88(2)& 1.77(2) \\
$(0^+)''/0^+$ & 2.59(4)& 2.50(5) \\
$(0^+)''/(0^+)'$ & 1.37(4) & 1.41(4) \\
\hline
\end{tabular}
\end{center}
\end{table}
We can conclude from these data that the qualitative features of the
glueball spectrum are largely independent of the gauge group and
well described by a flux-tube effective model.
While the higher states of the spectrum show a remarkable
independence of the gauge
group, for the lowest state the flux tube picture
breaks down and
the gauge group becomes important. The IP model, which is the simplest
possible realization of such a flux
tube, seems able to catch (at least at a qualitative level)
some of the relevant features
of the glueball spectrum.
\section{Duality.}
Another important reason for interest in the gauge Ising model is that it is
related
by duality to the ordinary $3D$ spin Ising model. As a consequence,
one expects the glueball spectrum to be mapped onto the spectrum of
massive excitations of the spin
model. However, this mapping has some non-trivial
features. While the lowest state $0^+$ is
mapped onto the (inverse) correlation length of the spin model
(see~\cite{acch,ch} for details),
it is not clear what the dual
partners of the higher states of the spectrum are. In fact, one naively expects that
the spin Ising model should be described in the scaling region by a $\phi^4$
theory, whose spectrum however contains just one state.
As a matter of fact the higher states of the gauge Ising model
correspond to ``disorder'' variables in the spin Ising model which are
{\sl non-local} with respect to the ``order'' variable (the $\phi$ field).
The overlap between these mutually non-local observables becomes
vanishingly small in the continuum limit (even if it is non-zero on any finite
lattice), thus explaining the difference between the two dual spectra.
A detailed comparison of correlation functions in the Ising spin model and in
the $\phi^4$
theory seems to confirm this picture~\cite{chp}.
\section{Introduction}
Previous models of coagulation and fragmentation of an ensemble of dust aggregates often assumed that a collision between particles of particular sizes occurs with a single velocity. We develop a more physically-motivated model for the collision velocities.
\section{Principles behind the new physically-motivated model}
\begin{figure}
\begin{center}
\includegraphics[width=0.6\columnwidth]{Meru_fig1.ps}
\end{center}
\caption{Schematic diagram showing the collisional outcomes of
sticking and bouncing at low velocities, and fragmentation and mass
transfer at high velocities. The PDF has an expectation in one zone
but extends into others.}
\label{fig:schematic}
\end{figure}
The key features are: (i) a particle's velocity is \emph{not} single-valued but is described by a probability distribution function (PDF); (ii) a particle's velocity in any direction is a Gaussian with a mean given by the deterministic velocity in that direction (i.e. radial drift, azimuthal drift or vertical settling) and a standard deviation given by the stochastic velocities (i.e. turbulence \& Brownian motion). The analytical 1D PDF of \emph{relative velocities} in each direction is produced for each pair of particles, and combined to give a 3D collision velocity PDF.
We assume that collisions with velocities lower than the bouncing velocity, $\vb$, lead to growth while those higher than the fragmentation velocity, $\vf$, break apart if their mass ratio is smaller than the \emph{mass transfer} parameter and stick if it is larger. Physically this means that collisions between unequal-sized aggregates are likely to lead to growth while equal-sized aggregates are likely to fragment - a result shown by experiments (e.g. \cite{Teiser_Wurm_highVcoll}) and simulations (\cite{Velocity_thresholds}). At all other velocities the aggregates bounce. Fig.~\ref{fig:schematic} shows that the final 3D PDF may cover any of these collisional outcomes - these are used to simulate the local size evolution of an ensemble of particles in a disc.
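The construction of the 3D collision-velocity PDF and the classification of outcomes can be sketched numerically: each 1D relative velocity is drawn from a Gaussian whose mean is the deterministic drift and whose width is the stochastic velocity, and the resulting speeds are sorted into outcome zones. All numbers below (drift components, dispersion, and the $\vb$, $\vf$ thresholds) are illustrative stand-ins, not values from the paper, and the mass-ratio branch of the fragmentation/mass-transfer criterion is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers only: deterministic relative velocity per direction
# (radial drift, azimuthal drift, vertical settling) and a common stochastic
# standard deviation (turbulence + Brownian motion), in arbitrary units.
v_det = np.array([0.5, 0.2, 0.1])   # mean of each 1D Gaussian
sigma = 0.4                          # stochastic width
v_b, v_f = 0.3, 1.0                  # bouncing / fragmentation thresholds

# Each 1D relative velocity is Gaussian; the 3D collision speed is the norm
# of the three components.
v = rng.normal(loc=v_det, scale=sigma, size=(100_000, 3))
speed = np.linalg.norm(v, axis=1)

# Classify outcomes: growth below v_b, bouncing between v_b and v_f,
# fragmentation (or mass transfer, depending on mass ratio) above v_f.
f_stick = np.mean(speed < v_b)
f_bounce = np.mean((speed >= v_b) & (speed <= v_f))
f_frag = np.mean(speed > v_f)
print(f_stick, f_bounce, f_frag)
```

Because the PDF has finite width, a single particle pair samples all three outcome zones with non-zero probability, which is the key difference from single-valued collision-velocity models.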
\section{Application to T Tauri and brown dwarf discs}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\columnwidth]{Meru_fig2a.ps}
\hspace{1.0cm}
\includegraphics[width=0.3\columnwidth]{Meru_fig3.eps}
\caption{Surface mass density distribution of particles. Left: at 1au for a model (i) without a velocity PDF (Model O), (ii) where the stochastic velocities dominate over the deterministic ones (Model M), and (iii) the new model (Model N). Right: for the brown dwarf disc (solid line) and T Tauri disc (dotted line) at 2/3 of the distance to the outer radius.}
\label{fig:TT}
\end{center}
\end{figure}
We simulate grain growth locally in a disc with mass $\Mdisc = 0.05 \Mstar$ around a $0.75 \Msolar$ star (see \cite{Garaud_vel_pdf} for parameters) and find growth to larger sizes (Fig.~\ref{fig:TT}, left). We also model growth in brown dwarf \& T Tauri discs where the latter is a scaled-up version of the former, i.e. the same disc to star mass ratio, simulated at the same location with respect to the truncation radius, $\Rt$, and where $\Mdisc$ \& $\Rt$ are set using the observed relation $\Mdisc \propto \Rt^{1.6}$ (\cite{Andrews_Md_Rt}). We simulate growth at 10au in a brown dwarf disc with $\Mdisc = 4\times 10^{-4} \Msolar$ and $\Rt = 15 \rm au$ around a $60 \MJup$ brown dwarf, and at 60au in a T Tauri disc with $\Mdisc = 7\times 10^{-3} \Msolar$ and $\Rt = 90 \rm au$ around a $1 \Msolar$ star. Fig.~\ref{fig:TT} (right) shows that growth at the equivalent location in both discs occurs to the same size (\cite{BD_discs_letter}).
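As a quick consistency check, the two disc set-ups quoted above can be verified against the adopted scaling $\Mdisc \propto \Rt^{1.6}$; the arithmetic below uses only the numbers stated in the text:

```python
# Check that the quoted brown dwarf and T Tauri disc parameters follow
# the observed scaling M_disc ∝ R_t^1.6.
ratio_R = (90.0 / 15.0) ** 1.6   # ratio of truncation radii, raised to 1.6
ratio_M = 7e-3 / 4e-4            # ratio of the quoted disc masses (M_sun)
print(ratio_R, ratio_M)          # both ≈ 17.5
```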
\section{Conclusions}
We present a model for growth and fragmentation that considers a particle's velocity PDF and separates the deterministic and stochastic velocities. We find growth to large sizes and the emergence of two particle populations. In addition if brown dwarf discs are scaled-down versions of T Tauri discs, growth occurs to similar sizes. Our model may potentially explain the large ($\approx$ mm-sized) grains observed in brown dwarf discs (e.g. \cite{Bouy_BDdiscs_mm}) and the long-standing problem of grain growth in discs: growing dust while maintaining a small-size population.
\section{Introduction}
Located at a distance of 420~pc from the Sun \citep{hirota2007,menten2007,kim2008}, the Orion Kleinmann-Low \citep[KL;][]{kleinmann1967} region is known as the nearest site of active massive-star formation.
It has been recognized as one of the best laboratories to study massive star-formation processes \citep{genzel1989,bally2008}.
Because of its extremely high opacity, observations have been made mainly at centimeter, millimeter/submillimeter, and infrared wavelengths.
In particular, radio interferometers have been powerful tools to study in detail the physical properties of young stellar objects (YSOs) in Orion~KL \citep[e.g. ][]{beuther2004, beuther2005, beuther2006, tang2010, favre2011, friedel2011, zapata2011, plambeck2013} at the highest spatial resolution.
These observational studies have identified some remarkable sources, such as the Becklin-Neugebauer \citep[BN;][]{becklin1967} object, the Hot Core, the Compact Ridge, a radio source labeled I (Source~I), an infrared source labeled n (Source~n), and a submillimeter source identified by the Submillimeter Array (SMA), SMA1, although their properties are still a matter of debate.
Among these compact sources, the most prominent energy source in Orion~KL is thought to be Source~I \citep{menten1995}.
It is a candidate massive protostar driving a so-called low-velocity bipolar outflow along the northeast-southwest direction with a scale of 1000~AU traced by the thermal SiO lines and H$_{2}$O masers \citep{genzel1989,gaume1998,hirota2007,plambeck2009,zapata2012,niederhofer2012,greenhill2013,kim2014}.
At the center of this outflow, there is a cluster of vibrationally excited SiO masers tracing a disk wind emanating from the surface of the circumstellar disk with a diameter of $\sim$100~AU \citep{kim2008,matthews2010,greenhill2013}.
A compact radio continuum source, interpreted as an edge-on circumstellar disk, is associated with the center of the SiO masers \citep{reid2007,goddi2011,plambeck2013}.
Such a disk-jet system has recently been studied in the submillimeter H$_{2}$O lines by using the newly constructed Atacama Large Millimeter/Submillimeter Array (ALMA) \citep{hirota2012,hirota2014a}.
Nevertheless, the nature of Source~I and associated disk/outflow system are still far from a complete understanding.
One of the long-standing issues is the origin of the radio continuum emission associated with Source~I.
The spectral energy distribution (SED) of Source~I from centimeter to submillimeter wavelengths can be fitted by a power-law function, $F_{\nu} \propto \nu^{2}$, indicative of optically thick emission up to submillimeter wavelengths \citep[][and references therein]{plambeck2013}.
Based on such a SED, Source~I has been interpreted as either a combination of free-free emission and thermal dust emission \citep{beuther2004, beuther2006, reid2007, plambeck2013} or H$^{-}$ free-free emission from neutral atomic/molecular gas \citep{reid2007, plambeck2013}.
Although physical properties of Source~I such as mass, temperature, and ionization degree are still under
debate, more recent results support the latter scenario \citep[e.g.][]{plambeck2013}.
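The spectral index underlying such SED fits follows from fluxes at any two frequencies; a minimal sketch (the flux and frequency values below are illustrative, not measurements from the paper):

```python
import math

def spectral_index(f1, nu1, f2, nu2):
    """Power-law index alpha in F_nu ∝ nu^alpha from fluxes at two frequencies."""
    return math.log(f2 / f1) / math.log(nu2 / nu1)

# Optically thick emission with F_nu ∝ nu^2: doubling the frequency
# quadruples the flux, so the recovered index is 2.
print(spectral_index(1.0, 100.0, 4.0, 200.0))  # → 2.0
```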
In order to reveal the physical properties of continuum sources in Orion~KL, in particular Source~I, we present an observational study of multi-band continuum emission at sub-arcsecond resolution using ALMA.
The ALMA observations open new wavelength windows at higher spatial resolution.
Combined with previous high-resolution observations at lower frequency bands, we discuss the basic physical properties of some of the key sources in Orion~KL.
\section{Observations and Data Analysis}
Observations of the millimeter and submillimeter continuum emission were carried out with ALMA in several sessions during the early science operation in the cycle~0 period at bands 6 and 7
(ADS/JAO.ALMA\#2011.0.00199.S).
We also employed the ALMA Science Verification (SV) data (ADS/JAO.ALMA\#2011.0.00009.SV) at band~6.
Details of each session are summarized in Table \ref{tab-obs}.
\subsection{Band 6 data}
The observation in cycle~0 was done in the extended configuration on 2012 April 08 with 17$\times$12~m antennas.
The primary beam size of each 12~m antenna is 25\arcsec.
The baseline lengths ranged from 17 to 310~k$\lambda$ (from 21 to 385~m).
The observed frequency ranges were 240$-$244~GHz and 256$-$260~GHz, consisting of 4 spectral windows with a bandwidth of 2~GHz each.
Dual polarization data were obtained simultaneously.
The ALMA correlator was set for low resolution wideband continuum observations and the spectral resolution was 15.625~MHz.
The target source was Orion~KL and the tracking center position was taken to be the bursting 22~GHz H$_{2}$O maser, RA(J2000)=05h35m14.1250s, Decl(J2000)=-05d22\arcmin36.486\arcsec \citep{hirota2011, hirota2014b}.
The total on-source integration time was 30~s.
A primary flux calibrator, band-pass calibrator, and secondary gain calibrator were Callisto, J053851-440507, and J0607-085, respectively.
The system noise temperatures ranged from 102~K to 122~K depending on the spectral window.
To improve the image quality, we combined the ALMA SV data at band~6 with our cycle~0 data.
The SV data were obtained in the 20 frequency settings to cover full spectral emissions from 214~GHz to 247~GHz.
We here only employed part of the frequency range from 230 to 232~GHz to achieve an image sensitivity comparable to the cycle~0 data.
The SV data were taken on 2012 January 20 in the compact configuration with 16$\times$12~m antennas separated from 14 to 203~k$\lambda$ (from 17 to 265~m).
The tracking center position of Orion~KL was set to be R. A. =05h35m14.35s and decl.=$-$05$^{\circ}$22\arcmin35\arcsec.0 (J2000), which is 4\arcsec \ northeast of that of the cycle 0 data.
In this paper, we define the band~6 frequency to be 245~GHz as it is the central frequency of all the combined data.
\subsection{Band 7 data}
For band~7, observations in two different frequency settings were carried out; continuum mode and spectral line mode.
The tracking center position was the same as band~6 cycle~0 data; RA(J2000)=05h35m14.1250s, Decl(J2000)=-05d22\arcmin36.486\arcsec.
Observations in the continuum mode were done in the extended configuration on 2012 October 23 with 23$\times$12~m antennas.
The primary beam size of each 12~m antenna is 17\arcsec.
The baseline lengths ranged from 19 to 425~k$\lambda$ (from 17 to 372~m).
The observed frequency ranges were 341.5$-$345.4~GHz and 353.5$-$357.5~GHz, consisting of four 2~GHz-bandwidth spectral windows with dual polarization.
The ALMA correlator was set for low resolution wideband continuum observations and the spectral resolution was 15.625~MHz.
The total on-source integration time was 100~s.
A primary flux calibrator, band-pass calibrator, and secondary gain calibrator were Callisto, J0423-013, and J0607-085, respectively.
The system noise temperatures ranged from 116~K to 162~K depending on the spectral window.
We carried out monitoring observations of the submillimeter H$_{2}$O lines at band~7 to study the H$_{2}$O maser burst event at 22~GHz in multi-frequency \citep{hirota2014a}.
For this purpose, three sessions of spectral line observations were carried out on 2012 July 16, August 25, and October 21 during the cycle~0 period.
The number of 12~m antennas differed from epoch to epoch: 21, 28, and 22 at the first, second, and third epochs, respectively.
Array configurations were in the extended mode with the baseline lengths of 16-427~k$\lambda$ (15-398~m), 22-416~k$\lambda$ (21-388~m), and 18-389~k$\lambda$ (17-363~m) at the first, second, and third epoch, respectively.
Four spectral windows were set at the frequency of 321.0-321.5~GHz, 322.1-322.6~GHz, 334.4-334.9~GHz, and 336.0-336.5~GHz with the bandwidth of 469~MHz for each.
Spectral resolution was set to be 0.122~MHz.
The on-source integration time of the target source was about 100~s for each session.
A primary flux calibrator, band-pass calibrator, and secondary gain calibrator were Callisto, J053851-440507/J0423-013, and J0607-085, respectively.
The system noise temperatures ranged from 125~K to 341~K; the spectral windows around 322~GHz were significantly affected by the strong atmospheric absorption of the 325~GHz H$_{2}$O line.
Similar to the band~6 data, we define the band~7 frequency to be 339~GHz as the central frequency of all the combined data.
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{f1.eps}
\caption{
Full uv coverage of ALMA observations.
(a) Band 6 data including both compact (SV) and extended (cycle 0) configurations.
(b) Band 7 data including all of the three epochs in spectral line mode and one in continuum mode.
The circles indicate the uv length of 100~k$\lambda$.
}
\label{fig-uv}
\end{center}
\end{figure*}
\begin{deluxetable}{clccrcrcrr}
\rotate
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Summary of Observations
\label{tab-obs}}
\tablehead{
\colhead{} &
\colhead{} & \colhead{Center} &
\colhead{Total} & \colhead{Effective } &
\colhead{Number} &
\colhead{On-source} &
\colhead{} & \colhead{} &
\colhead{}
\\
\colhead{} &
\colhead{} & \colhead{frequencies\tablenotemark{a}} &
\colhead{bandwidth} & \colhead{bandwidth} &
\colhead{of} &
\colhead{Time} &
\colhead{FWHM} & \colhead{PA} &
\colhead{rms}
\\
\colhead{Band} &
\colhead{Date (in 2012)} & \colhead{[GHz]} &
\colhead{[MHz]} & \colhead{[MHz]} &
\colhead{Antennas} &
\colhead{[sec]} &
\colhead{[arcsec]} & \colhead{[degree]} &
\colhead{[mJy~beam$^{-1}$]}
}
\startdata
6
& Jan. 20 & 231 & 1875 & 167 & 16 & 1196.1 & 1.76$\times$1.19 & \ -1 & 19 \\
& Apr. 08 & 250 & 8000 & 3547 & 17 & 30.3 & 0.68$\times$0.50 & -75 & 5 \\
& All uv & 245 & \nodata & \nodata & \nodata & \nodata & 0.81$\times$0.64 & -70 & 9 \\
& uv$>$100~k$\lambda$ & 245 & \nodata & \nodata & \nodata & \nodata & 0.63$\times$0.47 & -72 & 5 \\
7
& Oct. 23 & 350 & 8000 & 3641 & 23 & 100.8 & 0.47$\times$0.44 & \ \ 0 & 9 \\
& Jul. 16, Aug. 25, Oct. 21\tablenotemark{b}
& 329 & 1875 & 648 & 21,28,22 & 326.8 & 0.55$\times$0.46 & \ 59 & 12 \\
& All uv & 339 & \nodata & \nodata & \nodata & \nodata & 0.49$\times$0.45 & \ 59 & 9 \\
& uv$>$100~k$\lambda$ & 339 & \nodata & \nodata & \nodata & \nodata & 0.46$\times$0.41 & \ 57 & 5 \\
\enddata
\tablenotetext{a}{Center frequencies for LSB (lower side band) and USB (upper side band). \\
Only one spectral window in LSB is employed for band~6 data on January 20, 2012 (SV data). }
\tablenotetext{b}{Combine three epochs for spectral line monitoring at band~7. }
\end{deluxetable}
\subsection{Synthesis imaging}
The data were calibrated and imaged with the Common Astronomy Software Applications (CASA) package.
For the band~6 data, we combined calibrated SV and cycle~0 data by using the task {\tt{concat}} in CASA.
Plots of the full-array uv coverage are shown in Figure \ref{fig-uv}.
For band~7 data, we combined all of the calibrated data for both in continuum and spectral line mode by using the CASA task {\tt{concat}}.
After combining the data, continuum images of Orion~KL were made using the CASA task {\tt{clean}}.
Continuum images were made by integrating over the line-free channels, thus excluding the line emission.
The resultant effective bandwidths were almost half of the observed frequency ranges, as listed in Table \ref{tab-obs}.
Both phase and amplitude self-calibrations were done with the continuum images by the CASA task {\tt{gaincal}} and these results were applied to the visibility data by the CASA task {\tt{applycal}}.
The resultant image rms noise levels of the continuum emission and the uniform-weighted synthesized beam sizes are summarized in Table \ref{tab-obs}.
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{f2.eps}
\caption{
Continuum emission maps for band~6 data.
Synthesized beam size is indicated at the bottom-left corner in each panel.
The contours start at -5$\sigma$ level with an interval of 10$\sigma$ (5, 15, 25, ...).
The (0,0) position is the tracking center position of the cycle~0 data, RA(J2000)=05h35m14.1250s, Decl(J2000)=-05d22\arcmin36.486\arcsec.
The HKKH source ID numbers are indicated in panels (a) and (b).
(a) Combine both the SV and cycle~0 data.
The noise level (1$\sigma$) is 9~mJy~beam$^{-1}$.
(b) Same as (a) but excluding those with uv length less than 100~k$\lambda$.
The noise level (1$\sigma$) is 5~mJy~beam$^{-1}$.
(c) Cycle 0 data.
The noise level (1$\sigma$) is 5~mJy~beam$^{-1}$.
A circle indicates a primary beam size of the 12~m antenna, 25\arcsec.
(d) SV data.
The noise level (1$\sigma$) is 19~mJy~beam$^{-1}$.
A circle indicates a primary beam size of the 12~m antenna, 25\arcsec.
Note that the pointing center is 4\arcsec \ northeast of that of Cycle 0 data (see panel (c)).
}
\label{fig-mapb6}
\end{center}
\end{figure*}
\begin{deluxetable}{lllccrr}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Continuum sources at band~6
\label{tab-band6}}
\tablehead{
\colhead{ID} &
\colhead{$\alpha$(J2000)} & \colhead{$\delta$(J2000)} &
\colhead{Convolved size} & \colhead{PA} &
\colhead{Integrated} & \colhead{Peak} \\
\colhead{HKKH} &
\colhead{+05h35m (s)} & \colhead{-05d22\arcmin (\arcsec)} &
\colhead{(\arcsec $\times$ \arcsec)} & \colhead{(deg)} &
\colhead{(mJy)} & \colhead{(mJy~beam$^{-1}$)}
}
\startdata
1 & 14.5373(13) & 27.072(15) & 0.82(3) $\times$ 0.61(4) & 127(3) & 123(8) & 74(5) \\
2 & 14.5988(26) & 28.876(39) & 0.73(6) $\times$ 0.48(8) & 110(4) & 123(16) & 106(13) \\
3 & 14.5902(27) & 29.369(31) & 0.88(6) $\times$ 0.52(8) & 97(50) & 160(21) & 104(13) \\
4 & 14.5073(10) & 30.603(12) & 0.66(2) $\times$ 0.47(3) & 124(2) & 284(14) & 272(13) \\
5 & 14.5804(26) & 31.029(31) & 0.61(6) $\times$ 0.57(8) & 97(31) & 118(15) & 102(13) \\
6 & 14.2775(20) & 30.776(23) & 0.68(6) $\times$ 0.46(5) & 170(4) & 53(5) & 51(5) \\
7 & 14.4853(20) & 32.338(23) & 1.09(5) $\times$ 0.51(6) & 121(3) & 267(25) & 141(13) \\
8 & 14.4162(66) & 33.684(77) & 0.32(17)$\times$ 0.20(19) & 74(29) & 13(4) & 58(18) \\
9 & 14.4360(78) & 34.488(91) & 0.47(18)$\times$ 0.35(24) & 99(19) & 27(10) & 49(18) \\
10 & 14.1939(16) & 34.289(19) & 0.65(4) $\times$ 0.44(5) & 100(4) & 69(5) & 71(6) \\
11 & 14.1112(18) & 36.383(21) & 0.66(4) $\times$ 0.48(5) & 134(5) & 57(5) & 52(4) \\
\enddata
\tablecomments{Numbers in parenthesis represent fitting errors determined by the CASA task {\tt{imfit}} in unit of the last significant digits. }
\end{deluxetable}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{f3.eps}
\caption{
Continuum emission maps for band~7 data.
Synthesized beam size is indicated at the bottom-left corner in each panel.
The contours start at -5$\sigma$ level with an interval of 10$\sigma$ (5, 15, 25, ...).
The (0,0) position is the tracking center position of the cycle~0 data, RA(J2000)=05h35m14.1250s, Decl(J2000)=-05d22\arcmin36.486\arcsec.
The HKKH source ID numbers are indicated in panels (a) and (b).
(a) Combine all of the four epochs including both in spectral line mode and continuum mode.
The noise level (1$\sigma$) is 9~mJy~beam$^{-1}$.
(b) Same as (a) but excluding those with uv length less than 100~k$\lambda$.
The noise level (1$\sigma$) is 5~mJy~beam$^{-1}$.
(c) A single epoch data observed in the continuum mode.
The noise level (1$\sigma$) is 9~mJy~beam$^{-1}$.
A circle indicates a primary beam size of the 12~m antenna, 18\arcsec.
(d) Combine three epochs of observations in spectral line mode.
The noise level (1$\sigma$) is 12~mJy~beam$^{-1}$.
A circle indicates a primary beam size of the 12~m antenna, 18\arcsec.
}
\label{fig-mapb7}
\end{center}
\end{figure*}
\begin{deluxetable}{lllccrr}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Continuum sources at band~7
\label{tab-band7}}
\tablehead{
\colhead{ID} &
\colhead{$\alpha$(J2000)} & \colhead{$\delta$(J2000)} &
\colhead{Convolved size} & \colhead{PA} &
\colhead{Integrated} & \colhead{Peak} \\
\colhead{HKKH} &
\colhead{+05h35m (s)} & \colhead{-05d22\arcmin (\arcsec)} &
\colhead{(\arcsec $\times$ \arcsec)} & \colhead{(deg)} &
\colhead{(mJy)} & \colhead{(mJy~beam$^{-1}$)}
}
\startdata
1 & 14.5372(17) & 26.939(24) & 0.93(5) $\times$ 0.66(5) & 104(5) & 505(58) & 156(18) \\
2 & 14.6141(17) & 28.816(24) & 0.65(5) $\times$ 0.57(5) & 33(3) & 591(66) & 299(33) \\
3 & 14.5844(23) & 29.317(42) & 0.44(10)$\times$ 0.33(7) & 131(7) & 161(25) & 212(33) \\
4 & 14.5113(5) & 30.571(7) & 0.50(2) $\times$ 0.44(1) & 126(5) & 1001(34) & 849(29) \\
5 & 14.5797(21) & 30.852(30) & 0.69(6) $\times$ 0.58(6) & 16(12) & 639(91) & 301(43) \\
6 & 14.2827(09) & 30.801(13) & 0.91(3) $\times$ 0.54(3) & 10(2) & 487(29) & 183(11) \\
7 & 14.4837(18) & 32.238(26) & 0.83(5) $\times$ 0.51(5) & 121(4) & 534(64) & 237(29) \\
8 & 14.4297(16) & 33.582(24) & 0.62(5) $\times$ 0.51(5) & 141(8) & 396(44) & 238(26) \\
9 & 14.4323(29) & 34.440(41) & 0.63(8) $\times$ 0.63(9) & 69(10) & 281(55) & 135(26) \\
10 & 14.1938(10) & 34.116(14) & 0.75(3) $\times$ 0.47(3) & 72(2) & 336(22) & 176(11) \\
11 & 14.1131(16) & 36.327(23) & 0.70(5) $\times$ 0.56(5) & 165(7) & 271(29) & 129(14) \\
\enddata
\tablecomments{Numbers in parentheses represent fitting errors determined by the CASA task {\tt{imfit}} in units of the last significant digits. }
\end{deluxetable}
\section{Results}
Figures \ref{fig-mapb6} and \ref{fig-mapb7} show continuum images of Orion~KL at bands~6 and 7, respectively, obtained with various uv coverages.
To achieve higher image quality and sensitivity, we combined datasets observed with different uv coverages, as explained in the previous section.
The overall spatial structures of the band~6 and band~7 images with full uv coverage (Figures \ref{fig-mapb6}(a) and \ref{fig-mapb7}(a)) appear quite similar to each other.
The main emission region shows an elongated ridge-like structure \citep[``main dust ridge'' in ][]{tang2010} along the northeast-southwest direction consisting of Source~I, Hot Core, and SMA1, as labeled in Figure \ref{fig-label}.
On the western side of this main dust ridge and south of the BN object, there are several diffuse emission components located along the north-south direction, which can be identified as the Northwest Clump and the Compact Ridge (Figure \ref{fig-label}).
Our ALMA images trace parts of this extended emission corresponding to the most compact, highest-density condensations.
In order to identify compact continuum sources, we made synthesized images by employing only baselines longer than 100~k$\lambda$ as shown in Figures \ref{fig-mapb6}(b) and \ref{fig-mapb7}(b).
The extended emission components are resolved out in our ALMA images (Figures \ref{fig-mapb6}(b) and \ref{fig-mapb7}(b)) because of the lack of short baselines in the ALMA extended configuration.
Although the high-resolution images (Figures \ref{fig-mapb6}(b) and \ref{fig-mapb7}(b)) show lower intensities than those with full uv coverage (Figures \ref{fig-mapb6}(a) and \ref{fig-mapb7}(a)), compact sources are clearly detected at sufficiently high significance, greater than 5$\sigma$.
Note that there are negative contour levels in these images due to insufficient uv coverage at shorter baselines.
When we employ only the visibility data with baselines longer than 200~k$\lambda$, only Source~I remains detected, implying a compact structure \citep{beuther2004}.
For band~6 data, we compare the cycle~0 data (Figure \ref{fig-mapb6}(c)) and SV data (Figure \ref{fig-mapb6}(d)) obtained in the extended and compact configurations, respectively.
Only the higher-resolution cycle~0 data resolve the compact cores, as can be seen in the main dust ridge (Figure \ref{fig-mapb6}(c)).
In contrast, we cannot see the extended Northwest Clump in the cycle~0 data, although it is evident in the SV data (Figure \ref{fig-mapb6}(d)).
By combining both datasets (Figure \ref{fig-mapb6}(a)), we can significantly improve the imaging of the source structure by recovering both extended and compact features.
Since we have carried out three epochs of monitoring observations at band~7 in spectral line mode as well as a single-epoch continuum observation, we compared these four observational results.
For the spectral line mode, we notice apparent differences in the synthesized images from epoch to epoch.
Because such differences are not significant for the most compact condensation, identified as Source~I, we attribute them to the differences in uv coverage.
When we compare the image from the single-epoch continuum observation (Figure \ref{fig-mapb7}(c)) with the combined image of all three epochs observed in spectral line mode (Figure \ref{fig-mapb7}(d)), the two results are in good agreement with each other.
Taking into account these effects, we estimate the accuracy of the flux measurement to be 20\%.
We determine the positions and intensities of the compact cores by fitting two-dimensional Gaussians to the images made with uv coverages of $>$100~k$\lambda$ (Figures \ref{fig-mapb6}(b) and \ref{fig-mapb7}(b)) using the CASA task {\tt{imfit}}.
The flux densities are measured on the images after correction for primary beam attenuation.
Because there could be false detections due to sidelobes or noise, we conservatively regard sources as real detections only if their signal-to-noise ratios are greater than 5 in both bands~6 and 7.
As a result, we identify a total of 11 sources, as listed in Tables \ref{tab-band6} and \ref{tab-band7}.
In this paper, we define the source name as HKKH (an acronym of the authors' initials) followed by the ID number.
For the band~6 images, the missing flux in the highest-resolution images with uv coverage of $>$100~k$\lambda$ (Figure \ref{fig-mapb6}(b)), compared with the image with full uv coverage (Figure \ref{fig-mapb6}(a)), is larger than 90\% for sources HKKH1, HKKH6, and HKKH9.
In contrast, more than 90\% of the flux densities at band~6 are recovered for sources HKKH2, HKKH4, and HKKH10, implying compact structures.
For the band~7 images, the fraction of missing flux in the highest-resolution images with uv coverage of $>$100~k$\lambda$ (Figure \ref{fig-mapb7}(b)) is less than 50\% in comparison with those with full uv coverage (Figure \ref{fig-mapb7}(a)).
This difference in the degree of missing flux between bands~6 and 7 can be attributed to the different uv coverages and beam sizes.
At band~7, sources HKKH1, HKKH2, and HKKH3 are outside the primary beam of the ALMA 12~m antenna, while they are within the field of view at band~6.
Although the flux calibration is not reliable for these sources, we regard them as detections in both bands~6 and 7.
Similarly, source HKKH5 is detected at the edge of the field of view of the band~7 images.
In addition, the BN object, a well-known massive YSO in this region, is outside the fields of view of bands~6 and 7, as discussed later.
\begin{deluxetable}{rrrrl}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Identification of continuum sources
\label{tab-cont}}
\tablehead{
\colhead{ID} & \colhead{$I_{\mbox{339~GHz}}$} & \colhead{$I_{\mbox{245~GHz}}$} & \colhead{$I_{\mbox{90~GHz}}$\tablenotemark{a}} & \colhead{} \\
\colhead{HKKH} & \colhead{(mJy~beam$^{-1}$)} & \colhead{(mJy~beam$^{-1}$)} & \colhead{(mJy~beam$^{-1}$)} & \colhead{Note}
}
\startdata
1 & 156(18) & 74(5) & 4.89(29) & C13 \\
2 & 299(33) & 106(13) & 4.39(43) & C19, MF10? \\
3 & 212(33) & 104(13) & \nodata & C33?, MF10? \\
4 & 849(29) & 272(13) & 50.09(97) & C20, MF11?, Source~I \\
5 & 301(43) & 102(13) & 7.08(94) & C18 \\
6 & 183(11) & 51(5) \ & 4.18(9) \ & C23, IRc7 \\
7 & 237(29) & 141(13) & 6.05(60) & C21, MF6, SMA1 \\
8 & 238(26) & 58(18) & 5.31(75) & C22 \\
9 & 135(26) & 49(18) & \nodata & C34?, MF2 \\
10 & 176(11) & 71(6) & \nodata & MF12, IRc4 \\
11 & 129(14) & 52(4) & \nodata & C32, MF1, Source~R \\
\enddata
\tablenotetext{a}{See Table 1 in \citet{friedel2011}. }
\tablecomments{Numbers in parentheses represent the errors in units of the last significant digits.
MF and C in column 5 denote methyl formate \citep{favre2011} and 3~mm continuum \citep{friedel2011} peaks, respectively. }
\end{deluxetable}
\begin{deluxetable}{rcccccccccc}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Graybody fitting results for continuum sources
\label{tab-gray}}
\tablehead{
\colhead{} & \colhead{} &
\colhead{} & \colhead{20~K} & \colhead{} & \colhead{} &
\colhead{} &
\colhead{} & \colhead{100~K} & \colhead{} & \colhead{} \\
\cline{3-6} \cline{8-11}
\colhead{ID} & \colhead{Diameter} &
\colhead{$M$} & \colhead{} & \colhead{$n$(H$_{2}$)} & \colhead{$N$(H$_{2}$)} &
\colhead{} &
\colhead{$M$} & \colhead{} & \colhead{$n$(H$_{2}$)} & \colhead{$N$(H$_{2}$)} \\
\colhead{HKKH} & \colhead{(AU)} &
\colhead{($M_{\odot}$)} & \colhead{$\beta$\tablenotemark{a}} & \colhead{(cm$^{-3}$)} & \colhead{(cm$^{-2}$)} &
\colhead{} &
\colhead{($M_{\odot}$)} & \colhead{$\beta$\tablenotemark{a}} & \colhead{(cm$^{-3}$)} & \colhead{(cm$^{-2}$)}
}
\startdata
1 & 313 & 0.077(3) & 0.77(2) & 8.6$\times$10$^{8}$ & 4.0$\times$10$^{24}$ & & 0.009(1) & 0.59(5) & 1.0$\times$10$^{8}$ & 4.7$\times$10$^{23}$ \\
2 & 252 & 0.274(35) & 1.30(7) & 5.8$\times$10$^{9}$ & 2.2$\times$10$^{25}$ & & 0.031(2) & 1.12(4) & 6.6$\times$10$^{8}$ & 2.5$\times$10$^{24}$ \\
3 & 213 & 0.082 & 0.58 & 2.9$\times$10$^{9}$ & 9.2$\times$10$^{24}$ & & 0.008 & 0.27 & 2.8$\times$10$^{8}$ & 9.0$\times$10$^{23}$ \\
4 & 215 & 0.156(119) & 0.19(40) & 5.4$\times$10$^{9}$ & 1.7$\times$10$^{25}$ & & 0.018(12) & -0.00(37) & 6.2$\times$10$^{8}$ & 2.0$\times$10$^{24}$ \\
5 & 257 & 0.159(57) & 0.92(19) & 3.2$\times$10$^{9}$ & 1.2$\times$10$^{25}$ & & 0.018(5) & 0.74(16) & 3.6$\times$10$^{8}$ & 1.4$\times$10$^{24}$ \\
6 & 263 & 0.086(55) & 0.90(33) & 1.6$\times$10$^{9}$ & 6.4$\times$10$^{24}$ & & 0.010(6) & 0.71(30) & 1.9$\times$10$^{8}$ & 7.4$\times$10$^{23}$ \\
7 & 293 & 0.175(78) & 0.99(23) & 2.4$\times$10$^{9}$ & 1.1$\times$10$^{25}$ & & 0.020(10) & 0.80(26) & 2.7$\times$10$^{8}$ & 1.2$\times$10$^{24}$ \\
8 & 158 & 0.103(86) & 0.89(43) & 8.9$\times$10$^{9}$ & 2.1$\times$10$^{25}$ & & 0.012(9) & 0.70(40) & 1.0$\times$10$^{9}$ & 2.4$\times$10$^{24}$ \\
9 & 212 & 0.168 & 1.51 & 6.0$\times$10$^{9}$ & 1.9$\times$10$^{25}$ & & 0.016 & 1.19 & 5.7$\times$10$^{8}$ & 1.8$\times$10$^{24}$ \\
10 & 237 & 0.145 & 1.16 & 3.7$\times$10$^{9}$ & 1.3$\times$10$^{25}$ & & 0.014 & 0.87 & 3.6$\times$10$^{8}$ & 1.3$\times$10$^{24}$ \\
11 & 249 & 0.107 & 1.19 & 2.4$\times$10$^{9}$ & 8.8$\times$10$^{24}$ & & 0.010 & 0.87 & 2.2$\times$10$^{8}$ & 8.2$\times$10$^{23}$ \\
\enddata
\tablenotetext{a}{Dust opacity index in equation (\ref{eq-graykappa}). }
\tablecomments{Numbers in parentheses represent fitting errors (1$\sigma$) in units of the last significant digits.
If the 90~GHz data are not available, fitting errors cannot be estimated. }
\end{deluxetable}
\section{Discussion}
\subsection{Comparison with previous observational results}
There have been a number of high-resolution continuum observations of Orion~KL over a wide range of wavelengths.
Here we compare our results with some of the highest-resolution data reported to date.
Examples are shown in Figure \ref{fig-label}.
The positions of most of our detected sources are coincident with those in previous interferometric maps of the highest-resolution millimeter continuum emission \citep{friedel2011}, molecular lines of methyl formate, HCOOCH$_{3}$ \citep{favre2011}, and radio continuum emission \citep{felli1993a,felli1993b}, as shown in Figure \ref{fig-label} and Table \ref{tab-cont}.
We also compare the spatial structure of our millimeter/submillimeter maps with the mid-infrared emission observed with the Subaru telescope \citep{okumura2011}.
As can be seen in Figure \ref{fig-label}, the mid-infrared emission shows an anti-correlation with the millimeter/submillimeter emission.
This is naturally interpreted as the millimeter/submillimeter emission arising predominantly from dust clouds whose opacities are high even in the mid-infrared bands.
We could not detect some possible counterparts of the millimeter/submillimeter continuum sources detected in other tracers (Figure \ref{fig-label}).
There are several possible reasons for these differences.
First, we employ a rather conservative detection criterion in which continuum sources are regarded as detections only if their peak intensities are greater than 5 times the noise level.
This is to eliminate contributions from strong sidelobes as much as possible.
As a result, we might miss weak emission in our map.
Furthermore, we employ only uv lengths longer than 100~k$\lambda$ to search for compact sources.
Thus, our continuum images significantly resolve out weak extended emission detected at shorter baselines in previous interferometric observations.
\subsection{Graybody fitting of spectral energy distributions (SEDs)}
As tabulated in Table \ref{tab-cont}, we identify 11 continuum sources detected in both the band~6 and band~7 images with uv coverage greater than 100~k$\lambda$.
It is known that the SED of millimeter/submillimeter continuum emission associated with YSOs can be well fitted by an optically thin graybody function resulting from thermal dust emission.
Thus, we fit the observed flux densities at 245~GHz and 339~GHz listed in Table \ref{tab-cont} to the graybody function:
\begin{eqnarray}
F_{\nu}^{\rm{gb}} & = & \frac{M_{\rm{tot}}}{D^{2}} \kappa_{\nu} B_{\nu}(T_{\rm{dust}}) \label{eq-gray} \\
\kappa_{\nu} & = & \kappa_{\nu_{0}} \left( \frac{\nu}{\nu_{0}} \right) ^{\beta} \label{eq-graykappa}
\end{eqnarray}
where $M_{\rm{tot}}$ is the total mass, $D$ the distance, $\kappa_{\nu}$ the total mass opacity, and $B_{\nu}(T_{\rm{dust}})$ the Planck function at the dust temperature $T_{\rm{dust}}$.
The total mass opacity $\kappa_{\nu}$ is expressed as a power law function of frequency, and we adopt $\kappa_{\nu_{0}}=0.1$~cm$^{2}$~g$^{-1}$ at $\nu_{0}=1.2$~THz, or a wavelength of 250~$\mu$m \citep{hildebrand1983}, assuming a gas-to-dust mass ratio of 100.
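As an illustrative numerical sketch (not part of our reduction scripts; the function names are ours), the graybody model of equations (\ref{eq-gray}) and (\ref{eq-graykappa}) can be written in cgs units as:

```python
import numpy as np

# Physical constants (cgs)
h = 6.62607015e-27   # Planck constant [erg s]
k = 1.380649e-16     # Boltzmann constant [erg / K]
c = 2.99792458e10    # speed of light [cm / s]

def planck(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def graybody_flux(nu, M_tot, D, T_dust, beta, kappa0=0.1, nu0=1.2e12):
    """Optically thin graybody flux density [erg s^-1 cm^-2 Hz^-1].

    M_tot: total (gas+dust) mass [g]; D: distance [cm].
    kappa0 = 0.1 cm^2 g^-1 at nu0 = 1.2 THz (Hildebrand 1983),
    for an assumed gas-to-dust mass ratio of 100.
    """
    kappa = kappa0 * (nu / nu0)**beta      # power law opacity
    return M_tot / D**2 * kappa * planck(nu, T_dust)
```

Fitting $M_{\rm{tot}}$ and $\beta$ to the two or three flux points of Table \ref{tab-cont} then amounts to a least-squares minimization over this function at fixed $T_{\rm{dust}}$.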
If high resolution millimeter continuum data at 90~GHz are available \citep{friedel2011}, we also employ these results in the SED fitting.
The results of the graybody fitting are summarized in Table \ref{tab-gray}.
The power law index of the dust opacity, $\beta$, is fitted by using the two or three data points listed in Table \ref{tab-cont}.
Because the dust temperatures cannot be determined from our continuum observations alone, we assume a temperature of 100~K, as obtained from molecular line observations \citep{favre2011}, and a lower value of 20~K \citep{eisner2008} for comparison.
The derived mass scales with the assumed temperature as $(\exp (h\nu/kT) - 1)$, and hence the lower temperature yields an upper limit on the mass.
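As a rough check of this scaling at 339~GHz, where $h\nu/k \simeq 16.3$~K,
\begin{eqnarray}
\frac{M(20~{\rm K})}{M(100~{\rm K})} & \simeq & \frac{e^{16.3/20}-1}{e^{16.3/100}-1} \simeq 7, \nonumber
\end{eqnarray}
comparable to the ratios between the corresponding masses in Table \ref{tab-gray}; the remaining differences arise from the different best-fit values of $\beta$ at the two assumed temperatures.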
The dust opacity index $\beta$ is close to 1 for most of the sources, except for Source~I.
The SED of Source~I cannot be fitted by a simple optically thin thermal dust emission as discussed later.
The compact continuum sources can be attributed to either star-forming or starless cores, although the detailed nature of each source is still unknown (see the discussion of individual sources).
If a source is associated with an embedded YSO, the compact structure of 200-300~AU and high density of 10$^{8}$-10$^{9}$~cm$^{-3}$ are indicative of a circumstellar disk as one of the possible origins of the millimeter/submillimeter sources.
The derived masses (0.08$M_{\odot}$-0.27$M_{\odot}$ for an assumed temperature of 20~K) correspond to the lower end of the masses of disk candidates in another massive star-forming region, NGC6334I(N) \citep{hunter2014}.
Because Orion~KL is closer than NGC6334I(N) (1.3~kpc), the derived source sizes are significantly smaller than those in NGC6334I(N) \citep{hunter2014}.
On the other hand, the derived mass range is smaller than the typical circumstellar disk masses associated with massive YSOs \citep{cesaroni2007} but larger than those of low-mass YSOs, including members of the Orion Nebula Cluster \citep{eisner2008}.
A circumstellar mass smaller than those of other massive YSOs was also recognized for Source~I in Orion~KL by a previous SMA observation \citep{beuther2004}.
If our millimeter/submillimeter sources trace the circumstellar disks of massive YSOs, the lower mass range could imply a relatively later evolutionary stage of massive disk formation, as proposed by \citet{beuther2004}.
Nevertheless, we cannot rule out the possibilities of an envelope around a YSO, shocked dense gas, or starless dense gas, all of which would be significantly resolved in our high-resolution images, or of a disk associated with a low-/intermediate-mass YSO.
Key parameters to distinguish these possibilities are the gas temperature and velocity structure traced by molecular lines in multiple transitions, which can be used to investigate the physical and dynamical properties of the continuum sources.
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{f4.eps}
\caption{
ALMA band~7 continuum maps (contour) superposed on the Subaru mid-infrared image \citep{okumura2011}.
The positions of the band~7 continuum sources (Table \ref{tab-band7}) are indicated by smaller magenta crosses with the HKKH ID numbers.
Peaks of methyl formate \citep[MF;][]{favre2011}, 3~mm continuum emission \citep{friedel2011}, and radio continuum emission \citep{felli1993a,felli1993b} are indicated by green squares, larger blue crosses, and yellow triangles, respectively.
The (0,0) position is that of the 22~GHz H$_{2}$O maser burst in the Compact Ridge \citep{hirota2011, hirota2014b}.
}
\label{fig-label}
\end{center}
\end{figure*}
\section{Individual Sources}
\subsection{Source~I (HKKH4)}
\label{sec-sourceI}
Source~I is thought to be a dominant energy source in Orion~KL \citep{menten1995}.
A number of observational studies have been carried out to reveal the nature of Source~I, although it can be detected only at wavelengths longer than the submillimeter band due to its extremely high opacity \citep{greenhill2004, okumura2011, sitarski2013}.
It can be identified as a color temperature peak derived from mid-infrared images \citep{okumura2011}.
Position of Source~I corresponds to the 3~mm continuum source C20 \citep{friedel2011}.
There is a nearby molecular peak, MF10, on the western side of the submillimeter emission peak \citep{favre2011}.
Because the position offset between MF10 and Source~I is significant and, in addition, there is no emission from organic molecules at Source~I \citep{beuther2005}, they are unlikely to be physically associated.
Using previous results of interferometric continuum observations at centimeter, millimeter, and submillimeter wavelengths, we plot the SED of Source~I as shown in Figure \ref{fig-sed} and Table \ref{tab-flux} \citep[see][and references therein]{plambeck2013}.
At several frequencies, there is more than one observational result, with different flux densities.
For example, the flux density of our band~6 data, 284~mJy, is consistent with that of CARMA, 310~mJy \citep{plambeck2013}, within the errors of $\sim$10\%.
On the other hand, our band~7 data is intermediate between those of \citet{beuther2004} and \citet{tang2010} observed with SMA.
The reason for such discrepancies could be that insufficient uv coverage affects the flux density measurements through strong negative sidelobes from the bright extended emission in the complex structure of the Orion~KL region, as suggested by \citet{plambeck2013}.
Another possibility could be different degrees of missing flux due to the different uv coverages.
To avoid such artifacts, we employ the flux density observed with the highest-resolution observations at each frequency.
The data employed in the following discussion are indicated by Y in Table \ref{tab-flux}.
\subsubsection{H$^{-}$ free-free emission}
As shown in Figure \ref{fig-sed} and Table \ref{tab-sed}, the SED of Source~I can be fitted by a single power law function
\begin{eqnarray}
\log F_{\nu}^{\rm{pl}} & = & p + q \log \nu
\end{eqnarray}
with a power law index $q$ of 1.97$\pm$0.10 for the frequency range between 8.4~GHz and 690~GHz.
The power law index is consistent with that expected for blackbody radiation, $q$=2.0, suggesting that the emission could be optically thick even at submillimeter wavelengths.
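This can be seen from the Rayleigh--Jeans form of an optically thick source,
\begin{eqnarray}
F_{\nu} & = & \frac{2 k T_{B} \nu^{2}}{c^{2}} \Omega \propto \nu^{2}, \nonumber
\end{eqnarray}
for a frequency-independent brightness temperature $T_{B}$ and solid angle $\Omega$.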
As proposed for typical massive YSOs, the emission mechanism of the centimeter to submillimeter continuum emission from Source~I could be a combination of graybody radiation from dust and free-free radiation from fully ionized gas \citep{beuther2004,beuther2006}.
However, free-free emission from such a compact H{\sc{ii}} region is ruled out as the main opacity source of the continuum emission according to the observed source size and brightness temperature of $\sim$1500~K at 43~GHz \citep{reid2007, plambeck2013}.
The observed SED could be reconciled with free-free emission only if the beam filling factor (source size and/or clumpiness) were much smaller than 1, or if the gas temperature were much lower than that expected for fully ionized gas, $\sim$8000~K.
Alternatively, it has been suggested that H$^{-}$ free-free radiation emitted from neutral gas could explain the single power law SED \citep{reid2007, plambeck2013}.
In order to explain the observed SED of Source~I from centimeter to submillimeter wavelengths, we first evaluate the physical properties of Source~I in the case of H$^{-}$ free-free radiation.
A detailed derivation of the H$^{-}$ free-free radiation is summarized in the Appendix, following the discussion by \citet{reid1997}.
Hereafter we assume that Source~I is an edge-on disk with uniform density and temperature for simplicity.
We employ the apparent size of Source~I of 0.20\arcsec$\times$0.03\arcsec \ \citep{plambeck2013}, corresponding to a disk diameter and thickness of 84~AU and 12.6~AU, respectively.
With these parameters, a total hydrogen density of 10$^{12}$~cm$^{-3}$ corresponds to a column density of 1.26$\times$10$^{27}$~cm$^{-2}$ (along the 84~AU diameter) and a total mass of 0.20$M_{\odot}$.
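These values follow from the assumed geometry; neglecting helium for simplicity,
\begin{eqnarray}
N & = & n L = 10^{12}\,{\rm cm^{-3}} \times 1.26\times10^{15}\,{\rm cm} \simeq 1.26\times10^{27}\,{\rm cm^{-2}}, \nonumber \\
M & \simeq & n\, m_{\rm H}\, \pi \,(42\,{\rm AU})^{2} \,(12.6\,{\rm AU}) \simeq 0.2\,M_{\odot}, \nonumber
\end{eqnarray}
where $L=84$~AU is the disk diameter.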
As suggested by Figure \ref{fig-sed}, Source~I seems to be optically thick at 245-339~GHz, or even at 690~GHz.
Thus, the turnover frequency, at which the optical depth of the H$^{-}$ free-free radiation becomes unity, is as high as $>$300-600~GHz (see Appendix).
Using the relationship between the total hydrogen density (the sum of molecular and atomic hydrogen) and the temperature required to explain this turnover frequency (Figure \ref{fig-turnfreq2} and Table \ref{tab-turnfreq} in Appendix), we can constrain the range of gas density and temperature.
According to the SMA continuum observations \citep{tang2010}, the mass of the main dust ridge including Source~I is estimated to be 2-12$M_{\odot}$.
\citet{favre2011} also derive a consistent value of 5.8$M_{\odot}$ for their continuum source Ca, in which Source~I is located at the western edge of the core.
The total mass of Source~I itself is estimated to be $>$7$M_{\odot}$ from the velocity structure of the rotating disk traced by the SiO masers and submillimeter H$_{2}$O lines \citep{kim2008, matthews2010, hirota2014a}, 10$M_{\odot}$ from infrared spectroscopy \citep{testi2010}, or 20$M_{\odot}$ from the momentum conservation law suggested by proper motion measurements \citep{goddi2011, bally2011}.
Thus, the circumstellar disk mass of Source~I would be much lower than that of the central protostar and its host core, $\ll$10$M_{\odot}$.
With this constraint, the total hydrogen density is lower than 5$\times$10$^{13}$~cm$^{-3}$, as indicated in Figure \ref{fig-turnfreq2} in Appendix.
In this case, the gas temperature would be higher than 1200~K for a turnover frequency of $>$300~GHz
(Figure \ref{fig-turnfreq2} and Table \ref{tab-turnfreq} in Appendix).
If the gas temperature is about 3000~K or higher, the lower gas density of $\sim$10$^{11}$~cm$^{-3}$ is allowed to explain the turnover frequency of $>$300~GHz.
Using the calculated optical depths, $\tau_{\nu,\rm{H}}$ and $\tau_{\nu,\rm{H}_{2}}$, for the above temperature (1200-3000~K) and density (10$^{11}$-10$^{14}$~cm$^{-3}$) ranges, we calculate the flux density $F_{\nu}$ at each frequency for a source solid angle $\Omega$:
\begin{eqnarray}
F_{\nu} & = & \int \frac{2 \nu^{2} kT_{B}}{c^{2}} d\Omega \nonumber \\
& = & \int \frac{2 \nu^{2} kT}{c^{2}} (1-e^{-\tau_{\rm{total}}}) d\Omega \\
T_{B} & = & (1-e^{-\tau_{\rm{total}}} ) T \\
\tau_{\rm{total}} & = & \tau_{\nu,\rm{H}} + \tau_{\nu,\rm{H}_{2}}
\end{eqnarray}
The results are shown in Figure \ref{fig-flux}.
Models with higher temperature or higher density can explain the single power law SED even for the SMA result at 690~GHz.
In the case of total hydrogen densities of 10$^{11}$~cm$^{-3}$ and 10$^{12}$~cm$^{-3}$, corresponding to masses of $\sim$0.02$M_{\odot}$-0.2$M_{\odot}$ (Figures \ref{fig-flux}(a) and (b)), gas temperatures higher than 2700~K and 1800~K, respectively, can explain the SED of Source~I with a single power law index of 2.0 up to the 339~GHz band.
For larger densities of order 10$^{13}$~cm$^{-3}$, or a mass of a few $M_{\odot}$ (Figure \ref{fig-flux}(c)), a lower temperature of 1500~K can also match the slope of the SED of Source~I up to 339~GHz.
On the other hand, the higher temperature models tend to overestimate the flux densities by a factor of $\sim$2.
When the emission is optically thick, the flux density is proportional to the apparent source size and gas temperature.
To account for the observed flux density of Source~I, the lower gas temperature of $\sim$1500~K would be more plausible if the source is resolved with the ALMA beam.
Alternatively, the angular size of Source~I, $\Omega$, would have to be smaller by a factor of 2, or the source structure would have to be clumpy with a beam filling factor of 50\%.
Higher resolution observations to resolve the source structure with accurate flux calibration will be able to solve this degeneracy of source size and temperature.
\citet{hirota2014a} reported that the vibrationally excited H$_{2}$O line at 336~GHz is emitted from a circumstellar disk.
The 336~GHz H$_{2}$O line is thought to be excited thermally with an excitation temperature of $>$3000~K probably heated via an accretion shock.
This excitation temperature is slightly higher than that estimated from the present continuum data.
According to the velocity structure, the 336~GHz H$_{2}$O line map is interpreted as a ring-like structure with a diameter of 0.2\arcsec \ (84~AU), although the spatial resolution is still insufficient to resolve the structure.
It is proposed that the ring-like structure would reflect the distribution of the hot molecular (H$_{2}$O) gas probably heated via accretion \citep{hirota2014a}.
Alternatively, it is also likely that the 336~GHz H$_{2}$O line could trace only the edge of the disk due to the high continuum opacity even at 336~GHz as suggested by the SED (Figure \ref{fig-sed}).
If the optically thick H$^{-}$ free-free radiation is the main source of opacity at 336~GHz, the latter scenario would be more plausible.
\subsubsection{H$^{-}$ free-free emission + thermal dust emission}
In Figure \ref{fig-sed}, our ALMA band~7 data still suggest a marginal excess flux compared with the single power law fitting result.
The flux density at 690~GHz also shows a significant excess, probably originating from thermal dust emission, as suggested by \citet{beuther2006} and \citet{plambeck2013}.
There also seems to be a systematic deviation between the best-fit model and the observed results between 43~GHz and 229~GHz, suggesting a change in the emission mechanism in the middle of the centimeter-to-millimeter wavelength range.
Thus, we next consider the contribution from thermal graybody emission from dust.
The H$^{-}$ free-free emission and dust graybody emission contribute to the flux densities at lower and higher frequency regions, respectively.
In this case, the flux density $F_{\nu}$ at each frequency $\nu$ is fitted to the sum of a power law function due to the H$^{-}$ free-free radiation and the graybody radiation from thermal dust emission:
\begin{eqnarray}
F_{\nu} & = & F_{\nu}^{\rm{pl}} + F_{\nu}^{\rm{gb}}.
\end{eqnarray}
The combined fitting gives a power law index of 1.60$\pm$0.24, lower than that of the single power law fitting, 1.97 (Table \ref{tab-sed}).
A significant fraction of the observed flux densities at higher frequencies ($>$200~GHz) can be explained by the graybody radiation rather than the H$^{-}$ free-free radiation.
In such a case, the turnover frequency could be as low as 200~GHz.
The required density or temperature under this condition is reduced by a factor of about $(2/3)^{2}$ when we change the turnover frequency from 300~GHz to 200~GHz (see equation (\ref{eq-turnfreq})).
Because of the lack of far-infrared wavelength data corresponding to the flux maximum of the graybody function, we cannot constrain the dust temperature of the emitting region.
It is possible that the dust graybody emission arises either from a different volume of gas, mainly in a cooler outer layer of Source~I, or from the same volume of gas as the H$^{-}$ free-free emission at temperatures higher than $\sim$1000~K.
In the former case, the total dust+gas mass contributing to the graybody SED is estimated to be 0.082~M$_{\odot}$, assuming a gas-to-dust mass ratio of 100 and a dust temperature of 100~K (Table \ref{tab-sed}).
This value is smaller than that obtained from the SMA observations, 0.2~M$_{\odot}$ \citep{beuther2004}.
In the latter case, the dust temperature would be higher than 1000~K as estimated from the continuum emission \citep{reid2007, plambeck2013} and vibrationally excited H$_{2}$O lines \citep{hirota2014a}.
The resultant circumstellar dust+gas mass would be reduced by a factor of 10, $\sim$0.008~M$_{\odot}$, compared with that of 100~K.
Thus, the dust mass alone would be smaller by a factor of 100 (i.e., the assumed gas-to-dust mass ratio), 8$\times$10$^{-5}$~M$_{\odot}$.
This value is much smaller than the gas mass estimated from the H$^{-}$ opacity at this temperature as discussed above (see Figure \ref{fig-turnfreq2}).
In either case, our result may suggest that the dust-to-gas mass ratio is unusually low in the close vicinity of Source~I, within $\sim$100~AU in diameter, because of dust sublimation at temperatures of $>$1000~K \citep[e.g.][]{reid2007}.
A power law index smaller than 2.0 would reflect a density structure of the emitting region \citep[e.g. discussion in ][]{beuther2004,plambeck2013}.
In the present analysis, we do not apply any corrections to the above flux densities, such as for the different source/beam sizes, which could introduce additional uncertainties in the derived parameters.
To better constrain the emission mechanism, higher resolution observations with accurate flux calibration are required to resolve the size of Source~I.
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{f5.eps}
\caption{
Spectral energy distribution (SED) of Source~I.
A solid blue line indicates the best-fit single power law model, $\log F_{\nu}=p+q\log\nu$, with an index of $q=1.97\pm0.10$.
A solid red line indicates the combination of power law and graybody SED.
The power law and graybody terms are shown by the red dotted and dashed lines, respectively.
Open squares represent all the data including the present ALMA results \citep[][and references therein]{plambeck2013}.
Filled circles represent the data employed in the fitting, as indicated by Y in Table \ref{tab-flux}.
}
\label{fig-sed}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{f6.eps}
\caption{
SED of Source~I predicted from H$^{-}$ free-free emission with total hydrogen densities, $n$(H)+2$n$(H$_{2}$) of (a) 10$^{11}$~cm$^{-3}$, (b) 10$^{12}$~cm$^{-3}$, (c) 10$^{13}$~cm$^{-3}$, and (d) 10$^{14}$~cm$^{-3}$.
The path length is assumed to be 84~AU, which is equal to the major axis of Source~I (0.2\arcsec).
Calculations are done with temperatures from 1200~K to 3000~K with a step of 300~K as shown by solid lines.
Filled and open squares are the same as in Figure \ref{fig-sed}.
Note that the SED for a temperature of 1200~K with a total hydrogen density of 10$^{11}$~cm$^{-3}$ (panel (a)) is not plotted because its turnover frequency is lower than the plotted frequency range.
}
\label{fig-flux}
\end{center}
\end{figure*}
\begin{deluxetable}{rccclcl}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{Flux densities of Source~I
\label{tab-flux}}
\tablehead{
\colhead{Frequency} & \colhead{Flux} & \colhead{HPBW} & \colhead{} & \colhead{} & \colhead{} &\colhead{} \\
\colhead{[GHz]} & \colhead{[mJy]} & \colhead{[mas]} & \colhead{Array} & \colhead{Epoch} & \colhead{SED\tablenotemark{a}} & \colhead{Reference}
}
\startdata
8.4 & 1.1$\pm$0.2 & 220 & VLA & 1994.3 & Y & \citet{menten1995} \\
8.4 & 1.2$\pm$0.1 & 262$\times$217 & VLA & 2006.4 & & \citet{gomez2008} \\
15 \ \ & 1.54$\pm$0.18 & 140$\times$130 & VLA & 1986.3 & Y & \citet{felli1993a} \\
15 \ \ & 1.6$\pm$0.4 & 150 & VLA & 1990 \ \ & & \citet{felli1993b} \\
22 \ \ & 5.7$\pm$0.9 & 109$\times$97 & VLA & 1991.5 & Y & \citet{forbrich2008} \\
43 \ \ & 13$\pm$2 & 250 & VLA & 1994.3 & & \citet{menten1995} \\
43 \ \ & 10.8$\pm$0.6 & 1960$\times$1410 & VLA & 1994.9 & & \citet{chandler1997} \\
43 \ \ & 13 & 41$\times$28 & VLA & 2000.9 & Y & \citet{reid2007} \\
43 \ \ & 14.5$\pm$0.7 & 170$\times$150 & VLA & 2007.9 & & \citet{rodriguez2009} \\
43 \ \ & 11$\pm$2 & 58$\times$39 & VLA & 2009.0 & & \citet{goddi2011} \\
86 \ \ & 34$\pm$5 & 1000$\times$380 & BIMA & 1995.0 & & \citet{plambeck1995} \\
90 \ \ & 50$\pm$5 & 400$\times$350 & CARMA & 2009.0 & Y & \citet{friedel2011} \\
229 \ \ & 310$\pm$45 & 150$\times$130 & CARMA & 2009.1 & Y & \citet{plambeck2013} \\
245 \ \ & 272$\pm$13 & 630$\times$470 & ALMA & 2012 \ \ & & Present study \\
339 \ \ & 849$\pm$29 & 460$\times$410 & ALMA & 2012 \ \ & Y & Present study \\
341 \ \ & 1400$\pm$100 & 800$\times$700 & SMA & 2009.1 & & \citet{tang2010} \\
348 \ \ & 320$\pm$48 & 780$\times$650 & SMA & 2004.1 & & \citet{beuther2004} \\
690 \ \ & 6700$\pm$3200 & 1400$\times$900 & SMA & 2005.1 & Y & \citet{beuther2006} \\
\enddata
\tablecomments{See \citet{plambeck2013}. }
\tablenotetext{a}{Y indicates the data employed in the SED fitting. }
\end{deluxetable}
\begin{deluxetable}{lcccc}
\tablewidth{0pt}
\tabletypesize{\scriptsize}
\tablecaption{SED of Source~I
\label{tab-sed}}
\tablehead{
\colhead{Model} & \colhead{$p$} & \colhead{$q$} & \colhead{$M$(100~K)[$M_{\odot}$]} & \colhead{$\beta$} }
\startdata
Single power law & -1.99(19) & 1.97(10) & \nodata & \nodata \\
power law$+$graybody & -1.51(30) & 1.60(24) & 0.082(49) & 1.34(83) \\
\enddata
\tablecomments{Dust temperature of 100~K is assumed. \\
Numbers in parenthesis represent fitting errors (1$\sigma$) in unit of the last significant digits. }
\end{deluxetable}
\subsection{Hot Core (HKKH5)}
Hot Core in Orion~KL has been recognized as a prototypical dense and hot molecular gas clump of the kind often found in massive star-forming regions.
Hot Core is known to show a wealth of molecular line emission at millimeter and submillimeter wavelengths \citep[e.g.][]{beuther2005}.
A submillimeter continuum source SMM3 detected with SMA \citep{zapata2011} corresponds to this compact millimeter/submillimeter source.
This source corresponds to the 3~mm continuum emission C18 \citep{friedel2011} while it cannot be seen in the molecular line map of methyl formate \citep{favre2011}.
It is most likely that methyl formate is deficient in the Hot Core where nitrogen-bearing organic molecules are dominant.
According to the recent molecular line observations, Orion Hot Core is most likely heated externally by the explosive outflow rather than by an embedded self-luminous source \citep{zapata2011}.
The mass derived from our ALMA data is 0.159$M_{\odot}$ at the largest, assuming a temperature of 20~K.
The dust opacity index derived from the graybody SED model is consistent with 1.0.
As discussed for Source~I, the mass of the main dust ridge including Source~I and Hot Core is estimated to be 2-12$M_{\odot}$ \citep{tang2010} and 5.8$M_{\odot}$ \citep{favre2011}.
Although these mass estimates do not resolve more compact structures such as Source~I and SMA1, it is clear that our result recovers only a part of the total mass of the dense gas.
\subsection{SMA1 (HKKH7)}
A submillimeter continuum source SMA1 was first identified by \citet{beuther2004} in the SMA observations at 348~GHz.
This source is identified to be the 3~mm continuum source C21 \citep{friedel2011} and the methyl formate peak MF6 \citep{favre2011}.
SMA1 is proposed to be a powering source of the explosive outflow \citep{beuther2008}, although the origin of this explosive outflow is still under debate \citep[e.g.][]{zapata2009,bally2011,goddi2011}.
The flux density and peak intensity of our ALMA band~7 data are 534~mJy and 237~mJy~beam$^{-1}$, respectively.
On the other hand, SMA1 was not resolved in the previous SMA observations, with a flux density and peak intensity of 360~mJy and 360~mJy~beam$^{-1}$, respectively \citep{beuther2004}.
The difference between SMA and ALMA observations could be attributed to the different uv coverage.
The dust opacity index, $\beta$, is consistent with 1.0.
The derived circumstellar mass of SMA1 is only $\sim$0.18$M_{\odot}$ assuming the dust temperature of 20~K.
It is much smaller than the total mass of the main dust ridge of this region as discussed above.
\subsection{IRc7 (HKKH6) and IRc4 (HKKH10)}
We clearly see an anti-correlation between the millimeter/submillimeter emission and the mid-infrared emission (Figure \ref{fig-label}).
However, we detect compact continuum emission sources associated with infrared sources IRc4 and IRc7.
It is still unclear whether these sources are heated internally or externally.
The infrared spectrum of IRc4 observed with the Subaru telescope can be fitted to the Planck function with a single temperature of 140~K \citep{okumura2011}, and it is thought to be heated by an external energy source(s) \citep{okumura2011}.
On the other hand, a color temperature map obtained from longer wavelength observations with SOFIA suggests that IRc4 is a self-luminous source \citep{debuizer2012}.
For IRc7, it is proposed that an outflow or radiation from Source~n would form a fan-like structure in the infrared emission \citep{greenhill2004}.
According to near-infrared polarization observations with Hubble Space Telescope (HST), IRc4 is thought to be a reflection nebula illuminated by a nearby source(s) while IRc7 is most likely heated by an embedded YSO \citep{simpson2006}.
In the 3~mm continuum map, IRc7 is identified as a compact source C23 but IRc4 is only found as a diffuse emission \citep{friedel2011}.
We detect compact high-density cores with sizes of $\sim$200-300~AU and H$_{2}$ densities of $>$10$^{8}$~cm$^{-3}$ at the positions of IRc4 and IRc7.
Circumstellar masses are estimated to be larger than 0.145$M_{\odot}$ and 0.086$M_{\odot}$ for IRc4 and IRc7, respectively.
Except for IRc4, IRc7, and BN, as discussed below, we cannot detect millimeter/submillimeter counterparts at the positions of the other IRc sources.
\subsection{Compact Ridge (HKKH11)}
We identify a compact millimeter/submillimeter emission corresponding to a submillimeter source SMM1 \citep{zapata2011}, 3~mm continuum source C32 \citep{friedel2011} and methyl formate peak MF1 \citep{favre2011} in the Compact Ridge.
The mass of the continuum source at the Compact Ridge is estimated to be 4$M_{\odot}$ \citep{tang2010}.
\citet{eisner2008} identified a compact 1.3~mm continuum source named HC~438 in the Compact Ridge by using CARMA at a resolution of 0.5\arcsec, the highest among the previous results.
They detected the 1.3~mm continuum emission with a flux density of 67.8$\pm$14.2~mJy.
This is consistent with our ALMA band~6 data at a similar spatial resolution.
The circumstellar mass of HC~438 is derived to be 0.085~$M_{\odot}$ with the assumed dust temperature of 20~K \citep{eisner2008}.
Our result, 0.105$M_{\odot}$, agrees well with that of \citet{eisner2008}.
If the higher temperature of 100~K derived from the methyl formate data \citep{favre2011} is employed, a smaller mass of 0.010$M_{\odot}$ is obtained.
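The circumstellar masses quoted in this section follow the standard optically thin dust relation $M = F_{\nu} d^{2} / (\kappa_{\nu} B_{\nu}(T_{\rm d}))$, so the assumed dust temperature directly controls the derived mass. A minimal sketch of the estimate (the opacity $\kappa_{\nu}$ and the distance of 418~pc are illustrative assumptions, not necessarily the values adopted in the paper):

```python
import numpy as np

# CGS constants
H = 6.626e-27      # Planck constant [erg s]
K_B = 1.381e-16    # Boltzmann constant [erg K^-1]
C = 2.998e10      # speed of light [cm s^-1]
M_SUN = 1.989e33  # solar mass [g]
PC = 3.086e18     # parsec [cm]

def planck(nu_hz, temp):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (K_B * temp))

def dust_mass(f_nu_jy, nu_ghz, temp, kappa=0.01, dist_pc=418.0):
    """Optically thin dust mass M = F_nu d^2 / (kappa_nu B_nu(T)),
    in solar masses.  kappa [cm^2 per gram of gas+dust] and the
    distance are assumed values for illustration only."""
    f_cgs = f_nu_jy * 1.0e-23          # Jy -> erg s^-1 cm^-2 Hz^-1
    d = dist_pc * PC
    return f_cgs * d**2 / (kappa * planck(nu_ghz * 1.0e9, temp)) / M_SUN

m_cold = dust_mass(67.8e-3, 245.0, 20.0)    # assuming T_d = 20 K
m_warm = dust_mass(67.8e-3, 245.0, 100.0)   # assuming T_d = 100 K
print(m_cold, m_warm)  # the warmer assumption yields a smaller mass
```

Since $B_{\nu}(T)$ increases with $T$, adopting 100~K instead of 20~K necessarily lowers the mass estimate, as in the comparison above.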
As discussed by \citet{favre2011}, there are possible counterparts physically associated with the millimeter/submillimeter continuum source: a radio continuum source labeled R \citep{felli1993a}, a molecular peak traced by methyl formate lines, MF1 \citep{favre2011}, and the optical source Parenago~1822.
In addition, an extremely bright 22~GHz H$_{2}$O maser source appears in the Compact Ridge at systemic velocities of $\sim$6.9~km~s$^{-1}$ and 7.5~km~s$^{-1}$.
It is sometimes called the supermaser \citep[][and references therein]{hirota2011, hirota2014b}.
The supermaser is located within the millimeter/submillimeter continuum source detected with ALMA.
\citet{favre2011} suggest that the supermaser would be related to the shocked molecular gas of the Compact Ridge \citep{liu2002}.
It is also suggested that the maser burst could occur as a result of an interaction with outflows and a pre-existing YSO in the Compact Ridge \citep{garay1989}.
\citet{hirota2014b} speculate that the compact continuum source, which can supply plenty of ambient molecular gas, would play an important role in amplifying the supermaser.
Near the supermaser, a cluster of other 22~GHz H$_{2}$O maser features detected with the VLA \citep{gaume1998} is distributed around the continuum source.
This may imply an embedded YSO as the powering source of the H$_{2}$O maser cluster \citep{hirota2014b}.
We note that an infrared source IRc5 is located 1\arcsec \ east of the ALMA continuum peak position and corresponds to another 3~mm continuum peak, C29 \citep{friedel2011}.
The infrared spectrum of IRc5 can be fitted with the Planck function at the temperature of 120~K \citep{okumura2011}.
Similar to IRc4, IRc5 is thought to be shocked molecular gas heated externally \citep{okumura2011}.
Thus, IRc5 is not physically associated with the millimeter/submillimeter compact source detected with ALMA.
\subsection{BN}
Orion BN object is another interesting source regarding the energy source of this region \citep{galvan-madrid2012, plambeck2013} and a possible counterpart of the close encounter with Source~I \citep{gomez2008,goddi2011, bally2011} or $\theta^{1}C$ \citep{chatterjee2012}.
It is a massive YSO associated with a hypercompact H{\sc{ii}} region ionized by a zero age main sequence B star \citep{plambeck2013}.
According to the ALMA SV data \citep{galvan-madrid2012} with a spatial resolution of 1.32\arcsec$\times$0.62\arcsec, compact continuum emission at 230~GHz is detected with a flux density of 126~mJy by using only the baselines longer than 100~k$\lambda$ to filter out extended emission.
On the other hand, the higher resolution CARMA observation detected 229~GHz continuum emission of 240~mJy at a resolution of 0.14\arcsec \ \citep{plambeck2013}.
A continuum emission source associated with the BN object is marginally seen in our ALMA bands~6 and 7 images outside the primary beam size of the ALMA antenna (Figures \ref{fig-mapb6} and \ref{fig-mapb7}).
Flux densities at bands~6 and 7 are derived to be 55$\pm$4~mJy and 33$\pm$6~mJy, respectively.
When we correct for the effect of the primary beam attenuation by a factor of 2, the calibrated peak intensities are 110~mJy and 66~mJy, respectively.
The marginal detection in our ALMA image at band~6 does not contradict the results of \citet{galvan-madrid2012}, although our ALMA results are quite uncertain.
Thus, we do not consider the BN object as an identified source in the present paper.
\subsection{Source~n}
Source~n is also detected in the SMA observation at 348~GHz \citep{beuther2004}.
It is proposed to be a YSO associated with a bipolar radio jet \citep{menten1995} and a circumstellar disk traced by a mid-infrared emission \citep{greenhill2004}.
Our ALMA images at bands~6 and 7 show no compact emission at the position of Source~n.
The upper limit ($5\sigma$) on the peak intensity is 25~mJy~beam$^{-1}$ for both bands~6 and 7.
The peak intensity derived from the SMA observation, 300~mJy~beam$^{-1}$, is much larger than our ALMA result.
The non-detection of Source~n in our ALMA images is possibly due to our lower sensitivity to extended emission components.
\section{Summary}
We have carried out millimeter/submillimeter continuum imaging of the Orion~KL region by using the newly constructed ALMA at a resolution of 0.5\arcsec, corresponding to a linear scale of 200~AU.
Compact continuum emission sources are detected, and 11 sources are identified both at band~6 (245~GHz) and band~7 (339~GHz).
They include some of the remarkable sources in Orion~KL, such as Source~I, Hot Core, SMA1, IRc4, IRc7, and the Compact Ridge.
Their physical properties, such as size, mass, and H$_{2}$ number and column densities, are discussed by employing published 3~mm continuum data \citep{friedel2011} to construct SEDs of dust graybody emission.
Among these identified sources, the SED of Source~I, a dominant energy source in Orion~KL, is studied extensively by using previous observational results from centimeter to submillimeter wavelengths \citep[see references in ][]{plambeck2013}.
The SED model with H$^{-}$ free-free emission is presented, following the discussion by \citet{reid1997}, to explain the power-law index of the SED, 1.97, which is consistent with optically thick emission.
We introduce a turnover frequency of the H$^{-}$ free-free emission to constrain the gas temperature and total hydrogen density of the circumstellar disk of Source~I.
As a result, total hydrogen densities of 10$^{11}$-10$^{14}$~cm$^{-3}$ are required to account for the SED with a single power-law index of 2.0 for temperatures of 1200-3000~K, in the case of a turnover frequency of $\sim$300~GHz.
When we employ the combination of dust graybody emission and power-law SED, the turnover frequency would be as low as 200~GHz, which reduces the estimated temperature and/or density by a factor of $(2/3)^{2}$.
The fitting result yields a smaller power-law index of 1.60, suggesting a compact size or clumpy structure of the emission region unresolved with the present study \citep{beuther2004,plambeck2013}.
The estimated temperature, density, and source size are strongly coupled with each other, and hence future higher resolution observations with ALMA will be key to resolving these degeneracies in the physical properties.
\acknowledgements
We are grateful to S. Okumura for providing a Subaru mid-infrared image.
This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00009.SV and 2011.0.00199.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
We thank the staff at ALMA for making observations and reducing the science verification data.
T.H. is supported by the MEXT/JSPS KAKENHI Grant Numbers 21224002, 24684011, and 25108005, and the ALMA Japan Research Grant of NAOJ Chile Observatory, NAOJ-ALMA-0006.
M.H. is supported by the MEXT/JSPS KAKENHI Grant Numbers 24540242 and 25120007.
{\it Facilities:} \facility{ALMA}.
\section{Introduction}
Dov Tamari, in his 1951 thesis \cite{D.Tamari}, first described associahedra (with notation $\mathscr{M}_{n-1}$) as the realization of his lattice of bracketings (parenthesizations) of a word with $n$ letters. He had also pictured the $1$, $2$ and $3$ dimensional cases (cf. figure \ref{fig:D.Tamari}). Later these were rediscovered by Jim Stasheff \cite{HAH} in his 1960 thesis on homotopy associativity and based loop spaces. Stasheff had defined these (with notation $K_n$) as convex, curvilinear subsets of the $(n-2)$ dimensional unit cube (cf. figure \ref{fig:Stasheff}) homeomorphic to the cube. Convex polytope realizations of associahedra were subsequently given by many people \cite{HuguetTamari, Haiman, Lee, Loday1}. These polytopes are commonly known as \textit{associahedra} or \textit{Stasheff polytopes}. \\
\hspace*{0.5cm}Ever since Stasheff's work, associahedra (and their face complexes) have continued to appear in various mathematical fields, apart from their crucial role in homotopy associative algebras and their importance in discrete geometry. Indeed, the associahedron $K_{n-1}$ appears as a fundamental tile of $\overline{\mathcal{M}}_{0,n}(\mathbb{R})$, the compactification of the real moduli space of the punctured Riemann sphere \cite{Devadoss3}. It also appears in the analysis of the compactified moduli space of \textit{nodal disks} with markings, as described by Fukaya and Oh \cite{Fukaya}. An important connection between associahedra (and their generalizations) and finite root systems was established in 2003 by the work of Fomin and Zelevinsky \cite{FominZelevinsky}. In 2006 Carr and Devadoss \cite{Devadoss2} generalized associahedra to graph associahedra $\mathcal{K}G$ for a given graph $G$. These appear as the tiling of minimal blow-ups of certain Coxeter complexes \cite{Devadoss2}. In particular, if $G$ is a path graph, then $\mathcal{K}G$ is an associahedron. Bowlin and Brin \cite{BowlinBrin}, in 2013, gave a precise conjecture about the existence of coloured paths in associahedra. They showed that this conjecture is equivalent to the four colour theorem (4CT). Earlier, in 1988, there was a celebrated work \cite{SleatorTarjanThurston} of Sleator, Tarjan and Thurston on the diameter of associahedra. While working on the dynamic optimality conjecture, they had used hyperbolic geometry techniques to show that the diameter of $K_d$ is at most $2d-8$ when $d\geq 11$, and that this bound is sharp when $d$ is large enough. Pournin \cite{Pournin}, almost twenty-five years later, showed that this bound is sharp for every $d\geq 11$. Moreover, his proof was combinatorial. Even in theoretical physics, recent works \cite{Mizera,HamedBaiHeYan,FerroTomasz} indicate that the associahedron plays a key role in the theory of scattering amplitudes.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{Tamari_picture.pdf}
\caption{Tamari's associahedra}
\label{fig:D.Tamari}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{Stasheff_picture.pdf}
\caption{Stasheff's associahedra}
\label{fig:Stasheff}
\end{subfigure}
\caption{Earliest realizations of associahedra}
\end{figure}
Let us briefly recall the construction in \cite{HAH}. Stasheff, respecting Tamari's description, had sub-divided the boundary of $K_n$ in such a way that the number of faces of codimension $1$ and the adjacencies in his model matched those in \cite{D.Tamari}. The boundary of $K_n$, denoted by $L_n$, is the union of homeomorphic images of $K_p\times_r K_q$ ($p+q=n+1$, $r=1,2,...,p$), where $K_p\times_r K_q$ corresponds to the bracketing $x_1\ldots (x_r\ldots x_{r+q-1})\ldots x_n$.
Stasheff started with $K_2$ as a point and defined $K_n$, inductively, as a cone over $L_n$. This definition of $K_n$ involves $K_2$ through $K_{n-1}$ all together. \\
\hspace*{0.5cm}As associahedra are contractible, they are of little interest as spaces in isolation. However, as combinatorial objects, their key properties are inherent in their description as convex polytopes. Much later, in 2005, J. L. Loday \cite{Loday2} gave a different inductive construction of $K_n$ starting from $K_{n-1}$, leaving it to the reader to verify the details. Since this construction is predominantly topological, it is not apparent why Loday's cone construction gives rise to the known combinatorial structure on the associahedra. It is, therefore, natural to search for an explicit combinatorial isomorphism between these two constructions, leading to our first result (Theorem \ref{thm:StasheffLoday}).
\begin{customthm}{A} \label{thm:iStasheffLoday}
Stasheff polytopes are combinatorially isomorphic to Loday's cone construction of associahedra.
\end{customthm}
There is another set of complexes $\mathcal{J}(n)$, known as \textit{multiplihedra}, which were first introduced and pictured by Stasheff \cite{Stas} in order to define $A_\infty$ maps between $A_\infty$ spaces, for $n\leq 4$. Mau and Woodward \cite{Woodward} have shown $\mathcal{J}(n)$'s to be compactification of the moduli space of quilted disks. Boardman and Vogt \cite{Boardman} provided a definition of $\mathcal{J}(n)$ in terms of painted trees (refer to Definition \ref{def:PaintedTree}). The first detailed definition of $\mathcal{J}(n)$ and its combinatorial properties were described by Iwase and Mimura \cite{IwaseMimura}, while its realization as convex polytopes was achieved by Forcey \cite{Forcey1}, combining the description of Boardman-Vogt and Iwase-Mimura. Later, Devadoss and Forcey \cite{Devadoss4} generalized multiplihedra to \textit{graph multiplihedra} $\mathcal{J}G$ for a given graph $G$. \\
\hspace*{0.5cm}In the study of $A_\infty$ maps from an $A_\infty$ space to a strictly associative $H$ space (i.e., a topological monoid), multiplihedra degenerate to what we call \textit{collapsed multiplihedra}. Stasheff \cite{Stas} had pointed out that these polytopes resemble associahedra. It has been observed that collapsed multiplihedra can be viewed as degenerations of graph multiplihedra for path graphs.
It was long assumed that for $A_\infty$ maps from a strictly associative $H$ space to an $A_\infty$ space, multiplihedra would likewise degenerate to yield associahedra. But it was Forcey \cite{Forcey2} who realized that new polytopes were needed. These were constructed by him and named \textit{composihedra}. \\
\hspace*{0.5cm}In this paper, we will give an equivalent definition (Definition \ref{def:Multiplihedra}) of multiplihedra, which induces a definition for collapsed multiplihedra (Definition \ref{def:CollMulti}). Using this definition, we will give a proof of the following (Proposition \ref{thm:StasMulti}) by providing a new bijection of underlying posets.
\begin{customobs}{b} \label{thm:iStasMulti}
Stasheff polytopes and collapsed multiplihedra are combinatorially isomorphic.
\end{customobs}
\noindent There is a well-known bijection $bij_3$ (cf. Forcey's paper \cite[p. 195]{ForceyLattice}; prior to Remark 2.6 and Figure 7) which is different from ours.
In 2010, Devadoss, Heath, and Vipismakul \cite{Devadoss1} defined a polytope called \textit{graph cubeahedron} (denoted by $\mathcal{C}G$) associated to a graph $G$. These are obtained by truncating certain faces of a cube. They gave a convex realization of these polytopes as simple convex polytopes whose face poset is isomorphic to the poset of design tubings for graphs. Graph cubeahedra for cycle graphs $G$ (called \textit{halohedra}) appear as the moduli space of annulus with marked points on one boundary circle. In this paper, we are mainly interested in $\mathcal{C}G$ for path graphs $G$ and will prove the following (Proposition \ref{thm:MultiCubea}) by providing a new bijection of underlying posets.
\begin{customobs}{c}\label{thm:iMultiCubea}
The collapsed multiplihedra and graph cubeahedra for path graphs are combinatorially isomorphic.
\end{customobs}
\noindent It turns out that the bijection between the posets governing Stasheff polytopes and graph cubeahedra (for path graphs), obtained by combining our bijections from Observations \ref{thm:iStasMulti} and \ref{thm:iMultiCubea}, is the bijection of posets defined in \cite[Proposition 14]{Devadoss1}. From our perspective, the bijection in Observation \ref{thm:iMultiCubea} is natural. Combining Theorem \ref{thm:iStasheffLoday} and Observations \ref{thm:iStasMulti} and \ref{thm:iMultiCubea}, we obtain the following result (Theorem \ref{thm:Main}).
\begin{customthm}{B}\label{thm:iMain}
The four models of associahedra - Stasheff polytopes, complexes obtained by Loday's cone construction, collapsed multiplihedra, graph cubeahedra for path graphs - are all combinatorially isomorphic.
\end{customthm}
\noindent \textsc{Organization of the paper}. The paper is organized as follows. In \S \ref{Stasheff}, we review some of the definitions and results related to Stasheff's description of associahedra. In \S \ref{LodayCone}, the description of Loday's cone construction and some related theorems are presented, while in \S \ref{CollapsedMulti} equivalent definitions of multiplihedra and collapsed multiplihedra are given. In \S \ref{Cubeahedra} the definitions of tubings, design tubings, and graph cubeahedra, along with related results, are presented. The next section, \S \ref{Main}, contains the proof of the main result (Theorem \ref{thm:iMain}), which is a combination of three results. In \S \ref{LodayStasheff} we prove Theorem \ref{thm:iStasheffLoday}, while \S \ref{StasMulti} and \S \ref{MultiCubea} are devoted to the proofs of Observations \ref{thm:iStasMulti} and \ref{thm:iMultiCubea} respectively.\\
\noindent \textsc{Acknowledgments.} The authors would like to thank Stefan Forcey for an initial discussion on this topic as well as several useful comments on the first draft. The first author acknowledges the support of SERB MATRICS grant MTR/2017/000807 for the funding opportunity. The second author is supported by a PMRF fellowship.
\section{Description of Four Models of Associahedra}
An $H$-space is a topological space $X$ equipped with a binary operation $m:X^2 \to X$ having a unit $e$. It is a natural generalization of the notion of topological groups. We can rewrite $m$ as a map $m_2:K_2\times X^2\to X$, where $K_2$ is a point. If $m$ is not associative but homotopy associative (called weakly associative), then we have a map $m_3:K_3\times X^3\to X$ defined through the homotopy between $m\comp (m\times 1)$ and $m\comp (1\times m)$, where $K_3$ is an interval. Similarly, we can define five different maps from $X^4\to X$ using $m$, and between any two such maps, there are two different homotopies (using the chosen homotopy associativity). If those two homotopies are homotopic themselves, then this defines a map $m_4:K_4\times X^4\to X$, where $K_4$ is a filled pentagon. If we continue this process, we obtain a map $m_n:K_n\times X^n\to X$ for $n\geq 2$. These complexes $K_n$, called associahedra, are our main objects of interest.\\
\hspace*{0.5cm}We will briefly describe the four models of associahedra, one in each subsection, we are concerned with: Stasheff polytopes, Loday's cone construction, collapsed multiplihedra, and graph cubeahedra for path graphs.
\subsection{Stasheff Polytopes} \label{Stasheff}
Stasheff defined, for each $i\geq 2$, a special cell complex $K_i$ as a subset of $I^{i-2}$. It is a simplicial complex and has $i$ degeneracy operators $s_1,...,s_i$. Moreover, $K_i$ has $\binom{i}{2}-1$ faces of codimension $1$. The complexes $K_i$, as combinatorial objects, are more complicated than the standard simplices $\Delta^{i-2}$. According to Stasheff \cite{HAH}, it is defined through the following intuitive content:\\
\indent Consider a word with $i$ letters and all meaningful ways of inserting one set of parentheses. To each such insertion except for $(x_1x_2\cdots x_i)$, there corresponds a cell of $L_i$, the boundary of $K_i$. If the parentheses enclose $x_k$ through $x_{k+s-1}$, we regard this cell as $K_r\times_k K_s$, the homeomorphic image of $K_r\times K_s$ under a map which we call $\partial_k(r,s)$, where $r+s=i+1$. Two such cells intersect only on their boundaries and the `edges' so formed correspond to inserting two sets of parentheses in the word.
Thus we have the relations:
\begin{itemize}
\item[(a)] $\partial_{j}(r, s+t-1)\left(1 \times \partial_{k}(s, t)\right)=\partial_{j+k-1}(r+s-1, t)\left(\partial_{j}(r, s) \times 1\right)$
\item[(b)] $\partial_{j+s-1}(r+s-1, t)\left(\partial_{k}(r, s) \times 1\right)=\partial_{k}(r+t-1, s)\left(\partial_{j}(r, t) \times 1\right)(1 \times T)$
\end{itemize}
where $T: K_{s} \times K_{t} \rightarrow K_{t} \times K_{s}$ permutes the factors. Observe that, in terms of homeomorphic images of $K_r\times K_s\times K_t$, the two relations above are equivalent respectively to the identifications
\begin{align}
&K_r\times_j(K_s\times_k K_t)=(K_r\times_j K_s)\times_{j+k-1} K_t\label{eq:1}\\
&(K_r\times_k K_s)\times_{j+s-1} K_t=(K_r\times_j K_t)\times_k K_s\label{eq:2}
\end{align}
This is enough to obtain $K_{i}$ by induction. Start with $K_{2}=\{\ast\}$ as a point. Given $K_{2}$ through $K_{i-1}$, construct $L_{i}$ by fitting together copies of $K_{r} \times_k K_{s}$ as indicated by the above conditions. Take $K_{i}$ to be the cone on $L_{i}$.
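The cells $K_r\times_k K_s$ that tile $L_i$ can be enumerated directly: a triple $(r,s,k)$ records the two factors and the position $k$ of the inner bracket. A short sketch, confirming that their number agrees with the count $\binom{i}{2}-1$ stated earlier:

```python
from math import comb

def codim1_faces(i):
    """Codimension-1 faces K_r x_k K_s of K_i: r + s = i + 1,
    r, s >= 2, and 1 <= k <= r (position of the inner bracket)."""
    return [(r, i + 1 - r, k)
            for r in range(2, i)          # then s = i + 1 - r >= 2
            for k in range(1, r + 1)]

for i in (3, 4, 5):
    print(i, len(codim1_faces(i)))  # len equals binom(i, 2) - 1

print(codim1_faces(4))  # the 5 facets of the pentagon K_4
```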
Stasheff proved that these polytopes are homeomorphic to cubes.
\begin{prop}{\cite[Proposition 3]{HAH}}
$K_{i}$ is homeomorphic to $I^{i-2}$ and
degeneracy maps $s_{j}: K_{i} \rightarrow K_{i-1}$ for $1 \leq j \leq i$ can be defined so that the following relations hold:
\begin{enumerate}
\item $s_{j} s_{k}=s_{k} s_{j+1}$ for $k \leq j$.
\item $s_{j} \partial_{k}(r, s)=\partial_{k-1}(r-1, s)\left(s_{j} \times 1\right)$ for $j<k$ and $r>2$.
\item $s_{j} \partial_{k}(r, s)=\partial_{k}(r, s-1)\left(1 \times s_{j-k+1}\right)$ for $s>2, k \leq j<k+s,\\
s_{j} \partial_{k}(i-1,2)=\pi_{1}$ for $1<j=k<i$ and $1<j=k+1 \leq i, $\\
$s_{1} \partial_{2}(2, i-1)=\pi_{2}$ and $s_{i} \partial_{1}(2, i-1)=\pi_{2}$,\\
where $\pi_{m}$ for $m=1,2$ is projection onto the $m$th factor.
\item $s_{j} \partial_{k}(r, s)=\partial_{k}(r-1, s)\left(s_{j-s+1} \times 1\right)$ for $k+s \leq j$.
\end{enumerate}
\end{prop}
\noindent
Using boundary maps $\partial_k(r,s)$ and degeneracy maps $s_j$, Stasheff defined the following.
\begin{defi}[$A_n$ form and $A_n$ space]
An $A_n$ form on a space $X$ consists of a family of maps $m_i:K_i\times X^i\to X$ for $2\leq i\leq n$ such that
\begin{enumerate}
\item there exists $e\in X$ with $m_{2}(*, e, x)=m_{2}(*, x, e)=x$ for $x \in X, *=K_{2}$.
\item For $\rho \in K_{r}, \sigma \in K_{s}, r+s=i+1$, we have
$$
\begin{aligned}
m_{i}\left(\partial_{k}(r, s)(\rho, \sigma), x_{1}, \cdots, x_{i}\right)=
m_{r}\left(\rho, x_{1}, \cdots, x_{k-1}, m_{s}\left(\sigma, x_{k}, \cdots, x_{k+s-1}\right), x_{k+s}, \cdots, x_{i}\right).
\end{aligned}
$$
\item For $\tau \in K_{i}$ and $i>2$, we have
$$
m_{i}\left(\tau, x_{1}, \cdots, x_{j-1}, e, x_{j+1}, \cdots, x_{i}\right)=m_{i-1}\left(s_{j}(\tau), x_{1}, \cdots, x_{j-1}, x_{j+1}, \cdots, x_{i}\right).
$$
\end{enumerate}
The pair $(X,\{m_i\}_{2\leq i\leq n})$ is called an $A_n$ space. \\
If the maps $m_i$ exist for all $i$, then it is called an $A_\infty$ form and the corresponding pair is called an $A_\infty$ space.
\end{defi}
\noindent
Homotopy associative algebras (or $A_\infty$ algebras), $A_\infty$ spaces and operads have been extensively studied. The interested reader is directed to the excellent books \cite{May, Boardman, Adams} and introductory notes \cite{Keller}.
\noindent
Related to the notion of $A_n$ space, Stasheff \cite{HAH} also defined the notion of $A_n$ structure.
\begin{defi}[$A_n$ structure]
An $A_n$ \textit{structure} on a space $X$ consists of an $n$-tuple of maps $p_i:E_i\to B_i$ for $1\leq i\leq n$ with $X=E_1\subset E_2 \subset \cdots \subset E_n$ and $*=B_1\subset B_2\subset \cdots \subset B_n$ such that $p_{i*}:\pi_q(E_i,X)\to \pi_q(B_i)$ is an isomorphism for all $q$, together with a contracting homotopy $h:CE_{n-1}\to E_n$ such that $h(CE_{i-1})\subset E_i$.
\end{defi}
One of the key results in Stasheff \cite[Theorem 5]{HAH} states that a space admits an $A_n$ structure if and only if it has an $A_n$ form. Topological groups and, more generally, based loop spaces admit $A_n$ structures for all values of $n$. The landmark result in \cite[Remarks before \S 6 on page 283 of HAH-I]{HAH}, essentially motivated by earlier works of Sugawara \cite[Theorem 4.2]{Sugawara}, \cite[Lemma 10]{SugawaraH-space}, is a recognition principle for based loop spaces.
\begin{thm}[Stasheff]
A space $Y$, having the homotopy type of a CW complex, is an $A_\infty$ space if and only if $Y$ is homotopy equivalent to a based loop space.
\end{thm}
In this paper, however, we are exclusively interested in the combinatorial description of the complexes $K_i$.
The correspondence between faces of Stasheff polytopes (associahedra) and the bracketings indicate that these polytopes can also be defined as follows.
\begin{defi}[Associahedron]\label{def:Associahedra1}
Let $\mathfrak{P}(n)$ be the poset of bracketings of a word with $n$ letters, ordered such that $p < p^{\prime}$ if $p$ is obtained from $p^{\prime}$ by adding new brackets. The associahedron $K_{n}$ is a convex polytope of dimension $n-2$ whose face poset is isomorphic to $\mathfrak{P}(n)$.
\end{defi}
\noindent
This construction of the polytope $K_{n}$ was first given in 1984 by Haiman in his (unpublished) manuscript \cite{Haiman}. In 1989, C. Lee \cite[Theorem 1]{Lee} proved this by considering the collection of all sets of mutually non-crossing diagonals of a polygon.
Observe that the sets of mutually non-crossing diagonals of an $(n+1)$-gon are in bijective correspondence with the bracketings of a word with $n$ letters.
We will use this description later in \S \ref{StasMulti}.
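This correspondence can be checked computationally for small $n$. The following Python sketch (the helper names \texttt{bracketings} and \texttt{triangulations} are ours, not taken from the cited sources) enumerates the complete bracketings of an $n$-letter word and the maximal sets of mutually non-crossing diagonals of an $(n+1)$-gon, and compares both counts with the Catalan number $C_{n-1}$:

```python
from itertools import combinations
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

def bracketings(n):
    """Complete (binary) bracketings of a word with n letters."""
    if n == 1:
        return ['x']
    return ['(' + left + right + ')'
            for k in range(1, n)
            for left in bracketings(k)
            for right in bracketings(n - k)]

def crossing(d1, d2):
    # two chords of a convex polygon cross iff their endpoints interleave
    (a, b), (c, d) = d1, d2
    return a < c < b < d or c < a < d < b

def triangulations(m):
    """Maximal sets of mutually non-crossing diagonals of a convex m-gon
    (vertices labelled 0, ..., m-1); each such set has m-3 diagonals."""
    diagonals = [(i, j) for i in range(m) for j in range(i + 2, m)
                 if not (i == 0 and j == m - 1)]
    return [S for S in combinations(diagonals, m - 3)
            if not any(crossing(p, q) for p, q in combinations(S, 2))]

# complete bracketings of an n-letter word and triangulations of an
# (n+1)-gon are both counted by the Catalan number C_{n-1}
for n in range(3, 7):
    assert len(bracketings(n)) == catalan(n - 1)
    assert len(triangulations(n + 1)) == catalan(n - 1)
```

The same enumeration, restricted to non-maximal diagonal sets, matches the partial bracketings that label the higher-dimensional faces.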
\subsection{Loday's Cone Construction}\label{LodayCone}
From the combinatorial description given by Stasheff, the associahedron $K_n$ is a polytope of dimension $n-2$ whose vertices are in bijective correspondence with the $(n-2)$-bracketings of the word $x_1x_2\ldots x_n$.
Each $(n-2)$-bracketing of the word $x_1x_2\ldots x_n$ corresponds to a rooted planar binary tree with $n+1$ leaves, one of them being the root. For example, the planar rooted trees associated to $x_1(x_2(x_3x_4))$ and $(x_1x_2)(x_3x_4)$ are depicted below (cf. figures \ref{fig:tree1} and \ref{fig:tree2}), the root being represented by the vertical leaf in each case.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\scalebox{0.25}{\tree{[[[[[][,tier=L]][,tier=L]][,tier=L]]]}}
\subcaption{$x_1(x_2(x_3x_4))$}
\label{fig:tree1}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\scalebox{0.25}{\tree{[[[[][,tier=L]][~,phantom,fit=band][[][,tier=L]]]]}}
\subcaption{$(x_1x_2)(x_3x_4)$}
\label{fig:tree2}
\end{subfigure}
\caption{Correspondence between bracketing and rooted binary tree}
\end{figure}
\noindent Thus $K_n$ can also be thought of as a polytope of dimension $n-2$ whose vertices are in bijective correspondence with planar rooted binary trees with $n$ leaves and $1$ root.
Let $Y_{n}$ be the set of such trees. The trees are depicted below for $2\leq n\leq 4$.\\
$$
Y_{2}=
\left\{
\begin{matrix}
\scalebox{0.3}{\tree{[[[][]]]}}
\end{matrix}
\right\},\
Y_{3}=
\left\{
\begin{matrix}
\scalebox{0.2}{\tree{[[[[][,tier=L]][,tier=L]]]}}, \scalebox{0.2}{\tree{[[[,tier=L][[,tier=L][]]]]}}
\end{matrix}
\right\},\
Y_{4}=
\left\{
\begin{matrix}
\scalebox{0.15}{\tree{[[[,tier=L][[,tier=L][[,tier=L][]]]]]}}, \scalebox{0.20}{\tree{[[[[][]][[][]]]]}}, \scalebox{0.15}{\tree{[[[,tier=L][[[][,tier=L]][,tier=L]]]]}}, \scalebox{0.15}{\tree{[[[[,tier=L][[,tier=L][,tier=L]]][,tier=L]]]}}, \scalebox{0.15}{\tree{[[[[[][,tier=L]][,tier=L]][,tier=L]]]}}
\end{matrix}
\right\}
$$
\vspace{2pt}
Any $t \in Y_{n}$ is defined to have \textit{degree} $n$. We label the leaves (not the root) of $t$ from left to right by $0, 1, \cdots, n-1$. Then we label the internal vertices by $1,2, \cdots, n-1$.
The $i$th internal vertex is the one which falls in between the leaves $i-1$ and $i$. We denote by $a_{i}$, respectively $b_{i}$, the number of leaves on the left side, respectively right side, of the $i$th vertex.
The product $a_{i} b_{i}$ is called the \textit{weight} of the $i$th internal vertex. To each tree $t\in Y_{n}$, we associate the point $M(t) \in \mathbb{R}^{n-1}$, whose $i$th coordinate is the weight of the $i$th vertex:
$$
M(t)=\left(a_{1} b_{1}, \cdots, a_{i} b_{i}, \cdots, a_{n-1} b_{n-1}\right) \in \mathbb{R}^{n-1}
$$
\noindent For instance,\vspace*{0.2cm}
$$\begin{aligned}
&M\left(
\begin{matrix}
\scalebox{0.2}{\tree{[[[][]]]}}
\end{matrix}
\right)=(1),\
M\left(
\begin{matrix}
\scalebox{0.18}{\tree{[[[[][,tier=L]][,tier=L]]]}}
\end{matrix}
\right)=(2,1),\
M\left(
\begin{matrix}
\scalebox{0.18}{\tree{[[[,tier=L][[,tier=L][]]]]}}
\end{matrix}
\right)=(1,2),\\[5pt]
&M\left(
\begin{matrix}
\scalebox{0.13}{\tree{[[[,tier=L][[,tier=L][[,tier=L][]]]]]}}
\end{matrix}
\right)=(1,2,3),\
M\left(
\begin{matrix}
\scalebox{0.18}{\tree{[[[[][]][[][]]]]}}
\end{matrix}
\right)=(1,4,1)
\end{aligned}$$
\noindent Observe that the weight of a vertex depends only on the sub-tree that it determines. Using these integral coordinates, Loday \cite{Loday1} gave a convex realization of $K_{n+1}$ in $\mathbb{R}^n$.
\begin{lemma}{\cite[Lemma 2.5]{Loday1}}
For any tree $t \in Y_{n+1}$ the coordinates of the point $M(t)=\left(x_{1}, \cdots, x_{n}\right) \in \mathbb{R}^{n}$ satisfy the relation $$\sum_{k=1}^{n} x_{k}=\textstyle{\frac{1}{2} n(n+1)} .$$
Thus, it follows that
$$M(t) \in H_n=\left\{(x_1,...,x_n)\in \mathbb{R}^n :x_1+x_2+...+x_n= \textstyle{\frac{n(n+1)}{2}}\right\}.$$
\end{lemma}
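Both the lemma and the sample coordinates above can be verified by a short script. The following is a minimal sketch (the helpers \texttt{trees} and \texttt{loday} are our own), using the convention that the $i$th internal vertex lies between the leaves $i-1$ and $i$:

```python
def trees(n):
    """Planar rooted binary trees with n leaves (the set Y_n), as nested pairs."""
    if n == 1:
        yield 'leaf'
        return
    for k in range(1, n):
        for left in trees(k):
            for right in trees(n - k):
                yield (left, right)

def loday(t):
    """Loday's coordinates M(t): the i-th entry is the weight a_i * b_i of
    the i-th internal vertex, the one lying between leaves i-1 and i."""
    coords = {}
    def walk(t, offset):
        # offset = number of leaves strictly to the left of this subtree
        if t == 'leaf':
            return 1
        a = walk(t[0], offset)        # a_i: number of leaves of the left subtree
        b = walk(t[1], offset + a)    # b_i: number of leaves of the right subtree
        coords[offset + a] = a * b    # vertex between leaves offset+a-1 and offset+a
        return a + b
    walk(t, 0)
    return tuple(coords[i] for i in sorted(coords))

# ((x1 x2)(x3 x4)) has coordinates (1, 4, 1), and every M(t) for t in Y_{n+1}
# lies on the hyperplane x_1 + ... + x_n = n(n+1)/2
assert loday((('leaf', 'leaf'), ('leaf', 'leaf'))) == (1, 4, 1)
for n in range(1, 7):
    for t in trees(n + 1):
        assert sum(loday(t)) == n * (n + 1) // 2
```

Enumerating $Y_{n+1}$ in this way also recovers the Catalan count of the vertices of $K_{n+1}$.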
\begin{thm}{\cite[Theorem 1.1]{Loday1}}
The convex hull of the points $M(t) \in \mathbb{R}^{n}$, for $t\in Y_{n+1}$, is a realization of the Stasheff polytope $K_{n+1}$ of dimension $n-1$.
\end{thm}
\noindent For example, the polytope $K_5$ lies in the hyperplane $H_4$ in $\mathbb{R}^4$. Under an isometric transformation of $H_4$ to $\mathbb{R}^3$ (viewed as the hyperplane $x_4=0$), the embedded picture of $K_5$ is shown in figure \ref{fig:embeddedK5}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{Loday-K_5.pdf}
\caption{Loday's embedded $K_5$ in $\mathbb{R}^3$}
\label{fig:embeddedK5}
\end{figure}
Now starting with $K_2$ as a point, Loday \cite[\S 2.4]{Loday2} gave a different inductive construction of the polytopes $K_{n+1}$. The steps are as follows:
\begin{enumerate}
\item Start with the associahedron $K_n$, which is a ball and whose boundary is a cellular sphere. The cells of the boundary are of the form $K_p\times_r K_q$ where $p+q=n+1$ and $r=1,2,...,p$.
\item Enlarge each cell $K_p\times_r K_q$ into a cell of dimension $n-2$ by replacing it by $K_{p+1}\times_r K_q$. We denote the total enlarged complex by $\widehat{K}_n$.
\item Take the cone over the above enlargement and declare that to be $K_{n+1}$, i.e. $K_{n+1}:= C(\widehat{K}_n)$.
\end{enumerate}
The following examples in low dimensions illustrate how this process works.
\begin{itemize}
\item[(i)] To construct $K_3$ from $K_2$, form the enlarged complex $\widehat{K}_2$, which is a point (as $K_2$ has no boundary). Then $K_3$ is the cone over the point $\widehat{K}_2$, i.e., an interval.\\
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Loday1.pdf}
\caption{$K_3$ from $K_2$}
\end{figure}
\vspace*{-0.2cm}\item[(ii)] To construct $K_4$ from $K_3$, note that $K_3$ has two boundary points, namely $K_2\times_1 K_2$ and $K_2\times_2 K_2$. Thus $\widehat{K}_3$ consists of the original $K_3$ together with $K_3\times_1 K_2$ and $K_3\times_2 K_2$, which looks like an angular `C' shape. Finally $K_4$ is the cone over $\widehat{K}_3$, resulting in a filled pentagon. \\
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{Loday2.pdf}
\caption{$K_4$ from $K_3$}
\end{figure}
\end{itemize}
\subsection{Collapsed Multiplihedra}\label{CollapsedMulti}
Suppose $(X,\{m_i\}),(Y,\{m_i^\prime\})$ are two $A_\infty$ spaces and $f:X\to Y$ is a weak homomorphism, i.e., there is a homotopy between the maps $f\comp m_2$ and $m_2^\prime\comp (f\times f)$. Such maps are called $H$-maps. In general, there is a notion of $A_n$ maps in Stasheff \cite[II, Def. 4.1]{HAH}, which satisfy $f\comp m_i=m_i^\prime\comp (1\times f^i)$ for $i\leq n$.
Thus we have a map $f_2:\mathcal{J}(2)\times X^2\to Y$, where $\mathcal{J}(2)$ is an interval.
To match things up, rewrite $f$ as $f_1:\mathcal{J}(1) \times X\to Y$, where $\mathcal{J}(1)$ is a single point.
Now using $m_2, m_2^\prime, f$, there are six different ways (cf. figure \ref{fig:JPaintedTrees}) to define a map from $X^3$ to $Y$, namely $f\comp (m_2\comp (m_2\times 1)),$ $f\comp (m_2\comp (1\times m_2)),$ $m_2^\prime\comp (f\times m_2),$ $m_2^\prime\comp (m_2\times f),$ $m_2^\prime\comp (1\times m_2^\prime)\comp(f\times f\times f),$ $m_2^\prime\comp ( m_2^\prime \times 1)\comp(f\times f\times f)$.
Using the weak homomorphism of $f$ and weak associativity in $X,Y$ (due to the existence of $m_3$, $m_3^\prime$), one realizes that there are two different homotopies between any two of the six maps. If those two homotopies are homotopic themselves, then we have a map $f_3:\mathcal{J}(3)\times X^3\to Y$, where $\mathcal{J}(3)$ is a filled hexagon.
If we continue this process, we will get a map $f_n:\mathcal{J}(n)\times X^n\to Y$ for each $n\geq 1$. These complexes $\mathcal{J}(n)$ are called multiplihedra. In figure \ref{fig:CollapsedJ_4} below, the blue edges collapse to points so that the rectangular faces degenerate to brown edges and the pentagonal face degenerates to a single point, giving rise to Loday's realization of $K_5$. There is a different degeneration from $\mathcal{J}(n)$ to $K_{n+1}$, as shown in \cite[\S 5]{SaneblidzeUmble}; figure \ref{fig:CollapsedJ_4-1} exhibits this for $\mathcal{J}(4)$.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.5 \linewidth]{Embedded-J_4.pdf}
\caption{Embedded $\mathcal{J}(4)$ in $\mathbb{R}^3$}
\label{fig:EmbJ(4)}
\end{subfigure}
\hspace{0.05cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.52 \linewidth]{J_4_Degenerate_to_K5.pdf}
\caption{Blue faces collapsed to get $K_5$}
\label{fig:CollapsedJ_4}
\end{subfigure}
\hspace{0.05cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.52 \linewidth]{J_4_Degenerate_to_K5-1.pdf}
\caption{Another degeneration}
\label{fig:CollapsedJ_4-1}
\end{subfigure}
\caption{$\mathcal{J}(4)$ and its degeneration to $K_5$}
\end{figure}
Multiplihedra first appeared in the work of Stasheff \cite{Stas}. However, in 1986, Norio Iwase and Mamoru Mimura \cite[Section 2]{IwaseMimura} gave the first detailed construction of $\mathcal{J}(n)$ with face operators, and described their combinatorial properties. It was also shown that $\mathcal{J}(n)$ is homeomorphic to the unit cube of dimension $n-1$. Using this description of $\mathcal{J}(n)$, they defined $A_n$ maps. But even before them, Boardman and Vogt \cite{Boardman} (around 1973) had developed several homotopy equivalent versions of a space of \textit{painted binary trees} with interior edges of length in $[0,1]$ to define maps between $A_\infty$ spaces which preserve the multiplicative structure up to homotopy. In 2008, Forcey \cite[Theorem 4.1]{Forcey1} proved that the spaces of painted trees with $n$ leaves, as convex polytopes, are combinatorially equivalent to the CW-complexes described by Iwase and Mimura. Indeed, Forcey associated coordinates to each painted binary tree, generalizing Loday's integer coordinates for the binary trees corresponding to the vertices of associahedra. Figure \ref{fig:EmbJ(4)} of $\mathcal{J}(4)$ is drawn with such coordinates for the vertices. We shall use the definition of $\mathcal{J}(n)$ in terms of painted trees, as given in \cite{Forcey1}.
\begin{defi}\label{def:PaintedTree}
A \textit{painted tree} is painted beginning at the root edge (the leaf edges are unpainted), in such a way that there are only the following three types of nodes:
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.45\linewidth]{Painted_tree_pic/Painted_tree2-i.pdf}
\subcaption{} \label{PT2_i}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering \includegraphics[width=0.45\linewidth]{Painted_tree_pic/Painted_tree2-ii.pdf}
\subcaption{}
\label{PT2_ii}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering \includegraphics[width=0.45\linewidth]{Painted_tree_pic/Painted_tree2-iii.pdf}
\subcaption{} \label{PT2_iii}
\end{subfigure}
\caption{Admissible nodes}
\end{figure}
\noindent This limitation on nodes implies that painted regions must be connected, that painting must never end precisely at a node of valency three or more, and that painting must proceed up every branch of such nodes.\\
\end{defi}
Let $J(n)$ consist of all painted trees with $n$ leaves. There is a refinement ordering defined as follows.
\begin{defi}{\cite[Definition 1]{Forcey1}}
For $t,t^\prime\in J(n)$, we say $t$ \textit{refines} $t'$, denoted $t\preccurlyeq t^\prime$, if $t^\prime$ is obtained from $t$ by collapsing some of its internal edges.\\
We say $t$ \textit{minimally refines} $t'$ if $t$ refines $t'$ and there is no $s\in J(n)$ such that both $t$ refines $s$ and $s$ refines $t'$.
\end{defi}
\noindent Now $(J(n),\preccurlyeq)$ is a poset with the painted binary trees as its smallest elements (in the sense that nothing refines them) and the painted corolla as its greatest element (in the sense that everything refines it). The $n$th multiplihedron is defined as follows.
\begin{defi}
The $n$th multiplihedron $\mathcal{J}(n)$ is a convex polytope whose face poset is isomorphic to the poset $(J(n),\preccurlyeq)$ of painted trees with $n$ leaves.
\end{defi}
The explicit inductive construction of these polytopes and the correspondence between the facets of $\mathcal{J}(n)$ and the painted trees follow from \cite[Definition 4]{Forcey1}. For instance, the vertices of $\mathcal{J}(n)$ are in bijection with the painted binary trees with $n$ leaves; the edges are in bijection with those painted trees with $n$ leaves which are minimally refined by painted binary trees, and they are glued together along the end points matching the associated painted binary trees. In this way, the $(n-2)$-dimensional cells of $\mathcal{J}(n)$ are in bijection with those painted trees which minimally refine the painted corolla with $n$ leaves. They are glued together along $(n-3)$-dimensional cells, matching the associated painted trees, to form the complex $\partial \mathcal{J}(n)$. Finally, the $(n-1)$-dimensional complex $\mathcal{J}(n)$ is defined as the cone over $\partial \mathcal{J}(n)$, and it corresponds to the painted corolla with $n$ leaves in the poset $J(n)$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\linewidth]{J_3.pdf}
\caption{$\mathcal{J}(3)$ labelled by painted trees}
\label{fig:JPaintedTrees}
\end{figure}
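The vertex count implicit in this description can be computed directly: a painted binary tree with $n$ leaves consists of a painted binary tree with $k$ leaves near the root, with unpainted binary trees grafted onto its $k$ painted leaf edges. The following is a minimal sketch of this count (the helper names are our own; the values for $n=2,3$ agree with the interval $\mathcal{J}(2)$ and the filled hexagon $\mathcal{J}(3)$ noted earlier):

```python
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

def compositions(n, k):
    """Ordered k-tuples of positive integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def vertices_of_multiplihedron(n):
    """Count painted binary trees with n leaves: a painted binary tree with
    k leaves, carrying unpainted binary trees of sizes n_1, ..., n_k."""
    total = 0
    for k in range(1, n + 1):
        for parts in compositions(n, k):
            term = catalan(k - 1)          # shapes of the painted part
            for ni in parts:
                term *= catalan(ni - 1)    # shapes of the unpainted parts
            total += term
    return total

assert vertices_of_multiplihedron(2) == 2   # J(2) is an interval
assert vertices_of_multiplihedron(3) == 6   # J(3) is a filled hexagon
assert vertices_of_multiplihedron(4) == 21  # vertices of J(4)
```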
\indent We shall now give an equivalent description of $\mathcal{J}(n)$, reflecting the representation promised at the beginning of this subsection. It is given as follows.
Let $f:A\to B$ be a weak homomorphism (i.e., respects the multiplication in $A$ and $B$ up to homotopy) from an $A_\infty$ space to another $A_\infty$ space. For a given ordered collection $a_1,a_2,...,a_n\in A$, there are three types of elements.
\begin{enumerate}
\item[I.] The $f$-image of the elements, obtained using different association of the elements $a_1,a_2,...,a_n$ in $A$. For example, $f(X),$ where $X$ is some rule of association of the elements $a_1,a_2,...,a_n$.
\item[II.] The elements obtained using the fact that $f$ is a homomorphism up to homotopy on the elements of type I, following the same association rule in $B$. For example, if $X=(X_1)((X_2)(X_3))$ is some rule of association of $a_1,a_2,...,a_n$, then an element of the form $f((X_1)\cdot((X_2)(X_3)))$ is of this type. Here $f((X_1)\cdot((X_2)(X_3)))$ denotes the homotopy equivalence between $f((X_1)((X_2)(X_3)))$ and $f(X_1)f((X_2)(X_3))$. Similarly, $f((X_1)\cdot ((X_2)\cdot (X_3)))$, representing the homotopy equivalence between $f((X_1)((X_2)\cdot (X_3)))$ and $f(X_1)f((X_2)\cdot (X_3))$, is also of this type.
\item[III.] The elements obtained using different association of the elements of type II in $B$. For example, if $X=(X_1)((X_2)((X_3)(X_4)))$ is some rule of association of $a_1,a_2,...,a_n$, then the elements obtained using the different association of $f(X_1), f(X_2), f(X_3), f(X_4)$ in $B$, namely \vspace*{-0.2cm}
\begin{align*}
&(f(X_1)f(X_2))(f(X_3)f(X_4)),\quad f(X_1)f(X_2)(f(X_3)f(X_4)),\quad f(X_1)(f(X_2)f(X_3))f(X_4),\\
&(f(X_1)(f(X_2)f(X_3)))f(X_4),\quad f(X_1)f(X_2)f(X_3)f(X_4)
\end{align*} are of this type. \vspace*{-0.2cm}
\end{enumerate}
\begin{defi}
Let $\mathfrak{J}_n$ be the poset of all of the above three types of elements in $B$, ordered such that $P\prec P'$ if $P$ is obtained from $P'$ by at least one of the following operations:
\begin{enumerate}
\item \label{it:op1} adding brackets in domain or co-domain elements.
\item \label{it:op2} replacing $\cdot$ by $)f($ without changing the association rule in $P'$.
\item \label{it:op3} removing one or more consecutive $\cdot$ and adding a pair of brackets enclosing all the elements adjacent to the removed $\cdot$. In this process, redundant bracketing (if obtained) is ignored. The requirement that the removed $\cdot$ be consecutive ensures an allowable bracketing.
\end{enumerate}
\end{defi}
The above operations are to be understood in the following ways:
\begin{itemize}
\item For two type I (or III) elements $P,P'$, we say $P\prec P'$ if $P$ is obtained from $P'$ by operation (\ref{it:op1}) in the domain (respectively co-domain).
For example, $f(a_1(a_2(a_3a_4)))\prec f(a_1(a_2a_3a_4)),$ $f(a_1)(f(a_2)f(a_3a_4))\prec f(a_1)f(a_2)f(a_3a_4)$.
\item For two type II elements $Q,Q'$, we say $Q\prec Q'$ if $Q$ is obtained from $Q'$ by operation (\ref{it:op2}) or (\ref{it:op3}).
For example, $f(a_1)f(a_2\cdot (a_3a_4))\prec f(a_1\cdot a_2\cdot (a_3a_4)),$ $f(a_1\cdot (a_2(a_3a_4)))\prec f(a_1\cdot (a_2\cdot (a_3a_4))).$
\item For a type I element $P$ and a type II element $Q$, we say $P\prec Q$ if $P$ is obtained from $Q$ by operation (\ref{it:op3}).
For example, $f((a_1a_2)(a_3a_4)) \prec f((a_1a_2) \cdot (a_3a_4)),$ $f(a_1a_2a_3a_4) \prec f(a_1\cdot a_2\cdot a_3\cdot a_4)$.
\item For a type II element $Q$ and a type III element $P$, we say $P\prec Q$ if $P$ is obtained from $Q$ by operation (\ref{it:op2}) or (\ref{it:op3}).
For example, $(f(a_1)f(a_2a_3))f(a_4)\prec f(a_1\cdot (a_2a_3))f(a_4),$ $f(a_1)(f(a_2a_3)f(a_4))\prec f(a_1)(f(a_2\cdot a_3)f(a_4))$.
\end{itemize}
Now, using the poset $(\mathfrak{J}_n,\prec)$, we define another family of complexes $J_n$ for $n\geq 1$.
\begin{defi}\label{def:Multiplihedra}
Define $J_n$ to be the convex polytope of dimension $n-1$, whose face poset is isomorphic to $(\mathfrak{J}_n,\prec)$ for $n\geq 1$.
\end{defi}
The existence of these complexes and their equivalence with the multiplihedra follow from the following lemma.
\begin{lemma}\label{lem:ConRealMulti}
$J_n$ is isomorphic to the multiplihedron $\mathcal{J}(n)$ for any $n\geq 1$.
\end{lemma}
\begin{proof}
It follows from the definitions of $\mathcal{J}(n)$ and $J_n$ that to exhibit an isomorphism between the mentioned complexes, it is enough to provide an isomorphism at the poset level. Define a map $\Phi:J(n)\to \mathfrak{J}_n$ as follows.
\begin{itemize}
\item[i)] Put $a_1$ through $a_n$ from left to right above the leaves of a painted tree.
\item[ii)] If the leaves corresponding to $a_k$ through $a_l$ for $1\leq k< l\leq n$ are joined to a node of type \ref{PT2_i} or of type \ref{PT2_iii}, then associate $(a_ka_{k+1}\ldots a_l)$ (cf. figure \ref{PT3_i}) or $f(a_k\cdot a_{k+1}\cdot \ldots \cdot a_l)$ (cf. figure \ref{PT3_iii}) respectively to that node. In case $1\leq k=l\leq n$, then associate $f(a_k)$ to the corresponding node.
\item[iii)] Then proceed to the nodes just below the above ones. If a node is of type \ref{PT2_i} or \ref{PT2_iii} joining $X_1$ through $X_m$ as associated nodes just above, then associate $(X_1X_2\ldots X_m)$ or $f(X_1\cdot X_2\cdot \ldots\cdot X_m)$ respectively to that node. If a node is of type \ref{PT2_ii} joining $f(Y_1)$ through $f(Y_m)$ as associated nodes just above, then associate $(f(Y_1)f(Y_2)\ldots f(Y_m))$ to that node (cf. figure \ref{PT3_ii}).
\item[iv)] Continue the above step iii) till the root node of a painted tree.
\end{itemize}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.57\linewidth]{Painted_tree_pic/Painted_tree3-i.pdf}
\caption{} \label{PT3_i}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{Painted_tree_pic/Painted_tree3-ii.pdf}
\caption{} \label{PT3_ii}
\end{subfigure}
\hspace{0.1cm}
\begin{subfigure}[b]{0.3\linewidth}
\centering
\includegraphics[width=0.64\linewidth]{Painted_tree_pic/Painted_tree3-iii.pdf}
\caption{} \label{PT3_iii}
\end{subfigure}
\caption{Bijection between the nodes of painted tree and the elements of defined poset}
\end{figure}
The element (ignoring redundant brackets, if any) associated to the root node of a painted tree $t\in J(n)$ is defined to be $\Phi(t)\in \mathfrak{J}_n$. For example, the $\Phi$-image of the painted tree $t\in J(5)$ in figure \ref{fig:Bijection_eg} is $f(a_1a_2)(f(a_3)f(a_4\cdot a_5))\in \mathfrak{J}_5$.\\
Note that each painted tree is uniquely determined by its nodes, and each node, by its position, determines a unique associated element. Also, the image of $t\in J(n)$ under $\Phi$ is determined by the elements associated to the nodes of $t$. Thus $\Phi$ maps each element of $J(n)$ to a unique element of $\mathfrak{J}_n$, and hence $\Phi$ is a bijection.
\begin{figure}[H]
\centering
\includegraphics[width=0.35\linewidth]{Painted_tree_pic/Bijection_eg.pdf}
\caption{Elements associated to the nodes}
\label{fig:Bijection_eg}
\end{figure}
It remains to check that $\Phi$ preserves the partial order. By the definition of $\preccurlyeq$, it is enough to show that $\Phi(t)\prec \Phi(t')$ when $t\preccurlyeq t'$ minimally. If $t\preccurlyeq t'$ minimally, then $t'$ is obtained from $t$ by collapsing an unpainted internal edge, a painted internal edge, or a bunch of painted edges. Note that collapsing an unpainted internal edge results in either removal of brackets in the domain (operation (\ref{it:op1}) in $\mathfrak{J}_n$) or addition of one or more $\cdot$ by removing brackets (operation (\ref{it:op3}) in $\mathfrak{J}_n$). Collapsing a painted internal edge results in removal of brackets in the co-domain (operation (\ref{it:op1}) in $\mathfrak{J}_n$), while collapsing a bunch of painted edges results in replacing $)f($ by $\cdot$ (operation (\ref{it:op2}) in $\mathfrak{J}_n$). In all cases $\Phi(t)\prec \Phi(t')$, completing the proof.
\end{proof}
\noindent Using this lemma, we consider $J_n$ (Definition \ref{def:Multiplihedra}) as the $n$th multiplihedron. The polytopes $J_1,$ $J_2,$ $J_3$ are depicted in figure \ref{fig:multiplihedra}, with the faces labelled by elements of $\mathfrak{J}_1$, $\mathfrak{J}_2$, $\mathfrak{J}_3$ respectively.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.08\linewidth}
\centering
\includegraphics[width=1.25cm]{multiplihedra_1.pdf}
\caption{$J_1$}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.13\linewidth}
\includegraphics[width=\linewidth]{multiplihedra_2.pdf}
\caption{$J_2$}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.5\linewidth}
\includegraphics[width=\linewidth]{multiplihedra_3.pdf}
\caption{$J_3$}
\end{subfigure}
\caption{Multiplihedra}
\label{fig:multiplihedra}
\end{figure}
Now suppose $B$ is an associative space. Due to the associativity in $B$, there will be only one element of type III (as defined before) for each association rule of $a_1,a_2,...,a_n.$ For example, if $X=((X_1X_2)(X_3X_4))$ is some association rule of $a_1,a_2,...,a_n$, then there is only one element $f(X_1)f(X_2)f(X_3)f(X_4)$ in $B$, using the fact that $f$ is a homomorphism up to homotopy. We will call these degenerate type III elements.
\begin{defi}\label{def:CollMulti}
Let $\mathfrak{J}'_n$ be the poset of all type I, type II, and degenerate type III elements in $B$ with the ordering induced from $(\mathfrak{J}_n,\prec)$.
We define the \textit{collapsed multiplihedron} $J'_n$ to be the convex polytope of dimension $n-1$, whose face poset is isomorphic to $\mathfrak{J}'_n.$
\end{defi}
\noindent As the posets $\mathfrak{J}'_n$ are obtained by degeneracy of certain elements in $\mathfrak{J}_n$, the polytopes $J'_n$ are obtained by collapsing certain faces of $J_n$. Thus the existence of the polytopes $J'_n$ is guaranteed by the existence of the multiplihedra $J_n$. We will use this definition to show that $J'_n$ is combinatorially isomorphic to the associahedron $K_{n+1}$ in \S \ref{StasMulti}.
\subsection{Graph Cubeahedra and Design Tubings}\label{Cubeahedra}
Devadoss \cite{Devadoss1} gave an alternate definition of $K_{n}$ with respect to tubings on a path graph.
\begin{defi}[Tube]
Let $\Gamma$ be a graph. A \textit{tube} is a proper nonempty set of nodes of $\Gamma$ whose induced graph is a proper, connected subgraph of $\Gamma$.
\end{defi}
There are three ways that two tubes $t_1$ and $t_2$ may interact on the graph.
\begin{itemize}
\item $t_1$ and $t_2$ are \textbf{nested} if $t_1\subset t_2$ or $t_2\subset t_1$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{Tubing_images/nested_all.pdf}
\caption{Nested tubes}
\end{figure}
\item $t_1$ and $t_2$ \textbf{intersect} if $t_1\cap t_2\neq \emptyset$ and $t_1\nsubseteq t_2$ and $t_2\nsubseteq t_1$.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.375\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{Tubing_images/intersect_1.pdf}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=0.6\linewidth]{Tubing_images/intersect_2.pdf}
\end{subfigure}
\caption{Intersection of tubes}
\end{figure}
\item $t_1$ and $t_2$ are \textbf{adjacent} if $t_1\cap t_2= \emptyset$ and $t_1\cup t_2$ is a tube.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.325\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{Tubing_images/adjacent_1.pdf}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=0.8\linewidth]{Tubing_images/adjacent_2.pdf}
\end{subfigure}
\caption{Adjacent tubes}
\end{figure}
\end{itemize}
Two tubes are $\textbf{compatible}$ if they are neither adjacent nor intersecting, i.e., $t_1$ and $t_2$ are compatible if they are nested, or $t_1\cap t_2= \emptyset$ and $t_1\cup t_2$ is not a tube.
\begin{defi}
A \textit{tubing} $T$ of $\Gamma$ is a set of tubes of $\Gamma$ such that every pair of tubes in $T$ is compatible. A \textit{$k$-tubing} is a tubing with $k$ tubes.
\end{defi}
\noindent A few examples of tubings are given below.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.29\linewidth}
\centering
\includegraphics[width=\linewidth]{Tubing_images/tubing_1.pdf}
\caption*{2-tubing}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.29\linewidth}
\includegraphics[width=\linewidth]{Tubing_images/tubing_2.pdf}
\caption*{3-tubing}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.29\linewidth}
\includegraphics[width=\linewidth]{Tubing_images/tubing_3.pdf}
\caption*{4-tubing}
\end{subfigure}
\caption{Tubings}
\end{figure}
If we think of the $n-1$ nodes of a path graph $\Gamma$ as dividers between the $n$ letters of a word and each tube as a pair of parentheses enclosing some of the letters, then the compatibility condition on the tubes corresponds to the permissible bracketings of the word. Now using the combinatorial description (cf. Definition \ref{def:Associahedra1}) of $K_n$, one has the following result.
\begin{lemma}{\cite[Lemma 2.3]{Devadoss2}}
\,Let $\Gamma$ be a path graph with $n-1$ nodes. The face poset of $K_n$ is isomorphic to the poset of all valid tubings of $\Gamma$, ordered such that tubings $T\prec T'$ if $T$ is obtained from $T'$ by adding tubes.
\end{lemma}
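The lemma can be tested by enumeration: on a path, two tubes are compatible precisely when they are nested or separated by at least one node. The following sketch (the helper names are our own) counts the maximal tubings and compares them with the Catalan numbers, which count the vertices of $K_{m+1}$:

```python
from itertools import combinations
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

def tubes(m):
    """Tubes of a path with m nodes: proper nonempty intervals (i, j)."""
    return [(i, j) for i in range(m) for j in range(i, m)
            if not (i == 0 and j == m - 1)]

def compatible(t, u):
    (a, b), (c, d) = t, u
    nested = (a <= c and d <= b) or (c <= a and b <= d)
    far_apart = b + 1 < c or d + 1 < a   # disjoint and non-adjacent
    return nested or far_apart

def count_vertices_K(m):
    """Maximal tubings of the path with m nodes have m-1 tubes; they
    correspond to the vertices of the associahedron K_{m+1}."""
    return sum(1 for T in combinations(tubes(m), m - 1)
               if all(compatible(t, u) for t, u in combinations(T, 2)))

for m in range(2, 6):
    assert count_vertices_K(m) == catalan(m)
```

For $m=3$ this recovers the five vertices of the pentagon $K_4$.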
\noindent On a graph, Devadoss \cite{Devadoss1} defines another set of tubes called design tubes.
\begin{defi}[Design Tube]
Let $G$ be a connected graph.
A \textit{round tube} is a set of nodes of $G$ whose induced graph is a connected (and not necessarily proper) subgraph of $G$.
A \textit{square tube} is a single node of $G$. Round tubes and square tubes together are called \textit{design tubes} of $G$.
\end{defi}
\noindent Two design tubes are compatible if
\begin{enumerate}
\item when they are both round, they are not adjacent and do not intersect;
\item otherwise, they are not nested.
\end{enumerate}
\begin{defi}[Design Tubing]
A \textit{design tubing} $U$ of $G$ is a collection of design tubes of $G$ such that every pair of tubes in $U$ is compatible.
\end{defi}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.29\linewidth}
\centering
\includegraphics[width=0.8\linewidth]{Tubing_images/dtubing_1.pdf}
\caption*{4-design tubing}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.29\linewidth}
\includegraphics[width=\linewidth]{Tubing_images/dtubing_2.pdf}
\caption*{5-design tubing}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.29\linewidth}
\includegraphics[width=\linewidth]{Tubing_images/dtubing_3.pdf}
\caption*{6-design tubing}
\end{subfigure}
\caption{Design tubings}
\end{figure}
\noindent
Note that, unlike ordinary tubes, round tubes do not have to be proper subgraphs of $G$. \\
\hspace*{0.5cm}Based on design tubings, Devadoss \cite{Devadoss1} constructed a set of polytopes called graph cubeahedra. For a graph $G$ with $n$ nodes, define $\boxdot_G$ to be the \textit{$n$-cube} in which each pair of opposite facets corresponds to a particular node of $G$. Specifically, one facet in the pair represents that node as a round tube and the other represents it as a square tube.
Each subset of nodes of $G$, chosen to be either round or square, corresponds to a unique face of $\boxdot_G$, defined by the intersection of the facets associated to those nodes. The empty set corresponds to the face which is the entire polytope $\boxdot_G$.
\begin{defi}[Graph Cubeahedron]
For a graph $G$, truncate faces of $\boxdot_G$ which correspond to round tubes in increasing order of dimension. The resulting polytope $\mathcal{C}G$ is the \textit{graph cubeahedron}.
\end{defi}
\noindent The graph cubeahedron $\mathcal{C}G$ can also be described as a convex polytope whose face poset is given by the design tubings.
\begin{thm}{\cite[Theorem 12]{Devadoss1}}\label{thm:OrderCubeahedra}
For a graph $G$ with $n$ nodes, the graph cubeahedron $\mathcal{C} G$ is a simple convex polytope of dimension $n$ whose face poset is isomorphic to the set of design tubings of $G$, ordered such that $U \prec U^{\prime}$ if $U$ is obtained from $U^{\prime}$ by adding tubes.
\end{thm}
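The face poset description of Theorem \ref{thm:OrderCubeahedra} can be explored by brute force for path graphs. In the following sketch (the encoding is our own: round tubes as intervals, possibly the whole path, and square tubes as single marked nodes), we count the $n$-element design tubings, which correspond to the vertices of $\mathcal{C}G$; the resulting Catalan counts are consistent with the isomorphisms proved in \S \ref{Main}:

```python
from itertools import combinations

def design_tubes(n):
    """Design tubes of a path with n nodes: round intervals (the whole
    path is allowed) and square single nodes."""
    rounds = [('round', i, j) for i in range(n) for j in range(i, n)]
    squares = [('square', i, i) for i in range(n)]
    return rounds + squares

def nested(t, u):
    _, a, b = t
    _, c, d = u
    return (a <= c and d <= b) or (c <= a and b <= d)

def compatible(t, u):
    if t[0] == 'round' and u[0] == 'round':
        _, a, b = t
        _, c, d = u
        # round tubes: compatible iff nested, or disjoint and non-adjacent
        return nested(t, u) or b + 1 < c or d + 1 < a
    # at least one square tube: compatible iff not nested
    return not nested(t, u)

def count_vertices_CG(n):
    """n-element design tubings = vertices of the graph cubeahedron."""
    return sum(1 for U in combinations(design_tubes(n), n)
               if all(compatible(t, u) for t, u in combinations(U, 2)))

assert count_vertices_CG(2) == 5    # a pentagon, as for K_4
assert count_vertices_CG(3) == 14   # matching the vertex count of K_5
```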
\noindent In this article, we are interested in the case when $G$ is a path graph. We will make use of the above theorem in \S \ref{MultiCubea} to show a combinatorial isomorphism between $\mathcal{C}G$, for $G$ a path graph with $n$ nodes, and the collapsed multiplihedron $J'_{n+1}$.
\section{Isomorphisms Between The Four Models}\label{Main}
We prove the main result of this paper in this section.
\begin{thm}\label{thm:Main}
The four models of associahedra, namely Stasheff polytopes, polytopes obtained by Loday's cone construction, collapsed multiplihedra, and graph cubeahedra for path graphs, are all combinatorially isomorphic.
\end{thm}
\begin{proof}
We prove the isomorphisms in the next three subsections. In \S \ref{LodayStasheff} we prove that the polytopes obtained via the cone construction of Loday are combinatorially isomorphic to the Stasheff polytopes (Theorem \ref{thm:StasheffLoday}). In \S \ref{StasMulti} we prove that the Stasheff polytopes and collapsed multiplihedra are isomorphic (Proposition \ref{thm:StasMulti}). Finally, in \S \ref{MultiCubea}, the isomorphism between the collapsed multiplihedra and graph cubeahedra is shown (Proposition \ref{thm:MultiCubea}). Combining all three, we have our required result.
\end{proof}
\subsection{Loday's construction vs Stasheff polytopes}\label{LodayStasheff}
By Stasheff's description, $K_{n+1}$ is the cone over its boundary elements $K_p\times_r K_q$ for $p+q=n+2$, $2\leq p\leq n$ and $r=1,2,\ldots, p.$ On the other hand, consider $C(\widehat{K}_n)$, where $\widehat{K}_n$ consists of the initial $K_n$ together with $K_{p+1}\times_r K_q$ such that $p+q=n+1$, $2\leq p\leq n-1$ and $r=1,2,\ldots, p$. This enlargement $\widehat{K}_n$ can be described in terms of bracketing as follows.
\begin{itemize}
\item $K_n$ corresponds to $0$-bracketing of the word $x_1x_2\ldots x_n$ i.e., the word itself or the trivial bracketing $(x_1x_2\ldots x_n)$. The immediate faces i.e., the boundary consists of $K_p\times_r K_q$ with $p+q=n+1$, $2\leq p\leq n-1$ and $r=1,2,\ldots,p$. Now $K_p\times_r K_q$ corresponds to the $1$-bracketing $x_1\ldots x_{r-1} (x_r\ldots x_{r+q-1})x_{r+q}\ldots x_n$.
\item The enlargement $\widehat{K}_n$ corresponds to adding a letter $x_{n+1}$ to the right of the bracketing corresponding to $K_n$. Then the bracketing $x_1\ldots x_{r-1} (x_r\ldots x_{r+q-1})x_{r+q}\ldots x_n$ extends to $x_1\ldots x_{r-1} (x_r\ldots x_{r+q-1})x_{r+q}\ldots x_nx_{n+1}$ for each $p,q,r$ such that $p+q=n+1$, $2\leq p\leq n-1$, and $r=1,2,\ldots, p$. Also the initial $K_n$, i.e., $(x_1x_2\ldots x_n)$, extends to $(x_1x_2\ldots x_n)x_{n+1}$, which corresponds to $K_2\times_1 K_n$ in $K_{n+1}$.
\item Finally one takes cone over the enlarged complex to obtain $K_{n+1}.$
\end{itemize}
From the above description, $\widehat{K}_n$ can be thought of as the union of $K_p\times_r K_q$ with $p+q=(n+1)+1$ for $2\leq p\leq n$ and $r=1,2,\ldots, p-1$. Thus $\widehat{K}_n$ is a part of the boundary of $K_{n+1}$ (following Stasheff's description).
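As a concrete illustration (a worked example of ours for $n=3$), the enlargement $\widehat{K}_3$ consists of three of the five edges of the pentagon $K_4$:

```latex
% Worked example for n = 3: bracketings of the word x_1 x_2 x_3 x_4.
\begin{align*}
K_3 \;\leadsto\; (x_1x_2x_3)x_4 &= K_2\times_1 K_3,\\
K_2\times_1 K_2 \;\leadsto\; (x_1x_2)x_3x_4 &= K_3\times_1 K_2,\\
K_2\times_2 K_2 \;\leadsto\; x_1(x_2x_3)x_4 &= K_3\times_2 K_2.
\end{align*}
```

The two remaining edges of $K_4$, namely $K_2\times_2 K_3 = x_1(x_2x_3x_4)$ and $K_3\times_3 K_2 = x_1x_2(x_3x_4)$, are exactly the faces of the form $K_p\times_p K_q$ supplied by the coning step.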
\begin{thm}\label{thm:StasheffLoday}
Stasheff polytopes are combinatorially isomorphic to Loday's cone construction of associahedra.
\end{thm}
To prove a combinatorial isomorphism between the two models, we must exhibit a bijective correspondence between the vertices, edges and faces of each codimension of the two models that respects adjacencies. But faces of codimension greater than $1$ are contained in faces of codimension $1$. Thus, if we have an appropriate adjacency-respecting bijection between the faces of codimension $1$ of the two models, then, both models being cones over combinatorially isomorphic codimension-$1$ boundaries, they are combinatorially isomorphic.
\begin{proof}[Proof of Theorem \ref{thm:StasheffLoday}]
It is enough to show that the boundary of $K_{n+1}$ in Loday's construction can be subdivided so as to match the boundary elements $K_p\times_r K_q$ of $K_{n+1}$ in the Stasheff model for $p+q=n+2$, $2\leq p\leq n$ and $r=1,\ldots, p$.
As observed in the initial discussion, the only missing boundary part of $K_{n+1}$ in Loday's construction is the union of $K_p\times_p K_q$ for $p+q=n+2$ with $2\leq p\leq n$. Note that all these missing faces are adjacent to a common vertex, which corresponds to the right-to-left $(n-1)$-bracketing $x_1(x_2(\ldots (x_{n-1}(x_n x_{n+1}))\ldots))$. As there are $\binom{n-1}{n-2}=n-1$ choices for removing $(n-2)$ brackets from an $(n-1)$-bracketing (which corresponds to the vertices of $K_{n+1}$), each vertex of $K_{n+1}$ is adjacent to exactly $n-1$ faces of codimension $1$ of $K_{n+1}$ (by the poset description of Stasheff's $K_{n+1}$). So the vertex corresponding to $x_1(x_2(\ldots (x_{n-1}(x_n x_{n+1}))\ldots))$ is not obtained in $\widehat{K}_n$. Now if we consider any other $(n-1)$-bracketing, then there can be at most $n-2$ parentheses after $x_{n+1}$. So, removing those parentheses along with some others, we can get a $1$-bracketing that does not enclose $x_{n+1}$, i.e., those vertices are adjacent to some $K_p\times_r K_q$ for $p+q=n+2$ and $r=1,2,\ldots,p-1$. Thus every vertex of $K_{n+1}$ except the one corresponding to $x_1(x_2(\ldots (x_{n-1}(x_n x_{n+1}))\ldots))$ is present in $\widehat{K}_n$. We identify this missing vertex with the coning vertex of Loday's construction.\\
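The above counting can be verified by direct enumeration for small $n$. The following sketch (an illustration of ours, not part of the original argument) represents each complete bracketing of $x_1\ldots x_{n+1}$ by the set of leaf intervals of a binary tree:

```python
def trees(lo, hi):
    """Yield complete bracketings of letters lo..hi; each is the frozenset
    of leaf intervals (i, j) of the internal nodes, i.e. the bracket pairs."""
    if lo == hi:
        yield frozenset()
        return
    for k in range(lo, hi):          # split as (lo..k)(k+1..hi)
        for left in trees(lo, k):
            for right in trees(k + 1, hi):
                yield left | right | {(lo, hi)}

n = 5                                # word x_1 ... x_{n+1}
vertices = list(trees(1, n + 1))     # vertices of K_{n+1}

# each vertex has n internal nodes; discarding the trivial outer bracket
# leaves n-1 bracket pairs, i.e. exactly n-1 adjacent codimension-1 faces
assert all(len(v) - 1 == n - 1 for v in vertices)

# the only vertex all of whose non-trivial brackets enclose x_{n+1}
# is the right comb x_1(x_2(...(x_n x_{n+1})...))
combs = [v for v in vertices
         if all(j == n + 1 for (i, j) in v if (i, j) != (1, n + 1))]
print(len(vertices), len(combs))     # Catalan number C_5 = 42, and 1
```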
\hspace*{0.5cm}We shall prove that the missing faces of $K_{n+1}$ in $C(\widehat{K}_n)$ can be realized as a cone over some portion of the boundary of $\widehat{K}_n$. Then we will divide the part $C(\partial \widehat{K}_n)$ accordingly to identify those with the missing faces. We will prove this together with final result by induction on the following statements:
\begin{itemize}
\item[I.] $Q_{n-3}:$ $K_p \times_r K_q = C\left((\widehat{K}_{p-1}\times_r K_q) \cup (K_p\times_r \widehat{K}_{q-1})\right) \text{ if } p+q=n+1 \text{ and }p,q\geq 3$.
\item[II.] $P_{n-2}:$ $K_{n}=C\left( \widehat{K}_{n-1}\right)$, $n \geq 3.$
\end{itemize}
Here the equalities in the statements represent a combinatorial isomorphism. Note that $Q_{n-3}$ is a collection of statements and the index $r$ is actually superfluous. We will use the convention that $\widehat{K}_1=\varnothing$, $C(\varnothing)=\{\ast\}$ and allow $p,q\geq 2$. Then $Q_{n-3}$ contains the statement for $K_{n-1}\times_r K_2$ as well as $K_2\times_r K_{n-1}$. Moreover, these are equivalent to the statement $P_{n-3}$ since $K_2$ is a point and $K_{n-1}\times K_2$ is $K_{n-1}$.\\
\hspace*{0.5cm}The steps of induction are as follows.\\
\noindent\textbf{Step 0:} \textit{Show that $P_1$ holds.}\\
Note that $K_2$ is the convex polytope that parametrizes the binary operation, i.e., it is a point. As a point has no boundary, $\widehat{K}_2$ is also a point and $C(\widehat{K}_2)$ is an interval. Now $K_3$ is the convex polytope that parametrizes the family of $3$-ary operations relating the two ways of forming a $3$-ary operation from a given binary operation; thus $K_3$ is also an interval. The boundary of $K_3$ consists of the two points $K_2\times_1 K_2$ and $K_2\times_2 K_2$. Let us map $K_2\times_1 K_2$ and $K_2\times_2 K_2$ to $\widehat{K}_2$ and the coning point in $C(\widehat{K}_2)$, respectively. Then we can map the other points of $K_3$ linearly to $C(\widehat{K}_2)$. Thus $K_3$ and $C(\widehat{K}_2)$ are combinatorially isomorphic, so $P_1$ is true.\\
\noindent \textbf{Step 1:} \textit{Assuming that $P_1$ through $P_{n-4}$ hold, show that $Q_{n-3}$ holds.}\\
To prove this we will use the following lemma, the proof of which is given at the end of this subsection.
\begin{lemma}\label{lemma:CrossCone}
There is a natural homeomorphism
\begin{displaymath}
C(X)\times C(Y)\equiv C\left((X\times C(Y)) \cup (C(X)\times Y)\right),
\end{displaymath}
where $x_0,y_0$ are cone points for $C(X),C(Y)$ respectively and $(x_0,y_0)$ is the cone point for $C(Z)$, where $Z=(C(X)\times Y)\cup (X\times C(Y))$.
\end{lemma}
Now assuming $P_1$ through $P_{n-4}$, we have $K_l=C(\widehat{K}_{l-1})$ for $l=3,4,\ldots,n-2$. Take any $p,q\geq 3$ with $p+q=n+1$, i.e., $p$ and $q$ both range from $3$ to $n-2$. So
\begin{align*}
K_p\times_r K_q = &\ C(\widehat{K}_{p-1})\times_r C(\widehat{K}_{q-1})\ \ (\text{by the assumption})\\
= &\ C\left((\widehat{K}_{p-1}\times_r C(\widehat{K}_{q-1}))\cup (C(\widehat{K}_{p-1})\times_r \widehat{K}_{q-1})\right)\ \ (\text{by the Lemma \ref{lemma:CrossCone}})\\
= &\ C((\widehat{K}_{p-1}\times_r K_q)\cup (K_p\times_r \widehat{K}_{q-1}))\ \ (\text{by the assumption})
\end{align*}
This shows that $Q_{n-3}$ is true.\\
\noindent \textbf{Step 2:} \textit{Assuming $P_1$ through $P_{n-3}$, show that $P_{n-2}$ holds.}\\
As discussed earlier, to prove that $P_{n-2}$ is true, it is enough to show that each $K_s\times_s K_t$ with $s+t=n+1$ and $s,t\geq 2$ can be obtained from $C(\widehat{K}_{n-1})$. Consider $s,t\geq 2$ with $s+t=n+1$. Then, using the conventions $\widehat{K}_1=\varnothing$ and $C(\varnothing)=\{*\}$, we can write
\begin{align*}
&\ K_s\times_s K_t\\
= &\ C(\widehat{K}_{s-1})\times_s C(\widehat{K}_{t-1})\ (\text{by $P_1$ through $P_{n-3}$})\\
= &\ C\left((\widehat{K}_{s-1}\times_s K_t)\cup (K_s\times_s \widehat{K}_{t-1})\right)\ (\text{by the Lemma \ref{lemma:CrossCone}})\\
= &\ C\left( \left\{\bigcup_{(p,q,r)\in V_s}\left((K_p\times_r K_q)\times_s K_t\right)\right\}\bigcup \left\{\bigcup_{(p,q,r)\in V_t}\left(K_s\times_s (K_p\times_r K_q)\right)\right\} \right)
(\text{by definition of } \widehat{K}_{i-1}),\\[4pt]
&\text{ where }V_i=\{(a,b,c)\in \mathbb{N}^3: 2\leq a\leq i-1,\ a+b=i+1,\ 1\leq c\leq a-1\},\ i=s,t.
\end{align*}
Now using equation (\ref{eq:2}) (in \S \ref{Stasheff}), we can write
$$(K_p\times_r K_q)\times_s K_t=(K_p\times_{s-q+1} K_t)\times_r K_q$$
(obtained by substituting $r=p,s=q,t=t,k=r,j=s-q+1$) for the terms in the first set of unions. Since $K_p\times_{s-q+1} K_t$ is a face of $K_{p+t-1}$, $(K_p\times_{s-q+1} K_t)\times_r K_q$ is a face of $K_{p+t-1} \times_r K_q$, which is again a face of $K_{n}$ because for $(p,q,r)\in V_s$, $$(p+t-1)+q=p+q+t-1=s+1+t-1=s+t=n+1.$$
Thus $(K_p\times_r K_q)\times_s K_t$ is a face of $K_{p+t-1} \times_r K_q$ of codimension $1$. Since $t\geq 2$ and $1\leq r\leq p-1$, we have $r<p+t-1$, which implies that the face $K_{p+t-1} \times_r K_q$ is already present in the enlargement $\widehat{K}_{n-1}$. Thus each term in the first set of unions is already present in $\widehat{K}_{n-1}$.\\
Similarly, using equation (\ref{eq:1}), we have the identification
$$K_s\times_s (K_p\times_r K_q) = (K_s\times_{s} K_p)\times_{s+r-1} K_q$$
(obtained by substituting $r=s,s=p,t=q,k=r,j=s$) for the terms in the second set of unions. Here $(K_s\times_{s} K_p)\times_{s+r-1} K_q$ is a face of $K_{s+p-1}\times_{s+r-1} K_q$, which is a face of $K_n$ because for $(p,q,r)\in V_t$,
$$(s+p-1)+q=s-1+(p+q)=s-1+t+1=s+t=n+1.$$
Thus $(K_s\times_s K_{p}) \times_{s+r-1} K_q$ is a face of $K_{s+p-1} \times_{s+r-1} K_q$ of codimension $1$. But $r\leq p-1<p$ implies $s+r-1<s+p-1$, which further implies that the face $K_{s+p-1} \times_{s+r-1} K_q$ is already present in the enlargement $\widehat{K}_{n-1}$. Thus each term in the second set of unions is also present in $\widehat{K}_{n-1}$.\\
It follows that all the parts in the unions are present as part of the boundary of $\widehat{K}_{n-1}$. Thus, taking the cone over that particular part of the boundary of $\widehat{K}_{n-1}$, we obtain $K_s\times_s K_t$ for all $s,t\geq 2$ (with $s+t=n+1$). Moreover, these are present as part of the boundary of $C(\widehat{K}_{n-1})$. Therefore we get a bijection between the faces (of codimension $1$) of $K_n$ and $C(\widehat{K}_{n-1})$. Consequently, they are combinatorially isomorphic, so $P_{n-2}$ is true. This completes the induction step as well as the proof of the theorem.
\end{proof}
\begin{remark}
In the above isomorphism, we mapped the starting $K_n$ to $K_2\times_1 K_n$ and the extension of the boundary element $K_p\times_r K_q$ to $K_{p+1}\times_r K_q$. Similarly we could map the starting $K_n$ to $K_2\times_2 K_n$ and the extension of the boundary $K_p\times_r K_q$ to $K_{p+1}\times_{r+1} K_q$. But if we want to map the starting $K_n$ to $K_n\times_r K_2$ ($r=1,2,...,n$), the corresponding extension of boundary $K_p\times_t K_q$ should map to
$$\begin{cases}
K_p\times_t K_{q+1} & \text{if }t\leq r\leq t+q-1\\
K_{p+1}\times_t K_q & \text{if }r>t+q-1\\
K_{p+1}\times_{t+1} K_q & \text{if }r<t.
\end{cases}$$
With a slight modification in the above proof, one can similarly prove that this produces an isomorphism. This, in turn, implies that the faces $K_n\times_r K_2$ or $K_2\times_r K_n$ of $K_{n+1}$ are all equivalent from the point of view of Loday's construction.
\end{remark}
We end this subsection with the proof of Lemma \ref{lemma:CrossCone}.
\begin{proof}[Proof of Lemma \ref{lemma:CrossCone}]
We will prove the equality by showing both inclusions. First suppose $(x,y)=t(x_0,y_0)+(1-t)(x_1,y_1)\in C(Z)$, where $t\in [0,1]$ and $(x_1,y_1)\in Z$.
Without loss of generality suppose $(x_1,y_1)\in C(X)\times Y$ i.e., $x_1=t'x_0+(1-t')x'_1$ for some $t'\in [0,1]$ and $x'_1\in X$.
So
\begin{align*}
(x,y)
=&\ (tx_0+(1-t)x_1,ty_0+(1-t)y_1)\\
=&\ (tx_0+(1-t)t'x_0+(1-t)(1-t')x'_1,ty_0+(1-t)y_1)\\
=&\ ((1-(1-t)(1-t'))x_0+(1-t)(1-t')x'_1,ty_0+(1-t)y_1)\\
=&\ (t_1x_0+(1-t_1)x'_1,ty_0+(1-t)y_1)
\in C(X)\times C(Y)
\end{align*}
and $t_1=1-(1-t)(1-t')$. This implies that $C(Z)\subseteq C(X)\times C(Y).$
\begin{figure}[H]
\centering
\includegraphics[width=0.55\linewidth]{product_cone_square.pdf}
\caption{Visual proof when $X=Y=$ point}
\end{figure}
Conversely, let $(x,y)=(t_1x_0+(1-t_1)x_1,t_2y_0+(1-t_2)y_1)\in C(X)\times C(Y)$ for some $x_1\in X$, $y_1\in Y$ and $t_1,t_2\in [0,1]$. Now consider the following cases.\\
\noindent
\textit{Case I}: $t_1=t_2=t$.
$$(x,y)=t(x_0,y_0)+(1-t)(x_1,y_1)\in C(Z).$$
\textit{Case II}: $t_1>t_2$.
\begin{align*}
(x,y)=&\ t_2(x_0,y_0)+(1-t_2)\left( \textstyle{\frac{t_1-t_2}{1-t_2}}x_0+\textstyle{\frac{1-t_1}{1-t_2}}x_1,y_1\right)\\
=&\ t_2(x_0,y_0) +(1-t_2) (t'x_0+ (1-t') x_1,y_1)\in C(Z),\\
&\ \text{ where }t'= \textstyle{\frac{t_1-t_2}{1-t_2}}.
\end{align*}
\textit{Case III}: $t_1<t_2$.
\begin{align*}
(x,y)=&\ t_1(x_0,y_0)+(1-t_1)\left(x_1, \textstyle{\frac{t_2-t_1}{1-t_1}}y_0+ \textstyle{\frac{1-t_2}{1-t_1}}y_1\right)\\
=&\ t_1(x_0,y_0)+(1-t_1)(x_1,t'y_0+(1-t')y_1)\in C(Z),\\
&\ \text{ where }t'= \textstyle{\frac{t_2-t_1}{1-t_1}}.
\end{align*}
Combining all three cases, we conclude that
$(x,y)\in C(Z)$ and
consequently $C(X)\times C(Y)\subseteq C(Z)$.
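The case analysis above reduces to elementary identities between convex combinations, which can be spot-checked numerically (a sketch of ours, with $X$ and $Y$ taken to be subsets of $\mathbb{R}$):

```python
import random

random.seed(0)
for _ in range(1000):
    x0, y0, x1, y1 = (random.uniform(-1, 1) for _ in range(4))
    t1, t2 = random.random(), random.random()
    # a generic point of C(X) x C(Y)
    x = t1 * x0 + (1 - t1) * x1
    y = t2 * y0 + (1 - t2) * y1
    if t1 >= t2:
        # Cases I-II: peel off t2 toward the cone point (x0, y0)
        tp = (t1 - t2) / (1 - t2)
        xx = t2 * x0 + (1 - t2) * (tp * x0 + (1 - tp) * x1)
        yy = t2 * y0 + (1 - t2) * y1
    else:
        # Case III
        tp = (t2 - t1) / (1 - t1)
        xx = t1 * x0 + (1 - t1) * x1
        yy = t1 * y0 + (1 - t1) * (tp * y0 + (1 - tp) * y1)
    assert abs(xx - x) < 1e-12 and abs(yy - y) < 1e-12
```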
\end{proof}
\subsection{Stasheff polytopes vs Collapsed Multiplihedra}\label{StasMulti}
We shall use the Definition \ref{def:Associahedra1} for Stasheff polytopes. Similarly, due to Lemma \ref{lem:ConRealMulti}, we will use Definition \ref{def:CollMulti} for collapsed multiplihedra.
\begin{prop}\label{thm:StasMulti}
Stasheff polytopes $K_{n+1}$ and collapsed multiplihedra $J'_n$ are combinatorially isomorphic.
\end{prop}
\begin{proof}
Both $K_{n+1}$ and $J'_n$ are convex polytopes whose face posets are isomorphic to $\mathfrak{P}(n+1)$ and $\mathfrak{J}'_n$ respectively. Therefore, in order to exhibit an isomorphism between $J'_n$ and $K_{n+1}$, it suffices to find a bijection between $\mathfrak{P}(n+1)$ and $\mathfrak{J}'_n$ as posets.
Define $\phi:\mathfrak{J}'_n\to \mathfrak{P}(n+1)$ as follows
\begin{align*}
f(X_1) & \mapsto f(X_1)a_{n+1}:=(X_1)a_{n+1}\\
f((X_1)\cdot\ldots \cdot (X_{k-1})\cdot (X_k)) & \mapsto ((X_1)\cdot \ldots \cdot (X_{k-1})\cdot (X_k)) a_{n+1} := (X_1)\ldots (X_{k-1}) (X_k)a_{n+1}
\end{align*}
For example,
\begin{align*}
\phi\left(f(X_1)\ldots f(X_{k-1})f(X_k)\right)
&= f(X_1)\ldots f(X_{k-1}) f(X_k)a_{n+1}\\
&= f(X_1) \ldots f(X_{k-1}) ((X_k)a_{n+1})\\
&= f(X_1) \ldots f(X_{k-2})((X_{k-1})((X_k)a_{n+1}))\\
&= \cdots\\
&= (X_1)(\ldots ((X_{k-1})((X_k)a_{n+1})) \ldots),\\[4pt]
\phi\left(f((X_1)\cdot (X_2))f((X_{3})\cdot (X_4)\cdot (X_5))\right)
&= f((X_1)\cdot (X_2))(((X_{3})\cdot (X_4)\cdot (X_5))a_{n+1})\\
&= ((X_1)\cdot (X_2))((X_{3}) (X_4) (X_5)a_{n+1})\\
&= (X_1)(X_2)((X_{3}) (X_4)(X_5)a_{n+1}).
\end{align*}
Here each $X_i$ is some rule of association of consecutive elements among $a_1,a_2,\ldots,a_n$ in $A$, of some length, such that the total length of all the $X_i$'s is $n$, and $a_{n+1}$ is a different element in $A$.
In the above correspondence, note that the bracketing within the $X_i$'s is not changed. We only insert some pairs of brackets (removing the $f$'s) or remove the $\cdot$'s, and append an extra letter $a_{n+1}$ on the right, to get a bracketing of the word $a_1a_2\ldots a_{n+1}$. Note also that the parentheses to the right of the letter $a_{n+1}$ determine the number of $f$'s and their positions, where the absence of such parentheses means a single $f$ with the $\cdot$'s between the associated words.
Thus, the position of each $f$ and $\cdot$ gives a unique bracketing of the word $a_1a_2\ldots a_{n+1}$ and the process can also be reversed. So $\phi$ is bijective.
Now, in order to check that $\phi$ preserves the poset relation, we need to show that $P\prec P'$ implies $\phi(P)< \phi(P')$. There are three possible ways (cf. operations (\ref{it:op1}), (\ref{it:op2}), (\ref{it:op3})) in which $P$ can be related to $P'$.
\begin{enumerate}
\item $P$ is obtained from $P'$ by adding brackets in the domain. Since $\phi$ does not interact with the brackets in the domain, $\phi(P)$ is also obtained from $\phi(P')$ by adding brackets, i.e., $\phi(P)< \phi(P')$.
\item $P$ is obtained from $P'$ by replacing a $\cdot$ by `$)f($'. Thus $P$ contains more $f$'s than $P'$. But from the correspondence, we know that each $f$ corresponds to a pair of brackets, so $\phi(P)$ must be obtained from $\phi(P')$ by adding brackets, i.e., $\phi(P)< \phi(P')$.
\item $P$ is obtained from $P'$ by replacing one or more consecutive $\cdot$'s by a pair of brackets that encloses all the elements adjacent to those $\cdot$'s. This process adds brackets to $P'$, and $\phi$ does not change the parent bracketing. So $\phi(P)$ must be obtained from $\phi(P')$ by adding brackets, i.e., $\phi(P)< \phi(P')$.
\end{enumerate}
Thus $\phi$ defines a bijection of the posets $\mathfrak{J}'_n$ and $\mathfrak{P}(n+1)$. Hence $J'_n$ and $K_{n+1}$ are combinatorially isomorphic.
\end{proof}
\subsection{Collapsed Multiplihedra vs Graph Cubeahedra}\label{MultiCubea}
\begin{prop}\label{thm:MultiCubea}
Collapsed multiplihedra $J'_{n+1}$ and graph cubeahedra $\mathcal{C}P_n$ for path graph $P_n$ with $n$ nodes are combinatorially isomorphic.
\end{prop}
\begin{proof}
Recall from Theorem \ref{thm:OrderCubeahedra} that the graph cubeahedron $\mathcal{C}P_n$ is a convex polytope of dimension $n$ whose face poset is isomorphic to the set of design tubings of $P_n$. Recall also that the collapsed multiplihedron $J'_{n+1}$ is a convex polytope of dimension $n$ whose face poset is isomorphic to $\mathfrak{J}'_{n+1}$. Thus, to describe an isomorphism, it is enough to produce a bijection at the poset level.\\
\hspace*{0.5cm}A bijection between the design tubings and the elements of $\mathfrak{J}'_{n+1}$ is defined through the following correspondences:
\begin{itemize}
\item Put $a_1$ through $a_{n+1}$ starting from the left of the left-most node to the right of the right-most node of the graph:
\begin{figure}[H]
\centering
\includegraphics[page=1]{Tubing_images/ditems.pdf}
\caption{Initial step}
\end{figure}
\item Each round tube corresponds to a pair of parentheses. If the round tube includes the $k$-th through $(k+r-1)$-th nodes of the graph, then the corresponding parentheses enclose $a_k$ through $a_{k+r}$.
\begin{figure}[H]
\centering
\includegraphics[page=3]{Tubing_images/ditems.pdf}
\caption{Correspondence of round tube}
\end{figure}
\item Each square tube corresponds to the inclusion of `$)f($' in the string $f(a_1a_2\ldots a_{n+1})$. If the square tube includes the $k$-th node of the graph, then `$)f($' is inserted between $a_k$ and $a_{k+1}$.
\begin{figure}[H]
\centering
\includegraphics[page=5]{Tubing_images/ditems.pdf}
\caption{Correspondence of square tube}
\end{figure}
\item An empty node in a tubing corresponds to `$\cdot$', i.e., if the $k$-th node of the graph is not included in any tube of the given tubing, then put a `$\cdot$' between $a_k$ and $a_{k+1}$.
\begin{figure}[H]
\centering
\includegraphics[page=6]{Tubing_images/ditems.pdf}
\caption{Correspondence of empty node}
\end{figure}
\end{itemize}
Finally, as the position and type of each tube give a unique element of $\mathfrak{J}'_{n+1}$, we get a bijective correspondence between design tubings and elements of $\mathfrak{J}'_{n+1}.$ An example, assuming $n=6$, is given below.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[page=7, width= \linewidth]{Tubing_images/ditems.pdf}
\end{subfigure}
\hspace{0.2cm}
\begin{subfigure}[b]{0.47\linewidth}
\centering
\includegraphics[page=8, width= \linewidth]{Tubing_images/ditems.pdf}
\end{subfigure}
\caption{Bijection between design tubings and multiplihedra}
\end{figure}
It follows from the correspondence that the removal of a round tube corresponds to the removal of a pair of parentheses or the addition of a `$\cdot$', and the removal of a square tube corresponds to replacing `$)f($' by `$\cdot$'. This shows that the poset relation between design tubings matches the poset relation in $\mathfrak{J}'_{n+1}.$ As the two posets are isomorphic, this finishes the proof.
\end{proof}
\bibliographystyle{siam}
\section{Introduction}
\label{sec:1}
The microphysical source of viscosity in accretion disks has been a
long-standing puzzle. Since the early 1990s, there has been a growing
consensus that magnetic fields generated by the magnetorotational
instability (MRI) are key to providing the required viscosity in cold
accretion disks (\cite{PBM2} (and references
therein), \cite{PBM4}, \cite{PBM14}). The standard
treatment of the MRI is valid only for collisional plasmas, which can be
described in the MHD approximation. However, the plasmas comprising hot,
two-temperature accretion flows, like those described in \cite{PBM8} and \cite{PBM19} (hereafter SLE) are clearly collisionless. This is also the case for the radiatively
inefficient, advection-dominated accretion flows (ADAFs) treated in
\cite{PBM15} and \cite{PBM16}.
Filho (\cite{PBM9}), Kafatos (\cite{PBM12}) and Paczy\'nski (\cite{PBM17}) had initially
suggested that viscosity due to collisions between hot protons might be
important in two-temperature accretion flows, although the effects of an
embedded turbulent magnetic field were not included in their treatments.
Subramanian, Becker \& Kafatos (\cite{PBM23}; hereafter SBK96) proposed that a
{\it hybrid} viscosity, due to protons colliding with magnetic
scattering centers, might be the dominant viscosity mechanism in such
accretion disks. In this paper we investigate the implications of the
hybrid viscosity for the development of the MRI in hot disks. In
particular, we show that this mechanism can be used to establish an
interesting connection between the fluid models and the quasi-kinetic
treatments used by previous authors to study the viscous enhancement
of the growth rate during the early stages of the MRI.
\section{MVI in hot accretion disks}
Balbus (\cite{PBM1}) and Islam \& Balbus (\cite{PBM11}) employed an MHD approach to
study the effect of viscosity on the development of the MRI, and
discovered a robust instability which they call the magnetoviscous
instability (MVI). In the MVI, angular momentum is exchanged between
fluid elements via viscous transport, which plays a central role in the
development of the instability. Balbus (\cite{PBM1}) does not address the
physical origin of the viscosity that is central to the development of
the MVI, and therefore his results are stated in terms of an unspecified
coefficient of dynamic viscosity, $\eta$. Islam \& Balbus (\cite{PBM11}) assumed
the plain Spitzer (collisional) viscosity due to proton-proton
collisions in their treatment of the MVI, but this particular mechanism
is not effective in hot, collisionless disks. There have been some
recent attempts at quasi-kinetic treatments of MRI-like instabilities in
collisionless plasmas (e.g., \cite{PBM18}, \cite{PBM20}, \cite{PBM21}). It is interesting to note that the pressure anisotropy concept discussed in these papers is
somewhat similar to the idea embodied in the hybrid viscosity formalism
of SBK96. This suggests that it may be possible to develop a ``fluid''
picture based on the hybrid viscosity that would be applicable in hot
disks, hence bridging the gap between the two paradigms. The hybrid
viscosity concept of SBK96 relies only on the momentum deposited by
particles propagating along magnetic field lines between adjacent
annuli in the disk.
\section{Applicability of the hybrid viscosity}
Paczy\'nski (\cite{PBM17}) and SBK96 noted that the presence of even a very weak
magnetic field can effectively ``tie'' protons to magnetic field lines.
Paczy\'nski argued that in this situation the ion-ion collisional mean
free path is much larger than the proton Larmor radius and therefore the
effective mean free path is equal to the proton Larmor radius. This led
him to conclude that the viscosity would effectively be quenched in such
a plasma. However, the protons in hot accretion disks are typically
super-Alfv\'enic, especially in the initial stages of a magnetic
field-amplifying instability such as the MRI, when the plasma $\beta$
parameter is quite large. This reflects the fact that the ratio of the
proton thermal speed to the Alfv\'en speed is equal to $(3\beta/2)^{1/2}$.
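This ratio follows directly from the cgs definitions $v_{\rm th}=(3kT_i/m_p)^{1/2}$ and $v_{\rm A}=B/(4\pi N_i m_p)^{1/2}$; a quick numerical sanity check of ours, with arbitrary illustrative plasma parameters and assumed textbook constants:

```python
from math import pi, sqrt

k_B, m_p = 1.381e-16, 1.673e-24        # cgs; assumed textbook values
N_i, T_i, B = 1.0e10, 1.0e11, 1.0e3    # arbitrary illustrative plasma

v_th = sqrt(3 * k_B * T_i / m_p)       # rms proton thermal speed
v_A = B / sqrt(4 * pi * N_i * m_p)     # Alfven speed
beta = 8 * pi * N_i * k_B * T_i / B**2 # plasma beta parameter

# v_th / v_A equals (3 * beta / 2)**0.5, identically in the parameters
assert abs(v_th / v_A - sqrt(1.5 * beta)) < 1e-9 * (v_th / v_A)
```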
Since the magnetic field evolves on Alfv\'en timescales, it can be
considered to be static for our purposes. The motion of collisionless,
charged particles propagating through a static, tangled magnetic field
has been explored extensively in the context of cosmic ray propagation
(e.g., \cite{PBM5}, \cite{PBM6}, \cite{PBM10}). It has been conclusively established that the
particle transport does not obey Bohm diffusion for a wide range of
rigidities and turbulence levels (see, e.g., Fig.~4 of \cite{PBM5} and Fig.~4 of \cite{PBM6}). In particular,
the low rigidity, low turbulence level case appropriate for our
situation obeys the predictions of quasi-linear theory quite well, and
the mean free paths are much larger than the Larmor radius as expected.
Under these conditions, SBK96 demonstrated the importance of a new kind
of viscosity called the ``hybrid viscosity,'' in which angular momentum
is transported via collisions between protons and static irregularities
(``kinks'') in the magnetic field. In this picture, a proton spirals
tightly along a magnetic field line until its gyro-averaged guiding
center motion (and hence its gyro-averaged momentum) is changed via an
encounter with a kink. During the encounter the proton therefore
exchanges angular momentum with the field, which transfers the resulting
torque to the plasma. The effective mean free path used in the
computation of the viscosity is set equal to the distance between the
magnetic kinks (i.e., the field coherence length). We express the hybrid
viscosity mechanism in terms of a pressure anisotropy in \S~6.1.
Here we examine the implications of the hybrid viscosity for the
development of the MVI in hot, two-temperature accretion disks around
underfed black holes. We assume that the accreting plasma is composed of
fully ionized hydrogen. The physical picture involves the perturbation of
an initially straight magnetic field line that eventually leads to the
instability (see, e.g., Fig.~1 of \cite{PBM1}). Since the proton
Larmor radius is negligible in comparison to a macroscopic length scale,
we can effectively think of the proton as sliding along the field line
like a bead on a wire. The proton is forced to change its direction upon
encountering the kink associated with the initial field perturbation. In
such a situation, the effective mean free path, $\lambda$, used in the
description of the hybrid viscosity should be set equal to the
wavelength of the initial perturbation. We demonstrate that the hybrid
viscosity is the principal mediator of the MVI during the early stages
of the instability.
\section{Hybrid viscosity in hot accretion disks}
The structure of hot, two-temperature accretion disks was first studied
in detail by SLE, and later by Eilek \&
Kafatos (\cite{PBM8}) and SBK96. The closely related advection-dominated
accretion flows were analyzed by Narayan \& Yi (\cite{PBM15}), Narayan,
Mahadevan, \& Quataert (\cite{PBM16}), and many subsequent authors. In this
section, we investigate the nature of the viscosity operative in hot,
two-temperature accretion disks based on a simplified set of
model-independent relations that are applicable to both ADAF and SLE
disks.
The gas in the disk can be considered collisionless with respect to
the protons provided
\begin{equation}
\lambda_{ii} > H \ ,
\label{eq1}
\end{equation}
where $H$ is the half-thickness of the disk, and the ion-ion Coulomb
collisional mean free path, $\lambda_{ii}$, is given in cgs units by
(SBK96)
\begin{equation}
\lambda_{ii} = 1.80 \times 10^5 \, {T_i^{2} \over N_i \, \ln\Lambda}
\ , \label{eq2}
\end{equation}
for a plasma with Coulomb logarithm $\ln\Lambda$ and ion temperature and
number density $T_i$ and $N_i$, respectively. We can combine
equations~(\ref{eq1}) and (\ref{eq2}) to obtain
\begin{equation}
{\lambda_{ii} \over H} = 1.20 \times 10^{-19} \, {T_i^2 \over
\tau_{es} \, \ln\Lambda} \ > \ 1 \ ,
\label{eq3}
\end{equation}
where the electron scattering optical thickness, $\tau_{es}$, is given by
\begin{equation}
\tau_{es} = N_i \, \sigma_{_{\rm T}} \, H \ ,
\label{eq4}
\end{equation}
and $\sigma_{_{\rm T}}$ is the Thomson scattering cross section.
Equation~(\ref{eq3}) can be rearranged to obtain a constraint on
$\tau_{es}$ required for the disk to be collisionless, given by
\begin{equation}
\tau_{es} \ < \ {1.20 \times 10^5 \, T_{12}^2 \over \ln\Lambda}
\ \sim \ 4 \times 10^3 \ ,
\label{eq5}
\end{equation}
where $T_{12}\equiv T/10^{12}\,$K and the final result holds for
$\ln\,\Lambda=29$ and $T_{12}\sim 1$. This confirms that tenuous,
two-temperature disks with $T_i \sim 10^{11}$--$10^{12}\,$K will be
collisionless for typical values of $\tau_{es}$.
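The numerical prefactors quoted in equations~(3) and (5) can be reproduced directly; a short check of ours, assuming the standard value of the Thomson cross section:

```python
sigma_T = 6.652e-25          # Thomson cross section [cm^2], assumed value
lnLambda, T12 = 29.0, 1.0
T_i = 1.0e12 * T12

# eq. (3): lambda_ii / H = 1.80e5 T^2 / (N lnLambda H), and eliminating
# N via eq. (4), N = tau_es / (sigma_T H), makes H cancel:
coef = 1.80e5 * sigma_T      # prefactor of T^2 / (tau_es * lnLambda)
assert abs(coef - 1.20e-19) < 0.01e-19

# eq. (5): collisionless condition lambda_ii / H > 1 caps tau_es
tau_max = coef * T_i**2 / lnLambda
print(round(tau_max))        # ~ 4e3, as quoted in the text
```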
The collisionless nature of hot two-temperature accretion flows
established by equation~(\ref{eq5}) strongly suggests that the plain
Spitzer viscosity is not going to be relevant for such disks, although
the answer will depend on the strength of the magnetic field. The hybrid
viscosity will dominate over the Spitzer viscosity provided the ion-ion
collisional mean free path $\lambda_{ii}$ exceeds the Larmor radius,
$\lambda_{\rm L}$, so that the protons are effectively ``tied'' to
magnetic field lines. We therefore have
\begin{equation}
\lambda_{ii} > \lambda_{\rm L} \ ,
\label{eq6}
\end{equation}
where the Larmor radius is given in cgs units by (SBK96)
\begin{equation}
\lambda_{\rm L} = 0.95 \, {T_i^{1/2} \over B} \ ,
\label{eq7}
\end{equation}
where $B$ is the magnetic field strength. Whether the disk is of the SLE
or ADAF types, it is expected to be in vertical hydrostatic equilibrium,
and therefore
\begin{equation}
H \Omega_{\rm K} = c_s = \sqrt{k T_i \over m_p} \ ,
\label{eq8}
\end{equation}
where $\Omega_{\rm K}=(GM/r^3)^{1/2}$ is the Keplerian angular velocity
at radius $r$ around a black hole of mass $M$, $c_s$ is the
isothermal sound speed, and $k$ and $m_p$ denote Boltzmann's constant
and the proton mass, respectively.
We can utilize equation~(\ref{eq6}) to derive a corresponding constraint
on the plasma $\beta$ parameter,
\begin{equation}
\beta \equiv {8 \pi N_i k T_i \over B^2} \ ,
\label{eq9}
\end{equation}
such that the hybrid viscosity dominates over the Spitzer viscosity. By
combining equations~(\ref{eq2}), (\ref{eq4}), (\ref{eq6}), (\ref{eq7}),
(\ref{eq8}), and (\ref{eq9}), we find that
\begin{equation}
\beta \ < \ 3.71 \times 10^{32} \, {T_{12}^{9/2} \, R^{3/2} \, M_8
\over \tau_{es} \, (\ln\Lambda)^2} \ ,
\label{eq10}
\end{equation}
where $M_8 \equiv M/(10^8 \msun)$ and $R\equiv r c^2/(GM)$. The minimum
possible value of the right-hand side in equation~(\ref{eq10}) is
obtained for the maximum value of $\tau_{es}$, which is given by
equation~(\ref{eq5}). We therefore find that
\begin{equation}
\beta \ < \ 3.09 \times 10^{27} \, {T_{12}^{5/2} \, R^{3/2} \, M_8
\over \ln\Lambda} \ .
\label{eq11}
\end{equation}
This relation is certainly satisfied in all cases involving the
accretion of plasma onto a black hole, even in the presence of an
infinitesimal magnetic field. We therefore conclude that the protons
will be effectively tied to the magnetic field lines in two-temperature
accretion disks around stellar mass and supermassive black holes, which
implies that the hybrid viscosity dominates over the Spitzer viscosity
in either SLE or ADAF disks.
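As a cross-check of the numerical coefficients, the derivation of equations~(10) and (11) can be retraced with standard cgs constants (a sketch of ours; the constants are assumed textbook values):

```python
from math import pi, sqrt

# assumed textbook cgs constants
k_B, m_p = 1.381e-16, 1.673e-24        # Boltzmann constant, proton mass
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33
sigma_T = 6.652e-25                    # Thomson cross section

T = 1.0e12                             # i.e. T12 = 1
# lambda_ii = lambda_L (eqs. 2 and 7) fixes the minimal field B, and
# eliminating N_i via eqs. (4) and (8) turns eq. (9) into eq. (10);
# evaluate its coefficient at R = M8 = tau_es = lnLambda = 1:
H = sqrt(k_B * T / m_p) * G * (1e8 * M_sun) / c**3     # eq. (8), in cm
C10 = 8 * pi * k_B * (1.80e5 / 0.95)**2 * T**4 * sigma_T * H
# substituting the maximal tau_es of eq. (5) then gives eq. (11):
C11 = C10 / 1.20e5
print(f"{C10:.3g} {C11:.3g}")          # close to 3.71e32 and 3.09e27
```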
The results of this section confirm that protons in two-temperature
accretion disks rarely collide with each other, and are closely tied to
magnetic field lines, even for very weak magnetic fields. If a field
line is perturbed, a typical proton sliding along it will follow the
perturbation, and will thus be effectively redirected. This is the basic
premise of the hybrid viscosity concept, which we will now apply to the
development of the MVI.
\section{MVI driven by the hybrid viscosity}
Figure~1 of \cite{PBM11} shows that magnetoviscous effects
significantly enhance the MRI growth rates in the parameter regime
\begin{equation}
X \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} x \ , \ \ \ \ Y \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} y \ ,
\label{eq12}
\end{equation}
where $x \sim 1$, $y \sim 1$, and
\begin{equation}
\nonumber
X \equiv {2.0 \, (k_Z \, H)^2 \over \beta} \ , \ \ \ \
Y \equiv {1.5 \, \eta \, k_\perp^2 \over N_i \, m_p \, \Omega_{\rm K}} \ ,
\label{eq13}
\end{equation}
with $\eta$ denoting the coefficient of dynamic viscosity and $k_Z$ and
$k_\perp$ representing the $z$ and transverse components of the field
perturbation wavenumber, respectively. The maximum MVI growth rate is
$\sqrt{3}\,\Omega_{\rm K}$, which is $4/\sqrt{3} \sim 2.3$ times larger than
the maximum MRI growth rate of $(3/4)\,\Omega_{\rm K}$. The conditions in
equation~(\ref{eq12}) are derived from the dispersion relation given in
equation~(33) of \cite{PBM11}, which is general enough to
accommodate different prescriptions for the viscosity coefficient
$\eta$. The condition $X \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} x$ implies a constraint on $\beta$
given by
\begin{equation}
\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} {2 \, (k_Z \, H)^2 \over x} \ .
\label{eq14}
\end{equation}
As mentioned earlier, a proton sliding along a given field line is
forced to change its direction when it encounters a kink/perturbation in
the field line. The effective viscosity arises due to the momentum
deposited in the fluid by the proton when it encounters the
perturbation. In this picture, the perturbation wavelength plays the
role of an effective mean free path. If we consider perturbations along
an initially straight field line, as in Figure~1 of \cite{PBM1},
then only the transverse component of the perturbation wavelength is
relevant, and the effective mean free path for the proton is therefore
\begin{equation}
\lambda = {2 \, \pi \over k_\perp} \equiv \xi H \ ,
\label{eq15}
\end{equation}
where $\xi \le 1$, since the perturbation wavelength $\lambda$ cannot exceed
the disk half-thickness $H$ (SBK96).
In general, the Shakura-Sunyaev (\cite{PBM22}) viscosity parameter $\alpha$ is
related to the coefficient of dynamic viscosity $\eta$ via (SBK96)
\begin{equation}
\alpha P \equiv - \eta \, R \, {d \Omega_{\rm K} \over d R}
= {3 \over 2} \, \eta \, \Omega_{\rm K} \ ,
\label{eq16}
\end{equation}
where $P=N_i \, k \, T_i$ is the pressure in a two-temperature disk with
$T_i \gg T_e$. By combining equations~(\ref{eq8}), (\ref{eq13}), and
(\ref{eq16}), we find that the condition $Y \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} y$ can be rewritten
as
\begin{equation}
(k_\perp \, H)^2 \, \alpha \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} y \ .
\label{eq17}
\end{equation}
Following Islam \& Balbus (\cite{PBM11}), we expect that $k_\perp \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} k_Z$.
By combining equations~(\ref{eq14}) and (\ref{eq17}), we therefore
conclude that $\beta$ must satisfy the condition
\begin{equation}
\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} {2 \, y \over \alpha x} \ .
\label{eq18}
\end{equation}
We can also combine equations~(\ref{eq14}) and (\ref{eq15}) to obtain
the separate constraint
\begin{equation}
\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} {79 \over \xi^2 x} \ .
\label{eq19}
\end{equation}
Equations~(\ref{eq18}) and (\ref{eq19}) must {\it both} be satisfied if
the MVI is to significantly enhance the MRI growth rates. Hence the
combined condition for $\beta$ is given by
\begin{equation}
\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} {\rm Max}\left({79 \over x \xi^2} \ , \ {2 \, y \over \alpha x}
\right) \ .
\label{eq20}
\end{equation}
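As a quick numerical illustration, the combined bound of equation~(20) can be evaluated directly. The following sketch is ours, not part of the original analysis; the function name is a hypothetical choice.

```python
def beta_crit_combined(x, y, alpha, xi):
    """Combined lower bound on beta from eq. (20)."""
    term_geometric = 79.0 / (x * xi**2)    # eq. (19): perturbation must fit within the disk
    term_viscous = 2.0 * y / (alpha * x)   # eq. (18): viscous-coupling condition
    return max(term_geometric, term_viscous)

# With x ~ 1, y ~ 1, xi = 1 and alpha = 1.2 (the hybrid value derived
# below), the geometric term dominates and the bound is beta >~ 79.
print(beta_crit_combined(1.0, 1.0, 1.2, 1.0))   # -> 79.0
```

For these fiducial parameters the viscous term $2y/(\alpha x)\approx1.7$ is negligible, so the geometric constraint sets the bound.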
We can use equation~(\ref{eq16}) to calculate the Shakura-Sunyaev
parameter $\alpha_{\rm hyb}$ describing the hybrid viscosity. The
associated coefficient of dynamic viscosity is given by
\begin{equation}
\eta_{\rm hyb} = {\lambda \over \lambda_{ii}}
\, \eta_{_{\rm S}} \ ,
\label{eq21}
\end{equation}
where $\lambda_{ii}$ is computed using equation~(\ref{eq2}) and
$\eta_{_{\rm S}}$ is the standard Spitzer collisional viscosity,
evaluated in cgs units using
\begin{equation}
\eta_{_{\rm S}} = 2.20 \times 10^{-15} \, {T_i^{5/2} \over \ln\Lambda} \ .
\label{eq22}
\end{equation}
The quantity $\eta_{\rm hyb}$ defined in equation~(\ref{eq21}) describes
the effect of momentum deposition due to protons spiraling tightly along
a magnetic field line over a mean free path $\lambda$. It differs from
the expression given in equation~(2.14) of SBK96 by a factor of $2/15$,
because we do not consider tangled magnetic fields here. Setting $\eta =
\eta_{\rm hyb}$ in equation~(\ref{eq16}) and utilizing
equations~(\ref{eq2}), (\ref{eq8}), (\ref{eq15}), (\ref{eq21}), and
(\ref{eq22}), we find after some algebra that the expression for
$\alpha_{\rm hyb}$ reduces to the simple form
\begin{equation}
\alpha_{\rm hyb} = 1.2 \, \xi
\ .
\label{eq23}
\end{equation}
We can now combine equations~(\ref{eq20}) and (\ref{eq23}) to conclude
that in the case of the hybrid viscosity, the MVI is able to effectively
enhance the MRI growth rates if
\begin{equation}
\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} \beta_{\rm crit} \equiv {\rm Max}\left({79 \over x \xi^2} \ ,
\ {1.7 y \over x \xi}\right)
\ .
\label{eq24}
\end{equation}
In particular, we note that if $x \sim 1$ and $y \sim 1$, then
equation~(\ref{eq24}) reduces to $\beta_{\rm crit} = 79\,\xi^{-2}$, since
$\xi \le 1$. We therefore conclude that magnetoviscous effects driven by
the hybrid viscosity will significantly enhance the growth rate
(compared with the standard MRI growth rate) until the plasma $\beta$
parameter reaches $\sim 80$, or, equivalently, until the field strength
$B$ reaches $\sim 10\%$ of the equipartition value. This assumes that
the dominant perturbations have $\xi \sim 1$, which is expected to be
the case during the early stages of the instability. Once the field
exceeds this strength, the growth rate of the instability during the
linear stage will be equal to the MRI rate.
\section{Relation to previous work}
It is interesting to contrast our result for the $\beta$ constraint with
those developed by previous authors using different theoretical
frameworks.
\subsection{Hybrid viscosity in terms of pressure anisotropy}
Before proceeding to discuss the result for the $\beta$
constraint, we first cast the basic hybrid viscosity mechanism in terms
of a pressure anisotropy. Several similar treatments appeal to a
large-scale pressure anisotropy, rather than an explicit viscosity
mechanism (e.g., \cite{PBM18}, \cite{PBM20}, \cite{PBM21}). It is therefore instructive to show
that the hybrid viscosity mechanism we employ can be cast in these
terms.
We follow the approach of SBK96 in considering a perturbation in the
local magnetic field of an accretion disk. The pressure anisotropy due
to the momentum flux carried by the particles can be analyzed in the
local region using cartesian coordinates, with the $\hat{z}$-axis
aligned in the azimuthal (orbital) direction, the $\hat{y}$-axis
pointing in the outward radial direction, and the $\hat{x}$-axis
oriented in the vertical direction. The unperturbed magnetic field is
assumed to lie in the $\hat{z}$ direction, and the perturbed field makes
an angle $\theta$ with respect to the $\hat{z}$-axis, and an azimuthal
angle $\phi$ with respect to the $\hat{x}$-axis. In keeping with the
hybrid viscosity scenario, we assume that the particles spiral tightly
around the perturbed field line. In this situation, the component of the
particle pressure in the direction {\it parallel} to the magnetic field,
$P_{||}$, is equal to the $\hat{z}$-directed flux of the
$\hat{z}$-component of momentum, $P_{zz}$. Likewise, the total particle
pressure {\it perpendicular} to the field, $P_{\perp}$, is equal to the
sum of the $\hat{x}$-directed flux of the $\hat{x}$-component of momentum
and the $\hat{y}$-directed flux of the $\hat{y}$-component of momentum,
denoted by $P_{xx}$ and $P_{yy}$, respectively. Following the same approach that
leads to equation~(2.11) of SBK96, we obtain for the parallel pressure
\begin{eqnarray}
\nonumber
P_{||} = P_{zz} = 2\,m_p\,N_{i}\,\cos^{2} \theta \, \times \\
\biggl [\frac{k T_{i}}{2 m_p}
- \biggl ( \frac{2 k T_{i}}{\pi m_p} \biggr )^{1/2} \, u'(0) \, \lambda \,
\cos\theta \, \sin\theta \, \sin\phi \biggr ]
\ ,
\label{eq25}
\end{eqnarray}
where $u(y)$ represents the shear velocity profile and the prime denotes
differentiation with respect to $y$.
Similarly, the total perpendicular pressure is given by
\begin{eqnarray}
\nonumber
P_{\perp} = P_{xx} + P_{yy} = 2\,m_p\,N_{i}\, \sin^2\theta \, \times \\
\biggl [\frac{k T_{i}}{2 m_p}
- \biggl ( \frac{2 k T_{i}}{\pi m_p} \biggr )^{1/2} \, u'(0) \, \lambda \,
\cos\theta \, \sin\theta \, \sin\phi \biggr ] \ .
\label{eq26}
\end{eqnarray}
Taken together, equations~(\ref{eq25}) and (\ref{eq26}) imply that
\begin{equation}
\frac{P_{\perp}}{P_{||}} = \tan^2 \theta \ .
\label{eq27}
\end{equation}
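The cancellation leading to equation~(27) can be verified symbolically: the bracketed factor is common to equations~(25) and~(26), so the ratio is independent of $u'(0)$, $\lambda$, and $\phi$. The check below is our own (symbol names are hypothetical), assuming SymPy is available.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
kT_over_m, uprime, lam = sp.symbols('kT_over_m uprime lam', positive=True)

# Common bracketed factor of eqs. (25) and (26)
bracket = kT_over_m / 2 - sp.sqrt(2 * kT_over_m / sp.pi) * uprime * lam \
    * sp.cos(theta) * sp.sin(theta) * sp.sin(phi)

P_par = 2 * sp.cos(theta)**2 * bracket    # eq. (25), overall factor m_p N_i dropped
P_perp = 2 * sp.sin(theta)**2 * bracket   # eq. (26)

ratio = sp.simplify(P_perp / P_par)
print(ratio)   # simplifies to tan(theta)**2, i.e. eq. (27)
```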
This result characterizes the pressure anisotropy associated with the
hybrid viscosity mechanism. Equation~(\ref{eq27}) is strictly valid only
in the limit of zero proton gyroradius, which is a reasonable
approximation in hot advection-dominated disks. When the field line is
unperturbed, so that it lies precisely along the $\hat{z}$-direction,
then $\theta=0$, and equation~(\ref{eq27}) indicates that the
perpendicular pressure tends to zero; in reality, owing to finite
gyroradius effects, the perpendicular pressure would actually be a
small, but finite quantity even in this limit. Early in the instability,
when the field line is slightly perturbed, $\theta$ has a small but
non-zero value, and equation~(\ref{eq27}) predicts that the
perpendicular pressure starts to increase in relation to the parallel
pressure. We have cast the hybrid viscosity mechanism in terms of a
pressure anisotropy in this section in order to make contact with that
part of the literature in which viscous momentum transport is treated
solely in this manner. The quasi-kinetic treatments of Quataert and
co-workers rely on a Landau fluid closure scheme for deriving the
perturbed pressure. The pressure anisotropy implied by the hybrid
viscosity mechanism (eq.~[\ref{eq27}]) is much simpler than the
corresponding result obtained using either the fluid closure scheme of
Quataert et al., or the double adiabatic scheme (\cite{PBM7})
adopted by other authors.
\subsection{Relation to MVI treatment}
In their treatment of the MVI, Islam \& Balbus (\cite{PBM11}) parametrized the
viscous transport in terms of an unspecified proton-proton collision
frequency, $\nu$. Their estimates of the growth rates in {\it
collisional} plasmas agree fairly well with those derived using
quasi-kinetic treatments. Based on their formalism, they conclude that
the $\beta$ regime within which magnetoviscous effects can significantly
impact the MRI growth rates in two-temperature accretion flows extends
to $\beta_{\rm crit} \sim 1$. However, as they point out, their approach breaks
down in the collisionless limit $\nu \to 0$, which describes the ADAF
disks of interest here. It is therefore not surprising that their
constraint on $\beta$ is significantly different from the one we have
derived in equation~(\ref{eq24}).
\subsection{Relation to quasi-kinetic treatment}
Quataert, Dorland \& Hammett (\cite{PBM18}) have treated the case of a strictly
collisionless plasma using a fairly complex kinetic formalism. Their
results suggest that, for the case with $B_{\phi} = B_{z}$ and $k_{r} =
0$ (which is the one considered by Islam \& Balbus and ourselves),
viscous effects will significantly impact the MRI growth rates for
values of $\beta$ that are several orders of magnitude larger than those
predicted by our formalism. For example, their analysis predicts that a
growth rate of $1.5\, \Omega_{\rm K}$ can be achieved if $\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} \beta_{\rm crit}
\sim 10^4$ (see Fig.~4 of \cite{PBM18} and Fig.~2 of \cite{PBM20}). On the other hand, Figure~1 of
\cite{PBM11} indicates that a growth rate of $1.5\, \Omega_{\rm K}$
can be achieved in the MHD model if $X \lower.4ex\hbox{$\;\buildrel <\over{\scriptstyle\sim}\;$} 0.35$, $Y \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} 12$,
which corresponds to $x = 0.35$, $y = 12$ in equation~(\ref{eq12}).
Assuming that $\xi \sim 1$ as before, equation~(\ref{eq24}) yields in
this case the condition $\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} \beta_{\rm crit} \sim 225$. Hence our MHD
model based on the hybrid viscosity predicts that viscous effects will
enhance the MRI growth rates down to much lower values of $\beta$ than
those obtained in the quasi-kinetic model. This difference reflects the
differing role of the particle pressure in the two scenarios.
In our formulation, the viscosity arises from protons that deposit
their momentum into the fluid upon encountering kinks in the magnetic
field, which is anchored in the local gas. The importance of forces due
to gas pressure relative to those due to the tension associated with the
magnetic field thus scales as the plasma $\beta$. On the other hand, gas
pressure forces are only $\sqrt{\beta}$ times as important as forces
arising out of magnetic tension in the quasi-kinetic treatment of
\cite{PBM18}. It follows that the value of
$\beta_{\rm crit}$ computed using our MHD model based on the hybrid viscosity
should be comparable to the square root of the $\beta_{\rm crit}$ value obtained
using the quasi-kinetic model, and this is borne out by the numerical
results cited above.
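The numbers quoted in this subsection can be checked in a few lines. The script below is ours, simply re-evaluating equation~(24) for the parameters read off Figure~1 of Islam \& Balbus and comparing with the square root of the quasi-kinetic estimate.

```python
import math

x, y, xi = 0.35, 12.0, 1.0                       # values corresponding to a growth rate 1.5 Omega_K
beta_crit_hybrid = max(79.0 / (x * xi**2), 1.7 * y / (x * xi))   # eq. (24)
beta_crit_kinetic = 1.0e4                        # quasi-kinetic estimate quoted above

print(round(beta_crit_hybrid))                   # -> 226, i.e. the ~225 quoted in the text
print(math.sqrt(beta_crit_kinetic))              # -> 100.0, same order as beta_crit_hybrid
```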
\section{Conclusions}
In this paper we have investigated the role of hot protons in
influencing the magnetoviscous instability described in \cite{PBM1}
and \cite{PBM11}. We have shown that the only relevant
viscosity mechanism in this situation is the ``hybrid'' viscosity, which
is due to the redirection of protons interacting with magnetic
irregularities (``kinks'') set up by the initial field perturbations. In
particular, we have demonstrated in equation~(\ref{eq24}) that viscous
effects associated with the hybrid viscosity will significantly augment
the MRI growth rates for $\beta \lower.4ex\hbox{$\;\buildrel >\over{\scriptstyle\sim}\;$} 80$, which corresponds to a
magnetic field strength $B$ below $\sim 10\%$ of the equipartition
value. For smaller values of $\beta$, we expect the instability to grow
at the MRI rate as long as it remains in the linear regime. This
conclusion is expected to be valid in any hot, two-temperature accretion
disk, including advection-dominated ones. We have obtained this result
using a relatively simple fluid treatment, based upon the general
dispersion relation obtained in \cite{PBM11}. Our use of the
hybrid viscosity concept alleviates an important drawback in the fluid
application made by Islam \& Balbus (\cite{PBM11}), because their treatment of
viscous transport breaks down in the collisionless plasmas of interest
here. The new results we have obtained allow an interesting comparison
between the MHD approach and the quasi-kinetic formalism used by other
authors. We show that the differences between the predictions made by
the two methodologies stem from the differing treatments of the particle
pressure.
PS gratefully acknowledges the hospitality of the Jagannath Institute of
Technology and Management, where part of this work was carried out.
\input{referenc1}
\printindex
\end{document}
\section{Supplemental material}
\begin{figure*}[t]
\includegraphics[width=\textwidth]{supp1.pdf}
\caption{Diffraction patterns from the bosons at $T/T_c=0.25$ and $\tilde{g} = 0.0217$. The intensity is normalized to 1 and in log scale.}
\label{fig:sup_diffs}
\end{figure*}
\myparagraph{Superfluidity and Bose-Einstein Condensation in two dimensions}
In this section, we clarify some confusion that may arise regarding the significance of superfluidity in our system, and its relation to the presence, or lack, of a Bose-Einstein condensate (BEC).
It is well-known that, in a homogeneous continuous system in $d=2$, there can be no second order phase transition, as fluctuations prevent long-range order from being established \cite{Hohenberg1967, Mermin1966}. In the case of Bose-Einstein condensation, the order parameter is represented by the condensate fraction \cite{pitaevski2003bose}; therefore, there can be no BEC in a homogeneous system in 2D.
On the other hand, the superfluid fraction is not an order parameter, and can instead be characterized as a response function to an external velocity field, a property that has been extensively used to characterize superfluidity through a reduction of the moment of inertia (we also do so in the next section). In this sense, it can be different from zero even when long-range order is absent. In $d=2$, even when long-range order is forbidden, a different kind of quasi-long-range order can be formed in the context of the Berezinskii-Kosterlitz-Thouless (BKT) transition, which leads to a non-zero superfluid fraction below a certain temperature \cite{KosterlitzThouless1973, ber70, cha95}.
When harmonic trapping is introduced, the system is not homogeneous anymore, and it is possible for the system to display BEC in 2D; this is indeed the case for non-interacting bosons in $d=2$. The question of whether interacting bosons in a trap undergo a transition of the BEC or BKT kind has led to investigations of what is called the BEC/BKT crossover, both theoretically and experimentally \cite{hol07, fle15}.
In this paper, we take the critical temperature of the $d=2$ trapped Bose gas as a reference point, but we do not concern ourselves with the intricacies related to boson condensation and the BEC/BKT crossover. For our purposes, what is important is that we can distinguish superfluid and insulating phases, and our methods, as described in the next section, rely only on the definition of superfluidity as a response function, with no explicit reference to condensation.
\myparagraph{Details on the Path integral Monte Carlo method} The core of the method lies in the application of Feynman's path integral to the partition function of a quantum system at finite temperature \cite{fey98, fey10}. Thermodynamic properties can then be measured on an equivalent, classical system, where each quantum particle is represented by a classical polymer. Quantum concepts, such as coherence and superfluidity, can be mapped across the equivalence as properties of the polymers, and can consequently be sampled by employing Monte Carlo procedures such as the Metropolis algorithm. In addition, we use the canonical Worm algorithm \cite{PhysRevLett.96.070601, Boninsegni2006} to efficiently sample configurations of connected polymers, which are crucial to the understanding of superfluidity. Reviews of and introductions to PIMC can be found in \cite{cep95, krauth2006statistical}.
The advantage of path integral techniques lies in their ability to determine the thermodynamic properties of the system starting from its basic constituents - the atoms and the microscopic interaction - within a precision limited only by numerical and statistical errors. In practice, the equivalence is realized approximately by breaking up the imaginary time interval $\beta$ into smaller intervals $\tau = \beta /M$. To each particle $i$ corresponds, then, a classical polymer made of $j=1\dots M$ beads, connected with each other through harmonic springs. Errors introduced by the equivalence are reduced as $M$ increases.
The basic version of our algorithm makes use of the harmonic propagator to efficiently simulate the behavior of bosons in the trapping potential, while the lattice is taken into account as an external potential in the sampling rates. The hard-core interaction is implemented through the pair-product approximation, requiring, in two dimensions, the use of tables for the propagator \cite{bar79, cep95, pil06}.
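To make the bead-and-spring picture concrete, the sketch below (our own illustration, not the authors' code) writes down the primitive-approximation action of a single closed polymer in a 2-D harmonic trap, in units with $\hbar=m=1$; the helper names are hypothetical.

```python
import numpy as np

def polymer_action(beads, beta, omega=1.0):
    """Primitive-approximation action of one closed single-particle path."""
    M = len(beads)
    tau = beta / M                                       # imaginary-time step tau = beta/M
    springs = beads - np.roll(beads, -1, axis=0)         # r_j - r_{j+1}, path closes on itself
    kinetic = np.sum(springs**2) / (2.0 * tau)           # harmonic-spring (kinetic) term
    potential = tau * np.sum(0.5 * omega**2 * beads**2)  # external trap term
    return kinetic + potential

rng = np.random.default_rng(0)
beads = rng.normal(scale=0.3, size=(16, 2))              # a random 16-bead polymer
print(polymer_action(beads, beta=2.0) > 0.0)             # -> True
```

Increasing $M$ shrinks $\tau=\beta/M$ and reduces the discretization error, as stated above.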
\myparagraph{Zonal superfluid fraction}
We define the zonal superfluid estimator, which was referenced in the main text. We begin with a quick review of the local estimator \cite{kwo06}.
In the context of the two-fluid model \cite{tis38, lon54}, the onset of superfluidity is described by separating the fluid into two components, a superfluid of density $\rho_s$ and a normal one of density $\rho_n$, contributing to the total density of the fluid:
\begin{equation}
\rho = \rho_n + \rho_s .
\end{equation}
The ratio of the superfluid density to the total one is the superfluid fraction,
\begin{equation}
n_s = \frac{\rho_s}{\rho}.
\end{equation}
The two components have different properties in terms of flow and entropy transport; in particular, the superfluid component displays zero viscosity, and is therefore unresponsive to the application of external velocity fields. When we consider angular velocities, this leads to a reduction of the total moment of inertia, compared to a classical fluid in the same conditions. This relationship is stated as
\begin{equation}
n_s = 1 - \frac{I}{I_{cl}},
\end{equation}
where $I$ is the measured moment of inertia, which only the normal component contributes to, while $I_{cl}$ is the classical moment of inertia, which is the one the same mass of fluid would have if it behaved classically.
In the context of PIMC, the expectation value of the angular momentum is given in terms of the area encircled by particle paths, leading to the estimator
\begin{equation} \label{sm_globsl}
n_s = \frac{2 m}{\lambda \beta} \frac{\langle A_z^2 \rangle}{I_{cl}},
\end{equation}
which is equation \eqref{global} in the main text, where we omitted the non-ergodic term for brevity, and $\lambda = \hbar^2/2m$. In this expression,
\begin{equation}
A_z = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} \textbf{r}_{i,j} \times \textbf{r}_{i,j+1}
\end{equation}
is the total area enclosed by particle paths, and $\textbf{r}_{i,j}$ is the position of the $j$-th bead in the $i$-th particle.
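The area estimator $A_z$ is straightforward to evaluate from the bead positions. The sketch below is our own illustration of this formula; as a sanity check, a single path winding once around the unit circle encloses an area close to $\pi$.

```python
import numpy as np

def path_area(paths):
    """Total signed area A_z enclosed by closed particle paths.

    paths: array of shape (N, M, 2) -- N particles, M beads each, in 2-D.
    """
    nxt = np.roll(paths, -1, axis=1)   # r_{i,j+1}, cyclic in the bead index
    cross_z = paths[..., 0] * nxt[..., 1] - paths[..., 1] * nxt[..., 0]
    return 0.5 * np.sum(cross_z)

# Sanity check: one particle winding once around the unit circle
M = 2000
angles = 2.0 * np.pi * np.arange(M) / M
circle = np.stack([np.cos(angles), np.sin(angles)], axis=-1)[None, :, :]
print(abs(path_area(circle) - np.pi) < 1e-4)   # -> True
```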
We can manipulate the equations above to give
\begin{equation}\label{eq_iicl}
I = I_{cl} (1 - n_s) = I_{cl} - \frac{2 m}{\lambda \beta} \langle A_z^2 \rangle.
\end{equation}
In inhomogeneous systems, the fields describing the two components acquire a spatial dependence, and so does the superfluid fraction itself:
\begin{equation}
\rho(\textbf{r}) = \rho_n(\textbf{r}) + \rho_s(\textbf{r}),
\end{equation}
\begin{equation}
n_s(\textbf{r}) = \frac{\rho_s(\textbf{r})}{\rho(\textbf{r})}.
\end{equation}
This local superfluid fraction can be characterized by breaking up the estimator \eqref{sm_globsl} into local contributions.
$I_{cl}$ is written explicitly as
\begin{equation}
I_{cl} = \int d\textbf{r} \; \rho(\textbf{r}) r^2.
\end{equation}
Conversely, the measured moment of inertia is calculated by considering only the contribution of the normal component:
\begin{equation}
I = \int d\textbf{r} \; \rho_n(\textbf{r}) r^2 = \int d\textbf{r} \; \left[\rho(\textbf{r}) - \rho_s(\textbf{r})\right] r^2 = I_{cl} - \int d\textbf{r} \; \rho_s(\textbf{r}) r^2.
\end{equation}
This, by comparison with \eqref{eq_iicl}, leads us to
\begin{equation} \label{eq_nsicl}
\int d\textbf{r} \; \rho_s(\textbf{r}) r^2 = n_s I_{cl} = \frac{2 m}{\lambda \beta} \langle A_z^2 \rangle.
\end{equation}
A possible definition then suggests itself, as
\begin{equation}
\rho_s(\textbf{r}) = \frac{2 m}{\lambda \beta}\frac{ \langle A_z A_z(\textbf{r}) \rangle }{r^2};
\end{equation}
this will integrate to the appropriate amount as long as $ \int d\textbf{r} \; A_z(\textbf{r}) = A_z $. The most common choice \cite{kwo06} is to define
\begin{equation}
A_z(\textbf{r}) = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} \textbf{r} \times \textbf{r}_{i,j+1} \delta(\textbf{r}-\textbf{r}_{i,j}).
\end{equation}
The $r^2$ term in the denominator is sometimes named the ``local contribution to the classical moment of inertia''. This is not entirely correct, since the local contribution is actually $\rho(\textbf{r}) r^2$. It is possible to express the decomposition of the superfluid fraction so that the local moment of inertia becomes directly relevant. From \eqref{eq_nsicl}, we find that
\begin{equation}
n_s = \frac{1}{I_{cl}} \int d\textbf{r} \; \rho_s(\textbf{r}) r^2 = \frac{1}{I_{cl}} \int d\textbf{r} \; n_s(\textbf{r}) \rho(\textbf{r}) r^2
\end{equation}
meaning that the global superfluid fraction is given by the average of the local superfluid fraction, weighted by the local moment of inertia.
As we mentioned in the main text, the local estimator is noisy and difficult to sample, especially in the localized phase. We can, however, exploit the integral decomposition to define superfluid fractions in different regions of the system. Given a region $A$, we can write
\begin{equation}
n^A_s = \frac{1}{I^A_{cl}} \int_A d\textbf{r} \; \rho_s(\textbf{r}) r^2 = \frac{1}{I^A_{cl}} \int_A d\textbf{r} \; n_s(\textbf{r}) \rho(\textbf{r}) r^2,
\end{equation}
with the same definitions as before, but limiting the integration to the $A$ region. If the system is partitioned in a finite number of regions $A$, $B$..., we can then recover the global superfluid fraction as
\begin{equation}
n_s = \frac{I^A_{cl}}{I_{cl}} n^A_s + \frac{I^B_{cl}}{I_{cl}} n^B_s + \dots
\end{equation}
This is, again, an average of the superfluid fractions of each region, weighted by the respective moment of inertia. Crucially, this decomposition shows that a region can have a finite superfluid fraction, but still give a negligible contribution to the global $n_s$, if the associated moment of inertia is small. This is the case for regions close to the trap center.
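The weighted recombination can be illustrated numerically: computing the superfluid fraction zone by zone and averaging with moment-of-inertia weights reproduces the global value exactly. The profiles below are arbitrary model functions of our own choosing, used only to demonstrate the identity.

```python
import numpy as np

r = np.linspace(1e-3, 10.0, 4000)            # radial grid
dr = r[1] - r[0]
rho = np.exp(-r**2 / 8.0)                    # model density profile (arbitrary)
ns_local = 1.0 / (1.0 + r**2)                # model local superfluid fraction (arbitrary)
w = rho * r**2                               # local moment-of-inertia weight

inner = r < 3.0                              # region A: close to the trap center
outer = ~inner                               # region B: the rest

I_cl = np.sum(w) * dr
I_A, I_B = np.sum(w[inner]) * dr, np.sum(w[outer]) * dr
ns_A = np.sum((ns_local * w)[inner]) / np.sum(w[inner])
ns_B = np.sum((ns_local * w)[outer]) / np.sum(w[outer])

ns_global = np.sum(ns_local * w) / np.sum(w)
ns_zonal = (I_A / I_cl) * ns_A + (I_B / I_cl) * ns_B
print(abs(ns_global - ns_zonal) < 1e-9)      # -> True
```

Note how the inner region contributes little to the global $n_s$ even where $n_s(\textbf{r})$ is large, because its weight $\rho(\textbf{r}) r^2$ is small near the center.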
\myparagraph{Density profiles}
In the context of PIMC, two main ways are available to display the spatial configuration of the system.
The first is to select a system configuration at a given simulation step and to plot the position of each bead $\textbf{r}_i^j$, drawing a line between each pair of connected beads. The resulting figures are usually called snapshots. One advantage of this approach is that it makes it possible to display connections between different particles explicitly, and therefore to obtain a visual representation of coherence. Such snapshots are the ones that we plot in \figref[a-c]{phasediagram}.
The second method is to plot density profiles, which are obtained as averages over simulation steps, as well as over the positions of all the beads associated with each particle. In continuous space, the average is usually performed by separating the simulation area into bins and counting the number of beads in each at every simulation step. To obtain the density profiles shown in \figref[c-e]{geometry}, we counted particles in 360 bins along the circles drawn in \figref[a]{geometry}.
\myparagraph{Diffraction patterns}
The structure factor is a quantity directly related to the diffraction patterns that can be observed in scattering experiments. It is defined, for a particle density $n(\textbf{r}) = \sum_i \delta(\textbf{r}-\textbf{r}_i)$, as
\begin{equation}
\label{struct}
I(\textbf{q}) = \langle n(\textbf{q}) n(-\textbf{q}) \rangle,
\end{equation}
with
\begin{equation}
n(\textbf{q}) = \int d^2\textbf{r} \; e^{-i\textbf{q}\cdot\textbf{r}} n(\textbf{r}) = \sum_{j} e^{-i\textbf{q}\cdot\textbf{r}^j}
\end{equation}
the Fourier transform of the particle distribution \cite{cha95}. To measure this quantity, we compute the sum and average over beads and simulation steps, similarly to what we do for the density profiles. This is done for a set of wavevectors, on the vertices of a grid in $\textbf{q}$-space.
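For a single configuration, the measurement amounts to evaluating $n(\textbf{q})=\sum_j e^{-i\textbf{q}\cdot\textbf{r}^j}$ on a grid of wavevectors. The sketch below is our own minimal illustration (the Monte Carlo and bead averages are omitted); at $\textbf{q}=0$ every phase equals one, so $I(0)=N^2$.

```python
import numpy as np

def structure_factor(positions, q_grid):
    """I(q) = |sum_j exp(-i q . r_j)|^2 for a single configuration."""
    phases = np.exp(-1j * q_grid @ positions.T)   # shape (Nq, Nparticles)
    return np.abs(phases.sum(axis=1))**2

rng = np.random.default_rng(1)
pos = rng.uniform(-5.0, 5.0, size=(100, 2))       # 100 particles in 2-D
q1d = np.linspace(-2.0, 2.0, 5)                   # q-space grid, includes q = 0
qs = np.stack(np.meshgrid(q1d, q1d), axis=-1).reshape(-1, 2)

I_q = structure_factor(pos, qs)
i0 = np.where((qs == 0.0).all(axis=1))[0][0]      # index of q = (0, 0)
print(I_q[i0])                                     # -> 10000.0, i.e. N^2
```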
In \figref{sup_diffs}, we display some diffraction patterns. These are the same as those reported in \figref[f-h]{geometry} of the main text, with the addition of the values $V_0=0$ and $V_0/E_r=0.5$. As we could expect, the structure factor evolves from a single peak in the fluid phase to a typical quasicrystalline pattern.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{supp2.pdf}
\caption{Depletion of the global superfluid fraction at different values of $T$, in the non-interacting case. The dashed line is obtained analytically, while the dots are simulation results.}
\label{fig:sup_temps1}
\end{figure}
\begin{figure}[b!]
\includegraphics[width=\linewidth]{supp3.pdf}
\caption{Global superfluid fraction at different values of $T$, at $\tilde{g}=0.0217$. The lines are a guide for the eye.}
\label{fig:sup_temps2}
\end{figure}
\myparagraph{Temperature behavior of the non-interacting gas}
For free bosons in a harmonic trap, the temperature behavior of the global superfluid fraction can be predicted by analytical estimates. First, $n_s$ is related to the number of particles in the condensate, from considerations on its moment of inertia \cite{sch00}:
\begin{equation} \label{sm_2dsuper}
n_s(T) \simeq \frac{1}{1 + \frac{N - \langle N_0 \rangle}{\langle N_0 \rangle} \frac{2k_BT}{\hbar\omega}} ,
\end{equation}
where $\langle N_0 \rangle$ is the number of particles in the condensate at temperature $T$. This quantity can be directly computed from the energy density of states, to give
\begin{equation} \label{sm_condensation}
\langle N_0 \rangle = N - \int_0^{\infty} d\epsilon \rho(\epsilon) n(\epsilon) = N - \frac{k_B^2T^2}{\hbar^2\omega^2} \frac{\pi^2}{6},
\end{equation}
$\rho(\epsilon)$ being the energy density of states. Plugging \eqref{sm_condensation} into \eqref{sm_2dsuper}, we obtain a formula for the global superfluid fraction as a function of the temperature, which we plot as a dashed line in \figref{sup_temps1}. The dots are values of $n_s$ estimated from our simulations, which show perfect agreement with the analytical prediction.
In \figref[b]{globalns}, we showed plots of $n_s$ as a function of $V_0$, at different temperatures, for the interacting gas. In \figref{sup_temps2}, instead, we keep $V_0$ fixed and plot $n_s$ against $T$.
\end{document}
\section*{Acknowledgements}
We are grateful to A. Galajinsky for the comments on the manuscript. We also thank E.A. Ivanov and M.S. Plyushchay for the correspondence. This work was supported by the Russian Science Foundation, grant No 19-11-00005.
\fontsize{10}{13}\selectfont
\section{Introduction}
Let $G$ be a reductive connected algebraic group over $\mathbb C$,
endowed with a Borel subgroup $B$ and a maximal torus $T\subset B$.
Irreducible rational representations of $G$ are classified by their
highest weight: to the dominant integral weight $\lambda$ corresponds
the irreducible representation~$V(\lambda)$.
Several constructions allow to define nice bases of $V(\lambda)$, for
instance:
\begin{itemize}
\item
From the study of quantum groups, Lusztig \cite{Lusztig90} defined his
canonical basis in the quantum deformation $V_q(\lambda)$; taking the
classical limit $q=1$ provides a basis of $V(\lambda)$. For convenience,
we will in fact use the dual canonical of this basis, aka Kashiwara's
upper global basis~\cite{Kashiwara}.
\item
The geometric Satake correspondence \cite{Lusztig81,MirkovicVilonen}
realizes $V(\lambda)$ as the intersection cohomology of certain Schubert
varieties $\overline{\Gr^\lambda}$ in the affine Grassmannian of the
Langlands dual of $G$. The fundamental classes of the Mirković-Vilonen
cycles form a basis of this cohomology space, hence of $V(\lambda)$.
\end{itemize}
These two constructions can be extended to tensor products
$V(\lambda_1)\otimes\cdots\otimes V(\lambda_r)$, see chapter~27 in
\cite{Lusztig93} for the former and sect.~2.4 in \cite{GoncharovShen} for
the latter. These two bases share several nice properties, for instance
both are compatible with the isotypical filtration and with restriction
to standard Levi subgroups; also both are difficult to compute.
In general they differ: an example with $G=\SL_3(\mathbb C)$ and $r=12$
is given in~\cite{FontaineKamnitzerKuperberg}; examples for $r=1$
(hence for irreducible representations) are given in
\cite{BaumannKamnitzerKnutson} for $G=\SO_8(\mathbb C)$ and
$G=\SL_6(\mathbb C)$.
In type $A_1$, that is for $G=\SL_2(\mathbb C)$, the dual canonical basis
was computed by Frenkel and Khovanov~\cite{FrenkelKhovanov}. The aim of
this paper is to do the analog for the Mirković-Vilonen basis.
\trivlist
\item[\hskip\labelsep{\bfseries Theorem.}]
\itshape
For $G=\SL_2(\mathbb C)$, the Mirković-Vilonen basis of a tensor product
$V(\lambda_1)\otimes\cdots\otimes V(\lambda_r)$ coincides with the
dual canonical basis of this space specialized at $q=1$.
\upshape
\endtrivlist
This result is trivial in the case $r=1$ of an irreducible representation,
but the general case seems less obvious. We must also point out that in
truth, this result holds only after reversal of the order of the tensor
factors, but this defect is merely caused by a difference in the conventions.
In this case $G=\SL_2(\mathbb C)$, each dominant weight is a nonnegative
multiple of the fundamental weight $\varpi$. Then $V(n\varpi)$ has
dimension $n+1$ and is the Cartan component, i.e.\ the top step in the
isotypical filtration, of $V(\varpi)^{\otimes n}$. We can thus regard
$V(n_1\varpi)\otimes\cdots\otimes V(n_r\varpi)$ as a quotient of
$V(\varpi)^{\otimes(n_1+\cdots+n_r)}$. Since both the dual canonical
basis and the Mirković-Vilonen basis behave well under this quotient
operation, it is enough to establish the theorem in the particular case
of the tensor power $V(\varpi)^{\otimes n}$.
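The smallest instance of this quotient operation is the Clebsch-Gordan
decomposition (a standard fact, recalled here for orientation):

```latex
% Clebsch-Gordan in the smallest case: the Cartan component of
% V(\varpi) \otimes V(\varpi) is V(2\varpi), and
\[
  V(\varpi)\otimes V(\varpi) \;\cong\; V(2\varpi)\,\oplus\, V(0);
\]
% iterating, V(n\varpi) is the top quotient of V(\varpi)^{\otimes n}.
```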
This paper is organized in the following way. In sect.~\ref{se:CombLin},
we define a basis of $V(\varpi)^{\otimes n}$ by a simple recursive formula
and argue that it matches Frenkel and Khovanov's characterization of the
dual canonical basis. In sect.~\ref{se:MVBasis}, we recall the definition of
the Mirković-Vilonen basis in tensor products of irreducible representations
and prove its good behavior under the quotient operation mentioned in
the previous paragraph. In sect.~\ref{se:Geometry}, we show that the
Mirković-Vilonen basis of $V(\varpi)^{\otimes n}$ satisfies the recursive
formula from sect.~\ref{se:CombLin} (this is the difficult part of the paper).
This work is based on the PhD thesis of the second author~\cite{Demarais}.
However, we rewrote the proof to make it more accessible and to remove
ambiguities.
While preparing this paper, we learned that Pak-Hin Li independently
computed the Mirković-Vilonen basis for the tensor product of two
irreducible representations of $\SL_2(\mathbb C)$.
\textit{Acknowledgements.}
P.B.'s research is supported by the ANR project GeoLie,
ANR-15-CE40-0012.
\section{Combinatorics and linear algebra}
\label{se:CombLin}
Let $\mathbb K$ be a field and let $V$ be the vector space $\mathbb K^2$.
In this section, we define in an elementary manner an explicit basis in
each tensor power $V^{\otimes n}$ that has nice properties with respect
to the natural action of $\SL_2(\mathbb K)$.
\subsection{Words}
\label{ss:Words}
Given a nonnegative integer $n$, we set $\mathscr C_n=\{+,-\}^n$.
We regard an element in $\mathscr C_n$ as a word of length $n$ on the
alphabet $\{+,-\}$. Concatenation of words endows
$\mathscr C=\bigcup_{n\geq0}\mathscr C_n$ with the structure of a monoid.
The weight of a word $w\in\mathscr C$, denoted by $\wt(w)$, is the number
of letters $+$ minus the number of letters $-$ that $w$ contains. A word
$w=w(1)w(2)\cdots w(n)$ is said to be semistable if its weight is $0$ and
if each initial segment $w(1)\cdots w(j)$ has nonpositive weight.
Words are best understood through a representation as planar paths,
where letters $+$ and $-$ are depicted by upward and downward segments,
respectively. A word is semistable if and only if the endpoints of its
graphical representation lie on the same horizontal line and the whole
path stays weakly below this line.
Any word $w$ can be uniquely factorized as a concatenation
$$w_{-r}+\cdots+w_{-1}+w_0-w_1-\cdots-w_s$$
where $r$ and $s$ are nonnegative integers and where the words
$w_{-r}$, \dots, $w_s$ are semistable. The $r$ letters $+$ and
the $s$ letters $-$ that do not occur in the semistable words are
called significant; informally, a letter $+$ is significant if it
marks the first time a new highest altitude is reached.
\begin{other*}{Example}
The following picture illustrates the factorization of the word
$$w=\color{orange}-+\color{black}+\color{orange}-+-+\color{black}++
\color{orange}--+--+++\color{black}+-\color{orange}-+\color{black}-.$$
\vspace*{-18pt}
This word has length $22$ and weight $2$. Here $(r,s)=(4,2)$ and the
words $w_{-2}$, $w_0$ and $w_2$ are empty. Significant letters are
written in black.
\begin{center}
\begin{tikzpicture}[scale=0.5]
\draw[very thin,color=gray!40] (-0.5,-1.5) grid (22.5,4.5);
\draw[very thick,orange] (0,0)--(1,-1)--(2,0);
\draw[very thick] (2,0)--(3,1);
\draw[very thick,orange] (3,1)--(4,0)--(5,1)--(6,0)--(7,1);
\draw[very thick] (7,1)--(9,3);
\draw[very thick,orange] (9,3)--(10,2)--(11,1)--(12,2)--(13,1)--(14,0)--(15,1)--
(16,2)--(17,3);
\draw[very thick] (17,3)--(18,4)--(19,3);
\draw[very thick,orange] (19,3)--(20,2)--(21,3);
\draw[very thick] (21,3)--(22,2);
\end{tikzpicture}
\end{center}
\end{other*}
Given a word $w$, we denote by $\mathscr P(w)$ the set of words obtained
from $w$ by changing a single significant letter $+$ into a $-$. With our
previous notation, $\mathscr P(w)$ has $r$ elements.
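The factorization and the set $\mathscr P(w)$ are easy to compute by
machine. The following Python sketch is our own illustration (all names
are ours, not the paper's): a word is a string over \texttt{+}/\texttt{-},
and significant letters are detected through the altitude sequence.

```python
# Illustrative sketch (not from the paper): a letter "+" is significant
# when it reaches a new strict maximum of the altitude; dually, a letter
# "-" is significant when the altitude just before it exceeds every
# later altitude.

def altitudes(w):
    """Partial weights a_0 = 0, a_1, ..., a_n of the word w."""
    a = [0]
    for c in w:
        a.append(a[-1] + (1 if c == "+" else -1))
    return a

def significant(w):
    """0-indexed positions of the significant letters + and - of w."""
    a, n = altitudes(w), len(w)
    plus = [i for i in range(n)
            if w[i] == "+" and a[i + 1] > max(a[: i + 1])]
    minus = [i for i in range(n)
             if w[i] == "-" and a[i] > max(a[i + 1:])]
    return plus, minus

def P(w):
    """The set P(w): turn one significant letter + of w into a -."""
    return {w[:i] + "-" + w[i + 1:] for i in significant(w)[0]}
```

On the word of the example above, this recovers length $22$, weight $2$
and $(r,s)=(4,2)$.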
\subsection{Bases}
\label{ss:Bases}
Let $(x_+,x_-)$ be the standard basis of the vector space $V$.
Each word $w=w(1)w(2)\cdots w(n)$ in $\mathscr C_n$ defines an element
$x_w=x_{w(1)}\otimes\cdots\otimes x_{w(n)}$ in the $n$-th tensor power of
$V$. The family $(x_w)_{w\in\mathscr C_n}$ is a basis of $V^{\otimes n}$.
We define another family of elements $(y_w)_{w\in\mathscr C}$ in
the tensor algebra of $V$ by the convention $y_\varnothing=1$ and the
recursive formulas
$$y_{+w}=x_+\otimes y_w\quad\text{and}\quad
y_{-w}=x_-\otimes y_w-\sum_{v\in\mathscr P(w)}x_+\otimes y_v.$$
Rewriting the latter as
\begin{equation}
\label{eq:DefYw}
x_+\otimes y_w=y_{+w}\quad\text{and}\quad
x_-\otimes y_w=y_{-w}+\sum_{v\in\mathscr P(w)}y_{+v}
\end{equation}
one easily shows by induction on the length of words that each element
$x_w$ can be expressed as a linear combination of elements $y_v$, where
$v$ runs over the words with the same length and weight as~$w$. It
follows in particular that for each nonnegative integer $n$, the family
$(y_w)_{w\in\mathscr C_n}$ spans $V^{\otimes n}$, hence is a basis of
this space.
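As a sanity check, the recursive definition can be implemented directly.
The sketch below is our own (hypothetical naming): an element of the
tensor algebra is a dictionary mapping words to integer coefficients on
the basis $(x_v)$, and the recursion recomputes, e.g., $y_{-+}=x_{-+}-x_{+-}$.

```python
# Hedged illustration (not the paper's code): y_w expanded on the basis
# (x_v), encoded as {word: coefficient}.

def altitudes(w):
    a = [0]
    for c in w:
        a.append(a[-1] + (1 if c == "+" else -1))
    return a

def P(w):
    """Words obtained by turning one significant + of w into a -."""
    a = altitudes(w)
    return {w[:i] + "-" + w[i + 1:] for i in range(len(w))
            if w[i] == "+" and a[i + 1] > max(a[: i + 1])}

def y(w):
    """Expansion of y_w on the basis (x_v), via the recursive formulas."""
    if not w:
        return {"": 1}
    head, tail = w[0], w[1:]
    res = {head + v: c for v, c in y(tail).items()}
    if head == "-":              # y_{-w} = x_- (x) y_w - sum x_+ (x) y_v
        for v in P(tail):
            for u, c in y(v).items():
                res["+" + u] = res.get("+" + u, 0) - c
    return {k: c for k, c in res.items() if c}
```

The value of $y_{--++}$ computed this way agrees with the one obtained
from Proposition~\ref{pr:CaracYw} by inserting $-+$ in the middle of $-+$.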
\begin{proposition}
\label{pr:CaracYw}
The family $(y_w)_{w\in\mathscr C}$ is characterized by the following
conditions:
\vspace{-12pt}
\begin{enumerate}
\item
\label{it:PrCYa}
If $w$ is of the form $+\cdots+-\cdots-$, then $y_w=x_w$.
\item
\label{it:PrCYb}
$y_{-+}=x_{-+}-x_{+-}$.
\item
\label{it:PrCYc}
Let $u$ be a semistable word and let $(w',w'')\in\mathscr C_{n'}
\times\mathscr C_{n''}$. Write
$y_{w'w''}=\sum\limits_i\;y'_i\otimes y''_i$ with
$(y'_i,y''_i)\in V^{\otimes n'}\times V^{\otimes n''}$.
Then $y_{w'uw''}=\sum\limits_i\;y'_i\otimes y_u\otimes y''_i$.
\end{enumerate}
\end{proposition}
\begin{proof}
Statements~\ref{it:PrCYa} and~\ref{it:PrCYb} follow straightforwardly
from the definition of the elements $y_w$. We prove~\ref{it:PrCYc} by
induction on the length of $w'uw''$. Discarding a trivial case, we
assume that $u$ is not the empty word.
Suppose first that $w'$ is the empty word. Let us write $u$ as a
concatenation $-u'+u''$ where $u'$ and $u''$ are (possibly empty)
semistable words. Equation~\eqref{eq:DefYw} gives
$$x_-\otimes y_{w''}=y_{-w''}+\sum_{v\in\mathscr P(w'')}y_{+v}.$$
Applying the induction hypothesis to the semistable word $u''$ and the
pairs $(-,w'')$ and $(+,v)$, for each $v\in\mathscr P(w'')$, we obtain
$$x_-\otimes y_{u''}\otimes y_{w''}=
y_{-u''w''}+\sum_{v\in\mathscr P(w'')}y_{+u''v}.$$
Since $x_-\otimes y_{u''}=y_{-u''}$, we get
$$y_{-u''}\otimes y_{w''}=y_{-u''w''}+\sum_{v\in\mathscr P(w'')}y_{+u''v}$$
and applying once more the induction hypothesis, this time to the
semistable word $u'$ and the pairs $(\varnothing,-u'')$,
$(\varnothing,-u''w'')$ and $(\varnothing,+u''v)$, we arrive at
\begin{equation}
\label{eq:PrCY1}
y_{u'-u''}\otimes y_{w''}=y_{u'-u''w''}
+\sum_{v\in\mathscr P(w'')}y_{u'+u''v}.
\end{equation}
Starting now with
$$x_+\otimes y_{w''}=y_{+w''}$$
we arrive by similar transformations at
\begin{equation}
\label{eq:PrCY2}
y_{u'+u''}\otimes y_{w''}=y_{u'+u''w''}.
\end{equation}
Since $\mathscr P(u'+u'')=\{u'-u''\}$, we have by definition
\begin{equation}
\label{eq:PrCY3}
y_u=x_-\otimes y_{u'+u''}-x_+\otimes y_{u'-u''}.
\end{equation}
Likewise, $\mathscr P(u'+u''w'')=\{u'-u''w''\}\cup\{u'+u''v\mid
v\in\mathscr P(w'')\}$ leads to
\begin{equation}
\label{eq:PrCY4}
y_{uw''}=x_-\otimes y_{u'+u''w''}-x_+\otimes y_{u'-u''w''}
-\sum_{v\in\mathscr P(w'')}x_+\otimes y_{u'+u''v}.
\end{equation}
Combining \eqref{eq:PrCY1}--\eqref{eq:PrCY4}, we obtain the desired equation
$$y_{uw''}=y_u\otimes y_{w''}.$$
We now address the case where $w'$ is not empty. Suppose that the
first letter of $w'$ is a $+$ and write $w'=+\,\widetilde w'$. Then
$$y_{w'w''}=x_+\otimes y_{\widetilde w'w''}\quad\text{and}\quad
y_{w'uw''}=x_+\otimes y_{\widetilde w'uw''}$$
and the result follows from the induction hypothesis applied to the
semistable word $u$ and the pair $(\widetilde w',w'')$.
If on the contrary the first letter of $w'$ is a $-$, then we write
$w'=-\,\widetilde w'$. Since $u$ is semistable, its insertion in the
middle of a word does not add or remove any significant letter; in
particular, the set of significant letters in $\widetilde w'w''$ is
in natural bijection with the set of significant letters in
$\widetilde w'uw''$. This observation leads to a bijection from
$\mathscr P(\widetilde w'w'')$ onto $\mathscr P(\widetilde w'uw'')$,
which splits a word $v$ into two subwords $v'\in\mathscr C_{n'-1}$ and
$v''\in\mathscr C_{n''}$ and then returns $v'uv''$. With this notation,
$$y_{w'w''}=x_-\otimes y_{\widetilde w'w''}-
\sum_{v\in\mathscr P(\widetilde w'w'')}x_+\otimes y_{v'v''}$$
and
$$y_{w'uw''}=x_-\otimes y_{\widetilde w'uw''}-
\sum_{v\in\mathscr P(\widetilde w'w'')}x_+\otimes y_{v'uv''}.$$
Again the desired equation follows from the induction hypothesis
applied to the semistable word $u$ and the pairs $(\widetilde w',w'')$
and $(v',v'')$, for each $v\in\mathscr P(\widetilde w'w'')$.
Condition~\ref{it:PrCYc} computes $y_{w'uw''}$ from the data of
$y_{w'w''}$ and $y_u$ whenever $u$ is semistable;
condition~\ref{it:PrCYa} provides the value of $y_w$ when $w$ is of
the form $+\cdots+-\cdots-$; and condition~\ref{it:PrCYb} provides
the value of $y_{-+}$. Noting that any word in $\mathscr C$ can
be obtained from a word of the form $+\cdots+-\cdots-$ by repeatedly
inserting the semistable word $-+$ (possibly at non-disjoint positions),
we conclude that conditions \ref{it:PrCYa}--\ref{it:PrCYc} fully
characterize the family $(y_w)_{w\in\mathscr C}$.
\end{proof}
As a consequence of this proposition, we see that if
$w_{-k}+\cdots+w_{-1}+w_0-w_1-\cdots-w_\ell$ is the factorization of a
word $w$, as in section~\ref{ss:Words}, then
\begin{equation}
\label{eq:FacYw}
y_w=y_{w_{-k}}\otimes x_+\otimes\cdots\otimes x_+\otimes
y_{w_{-1}}\otimes x_+\otimes y_{w_0}\otimes x_-\otimes y_{w_1}\otimes
x_-\otimes \cdots\otimes x_-\otimes y_{w_\ell}.
\end{equation}
\begin{other*}{Remark}
The transition matrix between the two bases $(x_w)_{w\in\mathscr C_n}$
and $(y_w)_{w\in\mathscr C_n}$ of $V^{\otimes n}$ is unitriangular: if
we write
$$x_w=\sum_{v\in\mathscr C_n}n_{w,v}\,y_v$$
then the diagonal coefficient $n_{w,w}$ is equal to one and the
entry $n_{w,v}$ is zero except when the path representing $v$ lies
above the path representing $w$. In addition, all the coefficients
$n_{w,v}$ are nonnegative integers. The proof of these facts is
left to the reader.
\end{other*}
\subsection{Representations}
\label{ss:Reps}
In this section, we regard $V$ as the defining representation of
$\SL_2(\mathbb K)$. From now on, we assume that $\mathbb K$ has
characteristic zero. We denote by $(e,h,f)$ the usual basis of
$\mathfrak{sl}_2(\mathbb K)$.
Fix a nonnegative integer $n$. Given a word $w\in\mathscr C_n$,
we denote by $\varepsilon(w)$ (respectively, $\varphi(w)$) the number
of significant letters $-$ (respectively, $+$) that $w$ contains.
Thus, in the notation of section \ref{ss:Words}, $\varepsilon(w)=s$
and $\varphi(w)=r$. If $\varepsilon(w)>0$, we can change in $w$ the
leftmost significant letter $-$ into a $+$; the resulting word is
denoted by $\tilde e(w)$. Likewise, if $\varphi(w)>0$, we can change
in $w$ the rightmost significant letter $+$ into a $-$; the resulting
word is denoted by $\tilde f(w)$. If these operations are not feasible,
then $\tilde e(w)$ or $\tilde f(w)$ is defined to be $0$. Endowed with
the maps $\wt$, $\varepsilon$, $\varphi$, $\tilde e$, $\tilde f$, the
set $\mathscr C_n$ identifies with the crystal\footnote{In fact, we
here use the opposite of the usual tensor product of crystals.} of the
$\mathfrak{sl}_2(\mathbb K)$-module~$V^{\otimes n}$.
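The crystal operators admit an equally direct implementation; in the
sketch below (names and conventions are ours), the element $0$ of the
crystal is represented by \texttt{None}.

```python
# Hedged sketch of the crystal structure on C_n (our own code).

def altitudes(w):
    a = [0]
    for c in w:
        a.append(a[-1] + (1 if c == "+" else -1))
    return a

def significant(w):
    a, n = altitudes(w), len(w)
    plus = [i for i in range(n)
            if w[i] == "+" and a[i + 1] > max(a[: i + 1])]
    minus = [i for i in range(n)
             if w[i] == "-" and a[i] > max(a[i + 1:])]
    return plus, minus

def eps(w):                 # number of significant letters -
    return len(significant(w)[1])

def phi(w):                 # number of significant letters +
    return len(significant(w)[0])

def tilde_e(w):
    """Change the leftmost significant - into a +, or return None (= 0)."""
    minus = significant(w)[1]
    return w[:minus[0]] + "+" + w[minus[0] + 1:] if minus else None

def tilde_f(w):
    """Change the rightmost significant + into a -, or return None (= 0)."""
    plus = significant(w)[0]
    return w[:plus[-1]] + "-" + w[plus[-1] + 1:] if plus else None
```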
We denote by $\ell(w)=\varepsilon(w)+\varphi(w)$ the number of significant
letters in a word $w\in\mathscr C_n$; thus $w$ is semistable if and only
if $\ell(w)=0$. For each $p\in\{0,\ldots,n\}$, we denote by
$(V^{\otimes n})_{\leq p}$ the subspace of $V^{\otimes n}$
spanned by the elements $y_w$ such that $\ell(w)\leq p$. We agree that
$(V^{\otimes n})_{\leq-1}=\{0\}$.
\begin{proposition}
\label{pr:RepYw}
The basis $(y_w)_{w\in\mathscr C_n}$ of $V^{\otimes n}$ enjoys the
following properties.
\vspace{-12pt}
\begin{enumerate}
\item
\label{it:PrRYa}
For each $w\in\mathscr C_n$, we have
$$e\cdot y_w\equiv\varepsilon(w)\;y_{\tilde e(w)}\quad\text{and}\quad
f\cdot y_w\equiv\varphi(w)\;y_{\tilde f(w)}$$
modulo terms in $(V^{\otimes n})_{\leq\ell(w)-1}$.
\item
\label{it:PrRYb}
For each $p\in\{0,\ldots,n\}$, the subspace $(V^{\otimes n})_{\leq p}$
is a subrepresentation of $V^{\otimes n}$, and the quotient
$(V^{\otimes n})_{\leq p}/(V^{\otimes n})_{\leq p-1}$ is an isotypical
representation, sum of simple $\mathfrak{sl}_2(\mathbb K)$-modules of
dimension $p+1$.
\item
\label{it:PrRYc}
The elements $y_w$ with $w$ semistable form a basis of the space of
invariants $(V^{\otimes n})^{\SL_2(\mathbb K)}$.
\end{enumerate}
\end{proposition}
\trivlist
\item[\hskip\labelsep{\itshape Sketch of proof.}]
\upshape
We first note that any semistable word can be obtained from the empty
word by repeatedly inserting the word $-+$ and that $y_{-+}$ is
invariant under the action of $\SL_2(\mathbb K)$ on $V^{\otimes2}$.
From Proposition~\ref{pr:CaracYw}~\ref{it:PrCYc}, it then follows that
any element $y_w$ with $w$ semistable is $\SL_2(\mathbb K)$-invariant.
Using now \eqref{eq:FacYw}, we reduce the proof of statement~\ref{it:PrRYa}
to the case where $w$ is of the form $+\cdots+-\cdots-$ (though possibly
for a smaller $n$), which is easily dealt with.
Statement~\ref{it:PrRYb} is a direct consequence of
statement~\ref{it:PrRYa} and implies that $(V^{\otimes n})_{\leq0}$
is the subspace of invariants $(V^{\otimes n})^{\SL_2(\mathbb K)}$,
an assertion equivalent to statement~\ref{it:PrRYc}.
\nobreak$\square$
\endtrivlist
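For small words, Proposition~\ref{pr:RepYw} can also be verified
numerically. In the sketch below (ours), $e$ sends $x_-$ to $x_+$ and $f$
sends $x_+$ to $x_-$ slot by slot (Leibniz rule); the invariance of
$y_{-+}$ and $y_{--++}$, and the formulas $e\cdot y_{+-}=y_{++}$,
$f\cdot y_{+-}=y_{--}$ (exact in this small case, with no lower terms),
come out of the computation.

```python
# Hedged numerical check (our own code): the sl_2 action on V^{(x) n}
# in the x-basis, on vectors encoded as {word: coefficient}.

def altitudes(w):
    a = [0]
    for c in w:
        a.append(a[-1] + (1 if c == "+" else -1))
    return a

def P(w):
    a = altitudes(w)
    return {w[:i] + "-" + w[i + 1:] for i in range(len(w))
            if w[i] == "+" and a[i + 1] > max(a[: i + 1])}

def y(w):
    if not w:
        return {"": 1}
    res = {w[0] + v: c for v, c in y(w[1:]).items()}
    if w[0] == "-":
        for v in P(w[1:]):
            for u, c in y(v).items():
                res["+" + u] = res.get("+" + u, 0) - c
    return {k: c for k, c in res.items() if c}

def act(op, vec):
    """Apply e (op="e") or f (op="f") to a vector in the x-basis."""
    out = {}
    for w, c in vec.items():
        for i, ch in enumerate(w):
            if (op, ch) in {("e", "-"), ("f", "+")}:
                u = w[:i] + ("+" if op == "e" else "-") + w[i + 1:]
                out[u] = out.get(u, 0) + c
    return {k: c for k, c in out.items() if c}
```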
The basis $(y_w)_{w\in\mathscr C_n}$ of~$V^{\otimes n}$ is even
more remarkable than what Proposition~\ref{pr:RepYw} claims.
Comparing Frenkel and Khovanov's work (\cite{FrenkelKhovanov},
Theorem~1.9) with Proposition~\ref{pr:CaracYw}, we indeed get:
\begin{theorem}
\label{th:FrenKhov}
$(y_w)_{w\in\mathscr C_n}$ is the dual canonical basis of
$V^{\otimes n}$ specialized at $q=1$.
\end{theorem}
As mentioned in the introduction, this result actually holds only after
reversal of the order of the tensor factors.
\section{The Mirković-Vilonen basis}
\label{se:MVBasis}
In this section, we consider a connected reductive group $G$ over
$\mathbb C$ and explain the definition of the Mirković-Vilonen basis
(from now on: MV basis) in a tensor product
$V(\bm\lambda)=V(\lambda_1)\otimes\cdots\otimes V(\lambda_n)$ of
irreducible representations of $G$; references for the material
presented here are \cite{MirkovicVilonen} and sect.~2.4 in
\cite{GoncharovShen}. We recall the recipe from
\cite{BaumannGaussentLittelmann} to compute the transition matrix
between the MV basis of $V(\bm\lambda)$ and the tensor product of
the MV bases of the factors $V(\lambda_1)$, \dots, $V(\lambda_n)$.
We state and prove a compatibility property of the MV bases with
tensor products of projections onto Cartan components.
\subsection{Definition of the basis}
\label{ss:DefBasis}
We choose a maximal torus $T$ and a Borel subgroup $B$ of $G$ such that
$T\subset B$. The Langlands dual $G^\vee$ of $G$ comes with a maximal
torus $T^\vee$ and a Borel subgroup $B^\vee$. We denote by $N^{-,\vee}$
the unipotent radical of the Borel subgroup of $G^\vee$ opposite to
$B^\vee$ with respect to $T^\vee$. We denote by $\Lambda$ the weight
lattice of $T$ and by $\Lambda^+\subset\Lambda$ the set of dominant
weights. Let $\leq$ be the dominance order on $\Lambda$; positive elements
with respect to $\leq$ are sums of positive roots. We view the half-sum
of all positive coroots as a linear form $\rho:\Lambda\to\mathbb Q$.
The affine Grassmannian of $G^\vee$ is the homogeneous space
$\Gr=G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)/
G^\vee(\mathbb C[z])$, where $z$ is an indeterminate.
It is endowed with the structure of an ind-variety.
Each weight $\lambda\in\Lambda$ gives a point $z^\lambda$ in
$T^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)$, whose image in $\Gr$
is denoted by $L_\lambda$. The $G^\vee(\mathbb C[z])$-orbit through
$L_\lambda$ in $\Gr$ is denoted by $\Gr^\lambda$; this is a smooth
connected variety of dimension $2\rho(\lambda)$. The Cartan decomposition
implies that
$$\Gr=\bigsqcup_{\lambda\in\Lambda^+}\Gr^\lambda;\quad\text{moreover}\quad
\overline{\Gr^\lambda}=\bigsqcup_{\substack{\mu\in\Lambda^+\\[2pt]
\mu\leq\lambda}}\Gr^\mu.$$
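For orientation in the case relevant to this paper, $G=\SL_2(\mathbb C)$
(so $G^\vee=\mathrm{PGL}_2(\mathbb C)$), these statements specialize as
follows; this is a standard computation, spelled out here under our
conventions.

```latex
% For G = SL_2(C), the positive root is \alpha = 2\varpi and
% \rho(n\varpi) = n/2, whence
\[
  \dim \Gr^{n\varpi} \;=\; 2\rho(n\varpi) \;=\; n,
  \qquad
  \overline{\Gr^{n\varpi}}
  \;=\; \bigsqcup_{\substack{0\leq m\leq n\\ m\equiv n \bmod 2}}
        \Gr^{m\varpi},
\]
% the parity condition expressing that n\varpi - m\varpi must be a
% nonnegative multiple of the positive root \alpha.
```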
The geometric Satake correspondence identifies the irreducible
representation of highest weight $\lambda$ with the intersection
cohomology of $\overline{\Gr^\lambda}$ with trivial local system of
coefficients:
$$V(\lambda)=IH\Bigl(\overline{\Gr^\lambda},\underline{\mathbb C}\Bigr).$$
Let $n$ be a positive integer. The group $G^\vee(\mathbb C[z])^n$ acts on
the space $G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)^n$ by
$$(h_1,\ldots,h_n)\cdot(g_1,\ldots,g_n)=(g_1h_1^{-1},h_1g_2h_2^{-1},
\ldots,h_{n-1}g_nh_n^{-1})$$
where $(h_1,\ldots,h_n)\in G^\vee(\mathbb C[z])^n$ and
$(g_1,\ldots,g_n)\in G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)^n$.
The quotient is called the $n$-fold convolution variety and is denoted
by $\Gr_n$. We will use the customary notation
$$\Gr_n=G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)\,
\times^{G^\vee(\mathbb C[z])}\;\cdots\,\times^{G^\vee(\mathbb C[z])}\;
G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)\,/\,G^\vee(\mathbb C[z])$$
to indicate this construction and denote the image in $\Gr_n$ of a tuple
$(g_1,\ldots,g_n)$ by $[g_1,\ldots,g_n]$. Then $\Gr_n$ is endowed with the
structure of an ind-variety. One may note that $\Gr_1$ is just the affine
Grassmannian $\Gr$. We define a map $m_n:\Gr_n\to\Gr$ by
$m_n([g_1,\ldots,g_n])=[g_1\ldots g_n]$.
For each tuple $\bm\lambda=(\lambda_1,\ldots,\lambda_n)$ in $\Lambda^n$,
we set
$$|\bm\lambda|=\lambda_1+\cdots+\lambda_n.$$
For each $G^\vee(\mathbb C[z])$-invariant subset $K\subset\Gr$, we
denote by $\widehat K$ the preimage of $K$ under the quotient map
$G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)\to\Gr$.
Given $\bm\lambda=(\lambda_1,\ldots,\lambda_n)$ in
$(\Lambda^+)^n$, we define
$$\Gr_n^{\bm\lambda}=\widehat{\Gr^{\lambda_1}}\,
\times^{G^\vee(\mathbb C[z])}\;\cdots\,
\times^{G^\vee(\mathbb C[z])}\;
\widehat{\Gr^{\lambda_n}}\,/\,G^\vee(\mathbb C[z]).$$
This is a subset of $\Gr_n$ and the geometric Satake correspondence
identifies the tensor product
$$V(\bm\lambda)=V(\lambda_1)\otimes\cdots\otimes V(\lambda_n)$$
with the intersection cohomology of $\overline{\Gr_n^{\bm\lambda}}$.
Given $\mu\in\Lambda$, the
$N^{-,\vee}\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)$-orbit through
$L_\mu$ is denoted by $T_\mu$; this is a locally closed sub-ind-variety
of $\Gr$. The Iwasawa decomposition implies that
$$\Gr=\bigsqcup_{\mu\in\Lambda}T_\mu;\quad\text{moreover}\quad
\overline{T_\mu}=\bigsqcup_{\substack{\nu\in\Lambda\\[2pt]
\nu\geq\mu}}T_\nu.$$
For each $(\lambda,\mu)\in\Lambda^+\times\Lambda$, the intersection
$\overline{\Gr^\lambda}\cap T_\mu$ (if non-empty) has pure dimension
$\rho(\lambda-\mu)$. Using this fact, Mirković and Vilonen set up the
geometric Satake correspondence so that the weight-$\mu$ subspace of
$V(\lambda)$ identifies with the top-dimensional Borel-Moore homology
of $\Gr^\lambda\cap T_\mu$ (\cite{MirkovicVilonen},~Corollary 7.4):
$$V(\lambda)_\mu=H^\BM_{2\rho(\lambda-\mu)}
\Bigl(\Gr^\lambda\cap T_\mu\Bigr).$$
We denote by $\mathscr Z(\lambda)_\mu$ the set of irreducible components
of $\overline{\Gr^\lambda}\cap T_\mu$. If $Z\in\mathscr Z(\lambda)_\mu$,
then $Z\cap\Gr^\lambda$ is an irreducible component of
$\Gr^\lambda\cap T_\mu$, whose fundamental class in Borel-Moore homology
is denoted by $\langle Z\rangle$. The classes $\langle Z\rangle$, for
$Z\in\mathscr Z(\lambda)_\mu$, form a basis of $V(\lambda)_\mu$.
Likewise, for each $(\bm\lambda,\mu)\in(\Lambda^+)^n
\times\Lambda$, the intersection $\overline{\Gr_n^{\bm\lambda}}\cap
(m_n)^{-1}(T_\mu)$ has pure dimension $\rho(|\bm\lambda|-\mu)$.
We can then identify
$$V(\bm\lambda)_\mu=H^\BM_{2\rho(|\bm\lambda|-\mu)}
\Bigl(\Gr_n^{\bm\lambda}\cap(m_n)^{-1}(T_\mu)\Bigr).$$
We denote by $\mathscr Z(\bm\lambda)_\mu$ the set of irreducible
components of $\overline{\Gr_n^{\bm\lambda}}\cap(m_n)^{-1}(T_\mu)$.
If $\mathbf Z\in\mathscr Z(\bm\lambda)_\mu$, then
$\mathbf Z\cap\Gr_n^{\bm\lambda}$ is an irreducible component of
$\Gr_n^{\bm\lambda}\cap(m_n)^{-1}(T_\mu)$, whose fundamental class in
Borel-Moore homology is denoted by $\langle\mathbf Z\rangle$. These classes
$\langle\mathbf Z\rangle$, for $\mathbf Z\in\mathscr Z(\bm\lambda)_\mu$,
form a basis of~$V(\bm\lambda)_\mu$.
We set
$$\mathscr Z(\lambda)=\bigsqcup_{\mu\in\Lambda}\mathscr Z(\lambda)_\mu
\quad\text{and}\quad\mathscr Z(\bm\lambda)=\bigsqcup_{\mu\in\Lambda}
\mathscr Z(\bm\lambda)_\mu.$$
Elements in these sets are called Mirković-Vilonen (MV) cycles, and the
bases of $V(\lambda)$ and $V(\bm\lambda)$ obtained above are called MV
bases.
\subsection{Indexation of the Mirković-Vilonen cycles}
In this short section, we explain that there is a natural bijection
\begin{equation}
\label{eq:ProdCycMV}
\mathscr Z(\bm\lambda)\cong
\mathscr Z(\lambda_1)\times\cdots\times\mathscr Z(\lambda_n)
\end{equation}
for any $\bm\lambda=(\lambda_1,\ldots,\lambda_n)$ in $\Lambda^n$.
The construction goes back to Braverman and
Gaitsgory~\cite{BravermanGaitsgory}; details can be found in
\cite{BaumannGaussentLittelmann}, Proposition~2.2 and Corollary~4.8.
For $\mu\in\Lambda$, we define
$$\widetilde T_\mu=
N^{-,\vee}\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)\,z^\mu$$
and note that the natural map
$$\widetilde T_\mu\,/\,N^{-,\vee}(\mathbb C[z])\to T_\mu$$
is bijective. Given an $N^{-,\vee}(\mathbb C[z])$-invariant subset
$Z\subset T_\mu$, we denote by $\widetilde Z$ the preimage of $Z$ by
the quotient map $\widetilde T_\mu\to T_\mu$. In particular, the
notation $\widetilde Z$ is defined for any MV cycle $Z$.
Pick $\bm\mu=(\mu_1,\ldots,\mu_n)$ in $\Lambda^n$ and
$\mathbf Z=(Z_1,\ldots,Z_n)$ in
$\mathscr Z(\lambda_1)_{\mu_1}\times\cdots\times
\mathscr Z(\lambda_n)_{\mu_n}$. Then
$$\Bigl\{[g_1,\ldots,g_n]\Bigm|(g_1,\ldots,g_n)\in
\widetilde Z_1\times\cdots\times\widetilde Z_n\Bigr\}$$
is contained in $(m_n)^{-1}\bigl(T_{|\bm\mu|}\bigr)$ and its closure
in this set is an MV cycle in $\mathscr Z(\bm\lambda)_{|\bm\mu|}$.
Each MV cycle in $\mathscr Z(\bm\lambda)$ can be uniquely obtained in
this manner, which defines the bijection~\eqref{eq:ProdCycMV}.
Because of this, we will allow ourselves to write elements in
$\mathscr Z(\bm\lambda)$ as tuples $\mathbf Z$ as above.
\subsection{Transition matrix}
\label{ss:TransMat}
We continue with our tuple of dominant weights
$\bm\lambda=(\lambda_1,\ldots,\lambda_n)$. To compute the MV basis of
$V(\bm\lambda)$, we compare it with the tensor product of the MV bases
of the factors $V(\lambda_1)$, \ldots, $V(\lambda_n)$. This requires the
introduction of a nice geometric object.
Let $n$ be a positive integer. We define the $n$-fold Beilinson-Drinfeld
convolution variety $\BDConv_n$ as the set of pairs
$(x_1,\ldots,x_n;[g_1,\ldots,g_n])$, where
$(x_1,\ldots,x_n)\in\mathbb C^n$ and $[g_1,\ldots,g_n]$ belongs to
$$G^\vee\bigl(\mathbb C\bigl[z,(z-x_1)^{-1}\bigr]\bigr)
\,\times^{G^\vee(\mathbb C[z])}\;\cdots\,\times^{G^\vee(\mathbb C[z])}\;
G^\vee\bigl(\mathbb C\bigl[z,(z-x_n)^{-1}\bigr]\bigr)
\,/\,G^\vee\bigl(\mathbb C[z]\bigr).$$
We denote by $\pi:\BDConv_n\to\mathbb C^n$ the morphism which forgets
$[g_1,\ldots,g_n]$. It is known that $\BDConv_n$ is endowed with
the structure of an ind-variety and that $\pi$ is ind-proper.
To each composition $\mathbf n=(n_1,\ldots,n_r)$ of $n$ with $r$ parts
corresponds a partial diagonal $\Delta_{\mathbf n}$ in $\mathbb C^n$,
defined as the set of all elements of the form
\begin{equation}
\label{eq:PartDiag}
\mathbf x=(\underbrace{x_1,\ldots,x_1}_{n_1\text{ times}},
\ldots,\underbrace{x_r,\ldots,x_r}_{n_r\text{ times}})
\end{equation}
for $(x_1,\ldots,x_r)\in\mathbb C^r$. The small diagonal is the particular
case $\mathbf n=(n)$; we denote it simply by $\Delta$. We define
$\BDConv_n\bigl|_{\Delta_{\mathbf n}}$ to be $\pi^{-1}(\Delta_{\mathbf n})$.
Given $g\in G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)$ and
$x\in\mathbb C$, we denote by $g_{|x}$ the result of substituting
$z-x$ for $z$ in~$g$. For $\mu\in\Lambda$ and $x\in\mathbb C$, we define
$$\widetilde T_{\mu|x}=
N^{-,\vee}\bigl(\mathbb C\bigl[z,(z-x)^{-1}\bigr]\bigr)\,(z-x)^\mu;$$
this is the set of all elements of the form $g_{|x}$ with
$g\in\widetilde T_\mu$. Given $\bm\mu=(\mu_1,\ldots,\mu_n)$ in
$\Lambda^n$, we define $\mathcal T_{\bm\mu}$ to be the set of all pairs
$(x_1,\ldots,x_n,[g_1,\ldots,g_n])$ with $(x_1,\ldots,x_n)\in\mathbb C^n$
and $g_j\in\widetilde T_{\mu_j|x_j}$ for each $j\in\{1,\ldots,n\}$.
For $\mu\in\Lambda$, we set (leaving $n$ out of the notation)
$$\dot T_\mu=\bigcup_{\substack{\bm\mu\in\Lambda^n\\[2pt]
|\bm\mu|=\mu}}\mathcal T_{\bm\mu}.$$
Given an $N^{-,\vee}(\mathbb C[z])$-invariant subset $Z\subset T_\mu$, we
denote by $\widetilde Z_{|x}$ the set of all elements of the form
$g_{|x}$ with $g\in\widetilde Z$. Given $(\mu_1,\ldots,\mu_n)\in\Lambda^n$
and $\mathbf Z=(Z_1,\ldots,Z_n)$ in $\mathscr Z(\lambda_1)_{\mu_1}\times
\cdots\times\mathscr Z(\lambda_n)_{\mu_n}$, we define
$\dot{\mathcal X}(\mathbf Z)$ to be the set of all pairs
$(x_1,\ldots,x_n;[g_1,\ldots,g_n])$ with $(x_1,\ldots,x_n)\in\mathbb C^n$
and $g_j\in\widetilde Z_{j|x_j}$ for each $j\in\{1,\ldots,n\}$.
(This subset $\dot{\mathcal X}(\mathbf Z)$ is denoted by
$\Uppsi(Z_1\caltimes\cdots\caltimes Z_n)$
in~\cite{BaumannGaussentLittelmann}.)
Given in addition a composition $\mathbf n$ of $n$, we define
$$\mathcal X(\mathbf Z,\mathbf n)=\overline{\dot{\mathcal X}(\mathbf Z)
\bigl|_{\Delta_{\mathbf n}}}\cap\BDConv_n^{\bm\lambda}$$
where the bar means closure in $\dot T_\mu\bigl|_{\Delta_{\mathbf n}}$.
For given $\bm\lambda$, $\mu$ and $\mathbf n$,
the subsets $\mathcal X(\mathbf Z,\mathbf n)$ for $\mathbf Z$ in
$$\mathscr Z(\bm\lambda)_\mu=
\bigsqcup_{\substack{(\mu_1,\ldots,\mu_n)\in\Lambda^n\\[2pt]
\mu_1+\cdots+\mu_n=\mu}}\mathscr Z(\lambda_1)_{\mu_1}
\times\cdots\times\,\mathscr Z(\lambda_n)_{\mu_n}$$
are the irreducible components of $\bigl(\BDConv_n^{\bm\lambda}\cap
\dot T_\mu\bigr)\bigl|_{\Delta_{\mathbf n}}$ (see
\cite{BaumannGaussentLittelmann}, proof of Proposition~5.4).
We adopt a special notation for the small diagonal and set
$\mathcal Y(\mathbf Z)=\mathcal X(\mathbf Z,(n))$.
Now fix $n$, the tuple $\bm\lambda\in(\Lambda^+)^n$, the weight
$\mu\in\Lambda$, and the composition $\mathbf n$ of $n$. We write
$\bm\lambda$ as a concatenation
$\bigl(\bm\lambda_{(1)},\ldots,\bm\lambda_{(r)}\bigr)$, where each
$\bm\lambda_{(j)}$ belongs to $(\Lambda^+)^{n_j}$, and similarly we
write each tuple $\mathbf Z\in\mathscr Z(\bm\lambda)_\mu$ as
$\strut\bigl(\mathbf Z_{(1)},\ldots,\mathbf Z_{(r)}\bigr)$ with
$\mathbf Z_{(j)}\in\mathscr Z(\bm\lambda_{(j)})$. Then
$$V(\bm\lambda)=V\bigl(\bm\lambda_{(1)}\bigr)\otimes\cdots\otimes
V\bigl(\bm\lambda_{(r)}\bigr)\quad\text{and}\quad
\bigl\langle\mathbf Z_{(j)}\bigr\rangle\in V\bigl(\bm\lambda_{(j)}\bigr).$$
With this notation (\cite{BaumannGaussentLittelmann}, Proposition~5.10):
\begin{proposition}
\label{pr:TransMat}
Let $(\mathbf Z',\mathbf Z'')\in(\mathscr Z(\bm\lambda)_\mu)^2$.
The coefficient $b_{\mathbf Z',\mathbf Z''}$ in the expansion
$$\bigl\langle\mathbf Z''_{(1)}\bigr\rangle\otimes\cdots\otimes
\bigl\langle\mathbf Z''_{(r)}\bigr\rangle=\sum_{\mathbf Z\in\mathscr Z
(\bm\lambda)_\mu}b_{\mathbf Z,\mathbf Z''}\;\langle\mathbf Z\rangle$$
is the multiplicity of $\mathcal Y(\mathbf Z')$ in the intersection
product $\mathcal X(\mathbf Z'',\mathbf n)\,\cdot\,
\BDConv_n^{\bm\lambda}\bigl|_\Delta$ computed in the
ambient~space $\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{\mathbf n}}$.
\end{proposition}
\subsection{Projecting onto Cartan components}
\label{ss:ProjCart}
We continue with the setup of the previous section. First let $n$ be
a positive integer, let $\bm\lambda\in(\Lambda^+)^n$, and denote by
$p:V(\bm\lambda)\to V(|\bm\lambda|)$ the projection onto the Cartan
component, with kernel the sum of the other isotypical components of
$V(\bm\lambda)$.
The map $m_n:\Gr_n\to\Gr$ restricts to an isomorphism
$\Gr_n^{\bm\lambda}\cap(m_n)^{-1}\bigl(\Gr^{|\bm\lambda|}\bigr)\to
\Gr^{|\bm\lambda|}$ (see \cite{Haines}, p.~2110). Given $\mu\in\Lambda$
and $\mathbf Z\in\mathscr Z(\bm\lambda)_\mu$, we define $|\mathbf Z|$
to be the closure in $T_\mu$ of $m_n(\mathbf Z)\cap\Gr^{|\bm\lambda|}$.
The following proposition is a direct consequence of Theorem~3.3 in
\cite{BaumannGaussentLittelmann} and its proof.
\begin{proposition}
\label{pr:CompatIsot}
\begin{enumerate}
\item
The map $\mathbf Z\mapsto|\mathbf Z|$ defines a bijection
$\bigl\{\mathbf Z\in\mathscr Z(\bm\lambda)\bigm||
\mathbf Z|\neq\varnothing\bigr\}\to\mathscr Z(|\bm\lambda|)$.
\item
Let $\mathbf Z\in\mathscr Z(\bm\lambda)$. If $|\mathbf Z|\neq\varnothing$,
then $p(\langle\mathbf Z\rangle)=\bigl\langle|\mathbf Z|\bigr\rangle$;
otherwise $p(\langle\mathbf Z\rangle)=0$.
\end{enumerate}
\end{proposition}
Now let $\mathbf n=(n_1,\ldots,n_r)$ be a composition of $n$ with $r$ parts.
We again write $\bm\lambda$ as a concatenation $\bigl(\bm\lambda_{(1)},
\ldots,\bm\lambda_{(r)}\bigr)$, where each $\bm\lambda_{(j)}$ belongs
to $(\Lambda^+)^{n_j}$, and set $\|\bm\lambda\|=\bigl(\bigl|
\bm\lambda_{(1)}\bigr|,\ldots,\bigl|\bm\lambda_{(r)}\bigr|\bigr)$; then
$$V(\|\bm\lambda\|)=V\bigl(\bigl|\bm\lambda_{(1)}\bigr|\bigr)
\otimes\cdots\otimes V\bigl(\bigl|\bm\lambda_{(r)}\bigr|\bigr).$$
For each $j\in\{1,\ldots,r\}$, we denote by $p_{(j)}:V\bigl(
\bm\lambda_{(j)}\bigr)\to V\bigl(\bigl|\bm\lambda_{(j)}\bigr|\bigr)$
the projection onto the Cartan component and define
$$\mathbf p=p_{(1)}\otimes\cdots\otimes p_{(r)};$$
thus $\mathbf p:V(\bm\lambda)\to V(\|\bm\lambda\|)$.
Likewise, we again write each tuple $\mathbf Z\in\mathscr Z(\bm\lambda)$
as a concatenation $\bigl(\mathbf Z_{(1)},\ldots,\mathbf Z_{(r)}\bigr)$
with $\mathbf Z_{(j)}\in\mathscr Z(\bm\lambda_{(j)})$ and set
$\|\mathbf Z\|=\bigl(\bigl|\mathbf Z_{(1)}\bigr|,\ldots,
\bigl|\mathbf Z_{(r)}\bigr|\bigr)$.
\begin{proposition}
\label{pr:ProjCart}
Let $\mathbf Z\in\mathscr Z(\bm\lambda)$.
If $\bigl|\mathbf Z_{(j)}\bigr|\neq\varnothing$
for all $j\in\{1,\ldots,r\}$, then
$\mathbf p(\langle\mathbf Z\rangle)=\bigl\langle\|\mathbf Z\|\bigr\rangle$;
otherwise $\mathbf p(\langle\mathbf Z\rangle)=0$.
\end{proposition}
\begin{proof}
Let $\mathring{\mathscr Z}(\bm\lambda)$ be the set of all
$\mathbf Z\in\mathscr Z(\bm\lambda)$ such that
$\bigl|\mathbf Z_{(j)}\bigr|\neq\varnothing$ for all $j\in\{1,\ldots,r\}$;
then the map $\mathbf Z\mapsto\|\mathbf Z\|$ realizes a bijection from
$\mathring{\mathscr Z}(\bm\lambda)$ onto $\mathscr Z(\|\bm\lambda\|)$.
We fix a weight $\mu\in\Lambda$ and introduce the transition matrices
$(b_{\mathbf Z',\mathbf Z''})$ and $(a_{\mathbf Y',\mathbf Y''})$,
where $(\mathbf Z',\mathbf Z'')\in(\mathscr Z(\bm\lambda)_\mu)^2$
and $(\mathbf Y',\mathbf Y'')\in(\mathscr Z(\|\bm\lambda\|)_\mu)^2$,
that encode the expansions
$$\bigl\langle\mathbf Z''_{(1)}\bigr\rangle\otimes\cdots\otimes
\bigl\langle\mathbf Z''_{(r)}\bigr\rangle=\sum_{\mathbf Z'\in\mathscr Z
(\bm\lambda)_\mu}b_{\mathbf Z',\mathbf Z''}\;\langle\mathbf Z'\rangle$$
and
$$\bigl\langle Y''_1\bigr\rangle\otimes\cdots\otimes
\bigl\langle Y''_r\bigr\rangle=\sum_{\mathbf Y'\in\mathscr Z
(\|\bm\lambda\|)_\mu}a_{\mathbf Y',\mathbf Y''}\;\langle\mathbf Y'\rangle$$
on the MV bases of $V(\bm\lambda)$ and $V(\|\bm\lambda\|)$.
We claim that if $\mathbf Z'\in\mathring{\mathscr Z}(\bm\lambda)$, then
\begin{equation}
\label{eq:ClaimTM}
b_{\mathbf Z',\mathbf Z''}=
\begin{cases}
a_{\|\mathbf Z'\|,\|\mathbf Z''\|}&
\text{if $\mathbf Z''\in\mathring{\mathscr Z}(\bm\lambda)$,}\\[2pt]
0&\text{otherwise.}
\end{cases}
\end{equation}
Assuming \eqref{eq:ClaimTM}, we conclude the proof as follows. Let
$\widetilde{\mathbf p}:V(\bm\lambda)\to V(\|\bm\lambda\|)$ be the linear
map defined by the requirement that for all
$\mathbf Z\in\mathscr Z(\bm\lambda)$,
$$\widetilde{\mathbf p}(\langle\mathbf Z\rangle)=
\begin{cases}
\bigl\langle\|\mathbf Z\|\bigr\rangle&
\text{if $\mathbf Z\in\mathring{\mathscr Z}(\bm\lambda)$,}\\[2pt]
0&\text{otherwise.}
\end{cases}$$
Then \eqref{eq:ClaimTM} gives
$$\widetilde{\mathbf p}\bigl(\bigl\langle\mathbf Z_{(1)}\bigr\rangle
\otimes\cdots\otimes\bigl\langle\mathbf Z_{(r)}\bigr\rangle\bigr)=
\begin{cases}
\bigl\langle|\mathbf Z_{(1)}|\bigr\rangle\otimes\cdots\otimes
\bigl\langle|\mathbf Z_{(r)}|\bigr\rangle&
\text{if $\mathbf Z\in\mathring{\mathscr Z}(\bm\lambda)$,}\\[2pt]
0&\text{otherwise,}
\end{cases}$$
and from Proposition~\ref{pr:CompatIsot}, we conclude that
$\widetilde{\mathbf p}=\mathbf p$.
We are thus reduced to proving~\eqref{eq:ClaimTM}. We define a map
$\mathbf{m_n}:\BDConv_n\bigl|_{\Delta_{\mathbf n}}\to\BDConv_r$ by
$$\mathbf{m_n}(\mathbf x;[g_1,\ldots,g_n])=
(x_1,\ldots,x_r;[g_1\cdots g_{n_1},\ g_{n_1+1}\cdots
g_{n_1+n_2},\ \ldots,\ g_{n_1+\ldots+n_{r-1}+1}\cdots g_n])$$
for $\mathbf x$ as in \eqref{eq:PartDiag}. Then
$\mathcal U=\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{\mathbf n}}\cap
(\mathbf{m_n})^{-1}\Bigl(\BDConv_r^{\|\bm\lambda\|}\Bigr)$ is an open
subset of $\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{\mathbf n}}$ and
$\mathbf{m_n}$ restricts to an isomorphism
$\mathcal U\to\BDConv_r^{\|\bm\lambda\|}$.
Let $(\mathbf Z',\mathbf Z'')\in(\mathscr Z(\bm\lambda)_\mu)^2$.
By Proposition~\ref{pr:TransMat}, the
coefficient $b_{\mathbf Z',\mathbf Z''}$ is the multiplicity of
$\mathcal Y(\mathbf Z')$ in the intersection product
$\mathcal X(\mathbf Z'',\mathbf n)\,\cdot
\bigl(\BDConv_n^{\bm\lambda}\bigr)\bigl|_\Delta$ computed in the
ambient space $\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{\mathbf n}}$.
Assume first that both $\mathbf Z'$ and $\mathbf Z''$ lie in
$\mathring{\mathscr Z}(\bm\lambda)$. Then the open subset $\mathcal U$
meets $\mathcal Y(\mathbf Z')$ and $\mathcal X(\mathbf Z'',\mathbf n)$.
Since intersection multiplicities are local in nature,
$b_{\mathbf Z',\mathbf Z''}$ is the multiplicity of
$\mathcal Y(\mathbf Z')\cap\mathcal U$ in the intersection product
$\bigl(\mathcal X(\mathbf Z'',\mathbf n)\cap\mathcal U\bigr)\,\cdot
\mathcal U\bigl|_\Delta$ computed in the ambient space
$\mathcal U\bigl|_{\Delta_{\mathbf n}}$. On the other hand,
Proposition~\ref{pr:TransMat}, applied to the composition $(1^r)=(1,\ldots,1)$
of $r$, gives that $a_{\|\mathbf Z'\|,\|\mathbf Z''\|}$ is
the multiplicity of $\mathcal Y(\|\mathbf Z'\|)$ in the intersection
product $\mathcal X(\|\mathbf Z''\|,(1^r))\,\cdot
\bigl(\BDConv_r^{\|\bm\lambda\|}\bigr)\bigl|_\Delta$ computed in the
ambient space $\BDConv_r^{\|\bm\lambda\|}$.
Observing that
$$\mathbf{m_n}\bigl(\mathcal Y(\mathbf Z')\cap\mathcal U\bigr)
=\mathcal Y(\|\mathbf Z'\|)\quad\text{and}\quad
\mathbf{m_n}\bigl(\mathcal X(\mathbf Z'',\mathbf n)\cap
\mathcal U\bigr)=\mathcal X(\|\mathbf Z''\|,(1^r)),$$
we conclude that
$b_{\mathbf Z',\mathbf Z''}=a_{\|\mathbf Z'\|,\|\mathbf Z''\|}$
in this case.
Now assume that $\mathbf Z'$ lies in $\mathring{\mathscr Z}(\bm\lambda)$
but $\mathbf Z''$ does not. Then there exists $j\in\{1,\ldots,r\}$ such that
$\mathbf Z''_{(j)}$ is contained in
$F=\overline{\Gr_{n_j}^{\bm\lambda_{(j)}}}
\setminus(m_{n_j})^{-1}\bigl(\Gr^{|\bm\lambda_{(j)}|}\bigr)$.
For $x\in\mathbb C$, denote by $\widehat F_{\,|x}$ the set of all
tuples $\bigl(g_{1|x},\ldots,g_{n_j|x}\bigr)$ where
$$(g_1,\ldots,g_{n_j})\in\bigl(G^\vee\bigl(\mathbb C\bigl[z,z^{-1}\bigr]
\bigr)\bigr)^{n_j}\quad\text{and}\quad[g_1,\ldots,g_{n_j}]\in F;$$
denote by $\mathcal F$ the subset of
$\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{\mathbf n}}$ consisting
of all pairs $(\mathbf x;[g_1,\ldots,g_n])$ such that
$$(g_{n_1+\cdots+n_{j-1}+1},\ldots,g_{n_1+\cdots+n_j})\in
\widehat F_{\,|x_j}$$
where $\mathbf x$ is written as in \eqref{eq:PartDiag}.
Then $F$ is closed in $\overline{\Gr_{n_j}^{\bm\lambda_{(j)}}}$ and
$\mathcal X(\mathbf Z'',\mathbf n)$ is contained in $\mathcal F$.
As $\mathcal Y(\mathbf Z')$ is not contained in $\mathcal F$,
it is not contained in $\mathcal X(\mathbf Z'',\mathbf n)$,
so here $b_{\mathbf Z',\mathbf Z''}=0$.
\end{proof}
\subsection{Truncation}
\label{ss:Trunc}
In this section, we come back to the setup of sect.~\ref{ss:TransMat}
and record a property which will simplify our analysis.
We fix nonnegative integers $n_1$, $n_2$, $n_3$ and tuples
$\bm\lambda_{(1)}\in(\Lambda^+)^{n_1}$,
$\bm\lambda_{(2)}\in(\Lambda^+)^{n_2}$,
$\bm\lambda_{(3)}\in(\Lambda^+)^{n_3}$.
We define $\bm\lambda$ to be the concatenation
$\bigl(\bm\lambda_{(1)},\bm\lambda_{(2)},\bm\lambda_{(3)}\bigr)$ and we
regard elements $\mathbf Z\in\mathscr Z(\bm\lambda)$ as concatenations
$\bigl(\mathbf Z_{(1)},\mathbf Z_{(2)},\mathbf Z_{(3)}\bigr)$
where each $\mathbf Z_{(j)}$ belongs to $\mathscr Z(\bm\lambda_{(j)})$.
If $\nu\in\Lambda$ and $\mathbf Z\in\mathscr Z(\bm\lambda_{(3)})_\nu$,
then we set $\wt\mathbf Z=\nu$.
We fix a weight $\mu\in\Lambda$ and introduce the transition
matrix $(a_{\mathbf Z',\mathbf Z''})$,
where $(\mathbf Z',\mathbf Z'')\in(\mathscr Z(\bm\lambda)_\mu)^2$,
that encodes the expansions
$$\bigl\langle\mathbf Z''_{(1)}\bigr\rangle\otimes\bigl\langle
\bigl(\mathbf Z''_{(2)},\mathbf Z''_{(3)}\bigr)\bigr\rangle=
\sum_{\mathbf Z'\in\mathscr Z(\bm\lambda)_\mu}
a_{\mathbf Z',\mathbf Z''}\;\bigl\langle\bigl(\mathbf Z'_{(1)},
\mathbf Z'_{(2)},\mathbf Z'_{(3)}\bigr)\bigr\rangle$$
on the MV basis of $V(\bm\lambda)$.
\begin{proposition}
\label{pr:Trunc}
\begin{enumerate}
\item
Let $(\mathbf Z',\mathbf Z'')\in(\mathscr Z(\bm\lambda)_\mu)^2$.
If $a_{\mathbf Z',\mathbf Z''}\neq0$, then either $\wt\mathbf Z'_{(3)}
<\wt\mathbf Z''_{(3)}$ or $\mathbf Z'_{(3)}=\mathbf Z''_{(3)}$.
\item
Let $\mathbf Z''\in\mathscr Z(\bm\lambda)_\mu$. Then
$$\bigl\langle\mathbf Z''_{(1)}\bigr\rangle\otimes
\bigl\langle\mathbf Z''_{(2)}\bigr\rangle=
\sum_{\substack{\mathbf Z'\in\mathscr Z(\bm\lambda)_\mu\\[2pt]
\mathbf Z'_{(3)}=\mathbf Z''_{(3)}}}
a_{\mathbf Z',\mathbf Z''}\;\bigl\langle\bigl(\mathbf Z'_{(1)},
\mathbf Z'_{(2)}\bigr)\bigr\rangle$$
in $V\bigl(\bm\lambda_{(1)}\bigr)\otimes V\bigl(\bm\lambda_{(2)}\bigr)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\mathbf Z''\in\mathscr Z(\bm\lambda)_\mu$ and set $\nu=\wt\mathbf Z''_{(3)}$.
We can expand
$$\bigl\langle\mathbf Z''_{(1)}\bigr\rangle\otimes
\bigl\langle\mathbf Z''_{(2)}\bigr\rangle=
\sum_{\mathbf Z\in\mathscr Z(\bm\lambda_{(1)},\bm\lambda_{(2)})_{\mu-\nu}}
c_{\mathbf Z,\mathbf Z''}\;\langle\mathbf Z\rangle$$
on the MV basis of
$V\bigl(\bm\lambda_{(1)}\bigr)\otimes V\bigl(\bm\lambda_{(2)}\bigr)$.
We denote by $V\bigl(\bm\lambda_{(3)}\bigr){}^{}_{<\nu}$ the sum of the
$\xi$-weight subspaces of $V\bigl(\bm\lambda_{(3)}\bigr)$ with
$\xi<\nu$. By~Theorem~5.13
in~\cite{BaumannGaussentLittelmann},
$$\bigl\langle\mathbf Z''_{(2)}\bigr\rangle\otimes
\bigl\langle\mathbf Z''_{(3)}\bigr\rangle\equiv
\bigl\langle\bigl(\mathbf Z''_{(2)},\mathbf Z''_{(3)}\bigr)\bigr\rangle
\quad\bigl(\bmod\ V\bigl(\bm\lambda_{(2)}\bigr)\otimes
V\bigl(\bm\lambda_{(3)}\bigr){}^{}_{<\nu}\bigr)$$
and for each $\mathbf Z\in\mathscr Z(\bm\lambda_{(1)},\bm\lambda_{(2)})$,
$$\bigl\langle\mathbf Z\bigr\rangle\otimes
\bigl\langle\mathbf Z''_{(3)}\bigr\rangle\equiv
\bigl\langle\bigl(\mathbf Z,\mathbf Z''_{(3)}\bigr)\bigr\rangle
\quad\bigl(\bmod\ V\bigl(\bm\lambda_{(1)}\bigr)\otimes
V\bigl(\bm\lambda_{(2)}\bigr)\otimes
V\bigl(\bm\lambda_{(3)}\bigr){}^{}_{<\nu}\bigr).$$
Thus,
$$\sum_{\mathbf Z'\in\mathscr Z(\bm\lambda)_\mu}
a_{\mathbf Z',\mathbf Z''}\;\bigl\langle\bigl(\mathbf Z'_{(1)},
\mathbf Z'_{(2)},\mathbf Z'_{(3)}\bigr)\bigr\rangle\equiv
\sum_{\mathbf Z\in\mathscr Z(\bm\lambda_{(1)},\bm\lambda_{(2)})_{\mu-\nu}}
c_{\mathbf Z,\mathbf Z''}\;
\bigl\langle\bigl(\mathbf Z,\mathbf Z''_{(3)}\bigr)\bigr\rangle$$
modulo $V\bigl(\bm\lambda_{(1)}\bigr)\otimes
V\bigl(\bm\lambda_{(2)}\bigr)\otimes
V\bigl(\bm\lambda_{(3)}\bigr){}^{}_{<\nu}$. We conclude by noting, by
means of Proposition~5.11 in~\cite{BaumannGaussentLittelmann}, that the
latter space is spanned by the basis vectors $\langle\mathbf Z'\rangle$
such that $\wt\mathbf Z'_{(3)}<\nu$.
\end{proof}
\section{Geometry}
\label{se:Geometry}
In this section, we prove that the MV basis of the tensor powers of the
natural representation of $G=\SL_2(\mathbb C)$ is the basis $(y_w)$ from
sect.~\ref{se:CombLin}. As a matter of fact, by Theorem~5.13
in~\cite{BaumannGaussentLittelmann}, the MV basis satisfies the first
equation in \eqref{eq:DefYw}, so we only have to prove that it satisfies
the second one too.
\subsection{Notation}
We endow $G$ with its usual maximal torus and Borel subgroup. The weight
lattice is identified, as usual, with the quotient $(\mathbb Z\varepsilon_1
\oplus\mathbb Z\varepsilon_2)/\mathbb Z(\varepsilon_1+\varepsilon_2)$.
The fundamental weight $\varpi$ is the image of $\varepsilon_1$ in this
quotient. The notation $\Gr$ indicates the affine Grassmannian of
$G^\vee=\PGL_2(\mathbb C)$.
In this section, $\bm\lambda$ will always be of the form
$(\varpi,\ldots,\varpi)$; the number $n$ of times $\varpi$ is repeated will
usually appear as a subscript in notation like $\Gr_n^{\bm\lambda}$ or
$\BDConv_n^{\bm\lambda}$.
The cell $\Gr^{\varpi}$ is isomorphic to the projective line, hence is
closed. The two MV cycles in $\mathscr Z(\varpi)$ are
$$Z_+=\Gr^{\varpi}\cap T_{\varpi}=\biggl\{\biggl[\begin{pmatrix}z&0\\
0&1\end{pmatrix}\biggr]\biggr\}\quad\text{and}\quad
Z_-=\Gr^{\varpi}\cap T_{-\varpi}=\biggl\{\biggl[\begin{pmatrix}1&0\\
a&z\end{pmatrix}\biggr]\biggm|a\in\mathbb C\biggr\}$$
(the matrices above should actually be viewed in
$\PGL_2\bigl(\mathbb C\bigl[z,z^{-1}\bigr]\bigr)$).
The standard basis of $V(\varpi)=\mathbb C^2$ is then
$(x_+,x_-)=(\langle Z_+\rangle,\langle Z_-\rangle)$.
Given a word $v\in\mathscr C_n$, we set
$$P(v)=\bigl\{\ell\in\{1,\ldots,n\}\bigm|v(\ell)=+\bigr\}\quad\text{and}
\quad\mathbf Z_v=\bigl(Z_{v(1)},\ldots,Z_{v(n)}\bigr).$$
Thanks to the bijection~\eqref{eq:ProdCycMV}, we regard $\mathbf Z_v$
as an element in $\mathscr Z(\bm\lambda)$.
For $(x,a)\in\mathbb C^2$, we set
$$\varphi_+(x,a)=\begin{pmatrix}z-x&a\\0&1\end{pmatrix}\quad\text{and}\quad
\varphi_-(x,a)=\begin{pmatrix}1&0\\a&z-x\end{pmatrix}.$$
Recall the notation introduced in sect.~\ref{ss:TransMat}.
For each word $v\in\mathscr C_n$, we define an embedding
$\upphi_v:\mathbb C^{2n}\to\BDConv_n^{\bm\lambda}$ by
$$\upphi_v(\mathbf x;\mathbf a)=\bigl(\mathbf x;\bigl[\varphi_{v(1)}
(x_1,a_1),\ldots,\varphi_{v(n)}(x_n,a_n)\bigr]\bigr)$$
where $\mathbf x=(x_1,\ldots,x_n)$ and $\mathbf a=(a_1,\ldots,a_n)$.
The image of $\upphi_v$ is an open subset $U_v$ and $\upphi_v$ can be
regarded as a chart on the manifold $\BDConv_n^{\bm\lambda}$. This
chart is designed so that $\dot{\mathcal X}(\mathbf Z_v)$ is the
algebraic subset of $U_v$ defined by the equations $a_\ell=0$ for
$\ell\in P(v)$ (compare with the construction presented
in~\cite{GaussentLittelmann}).
\subsection{The simplest example}
In this section, we consider the case $n=2$; the variety
$\BDConv_2^{\bm\lambda}$ has dimension $4$. The words $v=+-$ and
$w=-+$ give rise to charts $\upphi_v$ and $\upphi_w$ on
$\BDConv_2^{\bm\lambda}$ defined by
\begin{align*}
\upphi_v(x_1,x_2;a_1,a_2)&=\biggl(x_1,x_2;\biggl[
\begin{pmatrix}z-x_1&a_1\\0&1\end{pmatrix},
\begin{pmatrix}1&0\\a_2&z-x_2\end{pmatrix}\biggr]\biggr),\\[6pt]
\upphi_w(x_1,x_2;b_1,b_2)&=\biggl(x_1,x_2;\biggl[
\begin{pmatrix}1&0\\b_1&z-x_1\end{pmatrix},
\begin{pmatrix}z-x_2&b_2\\0&1\end{pmatrix}\biggr]\biggr).
\end{align*}
The transition map $(\upphi_w)^{-1}\circ\upphi_v$ is given by
$$b_1=1/a_1,\quad\;b_2=-a_1(x_2-x_1+a_1a_2)$$
on the domain
$$(\upphi_v)^{-1}(U_v\cap U_w)=\bigl\{(x_1,x_2,a_1,a_2)\in\mathbb C^4
\bigm|a_1\neq0\bigr\}.$$
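As a consistency check (the computation is elementary and not needed in the
sequel), one can verify this directly: with $b_1=1/a_1$ and
$b_2=-a_1(x_2-x_1+a_1a_2)$, a straightforward multiplication gives
$$\begin{pmatrix}1&0\\b_1&z-x_1\end{pmatrix}
\begin{pmatrix}z-x_2&b_2\\0&1\end{pmatrix}
\begin{pmatrix}1&a_1\\-1/a_1&0\end{pmatrix}
=\begin{pmatrix}z-x_1&a_1\\0&1\end{pmatrix}
\begin{pmatrix}1&0\\a_2&z-x_2\end{pmatrix},$$
and the rightmost factor on the left-hand side is a constant matrix of
determinant one, hence lies in $\PGL_2(\mathbb C[z])$ whenever $a_1\neq0$;
both sides therefore define the same point of $\BDConv_2^{\bm\lambda}$.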
We set $A=\mathbb C[x_1,x_2,a_1,a_2]$; this is the coordinate ring of
$(\upphi_v)^{-1}(U_v)$. We let $B=\mathscr S^{-1}A$ be the localization
of $A$ with respect to the multiplicative subset $\mathscr S$ generated
by~$a_1$; this is the coordinate ring of $(\upphi_v)^{-1}(U_v\cap U_w)$.
In the chart $\upphi_v$, the cycle $\mathcal Y(\mathbf Z_v)$ is defined
by the equations $a_1=x_1-x_2=0$, so the ideal in $A$ of the subvariety
$$V=(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))$$
is
$$\mathfrak p=(a_1,x_1-x_2).$$
In the chart $\upphi_w$, the cycle $\dot{\mathcal X}(\mathbf Z_w)$
is defined by the equation $b_2=0$, and the closure in $U_v$
of $U_v\cap\dot{\mathcal X}(\mathbf Z_w)$ is
$U_v\cap\mathcal X(\mathbf Z_w,(1,1))$. Therefore the ideal in
$B$ of $(\upphi_v)^{-1}\bigl(U_v\cap\dot{\mathcal X}(\mathbf Z_w)\bigr)$
is $\mathring{\mathfrak q}=(-a_1(x_2-x_1+a_1a_2))$ and the ideal
in $A$ of the subvariety
$$X=(\upphi_v)^{-1}(U_v\cap\mathcal X(\mathbf Z_w,(1,1)))$$
is the preimage
$$\mathfrak q=(x_2-x_1+a_1a_2)$$
of $\mathring{\mathfrak q}$ under the canonical map $A\to B$.
Plainly $\mathfrak q\subset\mathfrak p$, which shows that $V\subset X$.
The local ring $\mathscr O_{V,X}$ of $X$ along $V$ is the localization
of $\overline A=A/\mathfrak q$ at the ideal
$\overline{\mathfrak p}=\mathfrak p/\mathfrak q$. Since $a_2$ is not
in $\mathfrak p$, its image in $\overline A_{\,\overline{\mathfrak p}}$
is invertible, and then we see that $x_1-x_2$ generates the maximal ideal
of $\overline A_{\,\overline{\mathfrak p}}$. As a consequence, the order
of vanishing of $x_1-x_2$ along $V$ (see~\cite{Fulton}, sect.~1.2) is
equal to one. By definition, this is the multiplicity of
$\mathcal Y(\mathbf Z_v)$ in the intersection product
$\mathcal X(\mathbf Z_w,(1,1))\cdot\,\BDConv_2^{\bm\lambda}\bigl|_\Delta$.
Proposition~\ref{pr:TransMat} then asserts that
$y_{+-}=\langle\mathbf Z_v\rangle$ occurs with coefficient one
in the expansion of
$x_w=\langle Z_-\rangle\otimes\langle Z_+\rangle$ on
the MV basis of $V(\varpi)^{\otimes2}$, in agreement with the equation
$$x_{-+}=y_{-+}+y_{+-}.$$
The proof of the general case follows the same pattern, but more
elaborate combinatorics is needed to manage the equations.
\subsection{Transition maps}
\label{ss:TransMaps}
Pick $v$, $w$ in $\mathscr C_n$. Set $P_0=S_0=1$ and $Q_0=R_0=0$.
For $\ell\in\{1,\ldots,n\}$, let $K_\ell=\mathbb C(x_1,\ldots,x_\ell,
a_1,\ldots,a_\ell)$ be the field of rational functions and define by
induction an element $b_\ell\in K_\ell$ and a matrix
$$\begin{pmatrix}P_\ell&Q_\ell\\R_\ell&S_\ell\end{pmatrix}$$
with coefficients in $K_\ell[z]$ and determinant one as follows:
\begin{itemize}
\item
If $(v(\ell),w(\ell))=(+,+)$, then
$$b_\ell=\frac{\bigl(a_\ell P_{\ell-1}+Q_{\ell-1}\bigr)\bigl(x_\ell\bigr)}
{\bigl(a_\ell R_{\ell-1}+S_{\ell-1}\bigr)\bigl(x_\ell\bigr)},\qquad
\left\{\begin{alignedat}2
P_\ell&=P_{\ell-1}-b_\ell R_{\ell-1},\quad\;&Q_\ell&=\frac{a_\ell
P_{\ell-1}+Q_{\ell-1}-b_\ell S_\ell}{z-x_\ell},\\
R_\ell&=(z-x_\ell)R_{\ell-1},\quad\;&S_\ell&=a_\ell R_{\ell-1}+S_{\ell-1}.
\end{alignedat}\right.$$
\item
If $(v(\ell),w(\ell))=(-,+)$, then
$$b_\ell=\frac{\bigl(P_{\ell-1}+a_\ell Q_{\ell-1}\bigr)\bigl(x_\ell\bigr)}
{\bigl(R_{\ell-1}+a_\ell S_{\ell-1}\bigr)\bigl(x_\ell\bigr)},\qquad
\left\{\begin{alignedat}2
P_\ell&=\frac{P_{\ell-1}+a_\ell Q_{\ell-1}-b_\ell R_\ell}{z-x_\ell},
\quad\;&Q_\ell&=Q_{\ell-1}-b_\ell S_{\ell-1},\\
R_\ell&=R_{\ell-1}+a_\ell S_{\ell-1},\quad\;&S_\ell&=(z-x_\ell)S_{\ell-1}.
\end{alignedat}\right.$$
\item
If $(v(\ell),w(\ell))=(+,-)$, then
$$b_\ell=\frac{\bigl(a_\ell R_{\ell-1}+S_{\ell-1}\bigr)\bigl(x_\ell\bigr)}
{\bigl(a_\ell P_{\ell-1}+Q_{\ell-1}\bigr)\bigl(x_\ell\bigr)},\qquad
\left\{\begin{alignedat}2
P_\ell&=(z-x_\ell)P_{\ell-1},\quad\;&Q_\ell&=a_\ell P_{\ell-1}+Q_{\ell-1},\\
R_\ell&=R_{\ell-1}-b_\ell P_{\ell-1},\quad\;&S_\ell&=\frac{a_\ell
R_{\ell-1}+S_{\ell-1}-b_\ell Q_\ell}{z-x_\ell}.
\end{alignedat}\right.$$
\item
If $(v(\ell),w(\ell))=(-,-)$, then
$$b_\ell=\frac{\bigl(R_{\ell-1}+a_\ell S_{\ell-1}\bigr)\bigl(x_\ell\bigr)}
{\bigl(P_{\ell-1}+a_\ell Q_{\ell-1}\bigr)\bigl(x_\ell\bigr)},\qquad
\left\{\begin{alignedat}2
P_\ell&=P_{\ell-1}+a_\ell Q_{\ell-1},\quad\;&Q_\ell&=(z-x_\ell)Q_{\ell-1},\\
R_\ell&=\frac{R_{\ell-1}+a_\ell S_{\ell-1}-b_\ell P_\ell}{z-x_\ell},
\quad\;&S_\ell&=S_{\ell-1}-b_\ell Q_{\ell-1}.
\end{alignedat}\right.$$
\end{itemize}
Since the matrix $\begin{pmatrix}P_{\ell-1}&Q_{\ell-1}\\
R_{\ell-1}&S_{\ell-1}\end{pmatrix}$ has determinant one,
the denominator in the fraction that defines $b_\ell$ is not the
zero polynomial and everything is well-defined.
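To illustrate the recursion on a small case, take $n=2$, $v=+-$ and $w=-+$.
At $\ell=1$, the case $(v(1),w(1))=(+,-)$ applies and yields
$$b_1=\frac{\bigl(a_1R_0+S_0\bigr)\bigl(x_1\bigr)}
{\bigl(a_1P_0+Q_0\bigr)\bigl(x_1\bigr)}=\frac1{a_1},\qquad
\begin{pmatrix}P_1&Q_1\\R_1&S_1\end{pmatrix}=
\begin{pmatrix}z-x_1&a_1\\-1/a_1&0\end{pmatrix};$$
at $\ell=2$, the case $(v(2),w(2))=(-,+)$ applies and yields
$$b_2=\frac{\bigl(P_1+a_2Q_1\bigr)\bigl(x_2\bigr)}
{\bigl(R_1+a_2S_1\bigr)\bigl(x_2\bigr)}=
\frac{x_2-x_1+a_1a_2}{-1/a_1}=-a_1(x_2-x_1+a_1a_2),$$
in agreement with the transition map computed in the simplest example above.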
\begin{proposition}
The transition map
$$(\upphi_w)^{-1}\circ\upphi_v:\upphi_v^{-1}(U_w)\to\upphi_w^{-1}(U_v)$$
is given by the rational map
$$(x_1,\ldots,x_n;a_1,\ldots,a_n)\mapsto(x_1,\ldots,x_n;b_1,\ldots,b_n)$$
where $b_1$, \dots, $b_n$ are defined above.
\end{proposition}
\begin{proof}
The definitions are set up so that
$$\varphi_{w(\ell)}(x_\ell,b_\ell)
\begin{pmatrix}P_\ell&Q_\ell\\R_\ell&S_\ell\end{pmatrix}
=\begin{pmatrix}P_{\ell-1}&Q_{\ell-1}\\R_{\ell-1}&S_{\ell-1}\end{pmatrix}
\varphi_{v(\ell)}(x_\ell,a_\ell)$$
and therefore
$$\Biggl(\prod_{j=1}^\ell\varphi_{w(j)}(x_j,b_j)\Biggr)
\begin{pmatrix}P_\ell&Q_\ell\\R_\ell&S_\ell\end{pmatrix}
=\Biggl(\prod_{j=1}^\ell\varphi_{v(j)}(x_j,a_j)\Biggr)$$
for each $\ell\in\{1,\ldots,n\}$. Thus, when complex values are assigned
to the indeterminates $x_1$, \dots, $x_n$, $a_1$, \dots, $a_n$, we get
$$\Biggl[\prod_{j=1}^\ell\varphi_{v(j)}(x_j,a_j)\Biggr]
=\Biggl[\prod_{j=1}^\ell\varphi_{w(j)}(x_j,b_j)\Biggr]$$
in $\PGL_2\bigl(\mathbb C\bigl[z,(z-x_1)^{-1},\ldots,(z-x_\ell)^{-1}
\bigr]\bigr)\,/\,\PGL_2(\mathbb C[z])$. This implies the equality
$$\upphi_v(x_1,\ldots,x_n;a_1,\ldots,a_n)=
\upphi_w(x_1,\ldots,x_n;b_1,\ldots,b_n)$$
in $\BDConv_n$.
\end{proof}
The parameters $b_\ell$ and the coefficients of the polynomials
$P_\ell$, $Q_\ell$, $R_\ell$, $S_\ell$ were defined as elements in
$K_\ell$. We can however be more precise and define recursively a
subring $B_\ell\subset K_\ell$ to which they belong: we start with
$B_0=\mathbb C$, and for $\ell\in\{1,\ldots,n\}$, we set
$B_\ell=B_{\ell-1}\bigl[x_\ell,a_\ell,f_\ell^{-1}\bigr]$, where
$f_\ell\in B_{\ell-1}[x_\ell,a_\ell]$ is the denominator in the
fraction that defines~$b_\ell$.
Let $A_\ell=\mathbb C[x_1,\ldots,x_\ell,a_1,\ldots,a_\ell]$ be the
polynomial algebra. One can then easily build by induction a finitely
generated multiplicative set $\mathscr S_\ell\subset A_\ell$ such that
$B_\ell$ is the localization $\mathscr S_\ell^{-1}A_\ell$. While $A_n$
is the coordinate ring of $(\upphi_v)^{-1}(U_v)$, we see that $B_n$ is
the coordinate ring of the open subset $(\upphi_v)^{-1}(U_v\cap U_w)$.
In fact, since the matrix $\begin{pmatrix}P_\ell&Q_\ell\\R_\ell&S_\ell
\end{pmatrix}$ has determinant one, the numerator and the denominator
of $b_\ell$ cannot both vanish at the same time. As a consequence,
$(\upphi_w)^{-1}\circ\upphi_v$ cannot be defined at a point where a
function in $\mathscr S_n$ vanishes.
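To illustrate on the case $n=2$, $v=+-$, $w=-+$ of the simplest example:
the recursion gives $f_1=(a_1P_0+Q_0)(x_1)=a_1$, and then, since
$R_1=R_0-b_1P_0=-1/a_1$ and a short computation gives $S_1=0$, the second
denominator $f_2=(R_1+a_2S_1)(x_2)=-1/a_1$ is already invertible in $B_1$.
The set $\mathscr S_2$ may thus be taken to be the multiplicative set
generated by~$a_1$, and $B_2=\mathscr S_2^{-1}A_2$ recovers the
localization $B$ used in the simplest example.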
\subsection{Finding the equations}
\label{ss:FindEqns}
To prove that the MV basis satisfies the equation~\eqref{eq:DefYw},
we need intersection multiplicities in the ambient space
$\BDConv_n^{\bm\lambda}\bigl|_{\Delta_{(1,n-1)}}$. In practice,
we make the base change $\Delta_{(1,n-1)}\to\mathbb C^n$ by letting
$x_2=\cdots=x_n$ in the definition of the charts and by agreeing that
\textbf{from now on, $U_v$ actually means $U_v\bigl|_{\Delta_{(1,n-1)}}$.}
Then, in view of the invariance of the whole system under translation
along the small diagonal $\Delta$, all our equations will only involve
the difference $x=x_1-x_2$.
We will consider words $v$ and $w$ in $\mathscr C_n$ such that
$(v(1),w(1))=(+,-)$ and $\wt(v)=\wt(w)$. The planar paths that
represent $v$ and $w$ then have the same endpoints. We write
$w$ as a concatenation $-w'$ where $w'\in\mathscr C_{n-1}$.
Proposition~\ref{pr:TransMat} asserts that the basis element $y_v$
occurs in the expansion of $x_-\otimes y_{w'}$ on the MV basis of
$V(\varpi)^{\otimes n}$ only if
$\mathcal Y(\mathbf Z_v)\subset\mathcal X(\mathbf Z_w,(1,n-1))$,
and when this condition is fulfilled, its coefficient is the multiplicity
of $\mathcal Y(\mathbf Z_v)$ in the intersection product
$\mathcal X(\mathbf Z_w,(1,n-1))\cdot\,\BDConv_n^{\bm\lambda}
\bigl|_{\Delta}$.
The next sections are devoted to the determination of these inclusions and
intersection multiplicities. The actual calculations require the ideals in
$A_n$ of the subvarieties $(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))$
and $(\upphi_v)^{-1}(U_v\cap\mathcal X(\mathbf Z_w,(1,n-1)))$ of
$(\upphi_v)^{-1}(U_v)$: the first one, denoted by $\mathfrak p$,
is generated by $x$ and the elements $a_\ell$ for $\ell\in P(v)$;
the second one, denoted by $\mathfrak q$, is less easily determined.
Taking into account our notational convention regarding the base change
$\Delta_{(1,n-1)}\to\mathbb C^n$, we observe that
$U_v\cap\mathcal X(\mathbf Z_w,(1,n-1))$ is the closure in $U_v$ of
$U_v\cap\dot{\mathcal X}(\mathbf Z_w)$. Now let $\mathring{\mathfrak q}_n$
be the ideal in $B_n$ of the closed subset
$(\upphi_v)^{-1}\bigl(U_v\cap\dot{\mathcal X}(\mathbf Z_w)\bigr)$
of $(\upphi_v)^{-1}(U_v\cap U_w)$. Then $\mathring{\mathfrak q}_n$ is
generated by the elements $b_\ell$ for $\ell\in P(w)$ and $\mathfrak q$
is the preimage of $\mathring{\mathfrak q}_n$ under the canonical map
$A_n\to B_n$. In other words, $\mathfrak q$ is the saturation with
respect to $\mathscr S_n$ of the ideal of $A_n$ generated by the numerators
of the elements $b_\ell$ for $\ell\in P(w)$. Though algorithmically doable
in any concrete example, finding the saturation is a demanding calculation,
which we will bypass by replacing $\mathfrak q$ by an approximation
$\widetilde{\mathfrak q}_n$.
\subsection{Inclusion and multiplicity, I}
\label{ss:IncMulI}
This section is devoted to the situation where the paths representing $v$
and $w$ stay parallel to each other at distance~two; specifically, we
assume that $v(\ell)=w(\ell)$ for each $\ell\in\{2,\ldots,n-1\}$ and
$(v(n),w(n))=(-,+)$.
\begin{proposition}
\label{pr:IncMulI}
Under these assumptions:
\vspace{-12pt}
\begin{enumerate}
\item
\label{it:PrIMIa}
The inclusion
$\mathcal Y(\mathbf Z_v)\subset\mathcal X(\mathbf Z_w,(1,n-1))$
holds if and only if the last letter of $w'$ is significant.
\item
\label{it:PrIMIb}
If the condition in \ref{it:PrIMIa} is fulfilled, then the multiplicity
of $\mathcal Y(\mathbf Z_v)$ in the intersection product
$\mathcal X(\mathbf Z_w,(1,n-1))\cdot\,\BDConv_n^{\bm\lambda}
\bigl|_{\Delta}$ is equal to one.
\end{enumerate}
\end{proposition}
The proof of Proposition~\ref{pr:IncMulI} fills the remainder of this
section.
Let us denote by $S(v)$ the set of all positions $\ell\in\{1,\ldots,n\}$
such that the letter $v(\ell)$ is significant in~$v$.
In agreement with the convention set forth in sect.~\ref{ss:FindEqns},
we define $A_\ell=\mathbb C[x_2][x,a_1,\ldots,a_\ell]$ for each
$\strut\ell\in\{1,\ldots,n\}$, where $x=x_1-x_2$. We rewrite the
indeterminate $z$ as $\tilde z+x_2$. We~set $\widetilde P_1=\tilde z-x$
and $\widetilde Q_1=a_1$. For $\strut\ell\in\{2,\ldots,n-1\}$, we define
by induction two polynomials~$\widetilde P_\ell$, $\widetilde Q_\ell$
in $A_\ell[\tilde z]$ as follows:
\begin{itemize}
\item
If $v(\ell)=w(\ell)=+$ and $\ell\in S(v)$, then
$$\widetilde P_\ell=\widetilde P_{\ell-1}\quad\;\text{and}\quad\;
\widetilde Q_\ell=\frac{a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}
-\bigl(a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}\bigr)
\bigl(0\bigr)}{\tilde z}.$$
\item
If $v(\ell)=w(\ell)=+$ and $\ell\notin S(v)$, then
$\widetilde P_\ell=\widetilde P_{\ell-1}$ and
$\widetilde Q_\ell=\bigl(\widetilde Q_{\ell-1}-
\widetilde Q_{\ell-1}\bigl(0\bigr)\bigr)/{\tilde z}$.
\item
If $v(\ell)=w(\ell)=-$, then
$\widetilde P_\ell=\widetilde P_{\ell-1}+a_\ell\widetilde Q_{\ell-1}$ and
$\widetilde Q_\ell=\tilde z\,\widetilde Q_{\ell-1}$.
\end{itemize}
Moreover, in the case where $v(\ell)=w(\ell)=+$, set
$$\widetilde c_\ell=\begin{cases}
\bigl(a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}\bigr)
\bigl(0\bigr)&\text{if $\ell\in S(v)$,}\\[4pt]
\;a_\ell&\text{otherwise,}
\end{cases}$$
and set
$$\widetilde c_n=\bigl(\widetilde P_{n-1}+a_n\widetilde Q_{n-1}\bigr)
\bigl(0\bigr).$$
\begin{other}{Remark}
\label{rk:DependVar}
The polynomials $\widetilde P_\ell$ and $\widetilde Q_\ell$ do not depend
on the variables $a_j$ with $j\in P(v)\setminus S(v)$. The elements
$\widetilde c_\ell$ for $\ell\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$ and
$\widetilde c_n$ enjoy the same property.
\end{other}
\vspace*{-4pt}
For $\ell\in\{1,\ldots,n\}$:
\vspace*{-10pt}
\begin{itemize}
\item
let $\mathring{\mathfrak q}_\ell$ be the ideal of
$B_\ell$ generated by $\{b_j\mid j\in P(w),\;j\leq\ell\}$;
\item
let $\widetilde{\mathfrak q}_\ell$ be the ideal of $A_\ell$ generated
by $\{\widetilde c_j\mid j\in P(w),\;j\leq\ell\}$;
\item
let $d_\ell$ be the weight of the word $v(1)v(2)\cdots v(\ell)$ and
set $D_\ell=\max(d_1,d_2,\ldots,d_\ell)$.
\end{itemize}
As noticed before, a $+$ letter at position $\ell$ in $v$ is
significant if and only if $\ell$ marks the first time that the path
representing $v$ reaches a new height; agreeing that $D_0=0$, this
translates to
$$\ell\in P(v)\cap S(v)\;\Longleftrightarrow\;d_\ell>D_{\ell-1}.$$
For the record, we also note that the last letter of $w'$ is significant
if and only if $d_{n-1}=D_{n-1}$.
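For instance (writing weights as integers, in units of $\varpi$), for the
word $v=+-++-$ we get $(d_1,\ldots,d_5)=(1,0,1,2,1)$ and
$(D_1,\ldots,D_5)=(1,1,1,2,2)$, whence $P(v)\cap S(v)=\{1,4\}$: the $+$
letter at position~$3$ does not carry the path to a new height, so it is
not significant.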
\begin{lemma}
\label{le:Induc}
For $\ell\in\{1,\ldots,n-1\}$, we have
\vspace{-12pt}
\renewcommand\theenumi{(\roman{enumi})${}_\ell$}
\begin{enumerate}
\item
$\mathscr S_\ell^{-1}\widetilde{\mathfrak q}_\ell=\mathring{\mathfrak q}_\ell$,
\item
$\widetilde P_\ell(\tilde z)\equiv P_\ell(z)
\pmod{\mathring{\mathfrak q}_\ell[z]}\;$ and
$\;\widetilde Q_\ell(\tilde z)\equiv Q_\ell(z)
\pmod{\mathring{\mathfrak q}_\ell[z]}$,
\item
$\tilde z^{D_\ell-d_\ell}$ divides $\widetilde Q_\ell$.
\end{enumerate}
\renewcommand\theenumi{(\alph{enumi})}
\end{lemma}
\begin{proof}
We proceed by induction on $\ell$. The statements are banal for $\ell=1$.
Suppose that $2\leq\ell\leq n-1$ and that statements
(i)${}_{\ell-1}$, (ii)${}_{\ell-1}$ and (iii)${}_{\ell-1}$ hold.
Suppose first that $(v(\ell),w(\ell))=(+,+)$. Then by construction
\begin{gather}
\label{eq:Induc1}
b_\ell=\bigl(a_\ell P_{\ell-1}+Q_{\ell-1}\bigr)\bigl(x_2\bigr)\times
f_\ell^{-1},\\[4pt]
\label{eq:Induc2}
P_\ell=P_{\ell-1}-b_\ell R_{\ell-1},\qquad Q_\ell=\frac{a_\ell
P_{\ell-1}+Q_{\ell-1}-b_\ell S_\ell}{z-x_2}.
\end{gather}
If $\ell\notin S(v)$, then $d_{\ell-1}+1=d_\ell\leq D_{\ell-1}$,
and we see by (iii)${}_{\ell-1}$ that $\widetilde Q_{\ell-1}(0)=0$.
Using (ii)${}_{\ell-1}$, we deduce that
$Q_{\ell-1}(x_2)\in\mathring{\mathfrak q}_{\ell-1}$.
On the other hand, the matrix
$\begin{pmatrix}P_{\ell-1}(x_2)&Q_{\ell-1}(x_2)\\
R_{\ell-1}(x_2)&S_{\ell-1}(x_2)\end{pmatrix}$
with coefficients in $B_{\ell-1}$ has determinant one. After reduction
modulo $\strut\mathring{\mathfrak q}_{\ell-1}$, the coefficient in the top
right corner becomes zero; it follows that $P_{\ell-1}(x_2)$ is invertible
in the quotient ring $B_{\ell-1}/\mathring{\mathfrak q}_{\ell-1}$.
Reducing~\eqref{eq:Induc1} modulo $\mathring{\mathfrak q}_{\ell-1}B_\ell$
and noting that here $\widetilde c_\ell=a_\ell$, we deduce that $b_\ell$
and $\widetilde c_\ell$ generate the same ideal in
$B_\ell/\mathring{\mathfrak q}_{\ell-1}B_\ell$.
This piece of information allows us to deduce (i)${}_\ell$
from~(i)${}_{\ell-1}$. From~\eqref{eq:Induc2} and the fact that
$a_\ell\in\mathring{\mathfrak q}_\ell$, we get
$$P_\ell\equiv P_{\ell-1}\pmod{\mathring{\mathfrak q}_\ell[z]},\qquad
Q_\ell\equiv\frac{Q_{\ell-1}-Q_{\ell-1}(x_2)}{z-x_2}
\pmod{\mathring{\mathfrak q}_\ell[z]}.$$
Then (ii)${}_\ell$ and (iii)${}_\ell$ follow from
(ii)${}_{\ell-1}$ and (iii)${}_{\ell-1}$ and from the
definition of $\widetilde P_\ell$ and $\widetilde Q_\ell$.
If $\ell\in S(v)$, then \eqref{eq:Induc1} and (ii)${}_{\ell-1}$
lead to $b_\ell\equiv\widetilde c_\ell/f_\ell$ modulo
$\mathring{\mathfrak q}_{\ell-1}B_\ell$. Again, $b_\ell$ and
$\widetilde c_\ell$ generate the same ideal in
$B_\ell/\mathring{\mathfrak q}_{\ell-1}B_\ell$, so we can deduce
(i)${}_\ell$ from~(i)${}_{\ell-1}$. Then (ii)${}_\ell$ follows from
(ii)${}_{\ell-1}$ and~\eqref{eq:Induc2}. Also, (iii)${}_\ell$
holds trivially since $D_\ell=d_\ell$.
It remains to tackle the case $(v(\ell),w(\ell))=(-,-)$. Here
(i)${}_\ell$, (ii)${}_\ell$ and (iii)${}_\ell$ can be deduced from
(i)${}_{\ell-1}$, (ii)${}_{\ell-1}$ and (iii)${}_{\ell-1}$ without ado.
\end{proof}
\begin{lemma}
With the notation above,
$$\mathscr S_n^{-1}\widetilde{\mathfrak q}_n=\mathring{\mathfrak q}_n
\quad\;\text{and}\quad\;
\mathfrak q=\bigl\{g\in A_n\bigm|\exists f\in\mathscr S_n,\;fg\in
\widetilde{\mathfrak q}_n\bigr\}.$$
\end{lemma}
\begin{proof}
From $(v(n),w(n))=(-,+)$, we deduce
$$b_n=\bigl(P_{n-1}+a_nQ_{n-1}\bigr)\bigl(x_2\bigr)\times f_n^{-1}.$$
From the assertion (ii)${}_{n-1}$ in Lemma~\ref{le:Induc},
we deduce that $b_n\equiv\widetilde c_n/f_n$ modulo
$\mathring{\mathfrak q}_{n-1}B_n$. Thus, $b_n$ and
$\widetilde c_n$ generate the same ideal in
$B_n/\mathring{\mathfrak q}_{n-1}B_n$, and from the
assertion~(i)${}_{n-1}$ in Lemma~\ref{le:Induc}, we conclude that
$\mathscr S_n^{-1}\widetilde{\mathfrak q}_n=\mathring{\mathfrak q}_n$.
The second announced equality then follows from the definition of
$\mathfrak q$ as the preimage of $\mathring{\mathfrak q}_n$ under the
canonical map $A_n\to B_n$, with $B_n=\mathscr S_n^{-1}A_n$.
\end{proof}
\begin{lemma}
\label{le:Exclus}
If the last letter of $w'$ is not significant,
then $\mathring{\mathfrak q}_n=B_n$.
\end{lemma}
\begin{proof}
Assume that the last letter of $w'$ is not significant. Then
$D_{n-1}-d_{n-1}\geq1$, and by assertion (iii)${}_{n-1}$ in
Lemma~\ref{le:Induc}, we get $\widetilde Q_{n-1}(0)=0$.
Using assertion (ii)${}_{n-1}$ in that lemma, we deduce that
$Q_{n-1}(x_2)\in\mathring{\mathfrak q}_{n-1}$. Since the
matrix $\begin{pmatrix}P_{n-1}(x_2)&Q_{n-1}(x_2)\\
R_{n-1}(x_2)&S_{n-1}(x_2)\end{pmatrix}$ has determinant~$1$,
we see that $P_{n-1}(x_2)$ is invertible in the ring
$B_{n-1}/\mathring{\mathfrak q}_{n-1}$. Then
$b_n=\bigl(P_{n-1}+a_nQ_{n-1}\bigr)\bigl(x_2\bigr)\times f_n^{-1}$
is invertible in $B_n/\mathring{\mathfrak q}_{n-1}B_n$, and we conclude
that $\mathring{\mathfrak q}_n=B_n$.
\end{proof}
Lemma~\ref{le:Exclus} asserts that if the last letter of $w'$ is not
significant, then $U_v\cap\dot{\mathcal X}(\mathbf Z_w)=\varnothing$,
and thus $U_v\cap\mathcal X(\mathbf Z_w,(1,n-1))=\varnothing$.
Since $U_v$ contains $\mathcal Y(\mathbf Z_v)$, this proves half of
Proposition~\ref{pr:IncMulI}~\ref{it:PrIMIa}.
For the rest of this section, we assume that the last letter of $w'$
is significant. We want to show that $\mathcal Y(\mathbf Z_v)$ is
contained in $\mathcal X(\mathbf Z_w,(1,n-1))$. It would be rather easy
to prove the inclusion $\widetilde{\mathfrak q}_n\subset\mathfrak p$,
but this would not be quite enough, since we do not know that
$\widetilde{\mathfrak q}_n=\mathfrak q$. (We believe that this
equality is correct but we are not able to prove it.) Instead we will
look explicitly at the zero set of $\widetilde{\mathfrak q}_n$ in the
neighborhood of $(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))$. This zero
set is the algebraic subset of $(\upphi_v)^{-1}(U_v)$ defined by the
equations $\widetilde c_\ell$ for $\ell\in P(w)$.
Our analysis is pedestrian. We observe that there are two kinds of
equations~$\widetilde c_\ell$. When $\ell\in P(v)\setminus S(v)$, the
equation $\widetilde c_\ell$ reduces to the variable $a_\ell$; this
equation and variable can simply be discarded because $a_\ell$ is an
equation for $\mathcal Y(\mathbf Z_v)$ as well. The other equations
involve the other variables.
Set $D=D_n$. The map $\ell\mapsto d_\ell$ is an increasing bijection
from $P(v)\cap S(v)$ onto $\{1,\ldots,D\}$. We define $L$ as the
largest element in $P(v)\cap S(v)$; then $L$ is the smallest element
in $\{\ell\mid d_\ell=D\}$. For $\ell\in\{1,\ldots,n\}$, we denote
by $\ell^-$ the largest element in $\{1,\ldots,\ell\}\cap P(v)\cap S(v)$.
In partic\-ular,~$\ell^-=\ell$ if $\ell\in P(v)\cap S(v)$ and $\ell^-=L$
if $\ell\geq L$; also $d_{\ell^-}=D_\ell$.
Given $\ell\in\{1,\ldots,n\}$, let $\sigma_\ell$ be the sum of the
variables $a_j$ for $j\in\{2,\ldots,\ell\}$ such that $v(j)=-$ and
$d_{j-1}=D$; thus $\sigma_\ell=0$ if $\ell\leq L$.
We define a grading on $A_n$ by setting $\deg x=1$,
$\deg a_\ell=D+1-d_\ell$ for $\ell\in P(v)\cap S(v)$, and $\deg a_\ell=0$
for the other variables. For $d\geq1$, we denote by $J_d$ the ideal
of $A_n$ spanned by monomials of degree at least $d$.
\begin{lemma}
\label{le:ValTildeP}
Let $\ell\in\{1,\ldots,n-1\}$.
\vspace{-12pt}
\renewcommand\theenumi{(\roman{enumi})${}_\ell$}
\begin{enumerate}
\item
If $\ell\leq L$, then $\widetilde P_\ell(\tilde z)\equiv\tilde z-x
\pmod{J_2[\tilde z]}$; if $\ell\geq L$, then
$\widetilde P_\ell(0)\equiv a_L\sigma_\ell-x\pmod{J_2}$.
\item
$\widetilde Q_\ell(\tilde z)\equiv\tilde z^{D_\ell-d_\ell}\,a_{\ell^-}
\pmod{J_{D+2-d_{\ell^-}}[\tilde z]}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof starts with a banal verification for $\ell=1$ and then
proceeds by induction on~$\ell$. Suppose that $2\leq\ell\leq n-1$
and that statements (i)${}_{\ell-1}$ and (ii)${}_{\ell-1}$ hold.
Assume first that $v(\ell)=w(\ell)=-$. Here (ii)${}_\ell$ is an
immediate consequence of (ii)${}_{\ell-1}$. If $\ell-1<L$, then
$d_{(\ell-1)^-}<D$, so $\deg a_{(\ell-1)^-}\geq2$, and
$\widetilde Q_{\ell-1}\in J_2[\tilde z]$ by statement
(ii)${}_{\ell-1}$. As a result,
$\strut\widetilde P_\ell\equiv\widetilde P_{\ell-1}\pmod{J_2[\tilde z]}$,
so (i)${}_\ell$ directly follows from (i)${}_{\ell-1}$.
If $\ell-1\geq L$, then either $d_{\ell-1}=D$, in which case
$\strut\widetilde Q_{\ell-1}(0)\equiv a_L\pmod{J_2}$ and
$\sigma_\ell=\sigma_{\ell-1}+a_\ell$, or $d_{\ell-1}<D$,
in which case $\strut\widetilde Q_{\ell-1}(0)\equiv0\pmod{J_2}$
and $\sigma_\ell=\sigma_{\ell-1}$. In both cases,
$\widetilde P_\ell(0)-(a_L\sigma_\ell)
\equiv\widetilde P_{\ell-1}(0)-(a_L\sigma_{\ell-1})\pmod{J_2}$,
and again (i)${}_\ell$ follows from (i)${}_{\ell-1}$.
Assume now that $v(\ell)=w(\ell)=+$ and that $\ell\in S(v)$.
Certainly then (i)${}_\ell$ is readily deduced from (i)${}_{\ell-1}$.
Further, we remark that $d_{(\ell-1)^-}=d_{\ell^-}-1$, so
$\deg a_{(\ell-1)^-}=D+2-d_{\ell^-}$, hence $\widetilde Q_{\ell-1}$
is zero modulo $J_{D+2-d_{\ell^-}}[\tilde z]$ by (ii)${}_{\ell-1}$.
Using (i)${}_{\ell-1}$, we conclude that
$\widetilde Q_\ell\equiv a_\ell\pmod{J_{D+2-d_{\ell^-}}[\tilde z]}$,
so (ii)${}_\ell$ holds.
The third situation, namely $v(\ell)=w(\ell)=+$ and $\ell\notin S(v)$,
presents no difficulties.
\end{proof}
\begin{lemma}
\label{le:ValTildeC}
\mbox{}
\vspace{-12pt}
\begin{enumerate}
\item
\label{it:LeVTCa}
For $\ell\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$, we have
$\widetilde c_\ell\equiv-a_\ell\,x+ a_{(\ell-1)^-}\pmod{J_{D+3-d_\ell}}$.
\item
\label{it:LeVTCb}
We have $\widetilde c_n\equiv a_L\sigma_n-x\pmod{J_2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\ell\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$. Then
$D_{\ell-1}=d_{\ell-1}$ and $d_{(\ell-1)^-}=d_\ell-1$. By
Lemma~\ref{le:ValTildeP}, $\widetilde P_{\ell-1}(0)\equiv-x\pmod{J_2}$ and
$\widetilde Q_{\ell-1}(0)\equiv a_{(\ell-1)^-}\pmod{J_{D+3-d_\ell}}$.
This gives~\ref{it:LeVTCa}.
Since the last letter of $w'$ is assumed to be significant,
we have $d_{n-1}=D_{n-1}=D$, so $\sigma_n=\sigma_{n-1}+a_n$.
From Lemma~\ref{le:ValTildeP}, we get
$\widetilde P_{n-1}(0)\equiv a_L\sigma_{n-1}-x\pmod{J_2}$ and
$\widetilde Q_{n-1}(0)\equiv a_L\pmod{J_2}$. This gives~\ref{it:LeVTCb}.
\end{proof}
\begin{lemma}
\label{le:Elimin}
There exists an element $\widetilde g\in A_n$, which depends only on the
variables $x$, $a_1$, and $a_j$ with $v(j)=-$, such that
\begin{alignat}2
\label{eq:CongTildeG}
\widetilde g&\equiv
\widetilde c_n\,x^{D-1}\times\prod_{\substack{\ell\in P(v)\cap S(v)\\\ell\geq2}}
\Bigl(-\widetilde P_{\ell-1}(0)\Bigr)^{p_\ell}&&
\pmod{\widetilde{\mathfrak q}_L}\\[8pt]
\label{eq:NewtonVert}
\widetilde g&\equiv x^q\,\bigl(a_1\sigma_n-x^D\bigr)&&\pmod{J_{q+D+1}}
\end{alignat}
where the exponents $p_\ell$ and $q$ are nonnegative integers.
\end{lemma}
\begin{proof}
Consider
$$\widetilde g_L=\widetilde c_n\,x^{D-1}+\sum_{\substack{\ell\in P(v)
\cap S(v)\\[2pt]\ell\geq2}}\widetilde c_\ell\,\sigma_n\,x^{d_\ell-2}.$$
An immediate calculation based on Lemma~\ref{le:ValTildeC} yields
$$\widetilde g_L\equiv a_1\sigma_n-x^D\pmod{J_{D+1}}.$$
This $\widetilde g_L$ meets the specifications for $\widetilde g$
(with $p_\ell$ and $q$ all equal to zero) except that it may involve
other variables than those prescribed.
We are not bothered by the variables $a_j$ for $j\in P(v)\setminus S(v)$
because $\widetilde g_L$ does not depend on them (see Remark~\ref{rk:DependVar}).
The variables $x$ and $a_j$ with $v(j)=-$ are allowed. The only trouble comes
then from the variables $a_j$ with $j\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$.
We will eliminate them in turn.
Assume that $L\geq2$. Let $\ell\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$
and assume that we succeeded in constructing an element
$\widetilde g_\ell\in\widetilde{\mathfrak q}_n$ which satisfies
\eqref{eq:CongTildeG} and \eqref{eq:NewtonVert} and depends only
on the variables $x$ and $a_j$ with $v(j)=-$ or $j\leq\ell$.
Expand $\widetilde g_\ell$ as a polynomial in $a_\ell$
$$\widetilde g_\ell=\sum_{s=0}^rh_s\,a_\ell^s$$
where the coefficients $h_s$ only depend on $x$ and on the variables
$a_j$ such that $v(j)=-$ or $j<\ell$. Then define
$$\widetilde g_{(\ell-1)^-}=\sum_{s=0}^rh_s\,
\Bigl(-\widetilde P_{\ell-1}(0)\Bigr)^{r-s}\,
\Bigl(\widetilde Q_{\ell-1}(0)\Bigr)^s.$$
This $\widetilde g_{(\ell-1)^-}$ only involves the variables $x$ and
$a_j$ with $v(j)=-$ or $j\leq\ell-1$. In fact, we can strengthen
the latter inequality to $j\leq(\ell-1)^-$ because
$\widetilde g_{(\ell-1)^-}$ does not depend on the variables
$a_j$ with $j\in P(v)\setminus S(v)$. Moreover,
$\widetilde g_{(\ell-1)^-}$ also satisfies \eqref{eq:CongTildeG}
and \eqref{eq:NewtonVert}, but for different integers than
$\widetilde g_\ell$: one has to increase $p_\ell$ and $q$ by $r$.
(To verify that $\widetilde g_{(\ell-1)^-}$ satisfies
\eqref{eq:NewtonVert} with $q+r$ instead of $q$, one observes that
\begin{alignat*}2
h_0&\equiv x^q\,\bigl(a_1\sigma_n-x^D\bigr)&&(\bmod\,\,J_{q+D+1})\\[4pt]
h_s&\in J_{q+D+1-s(D+1-d_\ell)}&\quad&\text{for each
$s\in\{1,\ldots,r\}$}
\end{alignat*}
and uses Lemma~\ref{le:ValTildeP}.)
At the end of the process, we obtain an element
$\widetilde g=\widetilde g_1$ which enjoys the desired properties.
\end{proof}
Let us recall a few important points:
\vspace{-12pt}
\begin{itemize}
\item
$A_n=\mathbb C[x_2][x,a_1,\ldots,a_n]$ is the coordinate ring
of $(\upphi_v)^{-1}(U_v)$. The variable $x_2$ is a dummy variable (no equation
depends on it); we get rid of it by specializing it to an arbitrary value.
\item
The ring $B_1$ is $\mathbb C[x_2]\bigl[x,a_1,f_1^{-1}\bigr]$
with $f_1=a_1$. For $\ell\geq2$, we produce an explicit function
$f_\ell\in B_{\ell-1}[a_\ell]$ and we set
$B_\ell=B_{\ell-1}\bigl[a_\ell,f_\ell^{-1}\bigr]$. The ring $B_n$
is the coordinate ring of~$(\upphi_v)^{-1}(U_v\cap U_w)$.
\item
$\mathscr S_n$ is a finitely generated multiplicative subset of
$A_n$ such that $B_n=\mathscr S_n^{-1}A_n$.
\item
Polynomials $\widetilde c_\ell\in A_\ell$ are defined for each
$\ell\in P(w)$. The ideal of $A_n$ generated by these elements is
denoted by $\widetilde{\mathfrak q}_n$.
\item
The ideal $\mathfrak p\subset A_n$ of
$(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))$
is generated by the variables $x$ and $a_\ell$ for $\ell\in P(v)$.
\item
The ideal $\mathfrak q\subset A_n$ of
$(\upphi_v)^{-1}(U_v\cap\mathcal X(\mathbf Z_w,(1,n-1)))$
is the saturation of $\widetilde{\mathfrak q}_n$ with respect to
$\mathscr S_n$.
\item
$\sigma_1$, \dots, $\sigma_n$ are certain sums of variables $a_\ell$
with $v(\ell)=-$; these linear forms are not pairwise distinct, but
$\sigma_n$ differs from all the other ones, for only it involves $a_n$.
\end{itemize}
\begin{lemma}
\label{le:GermCurv}
Fix $\alpha_\ell\in\mathbb C$ for each $\ell\in\{1,\ldots,n\}\setminus
P(v)$ such that, when $a_\ell$ is assigned the value $\alpha_\ell$,
the linear form $\sigma_n$ takes a value different from all the other
$\sigma_j$. Consider these numbers $\alpha_\ell$ as constant
functions of the variable $\xi$. Set also $\alpha_\ell=0$ for
$\ell\in P(v)\setminus S(v)$. Then there exists a neighborhood $\Omega$
of $0$ in $\mathbb C$ and analytic functions
$\alpha_\ell:\Omega\to\mathbb C$ for $\ell\in P(v)\cap S(v)$ such that
\begin{enumerate}
\item
\label{it:LeGCa}
If $\ell\in P(v)\cap S(v)$, then $\alpha_\ell(\xi)\sim
\xi^{D+1-d_\ell}/\sigma_n$.
\item
\label{it:LeGCb}
The point $(\xi,\alpha_1(\xi),\ldots,\alpha_n(\xi))$
belongs to the zero locus of\/ $\widetilde{\mathfrak q}_n$ for each
$\xi\in\Omega$.
\item
\label{it:LeGCc}
The point $(\xi,\alpha_1(\xi),\ldots,\alpha_n(\xi))$ belongs to
$U_w$ for each $\xi\neq0$ in $\Omega$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\widetilde g$ be as in Lemma~\ref{le:Elimin}.
We consider that the variables $a_\ell$ with $\ell>1$ occurring in
$\widetilde g$ are assigned the values $\alpha_\ell$ fixed in the
statement of the lemma. We can then regard $\widetilde g$ as a polynomial
in the indeterminates $x$ and $a_1$ with complex coefficients,
or as a polynomial in the indeterminate $a_1$ with coefficients in the
valued field $\mathbb C(\!(x)\!)$. Equation \eqref{eq:NewtonVert} shows that
the points $(0,D+q)$ and $(1,q)$ are vertices of the Newton polygon of
$\widetilde g$. Therefore $\widetilde g$ admits a unique root of valuation
$D$ in $\mathbb C(\!(x)\!)$, which we denote by $\alpha_1$, and the power
series $\alpha_1$ has a positive radius of convergence. Proceeding by
induction on $\ell\in\{2,\ldots,n-1\}\cap P(v)\cap S(v)$, and solving the
equation $\widetilde c_\ell=0$, we define
\begin{equation}
\label{eq:GermCurv}
\alpha_\ell(\xi)=-\widetilde Q_{\ell-1}(0)/\widetilde P_{\ell-1}(0),
\end{equation}
where the right-hand side is evaluated at
$(\xi,\alpha_1(\xi),\ldots,\alpha_{\ell-1}(\xi))$; this is a well-defined
process and $\alpha_\ell(\xi)$ satisfies the asymptotic equivalence given in the
statement, because Lemma~\ref{le:ValTildeP} guarantees that after
evaluation
$$\widetilde P_{\ell-1}(0)=-\xi+O\bigl(\xi^2\bigr)\quad\text{and}\quad
\widetilde Q_{\ell-1}(0)=\alpha_{(\ell-1)^-}(\xi)+
O\Bigl(\xi^{D+2-d_{(\ell-1)^-}}\Bigr),$$
so the denominator in~\eqref{eq:GermCurv} does not vanish if
$\xi\neq0$. Moreover, \eqref{eq:CongTildeG} ensures that the
equation $\widetilde c_n=0$ is enforced too. Therefore this
construction gives~\ref{it:LeGCa} and \ref{it:LeGCb}.
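For the reader's convenience, let us spell out this standard Newton polygon
argument. Write $\widetilde g=\sum_sh_s\,a_1^s$ with
$h_s\in\mathbb C(\!(x)\!)$ and plot the points $(s,\operatorname{val}h_s)$.
By \eqref{eq:NewtonVert}, the points $(0,q+D)$ and $(1,q)$ are vertices of the
lower boundary of the Newton polygon, joined by a segment of slope
$$\frac{q-(q+D)}{1-0}=-D$$
and of horizontal length one. The theory of Newton polygons then provides
exactly one root of $\widetilde g$ with valuation $D$ in an algebraic closure
of $\mathbb C(\!(x)\!)$; since the horizontal length of the segment is one,
the ramification index is one and this root lies in $\mathbb C(\!(x)\!)$
itself.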
We will prove~\ref{it:LeGCc} by showing that none of the
functions $f_\ell$ vanish when evaluated on the point
$(\xi,\alpha_1(\xi),\ldots,\alpha_n(\xi))$ with $\xi\neq0$.
This is true for $\ell=1$, because $f_1=a_1$ and
$\alpha_1(\xi)\sim\xi^D/\sigma_n$. Assume now, as induction hypothesis,
that $f_1$, \dots, $f_{\ell-1}$ do not vanish on our germ of curve.
\vspace{-8pt}
\begin{itemize}
\item
In the case $(v(\ell),w(\ell))=(+,+)$, we have
$$f_\ell=\bigl(a_\ell R_{\ell-1}+S_{\ell-1}\bigr)\bigl(x_2\bigr).$$
The congruences in Lemma~\ref{le:Induc} allow us to rewrite the
equation $\widetilde c_\ell=0$ in the form
$$\bigl(a_\ell P_{\ell-1}+Q_{\ell-1}\bigr)\bigl(x_2\bigr)=0;$$
this is satisfied after evaluation at the point
$(\xi,\alpha_1(\xi),\ldots,\alpha_n(\xi))$. Using then the relation
$\bigl(P_{\ell-1}S_{\ell-1}-Q_{\ell-1}R_{\ell-1}\bigr)\bigl(x_2\bigr)=1$,
we obtain
$$P_{\ell-1}(x_2)\times f_\ell=P_{\ell-1}(x_2)\bigl(a_\ell R_{\ell-1}+S_{\ell-1}\bigr)
\bigl(x_2\bigr)=1+R_{\ell-1}(x_2)\bigl(a_\ell P_{\ell-1}+Q_{\ell-1}\bigr)
\bigl(x_2\bigr)=1.$$
Thus, $f_\ell$ does not vanish at
$(\xi,\alpha_1(\xi),\ldots,\alpha_n(\xi))$.
\item
The case $(v(\ell),w(\ell))=(-,+)$, that is $\ell=n$, is amenable to
a similar treatment.
\item
The remaining case is $(v(\ell),w(\ell))=(-,-)$. Here by
Lemma~\ref{le:Induc} we have after substitution
$$f_\ell=\bigl(P_{\ell-1}+a_\ell Q_{\ell-1}\bigr)\bigl(x_2\bigr)
=\bigl(\widetilde P_{\ell-1}+a_\ell\widetilde Q_{\ell-1}\bigr)\bigl(0\bigr),$$
and by Lemma~\ref{le:ValTildeP} and the equivalence in~\ref{it:LeGCa}
$$\widetilde P_{\ell-1}(0)=(\sigma_{\ell-1}/\sigma_n-1)\,\xi
+O\bigl(\xi^2\bigr)\ \;\text{and}\ \;\widetilde Q_{\ell-1}(0)=
\begin{cases}
\xi/\sigma_n+O(\xi^2)&\text{if $d_{\ell-1}=D_{\ell-1}=D$,}\\
O(\xi^2)&\text{otherwise.}
\end{cases}$$
Therefore $f_\ell$ is equivalent to $(\sigma_\ell/\sigma_n-1)\,\xi$.
Shrinking $\Omega$ if necessary, we can ensure that $f_\ell$ does not
vanish.
\end{itemize}
\vspace{-8pt}
This concludes the induction and establishes~\ref{it:LeGCc}.
\end{proof}
To sum up, we construct a germ of smooth algebraic curve contained in
the zero locus of $\widetilde{\mathfrak q}_n$. The ideal of this curve
is a prime ideal of $A_n$ which contains $\widetilde{\mathfrak q}_n$
and is disjoint from $\mathscr S_n$; hence it contains $\mathfrak q$.
As a result, our curve is contained in
$(\upphi_v)^{-1}(U_v\cap\mathcal X(\mathbf Z_w,(1,n-1)))$. The point
of this curve at $\xi=0$ has as coordinates the values $\alpha_\ell$
chosen for each $\ell\in\{1,\ldots,n\}\setminus P(v)$, subject to
$\sigma_n\neq\sigma_j$ for $j\in\{1,\ldots,n-1\}$, the other coordinates
being zero. Such points form an open dense subset of
$(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))$, so we conclude that
$\mathcal Y(\mathbf Z_v)\subset\mathcal X(\mathbf Z_w,(1,n-1))$.
This proves the missing half of Proposition~\ref{pr:IncMulI}~\ref{it:PrIMIa}
(the first half was obtained just after Lemma~\ref{le:Exclus}).
As a consequence, $\mathfrak q\subset\mathfrak p$. To ease the reading
of the sequel, we will omit the subscripts $n$ in the notation $A_n$ and
$\widetilde{\mathfrak q}_n$. For $\ell\in\{1,\ldots,n\}$, we set
$R(\ell)=\bigl\{j\in\{2,\ldots,\ell\}\bigm|v(j)=-,\;d_{j-1}=D_{j-1}\bigr\}$.
\begin{lemma}
\label{le:PrepNaka}
\begin{enumerate}
\item
\label{it:LePNa}
For each $\ell\in\{1,\ldots,n-1\}$, we have
\begin{align*}
&\widetilde P_\ell\equiv\tilde z\pmod{\mathfrak p[\tilde z]},\qquad
\widetilde Q_\ell\equiv\tilde z^{D_\ell-d_\ell}\,a_{\ell^-}
\pmod{\mathfrak p^2[\tilde z]},\\[4pt]
&\widetilde P_\ell(0)\equiv-x+\sum_{j\in R(\ell)}a_{(j-1)^-}a_j
\pmod{\mathfrak p^2}.
\end{align*}
\item
\label{it:LePNb}
In the local ring $A_{\mathfrak p}$, we have
$\mathfrak pA_{\mathfrak p}=xA_{\mathfrak p}+\mathfrak qA_{\mathfrak p}
+\mathfrak p^2A_{\mathfrak p}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Statement \ref{it:LePNa} is proved by a banal induction. Let us
tackle \ref{it:LePNb}.
If $\ell\in P(v)\setminus S(v)$, then $a_\ell=\widetilde c_\ell$
belongs to $\widetilde{\mathfrak q}$.
If $\ell\in(P(v)\cap S(v))\setminus\{L\}$, then there exists
$m\in P(v)\cap S(v)$ such that $d_\ell=d_m-1$. Then $\ell=(m-1)^-$ and
$D_{m-1}=d_{m-1}$, whence by statement \ref{it:LePNa}
$$a_\ell\equiv\widetilde Q_{m-1}(0)=\widetilde c_m-a_m\widetilde P_{m-1}(0)
\equiv\widetilde c_m\pmod{\mathfrak p^2},$$
and therefore $a_\ell\in\widetilde{\mathfrak q}+\mathfrak p^2$.
Certainly $D_{n-1}=d_{n-1}=D$ and $L=(n-1)^-$, so again by
statement \ref{it:LePNa}, we have
$$\widetilde c_n=\widetilde P_{n-1}(0)+a_n\widetilde Q_{n-1}(0)\equiv
\widetilde P_{n-1}(0)+a_La_n\equiv-x+\sum_{j\in R(n)}a_{(j-1)^-}a_j
\pmod{\mathfrak p^2}.$$
In the last sum, we gather the terms with the same value $\ell$ for
$(j-1)^-$: denoting by $\tau_\ell$ the sum of the variables $a_j$ for
$j\in\{2,\ldots,n\}$ such that $v(j)=-$ and $d_{j-1}=D_{j-1}=d_\ell$,
we obtain
$$\widetilde c_n\equiv-x+\sum_{\ell\in P(v)\cap S(v)}a_\ell\,\tau_\ell
\pmod{\mathfrak p^2}.$$
Noting that $a_\ell\in\widetilde{\mathfrak q}+\mathfrak p^2$ for
$\ell\in P(v)\cap S(v)\setminus\{L\}$ and that $\tau_L=\sigma_n$,
we get $a_L\sigma_n\in(x)+\widetilde{\mathfrak q}+\mathfrak p^2$.
Since $\sigma_n$ is invertible in $A_{\mathfrak p}$, we conclude that
$a_L\in xA_{\mathfrak p}+\widetilde{\mathfrak q}
A_{\mathfrak p}+\mathfrak p^2A_{\mathfrak p}$.
Altogether the remarks above show the inclusion
$$\mathfrak pA_{\mathfrak p}\subset xA_{\mathfrak p}
+\widetilde{\mathfrak q}A_{\mathfrak p}+\mathfrak p^2
A_{\mathfrak p}.$$
Joint with $\widetilde{\mathfrak q}\subset\mathfrak q\subset\mathfrak p$,
this gives statement \ref{it:LePNb}.
\end{proof}
The ideals in $A$ of the subvarieties
$$V=(\upphi_v)^{-1}(\mathcal Y(\mathbf Z_v))\quad\text{and}\quad
X=(\upphi_v)^{-1}(U_v\cap\mathcal X(\mathbf Z_w,(1,n-1)))$$
are $\mathfrak p$ and $\mathfrak q$, respectively. The local ring
$\mathscr O_{V,X}$ of $X$ along $V$ is the localization of
$\overline A=A/\mathfrak q$ at the ideal
$\overline{\mathfrak p}=\mathfrak p/\mathfrak q$.
Lemma~\ref{le:PrepNaka}~\ref{it:LePNb} combined with Nakayama's lemma
shows that the image of $x=x_1-x_2$ in $\overline A$ generates the
ideal $\overline{\mathfrak p}\,\overline A_{\overline{\mathfrak p}}$.
As a consequence, the order of vanishing of $x_1-x_2$ along $V$ is
equal to one, and by definition, this is the multiplicity of
$\mathcal Y(\mathbf Z_v)$ in the intersection product
$\mathcal X(\mathbf Z_w,(1,n-1))\cdot\,\BDConv_n^{\bm\lambda}\bigl|_\Delta$.
This proves Proposition~\ref{pr:IncMulI}~\ref{it:PrIMIb}.
\subsection{Inclusion, II}
\label{ss:IncMulII}
In this section, we again consider words $v$ and $w$ such that
$(v(1),w(1))=(+,-)$ and $\wt(v)=\wt(w)$ and explore the situation
where the path representing $v$ lies strictly above the one
representing $w$ (except of course at the two endpoints) but does
not stay parallel to it. We thus assume that there exists
$k\in\{2,\ldots,n-1\}$ such that $(v(k),w(k))=(+,-)$.
\begin{proposition}
\label{pr:IncMulII}
Under these assumptions,
$\mathcal Y(\mathbf Z_v)\not\subset\mathcal X(\mathbf Z_w,(1,n-1))$.
\end{proposition}
The proof of Proposition~\ref{pr:IncMulII} fills the remainder
of this section. Our argument is similar to the proof of
Proposition~\ref{pr:IncMulI}~\ref{it:PrIMIa}.
For each $\ell\in\{1,\ldots,n\}$, we define
$A_\ell=\mathbb C[x_2][x,a_1,\ldots,a_\ell]$,
where $x=x_1-x_2$. We introduce $\tilde z=z-x_2$.
In addition:
\vspace{-12pt}
\begin{itemize}
\item
let $K$ be the largest integer $k\in\{2,\ldots,n-1\}$ such that
$(v(k),w(k))=(+,-)$;
\item
for $\ell\in\{K,\ldots,n\}$, let $d_\ell$ be the weight of the word
$v(K+1)v(K+2)\cdots v(\ell)$, with the convention $d_K=0$;
\item
let $L$ be the smallest position $\ell>K$ such that
$(v(\ell),w(\ell))=(-,+)$ or $d_\ell>0$.
\end{itemize}
Set $\widetilde P_1=\tilde z-x$ and $\widetilde Q_1=a_1$.
For $\ell\in\{2,\ldots,L-1\}$, define by induction two polynomials
$\widetilde P_\ell$, $\widetilde Q_\ell$ in $A_\ell[z]$ as follows:
\begin{itemize}
\item
If $(v(\ell),w(\ell))=(+,+)$, then
$$\widetilde P_\ell=\widetilde P_{\ell-1}\quad\;\text{and}\quad\;
\widetilde Q_\ell=\begin{cases}
\displaystyle\frac{a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}
-\bigl(a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}\bigr)
\bigl(0\bigr)}{\tilde z}&\text{if $\ell<K$,}\\[8pt]
\displaystyle\frac{\widetilde Q_{\ell-1}-\widetilde Q_{\ell-1}(0)}{\tilde z}
&\text{if $\ell>K$.}
\end{cases}$$
\item
If $(v(\ell),w(\ell))=(-,+)$, then
$$\widetilde P_\ell=\frac{\widetilde P_{\ell-1}+a_\ell\widetilde
Q_{\ell-1}-\bigl(\widetilde P_{\ell-1}+a_\ell\widetilde Q_{\ell-1}\bigr)
\bigl(0\bigr)}{\tilde z}\quad\;\text{and}\quad\;
\widetilde Q_\ell=\widetilde Q_{\ell-1}.$$
\item
If $(v(\ell),w(\ell))=(+,-)$, then
$\widetilde P_\ell=\tilde z\,\widetilde P_{\ell-1}$ and
$\widetilde Q_\ell=a_\ell\widetilde P_{\ell-1}+\widetilde Q_{\ell-1}$.
\item
If $(v(\ell),w(\ell))=(-,-)$, then
$\widetilde P_\ell=\widetilde P_{\ell-1}+a_\ell\widetilde Q_{\ell-1}$ and
$\widetilde Q_\ell=\tilde z\,\widetilde Q_{\ell-1}$.
\end{itemize}
For $\ell\in\{1,\ldots,L\}$:
\vspace*{-10pt}
\begin{itemize}
\item
let $\mathring{\mathfrak q}_\ell$ be the ideal of
$B_\ell$ generated by $\{b_j\mid j\in P(w),\;j\leq\ell\}$;
\item
if $\ell\geq K$, let $\sigma_\ell$ be the sum of the $a_j$ for
$j\in\{K+1,\ldots,\ell\}$ such that $v(j)=-$ and $d_{j-1}=0$,
with the convention $\sigma_K=0$.
\end{itemize}
\begin{lemma}
\label{le:Induc2}
For $\ell\in\{1,\ldots,L-1\}$, we have
\vspace{-12pt}
\renewcommand\theenumi{(\roman{enumi})${}_\ell$}
\begin{enumerate}
\item
$\widetilde P_\ell(\tilde z)\equiv P_\ell(z)
\pmod{\mathring{\mathfrak q}_\ell[z]}\;$ and
$\;\widetilde Q_\ell(\tilde z)\equiv Q_\ell(z)
\pmod{\mathring{\mathfrak q}_\ell[z]}$,
\item
if $\ell\geq K$, then $\widetilde P_\ell(0)=\widetilde Q_K(0)\sigma_\ell$
and $\widetilde Q_\ell=\tilde z^{-d_\ell}\widetilde Q_K$.
\end{enumerate}
\renewcommand\theenumi{(\alph{enumi})}
\end{lemma}
\begin{proof}
One again proceeds by induction. The details are indeed straightforward,
except in the case where $(v(\ell),w(\ell))=(+,+)$ and $\ell>K$, where
one can follow the arguments offered in the proof of
Lemma~\ref{le:Induc} to get $a_\ell\in\mathring{\mathfrak q}_\ell$.
\end{proof}
We now distinguish three cases:
\begin{itemize}
\item
Assume that $d_{L-1}<0$. Then necessarily $(v(L),w(L))=(-,+)$. By
assertion~(ii)${}_{L-1}$ in Lemma~\ref{le:Induc2}, we get
$\widetilde Q_{L-1}(0)=0$. Using assertion (i)${}_{L-1}$ in that
lemma, we deduce that $Q_{L-1}(x_2)\in\mathring{\mathfrak q}_{L-1}$.
Then, by the identity $P_{L-1}S_{L-1}-Q_{L-1}R_{L-1}=1$, we see
that $P_{L-1}(x_2)$ is invertible in the ring
$B_{L-1}/\mathring{\mathfrak q}_{L-1}$. Thus,
$b_L=\bigl(P_{L-1}+a_LQ_{L-1}\bigr)\bigl(x_2\bigr)\times f_L^{-1}$
is invertible in $B_L/\mathring{\mathfrak q}_{L-1}B_L$.
We conclude that $\mathring{\mathfrak q}_L=B_L$, and therefore
$\mathring{\mathfrak q}_n=B_n$. Thus,
$U_v\cap\dot{\mathcal X}(\mathbf Z_w)=\varnothing$, so
$\mathcal X(\mathbf Z_w,(1,n-1))$ does not meet $U_v$ and cannot
contain $\mathcal Y(\mathbf Z_v)$.
\item
Assume that $d_{L-1}=0$ and $(v(L),w(L))=(-,+)$. We note that
$P_K(x_2)=0$ by construction. The identity $P_KS_K-Q_KR_K=1$ then implies
that $Q_K(x_2)$ is invertible in $B_K$, and by assertion~(i)${}_K$
in Lemma~\ref{le:Induc2}, $\widetilde Q_K(0)$
is invertible in $B_K/\mathring{\mathfrak q}_K$. Moreover,
$f_Lb_L=\bigl(P_{L-1}+a_LQ_{L-1}\bigr)\bigl(x_2\bigr)$
belongs to $\mathring{\mathfrak q}_L$. Using assertion~(ii)${}_{L-1}$
in Lemma~\ref{le:Induc2}, we deduce that
$$\bigl(\widetilde P_{L-1}+a_L\widetilde Q_{L-1}\bigr)\bigl(0\bigr)
=\widetilde Q_K(0)(\sigma_{L-1}+a_L)=\widetilde Q_K(0)\sigma_L$$
belongs to $\mathring{\mathfrak q}_L$ too. Therefore $\sigma_L$
belongs to $\mathring{\mathfrak q}_L$, hence to $\mathfrak q$.
However $\sigma_L\notin\mathfrak p$,
because $a_L$ is a summand in the sum that defines $\sigma_L$
whereas $L\notin P(v)$. We must then conclude that
$\mathfrak q\not\subset\mathfrak p$, in other words that
$\mathcal Y(\mathbf Z_v)\not\subset\mathcal X(\mathbf Z_w,(1,n-1))$.
\item
Assume that $d_{L-1}=0$ and $(v(L),w(L))=(+,+)$. As in the
previous case, we note that $\widetilde Q_K(0)$ is invertible in
$B_K/\mathring{\mathfrak q}_K$. But now we have
$f_Lb_L=\bigl(a_LP_{L-1}+Q_{L-1}\bigr)\bigl(x_2\bigr)$, so we get
$$\widetilde Q_K(0)(a_L\sigma_{L-1}+1)\in\mathring{\mathfrak q}_L$$
and then $a_L\sigma_{L-1}+1\in\mathfrak q$. Here however
$a_L\in\mathfrak p$, so $a_L\sigma_{L-1}+1\notin\mathfrak p$.
Again we must conclude that $\mathfrak q\not\subset\mathfrak p$ and
$\mathcal Y(\mathbf Z_v)\not\subset\mathcal X(\mathbf Z_w,(1,n-1))$.
\end{itemize}
Proposition~\ref{pr:IncMulII} is then proved.
\subsection{Loose ends}
\label{ss:LooseEnds}
We can now prove that the MV basis of $V(\varpi)^{\otimes n}$
satisfies the second formula in \eqref{eq:DefYw}. We consider
two words $v$ and $w$ in $\mathscr C_n$ with $w(1)=-$ and
$\wt(v)=\wt(w)$ and look for the coefficient of
$y_v$ in the expansion of $x_-\otimes y_{w'}$ on the MV basis,
where $w'$ is the word $w$ stripped of its first letter.
If $v(1)=-$, then this coefficient is zero except for $v=w$, in
which case the coefficient is one. This follows from Theorem~5.13
in \cite{BaumannGaussentLittelmann}.
If $v(1)=+$, then the path representing $v$ starts above the path
representing $w$. We distinguish two cases.
In the case where $v$ stays strictly above $w$ until the very end,
we can refer to Propositions~\ref{pr:IncMulI} and~\ref{pr:IncMulII}:
the coefficient of $y_v$ is non-zero only if $v$ stays parallel
to $w$ at distance two and the last letter of $w'$ is significant. If this
condition is fulfilled, then the coefficient is one.
In the case where $v$ and $w$ rejoin before the end, after $m$ letters,
then we write $v$ and $w$ as concatenations $+v_{(2)}v_{(3)}$ and
$-w_{(2)}w_{(3)}$, respectively, with $v_{(2)}$ and $w_{(2)}$ of length
$m-1$ and $v_{(3)}$ and $w_{(3)}$ of length $n-m$. By assumption,
$\wt v_{(3)}=\wt w_{(3)}$. We can then apply Proposition~\ref{pr:Trunc}
with $n_1=1$, $n_2=m-1$ and $n_3=n-m$: if $v_{(3)}\neq w_{(3)}$, then
the coefficient of $y_v$ in the expansion of $x_-\otimes y_{w'}$ is zero;
otherwise, it is equal to the coefficient of $y_{+v_{(2)}}$ in the expansion
of $x_-\otimes y_{w_{(2)}}$ on the MV basis of $V(\varpi)^{\otimes m}$.
Thus, Proposition~\ref{pr:Trunc} reduces the second case to the first
one, but for words of length $m$. The coefficient is then non-zero only
if $+v_{(2)}$ stays parallel to $-w_{(2)}$ at distance two and the last
letter of $w_{(2)}$ is significant, in which case the coefficient is
one.
To sum up: if $(v(1),w(1))=(+,-)$, then the coefficient of $y_v$ in the
expansion of $x_-\otimes y_{w'}$ is either zero or one; it is one if and
only if $v$ is obtained by flipping the first letter $-$ of $w$ into a
$+$ and flipping a significant letter $+$ in $w'$ into a $-$. This
shows that the MV basis satisfies the second formula in \eqref{eq:DefYw}.
We have proved:
\begin{theorem}
\label{th:MVbasis}
$(y_w)_{w\in\mathcal C_n}$ is the MV basis of $V(\varpi)^{\otimes n}$.
\end{theorem}
Putting Theorem~\ref{th:MVbasis} alongside Theorem~\ref{th:FrenKhov},
Proposition~\ref{pr:ProjCart}, and Theorem~1.11 in \cite{FrenkelKhovanov},
we obtain the result stated in the introduction.
\section{Introduction}
Extending physical parameters from the real axis to the complex plane largely deepens our understanding of quantum mechanics \cite{Bender1998, Moiseyev2011} and enriches our controllability of quantum systems \cite{Mostafazadeh2009, Chong2010, Wan2011, Feng2012, Peng2014, Hodaei2014, Miao2016, Feng2017, El-Ganainy2018}. One intriguing phenomenon that emerges from this extension is the non-Hermitian degeneracy, known as the exceptional point (EP). In contrast to level degeneracy points in Hermitian systems, the EP is associated with level coalescence, in which not only the eigenenergies but also the eigenstates become identical \cite{Heiss2004, Berry2004}. Many distinctive effects without Hermitian counterparts arise around the EP, such as the square root frequency dependence \cite{Peng2014} and the nontrivial topological property resulting from the Riemann sheet structures of the EP-ended branch-cut in the complex parameter plane \cite{Rotter2015, Doppler2016, Xu2016, Hassan2017, Leykam2017, zhou2018, Shen2018, Zhang2018, Ding2018, GoldZak2018}. Other intriguing phenomena include unidirectional reflectionless and coherent perfect absorption due to the spectral singularity in non-Hermitian systems \cite{Chong2010, Wan2011, Lin2011, Feng2012, Sun2014, Jin2018}.
Around the $n$-th order EP \cite{Heiss2008,Demange2012}, where the coalescence of $n$ levels occurs, the eigenenergy shows an $\epsilon^{1/n}$ dependence on the perturbative parameter $\epsilon$. This result stands in sharp contrast to the Hermitian degeneracy, where the eigenenergy has a linear or higher-order dependence. That means the eigenenergies around EPs have a diverging susceptibility to parameter changes, since $d\epsilon^{1/n}/d\epsilon=\frac{1}{n}\epsilon^{1/n-1}$ diverges at $\epsilon=0$. Based on this divergence, schemes of parameter estimation (or sensing) working around EPs were proposed for the purpose of beating the metrology limit of Hermitian systems \cite{Wiersig2014, Wiersig2016}. Recently, this idea has been experimentally studied \cite{Chen2017, Hodaei2017, Zhao2018}. However, the diverging eigenvalue susceptibility does not necessarily lead to arbitrarily high {\em sensitivity}. In parameter estimation, the sensitivity is usually defined as the minimum parameter change that can be determined above the noise level within a given data acquisition time. The sensitivity thus defined is more relevant to practical applications of parameter estimation than the eigenvalue susceptibility. In Hermitian systems, the sensitivity is inversely proportional to the eigenvalue susceptibility, i.e., the larger the susceptibility, the higher the sensitivity. Such a relation is based on the fact that all the eigenstates are distinguishable and the transitions between these eigenstates can be excited to measure the eigenvalue susceptibility. However, non-Hermitian systems are fundamentally different. Because different eigenstates of non-Hermitian systems are in general non-orthogonal and even become identical at the EP, exciting the transitions between different eigenstates near the EP to measure the eigenvalue susceptibility is infeasible.
In this paper, we study the sensitivity around the EP of a coupled cavity system for its immediate relevance to recent experimental studies \cite{Chen2017, Hodaei2017}. Nonetheless, the theoretical formalism and the main conclusion - no dramatic sensitivity enhancement at the EP - are applicable to a broad range of systems, such as magnon-cavity systems \cite{Zhang2016, Zhang2017} and opto-mechanical systems \cite{Jing2014, lue2015, Xu2016a}. We use the exact formalism of quantum Fisher information (QFI) \cite{Paris2009} to characterize the sensitivity of parameter estimation. The QFI formalism enables us to evaluate the sensitivity without referring to a specific measurement scheme - be it phase, intensity, or any other complicated measurements of the output from the system. We find that no sensitivity boost exists at the EP. The reason boils down to the coalescence of the eigenstates around the EP. Due to the indistinguishability of different eigenstates around the EP, not one but all eigenstates are equally excited by an arbitrary detection field. The average of all eigenstates exactly cancels out the singularity in the susceptibility divergence of the eigenenergies and makes the sensitivity normal around the EP.
\section{Model}
We consider two near resonance coupled cavities with the effective non-Hermitian Hamiltonian
\begin{equation}
\hat{H}_{\mathrm{eff}}= (\nu_{a}-i \frac{\gamma_{a}}{2}) \hat{a}^{\dagger} \hat{a} + (\nu_{b}-i\frac{\gamma_{b}}{2}) \hat{b}^{\dagger} \hat{b} +g (\hat{a}^{\dagger}\hat{b}+\hat{b}^{\dagger} \hat{a}),
\label{eq:EffectiveH}
\end{equation}
where $\nu_{a(b)}$ is the cavity frequency and $\gamma_{a(b)}$ is the decay rate induced by the photon leakage of the cavity $a(b)$, $g$ is the coupling strength, and the Planck constant $\hbar$ is taken as unity throughout this paper. For the quadratic Hamiltonian in Eq. \eqref{eq:EffectiveH}, the dynamics are captured by the coefficient matrix
\begin{equation}
M=(\bar{\nu}-i \frac{\bar{\gamma}}{2})\mathbb{I}+\left(
\begin{array}{cc}
\frac{\epsilon}{2}-i\frac{\gamma}{2} & g \\
g & -\frac{\epsilon}{2}+ i\frac{ \gamma}{2}
\end{array}
\right),
\label{eq:CoefficientM}
\end{equation}
where $\bar{\nu}=\frac{\nu_{a}+\nu_{b}}{2}$ and $\epsilon=\nu_{a}-\nu_{b}$ are the average and detuning of the cavity frequencies, respectively, and $\bar{\gamma}=\frac{\gamma_{a}+\gamma_{b}}{2}$ and $ \gamma=\frac{\gamma_{a}-\gamma_{b}}{2}$ are the average and difference in decay rates, respectively. In sensing experiments, the detuning $\epsilon \rightarrow 0$ is a perturbation term and can be introduced, e.g., by a nanoparticle that changes the effective volume and hence the frequency of one of the cavities, say, cavity $a$ \cite{Chen2017}.
The eigenvalues and the corresponding right eigenvectors are obtained by diagonalizing the coefficient matrix $M$. Results are $\nu_{\pm}=\bar{\nu}-i\frac{\bar{\gamma}}{2}\pm \sqrt{g^{2}+\left(\frac{\epsilon}{2}-i\frac{\gamma}{2}\right)^2}$ and $\psi^{\mathrm{R}}_{\pm}=z_{\pm} [(\frac{\epsilon}{2}-i\frac{\gamma}{2}) \pm \sqrt{g^{2}+(\frac{\epsilon}{2}-i\frac{\gamma}{2})^{2}}, g]^{T}$, where $z_{\pm}$ are the normalization factors such that $ \psi^{\mathrm{R} \dagger}_{\pm}\psi^{\mathrm{R}}_{\pm}=1$. The left eigenvectors $\psi^{\mathrm{L}}_{\pm}=\frac{1}{z_{\pm}\sqrt{g^{2}+(\frac{\epsilon}{2}-i\frac{\gamma}{2})^{2}}}[\pm g, \sqrt{g^{2}+(\frac{\epsilon}{2}-i\frac{\gamma}{2})^{2}}\mp (\frac{\epsilon}{2}-i\frac{\gamma}{2})]$, which are in general not the Hermitian conjugate of the right eigenvectors, are determined by the conditions that $\psi^{\mathrm{L}}_{i}\psi^{\mathrm{R}}_{j}=\delta_{i,j}$. The EP occurs at $\epsilon=0$ and $g=|\gamma|/2$, where the eigenvalues are degenerate and the eigenstates coalesce. Around the EP, the energy splitting shows a square root perturbation dependence on $\epsilon$ as $\Delta\equiv(\nu_{+}-\nu_{-}) \approx 2\sqrt{|\gamma|\left(g-\frac{|\gamma|}{2}\right)-i\gamma \frac{\epsilon}{2}}$. The susceptibility of the energy splitting diverges at the EP as
\begin{equation}
\chi\equiv \frac{\partial \Delta }{\partial \epsilon} \approx \frac{-i \gamma/2}{\sqrt{|\gamma|\left(g-\frac{|\gamma|}{2}\right)-i\gamma \frac{\epsilon}{2}}}.
\label{eq:Susceptibility}
\end{equation}
The eigenvectors $\psi^{R}_{\pm}$ of the non-Hermitian $M$ are in general non-orthogonal and coalescent at the EP as $|\psi^{R \dagger}_{+}\psi^{R}_{-}|\approx 1-\frac{2}{|\gamma|}\sqrt{(g-\frac{|\gamma|}{2})^2+(\frac{\epsilon}{2})^{2}}$.
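These expressions are easy to verify numerically. The following sketch (illustrative parameters, not from the paper) diagonalizes the coefficient matrix of Eq. \eqref{eq:CoefficientM} and confirms both the square-root closing of the splitting and the coalescence of the right eigenvectors at the EP:

```python
import numpy as np

def coefficient_matrix(eps, g, gamma_a=5.0, gamma_b=1.0):
    """Coefficient matrix M of the two coupled cavities (nu_bar set to 0)."""
    gbar = 0.5 * (gamma_a + gamma_b)
    gam = 0.5 * (gamma_a - gamma_b)
    return -0.5j * gbar * np.eye(2) + np.array(
        [[0.5 * eps - 0.5j * gam, g],
         [g, -0.5 * eps + 0.5j * gam]])

# gamma = 2 for these rates, so the EP sits at eps = 0, g = |gamma|/2 = 1
for g in [1.0, 1.01, 1.1]:
    vals, vecs = np.linalg.eig(coefficient_matrix(0.0, g))
    delta = abs(vals[0] - vals[1])                  # energy splitting |Delta|
    overlap = abs(np.vdot(vecs[:, 0], vecs[:, 1]))  # |psi_+^R dagger psi_-^R|
    print(f"g = {g:5.2f}   |Delta| = {delta:.4f}   overlap = {overlap:.4f}")
# |Delta| closes as a square root in (g - |gamma|/2), while the
# eigenvector overlap approaches 1 (coalescence) at the EP
```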
\section{Quantum Fisher information}
In general, sensing can be viewed as a scattering process. The input state $\rho^{\mathrm{in}}$, after scattering off the sensing system, yields an output state $\rho(\epsilon)$, which depends on the parameter $\epsilon$ that is to be estimated. A suitable measurement of the output state $\rho(\epsilon)$ then determines the parameter $\epsilon$. The sensitivity is defined as
\begin{equation}
\eta=\delta \epsilon_{\mathrm{min}} \sqrt{T},
\label{eq:Sensitivity}
\end{equation}
where $\delta \epsilon_{\mathrm{min}}$ is the minimum detectable parameter change for a detection time $T$ \cite{Degen2017}. In general, the sensitivity depends on the specific measurement scheme, which, in optics, is usually the measurement of the phase, the intensity, or various quadratures. However, there is a theoretical lower bound for all kinds of measurement, which is known as the quantum Cram\'{e}r-Rao bound \cite{Cramer1946}
\begin{equation}
\eta \ge 1/\sqrt{F^{\epsilon} n/T}.
\label{eq:CramerRaoBound}
\end{equation}
Here $F^{\epsilon}$ is the QFI of the output state $\rho(\epsilon)$ and $n/T$ is the number of experiment repetitions per unit time. Mathematically, QFI is defined as the infinitesimal Bures distance between two close-by output states $\rho(\epsilon)$ and $\rho(\epsilon+\delta\epsilon)$ \cite{Braunstein1994}, namely
\begin{equation}
F^{\epsilon}= \lim_{\delta \epsilon \rightarrow 0}\frac{4}{\delta\epsilon^{2}}d^{2}_{B}[\rho(\epsilon),\rho(\epsilon+\delta\epsilon)].
\label{eq:QFI}
\end{equation}
Here $d_{\mathrm{B}}(\rho,\rho^{\prime})$ is the Bures distance, which describes the indistinguishability between the states $\rho$ and $\rho^{\prime}$ \cite{Uhlmann1976, Bures1969}. Formally, it has an expression
\begin{equation}
d^{2}_{\mathrm{B}}\left(\rho,\rho^{\prime}\right)= 2-2\sqrt{\mathcal{F}(\rho,\rho^{\prime})},
\label{eq:BuresDistance}
\end{equation}
where $\mathcal{F}(\rho,\rho^{\prime})=\left[\mathrm{Tr}\sqrt{\rho^{1/2} \rho^{\prime} \rho^{1/2}}\right]^{2}$ is the fidelity between the states $\rho$ and $\rho^{\prime}$. A particular advantage of the QFI is that it is independent of the specific measurement scheme. In the following, we use the QFI to characterize the sensitivity of a non-Hermitian system. According to the definition of QFI in Eqs. \eqref{eq:QFI} and \eqref{eq:BuresDistance}, the highest sensitivity is determined by the change of the state $\rho(\epsilon)$ in response to the variation of the parameter $\epsilon$.
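As a concrete illustration of the definitions in Eqs. \eqref{eq:QFI} and \eqref{eq:BuresDistance}, the following sketch (our own toy example, not from the paper) evaluates the QFI of a slightly mixed qubit state by finite-differencing the Bures distance; for $\rho=p|\psi\rangle\langle\psi|+(1-p)\mathbb{I}/2$ with $|\psi\rangle=(\cos\epsilon,\sin\epsilon)^{T}$, the exact value is $F^{\epsilon}=4p^{2}$:

```python
import numpy as np

def herm_sqrt(A):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, v = np.linalg.eigh(A)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = [Tr sqrt(sqrt(rho) sigma sqrt(rho))]^2."""
    s = herm_sqrt(rho)
    return np.real(np.trace(herm_sqrt(s @ sigma @ s)))**2

def qfi_bures(rho_of_eps, eps, d=1e-3):
    """QFI as the infinitesimal Bures distance: F = 8 [1 - sqrt(F(rho, rho'))] / d^2."""
    f = fidelity(rho_of_eps(eps), rho_of_eps(eps + d))
    return 8.0 * (1.0 - np.sqrt(f)) / d**2

def rho_of_eps(eps, p=0.95):
    """Toy state: a slightly mixed qubit rotating with the parameter eps."""
    psi = np.array([np.cos(eps), np.sin(eps)])
    return p * np.outer(psi, psi) + (1.0 - p) * 0.5 * np.eye(2)

print(qfi_bures(rho_of_eps, 0.3))   # close to the exact QFI 4 p^2 = 3.61
```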
The output state $\rho (\epsilon)$ and the input state $\rho^{\mathrm{in}}$ are connected via the scattering process \cite{Taylor2006, Newton2013}. The input $\nu$-frequency photon $\hat{c}^{\mathrm{in}}_{\nu}$, after scattering by the sensing system, gives the output photon $\hat{c}^{\mathrm{out}}_{\nu}$. Formally, we have
\begin{equation}
\hat{c}^{\mathrm{out}}_{\nu}=\hat{c}_{\nu}^{\mathrm{in}}-\hat{s}^{\mathrm{in}}_{\nu},~ \hat{s}=\frac{1}{\nu+\hat{H}\wedge} [\hat{V}, \hat{c}_{\nu}],
\label{eq:InputOutput-S}
\end{equation}
where the input and output operators are defined as $\hat{o}^{\mathrm{in/out}}(t)= \hat{\Omega}^{\dagger}_{\pm} \hat{o}(t) \hat{\Omega}_{\pm}$ with the M{\o}ller operators $\hat{\Omega}_{\pm}=\lim_{t^{\prime}\rightarrow \mp \infty} e^{i \hat{H}t^{\prime}} e^{-i \hat{H}_{0}t^{\prime}}$, and $\hat{H}=\hat{H}_{0}+\hat{V}$ is the total Hamiltonian with $\hat{H}_{0}$ being the free Hamiltonian and $\hat{V}$ being the interaction between the sensing system and the input photons (see Appendix \ref{sec:QST-QFI-app} for details). Here the symbol ``$\wedge$'' denotes the commutation operation, i.e., $\hat{A}\wedge \hat{B}=[\hat{A},\hat{B}]$, and the subscript $\nu$ in $\hat{s}^{\mathrm{in}}_{\nu}=\int \frac{dt}{\sqrt{2\pi}}e^{i\nu t} \hat{s}^{\mathrm{in}} (t)$ denotes the $\nu$-frequency component.
We consider a general case of linear systems. The Hamiltonian reads $\hat{H}_{0}=\int d\nu\, \nu \hat{c}^{\dagger}_{\nu} \hat{c}_{\nu}+\sum_{lk}\hat{o}^{\dagger}_{l}M_{lk}\hat{o}_{k}$ and $\hat{V}=\int \frac{d \nu}{\sqrt{2\pi}} \sum_{j} \sqrt{\gamma_{ex,j}} (\hat{o}^{\dagger}_{j} \hat{c}_{\nu}+h.c.)$, where the $\hat{o}_{l}$ are bosonic operators satisfying $[\hat{o}_{l}, \hat{o}^\dagger_{k}]=\delta_{l,k}$ and $\gamma_{ex,j}$ is the coupling strength between the input photons and the $j$-th mode of the sensing system. For example, $\hat{o}_{1}=\hat{a}$, $\hat{o}_{2}=\hat{b}$, and $\gamma_{ex,j}=\gamma_{ex}\delta_{j, 1}$ for the coupled cavity system shown in Fig. \ref{fig:SensingScheme}. Taking the interaction $\hat{V}$ as a perturbation and expanding to second order, we obtain
\begin{equation}
\hat{c}^{\mathrm{out}}_{\nu}\approx \hat{c}^{\mathrm{in}}_{\nu}+\sum_{lj}(M_{\nu}^{-1})_{lj}\sqrt{\gamma_{ex,j}} [\frac{\hat{o}^{\mathrm{in}}_{l}(\nu)}{\sqrt{2\pi}} -i\sqrt{\gamma_{ex,l}}\hat{c}^{\mathrm{in}}_{\nu}],
\label{eq:ScatteringTheory}
\end{equation}
where $M_{\nu}=(\nu \mathbb{I}-M$) is the frequency-shifted coefficient matrix. The output state $\rho(\epsilon)=\bigotimes_{\nu} P^{\nu}_{n,m}\frac{(\hat{c}^{\mathrm{out} \dagger}_{\nu})^{n}}{\sqrt{n!}}|0\rangle\langle0| \frac{(\hat{c}^{\mathrm{out}}_{\nu})^{m}}{\sqrt{m!}}$, where $P^{\nu}_{n,m}=\langle n_{\nu}| \rho^{\mathrm{in}}|m_{\nu}\rangle$ is the density matrix element of the input state. Here we assume that the input state is a product state of different frequency modes. A small disturbance $\delta \epsilon$ of the sensing system changes the output state to
\begin{eqnarray}
\rho(\epsilon+\delta \epsilon)&=&\rho(\epsilon)+\partial_{\epsilon}\rho(\epsilon)\delta \epsilon+\hat{O}(\delta \epsilon^2),
\label{eq:FunctionalAnalysis}\\
\partial_{\epsilon}\rho(\epsilon)&=& \int \frac{d \nu}{\sqrt{2\pi}} \sum_{lj}\frac{\partial \rho(\epsilon)}{\partial (M^{-1}_{\nu})_{lj}}\frac{d (M^{-1}_{\nu})_{lj}}{d \epsilon} .
\label{eq:Differential}
\end{eqnarray}
The QFI, with the expansion in Eq. \eqref{eq:FunctionalAnalysis} kept to the leading order of $\delta \epsilon$, becomes
\begin{equation}
F^{\epsilon}=2\sum_{\alpha,\beta} \frac{|\langle \mu_{\alpha}| \partial_{\epsilon}\rho(\epsilon)|\mu_{\beta}\rangle|^{2}}{p_{\alpha}+p_{\beta}} ,
\label{eq:QFI-LRep}
\end{equation}
where $|\mu_{\alpha}\rangle$ is the $\alpha$-th eigenstate of $\rho(\epsilon)$ with eigenvalue $p_{\alpha}$. The output state $\rho(\epsilon)$ and its differential $\partial_{\epsilon} \rho(\epsilon)$, as functions of $(M^{-1}_{\nu})_{lj}$, are well defined unless the matrix $M_{\nu}$ is singular, i.e., $\det[M_{\nu}]=0$ and $M^{-1}_{\nu}$ diverges. Note that such a singular condition is independent of the EP. For example, the coefficient matrix in Eq. \eqref{eq:CoefficientM} shows no divergence at the EP, since $\det[M_{\nu}]=(\nu-\bar{\nu}+i \frac{\bar{\gamma}}{2})^{2}\neq 0$ for all frequencies. Therefore, the QFI of a sensing system with well-defined $M_{\nu}$ shows no singularity in $\epsilon$ at the EP.
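A direct implementation of Eq. \eqref{eq:QFI-LRep} is straightforward. The sketch below (our own illustration with a toy state, not tied to the cavity model) evaluates the QFI from the spectral decomposition of $\rho(\epsilon)$; for the test state used here the exact answer is $4p^{2}$:

```python
import numpy as np

def qfi_spectral(rho, drho, tol=1e-12):
    """QFI from the eigendecomposition of rho:
    F = 2 sum_{a,b} |<mu_a| d_eps rho |mu_b>|^2 / (p_a + p_b)."""
    p, mu = np.linalg.eigh(rho)
    d = mu.conj().T @ drho @ mu          # d_eps rho in the eigenbasis of rho
    F = 0.0
    for a in range(len(p)):
        for b in range(len(p)):
            if p[a] + p[b] > tol:        # skip the kernel of rho
                F += 2.0 * abs(d[a, b])**2 / (p[a] + p[b])
    return F

# illustrative state rho = p |psi><psi| + (1-p) I/2 with |psi> = (cos e, sin e)
e, p = 0.3, 0.95
psi = np.array([np.cos(e), np.sin(e)])
dpsi = np.array([-np.sin(e), np.cos(e)])          # d psi / d eps
rho = p * np.outer(psi, psi) + (1 - p) * 0.5 * np.eye(2)
drho = p * (np.outer(dpsi, psi) + np.outer(psi, dpsi))
print(qfi_spectral(rho, drho))   # exact value 4 p^2 = 3.61
```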
For completeness, we briefly comment on sensing systems with $\det[M_{\nu}]=0$. In such a case, $\rho(\epsilon)$ and its differential $\partial_{\epsilon}\rho(\epsilon)$ are, in general, singular because the divergence of $(M^{-1}_{\nu})_{lj}$ makes the output state $\rho(\epsilon)$ sensitive to the parameter $\epsilon$: a small change of $\epsilon$ can produce an abrupt change of $\rho(\epsilon)$. Physically, the abrupt change of the output state indicates a non-equilibrium phase transition. An explicit example is the lasing transition of a gain cavity system \cite{DeGiorgio1970}. By embedding a gain medium in cavity $b$ and applying optical pumping, the decay rate is effectively reduced to $\gamma_{b}^{\prime}$ and can even change sign (see Fig. \ref{fig:SensingScheme-G}). That yields the lasing threshold $\gamma^{\prime}_{b}=-4\frac{g^2}{\gamma_{a}}$ if $g<\gamma_{a}/2$, or $\gamma^{\prime}_{b}=-\gamma_{a}$ if $g\ge \gamma_{a}/2$. Above the threshold, the system is in the lasing phase. The singular point is in general not related to the EP, which occurs at $g=|\gamma|/2=(\gamma_{a}-\gamma_{b}^{\prime})/4$, unless the non-equilibrium phase transition coincides with the EP. An example is the $\mathcal{PT}$ phase transition that occurs at $g=\gamma_{a}/2$ and $\gamma^{\prime}_{b}=-\gamma_{a}$. But even in the case of such a coincidence, the divergence of the QFI is caused by the phase transition rather than by the EP. This is evidenced by the fact that $F^{\epsilon}$, as a function of $(M^{-1}_{\nu})_{lj}$, diverges as $\int \frac{d \nu}{2\pi}|(M^{-1}_{\nu})_{lj}|^{\alpha}$ near the transition point, where $\alpha$ is the critical exponent. For example, $\alpha=2$ for the lasing transition (see Appendix \ref{sec:APsystem-app} for details). The above discussion is based on a linear theory in which the dynamics of the sensing system are captured by a linear matrix $M_{lk}$.
However, near the non-equilibrium transition, the critical fluctuations diverge and their effects become nonlinear. The diverging critical fluctuations may prevent the singular behavior of the QFI. A systematic study on the competition between the critical fluctuation and the abrupt change of output near the non-equilibrium phase transition is needed before a conclusion can be made on whether the phase transition can dramatically enhance the sensitivity of parameter estimation, which, however, is beyond the scope of this paper.
Physically, the lack of divergence of QFI at the EP is due to the coalescence of the eigenvectors (quasinormal modes). The coefficient matrix in Eq. \eqref{eq:CoefficientM} can be diagonalized as $M_{\nu}=VD_{\nu}V^{-1}$, where $D_{\nu}$ is a diagonal matrix of eigenvalues $(\nu-\nu_{\pm})$ and $V$ is the matrix composed of the eigenvectors $\psi^{R}_{\pm}$. The differential is $\frac{d M_{\nu}}{d \epsilon}=\frac{1}{2}(\mathbb{I}+\sigma_{z})+\frac{1}{2}[(\frac{\psi^{R}_{+,1}}{\psi^{R}_{+,2}}+\frac{\psi^{R}_{-,1}}{\psi^{R}_{-,2}})+(\frac{\psi^{R}_{+,1}}{\psi^{R}_{+,2}}-\frac{\psi^{R}_{-,1}}{\psi^{R}_{-,2}})\chi]\sigma_{+} $, where $\sigma_{x/y/z}$ are the Pauli matrices, $\sigma_{\pm}=\frac{1}{2}(\sigma_{x}\pm i \sigma_{y})$, and $\psi^{R}_{\pm,i}$ denotes the $i$-th element of the right eigenvector $\psi^{R}_{\pm}$. The term $(\frac{\psi^{R}_{+,1}}{\psi^{R}_{+,2}}-\frac{\psi^{R}_{-,1}}{\psi^{R}_{-,2}})=\frac{\Delta}{g}$ vanishes at the EP due to the eigenvector coalescence, canceling the $\Delta^{-1}$ susceptibility divergence near the EP.
The analysis based on Eq. \eqref{eq:FunctionalAnalysis} is applicable for a coefficient matrix $M_{\nu}$ of any dimensions and hence an EP of arbitrary order. Therefore, the QFI shows no divergence at the EP in general.
\begin{figure}[tpb]
\centering
\includegraphics[width=1.0\columnwidth]{Scheme-I}
\caption{(a) Schematic of a coupled cavity sensing system. Two cavities, $a$ and $b$, coupled through photon transmission with coupling strength $g$, have rates $\gamma_{a}$ and $\gamma_{b}$ of leakage to free space, respectively. A waveguide is coupled to cavity $a$ for photon input and output. The waveguide-cavity coupling strength is characterized by the decay rate $\gamma_{ex}$. (b) Energy diagram of the sensing system. The EP occurs at the point (vertical dashed line) where both the real part and the imaginary part of the eigenfrequencies are degenerate. (c) Overlap of the quasinormal modes $\psi^{R}_{\pm}$. Quasinormal modes are in general non-orthogonal and coalesce at the EP. Parameters are $\gamma_{b}=1.0$, $\gamma_{a}=5.0\gamma_{b}$, $\gamma_{ex}=0.1 \gamma_{b}$, and $\nu_{a}=\nu_{b}$.}
\label{fig:SensingScheme}
\end{figure}
\section{Input-output theory}
We consider the configuration of a coupled cavity system with input and output channels (as shown in Fig. \ref{fig:SensingScheme}). The QFI is extracted from the output for the parameter estimation (e.g., estimation of the frequency of a cavity). In addition to the waveguide input and output, we also include the realistic leakage into the free space, with rates $\gamma_{a/b}$ for cavity $a/b$. The Hamiltonian of the open system is written as
\begin{equation}
\hat{H}=\hat{H}_{S}+\hat{H}_{I}+\hat{H}_{B},
\label{eq:TotalHamiltonian}
\end{equation}
where $\hat{H}_{S}=\nu_{a} \hat{a}^{\dagger} \hat{a} + \nu_{b} \hat{b}^{\dagger} \hat{b} + g (\hat{a}^{\dagger} \hat{b} + \hat{b}^{\dagger} \hat{a})$ is the Hamiltonian of the coupled cavity system, $\hat{H}_{I}= \int \frac{d\nu}{\sqrt{2\pi}} [ \hat{a}^{\dagger} (\sqrt{\gamma_{a}} \hat{a}_{\nu}+\sqrt{\gamma_{ex}} \hat{c}_{\nu})+\sqrt{\gamma_{b}} \hat{b}^{\dagger} \hat{b}_{\nu}+h.c.]$ is the coupling to the open channel and the free space photons, and $\hat{H}_{B}=\int d\nu \nu(\hat{a}^{\dagger}_{\nu} \hat{a}_{\nu}+\hat{b}^{\dagger}_{\nu} \hat{b}_{\nu}+\hat{c}^{\dagger}_{\nu} \hat{c}_{\nu})$ is the non-interacting Hamiltonian of the open channel and the free space photons. Here $\hat{a}_{\nu} (\hat{b}_{\nu})$ and $\hat{c}_{\nu}$ are the $\nu$ frequency annihilation operators of the free space photons for cavity $a$ ($b$) and the waveguide photons, respectively.
For the input state (the photon state in the remote past $t=-\infty$) $\rho^{\mathrm{in}}:=\rho_{a}(-\infty)\otimes \rho_{b}(-\infty)\otimes \rho_{c}(-\infty)$, we are to determine the waveguide output state at the remote future $\rho^{\mathrm{out}}_{c}:=\rho_{c}(\infty)$.
In the Markovian noise process, the input-output theory gives \cite{Collett1984}
\begin{equation}
\hat{c}^{\mathrm{out}}(t)-\hat{c}^{\mathrm{in}}(t)=-i \sqrt{\gamma_{ex}} \hat{a}(t),
\label{eq:InputOutputR}
\end{equation}
where $\hat{c}^{\mathrm{in}}(t)=\lim_{t_{i}\rightarrow -\infty} \int \frac{d\nu}{\sqrt{2\pi}} e^{-i \nu (t-t_{i})} \hat{c}_{\nu}(t_{i})$ and $\hat{c}^{\mathrm{out}}(t)=\lim_{t_{f}\rightarrow \infty} \int \frac{d\nu}{\sqrt{2\pi}} e^{i \nu (t_{f}-t)} \hat{c}_{\nu}(t_{f})$ are the noise operators at $t=-\infty$ (input) and $t=+\infty$ (output), respectively. The evolution of the cavity operators $\hat{a}(t)$ and $\hat{b}(t)$ is governed by the quantum Langevin equations
\begin{subequations}\label{eq:QLangevinE}
\begin{eqnarray}
\partial_{t} \hat{a}(t)&=& (-i\nu_{a}-\frac{\gamma_{a}^{\prime}}{2}) \hat{a}(t)-i g \hat{b}(t)- i\sqrt{\gamma_{a}} \hat{a}^{\mathrm{in}}(t)-i \sqrt{\gamma_{ex}}\hat{c}^{\mathrm{in}}(t),~~ \label{eq:QLangevinE-a}\\
\partial_{t} \hat{b}(t)&=& (-i\nu_{b}-\frac{\gamma_{b}}{2}) \hat{b}(t)-i g \hat{a}(t)- i \sqrt{\gamma_{b}} \hat{b}^{\mathrm{in}}(t),
\label{eq:QLangevinE-b}
\end{eqnarray}
\end{subequations}
where $\gamma_{a}^{\prime}=\gamma_{a}+\gamma_{ex}$ and the definitions of $\hat{a}^{\mathrm{in}}(t)$ and $\hat{b}^{\mathrm{in}}(t)$ are similar to that of $\hat{c}^{\mathrm{in}}(t)$. The input-output relation is found to be
\begin{align}
\hat{c}^{\mathrm{out}}(\omega)=&\hat{c}^{\mathrm{in}}(\omega)-i\sqrt{\gamma_{ex} }G_{a}(\omega)\left[\sqrt{\gamma_{a}}\hat{a}^{\mathrm{in}}(\omega)+\sqrt{\gamma_{ex}} \hat{c}^{\mathrm{in}}(\omega)\right] \nonumber \\
&-i\sqrt{\gamma_{ex} \gamma_{b}}G_{a}(\omega)g G^{(0)}_{b} \hat{b}^{\mathrm{in}}(\omega),
\label{eq:WaveguideO}
\end{align}
where $G_{a}(\omega)=\frac{1}{\omega-\nu_{a}+i \frac{\gamma_{a}^{\prime}}{2}-g^{2}G_{b}^{(0)}(\omega)}$ is the dressed propagator of cavity $a$, and $G_{b}^{(0)}(\omega)=\frac{1}{\omega-\nu_{b}+i\frac{\gamma_{b}}{2}}$ is the free propagator of cavity $b$. The solution is written in the frequency domain via the Fourier transform $\hat{o}(\omega)=\int \frac{d t}{\sqrt{2\pi}} \hat{o}(t) e^{i \omega t}$. Comparing Eq. \eqref{eq:ScatteringTheory} with Eq. \eqref{eq:WaveguideO}, we find a correspondence between the two theories: $\hat{o}_{1}=\hat{a}$, $\hat{o}_{2}=\hat{b}$, $\gamma_{ex,j}=\gamma_{ex} \delta_{j,1}$, $\hat{o}^{\mathrm{in}}_{1}(\nu)=-i\sqrt{2\pi \gamma_{a}}\,\hat{a}^{\mathrm{in}}(\nu)$, and $\hat{o}^{\mathrm{in}}_{2}(\nu)=-i\sqrt{2\pi \gamma_{b}}\,\hat{b}^{\mathrm{in}}(\nu)$. For a given input state $\rho^{\mathrm{in}}$, Eq. \eqref{eq:WaveguideO} provides a way to calculate the output average of any waveguide operator $\hat{o}(\hat{c}_{\nu},\hat{c}^{\dagger}_{\nu})$ and hence the waveguide output state $\rho_{c}^{\mathrm{out}}$ (see Appendix \ref{sec:WignerR-app} for details).
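As a consistency check on Eq. \eqref{eq:WaveguideO} (a numerical sketch with illustrative parameters, not from the paper), the poles of the dressed propagator $G_{a}(\omega)$ should coincide with the eigenvalues $\nu_{\pm}$ of the coefficient matrix once $\gamma_{a}$ is replaced by $\gamma^{\prime}_{a}=\gamma_{a}+\gamma_{ex}$:

```python
import numpy as np

nu_a = nu_b = 0.0
gamma_a, gamma_b, gamma_ex, g = 5.0, 1.0, 0.1, 1.5
gamma_ap = gamma_a + gamma_ex                    # gamma_a' = gamma_a + gamma_ex

def G_b0(w):
    """Free propagator of cavity b."""
    return 1.0 / (w - nu_b + 0.5j * gamma_b)

def G_a(w):
    """Dressed propagator of cavity a."""
    return 1.0 / (w - nu_a + 0.5j * gamma_ap - g**2 * G_b0(w))

# eigenvalues nu_pm of the coefficient matrix with gamma_a -> gamma_a'
center = 0.5 * (nu_a + nu_b) - 0.25j * (gamma_ap + gamma_b)
root = np.sqrt(g**2 + (0.5 * (nu_a - nu_b) - 0.25j * (gamma_ap - gamma_b))**2)
nu_p, nu_m = center + root, center - root

print(abs(1.0 / G_a(nu_p)), abs(1.0 / G_a(nu_m)))   # both vanish: poles at nu_pm
```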
Now, to be specific, we consider a Gaussian input state, which is the type most commonly used in experiments. The output state must also be Gaussian, since the scattering is a linear transformation. The Gaussian form enables an exact calculation of the QFI. We assume that the free space and the waveguide are in thermal equilibrium at temperature $1/\beta$ (the Boltzmann constant $k_{\mathrm{B}}$ is taken as unity) and that the input signal is in a coherent state. The density matrix of the waveguide photons at frequency $\nu$ is $\rho_{c,\nu}^{\mathrm{in}}=\hat{D}(\alpha_{\nu})\rho_{c,\nu}^{T} \hat{D}^{\dagger}(\alpha_{\nu})$,
where $\rho_{c,\nu}^{T}=(1-e^{-\beta \nu})e^{-\beta \nu \hat{c}^{\dagger}_{\nu}\hat{c}_{\nu}}$ represents the thermal background photons in the waveguide, and $\hat{D}(\alpha_{\nu})=e^{\alpha_{\nu}\hat{c}^{\dagger}_{\nu}-\alpha_{\nu}^{\ast} \hat{c}_{\nu}}$ is the displacement operator which superimposes the coherent state on the thermal background. The free-space photons coupled to cavity $a$ and $b$ are in the thermal states $\rho^{T}_{a,\nu}$ and $\rho^{T}_{b,\nu}$, respectively. Thus the input state $\rho^{\mathrm{in}}=\bigotimes_{\nu}\rho_{a,\nu}^{T}\otimes \rho_{b,\nu}^{T}\otimes \rho_{c, \nu}^{\mathrm{in}}$. By using the input-output relation Eq. \eqref{eq:WaveguideO}, the waveguide output state $\rho^{\mathrm{out}}_{c}=\bigotimes_{\nu}\rho^{\mathrm{out}}_{c,\nu}$ is obtained (see Appendix \ref{sec:WignerR-app} for details). For the Gaussian output, the density matrix $\rho^{\mathrm{out}}_{c,\nu}$ and hence the QFI are fully determined by the expectation values and the second-order correlations of the quadrature operators $\hat{X}_{1,\nu}=\frac{1}{\sqrt{2}}(\hat{c}_{\nu}+\hat{c}^{\dagger}_{\nu})$ and $ \hat{X}_{2,\nu}= \frac{1}{i\sqrt{2}}(\hat{c}_{\nu}-\hat{c}^{\dagger}_{\nu})$. We denote the expectation values as $\bar{\mathbf{X}}_{\nu}=\left[ \langle \hat{X}_{1,\nu}\rangle, \langle \hat{X}_{2,\nu}\rangle \right]$ and the correlations as the covariance matrix $\left(\mathbb{C}_{\nu}\right)_{ij}=\frac{1}{2}\langle \hat{X}_{i,\nu}\hat{X}_{j,\nu}+\hat{X}_{j,\nu}\hat{X}_{i,\nu}\rangle -\langle \hat{X}_{i,\nu}\rangle \langle \hat{X}_{j,\nu}\rangle$. The results are
\begin{equation}
\bar{\mathbf{X}}^{T}_{\nu}= \sqrt{2} \left[
\begin{array}{c}
\mathrm{Re}[\alpha_{\nu}(1-i\gamma_{ex} G_{a}(\nu))]\\
\mathrm{Im}[\alpha_{\nu}(1-i \gamma_{ex} G_{a}(\nu))]
\end{array}
\right], ~
\mathbb{C}_{\nu}= (\bar{n}_{\nu}+\frac{1}{2}) \mathbb{I},
\label{eq:AverageResults}
\end{equation}
where $\bar{n}_{\nu}=(e^{\beta \nu}-1)^{-1}$ is the average thermal
photon number. The identity form of $\mathbb{C}_{\nu}$ here is due to the particular coherent input state $\rho^{\mathrm{in}}_{c,\nu}$. It does not hold for general Gaussian input states. For example, off-diagonal elements exist for the squeezed input state. The QFI for the single-mode Gaussian state $\rho^{\mathrm{out}}_{c,\nu}$ reads (see Appendix \ref{sec:QFI-app} for details) \cite{Scutaru1998,Pinel2013},
\begin{equation}
F^{\epsilon}_{\nu}=\frac{\mathrm{Tr}[(\mathbb{C}_{\nu}^{-1}\dot{\mathbb{C}}_{\nu})^{2}]}{2(1+P_{\nu}^{2})} + \frac{2 (\dot{P}_{\nu})^{2}}{1-P_{\nu}^{4}}+\dot{\bar{\mathbf{X}}}_{\nu}^{T}\mathbb{C}_{\nu}^{-1}\dot{\bar{\mathbf{X}}}_{\nu},
\label{eq:FI-Gaussian}
\end{equation}
where the dot symbol denotes the derivative $\partial_{\epsilon}$ and $P_{\nu}\equiv \mathrm{det}[2\mathbb{C}_{\nu}]^{-1/2}$ denotes the purity. The QFI for all the waveguide modes (which are taken as independent of each other) is $F^{\epsilon}=\int \frac{d\nu}{2 \pi} F^{\epsilon}_{\nu}$. Using Eq. \eqref{eq:AverageResults}, we obtain
\begin{equation}
F^{\epsilon}= 4 \int \frac{d\nu}{2\pi} \frac{|\alpha_{\nu}|^{2}}{2\bar{n}_{\nu}+1} \left| \frac{d S_{\nu}}{d \epsilon}\right|^{2},
\label{eq:FI-CoupledCavity}
\end{equation}
where $S_{\nu}= \gamma_{ex} G_{a}(\nu)$ characterizes the scattering amplitude and the term $|\alpha_{\nu}|^{2}/(2 \bar{n}_{\nu}+1)$ characterizes the signal-to-noise ratio. The propagator has the explicit expression $G_{a}(\nu)=\frac{(\nu-\nu_{b}+i \frac{\gamma_{b}}{2})}{(\nu-\nu_{+})(\nu-\nu_{-})}$. Near the EP, each mode $\nu_{\pm}$ shows a square-root dependence on the perturbation, which makes the susceptibility divergent. However, the product $(\nu-\nu_{+})(\nu-\nu_{-})\approx(\nu-\frac{\epsilon+i(\bar{\gamma}+\frac{\gamma_{ex}}{2})}{2})^{2}-\gamma(g-\frac{|\gamma|}{2}-i \frac{\epsilon}{2})$ has a smooth, linear dependence on the perturbation. Therefore the QFI $F^{\epsilon}$ shows no divergence at the EP.
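Eq. \eqref{eq:FI-Gaussian} can be implemented in a few lines. The following sketch (an illustration with an assumed displacement derivative, not the paper's calculation) reproduces the familiar coherent-state limit, in which $\mathbb{C}_{\nu}=\mathbb{I}/2$ is $\epsilon$-independent and the QFI reduces to $4|d\alpha_{\nu}/d\epsilon|^{2}$:

```python
import numpy as np

def gaussian_qfi(C, dX, dC):
    """Single-mode Gaussian QFI from the covariance matrix C and the
    derivatives dX (first moments) and dC (covariances) w.r.t. the parameter."""
    Cinv = np.linalg.inv(C)
    P = 1.0 / np.sqrt(np.linalg.det(2.0 * C))            # purity
    dP = -0.5 * P * np.trace(Cinv @ dC)                  # Jacobi's formula
    F = np.trace(Cinv @ dC @ Cinv @ dC) / (2.0 * (1.0 + P**2))
    if P < 1.0 - 1e-12:                                  # purity term vanishes for pure states
        F += 2.0 * dP**2 / (1.0 - P**4)
    F += dX @ Cinv @ dX
    return F

# pure coherent state: C = I/2 and X = sqrt(2) (Re alpha, Im alpha)
dalpha = 0.3 - 0.4j                                      # assumed d alpha / d eps
dX = np.sqrt(2.0) * np.array([dalpha.real, dalpha.imag])
print(gaussian_qfi(0.5 * np.eye(2), dX, np.zeros((2, 2))))   # 4 |dalpha|^2 = 1.0
```

For a thermal covariance $\mathbb{C}=(\bar{n}+\frac{1}{2})\mathbb{I}$ with fixed $\mathbb{C}$, the same routine gives $|d\bar{\mathbf{X}}/d\epsilon|^{2}/(\bar{n}+\frac{1}{2})$, consistent with the signal-to-noise factor in Eq. \eqref{eq:FI-CoupledCavity}.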
\begin{figure}[tpb]
\centering
\includegraphics[width=1.0\columnwidth]{QFI-I}
\caption{Numerical results for the coupled-cavity sensing near the EP. (a) The QFI of cavity $a$ frequency $F^{\epsilon}$, (b) the QFI of the energy splitting $F^{\Delta}$, and (c) the susceptibility of the energy splitting $\chi^{2}$, as functions of the coupling strength $g$. The vertical dashed lines indicate the position of the EP. Parameters are $\gamma_{b}=1.0$, $\gamma_{a}=5.0\gamma_{b}$, $\gamma_{ex}=0.1\gamma_{b}$, $\nu_{a}=\nu_{b}$, $\alpha=1000.0$, $\Gamma=200\gamma_{b}$, and $\beta\rightarrow\infty$.}
\label{fig:FI-I}
\end{figure}
Figure \ref{fig:FI-I} presents the numerical results of the QFI and the energy splitting susceptibility as functions of the coupling strength $g$. Here, the input coherent state is assumed to have a spectrum $\alpha_{\nu}=\alpha \frac{\sqrt{2 \Gamma}}{\nu-\nu_{b}+i \frac{\Gamma}{2}}$, where $\alpha$ is the amplitude and the bandwidth $\Gamma \gg \gamma_{a},\gamma_{b}$. In the calculation, zero temperature is considered, i.e., $\bar{n}_{\nu}=0$, and the two cavities are tuned to resonance, i.e., $\nu_{a}=\nu_{b}$. The results reveal that $F^{\epsilon}$ is a smooth function of $g$ even at the EP (indicated by vertical dashed lines in Fig. \ref{fig:FI-I}).
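The smoothness of $F^{\epsilon}$ across the EP can be reproduced with a short numerical sketch (our own illustration, not the paper's code; the amplitude $\alpha$ is rescaled to unity, which only changes the overall scale), evaluating Eq. \eqref{eq:FI-CoupledCavity} at zero temperature with the Lorentzian input spectrum:

```python
import numpy as np

gamma_b = 1.0
gamma_a, gamma_ex = 5.0 * gamma_b, 0.1 * gamma_b
nu_b, Gamma, alpha0 = 0.0, 200.0 * gamma_b, 1.0        # input spectrum parameters

def S(nu, eps, g):
    """Scattering amplitude S_nu = gamma_ex G_a(nu); eps = nu_a - nu_b."""
    Gb0 = 1.0 / (nu - nu_b + 0.5j * gamma_b)
    Ga = 1.0 / (nu - (nu_b + eps) + 0.5j * (gamma_a + gamma_ex) - g**2 * Gb0)
    return gamma_ex * Ga

def qfi(g, d=1e-5):
    """F^eps at eps = 0 and nbar_nu = 0, by numerical frequency integration."""
    nu = np.linspace(-30.0, 30.0, 20001)
    dS = (S(nu, d, g) - S(nu, -d, g)) / (2.0 * d)      # dS/deps
    alpha2 = np.abs(alpha0 * np.sqrt(2.0 * Gamma) / (nu - nu_b + 0.5j * Gamma))**2
    return np.sum(4.0 * alpha2 * np.abs(dS)**2) * (nu[1] - nu[0]) / (2.0 * np.pi)

g_EP = 0.25 * (gamma_a + gamma_ex - gamma_b)           # EP of the open system
for g in [0.8 * g_EP, g_EP, 1.2 * g_EP]:
    print(f"g = {g:.4f}   F^eps = {qfi(g):.3e}")       # varies smoothly through the EP
```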
To show that the absence of divergence of QFI at the EP is related to the state coalescence, we expand the QFI as
\begin{equation}
F^{\epsilon}= 4\int \frac{d\nu}{2\pi} \frac{|\alpha_{\nu}|^{2}}{2\bar{n}_{\nu}+1}\left[ \left|\frac{\partial S_{\nu}}{\partial \Delta} \frac{d \Delta}{d \epsilon}\right|^{2}+2\mathfrak{R}\left(\frac{\partial S_{\nu}}{\partial \Delta} \frac{\partial S_{\nu}}{\partial \bar{\nu}}\frac{d \Delta}{d \epsilon} \frac{d \bar{\nu}}{d \epsilon} \right)+\left|\frac{\partial S_{\nu}}{\partial \bar{\nu}} \frac{d \bar{\nu}}{d \epsilon}\right|^{2} \right],
\label{eq:QFI-expansion}
\end{equation}
from which we define the QFI for the splitting $\Delta=(\nu_{+}-\nu_{-})$ as
\begin{equation}
F^{\Delta}=4 \int \frac{d\nu}{2\pi} \frac{|\alpha_{\nu}|^{2}}{2\bar{n}_{\nu}+1}\left|\frac{\partial S_{\nu}}{\partial \Delta} \right|^{2}.
\label{eq:QFIofSpliting}
\end{equation}
It measures the available information in the output state $\rho_{\mathrm{c}}^{\mathrm{out}}$ for distinguishing the energy splitting. From $\partial_{\Delta }S_{\nu}=\frac{(\nu+i \frac{\gamma_{b}}{2})\Delta}{(\nu-\nu_{+})^{2}(\nu-\nu_{-})^{2}}$, one can see that $F^{\Delta} \sim |\Delta|^{2}$ near the EP [see Fig. \ref{fig:FI-I}(b)]. This reflects the fact that the eigenstates become indistinguishable at the EP. Combining $F^{\Delta }$ with the divergent susceptibility $\chi^{2}$ [see Fig. \ref{fig:FI-I}(c)], we find that the susceptibility divergence is exactly counteracted by the vanishing QFI $F^{\Delta}$. Similar arguments apply to the second term in the expression for $F^{\epsilon}$ in Eq. \eqref{eq:QFI-expansion}. Thus the QFI $F^{\epsilon}$ is a smooth function around the EP.
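This counteraction can be seen directly in a small numerical sketch (illustrative parameters, $\epsilon=0$, approaching the EP from above): the susceptibility $|\chi|$ diverges while the product $|\chi\Delta|$ tends to the constant $|\gamma|$, which is precisely the statement that the $|\Delta|^{2}$ suppression of $F^{\Delta}$ offsets the $|\Delta|^{-2}$ divergence of $\chi^{2}$:

```python
import numpy as np

gamma_a, gamma_b = 5.0, 1.0
gam = 0.5 * (gamma_a - gamma_b)                 # gamma = 2, EP at g = |gamma|/2 = 1

def Delta(eps, g):
    """Energy splitting nu_+ - nu_-."""
    return 2.0 * np.sqrt(g**2 + (0.5 * eps - 0.5j * gam)**2)

d = 1e-7
for g in [1.01, 1.001, 1.0001]:                 # approaching the EP from above
    D = Delta(0.0, g)
    chi = (Delta(d, g) - Delta(-d, g)) / (2.0 * d)   # susceptibility at eps = 0
    print(f"g = {g:.4f}   |chi| = {abs(chi):9.3f}   |chi * Delta| = {abs(chi * D):.4f}")
# |chi| diverges while |chi * Delta| stays pinned at |gamma| = 2
```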
\section{Active-passive cavity system}
\begin{figure}[tpb]
\centering
\includegraphics[width=1.0\columnwidth]{Scheme-II}
\caption{The active-passive coupled cavity sensing system. Here cavity $b$ is filled with a gain medium. The effective decay rate $\gamma^{\prime}_{b}$ is tuned by varying the optical pumping power.}
\label{fig:SensingScheme-G}
\end{figure}
By embedding a gain medium in cavity $b$, the decay rate $\gamma_{b}$ is effectively reduced and can even change sign, realizing an active cavity. Through this method, an effective active-passive coupled cavity system has been realized to study $\mathcal{PT}$ symmetry \cite{Peng2014, Chang2014}. It is interesting to ask whether the EP in the active-passive system can enhance the sensitivity. The gain can be realized, e.g., by stimulated emission from a medium with population inversion. However, there exists a threshold that limits the maximal achievable gain rate. Above the threshold, the system is in the lasing phase (a self-adaptive region), in which the effective-decay-rate description becomes invalid. In this study, we constrain the gain rate to below the lasing threshold.
Below the threshold, the gain cavity works as an amplifier. The decay rate of the gain cavity due to the pumped gain medium becomes $\gamma_{b}^{\prime}=\gamma_{b}-4 S_{z} g^{2}_{G}/\kappa$ and the noise operator becomes $\sqrt{\gamma^{\prime}_{b}} \hat{b}^{\mathrm{in}\prime}(\omega)=\sqrt{\gamma_{b}} \hat{b}^{\mathrm{in}}(\omega)+i \sqrt{\frac{4S_{z} g^2_{G}}{\kappa}} \hat{d}^{\mathrm{in} \dagger}(\omega)$, where $S_{z}\equiv N_{e}-N_{g}$ denotes the population inversion of the gain medium, $g_{G}$ is the cavity-gain medium coupling coefficient, $\kappa$ is the effective decay rate of the gain medium, and $\hat{d}^{\mathrm{in} \dagger}(\omega)$ is the noise operator induced by the gain medium, with average excitation number $\bar{n}_{d}=\frac{N_{g}}{S_{z}}$ in the thermal state (see Appendix \ref{sec:APsystem-app} for details). The input-output theory, the waveguide output states, and the QFI for the passive-passive coupled cavity system remain valid here with only the substitutions $\gamma_{b}\rightarrow \gamma_{b}^{\prime}$ and $\hat{b}^{\mathrm{in}}(\omega)\rightarrow \hat{b}^{\mathrm{in}\prime}(\omega)$. That yields
\begin{equation}
\bar{\mathbf{X}}^{T}_{\nu}= \sqrt{2} \left[
\begin{array}{c}
\mathrm{Re}[\alpha_{\nu}(1-i\gamma_{ex} G_{a}(\nu))]\\
\mathrm{Im}[\alpha_{\nu}(1-i \gamma_{ex} G_{a}(\nu))]
\end{array}
\right], ~
\mathbb{C}_{\nu}= (\bar{n}^{\prime}_{\nu}+\frac{1}{2})\mathbb{I},
\label{eq:QuaVar-GP}
\end{equation}
where $\bar{n}^{\prime}_{\nu}=\bar{n}_{\nu}+\gamma_{ex} \frac{4S_{z}g_{G}^{2}}{\kappa}(\bar{n}_{\nu}+\frac{N_{e}}{S_{z}}) |g G_{a}(\nu) G_{b}^{(0)}(\nu)|^{2}$ is the average photon number modified by the gain medium. Then the QFI in Eq. \eqref{eq:FI-Gaussian} becomes
\begin{equation}
F^{\epsilon}=\int \frac{d \nu}{2\pi} \left[4\frac{|\alpha_{\nu}|^{2}}{2\bar{n}^{\prime}_{\nu}+1} \left|\frac{d S_{\nu}}{d \epsilon}\right|^{2}+\frac{|\partial_{\epsilon}\bar{n}^{\prime}_{\nu}|^{2}}{\bar{n}^{\prime}_{\nu}(\bar{n}^{\prime}_{\nu}+1)}\right].
\label{eq:QFI-GP}
\end{equation}
Below the threshold, both $G_{a}(\nu)=\frac{\nu-\nu_{b}+i \frac{\gamma^{\prime}_{b}}{2}}{(\nu-\nu_{+})(\nu-\nu_{-})}$ and $\bar{n}^{\prime}(\nu)=\bar{n}(\nu)+\frac{\gamma_{ex}g^{2} \frac{4 S_z g_{G}^{2}}{\kappa}}{|(\nu-\nu_{+})(\nu-\nu_{-})|^{2}}$ are well defined, where $\nu_{\pm}=\bar{\nu}-i \frac{\gamma_{a}^{\prime}+\gamma_{b}^{\prime}}{4}\pm\sqrt{g^{2}+(\frac{\epsilon}{2}-i \frac{\gamma_{a}^{\prime}-\gamma_{b}^{\prime}}{4})^{2}}$ are the eigenvalues of the coefficient matrix of the active-passive coupled cavity system. Therefore, $F^{\epsilon}$ is a smooth function at the EP.
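A quick numerical check (illustrative parameters in the spirit of Fig. \ref{fig:QFI-GP}, but not identical to it) confirms that below the lasing threshold both eigenvalues stay in the lower half of the complex plane, so $G_{a}(\nu)$ and $\bar{n}^{\prime}_{\nu}$ remain finite on the real-frequency axis even at the EP:

```python
import numpy as np

gamma_a, gamma_ex, g = 5.0, 0.1, 2.4
gamma_ap = gamma_a + gamma_ex                         # gamma_a' = gamma_a + gamma_ex

def nu_pm(gamma_bp):
    """Eigenvalues of the active-passive coefficient matrix at eps = 0."""
    center = -0.25j * (gamma_ap + gamma_bp)
    root = np.sqrt(complex(g**2 - ((gamma_ap - gamma_bp) / 4.0)**2))
    return center + root, center - root

gb_EP = gamma_ap - 4.0 * g                            # EP: g = (gamma_a' - gamma_b')/4
gb_th = -4.0 * g**2 / gamma_ap                        # lasing threshold (g < gamma_a'/2)
print(f"EP at gamma_b' = {gb_EP:.3f}, threshold at gamma_b' = {gb_th:.3f}")

for gb in [gb_EP + 0.5, gb_EP, gb_EP - 0.015]:        # all still below threshold
    p, m = nu_pm(gb)
    print(f"gamma_b' = {gb:7.3f}   Im nu_+ = {p.imag:+.4f}   Im nu_- = {m.imag:+.4f}")
# Im(nu_pm) < 0 throughout, including at the EP, so F^eps stays finite there
```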
\begin{figure}[tpb]
\centering
\includegraphics[width=1.0\columnwidth]{QFI-II}
\caption{(a) The frequencies of the quasinormal modes and (b) the QFI as functions of $S_{z}/S_{c}$, where $S_{c}$ is the lasing threshold. The QFI is a smooth function of $S_{z}$ at the EP (indicated by the vertical dashed line). Near the threshold, the QFI diverges, as revealed by the dotted line in (b). Results are all based on the linearized theory. Parameters are: $\gamma_{b}=1.0$, $\gamma_{a}=5.0\gamma_{b}$, $\gamma_{ex}=0.1\gamma_{b}$, $\nu_{a}=\nu_{b}$, $g=2.4\gamma_{b}$, $\kappa=100\gamma_{b}$, $N=2 \times 10^{12}$, $\gamma_{1}=0.01\gamma_{b}$, $g_{G}=10^{-5}\gamma_{b}$, $S_{c}=1.38\times 10^{12}$, and $\beta\rightarrow \infty$.}
\label{fig:QFI-GP}
\end{figure}
Figure \ref{fig:QFI-GP} presents the numerical results for the quasinormal mode frequencies (a) and the QFI (b) as functions of $S_{z}$. The input state takes the same form as in Fig. \ref{fig:FI-I}. Figure \ref{fig:QFI-GP}(b) reveals that the QFI is a smooth function of the population inversion $S_{z}$ around the EP (indicated by the vertical dashed line). The enhancement of the QFI with increasing population inversion is induced by the gain medium: the stronger the optical pumping, the larger the population inversion, and the higher the sensitivity.
In the linear theory, the QFI diverges at the lasing threshold, i.e., at $S_{z}=S_{c}$. In Fig. \ref{fig:QFI-GP}(b), this divergent behavior is shown by the dotted line. However, the critical fluctuations neglected in the linear description become important near the threshold and may prevent the sensitivity from diverging. A discussion of the effects of the critical fluctuations is beyond the scope of this paper. Upon further increasing the pumping power, the coupled cavity system exceeds the threshold and enters the lasing phase. The EP, known as the $\mathcal{PT}$ phase transition point, occurs at $g=\frac{\gamma^{\prime}_{a}}{2}$ and $\epsilon=0$ in the parameter space; when $2g>\gamma^{\prime}_{a}$, the system is in the $\mathcal{PT}$-symmetric lasing phase, where both modes lase, whereas when $2g<\gamma^{\prime}_{a}$, the $\mathcal{PT}$ symmetry breaks and the system is in the single-mode lasing phase \cite{Feng2014, Hodaei2014, Hossein2016, Zhang2018a}. In contrast to the cases below the lasing threshold, a non-equilibrium phase transition occurs at the EP. The conclusion drawn from Eq. \eqref{eq:FunctionalAnalysis}, that the enhancement of the QFI at the lasing transition is caused by the phase transition rather than by the divergence of the energy splitting susceptibility, can be generalized to this case. This conclusion can also be understood from the fact that the coalescence of the different quasinormal modes counteracts the susceptibility divergence at the EP.
\section{Conclusion and discussion}
We show that the exceptional point in a non-Hermitian sensing system does not dramatically enhance the sensitivity, since the coalescence of the different quasinormal modes counteracts the singular behavior of the mode splitting. This is verified in the passive-passive and active-passive coupled cavity systems through the exact calculation of the quantum Fisher information. This conclusion also holds for higher-order EPs and for other sensing schemes.
\textbf{Notes.} After completion of this work, we came across the paper [W. Langbein, arXiv:1801.05750], whose conclusion is similar to ours.
\begin{acknowledgments}
This work was supported by Hong Kong Research Grants Council and National Natural Science Foundation of China (Grant No. 11605094).
\end{acknowledgments}
\section{Introduction}\label{secint}
An explosion of research activity associated with the novel two-dimensional
material graphene has prompted a reexamination of its bulk parent, graphite.
Much, of course, is known about graphite \cite{BCP88}. Bernal graphite is a hexagonal
crystal consisting of graphene sheets stacked in an ABAB configuration.
The $sp^2$-hybridized $\sigma$ electrons form double bonds between the carbon
atoms, while the remaining $\pi$ electrons, in the $p_z$ orbital, are itinerant.
The electronic structure parameters for graphite were first derived by Wallace, and
by Slonczewski, Weiss, and McClure (SWMC) \cite{SWMC}.
Within each plane, the $\pi$ electrons move on a honeycomb lattice with a nearest
neighbor hopping integral $\gamma\ns_0\approx 3.2\,$eV. Of the four atoms
per unit cell, two are arranged in vertical chains, with a vertical nearest neighbor
hopping of $\gamma\ns_1\approx -390\,$meV. Additional further neighbor hoppings are also
present. For example, the $\pi$ electrons on the non-chain sites undergo
two-layer vertical hopping through open hexagons in the neighboring layers, with
amplitude $\frac{1}{2}\gamma\ns_2\approx -10\,$meV. This
results in a very narrow band of width $40\,$meV along the $K$-$H$ spine of the
Brillouin zone, with electron pockets at $K$ and hole pockets at $H$ \cite{DM64}.
Recently, striking experimental observations of what may be bulk three-dimensional
quantum Hall plateaus in graphite have been reported \cite{KEK06}. Any two-dimensional
(2D) system, such as graphene, which exhibits the quantum Hall effect (QHE) should
exhibit a 3DQHE if the interplane coupling is sufficiently weak. The reason for this is
that the cyclotron gaps between Landau levels narrow continuously as one adiabatically
switches on the $c$-axis couplings, and cannot collapse immediately. For a 3D electron
system in a periodic potential and subject to a magnetic field, a generalization of the
TKNN result \cite{TKNN} by Halperin \cite{Hal87} shows that the conductivity tensor must
be of the form
\begin{equation}
\sigma^{\vphantom{\dagger}}_{ij} ={e^2\over h}\, \epsilon^{\vphantom{\dagger}}_{ijk} \,G^{\vphantom{\dagger}}_k\ ,
\end{equation}
whenever the Fermi level $E^{\vphantom{\dagger}}_\ssr{F}$ lies within a bulk gap, where $\epsilon_{ijk}$
is the fully antisymmetric tensor and $\vec{G}$ is a reciprocal lattice vector of the
potential (which may be ${\vec G}=0$). The Hall current is then carried by a sheath of
chiral surface states. Eventually, however,
the $c$-axis hopping will become large enough that the Landau gaps collapse.
Equivalently, for a given value of the $c$-axis hopping $\gamma\ns_1$, the magnetic
field $B$ must exceed a critical strength $B^{\vphantom{\dagger}}_{\rm c}$ in order that the Landau level
spacing overwhelms the $c$-axis bandwidth and opens up a bulk gap.
Typically, the field scale $B^{\vphantom{\dagger}}_{\rm c}$ is extremely large, and much beyond the
scale of current experimentally available fields.
For a system with ballistic dispersion described by an effective mass
$m^*$, the orbital part of the spectrum ({\it i.e.\/}\ neglecting Zeeman coupling)
yields a dispersion $\varepsilon^{\vphantom{\dagger}}_n=(n+\frac{1}{2})\,\hbar\omega^{\vphantom{\dagger}}_{\rm c}$, where $n$
is a nonnegative integer and where $\omega^{\vphantom{\dagger}}_{\rm c}=eB/m^* c$ is the cyclotron
frequency. The cyclotron energy may be written as $\hbar\omega^{\vphantom{\dagger}}_{\rm c}=
W^{\vphantom{\dagger}}_\parallel\cdot(B/B^{\vphantom{\dagger}}_\Omega)$ and the field scale as
$B^{\vphantom{\dagger}}_\Omega =hc/e\Omega$, where $\Omega$ is the unit cell area. $W^{\vphantom{\dagger}}_\parallel$ is
on the order of the bandwidth in zero field, which is typically several electron volts.
Since $hc/e=4.13\times 10^5\,{\rm T}{\rm\AA}^2$, $B^{\vphantom{\dagger}}_\Omega$ is typically
enormous, on the order of tens of thousands of Tesla. Thus, if the $c$-axis bandwidth
is $W^{\vphantom{\dagger}}_\perp$, the critical field is given by
$B^{\vphantom{\dagger}}_{\rm c}=(W^{\vphantom{\dagger}}_\perp/W^{\vphantom{\dagger}}_\parallel)\cdot B^{\vphantom{\dagger}}_\Omega$, and even highly
anisotropic materials with $W^{\vphantom{\dagger}}_\perp\,{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\, 10^{-2}\,W^{\vphantom{\dagger}}_\parallel$ will have
critical fields in the range of hundreds of Tesla.
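As a rough numerical illustration of these scales, here is a minimal sketch; the unit-cell area and anisotropy ratio below are representative values chosen for illustration, not data for a specific material.

```python
# Order-of-magnitude estimate of the critical field B_c = (W_perp/W_par)*B_Omega.
# The unit-cell area and anisotropy below are representative, not material data.
hc_over_e = 4.13e5        # flux quantum hc/e in T * Angstrom^2
Omega = 5.2               # unit-cell area in Angstrom^2 (graphene-like)
B_Omega = hc_over_e / Omega
anisotropy = 1e-2         # W_perp / W_par for a highly anisotropic material
B_c = anisotropy * B_Omega
print(B_Omega, B_c)       # ~8e4 T and ~800 T
```

Even at percent-level anisotropy, $B_{\rm c}$ lands in the hundreds of Tesla, as stated above.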
As shown by Bernevig {\it et al.\/}\ \cite{BHRA07}, similar considerations would apply
for graphene sheets in AAA (simple hexagonal) stacking. The Landau level dispersion
is then
\begin{equation}
E^{\vphantom{\dagger}}_n(B,{\mib k})=2\gamma\ns_1\cos(k^{\vphantom{\dagger}}_z c)+{\rm sgn}(n)\,
\gamma\ns_0\sqrt{|n|\,B/B_0}\ ,
\end{equation}
with $B^{\vphantom{\dagger}}_0=B^{\vphantom{\dagger}}_\Omega\big/2\pi\sqrt{3}=7275\,{\rm T}$,
where $\gamma\ns_0\approx 3.16\,$eV is the in-plane hopping and $\gamma\ns_1\approx 0.39\,$eV
is the hopping integral between layers \cite{SWMC}.
The gap between Landau levels $n$ and $n+1$ collapses at a critical field
\begin{equation}
B^{\vphantom{\dagger}}_{{\rm c},n}=\bigg({4\gamma\ns_1\over\gamma\ns_0}\bigg)^{\!\!2}\cdot{B_0\over
\left(\sqrt{n+1}-\sqrt{n}\right)^2}\ .
\end{equation}
For $n=0$ one finds $B^{\vphantom{\dagger}}_{{\rm c},0}\approx 1800\,{\rm T}$.
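Plugging in the quoted parameter values gives the critical-field sequence directly; a quick sketch:

```python
# Critical fields for AAA-stacked graphene, using the values quoted above.
import math

gamma0 = 3.16   # eV, in-plane hopping
gamma1 = 0.39   # eV, interlayer hopping
B0 = 7275.0     # T, field scale

def Bc(n):
    """Field at which the gap between Landau levels n and n+1 collapses."""
    return (4 * gamma1 / gamma0) ** 2 * B0 / (math.sqrt(n + 1) - math.sqrt(n)) ** 2

print(Bc(0))   # ~1773 T, the "approximately 1800 T" quoted in the text
```

The gaps higher up the ladder close at still larger fields, since $(\sqrt{n+1}-\sqrt{n})^2$ shrinks with $n$.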
However, due to the Bernal stacking, one finds \cite{BHRA07} that the principal
cyclotron gap surrounding the $n=0$ Landau levels opens above $B_{\rm c}=15\,{\rm T}$
(electrons; $n=+1$) or above $B_{\rm c}=7\,{\rm T}$ (holes; $n=-1$). When the Fermi
level lies within either of these gaps, the Hall conductance is quantized at
$\sigma^{\vphantom{\dagger}}_{xy}=2e^2/hd$, where $d=3.37\,$\AA\ is the inter-plane separation.
The analysis of ref. \cite{BHRA07} shows that the second cyclotron gap should not
open below fields on the order of $B^{\vphantom{\dagger}}_{{\rm c},2}\approx 1000\,{\rm T}$. This suggests
that the multiple QHE plateaus observed by Kempa {\it et al.\/}\ \cite{KEK06} are of a different
origin, and are not describable by a model of crystalline Bernal graphite alone.
In this paper, we consider two variations which lead to a different plateau structure
to that of crystalline Bernal graphite. The first is rhombohedral graphite, which
is stacked in ABCABC fashion. For this structure, we find
\begin{equation*}
B^{\vphantom{\dagger}}_{{\rm c},n}=\Big(\sqrt{n+1}+\sqrt{n}\Big)^{\!2}\,B_{{\rm c},0}\ ,
\end{equation*}
with $B^{\vphantom{\dagger}}_{{\rm c},0}\approx 0.123\,{\rm T}$. When $E^{\vphantom{\dagger}}_\ssr{F}$ lies in the gap
between the $n$ and $n+1$ Landau levels, the Hall conductivity is given by
$\sigma^{\vphantom{\dagger}}_{xy}=(4n+2)\,e^2/h\, d$. {\it Ab initio\/} calculations show
that the total energy of rhombohedral graphite is approximately
$0.11\,$meV per atom larger than that of the Bernal hexagonal phase \cite{CGM94}.
With such a small energy difference, even highly oriented pyrolytic graphite (HOPG)
is believed to contain several percent rhombohedral inclusions.
Powdered graphite samples with up to $\sim 40\%$ of the rhombohedral phase
are obtainable \cite{CGAF00}.
The second possibility we examine is that of a simple stacking fault in Bernal
graphite, of the form ABABCBCB. This fault interpolates between two
degenerate vacua -- the ABAB and CBCB Bernal phases. We analyze the
$c$-axis transport through such a defect, within a simple model of nearest
neighbor hopping, and compute the $S$-matrix as a
function of in-plane wavevector. As expected, the transmission is sharply
attenuated in the vicinity of the Dirac points. We also find a novel bound state
associated with the stacking defect, with two-dimensional dispersion
$E({\mib k})\propto |{\mib k}-{\mib K}|^3$ near the Dirac points. In the presence of a
$c$-axis magnetic field, this leads to a bound state Landau level energy
$E(n,B)\propto |nB|^{3/2}$. In the appendix, we undertake a calculation of
the bound state spectrum in zero field for the full SWMC model \cite{SWMC},
which includes seven tight binding parameters.
We conclude with a discussion of surface spectroscopy of buried stacking faults,
and with remarks about the relevance of our results to future experiments.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{RHG.eps}
\caption
{\label{RHG} Crystal structure of rhombohedral graphite.}
\end{figure}
\section{Rhombohedral Graphite}\label{rhombo}
In rhombohedral graphite (RG) there are two sublattices, in contrast to four in the case
of Bernal hexagonal graphite (BHG). The primitive direct lattice vectors are
\begin{align*}
{\mib a}^{\vphantom{\dagger}}_1&= a\,{\hat{\mib x}}\\
{\mib a}^{\vphantom{\dagger}}_2&=\frac{1}{2}\, a\,{\hat{\mib x}} + \frac{\sqrt{3}}{2}\,a\,{\hat{\mib y}}\\
{\mib a}^{\vphantom{\dagger}}_3&=\frac{1}{2}\, a\,{\hat{\mib x}} + \frac{1}{2\sqrt{3}}\,a\,{\hat{\mib y}} + d\,{\hat\bfz} \ .
\end{align*}
The basis vector ${\mib\delta}=-\frac{1}{3}\big({\mib a}^{\vphantom{\dagger}}_1+{\mib a}^{\vphantom{\dagger}}_2\big)$
separates the $A$ and $B$ sublattices. Note that ${\mib a}^{\vphantom{\dagger}}_3=d\,{\hat\bfz} -{\mib\delta}$.
The lattice parameters are $a=2.46\,$\AA\ and $d=3.37\,$\AA.
Our treatment starts with a simplified version of the work of McClure \cite{McC69}.
We consider several types of hopping processes:
\begin{itemize}
\item[(i)] in-plane $A-B$ hopping:
\begin{equation}
H_1^{AB}=-\gamma\ns_0\,\Big[ t({\mib\delta}) + t({\mib a}^{\vphantom{\dagger}}_1+{\mib\delta}) + t({\mib a}^{\vphantom{\dagger}}_2+{\mib\delta})\Big]\ ,
\end{equation}
where $t({\mib d})$ is a translation operator through a vector ${\mib d}$.
\item[(ii)] neighboring plane diagonal $A-B$ hopping:
\begin{align}
H_2^{AB}&=\gamma\ns_3\Big[t({\mib a}^{\vphantom{\dagger}}_1-{\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})+t({\mib a}^{\vphantom{\dagger}}_2-{\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})\\
&\qquad\qquad\qquad+ t({\mib a}^{\vphantom{\dagger}}_1+{\mib a}^{\vphantom{\dagger}}_2-{\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})\Big]\,\nonumber
\end{align}
\item[(iii)] nearest neighbor and next nearest neighbor plane vertical $A-B$ hopping:
\begin{equation}
H_3^{AB}=\gamma\ns_1\,t({\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})+\gamma\ns_2\,t({\mib a}^{\vphantom{\dagger}}_1+{\mib a}^{\vphantom{\dagger}}_2-2{\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})
\end{equation}
\item[(iv)] neighboring plane diagonal $A-A$ hopping:
\begin{equation}
H_4^{AA}=\gamma\ns_3\Big[t({\mib a}^{\vphantom{\dagger}}_3)+t({\mib a}^{\vphantom{\dagger}}_3-{\mib a}^{\vphantom{\dagger}}_1)
+t({\mib a}^{\vphantom{\dagger}}_3-{\mib a}^{\vphantom{\dagger}}_2)\Big] + {\rm H.c.}
\end{equation}
\item[(v)] neighboring plane diagonal $B-B$ hopping:
\begin{equation}
H_4^{BB}=\gamma\ns_3\Big[t({\mib a}^{\vphantom{\dagger}}_3)+t({\mib a}^{\vphantom{\dagger}}_3-{\mib a}^{\vphantom{\dagger}}_1)
+t({\mib a}^{\vphantom{\dagger}}_3-{\mib a}^{\vphantom{\dagger}}_2)\Big]+ {\rm H.c.}
\end{equation}
\end{itemize}
The full Hamiltonian is then given by
\begin{equation}
H=\begin{pmatrix} H_4^{AA} & H_1^{AB}+H_2^{AB}+H_3^{AB} \\ &&\\
H_1^{BA}+H_2^{BA}+H_3^{BA} &H_4^{BB}\end{pmatrix}\ ,
\end{equation}
where $H_n^{BA}=\big(H_n^{AB}\big)^\dagger$ for $n=1,2,3$.
From Wallace and SWMC \cite{SWMC}, we take
\begin{align*}
\gamma\ns_0&=3160\,{\rm meV}\qquad&\gamma\ns_1&=390\,{\rm meV}\\
\gamma\ns_2&=10\,{\rm meV}
\qquad&\gamma\ns_3&=250\,{\rm meV}\ .
\end{align*}
(In the language of McClure \cite{McC69}, $\gamma'_2=\gamma\ns_2$ and $\gamma'_1=\gamma\ns_3$,
and we ignore McClure's parameters $\gamma'_0$ and $\gamma''_2$.)
We then have
\begin{equation}
H=\begin{pmatrix} \eta & 0 \\ 0 & 1\end{pmatrix}
\, \begin{pmatrix} A & B \\ B^* & A \end{pmatrix}\,
\begin{pmatrix} \eta^* & 0 \\ 0 & 1\end{pmatrix}\ ,
\end{equation}
with $\eta= e^{i(\theta^{\vphantom{\dagger}}_1+\theta^{\vphantom{\dagger}}_2)/3}$ and
\begin{align*}
A(\theta^{\vphantom{\dagger}}_1,\theta^{\vphantom{\dagger}}_2,\theta^{\vphantom{\dagger}}_3)&=\gamma\ns_3\,e^{-i\theta^{\vphantom{\dagger}}_3}\,T(\theta\nd_1,\theta\nd_2) +\gamma\ns_3\,
e^{i\theta^{\vphantom{\dagger}}_3}\,T^*(\theta\nd_1,\theta\nd_2)\\
B(\theta^{\vphantom{\dagger}}_1,\theta^{\vphantom{\dagger}}_2,\theta^{\vphantom{\dagger}}_3)&=-\gamma\ns_0\, T(\theta\nd_1,\theta\nd_2) +\gamma\ns_3\,e^{-i\theta^{\vphantom{\dagger}}_3}\,T^*(\theta\nd_1,\theta\nd_2)\\
&\qquad\qquad +\gamma\ns_1\,e^{i\theta^{\vphantom{\dagger}}_3} +\gamma\ns_2\,e^{i(\theta^{\vphantom{\dagger}}_1+\theta^{\vphantom{\dagger}}_2-2\theta^{\vphantom{\dagger}}_3)}
\end{align*}
where
\begin{equation}
T(\theta\nd_1,\theta\nd_2)=1+e^{i\theta^{\vphantom{\dagger}}_1} + e^{i\theta^{\vphantom{\dagger}}_2} \ .
\end{equation}
The energy eigenvalues are clearly
\begin{equation}
E^{\vphantom{\dagger}}_\pm({\mib\theta})=A({\mib\theta})\pm \big| B({\mib\theta})\big|\ .
\end{equation}
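The two bands can be evaluated directly from these expressions; a minimal sketch in Python (energies in meV, using the hopping magnitudes quoted in this section; at the zone corner $T$ vanishes, so the splitting there is set entirely by the much smaller interlayer terms):

```python
import numpy as np

# Direct evaluation of E_pm = A ± |B| from the expressions above (meV).
g0, g1, g2, g3 = 3160.0, 390.0, 10.0, 250.0

def T(th1, th2):
    return 1 + np.exp(1j * th1) + np.exp(1j * th2)

def A(th1, th2, th3):
    t = T(th1, th2)
    # A = g3 e^{-i th3} T + c.c. is real by construction
    return (g3 * np.exp(-1j * th3) * t + g3 * np.exp(1j * th3) * np.conj(t)).real

def B(th1, th2, th3):
    t = T(th1, th2)
    return (-g0 * t + g3 * np.exp(-1j * th3) * np.conj(t)
            + g1 * np.exp(1j * th3)
            + g2 * np.exp(1j * (th1 + th2 - 2 * th3)))

def bands(th1, th2, th3):
    a, babs = A(th1, th2, th3), abs(B(th1, th2, th3))
    return a - babs, a + babs
```

At $(\theta_1,\theta_2)=(\frac{4\pi}{3},\frac{2\pi}{3})$ one checks $T=0$, so the residual splitting $2|B|$ there is of order $\gamma_1$, far below the in-plane scale $\gamma_0$.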
Under a $60^\circ$ rotation, we have
\begin{equation}
\theta'_1=\theta^{\vphantom{\dagger}}_2\quad,\quad\theta'_2=\theta^{\vphantom{\dagger}}_2-\theta^{\vphantom{\dagger}}_1
\quad,\quad\theta'_3=\theta^{\vphantom{\dagger}}_2-\theta^{\vphantom{\dagger}}_3\ .
\end{equation}
One then finds $A({\mib\theta}')=A({\mib\theta})$ and $B({\mib\theta}')=e^{i\theta^{\vphantom{\dagger}}_2}\,
B({\mib\theta})$. Hence, $E^{\vphantom{\dagger}}_\pm({\mib\theta}')=E^{\vphantom{\dagger}}_\pm({\mib\theta})$.
Degeneracies, identified with a one-parameter family of Dirac points, occur when
$B({\mib\theta})=0$. Solving, we obtain the relation
\begin{equation}
T(\theta\nd_1,\theta\nd_2)=\Gamma^{\vphantom{\dagger}}_1\,e^{i\theta^{\vphantom{\dagger}}_3} +
\Gamma^{\vphantom{\dagger}}_2\,e^{i(\theta^{\vphantom{\dagger}}_1+\theta^{\vphantom{\dagger}}_2-2\theta^{\vphantom{\dagger}}_3)}
\label{SDir}
\end{equation}
along the degeneracy curve, where
\begin{align}
\Gamma^{\vphantom{\dagger}}_1&\equiv{\gamma\ns_0\,\gamma\ns_1+\gamma\ns_2\,\gamma\ns_3 \over\gamma_0^2 -\gamma_3^2}=-0.124\\
\vphantom{\sum_N^N} \Gamma^{\vphantom{\dagger}}_2&\equiv {\gamma\ns_1\,\gamma\ns_3+\gamma\ns_0\,
\gamma\ns_2 \over\gamma_0^2 -\gamma_3^2}=-1.30\times 10^{-2}\ .
\end{align}
The energy along this Dirac curve is
\begin{equation}
E({\mib\theta}^{\vphantom{\dagger}}_\ssr{D})={\cal E}^{\vphantom{\dagger}}_0 +W\,
\cos\big(\theta^{\vphantom{\dagger}}_1+\theta^{\vphantom{\dagger}}_2 - 3\theta^{\vphantom{\dagger}}_3\big)\ .
\label{ADir}
\end{equation}
with
\begin{align}
{\cal E}^{\vphantom{\dagger}}_0&=2\Gamma^{\vphantom{\dagger}}_1\,\gamma\ns_3=62\,{\rm meV}\\
W&=2\Gamma^{\vphantom{\dagger}}_2\,\gamma\ns_3=6.5\,{\rm meV} .
\end{align}
Since $\Gamma^{\vphantom{*}}_1$ and $\Gamma^{\vphantom{*}}_2$ are small, the Dirac curve, when projected
into the basal Brillouin zone, lies close to the zone corners. Note that
$E({\mib\theta}^{\vphantom{\dagger}}_D)$ goes through three complete periods
as $\theta^{\vphantom{\dagger}}_3$ advances from $0$ to $2\pi$, resulting in McClure's `sausage link'
Fermi surface \cite{McC69}, depicted in fig. \ref{FS}. To find the equation of the Dirac
curve, we expand about ${\mib\Theta}=(\theta^{\vphantom{\dagger}}_1,\theta^{\vphantom{\dagger}}_2)=
\big(\frac{4\pi}{3},\frac{2\pi}{3}\big)$ at the $K$ point,
writing ${\mib\theta}={\mib\Theta}+\delta{\mib\theta}$, and find
\begin{equation}
T\big(\Theta^{\vphantom{\dagger}}_1+\delta\theta^{\vphantom{\dagger}}_1,\Theta^{\vphantom{\dagger}}_2+\delta\theta^{\vphantom{\dagger}}_2\big)=e^{-i\pi/6}\,\delta\theta^{\vphantom{\dagger}}_1
-e^{i\pi/6}\,\delta\theta^{\vphantom{\dagger}}_2 + {\cal O}(\delta\theta^2)\ .
\label{Sexp}
\end{equation}
Solving for the Dirac line $\delta{\mib\theta}(\theta^{\vphantom{\dagger}}_3)$ as a formal series in the
small parameters $\Gamma^{\vphantom{\dagger}}_1$ and $\Gamma^{\vphantom{\dagger}}_2$, we obtain
\begin{align*}
\delta\theta^{\vphantom{\dagger}}_1&=\frac{2}{\sqrt{3}}\,\Big[\!-\!\Gamma^{\vphantom{\dagger}}_1\,
\sin\big(\theta^{\vphantom{\dagger}}_3-\frac{\pi}{6}\big)
+\Gamma^{\vphantom{\dagger}}_2\,\sin\big(2\theta^{\vphantom{\dagger}}_3+\frac{\pi}{6}\big)\Big]+{\cal O}(\Gamma^2)\\
\delta\theta^{\vphantom{\dagger}}_2&=\frac{2}{\sqrt{3}}\,\Big[\Gamma^{\vphantom{\dagger}}_1\,\sin\big(\theta^{\vphantom{\dagger}}_3+\frac{\pi}{6}\big)
-\Gamma^{\vphantom{\dagger}}_2\,\sin\big(2\theta^{\vphantom{\dagger}}_3-\frac{\pi}{6}\big)\Big]+
{\cal O}(\Gamma^2)\ .
\end{align*}
Note that the bandwidth of the Dirac point energies is tiny:
$2W\approx 13\,{\rm meV}$.
This means that the Landau levels are quite narrow -- more so than in Bernal-stacked
graphite.
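The parameter arithmetic above is easy to verify; here is a minimal check using the magnitudes from the SWMC table (all hoppings taken positive; the negative signs quoted for $\Gamma_{1,2}$ follow if $\gamma_1,\gamma_2<0$, as in the sign convention of the introduction):

```python
# Dirac-curve parameters from the SWMC magnitudes (meV).  Overall signs
# depend on the hopping-sign convention; magnitudes are what is checked here.
g0, g1, g2, g3 = 3160.0, 390.0, 10.0, 250.0

den = g0**2 - g3**2
Gamma1 = (g0 * g1 + g2 * g3) / den    # |Gamma1| ~ 0.124
Gamma2 = (g1 * g3 + g0 * g2) / den    # |Gamma2| ~ 1.30e-2

E0 = 2 * Gamma1 * g3                  # ~62 meV
W = 2 * Gamma2 * g3                   # ~6.5 meV; Dirac-point bandwidth 2W ~ 13 meV
```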
The Fermi surface resembles the sketch in fig. \ref{FS}, which is adapted from fig. 2
of ref. \cite{McC69}.
\begin{figure}[!t]
\centering
\includegraphics[width=6cm, height=7cm]{RHGFS.eps}
\caption
{\label{FS} McClure's ``sausage link'' Fermi surface for rhombohedral graphite, greatly
exaggerated. See also fig. 2 of ref. \cite{McC69}.}
\end{figure}
\subsection{Weak Fields : Kohn-Luttinger Substitution}\label{weak}
We assume the magnetic field ${\mib B}$ is directed along ${\hat\bfz}$.
To obtain the Landau levels, we expand about the Dirac points. (This is essentially
equivalent to expanding about the Fermi energy, since the bandwidth of the Dirac
points is so tiny.) We write
\begin{equation}
{\mib k}\longrightarrow{\mib K} + \hbar^{-1}\,{\mib\pi}\ ,
\end{equation}
where ${\mib\pi}={\mib p}+\frac{e}{c}{\mib A}$. With $\delta\theta^{\vphantom{\dagger}}_j=({\mib k}-{\mib K})\cdot{\mib a}^{\vphantom{\dagger}}_j$,
we have
\begin{align}
\delta\theta^{\vphantom{\dagger}}_1&={1\over\hbar}\,\pi^{\vphantom{\dagger}}_x\,a\\
\delta\theta^{\vphantom{\dagger}}_2&={1\over 2\hbar}\,\pi^{\vphantom{\dagger}}_x\,a + {\sqrt{3}\over 2\hbar}\,\pi^{\vphantom{\dagger}}_y\,a\ .
\end{align}
Recall $[\,\pi^{\vphantom{\dagger}}_x,\pi^{\vphantom{\dagger}}_y\,]=-i\hbar^2/\ell_B^2$ where $\ell\ns_B=\sqrt{\hbar c/eB}$ is the
magnetic length. From eqn. \ref{Sexp}, to lowest order in $\delta\theta^{\vphantom{\dagger}}_{1,2}$, we have
\begin{align}
\big[\, T \, , \, T^\dagger \, \big]&=2i\sin(\pi/3)\,
\big[\,\delta\theta^{\vphantom{\dagger}}_1 \, , \, \delta\theta^{\vphantom{\dagger}}_2\,\big]\nonumber\\
&=2\pi\sqrt{3}\,p/q\ .
\end{align}
where the flux through the unit cell, of area $\Omega=\frac{\sqrt{3}}{2}\,a^2$, is assumed to be a
rational multiple $p/q$ of the Dirac flux quantum $\phi^{\vphantom{\dagger}}_0=hc/e$.
This means we may write
\begin{equation}
T(\theta\nd_1,\theta\nd_2)=-\,\epsilon\,b\ ,
\end{equation}
where
\begin{equation}
\epsilon=\sqrt{B/B_0}\ ,
\label{epse}
\end{equation}
and $b^\dagger$ is a Landau level raising operator: $\big[b,b^\dagger]=1$. Recall that the field
scale $B^{\vphantom{\dagger}}_0=(hc/e)/3\pi a^2=7275\,{\rm T}$.
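A quick numerical check of this field scale: with $hc/e=4.1357\times 10^5\,{\rm T\,\AA}^2$ and $a=2.46\,$\AA\ one obtains $B_0\approx 7.25\times 10^3\,{\rm T}$, within half a percent of the quoted $7275\,{\rm T}$ (the small difference presumably reflects the precise lattice constant used).

```python
import math

# Field scale B_0 = (hc/e) / (3*pi*a^2); flux quantum in T * Angstrom^2.
flux_quantum = 4.1357e5
a = 2.46                    # Angstrom
B0 = flux_quantum / (3 * math.pi * a**2)   # ~7.25e3 T

def eps(B):
    """Dimensionless Landau coupling eps = sqrt(B / B0)."""
    return math.sqrt(B / B0)
```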
It is convenient to define ${\bar\theta}^{\vphantom{*}}_3=\theta^{\vphantom{*}}_3-\frac{1}{3}(\theta^{\vphantom{*}}_1+\theta^{\vphantom{*}}_2)$,
and to absorb a phase into the definition of $b$,
taking $T=-\,\epsilon\,e^{-i{\bar\theta}_3}\,b^\dagger$. Note that when the magnetic field lies
along the $c$-axis, it is $\exp(i{\bar\theta}^{\vphantom{*}}_3)$ and not
$\exp(i\theta^{\vphantom{*}}_3)$ which commutes with the magnetic translations $t({\mib a}^{\vphantom{*}}_{1,2})$.
The Hamiltonian is then
\begin{align}
H&={\cal E}^{\vphantom{\dagger}}_0 + W\cos(3{\bar\theta}^{\vphantom{\dagger}}_3) \\
&\qquad + \epsilon
\begin{pmatrix} -\gamma\ns_3 \,(b +b^\dagger) &
\gamma\ns_0\,e^{-i{\bar\theta}_3}\,b^\dagger -\gamma\ns_3\,b\\
\gamma\ns_0\,e^{i{\bar\theta}_3}\,b -\gamma\ns_3\,b^\dagger &
-\gamma\ns_3 \,(b + b^\dagger) \end{pmatrix}\ .
\end{align}
Consider the matrix operators
\begin{align}
{\cal Q}^{\vphantom{\dagger}}_0&=\gamma\ns_0\begin{pmatrix} 0 & e^{-i{\bar\theta}_3}\,b^\dagger \\
e^{i{\bar\theta}_3}\,b & 0 \end{pmatrix} \\
{\cal Q}^{\vphantom{\dagger}}_1&=\gamma\ns_3\begin{pmatrix} b +b^\dagger & b\\ b^\dagger & b + b^\dagger
\end{pmatrix}
\end{align}
The eigenvectors of ${\cal Q}^{\vphantom{\dagger}}_0$ are
\begin{equation}
\ket{{\rm\Psi}^{\vphantom{\dagger}}_0}=\begin{pmatrix} \ket{0} \\ 0 \end{pmatrix} \quad,\quad
E^0_0=0
\end{equation}
and
\begin{equation}
\ket{{\rm\Psi}^\pm_n}={1\over\sqrt{2}}\begin{pmatrix} e^{-i{\bar\theta}_3}\,\ket{n} \\
\pm\, \ket{n-1} \end{pmatrix} \quad,\quad
E^0_n=\pm\, \sqrt{n}\,\epsilon\,\gamma\ns_0\ ,
\end{equation}
where $n=1,2,3,\ldots\ .$ It is easy to see that
\begin{equation}
\expect{{\rm\Psi}^\pm_n}{{\cal Q}^{\vphantom{\dagger}}_1}{{\rm\Psi}^\pm_n}=0\ ,
\end{equation}
as well as $\expect{{\rm\Psi}^{\vphantom{\dagger}}_0}{{\cal Q}^{\vphantom{\dagger}}_1}{{\rm\Psi}^{\vphantom{\dagger}}_0}=0$, hence there is no first
order shift of the eigenvalues. Therefore, up to first order in $\epsilon$, the Landau
level energies are given by
\begin{equation}
E^{\vphantom{\dagger}}_n(\theta^{\vphantom{\dagger}}_3)={\cal E}^{\vphantom{\dagger}}_0+W\cos(3{\bar\theta}^{\vphantom{\dagger}}_3)\pm\epsilon\,\gamma\ns_0\,\sqrt{n}\ ,
\end{equation}
where $n=0,1,2,\ldots\ .$ The gap between Landau levels $n$ and $n+1$ collapses when
\begin{equation}
\epsilon\,\gamma\ns_0\,\sqrt{n}+W=\epsilon\,\gamma\ns_0\,\sqrt{n+1}-W\ ,
\end{equation}
which gives a critical field of
\begin{equation}
B^{\vphantom{\dagger}}_{{\rm c},n}=\Big(\sqrt{n+1}+\sqrt{n}\Big)^{\!2}\,B_{{\rm c},0}\ ,
\end{equation}
with $B^{\vphantom{\dagger}}_{{\rm c},0}=(2W/\gamma\ns_0)^2\cdot B^{\vphantom{\dagger}}_0=0.123\,{\rm T}$.
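A quick numerical sketch of the resulting critical-field series, using the parameter values above:

```python
import math

# Critical fields for rhombohedral graphite (W, gamma0 in meV; fields in T).
gamma0 = 3160.0
W = 6.5
B0 = 7275.0

Bc0 = (2 * W / gamma0) ** 2 * B0       # ~0.123 T

def Bc(n):
    """Field at which the gap between Landau levels n and n+1 closes."""
    return (math.sqrt(n + 1) + math.sqrt(n)) ** 2 * Bc0

# The first few: ~0.12 T, ~0.72 T, ~1.2 T, ...
```

In contrast to the Bernal and AAA cases, these fields are easily reached in the laboratory.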
\begin{figure}[!b]
\centering
\includegraphics[width=7cm]{bernal_2.eps}
\caption
{\label{bernal_2} Top-view of Bernal hexagonal graphite.}
\end{figure}
\subsection{Comparison with Bernal Stacking}
The $ABAB$ stacking pattern of Bernal hexagonal graphite is shown in fig. \ref{bernal_2}.
To obtain the critical fields in BHG, it suffices to consider
a simple nearest-neighbor model \cite{BHRA07}. Expanding about the $K$-$H$ spine
in the Brillouin zone, we obtain in the presence of a uniform $c$-axis magnetic field,
\begin{equation}
H = \left(
\begin{array}{cccc}
0 & \epsilon\,\gamma\ns_0\, b & 2\,\gamma\ns_1 \cos\theta^{\vphantom{\dagger}}_3 & 0 \\
\epsilon\,\gamma\ns_0\, b^\dagger & 0 & 0 & 0 \\
2\, \gamma\ns_1 \cos\theta^{\vphantom{\dagger}}_3 & 0 & 0 & -\epsilon\,\gamma\ns_0\, b^\dagger \\
0 & 0 & -\epsilon\,\gamma\ns_0\, b & 0 \\
\end{array}\right) \label{toy}
\end{equation}
where $\epsilon=(2\pi\sqrt{3}\,p/q)^{1/2}=\sqrt{B/B_0}$ as in the rhombohedral case.
The spectrum has explicit particle-hole symmetry. For $n=0$ there
are eigenvalues at
$\pm\big(\epsilon^2\,\gamma_0^2 + 4\gamma_1^2\,\cos^2\!\theta^{\vphantom{\dagger}}_3\big)^{1/2}$
and a doubly degenerate level at $E^{\vphantom{\dagger}}_0=0$. For $n\ne 0$,
\begin{align}
E^2_n=&(n+\frac{1}{2})\, \epsilon^2\, \gamma_0^2+2\gamma_1^2
\cos^2\!\theta^{\vphantom{\dagger}}_3\\
&\hskip -0.6cm \pm\sqrt{\frac{1}{4}\,\epsilon^4\,\gamma_0^4+ 4\,(n+\frac{1}{2}) \,\epsilon^2\,
\gamma_0^2\,\gamma_1^2\, \cos^2\!\theta^{\vphantom{\dagger}}_3+
4\, \gamma_1^4\,\cos^4\!\theta^{\vphantom{\dagger}}_3} \ .\nonumber
\end{align}
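This closed form can be checked directly: for each level index $n\ge 1$ the Hamiltonian above closes on a four-dimensional block spanned by oscillator states $(|n\rangle,|n{+}1\rangle,|n\rangle,|n{-}1\rangle)$ on the four sites (our basis labeling; the relative sign of the lower couplings is a gauge choice that leaves $E^2$ unchanged). A sketch, in meV:

```python
import numpy as np

# Compare the closed-form E_n^2 against the decoupled 4x4 block (n >= 1).
gamma0, gamma1 = 3160.0, 390.0

def closed_form(n, eps, th3):
    """Positive-energy levels from the closed-form expression for E_n^2."""
    c2 = np.cos(th3) ** 2
    a = (n + 0.5) * eps**2 * gamma0**2 + 2 * gamma1**2 * c2
    r = np.sqrt(0.25 * eps**4 * gamma0**4
                + 4 * (n + 0.5) * eps**2 * gamma0**2 * gamma1**2 * c2
                + 4 * gamma1**4 * c2**2)
    return np.sqrt([a - r, a + r])

def block_levels(n, eps, th3):
    """Positive eigenvalues of the decoupled 4x4 block (our basis labeling)."""
    s = np.sqrt(n + 1) * eps * gamma0      # couples |n> and |n+1>
    t = np.sqrt(n) * eps * gamma0          # couples |n> and |n-1>
    c = 2 * gamma1 * np.cos(th3)
    H = np.array([[0, s, c, 0],
                  [s, 0, 0, 0],
                  [c, 0, 0, -t],
                  [0, 0, -t, 0]])
    E = np.linalg.eigvalsh(H)
    return np.sort(E[E > 0])
```

The two routes agree level by level, term-for-term with the expression above.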
In fig. \ref{BHG_bands}, we plot the lowest several energy bands {\it versus\/} magnetic
field for BHG. Due to the inadequacies of the nearest-neighbor model, the principal
gap surrounding the central $E=0$ Landau levels opens immediately for nonzero $B$.
Including more realistic band structure effects, consistent with the semimetallic nature
of BHG, this gap opens at a critical field of $B^{\vphantom{\dagger}}_{\rm c}\approx 15\,{\rm T}$ for positive
energies and $B^{\vphantom{\dagger}}_{\rm c}\approx 7\,{\rm T}$ for negative energies \cite{BHRA07}.
The Hall conductance
is quantized at $\sigma^{\vphantom{\dagger}}_{xy}=2C\,e^2/hc^{\vphantom{\dagger}}_0$ when the Fermi level lies in a bulk gap,
where $c^{\vphantom{\dagger}}_0=2d$ in BHG and $c^{\vphantom{\dagger}}_0=3d$ in RG, where $d=3.37\,$\AA\ is the spacing
between planes, and $C$ is a topological integer associated with the gap.
In both cases, the values of $C$ are such that $\sigma^{\vphantom{\dagger}}_{xy}$ corresponds to
the graphene quantization per layer, changing by $4 e^2/hd$ as one crosses a Landau
level. We indicate the width of the bands by shading
the region between $\cos^2\!\theta^{\vphantom{\dagger}}_3=0$ and $\cos^2\!\theta^{\vphantom{\dagger}}_3=1$.
In both cases, the Zeeman coupling is omitted; with $g\approx 2$ the Zeeman
splitting is small compared with the cyclotron energy.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{RHG_bands.eps}
\caption
{\label{RHG_bands} Landau level structure in rhombohedral graphite within the
tight binding model of section \ref{rhombo}, with Zeeman term ignored. Principal band
gaps are labeled by the Chern number $C$ (per spin degree of freedom). When $E_\ssr{F}$
lies within a gap, the Hall conductivity is $2C\times\frac{e^2}{h}\big/(3d)$, where $d=3.37\,$
\AA\ is the interplane spacing.}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{BHG_bands.eps}
\caption
{\label{BHG_bands} Landau level structure in Bernal graphite within the nearest-neighbor
hopping model, with Zeeman term ignored. Principal band gaps are labeled by the
Chern number $C$ (per spin degree of freedom). When $E_\ssr{F}$ lies within a gap,
the Hall conductivity is $2C\times \frac{e^2}{h}\big/(2d)$. When further neighbor hoppings are
included, particle-hole symmetry is
broken, and a finite field is required to open the principal gap \cite{BHRA07}.}
\end{figure}
\section{Chiral Surface States}
As shown by Hatsugai \cite{Hat93}, the Chern number $C$ can also be computed by
following the spectral flow in a system with edges, wrapped around a cylinder, as
a function of the gauge flux through the cylinder. To elicit this spectral flow, we
derive a Hofstadter Hamiltonian \cite{Hof76} for RG. We start with the Hamiltonian
elements in section \ref{rhombo}, but now treating them as magnetic translations,
which satisfy the algebra
\begin{equation}
t({\mib a})\,t({\mib b})=e^{i{\mib a}\times{\mib b}\cdot{\hat\rbfn}/2\ell_B^2}\,t({\mib a}+{\mib b})\ ,
\end{equation}
where ${\mib B}=B\,{\hat\rbfn}$. For our problem we define the elementary translations
\begin{equation}
t^{\vphantom{\dagger}}_1\equiv t({\mib\delta}) \quad,\quad t^{\vphantom{\dagger}}_2\equiv t({\mib a}^{\vphantom{\dagger}}_1+{\mib\delta})
\end{equation}
as well as $\tau\equiv t(d{\hat\bfz})=t({\mib a}^{\vphantom{\dagger}}_3+{\mib\delta})$.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{RHG_edge.eps}
\caption
{\label{RHG_edge} Spectral flow in rhombohedral graphite showing edge state evolution.
Top panel: armchair edge, perpendicular to ${\mib\delta}$; bottom panel: zigzag edge,
perpendicular to ${\mib a}_1$. The bulk gaps are labeled by Chern numbers $C$ which
correspond to the number of edge states crossing the gap as the angle $\theta_1$
is varied. The flux per unit cell here is rather large, with $p=1$ and $q=200$,
corresponding to a field of $B=396\,{\rm T}$. The topological features of the
edge state spectral flow are robust with respect to field.}
\end{figure}
With ${\hat\rbfn}={\hat\bfz}$, $\tau$ commutes with $t^{\vphantom{\dagger}}_1$ and $t^{\vphantom{\dagger}}_2$,
and we can specify its eigenvalue as $e^{i{\bar\theta}^{\vphantom{\dagger}}_3}$. As for $t^{\vphantom{\dagger}}_{1,2}$,
we have
\begin{equation}
t^{\vphantom{\dagger}}_1\>t^{\vphantom{\dagger}}_2=e^{i\phi/3}\,t^{\vphantom{\dagger}}_2\>t^{\vphantom{\dagger}}_1\ ,
\end{equation}
where $\phi=\Omega/\ell_B^2=2\pi p/q$ is the flux per graphene hexagon in units
of $\hbar c/e$. We may then write
\begin{equation}
H=\begin{pmatrix} H^{\vphantom{\dagger}}_{AA} & H^{\vphantom{\dagger}}_{AB} \\ & \\ H^\dagger_{AB} & H^{\vphantom{\dagger}}_{BB} \end{pmatrix}\ ,
\end{equation}
with
\begin{align}
H^{\vphantom{\dagger}}_{AA}&=\gamma\ns_3\,\big(e^{i{\bar\theta}_3}\,t^{\vphantom{\dagger}}_1 + e^{-i{\bar\theta}_3}\,
t_1^\dagger\big)\\
&\qquad + \gamma\ns_3\,\big(e^{i{\bar\theta}_3} + e^{-i{\bar\theta}_3}\,
e^{-i\phi/6}\,t^{\vphantom{\dagger}}_1\big)\,t^{\vphantom{\dagger}}_2\nonumber\\
&\qquad + \gamma\ns_3\,\big(e^{-i{\bar\theta}_3} + e^{i{\bar\theta}_3}\,
e^{-i\phi/6}\,t_1^\dagger\big)\,t_2^\dagger=H^{\vphantom{\dagger}}_{BB}\nonumber
\end{align}
and
\begin{align}
H^{\vphantom{\dagger}}_{AB}&=\big(\gamma\ns_1\,e^{i{\bar\theta}_3}+\gamma\ns_2\,e^{-2i{\bar\theta}_3}
-\gamma\ns_0\,t^\dagger_1 + \gamma\ns_3\,e^{-i{\bar\theta}_3}\,t^{\vphantom{\dagger}}_1\big)\nonumber\\
&\qquad\qquad +\big(\gamma\ns_3\,e^{-i{\bar\theta}_3} - \gamma\ns_0\,e^{-i\phi/6}\,t^{\vphantom{\dagger}}_1\big)\,t^{\vphantom{\dagger}}_2
\nonumber\\
&\qquad\qquad -\big(\gamma\ns_0 - \gamma\ns_3\,e^{i{\bar\theta}_3}\,e^{-i\phi/6}\,t^\dagger_1\big)\,t^\dagger_2\ .
\end{align}
We define the basis $\big\{\ket{n}\big\}$ as follows:
\begin{align}
t^{\vphantom{\dagger}}_1\,\ket{n}&=e^{i{\bar\theta}^{\vphantom{\dagger}}_1}\,e^{i n\phi/3}\,\ket{n} \\
t^{\vphantom{\dagger}}_2\,\ket{n}&= e^{i\theta^{\vphantom{\dagger}}_2}\,\ket{n+1}\ ,
\end{align}
where ${\bar\theta}^{\vphantom{\dagger}}_1=\theta^{\vphantom{\dagger}}_1/3q$ and $\ket{n+3q}=\ket{n}$.
Taking the matrix elements of $H$ within this basis, one obtains a $6q\times 6q$ matrix
to diagonalize, with periodic boundary conditions. If we introduce an edge
by eliminating the coupling between states $\ket{1}$ and $\ket{3q}$, and plot the
spectral flow as a function of ${\bar\theta}^{\vphantom{\dagger}}_1$, we obtain the top panel in fig.
\ref{RHG_edge}. We can also obtain the chiral surface state flow for a zigzag edge,
perpendicular to the vector ${\mib a}^{\vphantom{\dagger}}_1$; this is shown in the bottom panel of
fig. \ref{RHG_edge}. For periodic systems, exact diagonalizations performed using the
Lanczos method for $q$ up to $1500$ with the package ARPACK were found to agree with
the weak field results of section \ref{weak}.
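As a concrete companion to this procedure, here is a sketch of the periodic (bulk) diagonalization in Python, implementing $H_{AA}$ and $H_{AB}$ verbatim in the $\{|n\rangle\}$ basis with the gauge angles set to zero for simplicity; as a sanity check, the lowest Landau level should come out near $E_0+W\cos(3{\bar\theta}_3)$, the weak-field result.

```python
import numpy as np

# Bulk (periodic) Hofstadter matrix for rhombohedral graphite in the |n>
# basis of this section, with p = 1, q = 200 (B ~ 396 T); energies in meV.
g0, g1, g2, g3 = 3160.0, 390.0, 10.0, 250.0
p, q = 1, 200
N = 3 * q
phi = 2 * np.pi * p / q                 # flux per hexagon
th1bar = th2 = th3bar = 0.0             # gauge angles, set to zero here

n = np.arange(N)
t1 = np.diag(np.exp(1j * (th1bar + n * phi / 3)))
t2 = np.exp(1j * th2) * np.roll(np.eye(N), 1, axis=0)   # t2|n> = e^{i th2}|n+1>
I = np.eye(N)
e3 = np.exp(1j * th3bar)
f = np.exp(-1j * phi / 6)
t1d, t2d = t1.conj().T, t2.conj().T

HAA = (g3 * (e3 * t1 + e3.conjugate() * t1d)
       + g3 * (e3 * I + e3.conjugate() * f * t1) @ t2
       + g3 * (e3.conjugate() * I + e3 * f * t1d) @ t2d)
HAB = ((g1 * e3 + g2 * e3.conjugate()**2) * I
       - g0 * t1d + g3 * e3.conjugate() * t1
       + (g3 * e3.conjugate() * I - g0 * f * t1) @ t2
       - (g0 * I - g3 * e3 * f * t1d) @ t2d)
H = np.block([[HAA, HAB], [HAB.conj().T, HAA]])
assert np.allclose(H, H.conj().T)       # Hermiticity

E = np.linalg.eigvalsh(H)
den = g0**2 - g3**2
E0 = 2 * g3 * (g0 * g1 + g2 * g3) / den
W = 2 * g3 * (g1 * g3 + g0 * g2) / den
target = E0 + W * np.cos(3 * th3bar)
print(np.min(np.abs(E - target)))       # small compared with the n=1 Landau gap
```

An edge is introduced, as described above, by zeroing the matrix elements that couple $|1\rangle$ and $|3q\rangle$.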
\section{Stacking Faults in Bernal Hexagonal Graphite}
We now turn to an analysis of simple stacking faults in BHG, first with ${\mib B}=0$ and
then for finite ${\mib B}$. Consider first a triangular lattice, which is tripartite, and its
three triangular sublattices $A$, $B$, and $C$. By eliminating one of these three
sublattices, the remaining structure will be a honeycomb lattice. Now imagine
a stack of triangular lattices. At each layer, we choose a sublattice $A$, $B$, or $C$
to remove; this defines a stacking pattern. Since it is energetically unfavorable to
stack a honeycomb layer directly atop another, at each layer we have two choices
consistent with the layer below. If the empty sublattices are in $ABC$ {\it et cyc.\/}
order from layer $l$ to layer $l+1$, we write $\sigma^{\vphantom{\dagger}}_{l,l+1}=+1$. If instead the
order is $CBA$ {\it et cyc.\/}, we write $\sigma^{\vphantom{\dagger}}_{l,l+1}=-1$. For RG, the $\sigma$
indices are `ferromagnetic', {\it i.e.\/}\ $++++$ or $----$. For BHG, the indices are ordered
`antiferromagnetically', {\it i.e.\/}\ $+-+-$.
The three triangular sublattices $A$, $B$, and $C$ are defined by
\begin{align}
u^\ssr{A}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2\\
u^\ssr{B}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2 + {\mib\delta}^{\vphantom{*}}_1\\
u^\ssr{C}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2 + 2\,{\mib\delta}^{\vphantom{*}}_1
\end{align}
We define three additional sublattices by
\begin{align}
v^\ssr{A}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=u^\ssr{B}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2 + {\mib\delta}^{\vphantom{*}}_1\\
v^\ssr{B}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=u^\ssr{C}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2 + 2\,{\mib\delta}^{\vphantom{*}}_1 \\
v^\ssr{C}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}&=u^\ssr{A}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}=n^{\vphantom{*}}_1\,{\mib a}^{\vphantom{*}}_1 + n^{\vphantom{*}}_2\, {\mib a}^{\vphantom{*}}_2
\end{align}
The sites $\big\{u^\ssr{A}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}\, , \, v^\ssr{A}_{n^{\vphantom{*}}_1,n^{\vphantom{*}}_2}\big\}$ {\it etc.\/}\ form a honeycomb lattice, which we call
the $A$ or $\alpha$ structure. Bernal graphite is stacked in an $ABABAB$ configuration.
Within each honeycomb layer, we write the wavefunction as a two-component spinor,
\begin{equation}
\psi^{\vphantom{\dagger}}_{\mib k}=\begin{pmatrix} u^{\vphantom{\dagger}}_{\mib k} \\ v^{\vphantom{\dagger}}_{\mib k} \end{pmatrix}\ ,
\end{equation}
where ${\mib k}$ is the crystal momentum in the basal ($k_z=0$) Brillouin zone.
The hopping between planes is described by the following local Schr{\"o}dinger equation,
which couples a central plane $l$ to planes below ($l-1$) and above ($l+1$):
\begin{equation}
M\psi^{\vphantom{*}}_l+\gamma\ns_1(\Sigma^{\sigma})^\dagger\psi^{\vphantom{*}}_{l-1} +
\gamma\ns_1\Sigma^{\sigma'}\psi^{\vphantom{*}}_{l+1}=0\ .
\label{PSE}
\end{equation}
Here, $\sigma^{\vphantom{*}}_{l-1,l}=\sigma$ and $\sigma^{\vphantom{*}}_{l,l+1}=\sigma'$,
{\it i.e.\/}\ the shift in the $u$ sublattice sites from plane $l-1$ to plane $l$ is through
a vector $\sigma{\mib\delta}^{\vphantom{*}}_1$. The matrix $M$ is given by
\begin{equation}
M=\begin{pmatrix} E & \gamma\ns_0\,S \\ \gamma\ns_0\,S^* & E \end{pmatrix}\ ,
\end{equation}
and
\begin{equation}
S=e^{i{\mib k}\cdot{\mib\delta}^{\vphantom{\dagger}}_1} + e^{i{\mib k}\cdot{\mib\delta}^{\vphantom{\dagger}}_2} +
e^{i{\mib k}\cdot{\mib\delta}^{\vphantom{\dagger}}_3}
\end{equation}
and
\begin{equation}
\Sigma^+=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad,\qquad
\Sigma^-=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\ .
\end{equation}
\subsection{Bernal Hexagonal Graphite}
We first consider the BHG stacking order $ABABAB$, where $\sigma^{\vphantom{*}}_{l,l+1}=(-1)^l$.
Using translational invariance, we may write, for the even and odd sites
\begin{align}
\psi^{\vphantom{\dagger}}_{2j}&=e^{2ijk_z d}\,\phi\\
\psi^{\vphantom{\dagger}}_{2j+1}&=e^{i(2j+1)k_z d}\,{\raise.35ex\hbox{$\chi$}}\ ,
\end{align}
where $\phi$ and ${\raise.35ex\hbox{$\chi$}}$ satisfy
\begin{align}
M\phi +2\gamma\ns_1\cos\big(k^{\vphantom{\dagger}}_z d\big)\Sigma^-{\raise.35ex\hbox{$\chi$}}&=0\vphantom{\sum_i}\\
M{\raise.35ex\hbox{$\chi$}} +2\gamma\ns_1\cos\big(k^{\vphantom{\dagger}}_z d\big)\Sigma^+\phi&=0\ .
\end{align}
Inverting the second of these equations gives
\begin{equation}
{\raise.35ex\hbox{$\chi$}}=-2\gamma\ns_1\cos\big(k^{\vphantom{\dagger}}_z d\big) M^{-1}\Sigma^+\phi\ .
\label{invert}
\end{equation}
Substituting this into the first equation yields
\begin{equation}
\Big(M-4\gamma_1^2\cos^2\!\big(k^{\vphantom{\dagger}}_z d\big) \Sigma^- M^{-1}\Sigma^+\Big)\phi=0\ .
\end{equation}
Accordingly, we define
\begin{align}
{\cal K}&\equiv M-4\gamma_1^2\cos^2\!\big(k^{\vphantom{\dagger}}_z d\big) \Sigma^- M^{-1}\Sigma^+\vphantom{\sum_i}\\
&=\begin{pmatrix} E & \gamma\ns_0\,S \\ & \\ \gamma\ns_0\,S^* & E\Big(1-{4\gamma_1^2
\cos^2\!(k^{\vphantom{\dagger}}_z d )\over E^2-\gamma_0^2\, |S|^2}\Big)\end{pmatrix}\ .
\end{align}
Setting ${\rm det}\,{\cal K}=0$ yields the eigenvalue equation for Bernal graphite,
\begin{equation}
\big(E^2-\gamma_0^2\, |S|^2\big)^2-4E^2\,\gamma_1^2
\cos^2\!\big(k^{\vphantom{\dagger}}_z d\big)=0\ ,
\end{equation}
with solutions
\begin{equation}
E^{(\mu,\mu')}_{{\mib k},k^{\vphantom{\dagger}}_z}=-\mu\,\gamma\ns_1\cos\big(k^{\vphantom{\dagger}}_z d\big)
-\mu'\sqrt{\gamma_1^2\cos^2\!\big(k^{\vphantom{\dagger}}_z d\big) + \gamma_0^2\,|S|^2}\ ,
\end{equation}
where $\mu=\pm 1$ and $\mu'=\pm 1$. The four choices for $(\mu,\mu')$ correspond to the
four energy bands.
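As a quick numerical check (illustrative only, with sample parameter values in eV; not part of the derivation), one can verify that each band $E^{(\mu,\mu')}$ renders ${\cal K}$ singular, and that $\phi=(-\gamma_0\,S,\,E)$ lies in its kernel:

```python
import numpy as np

# Sanity check: for each band E^(mu,mu'), the matrix K is singular and
# phi = (-gamma_0 S, E) solves K phi = 0.  Parameters are sample values (eV).
g0, g1 = 3.16, 0.39
Sv = 1.2*np.exp(0.4j)      # arbitrary complex value of S(k)
kzd = 0.6                  # arbitrary value of k_z d

for mu in (+1, -1):
    for mup in (+1, -1):
        E = -mu*g1*np.cos(kzd) - mup*np.sqrt(g1**2*np.cos(kzd)**2 + g0**2*abs(Sv)**2)
        Kmat = np.array([[E, g0*Sv],
                         [g0*np.conj(Sv),
                          E*(1 - 4*g1**2*np.cos(kzd)**2/(E**2 - g0**2*abs(Sv)**2))]])
        phi = np.array([-g0*Sv, E])
        assert abs(np.linalg.det(Kmat)) < 1e-9   # det K = 0 on each band
        assert np.allclose(Kmat @ phi, 0)        # K phi = 0
```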
From ${\cal K}\,\phi=0$, we may write
\begin{equation}
\phi=\begin{pmatrix}\phi^{\vphantom{\dagger}}_1 \\ \phi^{\vphantom{\dagger}}_2\end{pmatrix}=
\begin{pmatrix} -\gamma\ns_0\,S \\ E\end{pmatrix}\ .
\end{equation}
From eqn. \ref{invert}, then, we have
\begin{equation}
{\raise.35ex\hbox{$\chi$}}=\begin{pmatrix}{\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_1 \\ {\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_2\end{pmatrix}=
{2E\,\gamma\ns_1\cos(k^{\vphantom{\dagger}}_z d )\over E^2-\gamma_0^2\, |S|^2}\,
\begin{pmatrix} -E \\ \gamma\ns_0\,S^* \\ \end{pmatrix}=\mu\begin{pmatrix}
E \\ -\gamma\ns_0\,S^*\end{pmatrix}\ .
\end{equation}
\subsection{Step Defect}
Consider now the stacking defect $ABABCBCB$, which in terms of the
$\sigma^{\vphantom{*}}_{l,l+1}$ variables may be depicted as
\begin{equation}
\cdots|+|-|+|-|-|+|-|+|\cdots
\label{step}
\end{equation}
We label the central plane $l=0$. For plane indices $l<0$, the odd layers correspond to $\phi$
planes and the even layers to ${\raise.35ex\hbox{$\chi$}}$ planes. For $l>0$, the even layers correspond to
$\phi$ planes and the odd layers to ${\raise.35ex\hbox{$\chi$}}$ planes. With $l<0$, we consider an incident
plane wave running to the right (up) and a reflected plane wave running to the left
(down). Then we have
\begin{align}
\psi^{\vphantom{\dagger}}_{2j}&=\big(\alpha\,e^{2ijk^{\vphantom{\dagger}}_z d} + \beta'\, e^{-2ijk^{\vphantom{\dagger}}_z d} \big)\,{\raise.35ex\hbox{$\chi$}}\\
\psi^{\vphantom{\dagger}}_{2j+1}&=\big(\alpha\,e^{i(2j+1)k^{\vphantom{\dagger}}_z d} + \beta'\,
e^{-i(2j+1)k^{\vphantom{\dagger}}_z d} \big)\,\phi\ ,
\end{align}
for all $j<0$. Here $\alpha$ is the complex amplitude of the incident wave and
$\beta'$ is the complex amplitude of the reflected wave.
Correspondingly, we have
\begin{align}
\psi^{\vphantom{\dagger}}_{2j-1}&=\big(\beta\,e^{i(2j-1) k^{\vphantom{\dagger}}_z d}+\alpha' \,
e^{-i(2j-1) k^{\vphantom{\dagger}}_z d}\big)\,{\raise.35ex\hbox{$\chi$}}\\
\psi^{\vphantom{\dagger}}_{2j}&=\big(\beta\, e^{2ijk^{\vphantom{\dagger}}_z d}+\alpha'\,e^{-2ijk^{\vphantom{\dagger}}_z d}\big) \,\phi\ ,
\end{align}
for all $j>0$. Here $\alpha'$ is the incident amplitude (from the right/top)
and $\beta$ is the reflected amplitude.
To match the solutions for positive and negative $l$, we first invoke eqn. \ref{PSE} with $l=-1$:
\begin{equation}
M\psi^{\vphantom{\dagger}}_{-1} + \gamma\ns_1\Sigma^-\psi^{\vphantom{\dagger}}_{-2} + \gamma\ns_1\Sigma^-\psi^{\vphantom{\dagger}}_0=0\ .
\end{equation}
The most general solution for $\psi^{\vphantom{\dagger}}_0$ is then
\begin{equation}
\psi^{\vphantom{\dagger}}_0=(\alpha+\beta')\,{\raise.35ex\hbox{$\chi$}} + \begin{pmatrix} 0 \\ b \end{pmatrix}\ ,
\label{left}
\end{equation}
where $b$ is an arbitrary complex number. Note that $\Sigma^-$ annihilates any
vector with upper component $0$.
Next, set $l=+1$ and obtain
\begin{equation}
M\psi^{\vphantom{\dagger}}_{1} + \gamma\ns_1\Sigma^+\psi^{\vphantom{\dagger}}_{0} + \gamma\ns_1\Sigma^+\psi^{\vphantom{\dagger}}_2=0\ .
\end{equation}
We may now write
\begin{equation}
\psi^{\vphantom{\dagger}}_0=(\beta+\alpha')\,\phi + \begin{pmatrix} a \\ 0 \end{pmatrix}\ ,
\label{right}
\end{equation}
where $a$ is an arbitrary complex parameter. Note that $\Sigma^+$ annihilates any vector
with lower component $0$.
The parameters $a$ and $b$ are then fixed by equating these two expressions for
$\psi^{\vphantom{\dagger}}_0$, yielding
\begin{equation}
\begin{pmatrix} a \\ -b \end{pmatrix} = (\alpha+\beta')\,{\raise.35ex\hbox{$\chi$}} - (\beta+\alpha') \,\phi\ .
\end{equation}
The wavefunction at $l=0$ can now be found. One simple way is to take the upper
component from eqn. \ref{left} and the lower component from eqn. \ref{right}:
\begin{equation}
\psi^{\vphantom{\dagger}}_0=\begin{pmatrix} (\alpha+\beta')\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_1 \\ (\beta+\alpha')\,\phi^{\vphantom{\dagger}}_2
\end{pmatrix}\ .
\end{equation}
Next, we write the Schr{\"o}dinger equation for the $l=0$ plane:
\begin{align}
0&=M\psi^{\vphantom{\dagger}}_0 + \gamma\ns_1\Sigma^+\psi^{\vphantom{\dagger}}_{-1} + \gamma\ns_1\Sigma^-\psi^{\vphantom{\dagger}}_{+1}\\
&=\begin{pmatrix} E & \gamma\ns_0\,S \\ \gamma\ns_0 S^* & E \end{pmatrix}
\begin{pmatrix} (\alpha+\beta')\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_1 \\ (\beta+\alpha')\,\phi^{\vphantom{\dagger}}_2 \end{pmatrix}
\nonumber\\
&\qquad\qquad+\gamma\ns_1\,\big(\alpha\,e^{-i k^{\vphantom{\dagger}}_z d} + \beta'\,
e^{i k^{\vphantom{\dagger}}_z d}\big)
\begin{pmatrix} \phi^{\vphantom{\dagger}}_2 \\ 0 \end{pmatrix}\nonumber\\
&\qquad\qquad\qquad +
\gamma\ns_1\,\big(\beta\,e^{i k^{\vphantom{\dagger}}_z d} + \alpha'\, e^{-i k^{\vphantom{\dagger}}_z d}\big)\,
\begin{pmatrix} 0 \\ {\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_1 \end{pmatrix}\nonumber\ .
\end{align}
This yields two equations which may be solved to relate the outgoing amplitudes
$\beta$ and $\beta'$ to the incoming amplitudes $\alpha$ and $\alpha'$, {\it i.e.\/}\ to
derive the ${\cal S}$-matrix.
Using our previously derived results for $\phi$ and ${\raise.35ex\hbox{$\chi$}}$, we find that the above
equation reduces to
\begin{align}
0&=\begin{pmatrix} E & \gamma\ns_0\,S \\ \gamma\ns_0 S^* & E \end{pmatrix}
\begin{pmatrix} \mu\,(\alpha+\beta') \\ (\beta+\alpha') \end{pmatrix}\nonumber\\
&\qquad
+\>\gamma\ns_1\begin{pmatrix}\big(\alpha\,e^{-i k^{\vphantom{\dagger}}_z d} +\beta'\, e^{i k^{\vphantom{\dagger}}_z d}
\big) \\ \mu\,\big(\beta\,e^{i k^{\vphantom{\dagger}}_z d} + \alpha'\, e^{-i k^{\vphantom{\dagger}}_z d}\big)
\end{pmatrix}\ .
\end{align}
This yields
\begin{align}
0&=\begin{pmatrix} \mu\, E + \gamma\ns_1\,e^{-i \theta^{\vphantom{\dagger}}_z/2} & \gamma\ns_0\,S \\
\mu\,\gamma\ns_0\,S^* & E+\mu\, \gamma\ns_1\,e^{-i \theta^{\vphantom{\dagger}}_z/2} \end{pmatrix}
\begin{pmatrix} \alpha \\ \alpha' \end{pmatrix}\nonumber\\
&\qquad\qquad +
\begin{pmatrix} \gamma\ns_0\,S & \mu\, E + \gamma\ns_1\,e^{i\theta^{\vphantom{\dagger}}_z/2} \\
E+\mu\, \gamma\ns_1\,e^{i \theta^{\vphantom{\dagger}}_z/2} & \mu\,\gamma\ns_0\,S^* \end{pmatrix}
\begin{pmatrix} \beta \\ \beta' \end{pmatrix} \ ,
\end{align}
where $\theta^{\vphantom{\dagger}}_z\equiv 2 k^{\vphantom{\dagger}}_z d$. The ${\cal S}$-matrix is defined by
\begin{equation}
\begin{pmatrix} \beta \\ \beta' \end{pmatrix} =
\overbrace{\begin{pmatrix} t & r' \\ r & t' \end{pmatrix}}^{{\cal S}{\rm -matrix}}
\begin{pmatrix} \alpha \\ \alpha' \end{pmatrix}\ .
\end{equation}
Solving for ${\cal S}$, we obtain
\begin{equation}
{\cal S}={-\begin{pmatrix} 2i\gamma\ns_0\,S^*\sin(\theta^{\vphantom{\dagger}}_z/2) & \gamma\ns_1 \\
\gamma\ns_1 & 2i\gamma\ns_0\,S\sin(\theta^{\vphantom{\dagger}}_z/2) \end{pmatrix}
\over \gamma\ns_1\cos(\theta^{\vphantom{\dagger}}_z) +2\, i\mu \,\sin(\theta^{\vphantom{\dagger}}_z/2)
\left[\gamma_1^2\cos^2(\theta^{\vphantom{\dagger}}_z/2) + \gamma_0^2\,|S|^2\right]^{1/2}}\ .
\end{equation}
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{trcoef.eps}
\caption
{\label{trcoef} Reflection and transmission coefficients for ${\mib k}=\alpha_1\,{\mib b}_1
+\alpha_2\,{\mib b}_2$, for four sets of $(\alpha_1,\alpha_2)$, in the vicinity
of the Dirac point $(\frac{1}{3},\frac{2}{3})$. Only positive energies are shown.}
\end{figure}
Thus, for all $\mu$ and $\mu'$, we have
\begin{equation}
R=|r|^2={\gamma_1^2\over \gamma_1^2 + 4\, \gamma_0^2\, |S|^2
\sin^2(\theta^{\vphantom{\dagger}}_z/2)}\ ,
\end{equation}
and
\begin{equation}
T=|t|^2={4\, \gamma_0^2\, |S|^2\sin^2(\theta^{\vphantom{\dagger}}_z/2)\over \gamma_1^2
+ 4\, \gamma_0^2\, |S|^2\sin^2(\theta^{\vphantom{\dagger}}_z/2)}\ .
\end{equation}
As ${\mib k}$ approaches either zone corner $K$ or $K'$, the transmission goes to zero.
This is because the chains which extend through BHG are cut and shifted at the stacking fault.
Curiously, the transmission coefficient $T$ goes to unity when $\gamma\ns_1\to 0$.
Note also that along $K$-$H$ and $K'$-$H'$ we have $S=0$ and hence $R=1$, $T=0$.
At the band edges, we have
\begin{equation}
R(\theta^{\vphantom{\dagger}}_z=0)=1 \quad,\quad
R(\theta^{\vphantom{\dagger}}_z=\pi)={\gamma_1^2\over \gamma_1^2 + 4\, \gamma_0^2\,|S|^2} \ ,
\end{equation}
with $T=1-R$ for the transmission coefficients.
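The unitarity of this ${\cal S}$-matrix, and the closed forms for $R$ and $T$ above, are easily confirmed numerically (an illustrative sketch; $S$ is an arbitrary complex in-plane value and $\theta_z=2k_z d$):

```python
import numpy as np

# Check that the step-defect S-matrix is unitary and that |r|^2 reproduces
# the closed form for R quoted in the text.  Parameters are sample values.
g0, g1 = 3.16, 0.39
Sv = 1.1*np.exp(0.7j)     # arbitrary complex value of S(k)

for mu in (+1, -1):
    for th in np.linspace(0.1, np.pi - 0.1, 7):
        D = g1*np.cos(th) + 2j*mu*np.sin(th/2)*np.sqrt(
            g1**2*np.cos(th/2)**2 + g0**2*abs(Sv)**2)
        Smat = -np.array([[2j*g0*np.conj(Sv)*np.sin(th/2), g1],
                          [g1, 2j*g0*Sv*np.sin(th/2)]])/D
        assert np.allclose(Smat.conj().T @ Smat, np.eye(2))   # unitarity
        R = g1**2/(g1**2 + 4*g0**2*abs(Sv)**2*np.sin(th/2)**2)
        assert np.isclose(abs(Smat[1, 0])**2, R)              # |r|^2 = R
```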
\subsection{Existence of Bound States}
To search for bound states, we take, for $j>0$,
\begin{align}
\psi^{\vphantom{\dagger}}_{n=-2j}&=e^{\kappa n}\,{\raise.35ex\hbox{$\chi$}} \qquad & \psi^{\vphantom{\dagger}}_{n=2j}&=\beta\,
e^{-\kappa n}\,\phi\\
\psi^{\vphantom{\dagger}}_{n=-2j+1}&=e^{\kappa n}\,\phi \qquad & \psi^{\vphantom{\dagger}}_{n=2j+1}&=\beta\,
e^{-\kappa n}\,{\raise.35ex\hbox{$\chi$}}\ ,
\end{align}
and solve for $\kappa$, $\beta$, and $E$. At the plane $l=0$ we have
\begin{equation}
\psi^{\vphantom{\dagger}}_0=\begin{pmatrix} {\raise.35ex\hbox{$\chi$}}^{\vphantom{\dagger}}_1 \\ \beta\,\phi^{\vphantom{\dagger}}_2\end{pmatrix}\ .
\end{equation}
The Schr{\"o}dinger equation for $l\ne 0$ then yields
\begin{align}
M\,{\raise.35ex\hbox{$\chi$}} + 2\gamma\ns_1\cosh(\kappa)\,\Sigma^+\phi&=0\\
M\,\phi + 2\gamma\ns_1\cosh(\kappa)\,\Sigma^-{\raise.35ex\hbox{$\chi$}}&=0\ .
\end{align}
This yields
\begin{equation}
E=-\mu\,\gamma\ns_1\cosh(\kappa)-\mu'\,\sqrt{\gamma_1^2\cosh^2(\kappa) + \gamma_0^2\,
|S|^2}\ ,
\end{equation}
where once again $\mu=\pm 1$ and $\mu'=\pm 1$. Again we have
\begin{equation}
\phi=\begin{pmatrix} -\gamma\ns_0\,S \\ E \end{pmatrix} \qquad,\qquad
{\raise.35ex\hbox{$\chi$}}=\mu\begin{pmatrix} E \\ -\gamma\ns_0\,S^* \end{pmatrix}\ .
\end{equation}
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{bsa.eps}
\caption
{\label{bsa} Energy bands and bound state dispersion for $\gamma_1 = 0.1\,\gamma_0$
for small values of $|S|$. The bulk bands $E^{\ssr(3,4)}_{\mib k}$ are depicted
by the red and blue hatched regions, respectively. The bound state is the thick black
dot-dash curve.}
\end{figure}
At $l=0$ we again have
\begin{equation}
M\,\psi^{\vphantom{\dagger}}_0 + \gamma\ns_1\,\Sigma^+\,\psi^{\vphantom{\dagger}}_{-1} + \gamma\ns_1\,\Sigma^-\,\psi^{\vphantom{\dagger}}_{+1}=0\ ,
\end{equation}
which here yields
\begin{equation}
\begin{pmatrix} E & \gamma\ns_0\,S \\ \gamma\ns_0\,S^* & E \end{pmatrix}
\begin{pmatrix} \mu \\ \beta \end{pmatrix}
+\gamma\ns_1\, e^{-\kappa}\begin{pmatrix} 1 \\ \mu\,\beta\end{pmatrix}=0\ .
\end{equation}
This yields two equations for $\beta$, which may be written as
\begin{equation}
\beta=-{\mu\, E + \gamma\ns_1 e^{-\kappa}\over\gamma\ns_0\,S}=
-{\gamma\ns_0\,S^*\over \mu\, E +\gamma\ns_1 e^{-\kappa}} \ .
\end{equation}
This fixes the energy at
\begin{equation}
E=-\mu\, \gamma\ns_1 e^{-\kappa}\pm \gamma\ns_0\,|S|\ .
\end{equation}
Thus, we have a bound state at positive energy (and a corresponding one at negative
energy) for each real, positive value of $\kappa$, which solves one of the four
equations (for $\mu$, $\mu'=\pm 1$)
\begin{align}
-\mu\,\gamma\ns_1\,e^{-\kappa} + \mu'\,\gamma\ns_0\,|S| &= -\mu\,\gamma\ns_1\,\cosh(\kappa) \nonumber\\
&\qquad+ \mu'
\sqrt{\gamma_1^2\cosh^2(\kappa) + \gamma_0^2\,|S|^2}\ .
\label{BSE}
\end{align}
We assume $\gamma\ns_0>0$. In the SWMC analysis \cite{SWMC}, one has
$\gamma\ns_1\approx -390\,$meV and $\gamma\ns_0\approx 3.2\,$eV. The vertical hopping is
negative due to the sign of the overlap of $p^{\vphantom{\dagger}}_z$ orbitals on consecutive layers.
In order to have a bound state solution, we must have $\mu\mu'={\rm sgn}(\gamma\ns_1)$, resulting in
\begin{equation}
\sqrt{\gamma_1^2\cosh^2(\kappa) + \gamma_0^2\,|S|^2} - \gamma\ns_0\,|S| =
\big|\gamma\ns_1\big|\,\sinh(\kappa)\ ,
\end{equation}
the solution of which is
\begin{equation}
\sinh(\kappa)={\big|\gamma\ns_1\big|\over 2\,\gamma\ns_0\,|S|}\equiv u\ .
\end{equation}
Thus, there are two bound states for all ${\mib k}$ in the Brillouin zone, one at positive
energy, corresponding to the choices $\mu=\mu'={\rm sgn}(\gamma\ns_1)$, and one at negative energy,
corresponding to the choices $\mu=\mu'=-{\rm sgn}(\gamma\ns_1)$. Solving for $\kappa$, we have
\begin{equation}
e^{\pm\kappa}=\pm\, u+\sqrt{1+u^2}\ .
\end{equation}
The bound state energy may now be written as
\begin{align}
E^{\vphantom{*}}_\ssr{B}&=\big|\gamma\ns_1\big|\,\bigg(u+{1\over 2u}-\sqrt{1+u^2}\bigg)\vphantom{\sum_i}\\
&={\gamma_0^3\, |S|^3\over \gamma_1^2} + {\cal O}(u^{-5})\ ,\vphantom{\sum_N^N}
\end{align}
where the expansion in the second line is for large $u$, {\it i.e.\/}\ $\gamma\ns_0\,|S|\ll \big|\gamma\ns_1\big|$.
Note that the bound state disperses as $|{\mib k}|^3$. Recall for Bernal graphite that
the dispersion is linear in $|{\mib k}|$ in the vicinity of H and quadratic elsewhere along
the $K$-$H$ spine. The length scale associated
with the bound state is $\kappa^{-1}$. For $u\to\infty$, $\kappa^{-1} \sim 1/\ln(2u)$.
Since the spectrum, including bound states, is particle-hole symmetric, we may
without loss of generality limit our attention to $E\ge 0$ states. The continuum
bands, for fixed ${\mib k}$, range over energies
\begin{align}
-\big|\gamma\ns_1\big|+\sqrt{\gamma_1^2 + \gamma_0^2 \,|S|^2} \le & E^\ssr{(3)}_{\mib k} \le \gamma\ns_0\,|S| \\
\gamma\ns_0\,|S| \le & E^\ssr{(4)}_{\mib k} \le \big|\gamma\ns_1\big|+\sqrt{\gamma_1^2 + \gamma_0^2 \,|S|^2}\ .
\end{align}
The bound state we have analyzed lives just below the bottom of the $E^\ssr{(3)}_{\mib k}$
band. The binding energy is $\Delta=E^{\ssr(3)}_\ssr{min}-E^{\vphantom{*}}_\ssr{B}$, and is given by
\begin{equation}
{\Delta(u)\over |\gamma\ns_1|}={1\over 2u}\Big(\sqrt{1+4u^2}-1\Big)+\sqrt{1+u^2}-1-u\ .
\end{equation}
In fig. \ref{bsa} we plot the bound state spectrum for the case $|\gamma\ns_1|/\gamma\ns_0=0.1$
for small values of $|S|$, {\it i.e.\/}\ close to the zone corners, where $u$ is large.
At the zone center, $|S|$ attains its maximum value of $3$, and $u$ achieves its minimum;
for reference, $u^{\vphantom{*}}_\ssr{SWMC}=0.02057$.
The binding energy vanishes in both the $u\to 0$ and $u\to\infty$ limits, as shown
in fig. \ref{delta}. The maximum of $\Delta$ occurs near $u\simeq 0.7$, where
$\Delta/|\gamma\ns_1|\simeq 0.035$, corresponding to a binding energy of approximately
$14\,$meV. In the appendix, we compute the bound state spectrum for the full
SWMC model.
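The binding energy curve can be examined directly from the expression for $\Delta(u)$ above (a numerical sketch; the grid in $u$ is arbitrary and $\gamma_1=0.39\,$eV sets the scale):

```python
import numpy as np

# Bound-state energy E_B(u) and binding energy Delta(u) from the closed forms
# in the text; locate the maximum of Delta numerically.
def E_B(u, g1=0.39):
    return abs(g1)*(u + 1/(2*u) - np.sqrt(1 + u**2))

def Delta(u, g1=0.39):
    return abs(g1)*((np.sqrt(1 + 4*u**2) - 1)/(2*u) + np.sqrt(1 + u**2) - 1 - u)

u = np.linspace(0.05, 5.0, 100001)
d = Delta(u)
i = int(np.argmax(d))
print(u[i], d[i]/0.39)    # location and height of the maximum of Delta/|gamma_1|
```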
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{delta.eps}
\caption
{\label{delta} Binding energy of the bound state {\it versus\,} $\ln(u)$, where
$u=\big|\gamma_1/2\gamma_0 S\big|$.}
\end{figure}
\section{Finite $B$ Case}
To obtain the Landau levels, we expand about the Dirac points, following the
method described in section \ref{weak}. We have $S^{\vphantom{\dagger}}_{{\mib K}+{\mib\pi}/\hbar}=-\epsilon\,b$,
with $\epsilon$ given in eqn. \ref{epse}. At $B=10\,{\rm T}$ one has $\epsilon=0.0371$. With $\gamma\ns_1=0.39\,$eV and $\gamma\ns_0=3.16\,$eV,
we have $r=0.123$ and $\epsilon/r=\sqrt{B/B^{\vphantom{*}}_1}$ where $B^{\vphantom{*}}_1=110.8\,{\rm T}$.
For physical fields, then, we have $\epsilon \,{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}\, r$. Note that one can also write
\begin{equation}
\epsilon\,\gamma\ns_0={\sqrt{2}\,\hbar\,v_{\scriptscriptstyle\rm F}\over\ell_B}\ ,
\end{equation}
where $v_{\scriptscriptstyle\rm F}=\sqrt{3}\,\gamma\ns_0\,a/2\hbar$ is the Fermi velocity ($a=2.46\,$\AA\ is the
lattice constant in the hexagonal planes) and $\ell\ns_B=\sqrt{\hbar c/eB}$ is the magnetic length.
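The quoted field scales follow from this relation; a minimal numerical sketch (assuming $\ell^{\vphantom{*}}_B=256.56\,$\AA$/\sqrt{B[{\rm T}]}$ together with the quoted $a$, $\gamma_0$, and $\gamma_1$):

```python
import numpy as np

# Back-of-envelope check of the quoted numbers.  The magnetic length
# ell_B = 256.56 A / sqrt(B[T]) is an assumed standard value.
a = 2.46                        # in-plane lattice constant, Angstrom
g0, g1 = 3.16, 0.39             # eV
r = g1/g0

def eps(B):
    ell_B = 256.56/np.sqrt(B)   # magnetic length in Angstrom
    return np.sqrt(1.5)*a/ell_B # epsilon*gamma_0 = sqrt(2) hbar v_F / ell_B

B1 = 10.0*(r/eps(10.0))**2      # field at which epsilon = r
print(eps(10.0), r, B1)
```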
\subsection{Bernal Stacking and Landau Levels}
We define the operator-valued matrix
\begin{equation}
{\widehat M}=\begin{pmatrix} E & \epsilon\,\gamma\ns_0\,b \\ \epsilon\,\gamma\ns_0\,b^\dagger & E \end{pmatrix}\ .
\end{equation}
For perfect Bernal stacking, we have
\begin{align}
{\widehat M}\,\psi^{\vphantom{\dagger}}_{2j} + \gamma\ns_1\,\Sigma^+\big(\psi^{\vphantom{\dagger}}_{2j-1}+\psi^{\vphantom{\dagger}}_{2j+1}\big)&=0\\
{\widehat M}\,\psi^{\vphantom{\dagger}}_{2j+1} + \gamma\ns_1\,\Sigma^-\big(\psi^{\vphantom{\dagger}}_{2j}+\psi^{\vphantom{\dagger}}_{2j+2}\big)&=0\ .
\end{align}
We now write the wavefunction in terms of right and left moving components:
\begin{align}
\psi^{\vphantom{\dagger}}_{2j}&=\big( I\,e^{iq j} + O'\,e^{-iq j}\big)\begin{pmatrix} \alpha\,\ket{n} \\
\beta\,\ket{n+1}\end{pmatrix}\\
\psi^{\vphantom{\dagger}}_{2j+1}&=\big( I\,e^{iq (j+{1\over 2})} + O'\,e^{-iq (j+{1\over 2})}\big)
\begin{pmatrix} x\,\ket{n-1} \\ y\,\ket{n}\end{pmatrix}\ ,
\label{abxy}
\end{align}
where we assume $n>0$. We therefore have
\begin{align}
M^{\vphantom{\dagger}}_n \begin{pmatrix} x \\ y \end{pmatrix}
+2\gamma\ns_1\cos(q/2)\,\Sigma^-\begin{pmatrix} \alpha \\ \beta \end{pmatrix}&=0\\
M^{\vphantom{\dagger}}_{n+1} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}
+2\gamma\ns_1\cos(q/2)\,\Sigma^+\begin{pmatrix} x \\ y \end{pmatrix}&=0
\end{align}
where
\begin{equation}
M^{\vphantom{\dagger}}_n\equiv \begin{pmatrix} E & \sqrt{n}\>\epsilon\,\gamma\ns_0\\
\sqrt{n}\>\epsilon\,\gamma\ns_0 & E \end{pmatrix}\ .
\end{equation}
This leads to
\begin{align}
P^{\vphantom{\dagger}}_n(E)&={\rm det}\,\Big[M^{\vphantom{\dagger}}_{n+1}-4\gamma_1^2\cos^2(q/2)\,\Sigma^+\,
M^{-1}_n\,\Sigma^-\Big]\nonumber\\
&=E^2-(n+1)\,\epsilon^2\,\gamma_0^2 - {4 \gamma_1^2E^2\cos^2(q/2)\over
E^2-n\,\epsilon^2\,\gamma_0^2}\ .
\end{align}
Setting $P^{\vphantom{\dagger}}_n(E)=0$ yields the spectrum $E=E^{\vphantom{\dagger}}_n(q)$ of Bernal
hexagonal graphite:
\begin{align}
&{E_{n,\pm}^2(q)\over \gamma_0^2}=(n+\frac{1}{2})\,\epsilon^2 + 2r^2\cos^2(q/2)\\
&\quad\pm\sqrt{4 r^4\cos^4(q/2) + (4n+2)\,\epsilon^2\,r^2\cos^2(q/2)
+\frac{1}{4}\,\epsilon^4}\ ,\nonumber
\end{align}
where $r=\gamma\ns_1/\gamma\ns_0$. Expanding for small $\epsilon/r$, we have
\begin{align}
E^{\vphantom{\dagger}}_{n,-}&\in\bigg[\sqrt{n(n+1)}\ {\epsilon^2\over 2r}\ ,
\ \epsilon\sqrt{n}\bigg]\\
E^{\vphantom{\dagger}}_{n,+}&\in \bigg[\sqrt{n+1} \ \epsilon\ ,\ 2r+\big(n+\frac{1}{2}\big)\,
{\epsilon^2\over 2r} + \ldots\bigg]\ .
\end{align}
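One may check numerically that $E_{n,\pm}(q)$ indeed solves $P_n(E)=0$ (an illustrative sketch, written in units where $\gamma_0=1$):

```python
import numpy as np

# Verify that E^2_{n,+-}(q) satisfies the secular equation, in units gamma_0 = 1:
# (E^2 - (n+1) eps^2)(E^2 - n eps^2) = 4 r^2 E^2 cos^2(q/2).
eps, r = 0.0371, 0.123

def E2(n, q, s):
    c2 = np.cos(q/2)**2
    return (n + 0.5)*eps**2 + 2*r**2*c2 + s*np.sqrt(
        4*r**4*c2**2 + (4*n + 2)*eps**2*r**2*c2 + 0.25*eps**4)

for n in range(5):
    for q in np.linspace(0, np.pi, 5):
        for s in (+1, -1):
            x = E2(n, q, s)
            lhs = (x - (n + 1)*eps**2)*(x - n*eps**2)
            rhs = 4*r**2*x*np.cos(q/2)**2
            assert abs(lhs - rhs) < 1e-12
```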
\subsection{Zero Modes}
The case $n=0$ must be considered separately. Consider the wavefunction
\begin{equation}
\psi^{\vphantom{\dagger}}_{2j}=\begin{pmatrix} 0 \\ \beta\,\ket{0} \end{pmatrix}\,\delta^{\vphantom{\dagger}}_{j,J}
\quad,\quad
\psi_{2j+1}=\begin{pmatrix} 0 \\ 0 \end{pmatrix}\ .
\end{equation}
This is an $E=0$ eigenstate for any $J$. It describes a state localized on a single
plane.
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{GLLs.eps}
\caption
{\label{GLLs} Landau levels in graphite. Subbands $E_{n=1,+}$ (red),
$E_{n=1,-}$ (blue), and $E_{n=0,+}$ (green) are shown. The zero modes
are shown in black.}
\end{figure}
We can find more solutions by writing
\begin{align}
\psi^{\vphantom{\dagger}}_{2j}&=\begin{pmatrix} \alpha\,\ket{0} \\ \beta\,\ket{1} \end{pmatrix} e^{iqj} \\
\psi^{\vphantom{\dagger}}_{2j+1}&=\begin{pmatrix} 0 \\ y\,\ket{0} \end{pmatrix}e^{iq (j+{1\over 2})} \ .
\end{align}
The Schr{\"o}dinger equation then requires
\begin{equation}
\begin{pmatrix} E & \epsilon\,\gamma\ns_0 \\ \epsilon\,\gamma\ns_0 & E \end{pmatrix}
\begin{pmatrix} \alpha \\ \beta \end{pmatrix} + 2\gamma\ns_1\cos(q/2)
\begin{pmatrix} y \\ 0 \end{pmatrix}=0
\end{equation}
on even planes and
\begin{equation}
E\begin{pmatrix} 0 \\ y \end{pmatrix} + 2\gamma\ns_1\cos(q/2)
\begin{pmatrix} 0 \\ \alpha \end{pmatrix}=0
\end{equation}
on odd planes. Thus, we have three equations for the three amplitudes $\alpha$, $\beta$, and $y$:
\begin{align}
0&=E\,\alpha + \epsilon\,\gamma\ns_0\,\beta + 2\gamma\ns_1\cos(q/2)\,y \\
0&=E\,\beta + \epsilon\,\gamma\ns_0\,\alpha \\
0&=E\, y + 2\gamma\ns_1\cos(q/2)\,\alpha\ .
\end{align}
We immediately see that $E=0$ is an eigenvalue, with eigenvector
\begin{equation}
\begin{pmatrix} \alpha \\ \beta \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ -2\gamma\ns_1\cos(q/2) \\
\epsilon\,\gamma\ns_0 \end{pmatrix}\ .
\end{equation}
If we Fourier transform this solution, multiplying by $e^{-iq(J+{1\over 2})}$ and summing
over $q$, we find a purely localized state, with
\begin{align}
\psi^{\vphantom{\dagger}}_{2J}&=-\gamma\ns_1\begin{pmatrix} 0 \\ \ket{1} \end{pmatrix} \\
\psi^{\vphantom{\dagger}}_{2J+1}&=\epsilon\,\gamma\ns_0\begin{pmatrix} 0 \\ \ket{0} \end{pmatrix} \\
\psi^{\vphantom{\dagger}}_{2J+2}&=-\gamma\ns_1\begin{pmatrix} 0 \\ \ket{1} \end{pmatrix} \ ,
\end{align}
with all other $\psi^{\vphantom{\dagger}}_n=0$. This zero mode is localized on three layers.
The remaining two solutions are easily found to be
\begin{equation}
\begin{pmatrix} \alpha \\ \beta \\ y \end{pmatrix} =
\begin{pmatrix} E \\ -\epsilon\,\gamma\ns_0 \\ -2\gamma\ns_1\cos(q/2) \end{pmatrix}\ ,
\end{equation}
with $E=E^{\vphantom{\dagger}}_{0,\pm}\equiv\pm\sqrt{\epsilon^2 \gamma_0^2 + 4 \gamma_1^2\cos^2(q/2)}$.
These solutions are wave-like and disperse with $q$.
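The $3\times 3$ structure of the $n=0$ problem is compact enough to verify directly (a sketch with sample numbers; the values of $\epsilon\gamma_0$ and $q$ are arbitrary):

```python
import numpy as np

# The three linear equations for (alpha, beta, y) take the form -E v = H v with
# the symmetric matrix H below; its eigenvalues are 0 and +-sqrt((eps g0)^2 + c^2),
# reproducing the zero mode and the dispersive solutions E_{0,+-}.
eg0, g1, q = 0.117, 0.39, 0.8   # eps*gamma_0 (eV), gamma_1 (eV), wavevector q
c = 2*g1*np.cos(q/2)
H = np.array([[0.0, eg0, c],
              [eg0, 0.0, 0.0],
              [c,   0.0, 0.0]])
evals = np.sort(np.linalg.eigvalsh(H))
E0 = np.sqrt(eg0**2 + c**2)
assert np.allclose(evals, [-E0, 0.0, E0])
# the quoted E = 0 eigenvector (0, -2 gamma_1 cos(q/2), eps gamma_0):
assert np.allclose(H @ np.array([0.0, -c, eg0]), 0)
```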
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{LL_step.eps}
\caption
{\label{LL_step} Landau level indices for scattering at a stacking fault in Bernal
graphite.}
\end{figure}
\subsection{Stacking Fault}
For the system with a single step stacking fault, the situation is as depicted in
fig. \ref{LL_step}. We then swap the notation for even and odd planes on the right
half of the system (layer indices $l>0$) with respect to eqn. \ref{abxy}, and introduce
wavevectors $q\nd_{\ssr{L}\vphantom{\dagger}}$ and $q\nd_{\ssr{R}\vphantom{\dagger}}$ for the left and right half-systems.
We must then match the energies on left and right sides of the fault:
\begin{equation}
E^{\vphantom{\dagger}}_n(q\nd_{\ssr{L}\vphantom{\dagger}})=E^{\vphantom{\dagger}}_{n+1}(q\nd_{\ssr{R}\vphantom{\dagger}})\ .
\end{equation}
To identify the bound states, we write the wavefunction for $l>0$ as
\begin{equation}
\psi^{\vphantom{\dagger}}_{2j-1}=\begin{pmatrix} \alpha^{\vphantom{\dagger}}_j\,\ket{n+1} \\ \beta^{\vphantom{\dagger}}_j\,\ket{n+2}
\end{pmatrix} \quad,\quad
\psi^{\vphantom{\dagger}}_{2j}=\begin{pmatrix} x^{\vphantom{\dagger}}_j\,\ket{n} \\ y^{\vphantom{\dagger}}_j\ket{n+1} \end{pmatrix}
\end{equation}
and for $l<0$ as
\begin{equation}
\psi^{\vphantom{\dagger}}_{-(2j-1)}=\begin{pmatrix} {\bar\alpha}^{\vphantom{\dagger}}_j\,\ket{n-1} \\ {\bar\beta}^{\vphantom{\dagger}}_j\,\ket{n}
\end{pmatrix} \quad,\quad
\psi^{\vphantom{\dagger}}_{-2j}=\begin{pmatrix} {\bar x}^{\vphantom{\dagger}}_j\,\ket{n} \\ {\bar y}^{\vphantom{\dagger}}_j\ket{n+1} \end{pmatrix}\ .
\end{equation}
At $l=0$ we write
\begin{equation}
\psi^{\vphantom{\dagger}}_0=\begin{pmatrix} {\bar x}^{\vphantom{\dagger}}_0\,\ket{n} \\ y^{\vphantom{\dagger}}_0\ket{n+1}\end{pmatrix}\ .
\end{equation}
The Schr{\"o}dinger equation, evaluated for both even and odd planes with $l>0$
and $l<0$ now gives eight relations among the eight sets of coefficients
$\big\{\alpha^{\vphantom{\dagger}}_j,\beta^{\vphantom{\dagger}}_j,x^{\vphantom{\dagger}}_j,y^{\vphantom{\dagger}}_j,{\bar\alpha}^{\vphantom{\dagger}}_j,{\bar\beta}^{\vphantom{\dagger}}_j,{\bar x}^{\vphantom{\dagger}}_j,{\bar y}^{\vphantom{\dagger}}_j\big\}$,
expressible as
\begin{equation}
\begin{pmatrix} E & \sqrt{n+1}\ \epsilon\,\gamma\ns_0 \\ \sqrt{n+1}\ \epsilon\,\gamma\ns_0 & E\end{pmatrix}
\begin{pmatrix} x^{\vphantom{*}}_j \\ y^{\vphantom{\dagger}}_j \end{pmatrix}
+ \gamma\ns_1\begin{pmatrix} 0 \\ \alpha^{\vphantom{\dagger}}_j+\alpha^{\vphantom{\dagger}}_{j+1} \end{pmatrix}=0
\end{equation}
and
\begin{equation}
\begin{pmatrix} E & \sqrt{n+2}\ \epsilon\,\gamma\ns_0 \\ \sqrt{n+2}\ \epsilon\,\gamma\ns_0 & E\end{pmatrix}
\begin{pmatrix} \alpha^{\vphantom{*}}_j \\ \beta^{\vphantom{\dagger}}_j \end{pmatrix}
+ \gamma\ns_1\begin{pmatrix} y^{\vphantom{*}}_{j-1}+y^{\vphantom{*}}_j \\ 0 \end{pmatrix}=0
\end{equation}
and
\begin{equation}
\begin{pmatrix} E & \sqrt{n+1}\ \epsilon\,\gamma\ns_0 \\ \sqrt{n+1}\ \epsilon\,\gamma\ns_0 & E\end{pmatrix}
\begin{pmatrix} {\bar x}^{\vphantom{*}}_j \\ {\bar y}^{\vphantom{\dagger}}_j \end{pmatrix}
+ \gamma\ns_1\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_j+{\bar\beta}^{\vphantom{\dagger}}_{j+1} \\ 0 \end{pmatrix}=0
\end{equation}
and
\begin{equation}
\begin{pmatrix} E & \sqrt{n}\ \epsilon\,\gamma\ns_0 \\ \sqrt{n}\ \epsilon\,\gamma\ns_0 & E\end{pmatrix}
\begin{pmatrix} {\bar\alpha}^{\vphantom{*}}_j \\ {\bar\beta}^{\vphantom{\dagger}}_j \end{pmatrix}
+ \gamma\ns_1\begin{pmatrix} 0 \\ {\bar x}^{\vphantom{\dagger}}_{j-1}+{\bar x}^{\vphantom{\dagger}}_j \end{pmatrix}=0\ .
\end{equation}
We can use these equations to eliminate the four sets of coefficients
$\{\beta^{\vphantom{\dagger}}_j,x^{\vphantom{\dagger}}_j,{\bar\alpha}^{\vphantom{\dagger}}_j,{\bar y}^{\vphantom{\dagger}}_j\}$:
\begin{align}
\beta^{\vphantom{\dagger}}_j&=-\sqrt{n+2}\ \epsilon\,\gamma\ns_0\,E^{-1}\,\alpha^{\vphantom{\dagger}}_j \\
{\bar y}^{\vphantom{\dagger}}_j&=-\sqrt{n+1}\ \epsilon\,\gamma\ns_0\,E^{-1}\,{\bar x}^{\vphantom{\dagger}}_j \\
x^{\vphantom{\dagger}}_j&=-\sqrt{n+1}\ \epsilon\,\gamma\ns_0\,E^{-1}\,y^{\vphantom{\dagger}}_j \\
{\bar\alpha}^{\vphantom{\dagger}}_j&=-\sqrt{n}\ \epsilon\,\gamma\ns_0\,E^{-1}\,{\bar\beta}^{\vphantom{\dagger}}_j\ .
\end{align}
We then obtain
\begin{align}
0&=R^{\vphantom{\dagger}}_{n+1}(E)\,y^{\vphantom{\dagger}}_j + \alpha^{\vphantom{\dagger}}_j+\alpha^{\vphantom{\dagger}}_{j+1}\\
0&=R^{\vphantom{\dagger}}_{n+2}(E)\,\alpha^{\vphantom{\dagger}}_j + y^{\vphantom{\dagger}}_{j-1}+y^{\vphantom{\dagger}}_j\\
0&=R^{\vphantom{\dagger}}_{n+1}(E)\,{\bar x}^{\vphantom{\dagger}}_j + {\bar\beta}^{\vphantom{\dagger}}_j+{\bar\beta}^{\vphantom{\dagger}}_{j+1}\\
0&=R^{\vphantom{\dagger}}_{n}(E)\,{\bar\beta}^{\vphantom{\dagger}}_j + {\bar x}^{\vphantom{\dagger}}_{j-1}+{\bar x}^{\vphantom{\dagger}}_j\ ,
\end{align}
where
\begin{equation}
R^{\vphantom{\dagger}}_n(E)\equiv {E^2-E_n^2\over E\,\gamma\ns_1}\ ,
\end{equation}
with $E^2_n\equiv n\,\epsilon^2\,\gamma_0^2$.
We then have
\begin{equation}
\begin{pmatrix} \alpha^{\vphantom{\dagger}}_{j+1} \\ y^{\vphantom{\dagger}}_j \end{pmatrix} = \big(K_{n+1}\big)^j
\begin{pmatrix} \alpha^{\vphantom{\dagger}}_1 \\ y^{\vphantom{\dagger}}_0 \end{pmatrix}
\end{equation}
and
\begin{equation}
\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_{j+1} \\ {\bar x}^{\vphantom{\dagger}}_j \end{pmatrix} = \big(K_n\big)^j
\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_1 \\ {\bar x}^{\vphantom{\dagger}}_0 \end{pmatrix}\ ,
\label{soln}
\end{equation}
where
\begin{equation}
K^{\vphantom{\dagger}}_n(E)=\begin{pmatrix} R^{\vphantom{\dagger}}_n(E)\,R^{\vphantom{\dagger}}_{n+1}(E) -1 & R^{\vphantom{\dagger}}_n(E) \\
-R^{\vphantom{\dagger}}_{n+1}(E) & -1 \end{pmatrix}\ .
\end{equation}
Note that ${\rm det}\,K^{\vphantom{\dagger}}_n(E)=1$, and that the characteristic polynomial
${\rm det}\,(\lambda-K^{\vphantom{\dagger}}_n)$ is real for real $\lambda$. It is easy to see
that the eigenvalues of $K^{\vphantom{\dagger}}_n(E)$ form a complex conjugate pair $e^{\pm i\theta}$
if the energy $E$ satisfies the condition
$\big|{\rm Tr}\,K^{\vphantom{\dagger}}_n(E)\big| \le 2$, or
\begin{equation}
0\le R^{\vphantom{\dagger}}_n(E)\,R^{\vphantom{\dagger}}_{n+1}(E) \le 4\ .
\end{equation}
This is the condition that $E$ lies within one of four energy bands.
The roots of $R^{\vphantom{\dagger}}_n(E)\,R^{\vphantom{\dagger}}_{n+1}(E)=0$ lie at $E^2=E_n^2$ and
$E^2=E_{n+1}^2$, while the roots of $R^{\vphantom{\dagger}}_n(E)\,R^{\vphantom{\dagger}}_{n+1}(E)=4$
lie at $E^2=E_{n,-}^2$ and $E^2=E_{n+1,+}^2$, where
\begin{align}
E_{n,\pm}^2&=(n+\frac{1}{2})\,\epsilon^2\, \gamma_0^2 + 2\,\gamma_1^2\\
&\qquad\qquad\pm
\sqrt{4\, \gamma_1^4 + (4n+2)\, \epsilon^2\,\gamma_0^2\,\gamma_1^2 + \frac{1}{4} \,\epsilon^4\,\gamma_0^4}\ .\nonumber
\end{align}
The bands are then given by
\begin{equation}
E_{n,-}^2 \le E^2 \le E_n^2 \qquad,\qquad
E_{n+1}^2 \le E^2 \le E_{n,+}^2\ .
\end{equation}
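Both properties of the transfer matrix quoted above, ${\rm det}\,K^{\vphantom{\dagger}}_n(E)=1$ and the band condition $\big|{\rm Tr}\,K^{\vphantom{\dagger}}_n(E)\big|\le 2 \Leftrightarrow 0\le R^{\vphantom{\dagger}}_n R^{\vphantom{\dagger}}_{n+1}\le 4$, are simple to confirm numerically (a sketch with sample parameters):

```python
import numpy as np

# Transfer-matrix check: det K_n(E) = 1 for all E, and |Tr K_n| <= 2
# exactly when 0 <= R_n R_{n+1} <= 4.  Parameters are sample values (eV).
eps, g0, g1 = 0.0371, 3.16, 0.39

def R(n, E):
    return (E**2 - n*eps**2*g0**2)/(E*g1)

def K(n, E):
    return np.array([[R(n, E)*R(n + 1, E) - 1, R(n, E)],
                     [-R(n + 1, E), -1.0]])

n = 2
for E in np.linspace(0.05, 1.0, 50):
    Kn = K(n, E)
    assert abs(np.linalg.det(Kn) - 1) < 1e-9
    prod = R(n, E)*R(n + 1, E)
    assert (abs(np.trace(Kn)) <= 2) == (0 <= prod <= 4)
```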
In the limit $\sigma\equiv \epsilon^2\,\gamma_0^2/\gamma_1^2\ll 1$, we can expand
and write
\begin{align}
E_{n,-}^2&\simeq n(n+1)\,{\epsilon^4\,\gamma_0^4\over 4\,\gamma_1^2}\\
E_{n,+}^2&\simeq 4\,\gamma_1^2 + (2n+1)\,\epsilon^2\,\gamma_0^2\ .
\end{align}
At plane $l=0$ the Schr{\"o}dinger equation yields
\begin{equation}
\begin{pmatrix} \gamma\ns_1 & E \\ 0 & \sqrt{n+1}\ \epsilon\,\gamma\ns_0\end{pmatrix}
\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_1 \\ {\bar x}^{\vphantom{\dagger}}_0\end{pmatrix} +
\begin{pmatrix} 0 & \sqrt{n+1}\ \epsilon\,\gamma\ns_0 \\ \gamma\ns_1 & E \\ \end{pmatrix}
\begin{pmatrix} \alpha^{\vphantom{\dagger}}_1 \\ y^{\vphantom{\dagger}}_0\end{pmatrix} =0\ .
\label{central}
\end{equation}
\subsection{Scattering Matrix}
If both $\big|{\rm Tr}\,K^{\vphantom{\dagger}}_n(E)\big|\le 2$ and $\big|{\rm Tr}\,K^{\vphantom{\dagger}}_{n+1}(E)\big|\le 2$,
then we can write
\begin{equation}
\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_1 \\ {\bar x}^{\vphantom{\dagger}}_0 \end{pmatrix} = I\,{\rm\Psi}_-^{(n)} +
O'\,{\rm\Psi}_+^{(n)}
\label{left}
\end{equation}
and
\begin{equation}
\begin{pmatrix} \alpha^{\vphantom{\dagger}}_1 \\ y^{\vphantom{\dagger}}_0 \end{pmatrix} = I'\,{\rm\Psi}_-^{(n+1)} +
O\,{\rm\Psi}_+^{(n+1)},
\label{right}
\end{equation}
where
\begin{equation}
K^{\vphantom{\dagger}}_n(E)\,{\rm\Psi}_\pm^{(n)}=e^{\pm i\theta^{\vphantom{\dagger}}_n}\,{\rm\Psi}_\pm^{(n)}\ .
\end{equation}
Then we have
\begin{align}
\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_{j+1} \\ {\bar x}^{\vphantom{\dagger}}_j \end{pmatrix} &=
I\,e^{-ij\theta^{\vphantom{\dagger}}_n}\,{\rm\Psi}_-^{(n)} + O'\,e^{ij\theta^{\vphantom{\dagger}}_n}\,{\rm\Psi}_+^{(n)} \\
\begin{pmatrix} \alpha^{\vphantom{\dagger}}_{j+1} \\ y^{\vphantom{\dagger}}_j \end{pmatrix} &=
I'\,e^{-ij\theta^{\vphantom{\dagger}}_{n+1}}\,{\rm\Psi}_-^{(n+1)} + O\,e^{ij\theta^{\vphantom{\dagger}}_{n+1}}\,{\rm\Psi}_+^{(n+1)}\ .
\end{align}
The ${\cal S}$-matrix, which relates incoming to outgoing flux amplitudes, is then obtained from eqns.
\ref{central}, \ref{left}, and \ref{right}, upon replacing $I\to v^{1/2}_\ssr{L}\,{\cal I}$, $O' \to {\cal O}'\, v^{1/2}_\ssr{L}$,
$I'\to v^{1/2}_\ssr{R}\,{\cal I}'$, and $O \to {\cal O}\, v^{1/2}_\ssr{R}$, where $v^{\vphantom{*}}_\ssr{L}={\partial} E_n(q^{\vphantom{*}}_\ssr{L})/
{\partial} q^{\vphantom{*}}_\ssr{L}$ and $v^{\vphantom{*}}_\ssr{R}={\partial} E_{n+1}(q^{\vphantom{*}}_\ssr{R})/{\partial} q^{\vphantom{*}}_\ssr{R}$\ .
\begin{figure}[!t]
\centering
\includegraphics[width=8cm]{bsb.eps}
\caption
{\label{bsb} Bulk energy bands (shaded and hatched regions) and bound states
(magenta curves) {\it versus\/} magnetic field for $\gamma_0=3.16\,$eV and
$\gamma_1=-0.39\,$eV (tight binding; nearest neighbor hopping only).
The lowest energy bound state merges into the band
continuum at $B\approx 15\,$T. The other bound states remain sharp over the
energy range shown and do not mix with the lowest bulk band. }
\end{figure}
\subsection{Bound States}
If a state is evanescent on both sides of the stacking fault, we must have that
both $\big|{\rm Tr}\,K^{\vphantom{\dagger}}_n(E)\big| > 2$ and $\big|{\rm Tr}\,K^{\vphantom{\dagger}}_{n+1}(E)\big| > 2$.
The eigenvalues of $K^{\vphantom{\dagger}}_n(E)$ are given by
\begin{equation}
\Lambda^{\vphantom{\dagger}}_{n,\pm}=\frac{1}{2}\,\tau^{\vphantom{\dagger}}_n \pm\frac{1}{2}\sqrt{\tau_n^2-4}\ ,
\end{equation}
where
\begin{equation}
\tau^{\vphantom{\dagger}}_n(E)\equiv {\rm Tr}\,K^{\vphantom{\dagger}}_n(E)=R^{\vphantom{\dagger}}_n(E)\,R^{\vphantom{\dagger}}_{n+1}(E)-2\ .
\end{equation}
In order that the solution in eqn. \ref{soln} not blow up for $n\to\pm\infty$, we
must require that $\begin{pmatrix} \alpha^{\vphantom{\dagger}}_1 \\ y^{\vphantom{\dagger}}_0\end{pmatrix}$
and $\begin{pmatrix} {\bar\beta}^{\vphantom{\dagger}}_1 \\ {\bar x}^{\vphantom{\dagger}}_0\end{pmatrix}$ have no weight in the
$|\Lambda|>1$ eigenspaces for $K^{\vphantom{\dagger}}_n(E)$ and $K^{\vphantom{\dagger}}_{n+1}(E)$, respectively. This
means
\begin{align}
R^{\vphantom{\dagger}}_{n+2}(E)\,\alpha^{\vphantom{\dagger}}_1 + y^{\vphantom{\dagger}}_0&=-\Lambda^<_{n+1}\,y^{\vphantom{\dagger}}_0\vphantom{\sum_i}\\
R^{\vphantom{\dagger}}_{n+1}(E)\,{\bar\beta}^{\vphantom{\dagger}}_1 + {\bar x}^{\vphantom{\dagger}}_0&=-\Lambda^<_{n}\,{\bar x}^{\vphantom{\dagger}}_0\ ,
\end{align}
where $|\Lambda^<|<1$.
When we combine these equations with those in eqn. \ref{central}, we obtain
\begin{equation}
{\cal M}\begin{pmatrix} \alpha^{\vphantom{\dagger}}_1 \\ y^{\vphantom{\dagger}}_0 \\ {\bar\beta}^{\vphantom{\dagger}}_1 \\ {\bar x}^{\vphantom{\dagger}}_0
\end{pmatrix}=0\ ,
\end{equation}
where
\begin{equation}
{\cal M}=\begin{pmatrix}
R^{\vphantom{\dagger}}_{n+2} & 1+\Lambda^<_{n+1} & 0 & 0 \\
\gamma\ns_1 & E & 0 & \sqrt{n+1}\ \epsilon\,\gamma\ns_0 \\
0 & 0 & R^{\vphantom{\dagger}}_{n+1} & 1+\Lambda^<_{n} \\
0 & \sqrt{n+1}\ \epsilon\,\gamma\ns_0 & \gamma\ns_1 & E \end{pmatrix}\ .
\end{equation}
A solution requires $D(E)={\rm det}\,{\cal M}(E)=0$. We have
\begin{align}
D(E)&=\big[E\,R^{\vphantom{\dagger}}_{n+2}-\gamma\ns_1(1+\Lambda^<_{n+1})\big]\,
\big[E\,R^{\vphantom{\dagger}}_{n+1}-\gamma\ns_1(1+\Lambda^<_{n})\big]\nonumber\\
&\qquad-(n+1)\,\epsilon^2\,\gamma_0^2\,R^{\vphantom{\dagger}}_{n+1}\,R^{\vphantom{\dagger}}_{n+2}\ .
\end{align}
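The factorized form of $D(E)$ is easy to confirm numerically. The stdlib-only sketch below plugs random stand-in values for $R_{n+1}$, $R_{n+2}$, $\Lambda^<_n$, $\Lambda^<_{n+1}$, $\gamma_1$, $E$, and $g\equiv\sqrt{n+1}\,\epsilon\,\gamma_0$ into the matrix ${\cal M}$ and checks its determinant against the expression above.

```python
import random

def det(m):
    """Cofactor expansion along the first row (fine for a 4x4)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1)**j * a * det(minor)
    return total

random.seed(0)
for _ in range(5):
    # random stand-ins for E, gamma_1, g, R_{n+1}, R_{n+2}, Lambda_n, Lambda_{n+1}
    E, g1, g, R1, R2, L0, L1 = (random.uniform(-2, 2) for _ in range(7))
    M = [[R2, 1 + L1, 0, 0],
         [g1, E, 0, g],
         [0, 0, R1, 1 + L0],
         [0, g, g1, E]]
    D = (E*R2 - g1*(1 + L1)) * (E*R1 - g1*(1 + L0)) - g**2 * R1 * R2
    assert abs(det(M) - D) < 1e-9
```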
Let us look for a bound state with energy $E$ which is parametrically (in $\sigma$)
smaller than both $\gamma\ns_1$ and $\epsilon\,\gamma\ns_0$. Then $R^{\vphantom{\dagger}}_n(E)\simeq -E_n^2/(E\,\gamma\ns_1)$, from
which we obtain $\Lambda^<_n \simeq \big(R^{\vphantom{\dagger}}_n\,R^{\vphantom{\dagger}}_{n+1}\big)^{-1}$. We then find
\begin{equation}
D(E)\approx \gamma_1^2-(n+1)^2(n+2)\,{\epsilon^6\,\gamma_0^6\over \gamma_1^2\,E^2}\ .
\end{equation}
Setting $D(E)=0$ yields the bound state energy,
\begin{equation}
E^2=(n+1)^2(n+2)\,{\epsilon^6\,\gamma_0^6\over \gamma_1^4}\ .
\end{equation}
Thus, the bound state energy is proportional to $|B|^{3/2}$.
In fig. \ref{bsb}, we plot the lowest ten bound state energies {\it versus\/} magnetic field.
\section{Surface Spectroscopy of Buried Stacking Faults}
Our previous results for the transmission through a stacking defect
suggest that these defects are very effective in decoupling graphene
stacks. We analyze now the density of states at a graphite surface
in the presence of a stacking defect a few layers below the surface.
The stacking sequence is (AB)$^{\vphantom{*}}_{N/2}$CBCB$\cdots$. The number of layers between
the surface and the defect is $N$.
\begin{figure}
\begin{center}
\includegraphics*[width=8cm,angle=0]{green20.eps}
\includegraphics*[width=8cm,angle=0]{green100.eps}
\end{center}
\caption{(Color online). Left: Density of states for the two
inequivalent sites of a graphite surface with a stacking defect 20
layers below the surface. Triangles (red) give the density of states
at the site with a nearest neighbor in the layer below. Squares
(green) give the density of states at the site without nearest
neighbors in the layer below. Right: as in the left panel, with a
defect 100 layers below the surface. }
\label{green0}
\end{figure}
The system can be separated into a perfect semi-infinite graphite
sample coupled to the defect layer, and $N$ layers between the
defect and the surface. We will only include the parameters
$\gamma\ns_0$ and $\gamma\ns_1$. The semi-infinite portion can be integrated out.
The site of the defect layer connected to it acquires a self energy:
\begin{equation}
\Sigma_0 ( \omega ) = {2 \gamma_1^2\over \left( \omega -
{|\gamma\ns_0\,S|^2\over \omega} \right) - \sqrt{ \left( \omega -
{|\gamma\ns_0\,S|^2\over \omega} \right)^2 - 4 \gamma_1^2}}
\label{semiinfinite}
\end{equation}
We now integrate out this site, leading to the self energy:
\begin{equation}
\Sigma_1 ( \omega ) ={|\gamma\ns_0\,S|^2\over\omega - \Sigma_0 (
\omega ) }
\end{equation}
The procedure can be iterated, leading to new self energies for sites
$2, 3, \ldots, N$, resulting in the hierarchy
\begin{equation}
\Sigma_{i+1} ( \omega ) = {|\gamma\ns_0\,S|^2\over\omega} +
{\gamma_1^2\over \omega - \Sigma_i ( \omega )}
\end{equation}
The Green's functions at the two inequivalent sites of the surface
layer ($N$) are:
\begin{equation}
G_{\rm u}^{N} ( \omega ) = {1\over \omega -
{|\gamma\ns_0\,S|^2\over \omega} - {\gamma_1^2\over \omega - \Sigma^{\vphantom{\dagger}}_{N-1} (
\omega )}}
\end{equation}
and
\begin{equation}
G_{\rm v}^{N} ( \omega ) = {1 \over \omega -
{|\gamma\ns_0\,S|^2\over \omega - {\gamma_1^2\over \omega - \Sigma^{\vphantom{\dagger}}_{N-1} (
\omega )}}} \label{green_s}
\end{equation}
We show in fig. \ref{green0} the surface
density of states when such a defect lies twenty and one hundred layers
below the surface, obtained by integrating the
imaginary part of the Green's functions in eq.~\ref{green_s}
over the in-plane component ${\mib k}_\parallel$ of the wavevector.
\begin{figure}
\begin{center}
\includegraphics*[width=6cm,angle=0]{green_step_1.eps}
\includegraphics*[width=6cm,angle=0]{green_step_2.eps}
\end{center}
\caption{(Color online). Surface density of states for a
semiinfinite stack with a defect ten layers below the surface. The
Landau level index is $n=2$, and the fields studied are B = 1 T
(red) and B = 10 T (blue).
Left: Sublattice with a nearest neighbor in the contiguous layer.
Right: Sublattice without a neighbor in the contiguous layer.}
\label{green}
\end{figure}
The density of states shows a number of resonances, which are
smoothed out when the number of layers between the defect and the
surface is large. For $N \gg 1$, we recover the analytical results
in ref. \cite{GNP06}. These results are consistent with the analysis in
the previous sections, which show that the transmission through the
defect is strongly suppressed. The layers between the defect and the
surface become effectively decoupled from the bulk of the system.
\begin{figure}
\begin{center}
\includegraphics*[width=6cm,angle=0]{green_3D_1.eps}
\includegraphics*[width=6cm,angle=0]{green_3D_2.eps}
\end{center}
\caption{Surface density of states at the sublattice without a
nearest neighbor in the next layer. The system has a stacking
fault of the type described in the text ten layers from the surface.
Top: $n=2$. Bottom: $n=10$.} \label{green_3D}
\end{figure}
The previous analysis can be extended to the study of Landau levels
in a magnetic field. As discussed earlier, the hoppings within the
layers depend now on the Landau level index $n$, instead of on ${\mib k}^{\vphantom{*}}_\parallel$.
The $n$ dependence of the hoppings in the two
layers within the unit cell is different. Because of this, the self
energy obtained by integrating out the perfect semiinfinite region
leads to a more complicated expression than that in
eq.~\ref{semiinfinite}. Within the region between the defect and the
surface the successive self energies have a twofold periodicity:
\begin{align}
\Sigma_i ( \omega ) &={n\,v_\ssr{F}^2\, \ell_B^{-2}\over \omega} +
{\gamma_1^2\over \omega - \Sigma_{i-1} ( \omega )} \\
\Sigma_{i+1} ( \omega ) &= {(n-1)\,v_\ssr{F}^2\, \ell_B^{-2} \over \omega} +
{\gamma_1^2\over \omega - \Sigma_{i} ( \omega )}
\end{align}
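A minimal sketch of this two-step recursion follows; the value of $v_F^2/\ell_B^2$ is an arbitrary test number, not taken from the text. The assertion checks causality: with the in-field hoppings alternating between $n$ and $n-1$, the retarded self energy stays in the lower half plane.

```python
# Two-step self-energy recursion in a field (illustrative sketch).
g1, eta, n = 0.39, 1e-3, 2
hop2 = 0.0025                      # v_F^2 / l_B^2 (illustrative, eV^2)

def sigma_surface(w, layers):
    z = w + 1j * eta
    sigma = 0.0 + 0.0j
    for i in range(layers):
        # hoppings alternate between n and n-1 in the two layers of the cell
        weight = n if i % 2 == 0 else n - 1
        sigma = weight * hop2 / z + g1**2 / (z - sigma)
    return sigma

assert sigma_surface(0.2, 10).imag < 0   # retarded: Im Sigma <= 0
```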
The resulting densities of states for Landau level index $n=2$ and
two magnetic fields, $B=1\,$T and $B=10\,$T, are shown in
fig.~\ref{green}.
We show finally in fig.~\ref{green_3D} the dependence of the peaks
in the surface density of states on the magnetic field. As before,
there is a stacking defect ten layers below the surface. In
agreement with experiments \cite{MAT05,NIM06,LI07}, there are
peaks which scale as $\sqrt{B}$ and peaks which scale as $B$.
\section{Discussion}
We have analyzed the appearance of two dimensional features in bulk
graphite. We show that deviations from the Bernal stacking order are
very effective in inducing two dimensional behavior. An ordered
array of graphene layers with the rhombohedral stacking order leads
to isolated Landau levels, and to quantized quantum Hall
plateaus at moderate magnetic fields in doped systems. We found
that the gap between Landau level subbands of indices $n$ and $n+1$
opens at a field $B_{{\rm c},n}$ with $B_{{\rm c},n=0}\equiv B^{\vphantom{*}}_0=0.123\,{\rm T}$
and $B_{{\rm c},n}\sim 4n\,B_0$ for large $n$. By contrast, in Bernal
graphite, the first gap is predicted to open at fields on the order of $10\,$T
\cite{BHRA07}, and the second gap opens only at enormous fields, on the order
of $1000\,$T.
We have also considered the simplest stacking defect in Bernal
graphite, which has locally a rhombohedral arrangement. These
defects are expected to be common in many graphite samples, and
concentrations up to 10\% have been reported~\cite{BCP88}. These
defects are very effective in decoupling the electronic states on
either side. They also give rise to a two dimensional band
of electronic states, localized in the vicinity of the defect.
Within a nearest neighbor tight binding model for the $\pi$ band
of graphite, with in-plane hopping $\gamma\ns_0=3.16\,$eV and interplane
hopping $\gamma\ns_1=0.39\,$eV, we found a maximum binding energy of
approximately 13 meV, for states rather close to the corners in the
basal Brillouin zone. When the full SWMC model is taken into account
\cite{SWMC}, we obtain a maximum binding energy of almost 40 meV
for electron states and 20 meV for hole states; the binding energy is
significant only along the zone faces.
\begin{figure}[!t]
\begin{center}
\includegraphics*[width=7cm,angle=0]{sketch_hall.eps}
\end{center}
\caption{Sketch of the expected behavior of the Hall conductivity in
the quantum limit, for a lightly doped system. The leftmost plateau is
a bulk effect, related to the Landau levels of Bernal graphite~\cite{BHRA07}. The size of the
other jumps depends on the concentration $x$ of stacking defects.
The continuum bands of Landau levels lead to a monotonically varying conductivity.}
\label{sketch_QHE}
\end{figure}
What are the implications of our work for magnetotransport in graphite with stacking faults?
To describe the physics, it is helpful to keep in mind the bound state Landau level
structure of fig. \ref{bsb}. First, suppose the graphite is undoped. In this case, the Fermi
level remains pinned within the central $n=0$ Landau levels. With only nearest neighbor
hoppings, there are two flat bands ({\it i.e.\/}\ which do not disperse as a function of $k^{\vphantom{*}}_z$)
at $E=0$ associated with each zone corner in the basal Brillouin zone. Taking into account
the weak second-neighbor plane hoppings $\gamma\ns_2$ and $\gamma\ns_5$, these bands disperse and
acquire a width of about 40 meV. For the full SWMC model, due to the breaking of electron-hole
symmetry, the Fermi level can drift within these central Landau subbands, even if the system is at
electroneutrality. As shown by Yoshioka and Fukuyama \cite{YF81}, due to interaction effects
one then expects a charge density wave (CDW) at sufficiently high fields. Anomalies in the observed
magnetotransport data corresponding to this CDW transition have indeed been observed
\cite{TAN81}. The presence of stacking faults, which produce bound states away from the central
Landau levels, should not affect this picture.
However, if the graphite is lightly doped, a different picture emerges \cite{BHRA07}.
In this case, the central Landau levels become filled at a field $B^*=\frac{1}{2} n d\,\phi^{\vphantom{*}}_0$,
where $n$ is the bulk carrier density and $d$ is the interplane separation ({\it i.e.\/}\ the $c$-axis
lattice constant is $2d$ due to Bernal stacking). For fields $B<B^*$, the central Landau
bands are filled, and the Hall conductivity should be quantized at a value
$2e^2/hd$ \cite{BHRA07}. As $B$ is decreased further, the Fermi level crosses the bound
state energy. The bound state Landau levels (one for each spin value and inequivalent zone corner)
then make a contribution to $\sigma^{\vphantom{*}}_{xy}$, of magnitude ${\rm\Delta} \sigma^\ssr{fault}_{xy}=2x e^2/hd$,
as shown in the sketch in fig. \ref{sketch_QHE}, where $x$ is the concentration of stacking faults.
Upon further reducing $B$, the Fermi level enters
into the first bulk band, and $\sigma^{\vphantom{*}}_{xy}$ begins to rise continuously. As $E^{\vphantom{*}}_\ssr{F}$ crosses
other bound state Landau levels, additional small jumps of ${\rm\Delta} \sigma^\ssr{fault}_{xy}=2x e^2/hd$
should appear. At a finite concentration $x$ of stacking faults, the bound states will themselves form a band,
and the small jumps will no longer have infinite slope.
The scenario discussed here shows how anomalous features could occur in the high
field magnetotransport of doped graphite; however, we cannot find any obvious connection
between our work and the observations of Kempa {\it et al.\/}\ \cite{KEK06}.
Stacking defects below a graphite surface decouple the surface
region from the bulk, leading to quasi-two-dimensional behavior,
with localized Landau levels. We have shown how such buried defects leave a signature
which can be measured by surface spectroscopy.
Finally, our results suggest that the electronic properties of few
layer graphene samples can be substantially modified by changes in
the stacking order.
\section{Acknowledgments} The authors gratefully acknowledge conversations with A. Bernevig,
P. Esquinazi, M. Fogler, N. Garc{\'\i}a, T. Hughes, and S. Raghu.
This work was supported by MEC (Spain) through grant
FIS2005-05478-C02-01 and CONSOLIDER CSD2007-00010, the Comunidad de
Madrid, through CITECNOMIK, CM2006-S-0505-ESP-0337, the EU Contract
12881 (NEST).
\section{Appendix: Full SWMC Treatment of Stacking Fault}
We define the vectors
\begin{equation}
\psi^{\vphantom{*}}_n=\begin{pmatrix} u^\alpha_n \\ v^\alpha_n \\ u^\beta_n \\ v^\beta_n \end{pmatrix} \ (n<0) \qquad,\qquad
\phi^{\vphantom{*}}_n=\begin{pmatrix} v^\gamma_n \\ u^\gamma_n \\ v^\beta_n \\ u^\beta_n \end{pmatrix} \ (n>0)\ .
\end{equation}
For a stacking defect $ABABCBCB$ the SWMC couplings are depicted in fig. \ref{stackfig_b}. In fact,
additional couplings must be introduced at the defect. In the bulk, sites have either zero or two
$c$-axis neighbors, but at the stacking fault there are two sites with a single such neighbor.
One expects the associated on-site energy ${\rm\Delta}''\approx\frac{1}{2}{\rm\Delta}$. In addition, there
are three interlayer couplings at the defect which in principle are distinct from $\gamma\ns_3$ and $\gamma\ns_4$,
and which we denote in the figure by dotted pale blue lines, with hopping amplitude
${\tilde\gamma}^{\vphantom{*}}_4$. For simplicity, we shall take ${\rm\Delta}''={\rm\Delta}$
and ${\tilde\gamma}^{\vphantom{*}}_4=\gamma\ns_4$ for two of the links, and ${\tilde\gamma}^{\vphantom{*}}_4=\gamma\ns_3$ for the
other link. For details, see the definition of the $F$ matrix below.
Let each pair of layers be indexed by a nonzero integer $n$.
From the figures, we can read off the Schr{\"o}dinger equations
\begin{align}
M\psi^{\vphantom{\dagger}}_{n-1} + K\psi^{\vphantom{\dagger}}_n + M^\dagger\psi^{\vphantom{\dagger}}_{n+1}&=0 \quad (n<-1)\\
M^*\phi^{\vphantom{\dagger}}_{n-1} + K^*\phi^{\vphantom{\dagger}}_n + M^{\rm t}\phi^{\vphantom{\dagger}}_{n+1}&=0 \quad (n>1)\ ,
\end{align}
where, consistent with the full SWMC Hamiltonian \cite{SWMC,DD02},
\begin{equation}
K=\begin{pmatrix} -E & -\gamma\ns_0\,S & \gamma\ns_4\,S & \gamma\ns_3\,S^* \\ -\gamma\ns_0\,S^* & -E+\rmDelta' & \gamma\ns_1 & \gamma\ns_4\,S \\
\gamma\ns_4\,S^* & \gamma\ns_1 & -E+\rmDelta' & -\gamma\ns_0\,S \\ \gamma\ns_3\,S & \gamma\ns_4\,S^* & -\gamma\ns_0\,S^* & -E \end{pmatrix}
\end{equation}
and
\begin{equation}
M=\begin{pmatrix} \frac{1}{2}\gamma\ns_2 & 0 & \gamma\ns_4\,S & \gamma\ns_3\,S^* \\
0 & \frac{1}{2}\gamma\ns_5 & \gamma\ns_1 & \gamma\ns_4\,S \\ 0 & 0 & \frac{1}{2}\gamma\ns_5 & 0 \\ 0 & 0 & 0 & \frac{1}{2}\gamma\ns_2 \end{pmatrix}\ .
\end{equation}
We take the SWMC parameters from ref. \cite{DD02}
\begin{align*}
\gamma\ns_0&=3160\,{\rm meV} & \gamma\ns_1 &= 390\,{\rm meV}\\
\gamma\ns_2&=-20\,{\rm meV} & \gamma\ns_3&=315\,{\rm meV}\\
\gamma\ns_4& =44\,{\rm meV}& \gamma\ns_5&=38\,{\rm meV}\ ,
\end{align*}
with ${\rm\Delta}=-8\,$meV.
Here, $\rmDelta'$ is a combination of the original SWMC parameters:
\begin{equation}
\rmDelta'={\rm\Delta} + \gamma\ns_5-\gamma\ns_2\ ,
\end{equation}
hence $\rmDelta'=50\,$meV.
At the defect, the Schr{\"o}dinger equation yields
\begin{align}
M\psi^{\vphantom{\dagger}}_{-2} + K\psi^{\vphantom{\dagger}}_{-1} + F^\dagger\phi^{\vphantom{\dagger}}_1&=0\label{defecta}\\
F\psi^{\vphantom{\dagger}}_{-1} + K^*\phi^{\vphantom{*}}_1 + M^{\rm t}\phi^{\vphantom{\dagger}}_2&=0\ ,\label{defectb}
\end{align}
where
\begin{equation}
F=\begin{pmatrix} \frac{1}{2}\gamma\ns_2 & 0 & \gamma\ns_3\,S &\gamma\ns_4\,S^* \\
0 & 0 & \gamma\ns_4 \,S^* & \gamma\ns_1 \\ 0 & 0 & 0 & \frac{1}{2}\gamma\ns_5 \\ 0 & 0 & \frac{1}{2}\gamma\ns_2 & 0 \end{pmatrix}\ .
\end{equation}
\subsection{Scattering matrix and bound states}
We write $\psi^{\vphantom{\dagger}}_n=z^n\,{\raise.35ex\hbox{$\chi$}}$ (for $n<0$) and $\phi^{\vphantom{\dagger}}_n={z^*}^n\,{\raise.35ex\hbox{$\chi$}}^*$ (for $n>0$).
In the bulk ($|n|>1$), we then have (for both sides)
\begin{equation}
\big(z^{-1}M + K + z \,M^\dagger\big)\,{\raise.35ex\hbox{$\chi$}}=0\ .
\label{chieqn}
\end{equation}
In order for a solution to exist, we require
\begin{equation}
P(z)\equiv{\rm det}\,\big(z^{-1}M + K + z \,M^\dagger\big)=0\ ,
\end{equation}
which is an eighth order equation in $z$. Note that $P(z)=0$ guarantees that $P({z^*}^{-1})=0$.
It can also be shown, due to the form of $M$, that $P(z)=P(z^{-1})$. Thus, the allowed values of $z$
come in sets $(z,z^*, z^{-1},{z^*}^{-1})$.
\begin{figure}[!t]
\centering
\includegraphics[width=7.5cm]{stackfig_b.eps}
\caption
{\label{stackfig_b} SWMC couplings for a stacking defect in Bernal graphite,
showing more clearly the four sublattice structure on either side of the defect.}
\end{figure}
Within a bulk energy band, two of the eight $z$ roots are unimodular, and may be written as
$z^{\vphantom{*}}_1=e^{ i k}$ and $z^{\vphantom{*}}_5=e^{-ik}$ with $k$ real. Their associated eigenvectors are
${\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_{1,5}$. Of the remaining six roots, three ($z^{\vphantom{*}}_2$, $z^{\vphantom{*}}_3$, $z^{\vphantom{*}}_4$) each have
modulus greater than unity and are thus unnormalizable on the right. The remaining three roots
($z^{\vphantom{*}}_6$, $z^{\vphantom{*}}_7$, $z^{\vphantom{*}}_8$) each have modulus smaller than unity and are unnormalizable
on the left. We keep only the normalizable solutions and write
\begin{align}
n&<0\ :& \psi^{\vphantom{*}}_n&={\cal I}\,e^{ikn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_1 + {\cal O}'\,e^{-ikn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_5\\
&&&\qquad+ A^{\vphantom{*}}_2\,z^n_2\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_2 + A^{\vphantom{*}}_3\,z^n_3\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_3
+ A^{\vphantom{*}}_4\,z^n_4\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_4\nonumber\\
&&&\nonumber\\
n&>0\ :& \phi^{\vphantom{*}}_n&={\cal I}'\,e^{-ikn}\,{\raise.35ex\hbox{$\chi$}}^*_1 + {\cal O}\,e^{ikn}\,{\raise.35ex\hbox{$\chi$}}^*_5 \\
&&&\qquad+ A^{\vphantom{*}}_6\,{z^*_6}^n\,{\raise.35ex\hbox{$\chi$}}^*_6 + A^{\vphantom{*}}_7\,{z^*_7}^n\,{\raise.35ex\hbox{$\chi$}}^*_7
+ A^{\vphantom{*}}_8\,{z^*_8}^n\,{\raise.35ex\hbox{$\chi$}}^*_8\ .\nonumber
\end{align}
Equations (\ref{defecta},\ref{defectb}) then yield eight equations in the ten unknowns
$({\cal I},{\cal O},{\cal I}',{\cal O}')$ and $(A^{\vphantom{*}}_2,A^{\vphantom{*}}_3,A^{\vphantom{*}}_4,A^{\vphantom{*}}_6,A^{\vphantom{*}}_7,A^{\vphantom{*}}_8)$. These then
determine, for each energy $E$, the ${\cal S}$-matrix, defined by the relation
\begin{equation}
\begin{pmatrix} {\cal O} \\ {\cal O}' \end{pmatrix} = \stackrel{{\cal S}}{\overbrace{\begin{pmatrix} t & r' \\ r & t' \end{pmatrix}}}
\begin{pmatrix} {\cal I} \\ {\cal I}' \end{pmatrix}
\end{equation}
If two bands overlap, then we have eigenvalues $z^{\vphantom{\dagger}}_{1,5}=e^{\pm ik}$ and $z^{\vphantom{*}}_{2,6}=
e^{\pm i p}$, with $\big|z^{\vphantom{*}}_{3,4}\big|>1$ and $\big|z^{\vphantom{*}}_{7,8}\big|<1$.
\begin{align}
n&<0\ :& \psi^{\vphantom{*}}_n&={\cal I}\,e^{ikn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_1 + {\cal O}'\,e^{-ikn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_5\\
&&&\quad +{\tilde{\cal I}}\,e^{ipn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_2 + {\tilde{\cal O}}'\,e^{-ipn}\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_6\nonumber\\
&&&\qquad+ A^{\vphantom{*}}_3\,z^n_3\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_3 +A^{\vphantom{*}}_4\,z^n_4\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_4\nonumber\\
&&&\nonumber\\
n&>0\ :& \phi^{\vphantom{*}}_n&={\cal I}'\,e^{-ikn}\,{\raise.35ex\hbox{$\chi$}}^*_1 + {\cal O}\,e^{ikn}\,{\raise.35ex\hbox{$\chi$}}^*_5 \\
&&&\quad +{\tilde{\cal I}}'\,e^{-ipn}\,{\raise.35ex\hbox{$\chi$}}^*_2+ {\tilde{\cal O}}\,e^{ipn}\,{\raise.35ex\hbox{$\chi$}}^*_6\nonumber\\
&&&\qquad + A^{\vphantom{*}}_7\,{z^*_7}^n\,{\raise.35ex\hbox{$\chi$}}^*_7+ A^{\vphantom{*}}_8\,{z^*_8}^n\,{\raise.35ex\hbox{$\chi$}}^*_8 \ .\nonumber
\end{align}
The $S$-matrix is then $4\times 4$, and we should take care to properly define it to act on
{\it flux amplitudes\/}, {\it viz.}
\begin{equation}
\begin{pmatrix} v^{1/2}_{1,k}\>{\cal O} \\ v^{1/2}_{1,k}\>{\cal O}' \\
v^{1/2}_{2,p}\>{\tilde{\cal O}} \\ v^{1/2}_{2,p}\>{\tilde{\cal O}}' \end{pmatrix} = {\cal S}
\begin{pmatrix} v^{1/2}_{1,k}\>{\cal I} \\ v^{1/2}_{1,k}\>{\cal I}' \\
v^{1/2}_{2,p}\>{\tilde{\cal I}} \\ v^{1/2}_{2,p}\>{\tilde{\cal I}}' \end{pmatrix} \ ,
\end{equation}
where $v^{\vphantom{*}}_{1,k}={\partial} E^{\vphantom{*}}_1(k)/{\partial} k$ and $v^{\vphantom{*}}_{2,p}={\partial} E^{\vphantom{*}}_2(p)/{\partial} p$.
If three bands overlap, the $S$-matrix is $6\times 6$.
\subsection{Bound states}
When $E$ does not lie within a bulk band, we write
\begin{align}
n&<0\ :& \psi^{\vphantom{*}}_n&=A^{\vphantom{*}}_1\,z^n_1\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_1 + A^{\vphantom{*}}_2\,z^n_2\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_2\\
&&&\qquad\qquad+ A^{\vphantom{*}}_3\,z^n_3\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_3 +A^{\vphantom{*}}_4 \,z^n_4\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_4 \nonumber\\
n&>0\ :& \phi^{\vphantom{*}}_n&= A^{\vphantom{*}}_5\,{z^*_5}^n\,{\raise.35ex\hbox{$\chi$}}^*_5 + A^{\vphantom{*}}_6\,{z^*_6}^n\,{\raise.35ex\hbox{$\chi$}}^*_6\\
&&&\qquad\qquad+ A^{\vphantom{*}}_7\,{z^*_7}^n\,{\raise.35ex\hbox{$\chi$}}^*_7+A^{\vphantom{*}}_8\,{z^*_8}^n\,{\raise.35ex\hbox{$\chi$}}^*_8\ .\nonumber
\end{align}
Here, $\big|z^{\vphantom{*}}_{1,2,3,4}\big|>1$ and $\big|z^{\vphantom{*}}_{5,6,7,8}\big|<1$. Without loss of generality, we
may assume
\begin{equation}
z^*_u=z^{-1}_{u+4}\ ,
\end{equation}
for $u=1,2,3,4$. Equations (\ref{defecta},\ref{defectb}) now give eight
homogeneous equations in the eight unknowns $A^{\vphantom{*}}_{1-8}$. A nontrivial solution can only exist when the corresponding
determinant vanishes, which puts a single complex condition on the energy $E$. The solutions are the allowed bound states.
We now apply eqns. \ref{defecta} and \ref{defectb}:
\begin{align}
M\psi^{\vphantom{\dagger}}_{-2} + K\psi^{\vphantom{\dagger}}_{-1} + F^\dagger\phi^{\vphantom{\dagger}}_1&=0\\
F\psi^{\vphantom{\dagger}}_{-1} + K^*\phi^{\vphantom{*}}_1 + M^{\rm t}\phi^{\vphantom{\dagger}}_2&=0
\end{align}
to
\begin{equation}
\psi^{\vphantom{\dagger}}_n=\sum_{u=1}^4 A^{\vphantom{*}}_u\,z^n_u\,{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_u\quad,\quad
\phi^{\vphantom{\dagger}}_n=\sum_{l=5}^8 A^{\vphantom{*}}_l\,{z^*_l}^n\,{\raise.35ex\hbox{$\chi$}}^*_l
\end{equation}
using
\begin{align}
\big(z^{-1}M + K + z \,M^\dagger\big)\,{\raise.35ex\hbox{$\chi$}}&=0\\
\big({z^*}^{-1}M^* + K^* + z^* \,M^{\rm t}\big)\,{\raise.35ex\hbox{$\chi$}}^*&=0\ .
\end{align}
This yields
\begin{equation}
M^\dagger\psi^{\vphantom{*}}_0 = F^\dagger\phi^{\vphantom{*}}_1 \qquad,\qquad F\,\psi^{\vphantom{\dagger}}_{-1}=M^*\phi^{\vphantom{*}}_0\ ,
\end{equation}
which, expanded, gives
\begin{equation}
\sum_{u=1}^4 A^{\vphantom{*}}_u\,M^\dagger{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_u = \sum_{l=5}^8 A^{\vphantom{*}}_l\,z^*_l\,F^\dagger {\raise.35ex\hbox{$\chi$}}^*_l
\label{BSA}
\end{equation}
and
\begin{equation}
\sum_{u=1}^4 A^{\vphantom{*}}_u\,z_u^{-1}\,F{\raise.35ex\hbox{$\chi$}}^{\vphantom{*}}_u = \sum_{l=5}^8 A^{\vphantom{*}}_l\,M^* {\raise.35ex\hbox{$\chi$}}^*_l\ .
\label{BSB}
\end{equation}
These are eight homogeneous equations in the eight unknowns, which can only be solved when
the corresponding determinant vanishes; this is the condition that $E$ lie at a bound state energy.
\subsection{Method of solution}
Eqn. \ref{chieqn} can be written as two coupled equations,
\begin{align}
z^{-1}\,M\,{\raise.35ex\hbox{$\chi$}} + {\raise.35ex\hbox{$\chi$}}'&=0\\
K\,{\raise.35ex\hbox{$\chi$}} + z\,M^\dagger\,{\raise.35ex\hbox{$\chi$}} -{\raise.35ex\hbox{$\chi$}}'&=0\ .
\end{align}
These equations may be recast as the rank-8 system,
\begin{equation}
\begin{pmatrix} z +NK & -N \\ M & z \end{pmatrix}
\begin{pmatrix} {\raise.35ex\hbox{$\chi$}} \\ {\raise.35ex\hbox{$\chi$}}' \end{pmatrix}=0\ ,
\end{equation}
where $N\equiv {M^\dagger}^{-1}$.
Thus the solutions $z^{\vphantom{*}}_j$ are the complex eigenvalues of the matrix
\begin{equation}
Q=\begin{pmatrix} -NK & N\\ -M & 0 \end{pmatrix}\ .
\end{equation}
Note that ${\rm det}(Q)={\rm det}(M)\cdot{\rm det}(N)=1$ independent of $K$
and of the above-diagonal elements of $M$ and the below-diagonal elements of $N$.
From row reduction, it is easy to derive
\begin{equation}
N={4\over\gamma\ns_2\gamma\ns_5}\begin{pmatrix} \frac{1}{2}\gamma\ns_5 & 0 & 0 & 0 \\
0 &\frac{1}{2}\gamma\ns_2 & 0 & 0 \\
-\gamma\ns_4\,S^* & -\gamma\ns_1\gamma\ns_2\gamma_5^{-1} & \frac{1}{2}\gamma\ns_2 & 0 \\
-\gamma_2^{-1}\gamma\ns_3\gamma\ns_5\,S &-\gamma\ns_4\,S^* & 0 & \frac{1}{2}\gamma\ns_5\end{pmatrix}\ .
\end{equation}
The equations (\ref{BSA}) and (\ref{BSB}) can now be written as an $8\times 8$ system,
\begin{equation}
\stackrel{R}{\overbrace{
\begin{pmatrix} M^\dagger_{ab}\,\xi^{\vphantom{*}}_{bu} & - z^*_l \, F^\dagger_{ab}\,\xi^*_{bl} \\
&\\
z^{-1}_u \, F^{\vphantom{*}}_{ab}\,\xi^{\vphantom{*}}_{bu}& - M^*_{ab}\,\xi^*_{bl} \end{pmatrix} }}
\begin{pmatrix} A^{\vphantom{*}}_{u=1,2,3,4} \\ \\ A^{\vphantom{*}}_{l=5,6,7,8} \end{pmatrix}=0\ ,
\end{equation}
where $a$, $b$, and $u$ run from $1$ to $4$, and $l$ runs from $5$ to $8$.
The implied sums for each submatrix are over $b$ and not $u$ or $l$, and
$\xi^{\vphantom{*}}_{ij}$ is the matrix of eigenvectors of $Q$:
\begin{equation}
\sum_{k=1}^8 Q^{\vphantom{*}}_{ik}\,\xi^{\vphantom{*}}_{kj}=z^{\vphantom{*}}_j\,\xi^{\vphantom{*}}_{ij}\quad\hbox{\rm (no sum on $j$)}\ ,
\end{equation}
where $i$, $j$, and $k$ run from $1$ to $8$. The bound state condition is
${\rm det}(R)=0$.
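The construction above is easy to check numerically. The sketch below builds $K$ and $M$ from the SWMC parameters quoted earlier (in meV), forms $Q$, and verifies both ${\rm det}(Q)=1$ and the root pairing $z\leftrightarrow 1/{\bar z}$ implied by $P(z)=0\Rightarrow P({z^*}^{-1})=0$. The values of $E$ and $S$ are arbitrary test inputs, not taken from the text.

```python
import numpy as np

# SWMC parameters (meV) as quoted in the appendix
g0, g1, g2, g3, g4, g5, Delta = 3160., 390., -20., 315., 44., 38., -8.
Dp = Delta + g5 - g2                      # Delta' = Delta + gamma_5 - gamma_2
E, S = 50.0, 0.05 + 0.02j                 # arbitrary test energy and lattice sum
Sc = np.conj(S)

K = np.array([[-E,      -g0*S,   g4*S,    g3*Sc],
              [-g0*Sc,  -E + Dp, g1,      g4*S],
              [g4*Sc,   g1,      -E + Dp, -g0*S],
              [g3*S,    g4*Sc,   -g0*Sc,  -E]])
M = np.array([[g2/2, 0,    g4*S, g3*Sc],
              [0,    g5/2, g1,   g4*S],
              [0,    0,    g5/2, 0],
              [0,    0,    0,    g2/2]])
N = np.linalg.inv(M.conj().T)             # N = (M^dagger)^{-1}
Q = np.block([[-N @ K, N], [-M, np.zeros((4, 4))]])

z = np.linalg.eigvals(Q)                  # the eight roots of P(z)
assert abs(np.linalg.det(Q) - 1) < 1e-6   # det(Q) = det(M) det(N) = 1
for zi in z:                              # P(z) = 0  implies  P(1/z*) = 0
    target = 1 / np.conj(zi)
    assert np.min(np.abs(z - target)) < 1e-6 * (1 + abs(target))
```

Sorting these eigenvalues by modulus then supplies the $|z|>1$ and $|z|<1$ sets needed to assemble $R$ and scan ${\rm det}(R)=0$ over $E$.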
\begin{figure}[!b]
\centering
\includegraphics[width=8.0cm]{boundstate.eps}
\caption
{\label{boundstate} Binding energies for bound states within the gap at negative energies (blue short dash - dot curve)
and positive energies (red long dash - dot curve), for the SWMC model with a single stacking fault, as a function of
wavevector in the basal Brillouin zone. The solid green line shows the energy gap between bonding and antibonding $\pi$ bands,
which collapses in the vicinity of the basal zone corner $K$.}
\end{figure}
We have numerically found the bound states lying in the gap between
the bonding and antibonding $\pi$ bands of graphite. Our results are
displayed in fig. \ref{boundstate}. For the full SWMC calculation,
there is no longer particle-hole symmetry. We find that the binding
energy ({\it i.e.\/}\ the distance of the bound state from the closest band
extremum) is considerable along the entire $KM$ edge. This is in
contrast to our analytic results for the nearest neighbor model,
where the bound state energy was considerable only for $|S|\approx
\gamma\ns_1/2\gamma\ns_0\simeq 0.062$, which is satisfied only on a small ring
about the $K$ and $K'$ points. On the other hand, the lack of bound
states along the $\Gamma K$ edge implies a finite broadening of the
Landau levels derived from this band.
It is important to realize that the SWMC model itself is only valid close to the $K$-$H$ spine in the Brillouin zone. The model must
be extended, as in ref. \cite{JD73}, to include other tight binding parameters, in order to fit the $\pi$ band throughout the entire zone,
which is necessary in order to model various optical transitions. In this case, the in-plane hopping is modified:
\begin{equation}
\gamma\ns_0 S \to \gamma^\sss{(1)}_0\,S^{\vphantom{*}}_1+\gamma^\sss{(2)}_0\,S^{\vphantom{*}}_2+\gamma^\sss{(3)}_0\,S^{\vphantom{*}}_3\ ,
\end{equation}
where $\gamma^\sss{(n)}_0$ and $S^{\vphantom{*}}_n$ are, respectively, the amplitude and the lattice sum of $e^{i{\mib k}\cdot{\mib\delta}}$
corresponding to the $n^\ssr{th}$ nearest neighbor in-plane inter-sublattice hopping \cite{JD73}, subject to the constraint
\begin{equation}
\gamma_0^\ssr{SWMC}=\gamma^\sss{(1)}_0 - 2\,\gamma^\sss{(2)}_0 + \gamma^\sss{(3)}_0\ .
\end{equation}
It is a rather simple matter to include such effects in our calculation, and we find in general, for a broad set of possible
parameterizations satisfying the constraint, that our results have the same qualitative features.
The parameters ${\rm\Delta}''$ and ${\tilde\gamma}^{\vphantom{*}}_4$ are not precisely known, and our approximations
regarding their values mean that our binding energies could easily be off by a few tens of meV.
We expect, however, that the general features found here should still pertain, namely a single bound
state whose binding energy is maximized at several tens of meV along the $K$-$M$ edge in the basal
Brillouin zone.
\section{Introduction}
The theory of hypergeometric integrals, which originated with Gauss, has been
generalized to higher dimensions and has applications in
various areas of mathematics and physics (\cite{ao-ki, koh-cft, var}).
In this generalization, the local system
cohomology groups of the complement of a hyperplane arrangement
play a crucial role.
Let ${\mathcal A}=\{H_1, \dots, H_n\}$ be an arrangement of affine hyperplanes in
$\mathbb{C}^\ell$, $M({\mathcal A})=\mathbb{C}^\ell\smallsetminus\bigcup_{H\in{\mathcal A}}H$ be its complement.
We also fix a defining equation $\alpha_i$ of $H_i$.
An arrangement ${\mathcal A}$ is called essential if the normal vectors of its hyperplanes
span $\mathbb{C}^\ell$. The first homology group $H_1(M({\mathcal A}), \mathbb{Z})$ is a
free abelian group generated by the meridians $\gamma_1, \dots, \gamma_n$
of hyperplanes. We denote their dual basis by $e_1, \dots, e_n\in
H^1(M({\mathcal A}), \mathbb{Z})$. The element $e_i$ can be identified with
$\frac{1}{2\pi\sqrt{-1}}d\log\alpha_i$ via the de Rham isomorphism.
The isomorphism class of a rank one complex local system ${\mathcal L}$ is
determined by a homomorphism $\rho:H_1(M({\mathcal A}), \mathbb{Z})\longrightarrow
\mathbb{C}^\times$, which is also determined by an $n$-tuple
$q=(q_1, \dots, q_n)\in(\mathbb{C}^\times)^n$, where
$q_i=\rho(\gamma_i)$.
For a generic parameter $(q_1, \dots, q_n)$, the following
vanishing result is known to hold.
\begin{equation}
\label{eq:typicalvanishing}
\dim H^k(M({\mathcal A}), {\mathcal L})=
\left\{
\begin{array}{ll}
0,& \mbox{ if $k\neq \ell$, }\\
&\\
|\chi(M({\mathcal A}))|,& \mbox{ if $k=\ell$.}
\end{array}
\right.
\end{equation}
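As a quick numerical illustration (not part of the paper's argument): for $n$ lines in general position in $\mathbb{C}^2$, the Betti numbers of the complement are $b_0=1$, $b_1=n$, $b_2=\binom{n}{2}$, so the generic rank $|\chi(M({\mathcal A}))|$ in (\ref{eq:typicalvanishing}) can be computed directly. A minimal sketch:

```python
from math import comb

def abs_euler_char_generic_lines(n):
    # Complement of n lines in general position in C^2:
    # b_0 = 1, b_1 = n, b_2 = C(n, 2), hence chi = 1 - n + C(n, 2).
    return abs(1 - n + comb(n, 2))

print(abs_euler_char_generic_lines(5))  # 6
```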
Several sufficient conditions for the vanishing (\ref{eq:typicalvanishing})
are known (\cite{ao-ki, koh}). Among others, Cohen, Dimca and Orlik
(\cite{cdo}) proved the following.
\begin{theorem}
\label{thm:cdo}
(CDO-type vanishing theorem) Suppose that $q_X\neq 1$ for
each dense edge $X$ contained in the hyperplane at infinity.
Then the vanishing (\ref{eq:typicalvanishing}) holds.
(See \S \ref{subsec:os} below for terminologies).
\end{theorem}
The above result is stronger than many other vanishing results.
Indeed, for the case $\ell=2$, it was proved in \cite{y-mini} that
the vanishing (\ref{eq:typicalvanishing}), together with an additional property, holds
if and only if the assumption of Theorem \ref{thm:cdo} holds.
The local system cohomology group $H^k(M({\mathcal A}), {\mathcal L})$ can be computed
using the twisted de Rham complex $(\Omega_{M({\mathcal A})}^\bullet, d+\omega\wedge)$,
with $\omega=\sum\lambda_i d\log\alpha_i$,
where each $\lambda_i$ is a complex number such that
$\exp(-2\pi\sqrt{-1}\lambda_i)=q_i$.
The algebra of rational differential forms $\Omega_{M({\mathcal A})}^\bullet$
has a natural $\mathbb{C}$-subalgebra $A_{\mathbb{C}}^\bullet({\mathcal A})$
generated by $e_i=\frac{1}{2\pi\sqrt{-1}}d\log\alpha_i$. This
subalgebra is known to be isomorphic to the cohomology ring
$H^\bullet(M({\mathcal A}), \mathbb{C})$ of $M({\mathcal A})$ (\cite{bri-tress})
and has a combinatorial
description, the so-called Orlik-Solomon algebra \cite{O-S}
(see \S \ref{subsec:os} below for details).
The Orlik-Solomon algebra provides a subcomplex
$(A_\mathbb{C}^\bullet({\mathcal A}), \omega\wedge)$ of the twisted de Rham complex,
which is called the Aomoto complex.
There exists a natural morphism
\begin{equation}
\label{eq:morphism}
(A_\mathbb{C}^\bullet({\mathcal A}), \omega\wedge)\hookrightarrow
(\Omega_{M({\mathcal A})}^\bullet, d+\omega\wedge)
\end{equation}
of complexes. The Aomoto complex
$(A_\mathbb{C}^\bullet({\mathcal A}), \omega\wedge)$
has a purely combinatorial description.
Furthermore, it can be considered as
a linearization of the twisted de Rham complex
$(\Omega_{M({\mathcal A})}^\bullet, d+\omega\wedge)$.
Indeed, there exists a Zariski open subset $U\subset(\mathbb{C}^\times)^n$
containing $(1, 1, \dots, 1)\in(\mathbb{C}^\times)^n$ such that
(\ref{eq:morphism}) is a quasi-isomorphism for $q\in U$ (\cite{esv, stv, nty}).
However, they are not isomorphic in general.
Vanishing results for the cohomology of the Aomoto complex were
also proved by Yuzvinsky.
\begin{theorem}
\label{thm:yuz}
(\cite{yuz, yuz-bos})
Let $\omega=\sum_{i=1}^n 2\pi\sqrt{-1}\lambda_ie_i\in A_\mathbb{C}^1({\mathcal A})$.
Suppose $\lambda_X\neq 0$ for every dense edge $X$ in $L({\mathcal A})$.
Then we have
\begin{equation}
\label{eq:yuz}
\dim H^k(A_\mathbb{C}^\bullet({\mathcal A}), \omega\wedge)=
\left\{
\begin{array}{ll}
0,& \mbox{ if $k\neq \ell$, }\\
&\\
|\chi(M({\mathcal A}))|,& \mbox{ if $k=\ell$.}
\end{array}
\right.
\end{equation}
\end{theorem}
We note that the assumptions in
Theorem \ref{thm:cdo}
and
Theorem \ref{thm:yuz}
are
somewhat complementary:
the first requires a nonresonance condition
along the hyperplane at infinity, whereas
Theorem \ref{thm:yuz} imposes a nonresonance condition on all dense edges
in the affine space.
Recently, Papadima and Suciu proved that, for a torsion local system,
the dimension of the local system cohomology group is bounded by
that of the Aomoto complex with finite field coefficients.
\begin{theorem}
\label{thm:ps-sp}
(\cite{PS})
Let $p\in\mathbb{Z}$ be a prime.
Suppose $\omega=\sum_{i=1}^n\lambda_ie_i\in A_{{\mathbb F}_p}^1({\mathcal A})$ and
${\mathcal L}$ is the local system determined by
$q_i=\exp(\frac{2\pi\sqrt{-1}}{p}\lambda_i)$.
Then
\begin{equation}
\label{eq:ineqps}
\dim_\mathbb{C} H^k(M({\mathcal A}), {\mathcal L})\leq
\dim_{{\mathbb F}_p} H^k(A_{{\mathbb F}_p}^\bullet({\mathcal A}), \omega \wedge),
\end{equation}
for all $k\geq 0$.
\end{theorem}
In view of Papadima and Suciu's inequality (\ref{eq:ineqps}),
it is natural to expect that the CDO-type vanishing theorem for
a $p$-torsion local system may be deduced from that for the Aomoto complex
with finite field coefficients. The main result of this paper is
the following CDO-type vanishing theorem for the Aomoto complex with
an arbitrary coefficient ring.
\begin{theorem}
\label{thm:intromain}
(Theorem \ref{thmprincipal})
Let ${\mathcal A}=\{H_1, \dots, H_n\}$ be an essential affine hyperplane arrangement
in $\mathbb{R}^\ell$. Let $R$ be a commutative ring with $1$.
Let
$\omega=\sum_{i=1}^n\lambda_ie_i\in A_R^1({\mathcal A})$.
Suppose that $\lambda_X\in R^\times$ for any dense edge $X$
contained in the hyperplane at infinity. Then the following holds.
\begin{equation}
\label{eq:main}
H^k(A_R^\bullet({\mathcal A}), \omega\wedge)\simeq
\left\{
\begin{array}{ll}
0,& \mbox{ if $k\neq \ell$, }\\
&\\
R^{|\chi(M({\mathcal A}))|},& \mbox{ if $k=\ell$.}
\end{array}
\right.
\end{equation}
\end{theorem}
Our proof relies on several earlier works (\cite{y-lef, y-mini, y-cham})
concerning the minimality of arrangements.
We also obtain an alternative proof of
Theorem \ref{thm:cdo} for real arrangements.
This paper is organized as follows.
In \S \ref{sec:notation},
we recall basic terminologies and the description of the
Aomoto complex in terms of chambers developed in
\cite{y-lef, y-mini, y-cham}. We also recall
the description of the twisted minimal complex in terms
of chambers. Roughly speaking, two cochain complexes
$(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_{\omega_\lambda})$ and
$(\mathbb{C}[\operatorname{ch}^\bullet({\mathcal A})], \nabla_{{\mathcal L}})$ are constructed
using the real structure of ${\mathcal A}$ (adjacency relations of chambers).
These cochain complexes provide a parallel description
of the cohomology of the Aomoto complex and the local system
cohomology group.
Indeed, using these complexes, we can
prove the CDO-type vanishing results for both cases simultaneously.
In \S \ref{sec:results},
we state the main result and describe the strategy for the proof.
The proof consists of an easy part and a hard part.
The easy part, carried out mainly by
elementary arguments on cochain complexes, is also done
in this section. The hard part is deferred to the subsequent section
(\S \ref{sec:proofs}).
The final section \S \ref{sec:proofs} is devoted to
analyzing the polyhedral structure of chambers,
which is required for the matrix presentation of the
coboundary map of $(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_{\omega_\lambda})$.
\section{Notations and Preliminaries}
\label{sec:notation}
\subsection{Orlik-Solomon algebra and Aomoto complex}
\label{subsec:os}
Let ${\mathcal A}=\{H_1,\hdots,H_n\}$ be an affine
hyperplane arrangement in $V=\mathbb{R}^\ell$.
Denote by $M({\mathcal A})=\mathbb{C}^\ell \smallsetminus \bigcup_{i=1}^n (H_i \otimes \mathbb{C})$
the complement of the complexified hyperplanes.
By identifying $\mathbb{R}^\ell$ with $\mathbb{P}_{\mathbb{R}}^\ell\smallsetminus\overline{H}_\infty$,
define the projective closure by
$\overline{\A}=\{\overline{H}_1,\hdots,\overline{H}_n,\overline{H}_\infty\}$, where
$\overline{H}_i\subset\mathbb{P}_{\mathbb{R}}^\ell$ is the closure of $H_i$ in the projective space.
We denote by $L({\mathcal A})$ and $L(\overline{\A})$ the intersection posets of
${\mathcal A}$ and $\overline{\A}$, respectively, namely, the posets of subspaces
obtained as intersections of hyperplanes, ordered by reverse inclusion.
An element of $L({\mathcal A})$ (and $L(\overline{\A})$) is also called an edge.
We denote by $L_k({\mathcal A})$ the set of all $k$-dimensional edges.
For example $L_{\ell}({\mathcal A})=\{V\}$ and $L_{\ell-1}({\mathcal A})={\mathcal A}$.
Then ${\mathcal A}$ is essential if and only if $L_0({\mathcal A})\neq\emptyset$.
Let $R$ be a commutative ring. Orlik and Solomon gave a
simple combinatorial description of the algebra $H^*(M(\mathcal{A}),R)$,
which is the quotient of the exterior algebra on
classes dual to the meridians, modulo a certain
ideal determined by $L({\mathcal A}),$ see \cite{O-S}.
More precisely, by associating to any hyperplane $H_i$ a generator $e_i \simeq \frac{1}{2\pi\sqrt{-1}} d \log \alpha_i,$ the Orlik-Solomon algebra $A^\bullet_R({\mathcal A})$ of ${\mathcal A}$ is the quotient of the exterior algebra generated by the elements $e_i,\,1\leq i \leq n,$ modulo the ideal $I({\mathcal A})$ generated by:
\begin{itemize}
\item the elements of the form $\{e_{i_1}\wedge\cdots\wedge e_{i_s}\,|\,H_{i_1} \cap \cdots \cap H_{i_s}= \emptyset\},$
\item the elements of the form $\{\partial(e_{i_1}\wedge\cdots\wedge e_{i_s})\,|\,H_{i_1} \cap \cdots \cap H_{i_s}\neq \emptyset\,\,\text{and}\,\,\codim(H_{i_1} \cap \cdots \cap H_{i_s})<s\}$, where
$\partial(e_{i_1}\wedge\cdots\wedge e_{i_s})=\sum_{\alpha=1}^s(-1)^{\alpha-1}
e_{i_1}\wedge\dots\wedge\widehat{e_{i_\alpha}}\wedge\dots\wedge e_{i_s}$.
\end{itemize}
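The boundary operator $\partial$ appearing in the second family of relations is easy to write down explicitly. The following minimal sketch (an illustration, not from the paper) works with index lists only, with signs as in the formula above:

```python
def os_boundary(indices):
    # boundary(e_{i_1} ^ ... ^ e_{i_s})
    #   = sum_a (-1)^(a-1) e_{i_1} ^ ... ^ (omit e_{i_a}) ^ ... ^ e_{i_s};
    # a monomial is represented by its list of indices, and the result
    # is a list of (sign, monomial) pairs.
    return [((-1) ** a, indices[:a] + indices[a + 1:])
            for a in range(len(indices))]

print(os_boundary([1, 2, 3]))  # [(1, [2, 3]), (-1, [1, 3]), (1, [1, 2])]
```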
Let $\lambda=(\lambda_1, \dots, \lambda_n)\in R^n$ and $\omega_{\lambda}= {\sum_{i=1}^n \lambda_i e_i}\in A^1_R({\mathcal A}).$ The cochain complex
$(A^\bullet_R({\mathcal A}),\omega_\lambda\wedge)=\{A^\bullet_R({\mathcal A})\stackrel{\omega_\lambda\wedge}{\longrightarrow}A^{\bullet+1}_R({\mathcal A})\}$ is called
\emph{the Aomoto complex}.
We say that an edge $X \in L(\overline{\A})$ is \textit{dense} if the localization $\overline{\A}_X=\{\overline{H}\in \overline{\A} \,|\, X \subseteq \overline{H} \}$ is indecomposable (see
\cite{ot-int} for more details).
We regard each hyperplane $\overline{H}\in\overline{\A}$ as a dense edge.
In this paper, the set of dense edges of $\overline{\A}$ contained in $\overline{H}_\infty$
plays an important role. We denote by ${\operatorname{\mathsf D}}_\infty(\overline{\A})$ the set of all
dense edges contained in $\overline{H}_\infty$. We will characterize
$X\in{\operatorname{\mathsf D}}_\infty(\overline{\A})$ in terms of chambers in Proposition \ref{densechamber}.
Set $\lambda_\infty:= - \sum_{i=1}^n \lambda_i$, and for any $X\in L(\overline{\A})$,
$\lambda_X:= {\sum_{\overline{H}_i\supset X} \lambda_i}$, where the index $i$
runs over $\{1, 2, \dots, n, \infty\}$.
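The convention $\lambda_\infty=-\sum_i\lambda_i$ makes $\lambda_X$ a sum over all hyperplanes (including $\overline{H}_\infty$) whose closures contain $X$. A minimal sketch, with an illustrative encoding not taken from the paper:

```python
def lambda_edge(lam, through):
    # lambda_X = sum of lambda_i over the hyperplanes H_i, with i in
    # {1, ..., n, 'inf'}, whose projective closures contain X; by
    # convention lambda_inf = -(lambda_1 + ... + lambda_n).
    full = dict(lam)
    full['inf'] = -sum(lam.values())
    return sum(full[i] for i in through)

# lambda = (1, 2, 3) gives lambda_inf = -6; an edge contained in both
# H_inf and H_1 has lambda_X = -6 + 1 = -5.
print(lambda_edge({1: 1, 2: 2, 3: 3}, ['inf', 1]))  # -5
```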
The isomorphism class of
a rank one local system ${\mathcal L}$ on the complexified complement $M({\mathcal A})$ is
determined by the monodromy $q_i\in\mathbb{C}^\times$ around each hyperplane
$H_i$. As in the case of
Aomoto complex, we denote $q_\infty=(q_1q_2\cdots q_n)^{-1}$
and $q_X=\prod_{\overline{H}_i\supset X}q_i$ for an edge $X\in L(\overline{\A})$.
\subsection{Chambers and minimal complex}
\label{subsec:chambers}
In this section, we recall the description of the minimal complex
in terms of real structures from \cite{y-lef, y-mini, y-cham}.
Let ${\mathcal A}=\{H_1, \dots, H_n\}$ be an essential
hyperplane arrangement in $\mathbb{R}^\ell$.
A connected component of $\mathbb{R}^\ell\smallsetminus\bigcup_{i=1}^nH_i$ is called a
chamber. The set of all chambers of ${\mathcal A}$ is denoted by $\operatorname{ch}({\mathcal A})$.
A chamber $C\in\operatorname{ch}({\mathcal A})$ is called a bounded chamber if $C$ is
bounded. The set of all bounded chambers of ${\mathcal A}$ is denoted by
$\operatorname{bch}({\mathcal A})$.
For a chamber $C\in\operatorname{ch}({\mathcal A})$,
denote by $\overline{C}$ the closure of $C$ in $\mathbb{P}_{\mathbb{R}}^\ell$.
It is easily seen that
a chamber $C$ is bounded if and only if $\overline{C}\cap\overline{H}_\infty=\emptyset$.
For given two chambers $C, C'\in\operatorname{ch}({\mathcal A})$, denote by
\[
\operatorname{Sep}(C, C'):=\{H_i\in{\mathcal A}\mid \mbox{ $H_i$ separates $C$ and $C'$}\},
\]
the set of separating hyperplanes of $C$ and $C'$.
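When chambers are encoded by their sign vectors with respect to the defining equations $\alpha_1,\dots,\alpha_n$ (a standard encoding, not introduced in this paper), $\operatorname{Sep}(C,C')$ consists exactly of the hyperplanes on which the signs differ. A minimal sketch:

```python
def separating(sign_C, sign_Cp):
    # Sep(C, C') = {H_i : the sign of alpha_i differs on C and C'},
    # with a chamber encoded by its sign vector in {+1, -1}^n.
    return [i for i, (s, t) in enumerate(zip(sign_C, sign_Cp), start=1)
            if s != t]

print(separating([1, 1, -1], [1, -1, 1]))  # [2, 3]
```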
For the description of the minimal complex, we have to fix a
generic flag. Let
\[
{\mathcal F}: \emptyset=F^{-1}\subset F^0\subset F^1\subset\dots\subset F^{\ell}
=\mathbb{R}^\ell
\]
be a generic flag (i.e., $F^k$ is a generic $k$-dimensional affine subspace,
in other words, $\dim(\overline{X}\cap\overline{F}^k)=\dim \overline{X}+k-\ell$ for any
$\overline{X}\in L(\overline{\A})$).
The genericity of ${\mathcal F}$ is equivalent to
\[
F^k\cap L_i({\mathcal A})=L_{k+i-\ell}({\mathcal A}\cap F^k),
\]
for $k+i\geq\ell$.
\begin{definition}
\label{def:nearinfty}
We say that the hyperplane $F^{\ell-1}$ is near to $\overline{H}_\infty$ when
$F^{\ell-1}$ does not separate $0$-dimensional
edges $L_0({\mathcal A})\subset\mathbb{R}^\ell$.
Similarly, we say the flag ${\mathcal F}$ is near to $\overline{H}_\infty$ when
$F^{k-1}$ does not separate $L_0({\mathcal A}\cap F^{k})$ for all
$k=1, \dots, \ell$.
\end{definition}
From this point, we assume that the flag ${\mathcal F}$ is
near to $\overline{H}_\infty$.
For a generic flag ${\mathcal F}$ near to $\overline{H}_\infty$, we define
\[
\begin{split}
\operatorname{ch}^k({\mathcal A})
&=
\{C\in\operatorname{ch}({\mathcal A})\mid C\cap F^k\neq\emptyset, C\cap F^{k-1}=\emptyset\}\\
\operatorname{bch}^k({\mathcal A})
&=
\{C\in\operatorname{ch}^k({\mathcal A})\mid C\cap F^k\mbox{ is bounded}\}\\
\operatorname{uch}^k({\mathcal A})
&=
\{C\in\operatorname{ch}^k({\mathcal A})\mid C\cap F^k\mbox{ is unbounded}\}.\\
\end{split}
\]
Then clearly, we have
\[
\begin{split}
\operatorname{ch}^k({\mathcal A})&=\operatorname{bch}^k({\mathcal A})\sqcup\operatorname{uch}^k({\mathcal A})\\
\operatorname{ch}({\mathcal A})&=\bigsqcup_{k=0}^\ell\operatorname{ch}^k({\mathcal A}).
\end{split}
\]
Note that $\operatorname{bch}^\ell({\mathcal A})=\operatorname{bch}({\mathcal A})$, however, for $k<\ell$,
$C\in\operatorname{bch}^k({\mathcal A})$ is an unbounded chamber.
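The stratification $\operatorname{ch}^\bullet({\mathcal A})$ can be read off from which flag pieces a chamber meets: since $F^0\subset\dots\subset F^\ell$, a chamber $C$ lies in $\operatorname{ch}^k({\mathcal A})$ exactly when $k$ is the least index with $C\cap F^k\neq\emptyset$. A minimal sketch (illustrative, not from the paper):

```python
def chamber_level(meets):
    # meets[k] is True iff C meets F^k; the list is monotone because
    # F^0 is contained in F^1, ..., is contained in F^ell, so
    # C lies in ch^k(A) for the least k with meets[k] True.
    return next(k for k, m in enumerate(meets) if m)

print(chamber_level([False, False, True, True]))  # 2
```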
\begin{definition}(\cite[Definition 2.1]{y-mini})
Let $C\in \operatorname{uch}({\mathcal A})$.
There exists a unique chamber, denoted by $C^{\vee}\in\operatorname{uch}({\mathcal A})$,
which is the opposite with respect to $\overline{C}\cap \overline{H}_\infty,$
where $\overline{C}$ is the closure of $C$ in the projective space
$\mathbb{P}_{\mathbb{R}}^\ell$.
\end{definition}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick,rounded corners=0.6cm] (1,1) -- (8,1) -- (8,4.5) node[right] {$\overline{H}_\infty$};
\draw[thick] (1,2.5) node [left] {$H_1$} --(8.5,3.5);
\draw[thick] (1,3.5) node [left] {$H_2$} --(8.5,1.5);
\draw[thick,rounded corners=0.3cm] (5,0.5) -- (3,1.5) -- (3,4.5) node [above] {$H_3$};
\draw[thick,rounded corners=0.3cm] (4,0) -- (4,4.5) node [above] {$H_4$};
\draw[thick,rounded corners=0.3cm] (3,0.5) -- (5,1.5) -- (5,4.5) node [above] {$H_5$};
\filldraw[fill=black, draw=black] (4,1) circle (2pt);
\draw (2,1.5) node[above] {$C_1$};
\draw (3.5,1.5) node[above] {$C_2$};
\draw (4.5,1.5) node[above] {$C_3$};
\draw (6,1.5) node[above] {$C_4$};
\draw (2,3.5) node[above] {$C_4^\lor$};
\draw (3.5,3.5) node[above] {$C_2^\lor$};
\draw (4.5,3.5) node[above] {$C_3^\lor$};
\draw (6,3.5) node[above] {$C_1^\lor$};
\draw (2,0) node[above] {$C_1^\lor$};
\draw (3.5,0) node[above] {$C_3^\lor$};
\draw (4.5,0) node[above] {$C_2^\lor$};
\draw (6,0) node[above] {$C_4^\lor$};
\draw[very thick] (1,1) node[above] {\footnotesize $\overline{C_1}\cap\overline{H}_\infty$} --(4,1);
\end{tikzpicture}
\caption{Opposite chambers}
\label{fig:opposite}
\end{figure}
Let us denote the projective subspace generated by $\overline{C}\cap\overline{H}_\infty$
by $X(C)=\langle\overline{C}\cap\overline{H}_\infty\rangle$.
\begin{proposition}
\label{prop:charsep}Let $C\in \operatorname{uch}({\mathcal A})$. Then
\begin{equation}
\label{eq:charsep}
\operatorname{Sep}(C, C^\lor)=\{H\in{\mathcal A}\mid \overline{H}\not\supset X(C)\}=
\overline{\A}\smallsetminus\overline{\A}_{X(C)}.
\end{equation}
\end{proposition}
\begin{proof}
Let $p\in C$ and $p'$ be a point in the relative interior of
$\overline{C}\cap\overline{H}_\infty$.
Take the line $L=\langle p, p'\rangle\subset\mathbb{P}_{\mathbb{R}}^\ell$.
Choose a point $p''\in C^\lor\cap L$.
Then consider the segment
$[p, p'']\subset \mathbb{R}^\ell=\mathbb{P}_{\mathbb{R}}^\ell\smallsetminus\overline{H}_\infty$
(See Figure \ref{fig:segment}).
On the projective space $\mathbb{P}_{\mathbb{R}}^\ell$, the line $L=\langle p, p'\rangle$
intersects every hyperplane $\overline{H}\in\overline{\A}$ exactly once.
Moreover, $L$ intersects each $\overline{H}\in\overline{\A}_{X(C)}$ at $p'$.
On the other hand, the segment $[p, p'']$ intersects precisely the hyperplanes
$H\in\operatorname{Sep}(C, C^\lor)$.
Hence we have (\ref{eq:charsep}).
\end{proof}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick,rounded corners=0.6cm] (1,1) -- (8,1) -- (8,4.5) node[right] {$\overline{H}_\infty$};
\draw[thick] (1,2.5) node [left] {$H_1$} --(8.5,3.5);
\draw[thick] (1,3.3) node [left] {$H_2$} --(8.5,2);
\draw[thick,rounded corners=0.3cm] (5.5,0.5) -- (2.5,1.5) -- (2.5,4.5) node [above] {$H_3$};
\draw[thick,rounded corners=0.3cm] (4,0) -- (4,4.5) node [above] {$H_4$};
\draw[thick,rounded corners=0.3cm] (2.5,0.5) -- (5.5,1.5) -- (5.5,4.5) node [above] {$H_5$};
\filldraw[fill=black, draw=black] (3,0.5) node [below] {$p''$} circle (2pt);
\filldraw[fill=black, draw=black] (5,4) node [left] {$p''$} circle (2pt);
\filldraw[fill=black, draw=black] (5,2) node [left] {$p$} circle (2pt);
\filldraw[fill=black, draw=black] (4,0.5) node [right] {$p'$};
\draw[thick,dashed,rounded corners=0.2cm] (3,0.5)--(5,1.5)--(5,2);
\draw[very thick] (5,2)--(5,4);
\filldraw[fill=black, draw=black] (4,1) circle (2pt);
\end{tikzpicture}
\caption{The segment $[p, p'']$ (thick segment).}
\label{fig:segment}
\end{figure}
\begin{corollary}
\label{cor:sep}
If $\dim X(C)=\ell-1$, then $\operatorname{Sep}(C, C^\lor)={\mathcal A}$.
\end{corollary}
\begin{proof}
In this case, $\overline{\A}_{X(C)}=\{\overline{H}_\infty\}$. Proposition
\ref{prop:charsep} concludes $\operatorname{Sep}(C, C^\lor)={\mathcal A}$.
\end{proof}
\begin{proposition}
\label{prop:bchuch}
(\cite{y-lef, y-mini})
\begin{itemize}
\item[$(i)$] $\#\operatorname{ch}^k({\mathcal A})=b_k$, where $b_k=b_k(M({\mathcal A}))$.
\item[$(ii)$] $\#\operatorname{bch}^k({\mathcal A})=\#\operatorname{uch}^{k+1}({\mathcal A})$.
\item[$(iii)$] $\#\operatorname{bch}^k({\mathcal A})=b_k-b_{k-1}+\dots+(-1)^kb_0$.
\end{itemize}
\end{proposition}
Concerning $(ii)$ of Proposition \ref{prop:bchuch},
an explicit bijection is given by the opposite chamber,
\[
\iota: \operatorname{bch}^k({\mathcal A})\stackrel{\simeq}{\longrightarrow}\operatorname{uch}^{k+1}({\mathcal A}),
C\longmapsto C^\lor.
\]
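Part $(iii)$ of Proposition \ref{prop:bchuch} expresses $\#\operatorname{bch}^k({\mathcal A})$ as an alternating sum of Betti numbers. A small illustrative computation (the Betti numbers below are those of four lines in general position, an example not taken from the paper):

```python
def bch_counts(betti):
    # Part (iii): #bch^k = b_k - b_{k-1} + ... + (-1)^k b_0.
    return [sum((-1) ** (k - i) * betti[i] for i in range(k + 1))
            for k in range(len(betti))]

# Four lines in general position in R^2: (b_0, b_1, b_2) = (1, 4, 6).
# The last entry recovers the 3 bounded chambers of this arrangement.
print(bch_counts([1, 4, 6]))  # [1, 3, 3]
```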
The next result characterizes the dense edges contained in $\overline{H}_\infty$.
\begin{proposition}(\cite[Proposition 2.4]{y-mini})\label{densechamber}
Let ${\mathcal A}$ be an affine arrangement in $\mathbb{R}^\ell.$
An edge $X\in L(\overline{\A})$ with $X \subseteq \overline{H}_\infty$
is dense if and only if $X=X(C)$ for some chamber
$C\in \operatorname{uch}({\mathcal A}).$ In particular, we have
\begin{equation}
\label{eq:denseedge}
{\operatorname{\mathsf D}}_\infty(\overline{\A})=\{X(C)\mid C\in\operatorname{uch}({\mathcal A})\}.
\end{equation}
\end{proposition}
Next we define the degree map
\[
\deg:\operatorname{ch}^k({\mathcal A})\times\operatorname{ch}^{k+1}({\mathcal A})\longrightarrow\mathbb{Z}.
\]
Let $B=B^k\subset F^k$ be a
$k$-dimensional ball with sufficiently large radius so that
every $0$-dimensional edge $X\in L_0({\mathcal A}\cap F^k)\simeq L_{\ell-k}({\mathcal A})$
is contained in the interior of $B^k$.
Let $C\in\operatorname{ch}^k({\mathcal A})$ and $C'\in\operatorname{ch}^{k+1}({\mathcal A})$.
Then there exists a vector field $U^{C'}$ on $F^k$
(\cite{y-lef}) which satisfies the following conditions.
\begin{itemize}
\item
$U^{C'}(x)\neq 0$ for $x\in\partial \overline{C}\cap B^k$.
\item
Let $x\in\partial(B^k)\cap \overline{C}$. Then $T_x(\partial B^k)$ can be considered
as a hyperplane of $T_xF^k$. We impose the condition that
$U^{C'}(x)\in T_xF^k$ is contained in the half space corresponding to
the inside of $B^k$.
\item
If $x\in H\cap F^k$ for a hyperplane $H\in{\mathcal A}$, then
$U^{C'}(x)\not\in T_x(H\cap F^k)$ and is directed to the side
in which $C'$ is lying with respect to $H$.
\end{itemize}
When the vector field $U^{C'}$ satisfies the above conditions,
we say that \emph{the vector field $U^{C'}$ is directed to the chamber $C'$}.
The above conditions imply that if either
$x\in H\cap F^k$ for some $H\in{\mathcal A}$ or $x\in \partial B^k$,
then $U^{C'}(x)\neq 0$.
Thus, for $C\in\operatorname{ch}^k({\mathcal A})$, the vector field $U^{C'}$ does not vanish on $\partial (\overline{C}\cap B^k)$.
Hence we can consider the following Gauss map.
\[
\frac{U^{C'}}{|U^{C'}|}: \partial(\overline{C}\cap B^k)\longrightarrow S^{k-1}.
\]
Fix an orientation of $F^k$, which induces an orientation on
$\partial(\overline{C}\cap B^k)$.
\begin{definition}
\label{degreemap}
Define the degree $\deg(C, C')$ between $C\in\operatorname{ch}^k({\mathcal A})$ and
$C'\in\operatorname{ch}^{k+1}({\mathcal A})$ by
\[
\deg(C,C'):=\deg\left(\left.
\frac{U^{C'}}{|U^{C'}|}\right|_{\partial (\overline{C}\cap B^k)}:
\partial(\overline{C}\cap B^k)\longrightarrow S^{k-1}\right)\in\mathbb{Z}.
\]
This is independent of the choice of $U^{C'}$ (\cite{y-lef}).
\end{definition}
If the vector field $U^{C'}$ does not have zeros on $\overline{C}\cap B^k$,
then the Gauss map can be extended to a map
$\overline{C}\cap B^k\longrightarrow S^{k-1}$. Hence
$\frac{U^{C'}}{|U^{C'}|}:\partial(\overline{C}\cap B^k)\longrightarrow S^{k-1}$
is homotopic to a constant map. Thus we have the following.
\begin{proposition}
\label{prop:degree0}
If the vector field $U^{C'}$ is nowhere zero on $\overline{C}\cap B^k$,
then $\deg(C, C')=0$.
\end{proposition}
\begin{example}
\label{ex:pointing}
Let $p_0\in F^k$ be such that $p_0\notin\bigcup_{H\in{\mathcal A}}H\cup\partial B^k$.
Define the pointing vector field $U^{p_0}$ by
\begin{equation}
U^{p_0}(x)=\overrightarrow{x; p_0}\in T_xF^k,
\end{equation}
where $\overrightarrow{x; p_0}$ is the tangent vector at $x$ pointing
toward $p_0$ (see Figure \ref{fig:pointingvf}).
The vector field $U^{p_0}$ is directed to the chamber containing $p_0$.
Note that $U^{p_0}(x)=0$ if and only if $x=p_0$.
Hence if $p_0\notin C\cap B^k$, the Gauss map
$\frac{U^{p_0}}{|U^{p_0}|}:\partial(\overline{C}\cap B^k)\longrightarrow S^{k-1}$
has
$\deg\left(\frac{U^{p_0}}{|U^{p_0}|}\right)=0$. Otherwise, if
$p_0\in C\cap B^k$, $\deg\left(\frac{U^{p_0}}{|U^{p_0}|}\right)=(-1)^k$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw [thick] (0,1)--(8,1);
\draw [thick] (0,4)--(8,4);
\draw [thick] (1,0)--(1,5);
\draw [thick] (7,0)--(7,5);
\draw [thick] (0,0.5)--(8,4.5);
\draw[->, thick] (1,4)--(2.25,3.5);
\draw[->, thick] (1,3)--(2.25,2.75);
\draw[->, thick] (1,2)--(2.25,2);
\draw[->, thick] (1,1)--(2.25,1.25);
\draw[->, thick] (2,4)--(3,3.5);
\draw[->, thick] (2,3)--(3,2.75);
\draw[->, thick] (2,1)--(3,1.25);
\draw[->, thick] (3,4)--(3.75,3.5);
\draw[->, thick] (3,3)--(3.75,2.75);
\draw[->, thick] (3,2)--(3.75,2);
\draw[->, thick] (3,1)--(3.75,1.25);
\draw[->, thick] (4,4)--(4.5,3.5);
\draw[->, thick] (4,3)--(4.5,2.75);
\draw[->, thick] (4,2)--(4.5,2);
\draw[->, thick] (4,1)--(4.5,1.25);
\draw[->, thick] (5,4)--(5.25,3.5);
\draw[->, thick] (5,3)--(5.25,2.75);
\draw[->, thick] (5,2)--(5.25,2);
\draw[->, thick] (5,1)--(5.25,1.25);
\draw[->, thick] (6,4)--(6,3.5);
\draw[->, thick] (6,3)--(6,2.75);
\draw[->, thick] (6,1)--(6,1.25);
\draw[->, thick] (7,4)--(6.75,3.5);
\draw[->, thick] (7,3)--(6.75,2.75);
\draw[->, thick] (7,2)--(6.75,2);
\draw[->, thick] (7,1)--(6.75,1.25);
\filldraw[fill=black, draw=black] (6,2) circle (2pt) node[left] {$p_0$};
\end{tikzpicture}
\caption{The pointing vector field $\frac{1}{4}U^{p_0}$}
\label{fig:pointingvf}
\end{figure}
\end{example}
Consider the Orlik-Solomon algebra $A_R^\bullet({\mathcal A})$ over
the commutative ring $R$. Let $\omega_\lambda=\sum_{i=1}^n\lambda_i e_i
\in A_R^1({\mathcal A})$, ($\lambda_i\in R$).
We will describe the Aomoto complex $(A_R^\bullet({\mathcal A}), \omega_\lambda\wedge)$
in terms of chambers.
For two chambers $C, C'\in\operatorname{ch}({\mathcal A})$, define $\lambda_{\operatorname{Sep}(C, C')}$ by
\[
\lambda_{\operatorname{Sep}(C, C')}:=
\sum_{H_i\in\operatorname{Sep}(C, C')}\lambda_i.
\]
\begin{proposition}
\label{prop:lambdadense}
Let $C$ be an unbounded chamber. Then
\[
\lambda_{\operatorname{Sep}(C, C^\lor)}=-\lambda_{X(C)}.
\]
\end{proposition}
\begin{proof}
By Proposition \ref{prop:charsep}, we have
$\overline{\A}=\overline{\A}_{X(C)}\sqcup\operatorname{Sep}(C, C^\lor)$. Hence,
from the definition of $\lambda_{\infty}=-\sum_{i=1}^n\lambda_i$,
we obtain
$\lambda_{\operatorname{Sep}(C, C^\lor)}+\lambda_{X(C)}=0$.
\end{proof}
Let $R[\operatorname{ch}^k({\mathcal A})]=\bigoplus_{C\in\operatorname{ch}^k({\mathcal A})}R\cdot[C]$
be the free $R$-module generated by $\operatorname{ch}^k({\mathcal A})$.
Let $\nabla_{\omega_\lambda}: R[\operatorname{ch}^k({\mathcal A})] \longrightarrow R[\operatorname{ch}^{k+1}({\mathcal A})]$
be the $R$-homomorphism defined by
\begin{equation}
\label{eq:defwedge}
\nabla_{\omega_\lambda}([C])=
\displaystyle{\sum_{C'\in \operatorname{ch}^{k+1}}} \deg(C,C')\cdot
\lambda_{\operatorname{Sep}(C, C')}\cdot [C'].
\end{equation}
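With $\deg$ and $\lambda_{\operatorname{Sep}}$ in hand, the coboundary map (\ref{eq:defwedge}) is simply a matrix whose $(C',C)$-entry is $\deg(C,C')\cdot\lambda_{\operatorname{Sep}(C,C')}$. A generic sketch, with the degree and separating data supplied as callables and fed toy inputs (labeled as such, not taken from the paper):

```python
def nabla_matrix(ch_k, ch_k1, deg, lam_sep):
    # Matrix of nabla_{omega_lambda}: R[ch^k] -> R[ch^{k+1}] in the
    # chamber bases; the row-C', column-C entry is
    # deg(C, C') * lambda_{Sep(C, C')}.
    return [[deg(C, Cp) * lam_sep(C, Cp) for C in ch_k] for Cp in ch_k1]

# Toy data: one chamber in ch^0, two in ch^1, all degrees 1, and
# lambda_{Sep} read off from a table.
lam = {('C', 'D1'): 2, ('C', 'D2'): 5}
M = nabla_matrix(['C'], ['D1', 'D2'],
                 deg=lambda C, Cp: 1,
                 lam_sep=lambda C, Cp: lam[(C, Cp)])
print(M)  # [[2], [5]]
```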
\begin{proposition}
\label{prop:aomotochamber}
(\cite{y-cham})
$(R[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{\omega_\lambda})$ is a cochain complex. Furthermore, there is a natural isomorphism of cochain complexes,
\[
(R[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{\omega_\lambda})
\simeq (A^\bullet_R({\mathcal A}),\omega_\lambda\wedge).
\]
In particular,
\[
H^k
(R[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{\omega_\lambda})
\simeq
H^k(A^\bullet_R({\mathcal A}),\omega_\lambda\wedge).
\]
\end{proposition}
Let ${\mathcal L}$ be a rank one local system on $M({\mathcal A})$ which
has monodromy $q_i\in\mathbb{C}^\times$ ($i=1, \dots, n$) around $H_i$.
Fix $q_i^{1/2}=\sqrt{q_i}$ and define $q_{\infty}^{1/2}$ and
$\Delta(C, C')$ by
$q_{\infty}^{1/2}:=\left(q_1^{1/2}\cdots q_n^{1/2}\right)^{-1}$ and
\[
\Delta(C, C'):=
\prod_{H_i\in\operatorname{Sep}(C, C')}q_i^{1/2}-
\prod_{H_i\in\operatorname{Sep}(C, C')}q_i^{-1/2},
\]
respectively. Then the local system cohomology group
can be computed in a similar way to the Aomoto complex.
Indeed, let us define the linear map
$\nabla_{{\mathcal L}}:\mathbb{C}[\operatorname{ch}^k({\mathcal A})]\longrightarrow\mathbb{C}[\operatorname{ch}^{k+1}({\mathcal A})]$ by
\[
\nabla_{{\mathcal L}}([C])=
\sum_{C'\in \operatorname{ch}^{k+1}} \deg(C,C')\cdot
\Delta(C, C')\cdot [C'].
\]
Then we have the following.
\begin{proposition}
\label{prop:localchamber}
(\cite{y-lef})
$(\mathbb{C}[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{{\mathcal L}})$ is a cochain complex.
Furthermore, there is a natural isomorphism of cohomology groups:
\[
H^k(\mathbb{C}[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{{\mathcal L}}) \simeq H^k(M({\mathcal A}), {\mathcal L}).
\]
\end{proposition}
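Once the square roots $q_i^{1/2}$ are fixed, $\Delta(C,C')$ has the form $s-s^{-1}$ with $s=\prod_{H_i\in\operatorname{Sep}(C,C')}q_i^{1/2}$, so it vanishes exactly when $s^2=\prod_{H_i\in\operatorname{Sep}(C,C')}q_i=1$, in parallel with $\lambda_{\operatorname{Sep}(C,C')}$. A minimal numerical sketch, using principal square roots (an illustrative choice, not from the paper):

```python
import cmath

def Delta(q_sep):
    # Delta(C, C') = prod q_i^(1/2) - prod q_i^(-1/2), the products
    # running over H_i in Sep(C, C'); principal square roots are used.
    s = 1
    for q in q_sep:
        s *= cmath.sqrt(q)
    return s - 1 / s

print(Delta([4.0]))       # sqrt(4) - 1/sqrt(4) = 1.5
print(abs(Delta([1.0])))  # 0.0
```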
\section{Main results and strategy}
\label{sec:results}
\subsection{Main theorems}
In this section, let ${\mathcal A}=\{H_1, \dots, H_n\}$ be an essential hyperplane
arrangement in $\mathbb{R}^\ell$ and let $R$ be a commutative ring with $1$.
\begin{theorem}
\label{thmprincipal}
If $\lambda_X \in R^\times$ for all
$X\in{\operatorname{\mathsf D}}_\infty(\overline{\A})$,
then
\[
H^k(R[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{\omega_\lambda})\simeq
\left\{
\begin{array}{ll}
0,& \mbox{ if }k<\ell, \\
&\\
R[\operatorname{bch}({\mathcal A})],& \mbox{ if }k=\ell.
\end{array}
\right.
\]
\end{theorem}
More generally, we can prove the following.
\begin{corollary}
\label{cor:principal}
Let $0\leq p<\ell$.
If $\lambda_X \in R^\times$ for all
$X\in {\operatorname{\mathsf D}}_\infty(\overline{\A})$ with $\dim(X)\geq p,$ then
\[
H^k(R[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{\omega_\lambda})=0, \mbox{ for all }
0\leq k < \ell-p.
\]
\end{corollary}
\proof
Here we give a proof of Corollary \ref{cor:principal} based on
the main Theorem \ref{thmprincipal}.
Consider ${\mathcal A}\cap F^{\ell-p}$. The Orlik-Solomon algebra
$A_R^{\bullet}({\mathcal A}\cap F^{\ell-p})$ is isomorphic to
$A_R^{\leq \ell-p}({\mathcal A})$. Hence we have an isomorphism
\begin{equation}
\label{eq:propagate}
H^k(A_R^{\bullet}({\mathcal A}\cap F^{\ell-p}), \omega_\lambda\wedge)\simeq
H^k(A_R^{\bullet}({\mathcal A}), \omega_\lambda\wedge),
\end{equation}
for $k<\ell-p$. Note that $L({\mathcal A}\cap F^{\ell-p})\simeq L^{\geq p}({\mathcal A})$.
By the assumption, we have $\lambda_X\in R^\times$ for
any $X\in{\operatorname{\mathsf D}}_\infty({\mathcal A}\cap F^{\ell-p})$. Hence, by Theorem \ref{thmprincipal},
the left hand side of (\ref{eq:propagate}) vanishes.
\endproof
By Proposition \ref{prop:aomotochamber}, we have the following
vanishing theorem for the Aomoto complex.
\begin{corollary}
Let $0\leq p<\ell$.
If $\lambda_X \in R^\times$ for all
$X\in {\operatorname{\mathsf D}}_\infty(\overline{\A})$ with $\dim(X)\geq p,$ then
\[
H^k(A_R^\bullet({\mathcal A}),\omega_\lambda\wedge)=0, \mbox{ for all }
0\leq k < \ell-p.
\]
\end{corollary}
\begin{remark}
\label{rem:locsyscdo}
A completely similar proof also works in the case of local systems.
Namely, if the local system ${\mathcal L}$ satisfies that $q_X \neq 1$ for all
$X\in {\operatorname{\mathsf D}}_\infty(\overline{\A})$ with $\dim(X)\geq p,$ then
\[
H^k(\mathbb{C}[\operatorname{ch}^\bullet({\mathcal A})],\nabla_{{\mathcal L}})=0, \mbox{ for all }k<\ell-p.
\]
Using Proposition \ref{prop:localchamber}, this implies
\[
H^k(M({\mathcal A}), {\mathcal L})=0, \mbox{ for all }k<\ell-p,
\]
which gives an alternative proof for Theorem \ref{thm:cdo}
by Cohen, Dimca and Orlik.
\end{remark}
\subsection{Strategy for the proof of Theorem \ref{thmprincipal}}
\label{subsec:strategy}
In order to analyze the cohomology group,
\[
H^k(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_\omega)=
\frac
{\ker\left(\nabla_\omega:R[\operatorname{ch}^k({\mathcal A})]\longrightarrow R[\operatorname{ch}^{k+1}({\mathcal A})]\right)}
{\image\left(\nabla_\omega:R[\operatorname{ch}^{k-1}({\mathcal A})]\longrightarrow R[\operatorname{ch}^{k}({\mathcal A})]\right)},
\]
we will use the direct decomposition
$R[\operatorname{ch}^k({\mathcal A})]=R[\operatorname{bch}^k({\mathcal A})]\oplus R[\operatorname{uch}^k({\mathcal A})]$, and then consider
the map
\begin{equation}
\overline{\nabla}_{\omega_{\lambda}}:
R[\operatorname{bch}^k({\mathcal A})] \hookrightarrow R[\operatorname{ch}^{k}({\mathcal A})]
\stackrel{\nabla_\omega}{\longrightarrow}
R[\operatorname{ch}^{k+1}({\mathcal A})]\twoheadrightarrow R[\operatorname{uch}^{k+1}({\mathcal A})].
\end{equation}
We will study the map
$\overline{\nabla}_{\omega_{\lambda}}:
R[\operatorname{bch}^k({\mathcal A})] \longrightarrow R[\operatorname{uch}^{k+1}({\mathcal A})]$ in detail below.
Recall that there is a natural bijection $\iota:
\operatorname{bch}^k({\mathcal A})\stackrel{\simeq}{\longrightarrow}\operatorname{uch}^{k+1}({\mathcal A})$
(see Proposition \ref{prop:bchuch} and subsequent remarks).
Once we fix an ordering $C_1, \dots, C_b$ of $\operatorname{bch}^k({\mathcal A})$,
we obtain a matrix expression of the map
$\overline{\nabla}_{\omega_{\lambda}}$.
We will prove the following.
\begin{itemize}
\item[(i)]
Let $C\in\operatorname{bch}^k({\mathcal A})$. Then
$\deg(C, C^\lor)=(-1)^{\ell-1-\dim X(C)}$.
\item[(ii)]
For an appropriate ordering of $\operatorname{bch}^k({\mathcal A})=\{C_1, \dots, C_b\}$,
the matrix expression of
$\overline{\nabla}_{\omega_{\lambda}}:R[\operatorname{bch}^{k}({\mathcal A})]\longrightarrow
R[\operatorname{uch}^{k+1}({\mathcal A})]$ becomes
an upper-triangular matrix.
\item[(iii)]
$\det\overline{\nabla}_\omega\in R^\times$.
\item[(iv)]
These imply Theorem \ref{thmprincipal}.
\end{itemize}
(i) and (ii) will be proved in \S \ref{sec:proofs}.
Here we prove (iii) and (iv) based on (i) and (ii).
First note that from Proposition \ref{prop:lambdadense}, the
definition (\ref{eq:defwedge}) of the coboundary map of the
complex $(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_\omega)$,
and the upper-triangularity (ii) above, we have
\[
\det\overline{\nabla}_\omega=\pm\prod_{C\in\operatorname{bch}^k({\mathcal A})}\deg(C, C^\lor)\lambda_{X(C)}.
\]
From the assumption that $\lambda_{X}\in R^\times$ for
$X\in{\operatorname{\mathsf D}}_\infty({\mathcal A})$ (see also Proposition \ref{densechamber}),
we obtain (iii).
Since
$\overline{\nabla}_\omega:R[\operatorname{bch}^k({\mathcal A})]\stackrel{\simeq}{\longrightarrow}
R[\operatorname{uch}^{k+1}({\mathcal A})]$
is an isomorphism of free $R$-modules, which appear as the diagonal maps of
the following diagram,
we have $H^k(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_{\omega})=0$ for $k<\ell$ and
$H^\ell(R[\operatorname{ch}^\bullet({\mathcal A})], \nabla_{\omega})\simeq R[\operatorname{bch}^\ell({\mathcal A})]$.
\[
\begin{array}{ccccccccccc}
R[\operatorname{ch}^0]&\stackrel{\nabla_\omega}{\longrightarrow}&R[\operatorname{ch}^1]&\stackrel{\nabla_\omega}{\longrightarrow}&\cdots&\stackrel{\nabla_\omega}{\longrightarrow}&R[\operatorname{ch}^k]&\stackrel{\nabla_\omega}{\longrightarrow}&R[\operatorname{ch}^{k+1}]&\stackrel{\nabla_\omega}{\longrightarrow}&\cdots\\
||&&||&&&&||&&||&&\\
R[\operatorname{bch}^0]&&R[\operatorname{bch}^1]&&\cdots&&R[\operatorname{bch}^k]&&R[\operatorname{bch}^{k+1}]&&\\
&\searrow&\oplus&\searrow&&\searrow&\oplus&\searrow&\oplus&&\\
&&R[\operatorname{uch}^1]&&\cdots&&R[\operatorname{uch}^k]&&R[\operatorname{uch}^{k+1}]&&
\end{array}
\]
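The linear-algebra step (iii) above can be sanity-checked numerically: for an upper-triangular matrix whose diagonal entries are $\deg(C_i, C_i^\lor)\lambda_{X(C_i)}$ with $\deg(C_i,C_i^\lor)=\pm 1$, the determinant is $\pm$ the product of the $\lambda_{X(C_i)}$. The following Python sketch (with hypothetical numerical values for the degrees and the $\lambda_{X(C_i)}$) illustrates this in a small $3\times 3$ case.

```python
import numpy as np

# Sanity check (hypothetical 3x3 case) of steps (ii)+(iii): for an
# upper-triangular matrix with diagonal entries deg(C_i, C_i^v) * lambda_i,
# where deg = +-1, the determinant is +- the product of the lambda's,
# hence the matrix is invertible whenever every lambda_i is a unit
# (numerically: nonzero).

lams = np.array([0.7, 1.3, 2.0])   # hypothetical lambda_{X(C_i)} values
degs = np.array([1, -1, 1])        # hypothetical degrees deg(C_i, C_i^v)

# strict upper-triangular part with arbitrary entries, plus the diagonal
M = np.triu(np.random.default_rng(0).normal(size=(3, 3)), k=1)
M += np.diag(degs * lams)

det = np.linalg.det(M)
expected = np.prod(degs * lams)    # = -(0.7 * 1.3 * 2.0)
assert np.isclose(det, expected)
print(det)
```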
\section{Proofs}
\label{sec:proofs}
In this section, we prove (i) and (ii) in \S \ref{subsec:strategy} for
$k=\ell-1$. Namely:
\begin{itemize}
\item[(i')] For a chamber $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$,
$\deg(C, C^\lor)=(-1)^{\ell-1-\dim X(C)}$.
\item[(ii')] For an appropriate ordering of $\{C_1, \dots, C_b\}=
\operatorname{bch}^{\ell-1}({\mathcal A})$, the matrix expression of $\overline{\nabla}_{\omega_{\lambda}}:R[\operatorname{bch}^{\ell-1}({\mathcal A})]\longrightarrow R[\operatorname{uch}^\ell({\mathcal A})]$ becomes
an upper-triangular matrix.
\end{itemize}
For other $k<\ell$, the assertions are proved in a similar way, using
the generic section by $F^{k+1}$ (see the argument in the proof of
Corollary \ref{cor:principal}).
\subsection{Structure of walls}
For simplicity we will set $F=F^{\ell-1}$. Recall that
$\operatorname{bch}^{\ell-1}({\mathcal A})=\{C\in\operatorname{ch}({\mathcal A})\mid C\cap F\mbox{ is a bounded
chamber of $F\cap{\mathcal A}$}\}$.
Let $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$. A hyperplane $H\in{\mathcal A}$ is said to be
a wall of $C$ if $H\cap F$ is a supporting hyperplane of
a facet of $\overline{C}\cap F$.
For any $C\in \operatorname{bch}^{\ell-1}({\mathcal A})$, we denote by
$\operatorname{Wall}(C)$ the set of all walls of $C$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[dashed,thick] (0,1)--(4,1) node[right] {$F^1$};
\draw[thick] (1,0) node[left] {$H_1$} -- (2.25,5);
\draw[thick] (3,0) node[right] {$H_2$}--(1.75,5);
\draw[thick] (4,0.6) node[below] {$H_3$} --(0.5,4.5);
\filldraw[fill=black, draw=black] (1.25,1) circle (2pt);
\filldraw[fill=black, draw=black] (2.75,1) circle (2pt);
\draw[very thick] (1.25,1)--node[above] {$\overline{C}\cap F^1$} (2.75,1);
\draw (2,0) node [above] {$C$};
\draw[dashed,thick] (5,1)--(9,1) node[right] {$F^1$};
\draw[thick] (6,0) node[left] {$H'_1$} -- (6,5);
\draw[thick] (8,0) node[right] {$H'_2$}--(8,5);
\draw[thick] (9,0.6) node[below] {$H'_3$} --(5.5,4.5);
\filldraw[fill=black, draw=black] (6,1) circle (2pt);
\filldraw[fill=black, draw=black] (8,1) circle (2pt);
\draw[very thick] (6,1)--node[above] {$\overline{C'}\cap F^1$} (8,1);
\draw (7,0) node [above] {$C'$};
\end{tikzpicture}
\caption{\small $\operatorname{Wall}(C)=\operatorname{Wall}_2(C)=\{H_1, H_2\}, \operatorname{Wall}(C')=\operatorname{Wall}_1(C')=\{H'_1, H'_2\}$}
\label{fig:walls}
\end{figure}
We divide the set of walls into two types.
\begin{definition}
\label{def:walls}
A wall $H\in\operatorname{Wall}(C)$ is called a wall of the first kind if
$\overline{H}\supset X(C)$. Otherwise $H$ is called a wall of the second kind.
The sets of walls of the first and second kind are denoted by
$\operatorname{Wall}_1(C)$ and $\operatorname{Wall}_2(C)$, respectively. We have
$\operatorname{Wall}(C)=\operatorname{Wall}_1(C)\sqcup\operatorname{Wall}_2(C)$.
(See Figures \ref{fig:walls} and \ref{fig:walls12}.)
\end{definition}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick] (0,0)--(1.5,1.5)--(9,1.5)--(7.5,0)--node[above]{$\overline{H}_{\infty}$}(0,0);
\draw[very thick] (1,1)--(8.5,1) node[right] {$X(C)$};
\filldraw[fill=white, draw=black, thick] (1,0)--(2.5,1.5)--(3.5,5)--(2,3.5)--(1,0);
\filldraw[fill=white, draw=black, thick] (3,4.5)--(6,4.5)--(7,1)--(2,1)--(3,4.5);
\filldraw[fill=white, draw=black, thick] (2.5,4)--(5.5,4)--(7,1)--(2,1)--(2.5,4);
\filldraw[fill=white, draw=black, thick] (5,3.5)--(6,0)--(7.5,1.5)--(6.5,5)--(5,3.5);
\draw[thick] (5.5,4)--(7,1)--(6,4.5);
\filldraw[fill=white, draw=black, thick] (0,3.5)--node [left] {$F^2$}(1.5,5)--(9,5)--(7.5,3.5)-- (0,3.5);
\draw[thick] (2,3.5)-- node [left] {\footnotesize $H_3$} (3.5,5);
\draw[thick] (5,3.5)-- node [below] {\footnotesize $H_4$} (6.5,5);
\draw[thick] (2.5,4)-- node [below] {\footnotesize $H_1$} (5.5,4);
\draw[thick] (3,4.5)-- node [above] {\footnotesize $H_2$} (6,4.5);
\draw[thick] (4.2,4) node[above] {\small $C$};
\draw[dashed, very thin] (1.5,1.5)--(9,1.5);
\draw[dashed, very thin] (1,1)--(8.5,1) ;
\draw[dashed, very thin] (2,1)--(2.5,1.5)--(3.5,5);
\draw[dashed, very thin] (2,1)--(3,4.5)--(6,4.5)--(7,1);
\draw[dashed, very thin] (2,1)--(2.5,4)--(5.5,4)--(7,1);
\draw[dashed, very thin] (7.5,1.5)--(6.5,5);
\end{tikzpicture}
\caption{$\operatorname{Wall}_1(C)=\{H_1, H_2\}, \operatorname{Wall}_2(C)=\{H_3, H_4\}$.}
\label{fig:walls12}
\end{figure}
Let $C\in \operatorname{bch}^{\ell-1}({\mathcal A})$ and let $\operatorname{Wall}_1(C)=\{H_{i_1}, \dots, H_{i_k}\}$ be the
set of walls of the first kind. We choose defining equations
$\alpha_{i_1}, \dots, \alpha_{i_k}$ of the hyperplanes in $\operatorname{Wall}_1(C)$ so that
\[
C\subset\{\alpha_{i_1}>0\}\cap\dots\cap\{\alpha_{i_k}>0\}.
\]
Note that $\widetilde{C}:=
\{\alpha_{i_1}>0\}\cap\dots\cap\{\alpha_{i_k}>0\}$ is a chamber of
the subarrangement $\operatorname{Wall}_1(C)$.
Let $D\in\operatorname{uch}({\mathcal A})$ be an unbounded chamber of ${\mathcal A}$.
Then $D$ is said to be inside $\operatorname{Wall}_1(C)$ if
\[
D\subset\widetilde{C}=
\{\alpha_{i_1}>0\}\cap\dots\cap\{\alpha_{i_k}>0\}.
\]
This condition is also equivalent to
$\operatorname{Sep}(C,D)\cap \operatorname{Wall}_1(C)=\emptyset$.
Recall that the opposite chamber of $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$ is defined
as the opposite chamber with respect to $X(C)\subset\overline{H}_\infty$.
Using (\ref{eq:charsep}), we have the following.
\begin{proposition}
\label{prop:cap}
Let $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$. Then $\operatorname{Sep}(C, C^\lor)\cap\operatorname{Wall}(C)=\operatorname{Wall}_2(C)$.
\end{proposition}
\begin{remark}\label{rkinside}
Let $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$.
If $D$ is inside $\operatorname{Wall}_1(C)$, then we have
$X(D)\subset X(C)$ and
$\dim X(D)\leq \dim X(C).$
\end{remark}
\subsection{Fibered structure of chambers}
\label{subsec:fiber}
Let $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$ and set $d=\dim X(C)$.
As above, we let $\widetilde{C}\in
\operatorname{ch}(\operatorname{Wall}_1(C))$ be the unique chamber such that $C\subset\widetilde{C}$.
For each point $p\in\overline{\widetilde{C}}$,
set $G_1(p):=\langle X(C), p\rangle\cap F$ (Figure \ref{fig:basepoly}).
Then $G_1(p)$ is a $d$-dimensional affine subspace which is
parallel to each $H\in\operatorname{Wall}_1(C)$. Fix a base point
$p_0\in\widetilde{C}$. We also fix an $(\ell-1-d)$-dimensional subspace
$G_2(p_0)\subset F$
which passes through $p_0$ and is
transversal to $G_1(p_0)$ (see Figure \ref{fig:basepoly}).
Let us call
$Q_0:=G_2(p_0)\cap\overline{\widetilde{C}}$ the base polytope.
Consider the map
$\pi_C:\overline{C}\cap F\longrightarrow Q_0,
p\longmapsto G_1(p)\cap Q_0$.
For each $q\in Q_0$, the fiber $\pi_C^{-1}(q)=G_1(q)\cap \overline{C}$
is a $d$-dimensional polytope.
This fact is a consequence of the assumption that $F$ is
generic and close to $\overline{H}_\infty$, and of the following
elementary proposition.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick] (0,0)--(8,0)--(9,4)--(1,4)node[left]{$F$}--(0,0);
\draw[thick] (0.25,1)node[left]{$H_1$}--(8.25,1);
\draw[thick] (0.75,3)node[left]{$H_2$}--(8.75,3);
\draw[thick] (0.5,0.5)--(2.5,3.5);
\draw[thick] (2.5,0.5)--(5.5,3.5);
\draw[thick] (4,3.5)--(5,0.5);
\draw (3,2.3)node[above]{$\overline{C}\cap F$};
\draw[very thick, blue] (0.5,2)--(8.5,2);
\draw[blue] (5.5,2) node[above]{$G_1(p_0)$};
\draw[thick,red!70!black] (6.5,0)--(7.5,4);
\draw[red!70!black] (6.6,0.5) node[right]{$G_2(p_0)$};
\filldraw[fill=green!50!black, draw=green!50!black] (6.75,1) circle (2pt);
\filldraw[fill=green!50!black, draw=green!50!black] (7.25,3) circle (2pt);
\draw[green!50!black, very thick] (6.75,1)--(7.25,3) ;
\draw[green!50!black] (7.6,3) node[below] {$Q_0$};
\filldraw[fill=black, draw=black] (7,2) circle (2pt) ;
\draw(7.3,2) node[below] {$p_0$};
\draw[thick, red] (0.3,1.2)--(8.3,1.2) node[right] {$G_1(p)$};
\filldraw[fill=red, draw=red] (2,1.2) circle (2pt) node[above,red] {$p$};
\filldraw[fill=red, draw=red] (6.8,1.2) circle (2pt) ;
\draw (6.8,1.5) node[left, red] {$\pi_C(p)$};
\end{tikzpicture}
\caption{Base polytope $Q_0$ ($\operatorname{Wall}_1(C)=\{H_1, H_2\}$)}
\label{fig:basepoly}
\end{figure}
\begin{proposition}
Let $P\subset\mathbb{R}^\ell$ be an $\ell$-dimensional polytope. Let $X\subset P$ be
a $d$-dimensional face ($0\leq d\leq \ell$). We denote by $\langle X\rangle$
the $d$-dimensional affine subspace spanned by $X$. Then for $\varepsilon
\in\mathbb{R}^\ell$ with sufficiently small
$0\leq |\varepsilon|\ll 1$,
$(\langle X\rangle+\varepsilon)\cap P$ is either an empty set or
a $d$-dimensional polytope.
\end{proposition}
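For illustration, here is a small numerical check of the proposition in a hypothetical $\ell=3$ case: $P$ is the unit cube, $X$ its bottom face ($d=2$), so that a small shift of $\langle X\rangle$ meets $P$ either in a $2$-dimensional square or not at all.

```python
import numpy as np

# Hypothetical 3-dimensional instance of the proposition:
# P = unit cube [0,1]^3, X = bottom face {z = 0} (a d = 2 face),
# <X> = plane {z = 0}.  Shifting <X> by eps = (0, 0, t) and intersecting
# with P gives a 2-dimensional square for small t > 0, and the empty
# set for t < 0.

def slice_dim(t, n_grid=50):
    """Affine dimension of ({z = t} intersected with the cube)."""
    if not (0.0 <= t <= 1.0):
        return -1  # empty intersection
    # sample points of the slice: (x, y, t) with 0 <= x, y <= 1
    xs, ys = np.meshgrid(np.linspace(0, 1, n_grid), np.linspace(0, 1, n_grid))
    pts = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, t)], axis=1)
    # affine dimension = rank of the centered point cloud
    return int(np.linalg.matrix_rank(pts - pts.mean(axis=0)))

print(slice_dim(0.01))   # small positive shift: still a 2-dimensional slice
print(slice_dim(-0.01))  # shift in the other direction: empty (-1)
```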
\begin{remark}
\label{rem:section}
Since $\pi_C:\overline{C}\cap F\longrightarrow Q_0$ is a fibration
with contractible fibers, there exists a continuous
section $\sigma_C:Q_0\longrightarrow\overline{C}\cap F$ such that
$\pi_C\circ\sigma_C=\operatorname{id}_{Q_0}$.
\end{remark}
\subsection{Upper-triangularity}
\label{subsec:uppertri}
Let us fix an ordering $\operatorname{bch}^{\ell-1}({\mathcal A})=\{C_1,\dots,C_b\}$ of the chambers
in such a way that
\[
\dim X(C_1)\geq \dim X(C_2) \geq \cdots \geq \dim X(C_b).
\]
The main result in this section is the following.
\begin{theorem}
\label{thm:triangular}
The matrix $(\deg(C_i, C_j^\lor))_{i, j=1, \dots, b}$ is
upper-triangular. In other words, if $i>j$, then
$\deg(C_i, C_j^\lor)=0$.
\end{theorem}
\begin{proof}
Let $C, D\in\operatorname{bch}^{\ell-1}({\mathcal A})$. Suppose
$\dim X(D)\geq\dim X(C)$ and $C\neq D$.
Then we will prove $\deg(C, D^\lor)=0$.
The idea of
the proof is to construct a vector field $U^{D^\lor}$ on $F$
directed to $D^\lor$,
which is nowhere vanishing on a neighbourhood of
$\overline{C}\cap F\subset F$.
Then by Proposition \ref{prop:degree0}, we have $\deg(C, D^\lor)=0$.
We divide into three cases.
\begin{itemize}
\item[(a)]
$\dim X(C)=\ell-1$.
\item[(b)]
$\dim X(C)<\ell-1$ and $D$ is not inside of $\operatorname{Wall}_1(C)$.
\item[(c)]
$\dim X(C)<\ell-1$ and $D$ is inside of $\operatorname{Wall}_1(C)$.
\end{itemize}
First we consider case (a). In this case, since $\dim X(D)\geq
\dim X(C)$, we have $\dim X(D)=\ell-1$.
Choose a point $p\in D\cap F$, and define the vector field
$U$ on $F$ by
\[
U(x)=\overrightarrow{x;p}\in T_xF.
\]
Then the vector field is directed to $p$ and nowhere vanishing on
$\overline{C}\cap F$ (because $p\notin\overline{C}$).
By Corollary \ref{cor:sep}, $-U$ is a vector field directed to
$D^\lor$, which is also nowhere vanishing on $\overline{C}\cap F$.
Hence $\deg(C, D^\lor)=0$.
From now on, we assume $\dim X(C)<\ell-1$. If $D$ is inside
$\operatorname{Wall}_1(C)$, then $X(D)\subset X(C)$ by Remark \ref{rkinside}, and hence
$\overline{\A}_{X(D)}\supset\overline{\A}_{X(C)}$. Proposition \ref{prop:cap}
indicates that $\operatorname{Sep}(D, D^\lor)\cap\overline{\A}_{X(C)}=\emptyset$.
We conclude that
$D^\lor$ is also inside $\operatorname{Wall}_1(C)$. Conversely, if $D$ is
not inside $\operatorname{Wall}_1(C)$, then $D^\lor$ is not inside $\operatorname{Wall}_1(C)$ either.
Next we consider case (b). Then $\operatorname{Sep}(C, D^\lor)\cap\operatorname{Wall}_1(C)
\neq\emptyset$. Choose a hyperplane $H_{i_0}\in\operatorname{Sep}(C, D^\lor)\cap\operatorname{Wall}_1(C)$.
Let $\alpha_{i_0}$ be the defining equation of $H_{i_0}$. Without loss of
generality, we may assume that
\[
\begin{split}
H_{i_0}^{+}&=\{\alpha_{i_0}>0\}\supset D^\lor\\
H_{i_0}^{-}&=\{\alpha_{i_0}<0\}\supset C.
\end{split}
\]
We will construct a vector field $U^{D^\lor}$ on $F$ which
is directed to $D^\lor$ and satisfies
\begin{equation}
\label{eq:positivity}
U^{D^\lor}(x)\alpha_{i_0}>0,
\end{equation}
for $x\in\overline{C}\cap F$, where the left hand side of (\ref{eq:positivity}) is
the derivative of $\alpha_{i_0}$ with respect to the vector field.
In particular, we obtain a vector field directed to $D^\lor$ which
is nowhere vanishing on $\overline{C}\cap F$. It is enough to show that, at
any point $x_0\in\overline{C}$, there exists a local vector field around $x_0$
which satisfies (\ref{eq:positivity}). Then we will obtain a
global vector field which satisfies (\ref{eq:positivity})
using a partition of unity.
It is sufficient to show the existence of such a vector field
around each vertex $x_0$
of $\overline{C}\cap F$. By genericity of $F$,
$Z:=\bigcap{\mathcal A}_{x_0}=
\bigcap_{x_0\in H\in{\mathcal A}}H$ is a $1$-dimensional flat of ${\mathcal A}$,
which is transversal to $F$. By the assumption that $F$ does not separate
$0$-dimensional flats of ${\mathcal A}$, we have
\begin{equation}
\label{eq:inclusionZ}
\overline{Z}\cap\overline{H}_\infty
\subset\overline{C}\cap\overline{H}_\infty.
\end{equation}
(See Figure \ref{fig:Z}.)
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\draw[thick] (0,0)--node[left]{$\overline{H}_{\infty}$}(1.5,1.5)--(9,1.5)--(7.5,0)--(0,0);
\filldraw[fill=white, draw=red, thick] (1,1)--(1,4.5) --
(8.5,4.5) -- node [right, red] {$H_{i_0}^{s_0}$} (8.5,1) --(1,1);
\draw[red, thick] (2.8,4.5)--(2.5,1);
\draw[thick] (2,3.8) -- (2.5,1);
\draw[thick] (4.8,3.8) -- (5,1);
\draw[red, thick] (5.5,4.5) -- (5,1);
\filldraw[fill=white, draw=black, thick] (0,3.5)--node [left] {$F$}
(1.5,5)--(9,5)--(7.5,3.5)-- (0,3.5);
\draw[red, thick] (1,4.5) -- (8.5,4.5);
\draw[dashed, red, very thin] (1,1)--(1,4.5);
\draw[dashed, red, very thin] (2.8,4.5)--(2.5,1);
\draw[dashed, very thin] (0,0)--(1.5,1.5)--(9,1.5);
\draw[very thick] (1,1)--(8.5,1) node[right] {$X(C)$};
\draw[thick] (1.7,3.5)--(3.3,5);
\draw[thick] (1.2,4.7) node [left] {$H_{i_0}$} --(8.7,4.7);
\draw[thick] (0.3,3.8)-- node[above] {$\overline{C}\cap F$} (7.8,3.8);
\draw[thick] (4.5,3.5)--(6,5);
\draw[thick] (5,5)--(6.5,3.5);
\filldraw[fill=red, draw=red] (5.5,4.5) circle (2pt) ;
\draw (6.2,4.5) node[below, red]{$x_0$};
\draw[dashed, red, very thin] (5.5,4.5) -- node [right] {$Z$} (5,1);
\draw[dashed, very thin] (4.8,3.8) -- (5,1);
\draw[dashed, very thin] (2,3.8) -- (2.5,1);
\draw[dashed, very thin] (3,4.7) -- (2.5,1);
\draw[dashed, very thin] (4.8,3.8) -- (5,1);
\draw[dashed, very thin] (5.3,4.7) -- (5,1);
\end{tikzpicture}
\caption{$Z$ and $H_{i_0}^{s_0}$.}
\label{fig:Z}
\end{figure}
Set $s_0:=\alpha_{i_0}(x_0)$ and let $H_{i_0}^{s_0}=\{\alpha_{i_0}=s_0\}$ be
the hyperplane passing through $x_0$ which is parallel to $H_{i_0}$.
Then we have $Z\subset H_{i_0}^{s_0}$; otherwise (\ref{eq:inclusionZ})
would be contradicted. The hyperplanes ${\mathcal A}_{x_0}={\mathcal A}_Z$ determine
chambers (cones), one of which, denoted by $\Gamma$, contains
$D^\lor$ (Figure \ref{fig:gamma}).
Hence the tangent vector $U^{D^\lor}(x_0)$ must be
contained in $\Gamma$. Furthermore,
\begin{equation}
D
\subset\Gamma\cap H_{i_0}^{+}
\subset\Gamma\cap H_{i_0}^{>s_0}.
\end{equation}
In particular, we have
$\Gamma\cap H_{i_0}^{>s_0}\neq\emptyset$. Thus we can construct
a vector field $U^{D^\lor}$ around $x_0$ so that
$U^{D^\lor}(x_0)\in \Gamma\cap H_{i_0}^{>s_0}$.
Then (\ref{eq:positivity}) is satisfied around $x_0$.
Hence we have $\deg(C, D^\lor)=0$ in case (b).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\fill[green!20!white] (4,2) -- (4.707,1.293) arc (-45:45:1cm) --(4,2);
\draw[green!20!black] (4.8,1.5) node[right] {$\Gamma$};
\draw[thick] (0,0)--(8,0)--(9,4)--(1,4)node[left]{$F$}--(0,0);
\draw[thick] (0.25,1)--(8.25,1);
\draw[thick] (0.75,3)node[left]{$H_{i_0}$}--(8.75,3);
\draw[thick] (0.5,0.5)--(2,3.5);
\draw[thick] (2.5,0.5)--(5.5,3.5);
\draw[thick] (2.5,3.5)--(5.5,0.5);
\draw (2,1)node[above]{$\overline{C}\cap F$};
\draw[very thick, red] (0.5,2) node [left] {$H_{i_0}^{s_0}$} --(8.5,2);
\draw[->, thick, red] (1.75,2)--(2,2.5)node[right]{$H_{i_0}^{\geq s_0}$};
\draw[->,blue!50!black, very thick] (4,2) -- (5.5,2.5) node[right] {$U^{D^\lor}(x_0)$};
\filldraw[fill=black, draw=black] (4,2) circle (2pt) node[below]{$x_0$};
\end{tikzpicture}
\caption{Construction of the vector field $U^{D^\lor}$}
\label{fig:gamma}
\end{figure}
It remains to handle case (c): suppose $D$ is inside $\operatorname{Wall}_1(C)$,
or equivalently, $D\subset\widetilde{C}$.
Since $X(D)\subset X(C)$ and $\dim X(D)\geq \dim X(C)$, we have
$X(D)=X(C)$. In this case, $\operatorname{Wall}_1(C)=\operatorname{Wall}_1(D)$ and
$\widetilde{C}=\widetilde{D}$.
We consider the fibration
$\pi_D:\overline{D}\cap F\longrightarrow Q_0$
which also has $d$-dimensional polytopes as fibers.
Since the fiber is contractible, there exists a
continuous section $\sigma_D:Q_0\longrightarrow\overline{D}\cap F$
such that $\pi_D\circ\sigma_D=\operatorname{id}_{Q_0}$.
Now we construct a vector field. For each $p\in\overline{C}\cap F$,
we denote by $G_2(p)$ the $(\ell-1-d)$-dimensional subspace
which passes through $p$ and is parallel to $G_2(p_0)$ (Figure \ref{fig:V2}).
Let $\{p'\}=G_2(p)\cap G_1(p_0)$. The tangent space is decomposed
as $T_pF=T_pG_1(p)\oplus T_pG_2(p)$. We first construct a vector field
on the second component. Let us define the tangent vector
$V_2(p)\in T_pG_2(p)\subset T_pF$ by
\begin{equation}
V_2(p)=\overrightarrow{p; p'}.
\end{equation}
The vector field $V_2$ is obviously inward with respect to $\operatorname{Wall}_1(C)$,
and vanishes on the reference fiber $G_1(p_0)\cap \overline{C}$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1.2]
\draw[thick] (0,0)--(8,0) node [right] {$F$} -- (8,4) --
(0,4)--cycle;
\draw[thick] (0,1) -- (8,1);
\draw[thick] (0,3) -- (8,3);
\draw[thick] (0.5,0.5) -- (1.5,3.5);
\draw[thick] (3.5,3.5)--(4.5,0.5);
\draw[thick] (3.5,0.5)--(5.5,3.5);
\draw[thick] (5.7,0.5)--(6.2,3.5);
\draw (2,1) node[above] {$C$};
\filldraw[fill=blue, draw=blue] (6.8,3) circle (2pt) ;
\filldraw[fill=blue, draw=blue] (7.2,1) circle (2pt) ;
\draw[thick, blue] (6.8,3)--(7,2)--node[right]{\small $Q_0$} (7.2,1);
\filldraw[fill=black, draw=black] (7,2) circle (2pt) ;
\draw (7.2,2) node [above, above, black] {\small $p_0$} ;
\draw[red, thick] (0,2)--(8,2) node[right] {$G_1(p_0)$};
\draw[thick, blue] (2.7, 3.5) node[above] {\small $G_2(p)$} --(3.3,0.5);
\filldraw[fill=blue, draw=blue] (2.9,2.5) node [right] {$p$} circle (2pt) ;
\filldraw[fill=blue, draw=blue] (3,2) circle (2pt) ;
\draw (2.8,2) node[below, blue] {$p'$};
\foreach \x in {0.5,1,1.5,2,2.5,3.5,4}
\draw[->, blue, thick] (\x, 3)--(\x+0.1,2.5);
\foreach \x in {0.5,1,1.5,2.5,3.5,4}
\draw[->, blue, thick] (\x, 1)--(\x-0.1,1.5);
\foreach \x in {0.3,0.8,1.3,1.8,2.3,3.8}
\draw[->, blue, thick] (\x, 2.5)--(\x+0.05,2.25);
\draw[->, blue, thick] (2.9,2.5)--(2.95,2.25);
\foreach \x in {0.7,1.2,1.7,2.2,3.7}
\draw[->, blue, thick] (\x, 1.5)--(\x-0.05,1.75);
\end{tikzpicture}
\caption{$V_2$.}
\label{fig:V2}
\end{figure}
Next we construct a vector field $V_1$ along the fibers $G_1(p)$.
Using the section $\sigma_D:Q_0\longrightarrow\overline{D}\cap F$
constructed above (cf. Remark \ref{rem:section}),
define $V_1$ by
\begin{equation}
V_1(p)=\overrightarrow{p;\sigma_D(\pi_C(p))},
\end{equation}
(Figure \ref{fig:V1}).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1.2]
\draw[thick] (0,0)--(8,0) node [right] {$F$} -- (8,4) --
(0,4)--cycle;
\draw[thick] (0,1) -- (8,1);
\draw[thick] (0,3) -- (8,3);
\draw[thick] (0.5,0.5) -- (1.5,3.5);
\draw[thick] (2.5,3.5)--(3.5,0.5);
\draw[thick] (2.5,0.5)--(4.5,3.5);
\draw[thick] (6.2,0.5)--(6.7,3.5);
\draw (2,1) node[above] {$C$};
\draw (5,1) node[above] {$D$};
\draw[rounded corners=0.3cm, red, thick] (4,1)--(4,1.5)--
(4.5,2)--(5,2.5)node[right] {\small $\sigma_D(Q_0)$}--(4.8,3);
\filldraw[fill=red, draw=red] (6.8,3) circle (2pt) ;
\filldraw[fill=red, draw=red] (7.2,1) circle (2pt) ;
\draw[thick, red] (6.8,3)--(7,2)--node[right]{\small $Q_0$} (7.2,1);
\filldraw[fill=red, draw=red] (1,2) node[left, red] {\small $p$} circle (2pt);
\draw[thin, red] (1,2)--(4.5,2);
\filldraw[fill=red, draw=red] (4.5,2) node[right, red] {\small $p''$} circle (2pt);
\foreach \y in {1.05,1.5,2,2.5,3.05}
\draw[->, red, thick] (1,\y)--(1.7,\y);
\foreach \y in {1.05,1.5,2,2.5,3.05}
\draw[->, red, thick] (2.5,\y)--(3.2,\y);
\end{tikzpicture}
\caption{$V_1, p''=\sigma_D(\pi_C(p))$.}
\label{fig:V1}
\end{figure}
\begin{proposition}
For sufficiently large $t\gg 0$, the vector field $tV_1+V_2$ is
directed to $D$. Similarly, $-tV_1+V_2$ is a vector field directed to
$D^\lor$.
\end{proposition}
\begin{proof}
Let $p\in H\in\operatorname{Wall}_1(C)$. Recall that $D$ is inside $\operatorname{Wall}_1(C)$.
Since $V_2$ is inward and $V_1$ is tangent to $H$, the vector field
$\pm tV_1+V_2$ is also
inward.
Let $H\in\operatorname{Wall}_2(C)$ and $p\in H\cap F$.
Then $V_1$
(resp. $-V_1$) is directed to $D$ (resp. $D^\lor$) with respect to $H$.
Hence for sufficiently large $t$, $tV_1+V_2$ (resp. $-tV_1+V_2$)
is directed to $D$ (resp. $D^\lor$).
\end{proof}
Since $V_1$ is a nowhere vanishing vector field on $\overline{C}\cap F$,
$-tV_1+V_2$ is a nowhere vanishing vector field around $\overline{C}\cap F$
which is directed to $D^\lor$.
Hence $\deg(C, D^\lor)=0$.
This completes the proof of Theorem \ref{thm:triangular}.
\end{proof}
\subsection{The degree formula}
\label{subsec:degree}
This section is devoted to the proof of the following.
\begin{theorem}
\label{thm:degree}
Let $C\in\operatorname{bch}^{\ell-1}({\mathcal A})$. Suppose $\dim X(C)=d$. Then
\begin{equation}
\deg(C, C^\lor)=(-1)^{\ell-1-d}.
\end{equation}
\end{theorem}
We construct a vector field around $\overline{C}\cap F$ which is
directed to $C^\lor$. The vector field $V_2$ is the same as
in the previous section (\S \ref{subsec:uppertri}).
Define the vector field $V_1$ along the fibers $\pi_C$ by
\begin{equation}
V_1(p)=\overrightarrow{p;\sigma_C(\pi_C(p))}
\end{equation}
(see Figure \ref{fig:degreeformula}).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1.2]
\draw[thick] (0,0)--(8,0) node [right] {$F$} -- (8,4) --
(0,4)--cycle;
\draw[thick] (0,1) -- (8,1);
\draw[thick] (0,3) -- (8,3);
\draw[thick] (0.5,0.5) -- (1.5,3.5);
\draw[thick] (3.5,3.5)--(4.5,0.5);
\draw[thick] (3.5,0.5)--(5.5,3.5);
\draw[thick] (6.2,0.5)--(6.7,3.5);
\draw (2,3) node[below] {$C$};
\draw[rounded corners=0.3cm, red, thick] (2,1)--(2,1.5)node[right] {\small $\sigma_C(Q_0)$}
--(2.5,2)--(3,2.5)--(2.8,3);
\filldraw[fill=red, draw=red] (6.8,3) circle (2pt) ;
\filldraw[fill=red, draw=red] (7.2,1) circle (2pt) ;
\draw[thick, red] (6.8,3)--(7,2)--node[right]{\small $Q_0$} (7.2,1);
\filldraw[fill=red, draw=red] (1,2) node[left, red] {\small $p$} circle (2pt);
\filldraw[fill=red, draw=red] (2.5,2) node[right, red] {\small $p''$} circle (2pt);
\foreach \y in {1.05,1.5,2,2.5,3.05}
\draw[->, red, thick] (1,\y)--(1.7,\y);
\foreach \y in {1.05,1.5,2,2.5,3.05}
\draw[->, red, thick] (4,\y)--(3.3,\y);
\end{tikzpicture}
\caption{$V_1, p''=\sigma_C(\pi_C(p))$.}
\label{fig:degreeformula}
\end{figure}
Then $tV_1+V_2$ is a vector field directed to $C$ (for $t\gg 0$).
Since $C$ and $C^\lor$ are separated by $H\in{\mathcal A}\smallsetminus\operatorname{Wall}_1(C)$,
the vector field $-tV_1+V_2$ is directed to $C^\lor$.
We compute the degree $\deg(C, C^\lor)$ using the vector field
$-tV_1+V_2$. Note that $-tV_1(p)$ is an outward vector field along
the $d$-dimensional space $G_1(p)$, while $V_2(p)$ is inward and
tangent to the $(\ell-1-d)$-dimensional space $G_2(p)$.
Hence $\deg(C, C^\lor)$ is equal to the index of
the following vector field in $\mathbb{R}^{\ell-1}$ at the origin.
\begin{equation}
\label{eq:coord}
V=
\sum_{i=1}^d x_i\frac{\partial}{\partial x_i}-
\sum_{i=d+1}^{\ell-1} x_i\frac{\partial}{\partial x_i},
\end{equation}
where $d=\dim X(C)$.
Recall that the de Rham cohomology group
$H^{\ell-2}(S^{\ell-2})$
is generated by the differential form (\cite{BT})
\[
\sum_{i=1}^{\ell-1}(-1)^{i-1}x_idx_1\wedge\dots\wedge
\widehat{dx_i}\wedge\dots\wedge dx_{\ell-1}.
\]
It is easily seen that the self map of
$H^{\ell-2}(S^{\ell-2})$
induced by the Gauss map of the vector field (\ref{eq:coord}) is equal to
the multiplication by $(-1)^{\ell-1-d}$.
This completes the proof of Theorem \ref{thm:degree}.
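The final index computation can be checked mechanically: the vector field (\ref{eq:coord}) is linear, $V(x)=Jx$ with $J=\operatorname{diag}(1,\dots,1,-1,\dots,-1)$, and the index of a nondegenerate linear vector field at the origin equals $\operatorname{sign}(\det J)$. A short Python sketch (small ranges of $\ell$ and $d$ chosen for illustration):

```python
import numpy as np

# The vector field V = sum_{i<=d} x_i d/dx_i - sum_{i>d} x_i d/dx_i on
# R^{ell-1} is linear, V(x) = J x with J = diag(1,...,1,-1,...,-1)
# (d plus-ones, ell-1-d minus-ones).  For a nondegenerate linear field
# the index at the origin is sign(det J), which equals (-1)^{ell-1-d}.

def index_at_origin(ell, d):
    J = np.diag([1.0] * d + [-1.0] * (ell - 1 - d))
    return int(np.sign(np.linalg.det(J)))

for ell in range(2, 7):
    for d in range(0, ell):
        assert index_at_origin(ell, d) == (-1) ** (ell - 1 - d)
print("index formula (-1)^(ell-1-d) confirmed for small ell, d")
```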
\medskip
\noindent
{\bf Acknowledgements.}
The present work was conducted during P. Bailet's stay at Hokkaido University
as a postdoc. She thanks the JSPS Postdoctoral Fellowship for Foreign
Researchers for financial and other support.
M. Yoshinaga is partially supported by
Grant-in-Aid for Scientific Research (C) 25400060 (JSPS).
\section{Introduction}
In this work we consider the composite finite-sum optimization problem
\begin{equation}\label{primal-LSVRG}
\textstyle \min \limits_{x\in \mathbb{R}^d} \left[ P(x) \; { := }\; \frac{1}{n} \sum\limits_{\tau=1}^n f^{(\tau)}(x) + \psi(x) \right],
\end{equation}
where $f(x)\; { := }\;\frac{1}{n}\sum_{\tau} f^{(\tau)}(x)$ is an average of $n$ smooth\footnote{We say that a function $\phi:\mathbb{R}^d\to \mathbb{R}$ is smooth if it is differentiable, and has $L_\phi$ Lipschitz gradient: $\|\nabla \phi(x)-\nabla \phi(y)\| \leq L_\phi \|x-y\|$ for all $x,y\in \mathbb{R}^d$. We say that $L_\phi$ is the {\em smoothness constant} of $\phi$.} convex functions $f^{(\tau)}:\mathbb{R}^d\to \mathbb{R}$ distributed over $n$ nodes (devices, computers), and $\psi:\mathbb{R}^d \to \mathbb{R}\cup \{+\infty\}$ is a proper closed convex function representing a possibly nonsmooth regularizer. On each node, $f^{(\tau)}(x)$ is an average of $m$ smooth convex functions
\begin{equation}\label{eq:bui8f0890f}
\textstyle f^{(\tau)}(x) = \frac{1}{m} \sum \limits_{i=1}^m f^{(\tau)}_i(x) ,
\end{equation}
representing the average loss over the training data stored on node $\tau$. While we specifically focus on the case when $m>1$, our results are also new in the $m=1$ case, and hence this regime is relevant as well. We assume throughout that problem (\ref{primal-LSVRG}) has at least one optimal solution $x^*$. We denote the smoothness constants of functions $f$, $f^{(\tau)}$ and $f^{(\tau)}_i$ using symbols $L_f$, ${\bar L}$ and $L$, respectively. These constants are in general related as follows: \begin{equation}\label{eq:3smoothnessconstants}L_f \leq {\bar L} \leq n L_f, \qquad \bar{L} \leq L \leq m \bar{L}.\end{equation}
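As a quick illustration of the first chain of inequalities in (\ref{eq:3smoothnessconstants}), consider hypothetical quadratic losses $f^{(\tau)}(x)=\frac12 x^\top A_\tau x$ with $A_\tau$ symmetric positive semidefinite, so that the smoothness constant of $f^{(\tau)}$ is the largest eigenvalue of $A_\tau$ and that of $f$ is the largest eigenvalue of $\frac{1}{n}\sum_\tau A_\tau$. The following Python sketch checks $L_f \leq \bar L \leq n L_f$ on random data:

```python
import numpy as np

# Hypothetical quadratic losses f_tau(x) = 0.5 * x^T A_tau x with A_tau
# symmetric PSD.  Then the smoothness constant of f_tau is lmax(A_tau),
# and that of the average f is lmax((1/n) sum A_tau).  We check the
# relations L_f <= Lbar <= n * L_f numerically.

rng = np.random.default_rng(1)
n, dim = 5, 4
As = []
for _ in range(n):
    B = rng.normal(size=(dim, dim))
    As.append(B @ B.T)  # symmetric positive semidefinite

lmax = lambda A: float(np.linalg.eigvalsh(A)[-1])  # largest eigenvalue
L_f = lmax(sum(As) / n)           # smoothness constant of the average f
L_bar = max(lmax(A) for A in As)  # worst individual smoothness constant

assert L_f <= L_bar + 1e-9 and L_bar <= n * L_f + 1e-9
print(L_f, L_bar)
```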
When training very large scale supervised machine learning problems, such as those arising in the context of federated learning \cite{FEDLEARN, FL2017-AISTATS, FEDOPT} (see also recent surveys \cite{FL_survey_2019, FL-big}), distributed algorithms need to be used. In such settings, communication is generally much slower than (local) computation, which makes communication the key bottleneck in the design of efficient distributed systems. There are several ways to tackle this issue, including reliance on large mini-batches \cite{Goyal17, You17}, asynchronous learning \cite{Agarwal11, Lian15, Recht11}, local updates \cite{COCOA+journal, localSGD-Stich, localSGD-AISTATS2020, Hanzely2020, Blake2020} and communication compression (e.g., quantization and sparsification) \cite{Alistarh17, Bernstein18, Mish19, Seide14, Wen17}. In this work we focus on the last of these techniques: communication compression.
\subsection{Communication compression}
\paragraph{Contraction and unbiased compressors.}
We say that a randomized map $Q:\mathbb{R}^d\to \mathbb{R}^d$ is a {\em contraction compressor} if there exists a constant $0< \delta \leq 1$ such that
\begin{equation}\label{eq:contractor}
\mathbb{E} \left[\|x - Q(x) \|^2\right] \leq (1-\delta)\|x\|^2, \quad \forall x\in \mathbb{R}^d.
\end{equation}
Further, we say that a randomized map $\tilde{Q}:\mathbb{R}^d\to \mathbb{R}^d$ is an {\em unbiased compressor} if there exists a constant $\omega \geq 0$ such that
\begin{equation} \label{eq:unbiased}
\mathbb{E}[{\tilde Q}(x) ] = x \quad {\rm and} \quad \mathbb{E}\|{\tilde Q}(x)\|^2 \leq (\omega + 1)\|x\|^2, \qquad \forall x\in \mathbb{R}^d.
\end{equation}
It is well known that (see, e.g., \cite{biased2020}) after appropriate scaling, any unbiased compressor satisfying \eqref{eq:unbiased} becomes a contraction compressor. Indeed, for any ${\tilde Q}$ satisfying \eqref{eq:unbiased}, $\frac{1}{\omega+1}{\tilde Q}$ is a contraction compressor satisfying \eqref{eq:contractor} with $\delta = \frac{1}{\omega+1}$, as shown here:
\begin{align*}
\textstyle \mathbb{E}\left[ \left\|\frac{1}{\omega+1}{\tilde Q}(x) - x \right\|^2 \right] &= \textstyle \frac{1}{(\omega+1)^2} \mathbb{E} \left[\|{\tilde Q}(x)\|^2\right]+ \|x\|^2 - \frac{2}{\omega+1}\mathbb{E} \left[ \langle {\tilde Q}(x), x\rangle \right]\\
&\leq \textstyle \frac{1}{\omega+1}\|x\|^2 + \|x\|^2 - \frac{2}{\omega+1}\|x\|^2 = \left(1 - \frac{1}{\omega+1} \right) \|x\|^2.
\end{align*}
Since compressors are typically applied in a scaled fashion, via a scaling stepsize, this means that for all practical purposes the class of unbiased compressors is included in the class of contraction compressors. For examples of contraction and unbiased compressors, we refer the reader to \cite{biased2020}.
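The scaling argument above can be checked numerically. As an illustration (not taken from \cite{biased2020}), the sketch below uses a simple unbiased compressor: independent coordinate-wise sparsification, which keeps each coordinate with probability $q$ and rescales it by $1/q$, so that $\omega = 1/q - 1$; its scaled version should then satisfy \eqref{eq:contractor} with $\delta = 1/(\omega+1) = q$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify_unbiased(x, q, rng):
    """Unbiased compressor: keep each coordinate w.p. q, rescaled by 1/q.
    Satisfies E[Q(x)] = x and E||Q(x)||^2 = (1/q)||x||^2, i.e. omega = 1/q - 1."""
    mask = rng.random(x.size) < q
    return np.where(mask, x / q, 0.0)

d, q, trials = 30, 0.25, 40000
x = rng.standard_normal(d)
omega = 1.0 / q - 1.0
# The scaled compressor (1/(omega+1)) * Q should satisfy the contraction bound
errs = np.mean([np.linalg.norm(sparsify_unbiased(x, q, rng) / (omega + 1) - x) ** 2
                for _ in range(trials)])
bound = (1.0 - 1.0 / (omega + 1)) * np.linalg.norm(x) ** 2
assert errs <= 1.02 * bound  # holds up to sampling noise
```

For this particular compressor the scaled error equals $(1-q)\|x\|^2$ in expectation, so the contraction bound is tight.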
\subsection{Error compensation} While compression reduces the communicated bits in each communication round, it introduces errors, which generally leads to an increase in the number of communication rounds needed to find a solution of any predefined accuracy. Still, compression has been found useful in practice, as the trade-off often seems to prefer compression to no compression. In order to deal with the errors introduced by compression, some form of error compensation/error feedback is needed.
Assuming that the accumulated error is bounded, and for unbiased compressors, the convergence rate of error compensated SGD was shown to match that of vanilla SGD \cite{Tang18}. If only a bounded second moment of the stochastic gradients is assumed, however, a decaying factor is generally needed to guarantee boundedness of the accumulated quantization error, and error compensated SGD was shown to have certain advantages over QSGD for convex quadratic problems \cite{Wu18}. On the other hand, for contraction compressors (for example, the TopK compressor \cite{Alistarh18}), error compensated SGD has the same convergence rate as vanilla SGD \cite{Stich18, Stich19, Tang19}. Since SGD has only a sublinear convergence rate, existing error compensated methods could not achieve a linear convergence rate. For non-smooth $f$ and $\psi=0$, error compensated SGD was studied in the single node case in \cite{karimireddy2019error}, with a convergence rate of order $O\left( \nicefrac{1}{\sqrt{\delta k} } \right)$.
For variance-reduced methods, QSVRG \cite{Alistarh17} handles the smooth case ($\psi\equiv 0$) and VR-DIANA \cite{Samuel19} handles the composite case (general $\psi$). However, the compressors of both algorithms need to be unbiased. Error compensation in VR-DIANA does not need to be used since this method successfully employs variance reduction (of the variance introduced by the compressor) instead. In this paper, we study error compensation in conjunction with the acceleration mechanism employed in loopless Katyusha (L-Katyusha) \cite{LSVRG}, for any contraction compressor.
\subsection{Contributions}
We now summarize the main contributions of our work.
\paragraph{Acceleration for error compensation.} We develop a new communication efficient algorithm for solving the distributed optimization problem \eqref{primal-LSVRG}--\eqref{eq:bui8f0890f} which we call {\em Error Compensated Loopless Katyusha} (ECLK); see Algorithm~\ref{alg:ec-lkatyusha}. ECLK is the {\em first accelerated} error compensated SGD method, and can be seen as an EC variant of the Loopless Katyusha method developed in \cite{LSVRG}.
\paragraph{Iteration complexity.} We obtain the {\em first accelerated linear convergence rate} for error compensated methods using contraction operators. The iteration complexity of ECLK is
$$
\textstyle O\left( \left( \frac{1}{\delta} + \frac{1}{p} + \sqrt{\frac{L_f}{\mu}} + \sqrt{\frac{L}{\mu p n}} + \frac{1}{\delta}\sqrt{\frac{(1-\delta){{\color{blue}\bar L}}}{\mu p}} + \sqrt{\frac{(1-\delta)L}{\mu p \delta}} \right) \log\frac{1}{\epsilon} \right),
$$
where $p \in (0,1]$ is a parameter of the method described later. This is an improvement over the previous best known result for error compensated SGD by Beznosikov et al.~\cite{biased2020}, who obtain {\em nonaccelerated} linear rate. Moreover, they only consider the special case when $\psi\equiv 0$, and for their linear rate, they need to assume that $\nabla f^{(\tau)}(x^*) = 0$ for all $\tau$, and that full gradients are computed by all nodes.
If we invoke additional assumptions (Assumption~\ref{as:expcompressor} or Assumption~\ref{as:topkcompressor}) on the contraction compressor, the iteration complexity is improved to
$$
\textstyle
O \left( \left( \frac{1}{\delta} + \frac{1}{p} + \sqrt{\frac{L_f}{\mu}} + \sqrt{\frac{L}{\mu p n}} + \frac{1}{\delta} \sqrt{\frac{(1-\delta) {\color{red}L_f}}{\mu p}} + \sqrt{\frac{(1-\delta) L}{\mu p \delta {\color{red}n}}} \right) \log \frac{1}{\epsilon} \right).
$$
This is indeed an improvement since ${\color{red} L_f} \leq {\color{blue}{\bar L}}$ (see \eqref{eq:3smoothnessconstants}), and because of the extra scaling factor of $\color{red}n$ in the last term.
If $\delta=1$, i.e., if no compression is used, we recover the iteration complexity of the accelerated method L-Katyusha \cite{qian2019svrg}.
\paragraph{Communication complexity.} Considering the communication complexity, the optimal choice of $p$ is $O(r(Q))$, where $r(Q)$ is the {\em compression ratio} for the compressor $Q$ defined in (\ref{eq:defrQ}). In particular, when $L_f = {\bar L} =L$, by choosing the optimal $p$, the communication complexity becomes
$$
\textstyle
O \left( \Delta_1 \left( \frac{r(Q)}{\delta} + \left( r(Q) + \frac{\sqrt{r(Q)}}{\sqrt{n}} + \frac{\sqrt{(1-\delta)r(Q)}}{\delta} \right) \sqrt{\frac{L}{\mu}} \right) \log \frac{1}{\epsilon} \right),
$$
where $\Delta_1$ is the communication cost of the uncompressed vector $x\in \mathbb{R}^d$.
\section{Gradient Compression Methods}\label{sec:compress}
\subsection{TopK and RandK}
We now give two canonical examples of contraction and unbiased compression operators.
\begin{example}[TopK compressor] For a parameter $1\leq K \leq d$, the TopK compressor is defined as
$$
({\rm TopK}(x))_{\pi(i)} = \left\{ \begin{array}{rl}
(x)_{\pi(i)} &\mbox{ if $i\leq K$, } \\
0 \quad \quad &\mbox{ otherwise, }
\end{array} \right.
$$
where $\pi$ is a permutation of $\{1, 2, ..., d\}$ such that $(|x|)_{\pi(i)} \geq (|x|)_{\pi(i+1)}$ for $i = 1, ..., d-1$, and if $(|x|)_{\pi(i)} = (|x|)_{\pi(i+1)}$, then $\pi(i) \leq \pi(i+1)$.
\end{example}
Our definition of the {\rm TopK} compressor is slightly different from that of \cite{Stich18}: with the above tie-breaking rule, {\rm TopK} is a deterministic operator, well defined even when several coordinates have equal magnitude.
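A minimal sketch of this deterministic {\rm TopK} (an illustrative implementation, not the authors' code) can use a stable sort so that, among coordinates of equal magnitude, the one with the smaller index is kept, exactly as the tie-breaking rule above prescribes:

```python
import numpy as np

def top_k(x, k):
    """Deterministic TopK: keep the k largest-magnitude coordinates,
    breaking ties toward the smaller index (matching the text)."""
    x = np.asarray(x, dtype=float)
    # argsort is stable with kind="stable", so sorting by -|x| keeps
    # smaller indices first among equal magnitudes
    idx = np.argsort(-np.abs(x), kind="stable")[:k]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

print(top_k([1.0, -3.0, 2.0, 3.0], 2))  # keeps the two entries of largest magnitude (indices 1 and 3)
```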
\begin{example}[RandK compressor] For a parameter $1\leq K \leq d$, the {\rm RandK} compressor is defined as
$$
({\rm RandK}(x))_{i} = \left\{ \begin{array}{rl}
(x)_i &\mbox{ if $i \in S$, } \\
0 \quad \quad &\mbox{ otherwise, }
\end{array} \right.
$$
where $S$ is chosen uniformly from the set of all $K$ element subsets of $\{1, 2, ..., d\}$. {\rm RandK} can be used to define an unbiased compressor via scaling. Indeed, it is easy to see that
\[
\mathbb{E}\left( \frac{d}{K} {\rm RandK}(x)\right) = x
\]
for all $x\in \mathbb{R}^d$.
\end{example}
For the {\rm TopK} and {\rm RandK} compressors, we have the following property.
\begin{lemma}[Lemma A.1 in \cite{Stich18}]
For the {\rm TopK} and {\rm RandK} compressors with $1\leq K \leq d$, we have
$$ \textstyle
\mathbb{E} \left[\|{\rm TopK}(x) - x \|^2 \right] \leq \left( 1 - \frac{K}{d} \right) \|x\|^2,
$$
and
$$
\mathbb{E}\left[ \|{\rm RandK}(x) - x \|^2\right] \leq \left( 1 - \frac{K}{d} \right) \|x\|^2.
$$
\end{lemma}
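Both bounds in the lemma can be verified numerically. The sketch below (illustrative only) checks the {\rm TopK} bound deterministically for a random vector, and the {\rm RandK} bound in expectation over many draws (for {\rm RandK} the bound in fact holds with equality in expectation):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(x, k):
    idx = np.argsort(-np.abs(x), kind="stable")[:k]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    idx = rng.choice(x.size, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

d, k = 50, 10
x = rng.standard_normal(d)
bound = (1 - k / d) * np.linalg.norm(x) ** 2
# TopK is deterministic: the bound holds for every x
assert np.linalg.norm(top_k(x, k) - x) ** 2 <= bound
# RandK: the bound holds in expectation; here E||RandK(x)-x||^2 = (1-k/d)||x||^2
mean_err = np.mean([np.linalg.norm(rand_k(x, k, rng) - x) ** 2 for _ in range(20000)])
assert mean_err <= 1.05 * bound
```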
\subsection{Further assumptions}
We will optionally use the following additional assumptions on the contraction compressor. These assumptions are not necessary, but when they hold, they lead to better complexity bounds.
\begin{assumption}\label{as:expcompressor}
$\mathbb{E}[Q(x)] = \delta x$ for all $x\in \mathbb{R}^d$.
\end{assumption}
It is easy to verify that the {\rm RandK} compressor satisfies Assumption \ref{as:expcompressor} with $\delta = \frac{K}{d}$, and that $\frac{1}{\omega+1} {\tilde Q}$, where ${\tilde Q}$ is any unbiased compressor, satisfies Assumption \ref{as:expcompressor} with $\delta = \frac{1}{\omega+1}$.
\begin{assumption}\label{as:topkcompressor}
For $x_{\tau} = \frac{\eta}{{\cal L}_1} g^k_{\tau} + e^k_{\tau} \in \mathbb{R}^d$, $\tau=1, ..., n$ and $k\geq 0$ in Algorithm \ref{alg:ec-lkatyusha}, there exists $\delta^\prime>0$ such that $\mathbb{E}[Q(x_{\tau})] = Q(x_{\tau})$, and
$$\textstyle
\left\|\sum \limits_{\tau=1}^n (Q(x_{\tau}) - x_{\tau} )\right\|^2 \leq (1-\delta^\prime) \left\| \sum \limits_{\tau=1}^n x_{\tau} \right\|^2.
$$
\end{assumption}
Since {\rm TopK} is deterministic, we have $\mathbb{E}[Q(x)] = Q(x)$ for any $x\in \mathbb{R}^d$. If $Q(x_{\tau})$ is close to $x_{\tau}$, then $\delta^\prime$ could be larger than $\frac{K}{d}$. Whenever Assumption \ref{as:topkcompressor} is needed and $\delta > \delta^\prime$, we replace $\delta$ by $\min \{ \delta, \delta^\prime \}$, so that a single parameter $\delta$ serves for the contraction compressor.
\section{Error Compensated L-Katyusha }
\subsection{Description of the method}
In this section we describe our method: error compensated L-Katyusha (see Algorithm \ref{alg:ec-lkatyusha}). The search direction in L-Katyusha in the distributed setting ($n\geq 1$) at iteration $k$ is
\begin{equation}\label{eq:sdinLkatyusha}
\textstyle \frac{1}{n} \sum \limits_{\tau=1}^n \left( \nabla f_{i_k^\tau}^{(\tau)}(x^k) - \nabla f_{i_k^\tau}^{(\tau)}(w^k) + \nabla f^{(\tau)}(w^k) \right),
\end{equation}
where $i_k^\tau$ is sampled uniformly and independently from $[m] \; { := }\; \{ 1, 2, ..., m \}$ on the $\tau$-th node for $1\leq \tau \leq n$, $x^k$ is the current iterate, and $w^k$ is the current reference point. Whenever $\psi$ is nonzero in problem (\ref{primal-LSVRG}), $\nabla f(x^*)$ is nonzero in general, and so is $\nabla f^{(\tau)}(x^*)$. Thus, compressing the direction
$$
\nabla f_{i_k^\tau}^{(\tau)}(x^k) - \nabla f_{i_k^\tau}^{(\tau)}(w^k) + \nabla f^{(\tau)}(w^k)
$$
directly on each node would cause nonzero noise even if $x^k$ and $w^k$ converged to the optimal solution $x^*$. On the other hand, since $f_i^{(\tau)}$ is $L$-smooth,
$
g^k_\tau = \nabla f_{i_k^\tau}^{(\tau)}(x^k) - \nabla f_{i_k^\tau}^{(\tau)}(w^k)
$
could be small if $x^k$ and $w^k$ are close enough. Thus, we instead compress the vector $\frac{\eta}{{\cal L}_1} g^k_\tau + e^k_\tau$ on each node. The accumulated error $e^{k+1}_\tau$ equals the compression error at iteration $k$ on each node. Each node also maintains a scalar $u^k_\tau$, of which only $u^k_1$ is updated. The sum of the $u^k_\tau$ is $u^k$, which we use to control the update frequency of the reference point $w^k$. All nodes maintain the same copies of $x^k$, $w^k$, $y^k$, $z^k$, ${\tilde g}^k$, and $u^k$. Each node sends its compressed vector ${\tilde g}^k_{\tau} = Q(\frac{\eta}{{\cal L}_1} g^k_{\tau} + e^k_{\tau})$ and $u^{k+1}_\tau$ to the other nodes. If $u^k=1$, each node also sends $\nabla f^{(\tau)}(w^k)$ to the other nodes. After the compressed vectors ${\tilde g}^k_{\tau}$ are received, we add $\frac{\eta}{{\cal L}_1} \nabla f(w^k)$ to their average to form the search direction. We also need the following standard proximal operator:
$$
\textstyle
\operatorname{prox}_{\eta \psi} (x) \; { := }\; \arg\min_y \left\{ \frac{1}{2}\|x-y\|^2 + \eta \psi(y) \right\}.
$$
The reference point $w^k$ is updated whenever $u^{k+1}=1$, which happens with probability $p$ at each iteration.
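For concreteness, two standard closed-form instances of this proximal operator (chosen as illustrative examples; the paper does not fix a particular $\psi$) are soft-thresholding for $\psi(y)=\lambda\|y\|_1$ and shrinkage for $\psi(y)=\frac{\lambda}{2}\|y\|^2$:

```python
import numpy as np

def prox_l1(x, eta_lam):
    """prox_{eta*psi}(x) for psi(y) = lam*||y||_1: soft-thresholding
    with threshold eta_lam = eta * lam."""
    return np.sign(x) * np.maximum(np.abs(x) - eta_lam, 0.0)

def prox_l2sq(x, eta_lam):
    """prox_{eta*psi}(x) for psi(y) = (lam/2)*||y||^2: shrinkage toward 0."""
    return x / (1.0 + eta_lam)

print(prox_l1(np.array([3.0, -0.5, 1.0]), 1.0))  # thresholds the small entries to zero
```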
\begin{algorithm}[tb]
\caption{Error Compensated Loopless Katyusha (ECLK)}
\label{alg:ec-lkatyusha}
\begin{algorithmic}[1]
\State {\bfseries Parameters:} stepsize parameters $\eta = \frac{1}{3\theta_1} >0$, ${\cal L}_1>0$, $\sigma_1 = \frac{\mu_f}{2{\cal L}_1}\geq 0$, $\theta_1, \theta_2 \in (0, 1)$; probability $p \in (0, 1]$
\State {\bfseries Initialization:}
$x^0 = y^0= z^0 = w^0 \in \mathbb{R}^d$; $e^0_\tau = 0 \in \mathbb{R}^d$; $u^0=1\in \mathbb{R}$
\For{ $k = 0, 1, 2, ...$}
\For{ $\tau = 1, ..., n$}
\State Sample $i_k^\tau$ uniformly and independently in $[m]$ on each node
\State $g^k_{\tau} = \nabla f_{i_k^\tau}^{(\tau)}(x^k) - \nabla f_{i_k^\tau}^{(\tau)}(w^k)$
\State ${\tilde g}^k_{\tau} = Q(\frac{\eta}{{\cal L}_1} g^k_{\tau} + e^k_{\tau})$
\State $e^{k+1}_{\tau} = e^k_{\tau} + \frac{\eta}{{\cal L}_1} g^k_{\tau} - {\tilde g}^k_{\tau}$
\State $u^{k+1}_\tau = 0$ for $\tau = 2, ..., n$
\State $
u^{k+1}_1 = \left\{ \begin{array}{rl}
1 & \mbox{ with probability $p$} \\
0 &\mbox{ with probability $1-p$}
\end{array} \right.
$
\State Send ${\tilde g}^k_{\tau}$ and $u^{k+1}_\tau$ to the other nodes
\State Send $\nabla f^{(\tau)}(w^k)$ to the other nodes if $u^k=1$
\State Receive ${\tilde g}^k_{\tau}$ and $u^{k+1}_\tau$ from the other nodes
\State Receive $\nabla f^{(\tau)}(w^k)$ from the other nodes if $u^k=1$
\State ${\tilde g}^k = \frac{1}{n} \sum_{\tau=1}^n {\tilde g}^k_{\tau}$
\State $u^{k+1} = \sum_{\tau=1}^n u^{k+1}_{\tau}$
\State $z^{k+1} = \operatorname{prox}_{\frac{\eta}{(1+\eta\sigma_1) {\cal L}_1} \psi} \left( \frac{1}{1 + \eta \sigma_1} \left( \eta \sigma_1 x^k + z^k - {\tilde g}^k - \frac{\eta}{{\cal L}_1} \nabla f(w^k) \right) \right)$
\State $y^{k+1} = x^k + \theta_1 (z^{k+1} - z^k)$
\State $
w^{k+1} = \left\{ \begin{array}{rl}
y^k & \mbox{ if $u^{k+1}=1$ } \\
w^k &\mbox{ otherwise }
\end{array} \right.
$
\State $x^{k+1} = \theta_1 z^{k+1} + \theta_2 w^{k+1} + (1-\theta_1 - \theta_2)y^{k+1}$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
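A minimal single-node sketch (illustrative, with random stand-ins for the scaled gradient differences $\frac{\eta}{{\cal L}_1} g^k_{\tau}$) of the compression and error-update steps of Algorithm \ref{alg:ec-lkatyusha} highlights the key invariant of error compensation: after $K$ steps, the sum of the transmitted vectors equals the sum of the true vectors minus the current residual $e^K$, so no information is permanently lost.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_k(x, k):
    idx = np.argsort(-np.abs(x), kind="stable")[:k]
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

d, k_keep, steps = 10, 2, 50
e = np.zeros(d)                      # accumulated compression error e^k
sent = np.zeros(d)                   # running sum of transmitted vectors
true = np.zeros(d)                   # running sum of (eta/L1) * g^k stand-ins
for _ in range(steps):
    g = rng.standard_normal(d)       # stand-in for (eta/L1) * g_tau^k
    g_tilde = top_k(g + e, k_keep)   # tilde g^k = Q((eta/L1) g^k + e^k)
    e = e + g - g_tilde              # e^{k+1} = e^k + (eta/L1) g^k - tilde g^k
    sent += g_tilde
    true += g
# Error feedback invariant: transmitted sum + residual = true sum
assert np.allclose(sent + e, true)
```

The invariant follows by telescoping the update $e^{k+1} = e^k + \frac{\eta}{{\cal L}_1} g^k - {\tilde g}^k$, and it is exactly what makes boundedness of $\|e^k\|^2$ (Lemmas \ref{lm:ek+1} and \ref{lm:ek+1-2}) the crux of the analysis.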
\subsection{Convergence analysis: preliminaries}
We now introduce some perturbed vectors which will be used in the convergence analysis. In Algorithm~\ref{alg:ec-lkatyusha}, let $e^k = \frac{1}{n}\sum_{\tau=1}^n e^k_\tau$, $g^k = \frac{1}{n} \sum_{\tau=1}^n g^k_\tau$, and ${\tilde x}^k = x^k - \frac{1}{1 + \eta \sigma_1}e^k$, ${\tilde z}^k = z^k - \frac{1}{1+\eta \sigma_1}e^k$ for $k\geq 0$. Then
$
e^{k+1} = \frac{1}{n} \sum_{\tau=1}^n \left( e^k_\tau + \frac{\eta}{{\cal L}_1} g^k_\tau - {\tilde g}^k_\tau \right) = e^k + \frac{\eta}{{\cal L}_1} g^k - {\tilde g}^k,
$
and
\begin{eqnarray}
{\tilde z}^{k+1}
&=& \textstyle z^{k+1} - \frac{1}{1 + \eta\sigma_1}e^{k+1} \nonumber \\
&=& \textstyle \frac{1}{1 + \eta \sigma_1} \left( \eta\sigma_1 x^k + z^k - {\tilde g}^k - \frac{\eta}{{\cal L}_1}\nabla f(w^k) \right) - \frac{\eta \partial \psi(z^{k+1})}{(1 + \eta \sigma_1){\cal L}_1} - \frac{e^{k+1}}{1 + \eta\sigma_1} \nonumber \\
&=& \textstyle \frac{1}{1 + \eta \sigma_1} \left( \eta\sigma_1 x^k + z^k - e^k - \frac{\eta}{{\cal L}_1}g^k - \frac{\eta}{{\cal L}_1}\nabla f(w^k) \right) - \frac{\eta \partial \psi(z^{k+1})}{(1 + \eta \sigma_1){\cal L}_1} \nonumber \\
&=&\textstyle \frac{1}{1 + \eta \sigma_1} \left( \eta\sigma_1 {\tilde x}^k + {\tilde z}^k - \frac{\eta}{{\cal L}_1}g^k - \frac{\eta}{{\cal L}_1}\nabla f(w^k) \right) - \frac{\eta \partial \psi(z^{k+1})}{(1 + \eta \sigma_1){\cal L}_1}. \label{eq:tildezk+1}
\end{eqnarray}
The above relation plays an important role in the convergence analysis, and allows us to follow the analysis of the original L-Katyusha. We need the following assumption in this section.
\begin{assumption}\label{as:eclkatyusha}
$f_i^{(\tau)}$ is $L$-smooth, $f^{(\tau)}$ is ${\bar L}$-smooth, $f$ is $L_f$-smooth and $\mu_f$-strongly convex, and $\psi$ is $\mu_\psi$-strongly convex.
\end{assumption}
We define some notation which will be used to construct the Lyapunov functions in the convergence analysis. Define $\mu = \mu_f + \mu_\psi$, ${\tilde {\cal Z}}^k = \frac{{\cal L}_1 + \eta \mu/2}{2\eta} \|{\tilde z}^k - x^*\|^2$, ${\cal Y}^k = \frac{1}{\theta_1} (P(y^k) - P^*)$, and ${\cal W}^k = \frac{\theta_2}{pq\theta_1} (P(w^k) - P^*)$. From the update rule of $w^k$ in Algorithm \ref{alg:ec-lkatyusha}, it is easy to see that
\begin{equation}\label{eq:wk+1}
\textstyle \mathbb{E}_k [{\cal W}^{k+1}] = (1-p) {\cal W}^k + \frac{\theta_2}{q} {\cal Y}^k,
\end{equation}
for $k \geq 0$. In the next lemma, we describe the evolution of the terms ${\tilde {\cal Z}}^{k}$ and ${\cal Y}^k$.
\begin{lemma}\label{lm:zyk+1}
If ${\cal L}_1 \geq L_f$ and $\theta_1 + \theta_2 \leq 1$, then $\mathbb{E}_k\left[ {\tilde {\cal Z}}^{k+1} + {\cal Y}^{k+1} \right]$ can be upper bounded by
\begin{eqnarray*}
\textstyle && \frac{{\cal L}_1{\tilde {\cal Z}}^k}{{\cal L}_1 + \eta\mu/2} + (1-\theta_1 - \theta_2) {\cal Y}^k + pq{\cal W}^k
+ \left( \frac{{\cal L}_1}{2\eta} + \frac{\mu_f}{2} \right) \|e^k\|^2 + \left( \frac{{\cal L}_1}{2\eta} + \frac{\mu}{2} \right) \mathbb{E}_k \|e^{k+1}\|^2 \\
&& \qquad - \frac{1}{\theta_1} \left( \theta_2 - \frac{2L}{n{\cal L}_1} \right) (f(w^k) - f(x^k) - \langle \nabla f(x^k), w^k-x^k \rangle ).
\end{eqnarray*}
\end{lemma}
Because of the compression, we have the additional error terms $\|e^k\|^2$ and $\|e^{k+1}\|^2$ in the evolution of ${\tilde {\cal Z}}^{k}$ and ${\cal Y}^k$ in Lemma~\ref{lm:zyk+1}. However, from the contraction property of the compressor, we can obtain inequalities controlling the evolution of $\frac{1}{n}\sum_{\tau=1}^n \|e^k_{\tau}\|^2$ and $\|e^k\|^2$ in the following two lemmas.
\begin{lemma}\label{lm:ek+1}
The quantity $ \mathbb{E}_k \left[ \frac{1}{n}\sum \limits_{\tau=1}^n \|e^{k+1}_\tau \|^2 \right]$ is upper bounded by the expression
\begin{eqnarray*}
\textstyle \left( 1 - \frac{\delta}{2} \right) \frac{1}{n} \sum \limits_{\tau=1}^n \|e^k_\tau\|^2 + \frac{2(1-\delta)\eta^2}{{\cal L}_1^2} \left( \frac{2{\bar L}}{\delta} + L \right) (f(w^k) - f(x^k) - \langle \nabla f(x^k), w^k-x^k \rangle ).
\end{eqnarray*}
\end{lemma}
\begin{lemma}\label{lm:ek+1-2}
Under Assumption \ref{as:expcompressor} or \ref{as:topkcompressor}, the quantity $\mathbb{E}_k [\|e^{k+1}\|^2] $ is upper bounded by
\begin{eqnarray*}
\textstyle \left( 1 - \frac{\delta}{2} \right) \|e^k\|^2 + \frac{2(1-\delta)\delta}{n^2} \sum \limits_{\tau=1}^n \|e^k_{\tau} \|^2 + \frac{2(1-\delta)\eta^2}{{\cal L}_1^2} \left( \frac{2L_f}{\delta} + \frac{3L}{n} \right) (f(w^k) - f(x^k) - \langle \nabla f(x^k), w^k-x^k \rangle ).
\end{eqnarray*}
\end{lemma}
\subsection{Convergence analysis: main results}
From the above three lemmas, we can construct suitable Lyapunov functions which enable us to prove linear convergence. First, we construct the Lyapunov function $\Phi^k$ for the general case as follows. Let ${\cal L}_2 \; { := }\; \frac{4L}{n} + \frac{112(1-\delta) {\bar L}}{9\delta^2} + \frac{56(1-\delta) L}{9\delta}$, and for $k \geq 0$ define
$$
\textstyle
{\Phi}^k \; { := }\; {\tilde {\cal Z}}^{k} + {\cal Y}^{k} + {\cal W}^{k} + \frac{4{\cal L}_1}{\delta \eta} \cdot \frac{1}{n} \sum_{\tau=1}^n \|e^{k}_\tau\|^2.
$$
We are now ready to state our main convergence theorems.
\begin{theorem}\label{th:eclkatyusha-1}
Assume the compressor $Q$ in Algorithm \ref{alg:ec-lkatyusha} is a contraction compressor and Assumption~\ref{as:eclkatyusha} holds. If ${\cal L}_1 \geq \max \{ L_f, 3\mu\eta \}$, $\theta_1 + \theta_2 \leq 1$, and $\theta_2 \geq \frac{{\cal L}_2}{2{\cal L}_1}$, then we have
$$ \textstyle
\mathbb{E} \left[\Phi^k\right]\leq \left(1-\min\left( \frac{\mu}{\mu+6\theta_1 {\cal L}_1},\theta_1 + \theta_2 - \frac{\theta_2}{q}, p(1-q), \frac{\delta}{6} \right)\right)^k \Phi^0,\enspace \forall k\geq 0.
$$
\end{theorem}
If Assumption \ref{as:expcompressor} or Assumption \ref{as:topkcompressor} holds, we can define the Lyapunov function $\Psi^k$ as follows.
Let ${\cal L}_3 \; { := }\; \frac{4L}{n} + \frac{784(1-\delta) L_f}{9\delta^2} + \frac{56(1-\delta)L}{\delta n}$, and for $k\geq 0$ define
$$
\textstyle
\Psi^k \; { := }\; {\tilde {\cal Z}}^{k} + {\cal Y}^{k} + {\cal W}^{k} + \frac{4{\cal L}_1}{\delta \eta} \|e^{k}\|^2 + \frac{28{\cal L}_1(1-\delta)}{\delta \eta n} \cdot \frac{1}{n} \sum_{\tau=1}^n \|e^{k}_\tau\|^2.
$$
\begin{theorem}\label{th:eclkatyusha-2}
Assume the compressor $Q$ in Algorithm \ref{alg:ec-lkatyusha} is a contraction compressor and Assumption~\ref{as:eclkatyusha} holds. Assume Assumption \ref{as:expcompressor} or Assumption \ref{as:topkcompressor} holds. If ${\cal L}_1 \geq \max \{ L_f, 3\mu\eta \}$, $\theta_1 + \theta_2 \leq 1$, and $\theta_2 \geq \frac{{\cal L}_3}{2{\cal L}_1}$, then we have
$$
\textstyle
\mathbb{E} \left[\Psi^k\right]\leq \left(1-\min\left( \frac{\mu}{\mu+6\theta_1 {\cal L}_1},\theta_1 + \theta_2 - \frac{\theta_2}{q}, p(1-q), \frac{\delta}{6} \right)\right)^k \Psi^0,\enspace \forall k\geq 0.
$$
\end{theorem}
In order to cast the above results into a more digestible form, we formulate the following corollary.
\begin{corollary}\label{co:eclkatyusha}
Assume the compressor $Q$ in Algorithm \ref{alg:ec-lkatyusha} is a contraction compressor and Assumption~\ref{as:eclkatyusha} holds. Let
$
{\cal L}_1 = \max\left( {\cal L}_4, L_f, 3\mu \eta \right)$, $\theta_2 = \frac{{\cal L}_4}{2\max\{ L_f, {\cal L}_4 \}}$ and
\begin{align*}
\textstyle
\theta_1=\left\{\begin{array}{ll}
\min\left( \sqrt{\frac{\mu}{{\cal L}_4 p}}\theta_2, \theta_2 \right)& \mathrm{~if~}L_f \leq \frac{{\cal L}_4}{p}\\ \min\left( \sqrt{\frac{\mu}{L_f}}, \frac{p}{2} \right) & \mathrm{otherwise}
\end{array}\right. .
\end{align*}
(i) Let ${\cal L}_4 = {\cal L}_2$. Then with some $q \in [\frac{2}{3}, 1)$, $\mathbb{E}[\Phi^k] \leq \epsilon \Phi^0$ for
\begin{equation}\label{eq:iter1}
\textstyle
k \geq O\left( \left( \frac{1}{\delta} + \frac{1}{p} + \sqrt{\frac{L_f}{\mu}} + \sqrt{\frac{L}{\mu p n}} + \frac{1}{\delta}\sqrt{\frac{(1-\delta){\bar L}}{\mu p}} + \sqrt{\frac{(1-\delta)L}{\mu p \delta}} \right) \log\frac{1}{\epsilon} \right).
\end{equation}
(ii) Let ${\cal L}_4 = {\cal L}_3$. If Assumption \ref{as:expcompressor} or \ref{as:topkcompressor} holds, then for some $q \in [\frac{2}{3}, 1)$, we have $\mathbb{E}[\Psi^k] \leq \epsilon \Psi^0$ for
\begin{equation}\label{eq:iter2}
\textstyle
k \geq O \left( \left( \frac{1}{\delta} + \frac{1}{p} + \sqrt{\frac{L_f}{\mu}} + \sqrt{\frac{L}{\mu p n}} + \frac{1}{\delta} \sqrt{\frac{(1-\delta) L_f}{\mu p}} + \sqrt{\frac{(1-\delta) L}{\mu p \delta n}} \right) \log \frac{1}{\epsilon} \right).
\end{equation}
\end{corollary}
Noting that $L_f\leq {\bar L} \leq nL_f$ and ${\bar L} \leq L \leq m{\bar L}$, the iteration complexity in (\ref{eq:iter2}) can be better than that in (\ref{eq:iter1}). On the other hand, if $L_f = {\bar L} = L$, then both iteration complexities in (\ref{eq:iter1}) and (\ref{eq:iter2}) become
\begin{equation}\label{eq:iter3}
\textstyle
O\left( \left( \frac{1}{\delta} + \frac{1}{p} + \sqrt{\frac{L}{\mu}} + \sqrt{\frac{L}{\mu p n}} + \frac{1}{\delta} \sqrt{\frac{(1-\delta) L}{\mu p}} \right) \log \frac{1}{\epsilon} \right).
\end{equation}
\section{Communication Cost}\label{sec:com}
\paragraph{Optimal choice of $p$.} In Algorithm \ref{alg:ec-lkatyusha}, when $w^k$ is updated, the uncompressed vector $\nabla f^{(\tau)}(w^k)$ needs to be communicated. We denote by $\Delta_1$ the communication cost of the uncompressed vector $x\in \mathbb{R}^d$. Define the compression ratio $r(Q)$ of the contraction compressor $Q$ as
\begin{equation}\label{eq:defrQ}
r(Q) \; { := }\; \sup_{x \in \mathbb{R}^d} \left\{ \mathbb{E} \left[ \frac{\mbox{ communication cost of $Q(x)$ }}{ \Delta_1} \right] \right\}.
\end{equation}
Denote by ${\cal T}_k$ the total expected communication cost over $k$ iterations. The expected communication cost at iteration $k \geq 1$ is bounded by $\Delta_1 r(Q) + 1 + p\Delta_1$, where $1$ bit is needed to communicate $u^{k+1}_\tau$, and the expected communication cost at iteration $k =0$ is bounded by $\Delta_1 r(Q) + 1 + \Delta_1$. Hence,
\begin{eqnarray}
{\cal T}_k &\leq & \textstyle \Delta_1 r(Q) + 1 + \Delta_1 + (\Delta_1 r(Q) + 1 + p\Delta_1)k \nonumber \\
&\leq &\textstyle \Delta_1 r(Q) + 1 + \Delta_1 + (\Delta_1 r(Q) + 1)\left( 1 + \frac{p}{r(Q)} \right) k. \label{eq:Tk}
\end{eqnarray}
Next, we discuss how to choose $p$ to minimize the total expected communication cost. From Corollary~\ref{co:eclkatyusha} (i) and (\ref{eq:Tk}), we have $\mathbb{E}[\Phi^k] \leq \epsilon \Phi^0$ for
\begin{eqnarray*}
{\cal T}_k &=& \textstyle O \left( (\Delta_1r(Q) + 1) \left( 1 + \frac{p}{r(Q)} \right) \left( a + \frac{1}{p} + \frac{b}{\sqrt{p}} \right) \log \frac{1}{\epsilon} \right) \\
&=& \textstyle O \left( (\Delta_1r(Q) + 1) \left( a + \frac{pa}{r(Q)} + \frac{1}{p} + \frac{1}{r(Q)} + \frac{b}{\sqrt{p}} + \frac{b\sqrt{p}}{r(Q)} \right) \log \frac{1}{\epsilon} \right),
\end{eqnarray*}
where we denote $a = \frac{1}{\delta} + \sqrt{\frac{L_f}{\mu}}$ and $b = \sqrt{\frac{L}{\mu n}} + \frac{1}{\delta}\sqrt{\frac{(1-\delta){\bar L}}{\mu}} + \sqrt{\frac{(1-\delta)L}{\mu \delta}}$. Noting that $\frac{b}{\sqrt{p}} + \frac{b\sqrt{p}}{r(Q)} \geq \frac{2b}{\sqrt{r(Q)}}$ by the AM--GM inequality, we have
$$
\textstyle
O\left( a + \frac{pa}{r(Q)} + \frac{1}{p} + \frac{1}{r(Q)} + \frac{b}{\sqrt{p}} + \frac{b\sqrt{p}}{r(Q)} \right) \geq O\left( a + \frac{1}{r(Q)} + \frac{b}{\sqrt{r(Q)}} \right),
$$
and this lower bound is attained for $p = O(r(Q))$. Hence, in order to minimize the total expected communication cost, the optimal choice of $p$ is $O(r(Q))$.
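The AM--GM step and the location of the minimizer can be checked numerically; the sketch below uses illustrative values of $b$ and $r(Q)$ (not tied to any dataset) and confirms that $\frac{b}{\sqrt{p}} + \frac{b\sqrt{p}}{r(Q)}$ is bounded below by $\frac{2b}{\sqrt{r(Q)}}$ and minimized at $p = r(Q)$:

```python
import numpy as np

b, r = 2.0, 0.05                     # illustrative values of b and r(Q)
p = np.linspace(1e-4, 1.0, 10000)
vals = b / np.sqrt(p) + b * np.sqrt(p) / r
assert np.all(vals >= 2 * b / np.sqrt(r) - 1e-9)   # AM-GM lower bound
assert abs(p[np.argmin(vals)] - r) < 1e-2          # minimized near p = r(Q)
```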
Under Assumption \ref{as:expcompressor} or \ref{as:topkcompressor}, from Corollary \ref{co:eclkatyusha} (ii), by the same analysis, in order to minimize the total expected communication cost for $\mathbb{E}[\Psi^k] \leq \epsilon \Psi^0$, the optimal choice of $p$ is also $O(r(Q))$.
\paragraph{Comparison to the uncompressed L-Katyusha.} For simplicity, we assume $L_f = {\bar L} = L$ and $\Delta_1 r(Q) \geq O(1)$. From (\ref{eq:iter3}) and (\ref{eq:Tk}), by choosing $p=O(r(Q))$, we have
\begin{equation}\label{eq:Tk22}
\textstyle
{\cal T}_k = O \left( \Delta_1 \left( \frac{r(Q)}{\delta} + \left( r(Q) + \frac{\sqrt{r(Q)}}{\sqrt{n}} + \frac{\sqrt{(1-\delta)r(Q)}}{\delta} \right) \sqrt{\frac{L}{\mu}} \right) \log \frac{1}{\epsilon} \right).
\end{equation}
For uncompressed L-Katyusha, by choosing $p=1$, we have
\begin{equation}\label{eq:Tk33}
\textstyle
{\cal T}_k = O \left( \Delta_1 \sqrt{\frac{L}{\mu}} \log \frac{1}{\epsilon} \right).
\end{equation}
If $\frac{\sqrt{r(Q)}}{\delta} < 1$, then the communication cost in (\ref{eq:Tk22}) is less than that in (\ref{eq:Tk33}). For the TopK compressor, $r(Q) = \frac{K(64 + \lceil \log d \rceil)}{64d}$, and in practice $\delta$ can be much larger than $\frac{K}{d}$, sometimes even of order $O(1)$.
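The {\rm TopK} ratio above can be computed directly; the sketch below (illustrative, interpreting $\log$ as $\log_2$ and assuming 64 bits per value plus $\lceil \log_2 d \rceil$ index bits per kept coordinate) evaluates $r(Q)$:

```python
import math

def r_top_k(k, d, float_bits=64):
    """Compression ratio K*(64 + ceil(log2 d)) / (64 d) for TopK,
    assuming float_bits per value and ceil(log2 d) index bits per
    kept coordinate."""
    return k * (float_bits + math.ceil(math.log2(d))) / (float_bits * d)

# e.g. for K = 1 and d = 123 features (as in the a5a dataset):
ratio = r_top_k(1, 123)
```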
\begin{figure*}[t]
\vspace{0cm}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.45\linewidth]{fig/fig_a5a_iter.pdf}&
\includegraphics[width=0.45\linewidth]{fig/fig_mushrooms_iter.pdf}&
\end{tabular}
\vskip -0.2cm
\caption{The iteration complexity performance of TopK (with $K=1$) vs random dithering (1-bit) vs no compression for the error compensated L-Katyusha on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:iter}
\end{figure*}
\begin{figure*}[t]
\vspace{0cm}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.3\linewidth]{fig/fig_top_k_bits.pdf}&
\includegraphics[width=0.3\linewidth]{fig/fig_rand_dithering_bits.pdf}&
\includegraphics[width=0.3\linewidth]{fig/fig_no_comp_bits.pdf}&
\end{tabular}
\vskip -0.2cm
\caption{The communication complexity performance of TopK (with $K=1$) vs Random dithering 1-bit vs No compression for the error compensated L-Katyusha on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:combit}
\end{figure*}
\section{Experiments}
In this section, we experimentally study the performance of error compensated L-Katyusha used with several contraction compressors on the logistic regression problem for binary classification:
$$
x \mapsto \log\left( 1 + \exp(-y_iA^T_i x) \right) + \frac{\lambda}{2} \|x\|^2,
$$
where $(A_i, y_i)$ are the data points. We use two datasets, namely \texttt{a5a} and \texttt{mushrooms}, from the LIBSVM library \cite{chang2011libsvm}. The regularization parameter is $\lambda=10^{-3}$. The number of nodes in our experiments is $20$, and the optimal solution is obtained by running the uncompressed L-Katyusha for $10^5$ iterations. We use the parameter setting in Corollary \ref{co:eclkatyusha} (ii). We denote the theoretical values of $L_f$ and $L$ by $L_f^{th}$ and $L^{th}$, respectively. Then we choose $L_f = t \cdot L_f^{th}$ and $L=t \cdot L^{th}$, and search for the best $t \in \{ 10^{-k} \mid k=0, 1, 2, \ldots \}$ in each case.
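A small sketch of the averaged objective and its gradient (an illustrative implementation; the data matrix $A$, labels $y_i \in \{\pm 1\}$, and parameter names are stand-ins, not the authors' code):

```python
import numpy as np

def logreg_loss_grad(x, A, y, lam):
    """Average regularized logistic loss
    (1/m) sum_i log(1 + exp(-y_i a_i^T x)) + (lam/2)||x||^2
    and its gradient, where a_i is the i-th row of A and y_i in {-1, +1}."""
    z = y * (A @ x)
    loss = np.mean(np.log1p(np.exp(-z))) + 0.5 * lam * (x @ x)
    # d/dz log(1+exp(-z)) = -1/(1+exp(z)), chain rule through z_i = y_i a_i^T x
    grad = A.T @ (-y / (1.0 + np.exp(z))) / len(y) + lam * x
    return loss, grad
```

Each $f_i^{(\tau)}$ of this form is smooth and, with the $\frac{\lambda}{2}\|x\|^2$ term, strongly convex, so Assumption \ref{as:eclkatyusha} applies.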
\paragraph{Compressors.} In our experiments, we use two contraction compressors: the TopK compressor with $K=1$, and the compressor $\frac{1}{\omega + 1} {\tilde Q}$, where ${\tilde Q}$ is the unbiased random dithering compressor of \cite{Alistarh17} with level $s=2^1$. For the TopK compressor, $r(Q) = \frac{K(64 + \lceil \log d \rceil)}{64d}$. For the random dithering compressor, from Theorem~3.2 in \cite{Alistarh17}, we obtain
$$
\textstyle
r(Q) = \frac{1}{64 d} \left( \left( 3 + \left( \frac{3}{2} + o(1) \right) \log \left( \frac{2(s^2 + d)}{s(s + \sqrt{d})} \right) \right)s(s + \sqrt{d}) + 64 \right).
$$
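This ratio can be evaluated numerically; the sketch below is an approximation (illustrative only) that drops the $o(1)$ term from the displayed bound:

```python
import math

def r_dithering(s, d, float_bits=64):
    """Approximate compression ratio for random dithering with level s,
    dropping the o(1) term from the displayed bound."""
    t = s * (s + math.sqrt(d))
    code_bits = (3 + 1.5 * math.log(2 * (s * s + d) / t)) * t
    return (code_bits + float_bits) / (float_bits * d)

ratio = r_dithering(2, 123)  # dithering level s = 2 on a d = 123 problem
```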
\subsection{TopK vs Random dithering vs No compression}
In this subsection, we compare the uncompressed L-Katyusha with the error compensated L-Katyusha using two contraction compressors: the TopK compressor and the random dithering compressor. For simplicity, we choose $p=r(Q)$, and explore the influence of $p$ in the next subsection. Figure \ref{fig:iter} and Figure \ref{fig:combit} show the iteration complexity and communication complexity, respectively. We can see that compared with the uncompressed L-Katyusha, the error compensated L-Katyusha with the TopK and random dithering compressors needs more iterations to reach the optimal solution, but requires far fewer communicated bits. In particular, in terms of communication complexity, the error compensated L-Katyusha with the Top1 compressor is more than 10 times faster than the uncompressed L-Katyusha.
\begin{figure*}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.45\linewidth]{fig/fig_a5a_top_k_diff_p.pdf}&
\includegraphics[width=0.45\linewidth]{fig/fig_a5a_rand_dithering_diff_p.pdf} \\
\includegraphics[width=0.45\linewidth]{fig/fig_mushrooms_top_k_diff_p.pdf}&
\includegraphics[width=0.45\linewidth]{fig/fig_mushrooms_rand_dithering_diff_p.pdf}&
\end{tabular}
\caption{The influence of $p$ on the communication complexity performance of TopK (with $K=1$) and random dithering (1-bit) for the error compensated L-Katyusha on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:p}
\end{figure*}
\begin{figure*}[t]
\vspace{0cm}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.48\linewidth]{fig/fig_a5a_top_k_compare.pdf}&
\includegraphics[width=0.48\linewidth]{fig/fig_mushrooms_top_k_compare.pdf}&
\end{tabular}
\vskip -0.2cm
\caption{The communication complexity performance of ECSGD vs ECGD vs EC-LKatyusha vs EC-LKatyusha-full for TopK (with $K=1$) on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:topkcompare}
\end{figure*}
\begin{figure*}[t]
\vspace{0cm}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.48\linewidth]{fig/fig_a5a_rand_dithering_compare.pdf}&
\includegraphics[width=0.48\linewidth]{fig/fig_mushrooms_rand_dithering_compare.pdf}&
\end{tabular}
\vskip -0.2cm
\caption{The communication complexity performance of ECSGD vs ECGD vs EC-LKatyusha vs EC-LKatyusha-full for Random dithering 1-bit on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:randomcompare}
\end{figure*}
\begin{figure*}[t]
\vspace{0cm}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.48\linewidth]{fig/fig_a5a_top_k_compareDNIANA.pdf}&
\includegraphics[width=0.48\linewidth]{fig/fig_mushrooms_top_k_compareDNIANA.pdf}&
\end{tabular}
\vskip -0.2cm
\caption{The communication complexity performance of EC-LKatyusha-full vs ADIANA on \texttt{a5a} and \texttt{mushrooms} datasets.}
\label{fig:compareANIANA}
\end{figure*}
\subsection{Influence of $p$}
In this subsection, we study the influence of the parameter $p$ on the communication complexity of the error compensated L-Katyusha with the TopK and random dithering compressors. We choose $p = t \cdot r(Q)$ for $t\in \left\{ 3, 1, \frac{1}{3}, \frac{1}{9} \right\}$. Figure \ref{fig:p} shows that $p=r(Q)$ or $p=\frac{1}{3}r(Q)$ achieves the best performance, which coincides with our analysis in Section \ref{sec:com}.
\subsection{Comparison to ECSGD and ECGD}
In this subsection, we compare error compensated L-Katyusha (ECLK) with error compensated SGD (ECSGD) and error compensated GD (ECGD) for TopK compressor and random dithering compressor. ECGD is actually a special case of ECSGD with $m=1$, where the full gradient $\nabla f^{(\tau)}(x^k)$ is calculated on each node. Let ECLK-full be the special case of ECLK with $m=1$, where the full gradient $\nabla f^{(\tau)}(x^k)$ is calculated on each node. For ECLK, we choose $p=r(Q)$. Figure~\ref{fig:topkcompare} and Figure~\ref{fig:randomcompare} show that ECSGD and ECGD can only converge to a neighborhood of the optimal solution, while ECLK and ECLK-full converge to the optimal solution, and at a linear rate.
\subsection{Comparison to ADIANA}
ADIANA \cite{li2020acceleration} is an accelerated method for any unbiased compressor in which the full gradient is used on each node. In this subsection, we compare EC-LKatyusha-full with ADIANA. For the unbiased compressor ${\tilde Q}$ in ADIANA, we use the random dithering compressor with $s=2$ and $s = \sqrt{d}$. For the contraction compressor, we use the TopK compressor with $K=1$ and $\frac{1}{\omega + 1}{\tilde Q}$, where ${\tilde Q}$ is the random dithering compressor with $s=2$ and $s = \sqrt{d}$. Figure \ref{fig:compareANIANA} shows that, in terms of communication complexity, EC-LKatyusha-full with the Top1 compressor performs best. For the random dithering compressor with $s=2$ or $s = \sqrt{d}$, the communication complexity of EC-LKatyusha-full is also better than that of ADIANA.
\clearpage
\bibliographystyle{plain}
\section{INTRODUCTION}
The injection and manipulation of electron spins in
semiconductors are important issues for spin-based electronics,
``spintronics.''\cite{Zutic} The spin-orbit (SO) interaction
plays an important role in
the manipulation of the spins.
For conduction electrons in direct-gap semiconductors,
the SO interaction is expressed in the same form as that
in vacuum,
that is,
\begin{equation}
H_{\rm SO} =\frac{\lambda}{\hbar} \bm{\sigma} \cdot
\left[\bm{p} \times \bm{\nabla} V(\bm{r}) \right],
\label{eq:SOorg}
\end{equation}
where $V(\bm{r})$ is an external potential and $\bm{\sigma}$
denotes the Pauli matrices for the electron spin $\bm{s}=\bm{\sigma}/2$.
The coupling constant $\lambda$ is significantly enhanced by
the band effect, particularly in narrow-gap semiconductors such
as InAs,~\cite{Winkler} compared with that in vacuum,
$|\lambda|=\hbar^2/(4 m_0^2 c^2)$ with $m_0$ as the
electron mass and $c$ as the velocity of light.
In two-dimensional electron gas (2DEG) in semiconductor
heterostructures, an electric field perpendicular to the
2DEG results in the Rashba SO interaction.~\cite{Rashba}
For the electric field ${\cal E}$ in the $z$ direction,
the substitution of $V(\bm{r})=e {\cal E} z$ into Eq.\
(\ref{eq:SOorg}) yields
\begin{equation}
H_{\rm SO} =\frac{\alpha}{\hbar} (p_y\sigma_x-p_x\sigma_y),
\label{eq:Rashba}
\end{equation}
with $\alpha=e {\cal E} \lambda$. Large values of $\alpha$
have been reported in experiments.\cite{Nitta,Grundler,Yamada}
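This substitution can be verified in one line: with
$\bm{\nabla}V=e{\cal E}\hat{\bm{z}}$ and
$(\bm{p}\times\hat{\bm{z}})_x=p_y$,
$(\bm{p}\times\hat{\bm{z}})_y=-p_x$,
\begin{equation*}
H_{\rm SO}=\frac{\lambda}{\hbar}\,e{\cal E}\,
\bm{\sigma}\cdot(\bm{p}\times\hat{\bm{z}})
=\frac{e{\cal E}\lambda}{\hbar}\,(p_y\sigma_x-p_x\sigma_y),
\end{equation*}
in agreement with Eq.\ (\ref{eq:Rashba}).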
In the spin transistor proposed by Datta and
Das,~\cite{spintransistor} electron spins are injected into
the semiconductor heterostructures from a ferromagnet,
and manipulated by tuning the strength of Rashba SO
interaction by adjusting the electric field ${\cal E}$.
The spins are detected by another ferromagnet.
It is well known, however, that the efficiency of spin
injection from a ferromagnetic metal to semiconductors is
very poor, less than 0.1\%, due to the conductivity
mismatch.~\cite{mismatch}
To overcome this difficulty, the SO interaction may be
exploited for efficient spin injection, in addition to
spin manipulation, in the spin transistor.
Several spin filters have been
proposed utilizing the SO interaction, e.g.,
three- or four-terminal devices based on the spin Hall
effect (SHE),~\cite{Bulgakov,Kiselev,Kiselev2,Pareek}
a triple-barrier tunnel diode,~\cite{3diode}
a quantum point contact,~\cite{Eto05,Silvestrov}
a three-terminal device for the Stern-Gerlach experiment
using a nonuniform SO interaction,~\cite{3termSG}
and an open quantum dot.~\cite{Krich}
Yamamoto and Kramer proposed a three-terminal spin filter
with an antidot, using a SHE caused by the scattering
of electrons at a repulsive potential created by the
antidot.~\cite{Yamamoto}
The SHE is one of the phenomena in which the SO interaction
creates a spin current. There are two types of SHE.
One is an intrinsic SHE which creates a dissipationless spin
current in the perfect crystal.~\cite{Murakami,Sinova}
Murakami \textit{et al}., for example, proposed that
the drift motion of holes in SO-split valence bands induces
an intrinsic SHE.~\cite{Murakami}
The SHE of the hole system has been observed experimentally
by Wunderlich \textit{et al}.,
using a $p$-$n$ junction light-emitting diode.~\cite{Wunderlich}
The other type is an extrinsic SHE caused by the spin-dependent
scattering of electrons by
impurities.~\cite{Dyakonov,Hirsch,Zhang,Engel}
For a centrally symmetric potential around an impurity, $V(r)$
($r=\sqrt{x^2+y^2+z^2}$), Eq.\ (\ref{eq:SOorg}) is rewritten as
\begin{equation}
H_{\rm SO} =-\lambda\frac{2}{r} \frac{dV}{dr}
\bm{l} \cdot \bm{s},
\label{eq:SO3D}
\end{equation}
where $\bm{l}=(\bm{r} \times \bm{p})/\hbar$ is the angular momentum.
This results in skew scattering: upon scattering from
direction $\bm{n}$ to $\bm{n'}$, the spin is polarized
along $(\bm{n} \times \bm{n'})/
|\bm{n} \times \bm{n'}|$.\cite{Mott,Landau}
In an optical experiment on the Kerr rotation, Kato {\it et al}.\
observed a spin accumulation at sample edges along
the electric current in $n$-type GaAs,\cite{Kato}
which is ascribable to the extrinsic SHE.
The experimental result has been quantitatively explained
by a semi-classical theory considering the skew scattering
and ``side jump'' effects.~\cite{Engel}
In our previous paper,~\cite{Eto} we have quantum-mechanically
formulated the extrinsic SHE for 2DEG in semiconductor
heterostructures. We have examined the SHE due to the scattering
by an artificial potential created by an antidot,
a scanning tunneling microscope (STM) tip, etc.
An antidot is a small metallic electrode fabricated above
the 2DEG, which creates an electrically tunable potential
on the 2DEG.
The potential is usually repulsive, but it could be attractive
when a positive voltage is applied to the antidot.
We have found that the
SHE is significantly enhanced by resonant scattering
when the attractive potential is properly tuned.
We have stressed that the extrinsic SHE is easier to understand
in 2DEG than in the three-dimensional case. Let us consider
an axially symmetric potential $V(r)$ ($r=\sqrt{x^2+y^2}$)
created by an antidot on conduction electrons
in the $xy$ plane.
The SO interaction in Eq.\ (\ref{eq:SOorg}) becomes
\begin{equation}
H_\text{SO} = -\lambda \frac{2}{r} \frac{dV}{dr} l_z s_z
\equiv V_1(r) l_z s_z,
\label{eq:SO2D}
\end{equation}
where $l_z$ and $s_z$ are the $z$ component of angular momentum
and spin operators, respectively.
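Equation (\ref{eq:SO2D}) follows directly from
Eq.\ (\ref{eq:SOorg}): for an axially symmetric $V(r)$,
$\bm{\nabla}V=(V'(r)/r)\bm{r}$, and for motion confined to
the $xy$ plane only the $z$ component of
$\bm{r}\times\bm{p}$ survives, so that
\begin{equation*}
\frac{\lambda}{\hbar}\,\bm{\sigma}\cdot
\left[\bm{p}\times\frac{V'(r)}{r}\,\bm{r}\right]
=-\lambda\frac{2}{r}\frac{dV}{dr}\, l_z s_z ,
\end{equation*}
where we used $\bm{p}\times\bm{r}=-\hbar\bm{l}$ (up to
operator ordering) and $\bm{\sigma}=2\bm{s}$.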
$V_1(r)=-(2 \lambda/r)V'(r)$, which has the
same sign as $V(r)$ if $|V(r)|$ is a monotonically decreasing
function of $r$ and $\lambda>0$. Assuming that $V(r)$ is
smooth on the scale of the lattice constant, we adopt the
effective mass equation
\begin{equation}
\left[ -\frac{\hbar^2}{2m^*} \Delta + V(r) + V_1(r)l_z s_z \right]
\psi({\bm r}) =E \psi({\bm r}),
\label{eq:Schroedinger}
\end{equation}
for an envelope function $\psi({\bm r})$ with $m^*$ as the
effective mass. In Eq.\ (\ref{eq:Schroedinger}),
$l_z$ and $s_z$ are conserved
in contrast to the three-dimensional case with
Eq.\ (\ref{eq:SO3D}). For $s_z=\pm 1/2$, an electron experiences
the potential of $V(r) \pm V_1(r) l_z/2$. As a consequence, the
scattering of components of $l_z>0$ ($l_z<0$) is enhanced
(suppressed) by the SO interaction for $s_z=1/2$ when
$V_1(r)$ has the same sign as $V(r)$.
The effect is opposite to that for $s_z=-1/2$. This is the origin
of the extrinsic SHE.
We have formulated the SHE in terms of phase shifts in the
partial wave expansion for 2DEG and shown that the SHE
is largely enhanced by resonant scattering.~\cite{Eto}
These results are summarized in Appendix A.
In the present paper, we consider three- and four-terminal
devices including an antidot, as shown in
Fig.\ \ref{fig:System}, and examine the enhancement of
the SHE. We evaluate the efficiency of the
spin-filtering effect by resonant scattering in the case
of an attractive potential. Although our three-terminal device
is very similar to the spin filter proposed by
Yamamoto and Kramer,~\cite{Yamamoto}
they studied only the case of a repulsive potential.
We show that our device can be a spin filter with an
efficiency of more than 50\% by tuning the potential to
the resonant condition.
\begin{figure}
\includegraphics[width=8.5cm]{fig1.eps}
\caption{Our model for (A) three- and (B) four-terminal devices
for the spin filter. They are fabricated on two-dimensional
electron gas in the $xy$ plane. Both devices include an antidot
at the center of the junction, which is a square area surrounded
by broken lines. Three or four ideal leads connect the junction
to reservoirs. Reservoir 1 is a source from which unpolarized
electrons are injected into the junction.
The voltages are equal in the drains; reservoirs 2 and 3 in model
(A) and reservoirs 2, 3, and 4 in model (B).}
\label{fig:System}
\end{figure}
We numerically solve the effective mass equation
[Eq.\ (\ref{eq:Schroedinger})] with an appropriate boundary condition
for our devices.
A confining potential for the leads (quantum wires) could induce
the SO interaction, following Eq.\
(\ref{eq:SOorg}).~\cite{Hattori,Bellucci,Jiang,Xing}
However, its effect on the electrons is much smaller than
the SO interaction induced by the antidot potential because the
amplitude of the wavefunction is small around the edges of the
leads. Therefore, we consider the antidot-induced SO interaction
only.
We also assume that the antidot potential $V(\bm{r})$ is
independent of $z$. Otherwise, it would create the
Rashba-type SO interaction, Eq.\ (\ref{eq:Rashba}) with
$\alpha=\lambda(\partial V/\partial z)$, in addition to
Eq.\ (\ref{eq:SO2D}).
The Dresselhaus SO interaction is also disregarded, which is
induced by the inversion asymmetry of the crystal.~\cite{Dresselhaus}
These effects will be discussed in the final section.
The electron-electron interaction is not taken into account.
The Coulomb blockade is not relevant to
the case of antidot, in contrast to that of conventional
quantum dot which is connected to the leads via tunnel
barriers.~\cite{Kouwenhoven} In our model, therefore, the influence of
the electron-electron interaction is only quantitative and
could be treated at the mean-field level, as discussed in
the final section. Note that there have been
several studies of spin-current generation based on
the electron-electron interaction in the absence of SO
interaction, e.g., using single or
double quantum dots~\cite{Aono,Feinberg,Pustilnik1}
and quantum wires.~\cite{Sharma,Citro,Pustilnik2,Braunecker,Abanin}
The organization of the present paper is as follows.
In Sec.\ II, we describe our model for three- and four-terminal
devices and the calculation method. The calculation of the
spin-dependent conductance in multi-terminal devices is
formulated using the Green's function in the
tight-binding model.~\cite{Datta,Ando,Ando2,Yamamoto2}
In Sec.\ III, we present numerical results for
the conductance and spin-filtering effect when the strength of
the antidot potential is tuned. We also investigate the density
of states (DOS) in the junction area of the devices to
illustrate virtual bound states. The existence of
virtual bound states at the Fermi level is strong evidence
that resonant scattering takes place when the
spin-filtering effect is remarkably enhanced.
In addition, we perform a channel analysis of the
spin-dependent conductance to closely examine the resonance.
The final section (Sec.\ IV) is devoted to the conclusions
and discussion.
\section{MODEL AND CALCULATION METHOD}
In this section, we explain our model and calculation method.
We numerically solve the effective mass equation in the
tight-binding model for the devices. In the presence of the SO
interaction in Eq.\ (\ref{eq:SO2D}), the $z$ component of
electron spin $s_z$ is conserved although $l_z$ is not a good
quantum number owing to the lack of rotational symmetry in
our devices. Hence we can solve the equation for
$s_z=\pm 1/2$ separately.
\subsection{Model}
We consider three- and four-terminal devices with an antidot,
fabricated on semiconductor heterostructures, as shown
in Fig.\ \ref{fig:System}.
Three or four leads (quantum wires) of width $W$ are joined
to one another at a junction, which is a square area
surrounded by broken lines in the figure.
The leads are formed by hard-wall potential and
connected to the reservoirs. Reservoir 1 is a source from which
unpolarized electrons are injected into the junction.
The electrons are outgoing to the drains; reservoirs 2 and 3
(2, 3, and 4) in the three-terminal (four-terminal) device.
The voltages are equal in all the drains.
An antidot creates an axially symmetric potential $V(r)$, where $r$
is the distance from the center of the junction. It is assumed to
be attractive and given by a smooth potential well,
\begin{equation}
V(r)= \begin{cases}
V_0 & \text{($r-R_0 <-\frac{\Delta R_0}{2}$)} \\
\frac{V_0}{2} \left\{ 1-\sin \left( \pi
\frac{r-R_0}{\Delta R_0} \right) \right\}
& \text{($|r-R_0 |\le \frac{\Delta R_0}{2}$)} \\
0 & \text{($r-R_0 >\frac{\Delta R_0}{2}$)}
\end{cases}
\label{eq:potential}
\end{equation}
with $V_0<0$.
The radius of the potential well is $R_0 =W/4$, and we choose
$\Delta R_0 =0.7 R_0$. The gradient of $V(r)$ gives rise to the
SO interaction in Eq.\ (\ref{eq:SO2D}).
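The shape of this well can be sketched numerically. The following Python snippet (our own illustration; the function name and parameter values are arbitrary choices, not from the paper) implements Eq.\ (\ref{eq:potential}): $V=V_0$ deep inside, zero outside, and a sinusoidal crossover of width $\Delta R_0$ centered at $R_0$.

```python
import numpy as np

def antidot_potential(r, V0, R0, dR0):
    """Smooth well of Eq. (6): V0 for r < R0 - dR0/2, zero for
    r > R0 + dR0/2, sinusoidal crossover in between."""
    r = np.asarray(r, dtype=float)
    crossover = 0.5 * V0 * (1.0 - np.sin(np.pi * (r - R0) / dR0))
    return np.where(r - R0 < -dR0 / 2, V0,
                    np.where(r - R0 > dR0 / 2, 0.0, crossover))

# Paper's geometry: R0 = W/4 and dR0 = 0.7 R0, attractive V0 < 0.
V = antidot_potential(np.array([0.0, 1.0, 2.0]), V0=-2.0, R0=1.0, dR0=0.7)
```

Note that the crossover passes through $V_0/2$ exactly at $r=R_0$, and matches the constant values continuously at $r=R_0\pm\Delta R_0/2$.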
For the numerical study, we discretize the two-dimensional space
with lattice constant $a$ (tight-binding model with a square
lattice). The width of the leads is $W=(N+1)a$ with $N=29$;
the wavefunction becomes zero at the zeroth and ($N+1$)th sites in
the transverse direction of the leads. The Hamiltonian is
\begin{equation}
\begin{split}
H &=t \sum_{i,j,\sigma} (4+\tilde{V}_{i,j}^{\ })
\cre{c}{i,j;\sigma} \ani{c}{i,j;\sigma} \\
& -t \sum_{i,j,\sigma} (T_{i,j ; i+1,j; \sigma}^{\ }
\cre{c}{i,j;\sigma} \ani{c}{i+1,j;\sigma} \\
& + T_{i,j ; i,j+1 ;\sigma}^{\ } \cre{c}{i,j;\sigma}
\ani{c}{i,j+1;\sigma} + \text{h.c.}),
\end{split}
\label{eq:AndoH}
\end{equation}
where $\cre{c}{i,j;\sigma}$ and $\ani{c}{i,j;\sigma}$ are
creation and annihilation operators of an electron at site
$(i,j)$ with spin $\sigma$.
Here, $t=\hbar^2 /(2m^{*} a^2)$, where $m^{*}$ is the effective
mass of electrons.
$\tilde{V}_{i,j}$ is the potential energy at site $(i,j)$,
in units of $t$. The transfer term in the $x$ direction is given by
\begin{equation}
T_{i,j; i+1,j; \pm} = 1\pm i \tilde{\lambda}
(\tilde{V}_{i+1/2,j+1} -\tilde{V}_{i+1/2,j-1}),
\label{eq:xhopping}
\end{equation}
whereas that in the $y$ direction is
\begin{equation}
T _{i,j ; i,j+1; \pm} =1\mp i \tilde{\lambda}
(\tilde{V}_{i+1,j+1/2} -\tilde{V}_{i-1,j+1/2}),
\label{eq:yhopping}
\end{equation}
where $\tilde{\lambda} =\lambda /(4 a^2)$ is the
dimensionless coupling constant of the SO interaction.
$\tilde{V}_{i+1/2,j}$ is
the average of the potential energy at sites
$(i,j)$ and $(i+1,j)$, and $\tilde{V}_{i,j+1/2}$ is
that at sites $(i,j)$ and $(i,j+1)$.
The SO interaction is absent in the leads.
The wavefunction of conduction channel $\mu$ in the leads
is written as
\begin{eqnarray}
\psi_\mu (i^\prime , j^\prime ) & = &
\exp(ik_{\mu}a j^\prime) u_{\mu} (i^\prime),
\\
u_{\mu} (i^\prime) & = &
\sqrt{\frac{2}{N+1}} \sin \left( \frac{\pi \mu i^\prime}{N+1}\right),
\label{eq:Wavefunc}
\end{eqnarray}
where $i^\prime$ and $j^\prime$ denote the site numbers in
the transverse and longitudinal directions of the leads,
respectively.
The wavenumber $k_{\mu}$ is determined from the condition that
the dispersion relation
\begin{equation}
E_\mu (k)=
4t-2t\cos \left( \frac{\pi \mu}{N+1} \right) -2t\cos (k a)
\label{eq:cosBand}
\end{equation}
is identical to the Fermi energy $E_{\rm F}$.
The band edge of channel $\mu$ is defined by
$E_\mu (k=0)$; it is located below
$E_{\rm F}$ for a conduction channel.
For $E_\mu (k=0)>E_\text{F}$, on the other hand, mode $\mu$ is
an evanescent mode. The wavefunction of the
evanescent mode is given by
\begin{equation}
\psi_\mu (i^\prime , j^\prime ) = \exp (-\kappa_{\mu}a j^\prime)
u_{\mu} (i^\prime),
\end{equation}
where $a j^\prime$ is the distance from the junction along the lead and
$\kappa_{\mu}$ satisfies $E_\mu (i\kappa_{\mu})=E_\text{F}$.
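The separation into conduction channels and evanescent modes can be sketched numerically. The following Python snippet (our own illustration; \texttt{classify\_modes} is not part of the paper) solves $E_\mu(k)=E_\text{F}$ for each transverse mode of Eq.\ (\ref{eq:cosBand}), using $E_\text{F}/t=(k_\text{F}a)^2$ and $R_0=W/4=7.5a$; for $k_\text{F}R_0=2$ it reproduces the two conduction channels quoted in Sec.\ III.

```python
import numpy as np

def classify_modes(E_F, N=29, t=1.0, a=1.0):
    """Solve E_mu(k) = E_F for each transverse mode mu.

    Returns {mu: ('conducting', k_mu)} when a real wavenumber exists,
    and {mu: ('evanescent', kappa_mu)} when the band edge E_mu(0) lies
    above E_F, in which case cos(ka) -> cosh(kappa a) under k -> i*kappa.
    """
    modes = {}
    for mu in range(1, N + 1):
        # cos(k a) from E_mu(k) = 4t - 2t cos(pi mu/(N+1)) - 2t cos(k a)
        c = (4 * t - 2 * t * np.cos(np.pi * mu / (N + 1)) - E_F) / (2 * t)
        if abs(c) <= 1.0:
            modes[mu] = ('conducting', np.arccos(c) / a)
        else:  # band edge above E_F; c < -1 does not occur for these E_F
            modes[mu] = ('evanescent', np.arccosh(c) / a)
    return modes

# k_F R_0 = 2 with R_0 = W/4 = 7.5 a, i.e. E_F/t = (k_F a)^2
modes = classify_modes(E_F=(2.0 / 7.5) ** 2)
n_cond = sum(1 for kind, _ in modes.values() if kind == 'conducting')
```

With these parameters the band edges of the two conduction channels fall at $E/E_\text{F}\approx 0.15$ and $0.61$, consistent with the values quoted for Fig.\ \ref{fig:T2}(C).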
\subsection{Calculation method}
For numerical studies, we introduce the Green's function.
The Green's function for the junction area is defined by
\begin{equation}
\hat{G}_{\sigma}(E) =
\left[ EI-\mathcal{H}_{\sigma}-\sum_p \Sigma^p \right]^{-1},
\label{eq:Green}
\end{equation}
where $\mathcal{H}_{\sigma}$ is the truncated matrix of the
Hamiltonian for the junction area ($N^2 \times N^2$) with
spin $\sigma$, and $\Sigma^p$ is the self-energy due to the
coupling to the lead $p$:
\begin{equation}
\Sigma^p =-t \ \tau_p^\dagger U \Lambda U^{-1} \tau_p^{\ }.
\end{equation}
Here $U$ is a unitary matrix,
$U=(\bm{u}_1, \bm{u}_2, \cdots, \bm{u}_N )$, where
$\bm{u}_{\mu}=\left[ u_{\mu}(1), u_{\mu}(2) , \cdots , u_{\mu}(N)
\right]^t$.
$\Lambda={\rm diag}(\lambda_1,\lambda_2,\cdots,\lambda_N)$,
where $\lambda_{\mu}=\exp (ik_{\mu} a)$ for conduction channels
and $\lambda_{\mu}=\exp (-\kappa_{\mu}a)$ for evanescent modes.
$\tau_p$ is a coupling matrix ($N \times N^2$) between
lead $p$ and the junction: $\tau_p (p_i,i)=1$ when $p_i$ is
the site in lead $p$ adjacent to site $i$ in the junction;
$\tau_p (p_i,i)=0$ otherwise.~\cite{Datta}
The spin-dependent conductance from reservoir $p$ to reservoir
$q$ is obtained from the Landauer-B\"{u}ttiker formula.
It is written as
\begin{equation}
G^{qp}_{\sigma}
=\frac{e^2}{h} \text{Tr} \left[ \Gamma^q \hat{G}_{\sigma}(E)
\Gamma^p \hat{G}_{\sigma}^\dagger(E) \right],
\label{eq:Conductance}
\end{equation}
where
\begin{equation}
\Gamma^p =i[\Sigma^p -{\Sigma^p}^\dagger].
\end{equation}
The total conductance is $G^{qp}=G^{qp}_++G^{qp}_-$, whereas
the spin polarization in the $z$ direction is defined by
\begin{equation}
P_z^{\ qp}= \frac{G^{qp}_+ - G^{qp}_-}{G^{qp}_+ + G^{qp}_-}
\label{eq:Polarization}
\end{equation}
for the current from reservoir $p$ to $q$.
To elucidate the virtual bound states in the potential well,
we calculate the DOS in the junction area. It is evaluated from
the Green's function (\ref{eq:Green}) as~\cite{com1}
\begin{equation}
D(E)=-\frac{1}{\pi} \sum_{\sigma} \text{Im}\left[\text{Tr}
\hat{G}_{\sigma}(E) \right].
\label{eq:DOS}
\end{equation}
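The structure of this calculation can be illustrated with a minimal one-dimensional analogue (our own toy sketch, not the actual two-dimensional computation): a tight-binding chain with on-site energy $2t+V_i$ and the exact lead self-energy $\Sigma=-t\,e^{ika}$ attached at each end. For a clean chain, the Landauer transmission is unity inside the band.

```python
import numpy as np

def transmission_and_dos(E, V_onsite, t=1.0):
    """Toy 1D analogue of the Green's-function formulas above:
    retarded G of a chain coupled to two semi-infinite leads,
    transmission Tr[Gamma_L G Gamma_R G^dag], DOS -(1/pi) Im Tr G."""
    n = len(V_onsite)
    H = (np.diag(2.0 * t + np.asarray(V_onsite, dtype=float))
         - t * np.eye(n, k=1) - t * np.eye(n, k=-1))
    # Lead dispersion E = 2t - 2t cos(ka); exact surface self-energy
    ka = np.arccos((2.0 * t - E) / (2.0 * t))
    sigma = -t * np.exp(1j * ka)
    Sigma_L = np.zeros((n, n), dtype=complex)
    Sigma_R = np.zeros((n, n), dtype=complex)
    Sigma_L[0, 0] = sigma
    Sigma_R[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(n) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    T = np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real
    dos = -np.trace(G).imag / np.pi
    return T, dos
```

An on-site barrier or well in \texttt{V\_onsite} then reduces the transmission below unity, the one-dimensional counterpart of the scattering studied here.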
We assume that $\tilde{\lambda} =0.1$ for the strength of
SO interaction, which corresponds to the value for InAs,
$\lambda =117.1\, \mathrm{\AA^2}$,~\cite{Winkler}
with $a=W/30$ and width of the leads
$W \approx 50\,\mathrm{nm}$. The temperature $T=0$.
We focus on the transport from reservoir 1 to 2 and
omit the superscript 21 of $G_{\pm}^{21}$ and $P_z^{21}$
unless otherwise stated.
Note that the conductance from reservoir 1 to 3 satisfies
$G_{\pm }^{31} =G_{\mp }^{21}$ by the symmetry of the system,
for both three- and four-terminal devices. The current from
reservoir 1 to 4 is not spin-polarized in the four-terminal device.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{fig2.eps}
\end{center}
\caption{(Color online)
Numerical results for the three-terminal device with
$k_\text{F} R_0 =2$, where $R_0$ is the radius of attractive
potential. (A) Conductance $G_{\pm}$ from reservoir 1
to 2 in Fig.\ \ref{fig:System}(A) for $s_z=\pm 1/2$ and
(B) spin polarization $P_z$ of the output current
in reservoir 2, as functions of the potential depth $|V_0|$.
In (A), solid and broken lines indicate $G_{+}$ and
$G_{-}$, respectively. A dotted line shows the
conductance per spin in the absence of the SO interaction.
(C) Grayscale plot of the density of states in the
junction area, $D(E)$, in the plane of $|V_0|$ and energy
$E$ of electron.}
\label{fig:T2}
\end{figure}
\section{CALCULATED RESULTS}
We calculate the conductance $G_{\pm}$ for spin $s_z=\pm 1/2$
and spin polarization $P_z$ when the potential depth $|V_0|$
is tuned. We examine three cases of $k_\text{F} R_0 =1$, $2$, and
$3$, where the Fermi wavenumber $k_\text{F}$ is defined by the
Fermi energy $E_\text{F}$ as $E_\text{F} /t =(k_\text{F} a)^2$.\cite{com2}
In the three cases, the Fermi energy $E_\text{F}$ is different,
whereas the radius of the potential well $R_0$ is fixed.
The number of conduction channels in the leads is 1, 2, and 3,
respectively.
Here we discuss the cases of $k_\text{F} R_0 =2$ and $3$.
The numerical result with $k_\text{F} R_0 =1$ is given in Appendix B.
(Surprisingly, we find a perfect spin polarization $P_z=1$
in the case of $k_\text{F} R_0 =1$. However, the transport properties
are quite specific, owing to a strong interference effect
in the case of a single conduction channel.)
\subsection{Case of $k_\text{F} R_0 =2$}
We begin with the three-terminal device in the
presence of two conduction channels in the leads
($k_\text{F} R_0 =2$). Figures \ref{fig:T2}(A) and \ref{fig:T2}(B) show the
conductance $G_{\pm}$ for $s_z=\pm 1/2$ and spin polarization
$P_z$, respectively,
when the potential depth $|V_0|$ is gradually changed.
As seen in Fig.\ \ref{fig:T2}(A),
the conductance $G_{\pm}$ shows three minima as a function of
$|V_0|$. At the first minimum at $|V_0|/E_\text{F} \approx 0.6$, the
difference in the conductance for $s_z=\pm 1/2$ is small.
At the second and third minima at
$|V_0|/E_\text{F} \approx 2$ and $5$, respectively,
the difference is remarkable, which results in a large spin
polarization in the $z$ direction [Fig.\ \ref{fig:T2}(B)].
$P_z$ is enhanced to 25\% around the second minimum and
61\% around the third minimum.
The behavior of $G_{\pm}$ is ascribable to resonant
scattering at the potential well.
The resonant scattering takes place through a virtual bound state
in the potential well, which enhances the electron
scattering to the unitary limit (Appendix A).
This produces the minima of $G_{\pm}$ in both the
three- and four-terminal devices.
(It is not trivial whether resonant scattering makes a minimum
or a maximum of the conductance in multi-terminal devices.
See the discussion in Appendix A.) Around the minima of the
conductance, the difference between $G_{+}$ and $G_{-}$ is
magnified. Consider the third minimum of the conductance
around $|V_0|/E_\text{F} \approx 5$. The minimum of $G_{+}$ is
located at a smaller value of $|V_0|$ than that of $G_{-}$.
In consequence, $P_z$ shows a pair of negative dip
($G_{+}<G_{-}$) and a positive peak ($G_{+}>G_{-}$).
This dip-peak structure of $P_z$ can be understood in terms of
phase shifts when the resonances for $s_z=\pm 1/2$ are well
separated from each other (Appendix A).
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{fig3.eps}
\end{center}
\caption{(Color online)
Numerical results for the four-terminal device with
$k_\text{F} R_0 =2$, where $R_0$ is the radius of attractive
potential. (A) Conductance $G_{\pm}$ from reservoir 1
to 2 in Fig.\ \ref{fig:System}(B) for $s_z=\pm 1/2$,
(B) conductance $G_{\pm}^{41}$ from reservoir 1 to 4, and
(C) spin polarization $P_z$ of the output current
in reservoir 2, as functions of the potential depth $|V_0|$.
In (A) and (B), solid and broken lines indicate $G_{+}$ and
$G_{-}$, or $G_{+}^{41}$ and $G_{-}^{41}$, respectively.
A dotted line shows the
conductance per spin in the absence of the SO interaction.
In (B), solid and broken lines for $G_{\pm}^{41}$
are completely overlapped.
(D) Grayscale plot of the density of states in the
junction area, $D(E)$, in the plane of $|V_0|$ and energy
$E$ of electron.}
\label{fig:C2}
\end{figure}
To confirm the above-mentioned scenario regarding the resonant
scattering, we calculate the DOS in the
junction area. Figure \ref{fig:T2}(C) shows a grayscale
plot of the DOS $D(E)$ in the plane of $|V_0|$ and energy $E$ of
electron. The band edges of the lowest and second conduction
channels in the leads [$E_1 (k=0)$ and $E_2 (k=0)$ in
Eq.\ (\ref{eq:cosBand})] are located at $E/E_\text{F} =0.154$ and $0.615$,
respectively. The sharp peaks of $D(E)$ below
the lowest band edge correspond to the bound states
inside the junction area. With an increase in the potential
depth $|V_0|$, several bound states appear one after another.
The first one is an $S$-like bound state ($l_z=0$)
although the angular momentum $l_z$ is not a good quantum
number in our device because of the lack of rotational
symmetry. The bound state
exists even without the potential well ($|V_0|=0$) in the
junction area,~\cite{Kiselev} and changes to the
$S$-like bound state in the potential well with increasing
$|V_0|$. The state is doubly degenerate due to the Kramers
degeneracy. The next are $P$-like bound states ($l_z=\pm 1$).
They are a pair of Kramers degenerate states.
Then $D$-like bound states ($l_z=\pm 2$) appear, which
are clearly split into two by the SO interaction.
Another $S$-like state is located at approximately the
same energy. Finally $F$-like bound states ($l_z=\pm 3$)
appear in Fig.\ \ref{fig:T2}(C).
The pair of Kramers degenerate states is largely
separated for the $F$-like states.
The peaks of the bound states in $D(E)$ are broadened above the
band edge of the lowest conduction channel in the leads,
which significantly influence
the electron scattering at the Fermi level as virtual bound states.
The second and third minima of the conductance $G_{\pm}$
are located at the positions of the $D$- and $F$-like virtual bound
states at $E_\text{F}$, respectively. This is clear evidence of resonant
scattering through virtual bound states. (At the first minimum of
$G_{\pm}$ around $|V_0|/E_\text{F} = 0.6$, we cannot find
any virtual bound state at the Fermi level. The minimum of
$G_{\pm}$ may not be due to the resonant scattering, but due to
some interference effect around the junction.)
We present the calculated results for the four-terminal device
with $k_\text{F} R_0 =2$ in Fig.\ \ref{fig:C2}:
(A) conductance $G_{\pm}$ for $s_z=\pm 1/2$ from
reservoir 1 to 2 in Fig.\ \ref{fig:System}(B),
(B) conductance $G_{\pm}^{41}$ from reservoir 1 to 4,
and (C) spin polarization $P_z$ of the output current
in reservoir 2, as functions of the potential depth $|V_0|$.
As seen in Fig.\ \ref{fig:C2}(B), $G_{+}^{41}=G_{-}^{41}$
because the SHE does not make a spin polarization in the
output current in reservoir 4.
The characteristics of the conductance $G_{\pm}$
for $s_z=\pm 1/2$ and spin polarization
$P_z$ are almost the same as those in Fig.\ \ref{fig:T2}
for the three-terminal device.
The conductance shows three minima. The second and third
minima are clearly due to resonant
scattering via $D$- or $F$-like virtual bound states,
as seen in the DOS in Fig.\ \ref{fig:C2}(D).
Around the minima, the conductance for $s_z=\pm 1/2$
is largely split by the SO interaction, which results
in a large spin polarization $P_z$.
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{fig4.eps}
\end{center}
\caption{(Color online)
Numerical results for the three-terminal device with
$k_\text{F} R_0 =3$, where $R_0$ is the radius of attractive
potential. (A) Conductance $G_{\pm}$ from reservoir 1
to 2 in Fig.\ \ref{fig:System}(A) for $s_z=\pm 1/2$ and
(B) spin polarization $P_z$ of the output current
in reservoir 2, as functions of the potential depth $|V_0|$.
In (A), solid and broken lines indicate $G_{+}$ and
$G_{-}$, respectively. A dotted line shows the
conductance per spin in the absence of the SO interaction.
(C) Grayscale plot of the density of states in the
junction area, $D(E)$, in the plane of $|V_0|$ and energy
$E$ of electron.
Regarding the result for four-terminal device with
$k_\text{F} R_0 =3$, a broken line in (B) indicates the
spin polarization $P_z$ of the output current in reservoir 2
in Fig.\ \ref{fig:System}(B).}
\label{fig:TC3}
\end{figure}
\subsection{Case of $k_\text{F} R_0 =3$}
Figure \ref{fig:TC3} shows the calculated results for
the three-terminal device in the presence of
three conduction channels in the leads ($k_\text{F} R_0 =3$):
(A) conductance $G_{\pm}$ for $s_z=\pm 1/2$ and
(B) spin polarization $P_z$ in the $z$ direction,
as functions of the potential depth $|V_0|$.
Figure \ref{fig:TC3}(C) shows
a grayscale plot of the density of states $D(E)$ in the
plane of $|V_0|$ and energy $E$ of electron.
In Fig.\ \ref{fig:TC3}(B), a broken line indicates the
spin polarization $P_z$ in the four-terminal device.
The conductance $G_{\pm}$ shows several minima as a function of
potential depth $|V_0|$. The spin polarization $P_z$ is enhanced
around the minima of $G_{\pm}$. These properties can be
understood in the same way as in the preceding subsection.
The polarization $P_z$ is enhanced to $41$\% at $|V_0|/E_\text{F} = 3.1$
in the three-terminal device, and it is enhanced to $49$\%
at $|V_0|/E_\text{F} = 3.2$ in the four-terminal device.
This is due to resonant scattering via $G$-like virtual
bound states ($l_z=\pm 4$).
Compared with the case of two conduction channels
in Figs.\ \ref{fig:T2} and \ref{fig:C2}, the values of the conductance $G_{\pm}$
are larger in the case of three conduction channels
($k_\text{F} R_0 =3$), whereas
the maximum value of spin polarization is almost the same.
This implies a more efficient spin filter in the case of
three conduction channels than in the case of two conduction
channels.
\subsection{Channel analysis for spin filtering}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fig5.eps}
\end{center}
\caption{
A channel analysis for incident waves from reservoir 1
in the three-terminal device with $k_\text{F} R_0 =2$.
(A) Spin polarization $P_z$ of the output current
in reservoir 2 in Fig.\ \ref{fig:System}(A),
as a function of the potential depth $|V_0|$,
for the incident electrons in the lowest channel (solid
line) and second channel (broken line).
(B) Conductance $G_{\pm}$ from reservoir 1 to 2
for the incident electrons in the lowest channel
with $s_z=1/2$ (solid line) and $-1/2$ (broken line).
A dotted line indicates the conductance per spin in the
absence of the SO interaction.}
\label{fig:ch2}
\end{figure}
In cases of $k_\text{F} R_0 =2$ and $3$,
there are two and three conduction channels in the leads,
respectively. To examine the resonant scattering in detail,
we perform a channel analysis of incident waves from reservoir 1.
The results are given only for the three-terminal device
in this subsection.
In the case of $k_\text{F} R_0 =2$, we plot the spin polarization
$P_z$ for the incident electrons in the lowest and second
channels in Fig.\ \ref{fig:ch2}(A). At $|V_0|/E_\text{F} \sim 2$
(resonance via the $D$-like virtual bound state), $P_z$
is enhanced to 73\% for the lowest channel while
it is only 18\% for the second channel. Hence the former
plays a main role in the spin polarization. At
$|V_0|/E_\text{F} \sim 5$ (resonance via the $F$-like virtual bound state),
on the other hand, $|P_z|$ becomes 75\% for the lowest channel
while it becomes 83\% for the second channel. Then both
channels are important for the spin-dependent scattering.
We could selectively inject the lowest channel into the junction,
e.g., using a quantum point contact fabricated on the lead
connected to reservoir 1. Then we could realize a spin filter
with an efficiency of about 75\%. In Fig.\ \ref{fig:ch2}(B),
we plot the conductance $G_{\pm}$ when only the lowest channel
is injected from reservoir 1.
At $|V_0|/E_\text{F} \sim 2$, the conductance almost vanishes
although $P_z$ is enhanced to 73\%. This is due to an interference
effect at the junction as in the case of
single conduction channel with $k_\text{F} R_0 =1$ (Appendix B).
At $|V_0|/E_\text{F} \sim 5$, on the other hand, the total conductance is
$G_++G_-=0.4 (e^2/h)$ and $P_z=-75$\%.
The latter situation is favorable to application to a
spin filter.
A similar channel analysis is given for the case of $k_\text{F} R_0 =3$
in Fig.\ \ref{fig:ch3}. There are three incident channels
in this case.
It is notable that, at $|V_0|/E_\text{F} \sim 2.8$,
a spin polarization of $P_z=62$\% is realized for the incident
electrons in the lowest channel while the total
conductance is as large as $G_++G_-=0.8 (e^2/h)$.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fig6.eps}
\end{center}
\caption{A channel analysis for incident waves from reservoir 1
in the three-terminal device with $k_\text{F} R_0 =3$.
(A) Spin polarization $P_z$ of the output current
in reservoir 2 in Fig.\ \ref{fig:System}(A),
as a function of the potential depth $|V_0|$,
for the incident electrons in the lowest channel (solid
line), second channel (broken line), and third
channel (dotted line).
(B) Conductance $G_{\pm}$ from reservoir 1 to 2
for the incident electrons in the lowest channel
with $s_z=1/2$ (solid line) and $-1/2$ (broken line).
The dotted line indicates the conductance per spin in the
absence of the SO interaction.}
\label{fig:ch3}
\end{figure}
\subsection{Repulsive potential}
We investigate the SHE caused by the scattering by
a repulsive potential, $V_0>0$ in Eq.\ (\ref{eq:potential}).
Figure \ref{fig:R} shows (A) conductance $G_{\pm}$ for
$s_z=\pm 1/2$ and (B) spin polarization $P_z$ in the $z$ direction,
when the potential height $V_0$ is gradually increased.
The extrinsic SHE is expected even with a repulsive
potential.~\cite{Yamamoto} However, the spin-filtering effect
is much weaker than the case with an
attractive potential. In Fig.\ \ref{fig:R}(B), the
spin polarization is at most $P_z \approx 0.3\%$ in the three-terminal
device and $P_z \approx 0.45\%$ in the four-terminal device. In this
case, the resonant scattering does not take place since
virtual bound states are hardly formed in the potential
barrier. This indicates an important role of resonant
scattering in the enhancement of the SHE discussed in the
preceding subsections.
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{fig7.eps}
\end{center}
\caption{Numerical results for the three-terminal device
(curves labeled by $a$) and four-terminal device
(curves labeled by $b$) with repulsive potential [$V_0>0$ in
Eq.\ (\ref{eq:potential})]. $k_\text{F} R_0 =2$, where $R_0$ is
the radius of the potential.
(A) Conductance $G_{\pm}$ from reservoir 1
to 2 in Fig.\ \ref{fig:System}, as a function of potential
height $V_0$, for $s_z=1/2$ (solid lines) and $-1/2$ (broken lines).
(B) Spin polarization $P_z$ of the output current
in reservoir 2.
}
\label{fig:R}
\end{figure}
\section{CONCLUSIONS AND DISCUSSION}
We have numerically studied the extrinsic SHE in
multi-terminal devices including an antidot, fabricated
on semiconductor heterostructures with strong SO interaction.
The antidot creates a tunable potential on the two-dimensional
electron gas in the heterostructures, which may be attractive
as well as repulsive. When an attractive potential
is tuned properly, the resonant scattering via a virtual
bound state takes place, which produces minima of the
conductance from reservoir 1 to 2 in Fig.\ \ref{fig:System}.
Then the difference between the conductances for
$s_z=\pm 1/2$ is enlarged and, as a result, the spin
polarization is significantly enhanced in the direction
perpendicular to the two-dimensional plane.
The spin polarization can be more than 50\%
in our three- and four-terminal devices.
The enhancement of the extrinsic SHE by resonant scattering
has been studied in different systems.
Kiselev and Kim
have proposed a three-terminal spin filter without antidot
in the presence of Rashba SO interaction
[Eq.\ (\ref{eq:Rashba})].~\cite{Kiselev,Kiselev2}
They have pointed out that the spin-filtering effect
is enhanced by resonant scattering at the junction area
when the Fermi energy of the 2DEG is tuned.
In their device, the direction of spin polarization is tilted
from the $z$ direction perpendicular to the plane.
In our device, the spin is polarized in the $z$ direction,
which is easier to detect by an optical experiment on the
Kerr rotation~\cite{Kato} and, above all, more suitable for
spintronic devices.
The extrinsic SHE enhanced by (many-body) resonant scattering
has been examined for metallic systems with magnetic
impurities.~\cite{Fert,Fert2,Guo} In the case of
semiconductor heterostructures, however, we have a great
advantage in the tunability of the scattering potential.
The enhanced SHE caused by resonant scattering at a
single potential can be investigated in detail.
We make some comments regarding our calculations.
(i) The electron-electron interaction has been neglected.
Below the band edge of the lowest conduction channel in
the leads,
we have observed bound states in the density of states
in the potential well.
Since the bound states are occupied by electrons, we
have to consider the electron-electron interaction
between these trapped electrons and the conduction electrons at the Fermi
level. The Hartree potential from the trapped electrons
should be taken into account
although the Coulomb blockade is irrelevant to the case of antidot
potential without tunnel barriers, in contrast to
the case of conventional quantum dots.~\cite{Kouwenhoven}
In our calculated results, therefore, the
values of $|V_0|$ at the resonance are underestimated.
(ii) It is necessary to create a potential as deep as
$|V_0| \sim E_{\rm F}$ in designing the devices.
This might be difficult with a usual antidot structure
fabricated on semiconductor heterostructures.
Alternatively, we could make such a deep potential using
an STM tip, a charged impurity under the antidot, etc.
(iii) We have assumed that the antidot potential $V(\bm{r})$
is independent of $z$. Otherwise, the Rashba-type SO
interaction, Eq.\ (\ref{eq:Rashba}) with
$\alpha=\lambda(\partial V/\partial z)$,
must be added to Eq.\ (\ref{eq:SO2D}).
This would create an effective magnetic field in the $xy$ plane
and thus decrease the spin polarization in the $z$ direction.
The Dresselhaus SO interaction has also been disregarded.
This SO interaction is induced by the inversion asymmetry of
the crystal~\cite{Dresselhaus} and expressed as
\begin{equation}
H_{\rm SO}=\frac{\beta}{\hbar}(-p_x\sigma_x+p_y\sigma_y).
\end{equation}
This would also result in an effective magnetic field in the
$xy$ plane and lessen the spin polarization in the
$z$ direction.
\section*{ACKNOWLEDGMENTS}
This work was partly supported by the Strategic Information
and Communications R\&D Promotion Program (SCOPE) from the
Ministry of Internal Affairs and Communications of Japan, and
by a Grant-in-Aid for Scientific Research from
the Japan Society for the Promotion of Science.
\section{Introduction}
A central hyperplane arrangement, which we will denote by $\mathcal{A}$, is a union of hyperplanes passing through the origin in a vector space $V\cong\mathbb{K}^\ell$, where $\mathbb{K}$ is a field. Write $S$ for the symmetric algebra of $V^*$, which is isomorphic to a polynomial ring in $\ell$ variables. Then $\mathcal{A}$ is the union of the zero loci of the linear forms $\alpha_H$, one for each hyperplane $H$ in $\mathcal{A}$. The module of logarithmic $\mathcal{A}$-derivations, denoted $D(\mathcal{A})$, consists of derivations $\theta\in \mbox{Der}_\mathbb{K}(S)$ satisfying $\theta(\alpha_H)\in \alpha_H S$ for every $H\in\mathcal{A}$. Study of this module was initiated by Saito~\cite{S80}; it is of particular interest to know when $D(\mathcal{A})$ is a free $S$-module. In this case $\mathcal{A}$ is called a free arrangement. One of the central open questions in the theory of hyperplane arrangements, due to Terao, is whether freeness of an arrangement is combinatorial, meaning that it can be detected from the lattice of intersections.
Let $\mathbf{m}:\mathcal{A}\rightarrow \mathbb{Z}_{>0}$ be a function, called a multiplicity, associating to each hyperplane $H$ a positive integer $\mathbf{m}(H)$; the pair $(\mathcal{A},\mathbf{m})$ is called a multi-arrangement. The module of derivations of $(\mathcal{A},\mathbf{m})$, denoted $D(\mathcal{A},\mathbf{m})$, consists of those derivations $\theta\in \mbox{Der}_\mathbb{K}(S)$ satisfying $\theta(\alpha_H)\in \alpha_H^{\mathbf{m}(H)}S$ for every $H\in\mathcal{A}$. If $D(\mathcal{A},\mathbf{m})$ is a free $S$-module we say that $(\mathcal{A},\mathbf{m})$ is free and that $\mathbf{m}$ is a free multiplicity of $\mathcal{A}$. Due to a criterion stated by Ziegler~\cite{ZieglerMulti} and later improved by Yoshinaga~\cite{YoshCharacterizationFreeArr}, freeness of multi-arrangements is closely linked to freeness of arrangements.
There have been major advances in the understanding of multi-arrangements during the last decade. The characteristic polynomial has been defined for multi-arrangements by Abe, Terao, and Wakefield~\cite{TeraoCharPoly} and they show that Terao's factorization theorem holds for this characteristic polynomial.
Moreover, the addition-deletion theorem has also been extended by Abe, Terao, and Wakefield to multi-arrangements~\cite{EulerMult}. This improved theory of multi-arrangements has recently led to remarkable progress in understanding freeness of arrangements and of Terao's question in particular~\cite{AbeDivisional,AbeDeletion}.
In this paper we add to the list of available tools for studying multi-arrangements by introducing a homological characterization for freeness. The characterization involves building a co-chain complex which we denote $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ from modules constructed by Brandt and Terao~\cite{BrandtTerao} to study $k$-formality (see Definition~\ref{defn:DerivationComplex} for details). Chain complexes having very similar properties to $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ appear in the theory of algebraic splines~\cite{Homology,LCoho}; applying techniques of Schenck and Stiller~\cite{Spect,CohVan} yields our main result, stated below.
\begin{thm}[Homological characterization of freeness]\label{thm:Free}
The multi-arrangement $(\mathcal{A},\mathbf{m})$ is free if and only if $H^k(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))=0$ for $k> 0$. Moreover, $D(\mathcal{A},\mathbf{m})$ is locally free if and only if $H^k(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))$ has finite length for all $k>0$.
\end{thm}
Weaker versions of this statement have been proved recently and used to classify free multiplicities on several rank three arrangements~\cite{GSplinesGraphicArr,A3MultiBraid,X3}. For simple arrangements, the forward direction of the first statement in Theorem~\ref{thm:Free} follows from work of Brandt and Terao~\cite{BrandtTerao}. Homological methods are not new in the study of freeness of arrangements; besides the aforementioned work of Brandt and Terao, Yuzvinsky developed and studied the theory of cohomology of sheaves of differentials on arrangement lattices to great effect in~\cite{YuzCohoLocal,YuzFormal,YuzModuli}. While we will not attempt to generalize this framework to multi-arrangements, Yuzvinsky's work, along with Brandt and Terao's, is an important motivation for this paper.
The remainder of the paper is devoted to applications of this homological criterion. In \S~\ref{sec:ChainComplex} we extend a combinatorial bound on projective dimension of $D(\mathcal{A},\mathbf{m})$ due to Kung and Schenck in the case of simple arrangements. In \S~\ref{sec:Formal} we elucidate the connection to $k$-formality and use the homological characterization of Theorem~\ref{thm:Free} to extend a result of Brandt and Terao~\cite{BrandtTerao} to multi-arrangements in Corollary~\ref{cor:MultifreeFormal}.
Following the initial applications of this homological characterization of freeness, we describe in \S~\ref{sec:Computations} how the chain complex $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ can be concretely computed. We have implemented this construction in the computer algebra system Macaulay2~\cite{M2}. The code for constructing the chain complex, as well as a file working through many of the examples in this paper, may be found on the author's website: \href{https://math.okstate.edu/people/mdipasq/Research/Research.html}{math.okstate.edu/$\sim$mdipasq}. In \S~\ref{sec:Computations} we also explicitly work out the structure of $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ for graphic arrangements and show that Theorem~\ref{thm:Free} recovers the main result of~\cite{GSplinesGraphicArr}.
In \S~\ref{sec:TF2}, we study a class of arrangements which we call $TF_2$ arrangements; these are formal arrangements whose relations of length three are linearly independent. We believe this study is well-motivated by the interesting behavior of multi-$TF_2$ arrangements in moduli as well as additional counter-examples to Orlik's conjecture which arise in the process. We illustrate this in \S~\ref{sec:Examples} before proceeding to the body of the paper. If $\mathcal{A}$ is a $TF_2$ arrangement, freeness of $(\mathcal{A},\mathbf{m})$ is determined by the vanishing of the single cohomology module $H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))$, making these arrangements well-suited to the homological methods afforded by Theorem~\ref{thm:Free}. We show that a $TF_2$ arrangement is free if and only if it is supersolvable. We completely classify free multiplicities on non-free $TF_2$ arrangements in Proposition~\ref{prop:H2Pres} and Theorem~\ref{thm:FreeMultNonFreeTF2}. Moreover, we show that free multiplicities of free $TF_2$ arrangements can be determined in a combinatorial fashion from the exponents of its rank two sub-arrangements in Theorem~\ref{thm:FreeMultFreeTF2}.
We also give in \S~\ref{sec:SyzygiesTeraoConj} a syzygetic criterion for freeness of a multi-arrangement of lines, generalizing a criterion for freeness of $A_3$ multi-arrangements from~\cite{A3MultiBraid}. Specializing to simple line arrangements gives an equivalent formulation of Terao's question for line arrangements, phrased in terms of syzygies of a certain module presented by a matrix of linear forms (Question~\ref{ques}).
\textbf{Acknowledgements:} I am indebted to Stefan Tohaneanu for pointing out his paper~\cite{StefanFormal}, which provided the inspiration to generalize the homological arguments in~\cite{GSplinesGraphicArr}. The current work would not be possible without the collaboration of Chris Francisco, Jeff Mermin, Jay Schweig, and Max Wakefield on previous papers~\cite{A3MultiBraid,X3}. Takuro Abe has been a consistent source of inspiring discussions and many patient explanations via e-mail. Computations in the computer algebra system Macaulay2~\cite{M2} were very useful at all stages of research.
\subsection{Examples}\label{sec:Examples}
In this section we illustrate results which can be obtained by applying the homological criterion for freeness (Theorem~\ref{thm:Free}). The three examples in this section are $TF_2$ arrangements, the definition and analysis of which appears in \S~\ref{sec:TF2}.
\begin{exm}\label{ex:multiplicitiessupersolvable}
Consider the line arrangement $\mathcal{A}(\alpha,\beta)$ defined by $xyz(x-\alpha z)(x-\beta z)(y-z)$ where $\alpha,\beta\in\mathbb{K}$. See Figure~\ref{fig:multiplicitiessupersolvable} for a projective picture of this arrangement over $\mathbb{R}$. Clearly if $\alpha\neq\beta,\alpha\neq0,$ and $\beta\neq 0$, then the intersection lattice $L(\mathcal{A}(\alpha,\beta))$ does not change. In fact, the arrangements $\mathcal{A}(\alpha,\beta)$ with $\alpha\neq\beta,\alpha\neq0,$ and $\beta\neq 0$ comprise the moduli space of this lattice (see Appendix~\ref{app:Moduli} for a brief summary of the moduli space of a lattice). It is easily checked that $\mathcal{A}(\alpha,\beta)$ is supersolvable.
We will see in Theorem~\ref{thm:FreeMultFreeTF2} that the freeness of the multi-arrangement $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ can be determined if the exponents of the rank two sub multi-arrangements are known. Write $\mathbf{m}(x),\mathbf{m}(y),\ldots$ for the multiplicity assigned to, respectively, $x=0,y=0,\ldots$. There are two rank-two sub multi-arrangements of $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ defined by
\[
\begin{array}{rl}
\tilde{X}_1= & y^{\mathbf{m}(y)}z^{\mathbf{m}(z)}(y-z)^{\mathbf{m}(y-z)}\mbox{ and} \\
\tilde{X}_2= & x^{\mathbf{m}(x)}z^{\mathbf{m}(z)}(x-\alpha z)^{\mathbf{m}(x-\alpha z)}(x-\beta z)^{\mathbf{m}(x-\beta z)}.\\
\end{array}
\]
In Example~\ref{ex:multiplicitiessupersolvable0}, we deduce from Theorem~\ref{thm:FreeMultFreeTF2} that $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ is free if and only if either $\tilde{X}_1$ or $\tilde{X}_2$ has $\mathbf{m}(z)$ as an exponent. This property is sensitive to the characteristic of $\mathbb{K}$; we will assume in the remainder of this example that $\mathbb{K}$ has characteristic zero.
Write $M_1=\mathbf{m}(y)+\mathbf{m}(z)+\mathbf{m}(y-z)$ and $M_2=\mathbf{m}(x)+\mathbf{m}(z)+\mathbf{m}(x-\alpha z)+\mathbf{m}(x-\beta z)$. If $\mathbb{K}$ has characteristic zero, the exponents of the multi-arrangement $\tilde{X}_1$ are known~\cite{Wakamiko}; $\mathbf{m}(z)$ is an exponent if and only if $M_1\le 2\mathbf{m}(z)+1$. So we assume $M_1>2\mathbf{m}(z)+1$ and determine when $\tilde{X}_2$ has an exponent of $\mathbf{m}(z)$.
It is not difficult to show that if $\mathbf{m}(z)$ is an exponent of $\tilde{X}_2$, then $\mathbf{m}(z)=\max\{\mathbf{m}(x),\mathbf{m}(z),\mathbf{m}(x-\alpha z),\mathbf{m}(x-\beta z)\}$ (see Lemma~\ref{lem:Boolean}). From~\cite{DerProjLine} it is known that $\mathbf{m}(z)$ is an exponent of $\tilde{X}_2$ if $M_2\le 2\mathbf{m}(z)+1$. Moreover it follows from~\cite[Theorem~1.6]{AbeFreeness3Arrangements} that $\mathbf{m}(z)$ is not an exponent of $\tilde{X}_2$ if $M_2>2+2\mathbf{m}(z)$ (this also requires that $\mathbb{K}$ has characteristic zero). However if $M_2=2+2\mathbf{m}(z)$ then it is only known that $\mathbf{m}(z)$ is not an exponent of $\tilde{X}_2$ for \textit{generic} choices of $\alpha$ and $\beta$ (at least if $\mathbb{K}=\mathbb{C}$~\cite{DerProjLine}).
To see what can happen if $M_2=2+2\mathbf{m}(z)$, consider the multi-arrangement $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ defined by
\[
x^3y^3z^3(x-\alpha z)(x-\beta z)(y-z)^3.
\]
Then $\tilde{X}_1=y^3z^3(y-z)^3$ and $\tilde{X}_2=x^3z^3(x-\alpha z)(x-\beta z)$. The exponents of $\tilde{X}_1$ are $(4,5)$, while the exponents of $\tilde{X}_2$ are $(4,4)$ if $\alpha\neq-\beta$ and $(3,5)$ if $\alpha=-\beta$ (see~\cite{ZieglerMulti} or Lemma~\ref{lem:nn11exp}). By Theorem~\ref{thm:FreeMultFreeTF2}, $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ is free if and only if $\alpha=-\beta$.
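As a computational sanity check of the boundary case, the sketch below (using sympy) verifies that, when $\beta=-\alpha$, one explicit pair of derivations of degrees $3$ and $5$ forms a basis for $\tilde{X}_2=x^3z^3(x-\alpha z)(x+\alpha z)$; the derivations, written as coefficient pairs $(\theta(x),\theta(z))$, are our own illustration rather than a basis taken from the cited lemma. Membership is tested by divisibility, and freeness by Saito's determinant criterion.

```python
import sympy as sp

x, z, a = sp.symbols('x z alpha')

# candidate basis of degrees 3 and 5 for the case beta = -alpha;
# these explicit derivations are our own illustration, not from the text
t1 = (x**3, a**2 * z**3)
t2 = (x**5, z**3 * ((1 + a**2) * x**2 - a**2 * z**2))

# the defining forms of X_2 with their multiplicities
forms = [(x, 3), (z, 3), (x - a*z, 1), (x + a*z, 1)]

def in_D(theta):
    """Check the divisibility conditions defining the derivation module."""
    fx, fz = theta
    for alpha, m in forms:
        image = sp.diff(alpha, x) * fx + sp.diff(alpha, z) * fz
        _, r = sp.div(sp.expand(image), sp.expand(alpha**m), x, z)
        if sp.simplify(r) != 0:
            return False
    return True

M = sp.Matrix([t1, t2])
Q = x**3 * z**3 * (x - a*z) * (x + a*z)
print(in_D(t1), in_D(t2), sp.cancel(M.det() / Q))  # True True 1
```

Since both derivations lie in the derivation module and the determinant of their coefficient matrix is exactly $\mathcal{Q}(\tilde{X}_2)$, Saito's criterion confirms the exponents $(3,5)$ in this case.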
As a consequence, we see that for a fixed multiplicity $\mathbf{m}$ the free multi-arrangements $(\mathcal{A},\mathbf{m})$ in the moduli space of $L(\mathcal{A})$ can form a non-empty proper Zariski closed subset, even when $\mathcal{A}$ is supersolvable over a field of characteristic zero. In contrast, Yuzvinsky has shown that free arrangements form a Zariski open subset of the moduli space of $L(\mathcal{A})$~\cite{YuzModuli}.
\begin{figure}
\begin{tikzpicture}[scale=.6]
\draw[thick] (0,0) circle (4cm);
\draw[thick] (-3.8,0)--(3.8,0);
\draw[thick] (0,-3.8)--(0,3.8);
\draw[red,thick] (1,3.3)--(1,-3.3);
\draw[thick] (3.3,1)--(-3.3,1);
\draw[red,thick] (-1.5,-3.1)--(-1.5,3.1);
\draw[->,red,thick] (1.3,2.7)--(2.1,2.7);
\draw[->,red,thick] (-1.3,2.8)--(-.5,2.8);
\node[red] at (1.7,2.3) {$\alpha$};
\node[red] at (-.9,2.3) {$\beta$};
\end{tikzpicture}
\caption{A projective picture emphasizing the moduli in Example~\ref{ex:multiplicitiessupersolvable}}\label{fig:multiplicitiessupersolvable}
\end{figure}
\end{exm}
\begin{exm}\label{ex:TotallyNonFree}
Let $\mathcal{A}(\alpha,\beta)$ be the arrangement with defining polynomial $\mathcal{Q}(\mathcal{A}(\alpha,\beta))=xyz(x-\alpha y)(x-\beta y)(y-z)(x-z)$, where $\alpha,\beta\in\mathbb{K}$. See Figure~\ref{fig:TotallyNonFree} for a projective drawing of this arrangement over $\mathbb{R}$. It is straightforward to show that if $\alpha\neq 1,\beta\neq 1,$ and $\alpha\neq\beta$, then the lattice $L(\mathcal{A}(\alpha,\beta))$ does not change. Just as in Example~\ref{ex:multiplicitiessupersolvable}, these arrangements comprise the moduli space of this lattice. It is easily checked that $\mathcal{A}(\alpha,\beta)$ is not free for any choice of $\alpha,\beta$ since its characteristic polynomial does not factor.
We will see in Theorem~\ref{thm:FreeMultNonFreeTF2} that if $\mathbb{K}$ has characteristic $0$, the multi-arrangement $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ is free if and only if its defining equation has the form
\[
\mathcal{Q}(\mathcal{A},\mathbf{m})=x^ny^nz^n(x-\alpha y)(x-\beta y)(y-z)(x-z),
\]
where $n>1$ is an integer and $\alpha^{n-1}=\beta^{n-1}\neq 1$. In particular, if $\alpha/\beta$ is not a root of unity in $\mathbb{K}$, then $\mathcal{A}$ is totally non-free, meaning it does not admit any free multiplicities. For instance, if $\mathbb{K}=\mathbb{R}$, then $\mathcal{A}$ admits a free multiplicity if and only if $\alpha=-\beta$ (the free multiplicities occur precisely when $n>1$ is odd). Since the arrangements $\mathcal{A}(\alpha,\beta)$ with $\alpha\neq 1,\beta\neq 1,$ and $\alpha\neq\beta$ all have the same intersection lattice, this shows that the property of being totally non-free is not combinatorial. In contrast, Abe, Terao, and Yoshinaga have shown that the property of being totally free is combinatorial~\cite{TeraoTotallyFree}.
\begin{figure}[htp]
\begin{tikzpicture}[scale=.6]
\draw[thick] (0,0) circle (4cm);
\draw[thick] (-3.8,0)--(3.8,0);
\draw[thick] (0,-3.8)--(0,3.8);
\draw[thick] (1,3.3)--(1,-3.3);
\draw[thick] (3.3,1)--(-3.3,1);
\draw[red,thick] (-1.2*1.3,2.4*1.3)--(1.2*1.3,-2.4*1.3);
\draw[red,thick] (-2*1.7,-.7*1.7)--(2*1.7,.7*1.7);
\draw[->,red,thick] (-1.6,2.6) to [out=200, in=70] (-2.7,1.3);
\draw[->,red,thick] (-2.8,-1.3) to [out=300, in=180] (-1.5,-2.3);
\node[red] at (-2,1.8) {$\alpha$};
\node[red] at (-2,-1.7) {$\beta$};
\end{tikzpicture}
\caption{A projective picture emphasizing the moduli in Example~\ref{ex:TotallyNonFree}}\label{fig:TotallyNonFree}
\end{figure}
\end{exm}
\begin{exm}\label{ex:RestrictionHighPdim}
Let $S=\mathbb{K}[x_0,\ldots,x_r]$ and let $\mathcal{A}\subset\mathbb{K}^{r+1}$ be the arrangement defined by
\[
\mathcal{Q}(\mathcal{A})=x_0\left(\prod\limits_{i=1}^r (x^2_i-x^2_0) \right) (x_1-x_2)\cdots (x_{r-1}-x_r)(x_r+x_1).
\]
Let $H$ be the hyperplane defined by $x_0$. In Proposition~\ref{prop:Xr}, we will show that $\mathcal{A}$ is free using Yoshinaga's theorem~\cite{YoshCharacterizationFreeArr} and Theorem~\ref{thm:FreeMultNonFreeTF2}. Moreover, we will prove that $\mbox{pdim}(D(\mathcal{A}^H))=r-3$, the largest possible. In fact, we will show more: the minimal free resolution of $D(\mathcal{A}^H)$ is a truncated and shifted Koszul complex, so it is linear. As with the previous two examples, the key to our analysis is that the restriction $\mathcal{A}^H$ is a $TF_2$ arrangement, which is particularly well suited to the homological methods we introduce in this paper.
This family of examples is interesting because it adds to a short list of arrangements known to fail Orlik's conjecture. This conjecture states that $\mathcal{A}^H$ is free whenever $\mathcal{A}$ is free~\cite{OrlikArr}. The only counterexamples to this conjecture of which we are aware appear in work of Edelman and Reiner~\cite{ReinerCounterEx,ReinerAnBn}. For the small ranks that we have been able to compute, our examples differ from theirs in that $D(\mathcal{A}^H)$ for the examples of Edelman and Reiner seems always to be `almost free'; that is, $D(\mathcal{A}^H)$ has only one more generator than the rank of $\mathcal{A}^H$, and there is only a single relation among these generators. This latter behavior has been studied in a recent article of Abe~\cite{AbeDeletion}.
\end{exm}
\section{Preliminaries}\label{sec:Preliminaries}
Fix a field $\mathbb{K}$, let $V$ be a $\mathbb{K}$-vector space of dimension $\ell$, and $V^*$ the dual vector space. Set $S=\mbox{Sym}(V^*)$, the symmetric algebra on $V^*$. A hyperplane arrangement $\mathcal{A}\subset V$ is a union of hyperplanes $H$ defined by the vanishing of the linear form $\alpha_H\in V^*$; the \textit{defining polynomial} of $\mathcal{A}$ is $\mathcal{Q}(\mathcal{A})=\prod_{H\in\mathcal{A}} \alpha_H$. We will consistently abuse notation and write $H\in\mathcal{A}$ if $H$ is one of the hyperplanes whose union forms $\mathcal{A}$. Moreover, we will write $|\mathcal{A}|$ for the number of hyperplanes in $\mathcal{A}$.
The \textit{rank} of a hyperplane arrangement $\mathcal{A}\subset V$ is $r=r(\mathcal{A}):=\dim V-\dim(\cap_{H\in \mathcal{A}} H)$. The arrangement $\mathcal{A}\subset V$ is called \textit{essential} if $r(\mathcal{A})=\dim V$ and \textit{central} if $\cap_i H_i\neq\emptyset$. We will always assume $\mathcal{A}$ is a central hyperplane arrangement. We refer the reader to the landmark book of Orlik and Terao~\cite{Arrangements} for further details on arrangements.
The intersection lattice $L=L(\mathcal{A})$ of $\mathcal{A}$ is the lattice whose elements (flats) are all possible intersections of the hyperplanes of $\mathcal{A}$, ordered with respect to reverse inclusion. We will use $<$ to denote the ordering on the lattice, so if $X,Y\in L(\mathcal{A})$ and $X\subseteq Y$ as intersections, then $Y\le X$ in $L(\mathcal{A})$. This is a ranked lattice with rank function the codimension of the flat; we denote by $L_i=L_i(\mathcal{A})$ the flats $X\in L(\mathcal{A})$ with rank $i$. Given a flat $X\in L(\mathcal{A})$, the (closed) subarrangement $\mathcal{A}_X$ is the hyperplane arrangement of those hyperplanes of $\mathcal{A}$ which contain $X$, and the \textit{restriction} of $\mathcal{A}$ to $X$, denoted $\mathcal{A}^X$, is the hyperplane arrangement (in the linear space corresponding to $X$) with hyperplanes $\{H\cap X: H \not < X \mbox{ in } L(\mathcal{A})\}$. If $X<Y$, the interval $[X,Y]\subset L(\mathcal{A})$ is the sub-lattice of all flats $Z\in L$ so that $X\le Z\le Y$. This is the intersection lattice of the arrangement $\mathcal{A}^Y_X$.
If $\mathcal{A}\subset V_1$ and $\mathcal{B}\subset V_2$ are two arrangements, then the product of $\mathcal{A}$ and $\mathcal{B}$ is the arrangement
\[
\mathcal{A} \times \mathcal{B}=\{H\oplus V_2:H\in\mathcal{A}\}\cup\{V_1\oplus H':H'\in\mathcal{B}\},
\]
and the arrangements $\mathcal{A},\mathcal{B}$ are \textit{factors} of $\mathcal{A}\times\mathcal{B}$. If an arrangement can be written as a product of two arrangements we say it is \textit{reducible}, otherwise we call it \textit{irreducible}. (Notice that an arrangement is not essential if and only if it has the empty arrangement as a factor).
If $\mathcal{A}\subset V$ is an arrangement the module of derivations of $\mathcal{A}$, denoted $D(\mathcal{A})$, is defined by
\[
D(\mathcal{A})=\{\theta\in\mbox{Der}_\mathbb{K}(S)| \theta(\alpha_H)\in\langle \alpha_H \rangle\mbox{ for all } H\in\mathcal{A} \}.
\]
If $D(\mathcal{A})$ is free as an $S$-module, we say $\mathcal{A}$ is free.
\begin{defn}
A multi-arrangement $(\mathcal{A},\mathbf{m})$ is an arrangement $\mathcal{A}\subset V$, along with a function $\mathbf{m}:\mathcal{A}\rightarrow \mathbb{Z}_{>0}$ assigning a positive integer to every hyperplane. The \textit{defining polynomial} of a multi-arrangement $(\mathcal{A},\mathbf{m})$ is $\mathcal{Q}(\mathcal{A},\mathbf{m}):=\prod_{H\in\mathcal{A}} \alpha_H^{\mathbf{m}(H)}$. The module of multi-derivations $D(\mathcal{A},\mathbf{m})$ is
\[
D(\mathcal{A},\mathbf{m})=\{\theta\in\mbox{Der}_\mathbb{K}(S)|\theta(\alpha_H)\in\langle \alpha_H^{\mathbf{m}(H)} \rangle\mbox{ for all } H\in\mathcal{A}\}.
\]
\end{defn}
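Membership in $D(\mathcal{A},\mathbf{m})$ amounts to finitely many polynomial divisibility tests, one per hyperplane. The following sketch (using sympy; the arrangement and the derivation are chosen purely for illustration) checks the Euler derivation and the derivation $x^2\partial_x+y^2\partial_y$ against the arrangement $V(xy(x-y))$ in $\mathbb{K}^2$ for several multiplicities.

```python
import sympy as sp

x, y = sp.symbols('x y')

def in_D(theta, arrangement):
    """theta: dict mapping each variable to theta(variable);
    arrangement: list of pairs (alpha_H, m_H).  Tests whether
    theta(alpha_H) is divisible by alpha_H**m_H for every H."""
    for alpha, m in arrangement:
        # theta acts on the linear form alpha by linearity
        image = sum(sp.diff(alpha, v) * theta[v] for v in (x, y))
        _, r = sp.div(sp.expand(image), sp.expand(alpha**m), x, y)
        if r != 0:
            return False
    return True

A = [(x, 1), (y, 1), (x - y, 1)]   # the arrangement V(xy(x-y)), m = 1
euler = {x: x, y: y}                # Euler derivation: theta_E(alpha) = alpha
theta = {x: x**2, y: y**2}
print(in_D(euler, A))                               # True
print(in_D(theta, A))                               # True
print(in_D(theta, [(x, 1), (y, 1), (x - y, 2)]))    # False
```

The last test fails because $\theta(x-y)=x^2-y^2=(x-y)(x+y)$ is divisible by $x-y$ but not by $(x-y)^2$.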
\begin{lem}\label{lem:DerivationMatrix}
Let $(\mathcal{A},\mathbf{m})$ be a multi-arrangement in $V\cong \mathbb{K}^\ell$ with $d=|\mathcal{A}|$ hyperplanes. Let $\alpha_i$ be the form defining the hyperplane $H_i$, and set $m_i=\mathbf{m}(H_i)$. The module $D(\mathcal{A},\mathbf{m})$ of multi-derivations on $\mathcal{A}$ is isomorphic to the kernel of the map
\[
\psi:S^{\ell+d}\rightarrow S^d,
\]
where $\psi$ is the matrix
\[
\begin{pmatrix}
& \vline & \alpha_1^{m_1} & & \\
B & \vline & & \ddots & \\
 & \vline & & & \alpha_d^{m_d}
\end{pmatrix}
\]
and $B$ is the matrix with entry $B_{ij}=a_{ij}$, where $\alpha_j=\sum_{i} a_{ij} x_i$.
\end{lem}
\begin{proof}
See the comments preceding~\cite[Theorem~4.6]{DimSeries}.
\end{proof}
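To make the lemma concrete, here is a small sketch (using sympy; the arrangement is chosen for illustration, with rows of the right-hand block indexed by hyperplanes) verifying that a known derivation of $V(xy(x-y))$ gives rise to a kernel element of $\psi$.

```python
import sympy as sp

x, y = sp.symbols('x y')
alphas = [x, y, x - y]                  # defining forms, all with m_i = 1
# one row per hyperplane, recording the coefficients of alpha_j
B = sp.Matrix([[sp.diff(al, v) for v in (x, y)] for al in alphas])
psi = B.row_join(sp.diag(*alphas))      # the matrix [B | diag(alpha_j^{m_j})]

# theta = x^2 d/dx + y^2 d/dy lies in D(A); it yields the kernel vector
# (theta(x), theta(y), -q_1, -q_2, -q_3) with q_j = theta(alpha_j)/alpha_j^{m_j}
v = sp.Matrix([x**2, y**2, -x, -y, -(x + y)])
print((psi * v).expand())               # the zero vector: v is in ker(psi)
```

Each row of $\psi$ encodes the condition $\theta(\alpha_j)\equiv 0 \pmod{\alpha_j^{m_j}}$, with the extra coordinates of the kernel vector recording the quotients.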
If $D(\mathcal{A},\mathbf{m})$ is free as an $S$-module then we say that the multi-arrangement $(\mathcal{A},\mathbf{m})$ is free and $\mathbf{m}$ is a \textit{free multiplicity} of $\mathcal{A}$. If $D(\mathcal{A},\mathbf{m})$ is free there is (by definition) a \textit{basis} of derivations $\theta_1,\ldots,\theta_\ell\in D(\mathcal{A},\mathbf{m})$ so that every $\theta\in D(\mathcal{A},\mathbf{m})$ can be written uniquely as a polynomial combination of $\theta_1,\ldots,\theta_\ell$. If $\mathcal{A}$ is central (which we will assume throughout), we may assume these derivations are homogeneous with degrees $d_i=\deg(\theta_i)$. The multiset $(d_1,\ldots,d_\ell)$ is called the \textit{exponents} of $D(\mathcal{A},\mathbf{m})$. We will always assume $d_1\ge d_2\ge\cdots\ge d_\ell$. Write $|\mathbf{m}|$ for $\sum_{H\in\mathcal{A}}\mathbf{m}(H)$. It follows from Saito's criterion (below) that if $D(\mathcal{A},\mathbf{m})$ is free with exponents $(d_1,\ldots,d_\ell)$ then $\sum_{i=1}^\ell d_i=|\mathbf{m}|$.
\begin{prop}[Saito's criterion]\label{prop:Saito}
Let $(\mathcal{A},\mathbf{m})$ be a central arrangement in a vector space $V$ of dimension $\ell$, and write $\mathbb{K}[x_1,\ldots,x_\ell]$ for $\mbox{Sym}(V^*)$. Suppose $\theta_1,\ldots,\theta_\ell\in D(\mathcal{A},\mathbf{m})$ are derivations with $\theta_i=\sum_{j=1}^\ell \theta_{ij}\frac{\partial}{\partial x_j}$. Write $M=M(\theta_1,\ldots,\theta_\ell)$ for the $\ell\times\ell$ matrix of coefficients $M_{ij}=\theta_{ij}$. Then $D(\mathcal{A},\mathbf{m})$ is free with basis $\theta_1,\ldots,\theta_\ell$ if and only if $\det(M)$ is a nonzero scalar multiple of the defining polynomial $\mathcal{Q}(\mathcal{A},\mathbf{m})$.
\end{prop}
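The criterion is easy to check in small cases. As an illustrative sketch (using sympy), take the arrangement $V(xy(x-y))$ with $\mathbf{m}\equiv 1$ and the standard candidate basis consisting of the Euler derivation and $\theta_2=x^2\partial_x+y^2\partial_y$, both of which lie in $D(\mathcal{A})$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# rows: the Euler derivation and theta_2 = x^2 d/dx + y^2 d/dy
M = sp.Matrix([[x,    y   ],
               [x**2, y**2]])
Q = x * y * (x - y)
print(sp.cancel(M.det() / Q))  # -1: a nonzero scalar, so A is free
# with exponents (1, 2); note 1 + 2 = |m| = 3, as expected
```

Here $\det M = xy^2-x^2y=-\mathcal{Q}(\mathcal{A})$, so Saito's criterion certifies freeness with exponents $(1,2)$.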
If $X\in L(\mathcal{A})$, we write $(\mathcal{A}_X,\mathbf{m}_X)$ for the multi-arrangement $\mathcal{A}_X$ with multiplicity function $\mathbf{m}_X=\mathbf{m}|_{\mathcal{A}_X}$. If $(\mathcal{A}_X,\mathbf{m}_X)$ is free for every $X\in L$ with $X\neq \cap_{H\in\mathcal{A}} H$, then we say $(\mathcal{A},\mathbf{m})$ is \textit{locally free}; equivalently, the associated sheaf $\widetilde{D(\mathcal{A},\mathbf{m})}$ is a vector bundle on $\mathbb{P}^{\ell-1}$.
\begin{prop}\cite[Proposition~1.7]{AbeSignedEliminable}\label{prop:pdimLB}
Let $(\mathcal{A},\mathbf{m})$ be a multi-arrangement, $X\in L(\mathcal{A})$, and $(\mathcal{A}_X,\mathbf{m}_X)$ the corresponding closed subarrangement with restricted multiplicities. Then $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\ge \mbox{pdim}(D(\mathcal{A}_X,\mathbf{m}_X))$.
\end{prop}
\begin{lem}[Ziegler~\cite{ZieglerMulti}]\label{lem:globalUB}
For any arrangement $\mathcal{A}\subset V$, $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\le r(\mathcal{A})-2$. In particular, if $r(\mathcal{A})\le 2$ then $(\mathcal{A},\mathbf{m})$ is free.
\end{lem}
If $\mathcal{A}$ is an arrangement and $H\in\mathcal{A}$, we denote by $(\mathcal{A}^H,\mathbf{m}^H)$ the \textit{Ziegler restriction} of $\mathcal{A}$ to $H$; this is the arrangement $\mathcal{A}^H$ with the multiplicity function $\mathbf{m}^H$ defined by
\[
\mathbf{m}^H(X)=\#\{H'\in\mathcal{A}:H'\cap H=X\}
\]
for every $X\in \mathcal{A}^H$. We include the following criterion for freeness which is due to Yoshinaga~\cite{YoshCharacterizationFreeArr}; the observation that we can restrict to codimension three was made in~\cite[Theorem~4.1]{AbeYoshinaga}.
\begin{thm}\cite[Theorem~2.2]{YoshCharacterizationFreeArr}\label{thm:Yoshinaga}
An arrangement $\mathcal{A}$ over a field of characteristic zero is free if and only if, for some $H\in\mathcal{A}$:
\begin{enumerate}
\item $(\mathcal{A}^H,\mathbf{m}^H)$ is free and
\item $\mathcal{A}_X$ is free for every $X\in L_3(\mathcal{A})$ with $X\neq 0$ and $H<X$.
\end{enumerate}
\end{thm}
The second condition is sometimes stated as `$\mathcal{A}$ is locally free along $H$ in codimension three.'
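The Ziegler restriction multiplicity $\mathbf{m}^H$ is purely combinatorial and can be computed directly from the definition. The following sketch (using sympy; the braid arrangement is chosen for illustration) identifies each flat $H'\cap H$ with a normalized direction vector and counts hyperplanes per flat.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = [x, y, z, x - y, x - z, y - z]   # the braid arrangement A(A_3) in K^3
H = z                                 # Ziegler restriction to H = {z = 0}

def flat(alpha):
    """Canonical direction vector spanning the line V(alpha) ∩ H."""
    M = sp.Matrix([[sp.diff(f, v) for v in (x, y, z)] for f in (alpha, H)])
    d = M.nullspace()[0]
    lead = next(c for c in d if c != 0)
    return sp.ImmutableMatrix(d / lead)   # normalize so equal lines compare equal

mult = {}
for alpha in A:
    if alpha == H:
        continue                          # H itself is not counted
    X = flat(alpha)
    mult[X] = mult.get(X, 0) + 1

print(sorted(mult.values()))  # [1, 2, 2]
# so (A^H, m^H) has defining polynomial x^2 y^2 (x - y) on H
```

The multiplicities sum to $|\mathcal{A}|-1=5$, as they must, since every hyperplane other than $H$ meets $H$ in exactly one flat of $\mathcal{A}^H$.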
\section{The homological criterion}\label{sec:ChainComplex}
Let $(\mathcal{A},\mathbf{m})$ be a multi-arrangement. In this section we prove Theorem~\ref{thm:Free}; we describe the chain complex $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ and prove that $(\mathcal{A},\mathbf{m})$ is free if and only if $H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))=0$ for all $i>0$. The construction of the modules which comprise $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ is due to Brandt and Terao if $\mathbf{m}\equiv 1$~\cite{TeraoPoincare,BrandtTerao}; we make the straightforward observation that the same definitions work also for multi-arrangements. We follow the presentation given in~\cite{BrandtTerao}.
\begin{defn}\label{defn:Dk}
Set $D_0(\mathcal{A},\mathbf{m})=D(\mathcal{A},\mathbf{m})$ and for $1\le k\le r=r(\mathcal{A})$ inductively define $D_k(\mathcal{A},\mathbf{m})$ and $K_k(\mathcal{A},\mathbf{m})$ as the cokernel and kernel, respectively, of the map
\[
\tau_{k-1}=\tau_{k-1}(\mathcal{A}): D_{k-1}(\mathcal{A},\mathbf{m})\rightarrow\bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X),
\]
where, in general, $\tau_{k}$ is the sum of the maps $\phi_{k}(Y):D_{k}(\mathcal{A},\mathbf{m})\rightarrow D_{k}(\mathcal{A}_Y,\mathbf{m}_Y)$ over $Y\in L_k$. For $Y\in L$ with $r(Y)\ge k$, $\phi_k(Y)$ is defined inductively (the map for $k=0$ is the usual inclusion of derivations) via the diagram in Figure~\ref{fig:Dk}:
\begin{figure}[htp]
\centering
\begin{tikzcd}
D_{k-1}(\mathcal{A},\mathbf{m}) \ar{r}{\tau_{k-1}(\mathcal{A})} \ar{d}{\phi_{k-1}(Y)} & \bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X) \ar{r}\ar{d}{p_{k-1}(Y)} & D_k(\mathcal{A},\mathbf{m}) \ar{r}\ar{d}{\phi_k(Y)} & 0\\
D_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y) \ar{r}{\tau_{k-1}(\mathcal{A}_Y)} & \bigoplus\limits_{\substack{X\le Y\\ r(X)=k-1}} D_{k-1}((\mathcal{A}_Y)_X,(\mathbf{m}_Y)_X) \ar{r} & D_k(\mathcal{A}_Y,\mathbf{m}_Y) \ar{r} & 0
\end{tikzcd}
\caption{Diagram for Definition~\ref{defn:Dk}}\label{fig:Dk}
\end{figure}
The center vertical map is the natural projection and the left-hand square commutes, so $\phi_k(Y)$ may be defined so that the right-hand square commutes.
\end{defn}
\begin{remark}\label{rem:D1}
Given an arrangement $\mathcal{A}$, the only flat of $L$ with rank $0$ is $V$, the ambient space of $\mathcal{A}$. The module $D_1(\mathcal{A},\mathbf{m})$ is the cokernel of the map
\[
D_0(\mathcal{A},\mathbf{m})\xrightarrow{\tau_0} \bigoplus\limits_{X\in L_0} D_0(\mathcal{A}_X,\mathbf{m}_X),
\]
in other words the cokernel of the inclusion
\[
D(\mathcal{A},\mathbf{m})\rightarrow D(V)=\mbox{Der}_\mathbb{K}(S)\cong S^\ell,
\]
where $\ell=\dim(V)$.
\end{remark}
\begin{remark}\label{rem:LowDescription}
Fix a basis $x_1,\ldots,x_\ell$ for $S_1=\mbox{Sym}(V^*)_1$ and denote the corresponding basis of $\mbox{Der}_{\mathbb{K}}(S)$ by $\partial_i=\partial/\partial x_i$. Number the hyperplanes of $\mathcal{A}$ as $H_1,\ldots,H_k$. Assume $H_j=V(\alpha_j)$, where $\alpha_j=\alpha_{H_j}=\sum_{i} a_{ij}x_i$. For $H=H_j\in\mathcal{A}$, let $\partial_H=\sum_i a_{ij}\partial_i$.
For $H\in \mathcal{A}$, let $J(H)=\langle \alpha_H^{\mathbf{m}(H)} \rangle$, the ideal of $S$ generated by $\alpha_H^{\mathbf{m}(H)}$. Then $D(\mathcal{A}_H,\mathbf{m}_H)\subset \mbox{Der}_\mathbb{K}(S)$ is isomorphic to $J(H)\partial_H\oplus S^{\ell-1}$, where $\ell=\dim(V)$ and $J(H)\partial_H$ indicates that $J(H)$ sits inside the copy of $S$ corresponding to the basis element $\partial_H$. So $D_1(\mathcal{A}_H,\mathbf{m}_H)$ is the cokernel of the inclusion $D(\mathcal{A}_H,\mathbf{m}_H)\rightarrow \mbox{Der}_{\mathbb{K}}(S)\cong S^\ell,$
which may be identified with $S\partial_H/J(H)\partial_H\cong S/J(H)$. There is then a natural map
\[
\mbox{Der}_{\mathbb{K}}(S)\cong S^\ell\xrightarrow{B} \bigoplus_{H\in\mathcal{A}} \dfrac{S}{J(H)}=\bigoplus_{X\in L_1} D_1(\mathcal{A}_X,\mathbf{m}_X),
\]
where $B$ is the matrix with entries $B_{ij}=a_{ij}$. The kernel of this map is $D(\mathcal{A},\mathbf{m})$, its image is $D_1(\mathcal{A},\mathbf{m})$, and its cokernel is $D_2(\mathcal{A},\mathbf{m})$.
\end{remark}
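For a concrete (and purely illustrative) instance of the map $B$, take the three lines $H_1=V(x)$, $H_2=V(y)$, $H_3=V(x+y)$ in $\mathbb{K}^2$ with multiplicities $m_1,m_2,m_3$; the rows of $B$ are the coefficient vectors of $\alpha_1=x$, $\alpha_2=y$, $\alpha_3=x+y$:

```latex
S^2 \xrightarrow{\;B=\begin{pmatrix}1&0\\0&1\\1&1\end{pmatrix}\;}
\frac{S}{\langle x^{m_1}\rangle}\oplus\frac{S}{\langle y^{m_2}\rangle}\oplus\frac{S}{\langle (x+y)^{m_3}\rangle}.
```

A derivation $\theta=f\partial_x+g\partial_y$ lies in $\ker B=D(\mathcal{A},\mathbf{m})$ exactly when $x^{m_1}\mid f$, $y^{m_2}\mid g$, and $(x+y)^{m_3}\mid f+g$; for $\mathbf{m}\equiv 1$ the Euler derivation $x\partial_x+y\partial_y$ is one such element.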
\begin{remark}
We will discuss computations of $D_k(\mathcal{A},\mathbf{m})$ further in \S~\ref{sec:Computations}.
\end{remark}
Extending Remark~\ref{rem:LowDescription}, we assemble the modules $\bigoplus\limits_{X\in L_k} D_k(\mathcal{A}_X,\mathbf{m}_X)$ into a chain complex.
\begin{defn}\label{defn:DerivationComplex}
Set $\mathcal{D}^k=\bigoplus\limits_{X\in L_k} D_k(\mathcal{A}_X,\mathbf{m}_X)$. Define $\delta^k:\mathcal{D}^k\rightarrow \mathcal{D}^{k+1}$ by the composition $\mathcal{D}^k\rightarrow D_{k+1}(\mathcal{A},\mathbf{m}) \xrightarrow{\tau_{k+1}} \mathcal{D}^{k+1},$ where the first map is the natural surjection from Definition~\ref{defn:Dk}. The derivation complex $\mathcal{D}^\bullet=\mathcal{D}^{\bullet}(\mathcal{A},\mathbf{m})$ is the chain complex with modules $\mathcal{D}^k$ for $k=0,\ldots,r(\mathcal{A})$ and maps $\delta^k:\mathcal{D}^k\rightarrow \mathcal{D}^{k+1}$ for $k=0,\ldots,r(\mathcal{A})-1$.
\end{defn}
\begin{remark}\label{rem:ComplexDiagram}
The derivation complex $\mathcal{D}^\bullet$ is tautologically a complex from the definitions of $D_k(\mathcal{A},\mathbf{m})$ and $\delta^k$. The commutative diagram in Figure~\ref{fig:DerComplex} shows how all the definitions so far fit together. Note that $K_i(\mathcal{A},\mathbf{m})$ from Definition~\ref{defn:Dk} may be identified with $H^i(\mathcal{D}^\bullet)$.
\end{remark}
\begin{remark}
The chain complex $\mathcal{D}^\bullet$ in Definition~\ref{defn:DerivationComplex} is essentially dual to a chain complex described in~\cite{StefanFormal}; we will describe the precise connection in \S~\ref{sec:Formal}.
\end{remark}
\begin{figure}
\centering
\begin{tikzcd}
& & 0\ar[dr] & & & \\
& & & K_{i+1}(\mathcal{A},\mathbf{m})\ar{dr} & & 0 \\
& & & & D_{i+1}(\mathcal{A},\mathbf{m})\ar{ur}\ar{dr}{\tau_{i+1}(\mathcal{A})} \\
& \mathcal{D}^{i-1} \ar{dr}\ar{rr}{\delta^{i-1}} & & \mathcal{D}^i\ar{ur}\ar{rr}{\delta^i} & & \mathcal{D}^{i+1} \\
& & D_i(\mathcal{A},\mathbf{m}) \ar{ur}[swap]{\tau_i(\mathcal{A})}\ar[dr] & & & \\
& K_i(\mathcal{A},\mathbf{m})\ar[ur] & & 0 & &\\
0\ar[ur] & & & & &
\end{tikzcd}
\caption{Components of Definition~\ref{defn:DerivationComplex}}\label{fig:DerComplex}
\end{figure}
\begin{lem}\label{lem:H0Der}
For a multi-arrangement $(\mathcal{A},\mathbf{m})$, $H^0(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\cong D(\mathcal{A},\mathbf{m})$.
\end{lem}
\begin{proof}
This is immediate from Remark~\ref{rem:LowDescription}.
\end{proof}
Now we proceed to the proof of Theorem~\ref{thm:Free}. We use a few preliminary results.
\begin{lem}\cite[Lemma~4.12]{BrandtTerao}\label{lem:local}
For any $k$, the functors $X\mapsto D_k(\mathcal{A}_X,\mathbf{m}_X)$ for $X\in L$ are \textit{local} in the sense of~\cite[Definition~6.4]{SolomonTeraoCharPoly}.
Namely, let $P\in \mbox{Spec}(S)$, $X\in L$, and set $X(P)=\bigcap\limits_{\substack{H\in\mathcal{A}_X\\ \alpha_H\in P}} H$. Then
\begin{itemize}
\item $D_k(\mathcal{A}_X,\mathbf{m}_X)_P=D_k(\mathcal{A}_{X(P)},\mathbf{m}_{X(P)})_P$ and
\item $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_P=\mathcal{D}^\bullet(\mathcal{A}_{X(P)},\mathbf{m}_{X(P)})_P$.
\end{itemize}
\end{lem}
\begin{proof}
For the first bullet, use the fact that $X\mapsto D(\mathcal{A}_X,\mathbf{m}_X)$ is local, the short exact sequences in Definition~\ref{defn:Dk}, and the fact that localization is an exact functor. The second bullet follows from the first.
\end{proof}
\begin{prop}\label{prop:CMCodimK}
Let $X\in L_k$ and $I(X)\subset S$ denote the ideal generated by the linear forms $\alpha_H$ for all $H\le X$. Then $D_k(\mathcal{A}_X,\mathbf{m}_X)$ is Cohen-Macaulay of codimension $k$ and $I(X)$ is its only associated prime.
\end{prop}
\begin{remark}
Proposition~\ref{prop:CMCodimK} is implicit in the proof of~\cite[Proposition~4.13]{BrandtTerao}; we provide a proof for completeness.
\end{remark}
\begin{proof}
As usual, set $\ell=\dim(V)$. By changing coordinates, we may assume $X=V(x_1,\ldots,x_k)$. The result is clear if $k=0$ or $k=1$, so we assume $k\ge 2$. Let $\pi_X:V\to X^\perp=W$ be the projection with center $X$ and set $R=\mbox{Sym}(W^*)\cong \mathbb{K}[x_{k+1},\ldots,x_\ell]$. Then we observe that
\begin{itemize}
\item $\mathcal{A}^\pi=\pi_X(\mathcal{A}_X)$ is an essential arrangement in $W$ of rank $\ell-k=\dim W$,
\item $D_k(\mathcal{A}^\pi,\mathbf{m}_X)\otimes_R S=D_k(\mathcal{A}_X,\mathbf{m}_X)$,
\item $x_{k+1},\ldots,x_{\ell}$ is a regular sequence on $D_k(\mathcal{A}_X,\mathbf{m}_X)$,
\item $D_k(\mathcal{A}_X,\mathbf{m}_X)/\langle x_{k+1},\ldots,x_{\ell} \rangle D_k(\mathcal{A}_X,\mathbf{m}_X)\cong D_k(\mathcal{A}^\pi,\mathbf{m}_X)$,
\item and $\mbox{Ass}(D_k(\mathcal{A}_X,\mathbf{m}_X))=\{PS|P\in\mbox{Ass}(D_k(\mathcal{A}^\pi,\mathbf{m}_X))\},$
\end{itemize}
where the final bullet point follows from~\cite[Theorem~23.2]{Matsumura}, which describes behavior of associated primes under flat extensions. Hence it suffices to show that the only associated prime of $D_k(\mathcal{A},\mathbf{m})$ when $k=r(\mathcal{A})=\dim V$ is the maximal ideal of $S$. Consider the short exact sequence
\[
0\rightarrow D_{k-1}(\mathcal{A},\mathbf{m})\rightarrow \mathcal{D}^{k-1}=\bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X) \rightarrow D_k(\mathcal{A},\mathbf{m}) \rightarrow 0
\]
from Definition~\ref{defn:Dk}, and localize at a prime $P\in\mbox{Spec}(S)$. If $\mbox{codim}(P)\le k-1$, then by induction either $\mathcal{D}^{k-1}_P$ vanishes (in which case $D_k(\mathcal{A},\mathbf{m})_P=0$) or $P=I(X)$ for some $X\in L$ of codimension $k-1$ and $\mathcal{D}^{k-1}_P=D_{k-1}(\mathcal{A}_X,\mathbf{m}_X)_P$. In the latter case, localizing the exact sequence above at $P=I(X)$ and using Lemma~\ref{lem:local} yields the exact sequence
\[
0\rightarrow D_{k-1}(\mathcal{A}_X,\mathbf{m}_{X})_{I(X)}\rightarrow D_{k-1}(\mathcal{A}_X,\mathbf{m}_X)_{I(X)} \rightarrow D_k(\mathcal{A},\mathbf{m})_{I(X)} \rightarrow 0,
\]
so clearly $D_k(\mathcal{A},\mathbf{m})_{I(X)}=0$. Hence the only prime in the support of $D_k(\mathcal{A},\mathbf{m})$ is the homogeneous maximal ideal.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Free}]
By Lemma~\ref{lem:H0Der}, $D(\mathcal{A},\mathbf{m})\cong H^0(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))$. Now we use the following result of Schenck and Stiller (see also~\cite{Spect}).
\begin{thm}\cite[Theorem~3.4]{CohVan}\label{thm:Schenck}
Suppose $C^\bullet=0\rightarrow C^0 \rightarrow C^1 \rightarrow C^2 \rightarrow \cdots\rightarrow C^t \rightarrow 0$ is a complex of $S=\mathbb{K}[x_1,\ldots,x_\ell]$-modules so that, for $k=0,\ldots,t$,
\begin{itemize}
\item $C^k$ is Cohen-Macaulay of codimension $k$ and
\item $H^k(C^\bullet)$ is supported in codimension $\ge k+2$.
\end{itemize}
Then $H^0(C^\bullet)$ is free if and only if $H^k(C^\bullet)=0$ for $k>0$ and locally free if and only if $H^k(C^\bullet)$ has finite length for $k>0$.
\end{thm}
\noindent By Proposition~\ref{prop:CMCodimK}, $\mathcal{D}^k=\mathcal{D}^k(\mathcal{A},\mathbf{m})$ is Cohen-Macaulay of codimension $k$. So we need to show that $H^k(\mathcal{D}^\bullet)$ is supported in codimension at least $k+2$. We use the fact that taking homology commutes with localization. So let $P$ be a prime and consider the localized complex
\[
\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_P=\cdots\rightarrow \mathcal{D}^{k-1}_P \xrightarrow{\delta^{k-1}_P} \mathcal{D}^k_P \xrightarrow{\delta^k_P} \mathcal{D}^{k+1}_P \rightarrow\cdots
\]
If $\mbox{codim}(P)\le k$, then we have seen in the proof of Proposition~\ref{prop:CMCodimK} that the localized map $\delta^{k-1}_P$ becomes an isomorphism, hence $H^k(\mathcal{D}^\bullet)_P=H^k(\mathcal{D}^\bullet_P)=0$. Now suppose $\mbox{codim}(P)=k+1$. If $P$ is not of the form $I(X)$ for $X\in L$ of codimension $k+1$, then let $X\in L_i$ ($i\le k$) be a flat of maximal rank so that $I(X)\subset P$. If $r(X)\le k-1$ then $H^k(\mathcal{D}^\bullet_P)=0$ by Proposition~\ref{prop:CMCodimK}. If instead $X$ has codimension $k$, then the localized map $\delta^{k-1}_P$ again becomes an isomorphism as in the proof of Proposition~\ref{prop:CMCodimK}.
Finally suppose $P=I(X)$ for some $X\in L_{k+1}$. Localizing yields
\[
\bigoplus\limits_{\substack{Y\le X \\ r(Y)=k-1}} D_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y)_{P} \xrightarrow{\delta_P^{k-1}} \bigoplus\limits_{\substack{Z\le X\\ r(Z)=k}} D_k(\mathcal{A}_Z,\mathbf{m}_Z)_{P} \xrightarrow{\delta^k_P} D_{k+1}(\mathcal{A}_X,\mathbf{m}_X)_{P}.
\]
By definition $\delta^{k-1}$ factors through $D_k(\mathcal{A},\mathbf{m})$. Hence $H^k(\mathcal{D}^\bullet)_P$ is the middle homology of the three term complex
\[
0\rightarrow D_k(\mathcal{A}_X,\mathbf{m}_X)_P \xrightarrow{(\tau_k)_P} \bigoplus\limits_{\substack{Z\le X\\ r(Z)=k}} D_k(\mathcal{A}_Z,\mathbf{m}_Z)_{P} \xrightarrow{\delta^k_P} D_{k+1}(\mathcal{A}_X,\mathbf{m}_X)_{P}\rightarrow 0,
\]
which is exact by Definition~\ref{defn:Dk}. It follows that $H^k(\mathcal{D}^\bullet)$ is supported in codimension $\ge k+2$.
\end{proof}
\begin{remark}
In the case of a simple arrangement, the forward implication of Theorem~\ref{thm:Free} follows from~\cite[Proposition~4.13]{BrandtTerao}.
\end{remark}
Theorem~\ref{thm:Schenck} arises from studying the hyperExt modules of $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$. Without the vanishing assumptions we may obtain the following.
\begin{prop}\label{prop:BoundingProjectiveDimension}
Set $p_i=\mbox{pdim}(H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})))$ for $i>0$. Then
\[
\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\le \max\limits_{i>0}\{p_i-i-1\},
\]
with equality if there is a single $i>0$ for which $H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\neq 0$.
\end{prop}
\begin{proof}
See~\cite[Lemma~4.11]{Spect} or~\cite[\S~3]{GSplinesGraphicArr}.
\end{proof}
\subsection{A combinatorial bound on projective dimension}
We close this section by extending a combinatorial bound on projective dimension due to Kung and Schenck for simple arrangements~\cite[Corollary~2.3]{HalKung}. Recall that a generic arrangement of rank $\ell$ is one in which the intersection of every subset of $k\le\ell$ hyperplanes has codimension $k$.
\begin{cor}\label{cor:pdimcircuit}
Let $(\mathcal{A},\mathbf{m})$ be a multi-arrangement. If $\mathcal{A}_X$ is generic with $|\mathcal{A}_X|>r(X)$ for some $X\in L$, then $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\ge r(X)-2$. In particular, if the matroid of $\mathcal{A}$ has a closed circuit of length $m$, then $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\ge m-3$.
\end{cor}
\begin{proof}
If $r(\mathcal{A})=2$ the statement is trivial so we will assume $r(\mathcal{A})>2$. Suppose $\mathcal{A}_X$ is generic with $|\mathcal{A}_X|>r(X)$. By Proposition~\ref{prop:pdimLB}, it suffices to show that $\mbox{pdim}(D(\mathcal{A}_X,\mathbf{m}_X))\ge r(X)-2$. So we assume $\mathcal{A}=\mathcal{A}_X$ is essential and generic of rank $r$ with $|\mathcal{A}|>r$ and prove $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))=r-2$.
In this case we claim the chain complex $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ has the form $S^r \xrightarrow{\delta^0} \bigoplus\limits_{H\in\mathcal{A}}\dfrac{S}{J(H)}$, where $J(H)=\langle \alpha_H^{\mathbf{m}(H)}\rangle$. That $\mathcal{D}^0=S^r$ and $\mathcal{D}^1=\bigoplus_{H\in\mathcal{A}} S/J(H)$ follows from the definition of $\mathcal{D}^\bullet$ and Remark~\ref{rem:LowDescription}. To prove that $\mathcal{D}^k=0$ for $k>1$, it suffices to show that $D_2(\mathcal{A}_Y,\mathbf{m}_Y)=0$ for all $Y\in L_2$. We have
\[
D_2(\mathcal{A}_Y,\mathbf{m}_Y)=\mbox{coker}\left( S^{r}\xrightarrow{\delta^0_Y} \bigoplus_{H\in\mathcal{A}_Y}\dfrac{S}{J(H)}\right).
\]
Since $\mathcal{A}$ is generic, the set $\{\alpha_H:H\in\mathcal{A}_Y\}$ consists of $r(Y)$ linearly independent forms and the coefficient matrix of $\delta^0_Y$ has full rank. So $D_2(\mathcal{A}_Y,\mathbf{m}_Y)=0$.
It follows that $H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))=\mbox{coker}(\delta^0)$. Since $|\mathcal{A}|>r$, we see that $\delta^0$ cannot be surjective, so $H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\neq 0$. We show that $H^1(\mathcal{D}^\bullet)$ is supported only at the maximal ideal. To this end, let $P\in\mbox{Spec}(S)$ be a prime of codimension $k\le r-1$. Write $X(P)=\bigcap\limits_{\substack{H\in\mathcal{A}\\ \alpha_H\in P}} H$. Since $\mathcal{A}$ is generic, $\{\alpha_H: \alpha_H\in P\}$ consists of at most $k$ linearly independent forms, so up to a change of coordinates $\mathcal{A}_{X(P)}$ is a union of coordinate hyperplanes. By Lemma~\ref{lem:local}, $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_P\cong \mathcal{D}^\bullet(\mathcal{A}_{X(P)},\mathbf{m}_{X(P)})_P$. The chain complex $\mathcal{D}^\bullet(\mathcal{A}_{X(P)},\mathbf{m}_{X(P)})$ has the form $S^r\xrightarrow{\delta^0_{X(P)}} \bigoplus\limits_{H\in\mathcal{A}_{X(P)}} \dfrac{S}{J(H)}$, and $\delta^0_{X(P)}$ is clearly surjective, so
\[
H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))_P\cong H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_P)\cong H^1(\mathcal{D}^\bullet(\mathcal{A}_{X(P)},\mathbf{m}_{X(P)})_P)=0.
\]
It follows that $H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))$ is supported only at the maximal ideal. Since $H^1(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\neq 0$, it has finite length, so $\mbox{pdim}(H^1(\mathcal{D}^\bullet))=r$ by the Auslander-Buchsbaum formula, and by Proposition~\ref{prop:BoundingProjectiveDimension}, $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))=r-2$, the maximal projective dimension.
\end{proof}
\begin{remark}
Corollary~\ref{cor:pdimcircuit} implies that generic arrangements are totally non-free; this was first proved by Yoshinaga~\cite{YoshExtendability}.
\end{remark}
\begin{remark}
Even for simple arrangements, the lower bound given by Corollary~\ref{cor:pdimcircuit} may be arbitrarily far off from the actual projective dimension. See Remark~\ref{rem:GeneralizedX3}.
\end{remark}
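For a concrete instance of Corollary~\ref{cor:pdimcircuit} in the smallest interesting case (our own illustration), consider four generic planes in $\mathbb{K}^3$:

```latex
\mathcal{A} \;=\; \{\,V(x),\ V(y),\ V(z),\ V(x+y+z)\,\} \subset \mathbb{K}^3 .
```

Any three of the defining forms are linearly independent, so $\mathcal{A}$ is generic with $|\mathcal{A}|=4>3=r(\mathcal{A})$, and the four hyperplanes form a closed circuit of length $m=4$. Corollary~\ref{cor:pdimcircuit} gives $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))\ge m-3=1$ for every multiplicity $\mathbf{m}$; indeed, the proof above shows $\mbox{pdim}(D(\mathcal{A},\mathbf{m}))=r-2=1$, so no multiplicity on $\mathcal{A}$ is free.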
\section{Multi-arrangements and $k$-formality}\label{sec:Formal}
In this section we will show that if $(\mathcal{A},\mathbf{m})$ is a free multi-arrangement then $\mathcal{A}$ is $k$-formal (in the sense of~\cite{BrandtTerao}) for $2\le k\le r-1$, where $r=r(\mathcal{A})$ is the rank of $\mathcal{A}$ (thus generalizing the result of Brandt and Terao~\cite{BrandtTerao} to multi-arrangements). Once we have set up the notation, this is an immediate corollary of Theorem~\ref{thm:Free}.
We again follow the presentation in~\cite{BrandtTerao}. Fix an arrangement $\mathcal{A}=\cup_{H\in\mathcal{A}} V(\alpha_H)\subset V$. Set $E(\mathcal{A}):=\bigoplus_{H\in\mathcal{A}} e_H\mathbb{K}$ and define $\phi:E(\mathcal{A})\rightarrow V^*$ by $\phi(e_H)=\alpha_H$. Put $F(\mathcal{A})=\ker(\phi)$; this is called the \textit{relation space} of $\mathcal{A}$.
The arrangement $\mathcal{A}$ is $2$-\textit{formal} (or just \textit{formal}) if the relation space is generated by relations among three linear forms. Since three linear forms are dependent if and only if they define a codimension two flat, $2$-formality is equivalent to surjectivity of the map
\[
\pi_2:\bigoplus_{X\in L_2} F(\mathcal{A}_X)\rightarrow F(\mathcal{A}),
\]
where $\pi_2$ is the sum of natural inclusions $F(\mathcal{A}_X)\hookrightarrow F(\mathcal{A})$ for each $X\in L_2$.
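The surjectivity test for $\pi_2$ is linear algebra over $\mathbb{K}$ and is easy to carry out by machine. The following sketch (our own, using the sympy library) checks $2$-formality for the generic arrangement of the four planes $x,y,z,x+y+z$ in $\mathbb{K}^3$: the relation space $F(\mathcal{A})$ is one-dimensional, but no three of the normals are dependent, so there are no relations supported on a codimension-two flat and $\pi_2$ cannot be surjective.

```python
import itertools
from sympy import Matrix

# Normals alpha_H of a generic arrangement of 4 planes in K^3
# (any three of the normals are linearly independent).
normals = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
phi = Matrix(normals).T          # phi: E(A) -> V^*, e_H -> alpha_H (columns)

# Relation space F(A) = ker(phi): here a single relation among all 4 forms.
F = phi.nullspace()
assert len(F) == 1

# pi_2 is built from relations local to codimension-2 flats.  Since the
# arrangement is generic, each codim-2 flat lies on exactly 2 hyperplanes,
# so a local relation would need three dependent normals, and there are none.
local_relations = []
for triple in itertools.combinations(range(len(normals)), 3):
    sub = Matrix([normals[i] for i in triple]).T
    local_relations.extend(sub.nullspace())

# F(A) is nonzero but no local relations exist, so pi_2 is not surjective
# and this arrangement is not 2-formal.
print(len(F), len(local_relations))  # prints: 1 0
```

This numerical check agrees with the discussion above: a one-dimensional relation space with no local relations means $\mathcal{A}$ is not $2$-formal.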
\begin{defn}\label{defn:Rk}
Set $R_0:=T(\mathcal{A})^*\subset V^*$, where $T(\mathcal{A})=\cap_{H\in\mathcal{A}} H$. For $1\le k\le r$, recursively define $R_k(\mathcal{A})$ as the kernel of the map
\[
\pi_{k-1}=\pi_{k-1}(\mathcal{A}):\bigoplus_{X\in L_{k-1}} R_{k-1}(\mathcal{A}_X)\rightarrow R_{k-1}(\mathcal{A}),
\]
where $\pi_k$ is the sum of natural inclusions for $0\le k\le r-1$. To simplify notation, set $\mathcal{R}_k=\mathcal{R}_k(\mathcal{A})=\bigoplus_{X\in L_k} R_k(\mathcal{A}_X)$.
\end{defn}
\begin{remark}
After chasing through the definitions one can see that $R_1(\mathcal{A})$ is the kernel of the restriction map $V^*\rightarrow T(\mathcal{A})^*$ and $R_2(\mathcal{A})=F(\mathcal{A})$. See~\cite{BrandtTerao} for details.
\end{remark}
\begin{defn}
The arrangement $\mathcal{A}$ is
\begin{itemize}
\item $2$-formal if $\mathcal{A}$ is formal
\item $k$-formal, for $3\le k\le r-1$, if $\mathcal{A}$ is $(k-1)$-formal and the map $\pi_k:\mathcal{R}_k=\bigoplus_{X\in L_k} R_k(\mathcal{A}_X)\rightarrow R_k(\mathcal{A})$ is surjective.
\end{itemize}
\end{defn}
In~\cite{StefanFormal}, Tohaneanu gives a homological formulation of $k$-formality as follows. First, notice that there is a natural differential $\delta_k: \mathcal{R}_k\rightarrow \mathcal{R}_{k-1}$ (similar to the differential for $\mathcal{D}^\bullet$) defined as the composition $\mathcal{R}_k \rightarrow R_k(\mathcal{A}) \xrightarrow{\pi_{k-1}} \mathcal{R}_{k-1}$.
\begin{lem}\cite[Lemma~2.5]{StefanFormal}\label{lem:HomologicalCharFormality}
With the differentials $\delta_k$, $1\le k\le r$, the vector spaces $\mathcal{R}_k$ ($0\le k\le r$) form a chain complex $\mathcal{R}_\bullet=\mathcal{R}_\bullet(\mathcal{A})$. The arrangement $\mathcal{A}$ is $k$-formal if and only if $H_i(\mathcal{R}_\bullet)=0$ for $i=1,\ldots,k-1$.
\end{lem}
\begin{remark}
If $\mathbf{m}\equiv 1$ (so $(\mathcal{A},\mathbf{m})$ is a simple arrangement) we will denote $D_k(\mathcal{A},\mathbf{m})$ (recall Definition~\ref{defn:Dk}) and $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ (recall Definition~\ref{defn:DerivationComplex}) by $D_k(\mathcal{A})$ and $\mathcal{D}^\bullet(\mathcal{A})$, respectively.
\end{remark}
Brandt and Terao show that the vector spaces $R_k(\mathcal{A})$ are dual to the degree zero part of $D_k(\mathcal{A})$.
\begin{prop}\cite[Proposition~4.10]{BrandtTerao}\label{prop:Duality}
For $0\le k\le r$, $D_k(\mathcal{A})_0\cong R_k(\mathcal{A})^*$, where $R_k(\mathcal{A})^*$ is the $\mathbb{K}$-vector space dual of $R_k(\mathcal{A})$.
\end{prop}
\begin{lem}\label{lem:DegZeroGen}
The modules $D_k(\mathcal{A},\mathbf{m})$ for $1\le k\le r$ are generated in degree zero. More precisely, we have an isomorphism (as $\mathbb{K}$-vector spaces) $D_k(\mathcal{A},\mathbf{m})_0\cong D_k(\mathcal{A})_0$.
\end{lem}
\begin{proof}
Both claims are clear for $D_1(\mathcal{A},\mathbf{m})$ by Remark~\ref{rem:D1}. By Definition~\ref{defn:Dk}, $D_k(\mathcal{A},\mathbf{m})$ is a quotient of $\bigoplus_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X)$. Hence by induction, $D_k(\mathcal{A},\mathbf{m})$ is also generated in degree zero. Now we have the following commutative diagram:
\begin{center}
\begin{tikzcd}
D_{k-1}(\mathcal{A},\mathbf{m})_0 \ar{r}{\tau_{k-1}(\mathcal{A})} \ar{d}{\cong} & \bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X)_0 \ar{r}\ar{d}{\cong} & D_k(\mathcal{A},\mathbf{m})_0 \ar{r} & 0\\
D_{k-1}(\mathcal{A})_0 \ar{r}{\tau_{k-1}(\mathcal{A})} & \bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X)_0 \ar{r} & D_k(\mathcal{A})_0 \ar{r} & 0,
\end{tikzcd}
\end{center}
where the first two vertical maps are isomorphisms by induction. Hence there is also an isomorphism $D_k(\mathcal{A},\mathbf{m})_0\cong D_k(\mathcal{A})_0$.
\end{proof}
\begin{cor}\label{cor:HomologicalCharFormality}
An arrangement $\mathcal{A}$ is $k$-formal if and only if $H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_0)=0$ for $i=1,\ldots,k-1$.
\end{cor}
\begin{proof}
Immediate from Lemma~\ref{lem:DegZeroGen}, Lemma~\ref{lem:HomologicalCharFormality}, and Proposition~\ref{prop:Duality}.
\end{proof}
\begin{defn}\label{defn:TotallyFormal}
An arrangement $\mathcal{A}$ is \textit{totally formal} if $\mathcal{A}_X$ is $k$-formal for $2\le k\le r(X)$ for all $X\in L(\mathcal{A})$.
\end{defn}
For example, a rank three arrangement is totally formal if and only if it is formal. See Remark~\ref{rem:GraphicTotallyFormal} for further examples of totally formal arrangements.
\begin{cor}\label{cor:MultifreeFormal}
If $(\mathcal{A},\mathbf{m})$ is free then $\mathcal{A}$ is totally formal.
\end{cor}
\begin{proof}
Suppose to the contrary that $\mathcal{A}_X$ is not $k$-formal for some $X\in L$ and $2\le k\le r(X)-1$. Then, by Corollary~\ref{cor:HomologicalCharFormality}, $H^i(\mathcal{D}^\bullet(\mathcal{A}_X,\mathbf{m}_X)_0)\neq 0$ for some $1\le i\le k-1$. Hence by Theorem~\ref{thm:Free}, $D(\mathcal{A}_X,\mathbf{m}_X)$ is not free, whence $D(\mathcal{A},\mathbf{m})$ is not free by Proposition~\ref{prop:pdimLB}.
\end{proof}
\begin{remark}
We will see in Proposition~\ref{prop:H2Pres} that there are totally formal arrangements which nevertheless are totally non-free. See also Example~\ref{ex:ZieglerPair}.
\end{remark}
\begin{remark}\label{rem:CombFreeObstFromFormality}
The ranks of the vector spaces appearing in $\mathcal{R}_\bullet$ are not combinatorial in general (see Example~\ref{ex:ZieglerPair}); however, if $\mathcal{A}$ is totally formal then these ranks are determined by $L(\mathcal{A})$. We can see this by inductively reading off the rank of $R_k(\mathcal{A}_X)$ $(X\in L_k)$ from the Euler characteristic of $\mathcal{R}_\bullet(\mathcal{A}_X)$; since $\mathcal{A}$ is totally formal, the Euler characteristic of $\mathcal{R}_\bullet(\mathcal{A}_X)$ is zero by Lemma~\ref{lem:HomologicalCharFormality}. This yields a number of combinatorial obstructions to freeness which can be read off from $L(\mathcal{A})$ (see for instance~\cite[Corollary~4.16]{BrandtTerao}). By Corollary~\ref{cor:MultifreeFormal}, if any of these combinatorial obstructions is present, the arrangement is totally non-free.
\end{remark}
In the following corollary, we call a hyperplane $H\in\mathcal{A}$ \textit{generic} if, for all $X\in L_2$ so that $H<X$ in $L$, there is a unique hyperplane $H'\neq H$ so that $H'<X$. Moreover, we say $H$ is a \textit{separator} of $\mathcal{A}$ if $r(\mathcal{A}-H)<r(\mathcal{A})$. Part of the following result may be found in~\cite[Proposition~3.9]{BrandtTerao}; we provide a proof for completeness.
\begin{cor}\label{cor:genericHyperplane}
Suppose $\mathcal{A}$ is an arrangement of rank $\ge 2$. If $\mathcal{A}$ has a generic hyperplane which is not a separator, then $\mathcal{A}$ is not formal. In particular, $\mathcal{A}$ is totally non-free.
\end{cor}
\begin{proof}
Let $H\in\mathcal{A}$ be a generic hyperplane which is not a separator, and write $v_H$ for the corresponding row of $\delta^0_S$. The condition that $H$ is not a separator means that we can find $r=r(\mathcal{A})$ linearly independent rows $v_1,\ldots,v_r$ of $\delta^0_S$ with $v_i\neq v_H$ for $i=1,\ldots,r$. Hence there is a relation $\sum_{i=1}^r c_iv_i+c_Hv_H=0$ with $c_H\neq 0$ (for constants $c_1,\ldots,c_r,c_H$). Since $r\ge 2$ and $H$ is generic, there is no way to write this relation as a linear combination of relations among three hyperplanes (since $v_H$ is not in the support of any such relation). So $\mathcal{A}$ is not formal. The final conclusion follows from Corollary~\ref{cor:MultifreeFormal}.
\end{proof}
\section{Computing the chain complex}\label{sec:Computations}
In this section we work out concrete presentations for the modules appearing in $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ and illustrate the constructions via examples, with the goal of studying freeness and projective dimension of $D(\mathcal{A},\mathbf{m})$. The following definition, which constructs $\mathcal{D}^\bullet$ as the cokernel of a map of chain complexes, is analogous to the setup of the Billera-Schenck-Stillman chain complex used in algebraic spline theory~\cite{Homology,LCoho}. Since there are many details, the reader may find it easiest to read the following constructions while following along with Examples~\ref{ex:PointsP1} and~\ref{ex:X3}.
\begin{defn}\label{defn:FormalityComplex}
For a multi-arrangement $(\mathcal{A},\mathbf{m})$, set $S_k(\mathcal{A}_X)=D_k(\mathcal{A}_X,\mathbf{m}_X)_0\otimes_{\mathbb{K}}S$, the degree zero part of $D_k(\mathcal{A}_X,\mathbf{m}_X)$ tensored with $S$, and set $\mathcal{S}^\bullet(\mathcal{A}):=\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})_0\otimes_{\mathbb{K}}S$, so $\mathcal{S}^k=\bigoplus_{X\in L_k} S_k(\mathcal{A}_X)$. These are independent of the choice of multiplicities by Lemma~\ref{lem:DegZeroGen}.
For $Y\in L$, write $\phi^S_k(Y),\tau^S_k$ for the maps $\phi^S_k(Y):S_k(\mathcal{A})\rightarrow S_k(\mathcal{A}_Y),\tau^S_k:S_k(\mathcal{A})\rightarrow \bigoplus_{X\in L_k} S_k(\mathcal{A}_X)$ which are obtained from the maps $\phi_k(Y):D_k(\mathcal{A},\mathbf{m})\rightarrow D_k(\mathcal{A}_Y,\mathbf{m}_Y),\tau_k:D_k(\mathcal{A},\mathbf{m})\rightarrow \bigoplus_{X\in L_k} D_k(\mathcal{A}_X,\mathbf{m}_X)$ (see Definition~\ref{defn:Dk}) by restricting to degree zero and then tensoring with $S$. Likewise write $\delta_S^i$ for the differential of $\mathcal{S}^\bullet$.
Since each of the modules $D_k(\mathcal{A},\mathbf{m})$ is generated in degree zero by Lemma~\ref{lem:DegZeroGen}, there is a natural surjective map $S_k(\mathcal{A}_X)\rightarrow D_k(\mathcal{A}_X,\mathbf{m}_X)$ for every $\mathbf{m}$ and $X\in L_k$. Hence there is a surjective map of complexes $\mathcal{S}^\bullet(\mathcal{A})\rightarrow \mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ for any multiplicity $\mathbf{m}$.
For each surjection $S_k(\mathcal{A}_X)\rightarrow D_k(\mathcal{A}_X,\mathbf{m}_X)$, write $J_k(\mathcal{A}_X,\mathbf{m}_X)$ for the kernel of this surjection, and write $\mathcal{J}^\bullet(\mathcal{A},\mathbf{m})$ for the kernel of the surjection $\mathcal{S}^\bullet(\mathcal{A})\rightarrow \mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$, so $\mathcal{J}^k(\mathcal{A},\mathbf{m})=\bigoplus_{X\in L_k} J_k(\mathcal{A}_X,\mathbf{m}_X)$. Denote by $\phi^J_i(Y),\tau^J_i,$ and $\delta_J^i$ the maps obtained from restricting $\phi^S_i(Y),\tau^S_i,$ and $\delta_S^i$. See Figure~\ref{fig:FormalityComplexes}, which shows the short exact sequence of complexes $0\rightarrow \mathcal{J}^\bullet \rightarrow \mathcal{S}^\bullet\rightarrow \mathcal{D}^\bullet \rightarrow 0$.
\end{defn}
\begin{figure}
\centering
\begin{tikzcd}
\mathcal{J}^\bullet(\mathcal{A},\mathbf{m}) \cdots \ar{r} & \bigoplus\limits_{X\in L_{k-1}} J_{k-1}(\mathcal{A}_X,\mathbf{m}_X) \ar{r}{\delta_J^{k-1}} \ar{d} & \bigoplus\limits_{Y\in L_k} J_k(\mathcal{A}_Y,\mathbf{m}_Y) \ar{d}\ar{r}{\delta_J^k} &\cdots \\
\mathcal{S}^\bullet(\mathcal{A}) \cdots \ar{r} & \bigoplus\limits_{X\in L_{k-1}} S_{k-1}(\mathcal{A}_X) \ar{r}{\delta_S^{k-1}} \ar{d} & \bigoplus\limits_{Y\in L_k} S_k(\mathcal{A}_Y) \ar{d}\ar{r}{\delta_S^k} &\cdots \\
\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}) \cdots \ar{r} & \bigoplus\limits_{X\in L_{k-1}} D_{k-1}(\mathcal{A}_X,\mathbf{m}_X) \ar{r}{\delta^{k-1}} & \bigoplus\limits_{Y\in L_k} D_k(\mathcal{A}_Y,\mathbf{m}_Y) \ar{r}{\delta^k} &\cdots
\end{tikzcd}
\caption{Short exact sequence of complexes from Definition~\ref{defn:FormalityComplex}}\label{fig:FormalityComplexes}
\end{figure}
\begin{remark}\label{rem:FormalS}
By Corollary~\ref{cor:HomologicalCharFormality}, $\mathcal{A}$ is $k$-formal if and only if $H^i(\mathcal{S}^\bullet(\mathcal{A}))=0$ for $1\le i\le k-1$. Furthermore $\mathcal{A}$ is essential if and only if $H^0(\mathcal{S}^\bullet(\mathcal{A}))=0$.
\end{remark}
\begin{remark}\label{rem:LES}
The short exact sequence $0\rightarrow \mathcal{J}^\bullet\rightarrow \mathcal{S}^\bullet \rightarrow \mathcal{D}^\bullet\rightarrow 0$ gives rise to a long exact sequence starting as
\[
0\rightarrow H^0(\mathcal{S}^\bullet) \rightarrow H^0(\mathcal{D}^\bullet)\cong D(\mathcal{A},\mathbf{m})\xrightarrow{\psi} H^1(\mathcal{J}^\bullet) \rightarrow H^1(\mathcal{S}^\bullet)\rightarrow\cdots,
\]
where $\psi$ is defined on $\theta\in D(\mathcal{A},\mathbf{m})$ by $\psi(\theta)=(\theta(\alpha_H))_{H\in\mathcal{A}}\in \bigoplus_{H\in\mathcal{A}} J(H)$. The map $\psi$ is an isomorphism if (and only if) $\mathcal{A}$ is essential and formal.
\end{remark}
\begin{remark}\label{rem:DJiso}
If $\mathcal{A}$ is essential and $k$-formal for all $k\ge 2$, then the long exact sequence from Remark~\ref{rem:LES} breaks into isomorphisms $H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\cong H^i(\mathcal{J}^\bullet(\mathcal{A},\mathbf{m}))$ for $i\ge 0$ (by Remark~\ref{rem:FormalS}). In particular, if we wish to determine free multiplicities on an arrangement, we may assume by Corollary~\ref{cor:MultifreeFormal} that $\mathcal{A}$ is $k$-formal for all $k\ge 2$, hence the isomorphism $H^i(\mathcal{D}^\bullet(\mathcal{A},\mathbf{m}))\cong H^i(\mathcal{J}^\bullet(\mathcal{A},\mathbf{m}))$ holds for $i\ge 0$.
\end{remark}
\begin{lem}\label{lem:ComputingJk}
Let $(\mathcal{A},\mathbf{m})$ be a multi-arrangement. If $H\in L_1$, then set $J(H)=J_1(\mathcal{A}_H,\mathbf{m}(H))=\langle \alpha_H^{\mathbf{m}(H)}\rangle$. If $X\in L_k$ where $k>1$, then the module $J_k(\mathcal{A}_X,\mathbf{m}_X)$ satisfies
\[
\begin{array}{rl}
J_k(\mathcal{A}_X,\mathbf{m}_X) &=\delta^{k-1}_S\left(\bigoplus\limits_{\substack{Y\in L_{k-1}\\ Y<X}} J_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y) \right)\\[20 pt]
& =\sum\limits_{\substack{Y\in L_{k-1}\\ Y<X}} \phi^S_{k-1}(X)(J_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y))
\end{array}
\]
with $\delta^{k-1}_S:\mathcal{S}^{k-1}\rightarrow\mathcal{S}^k$ and $\phi^S_{k-1}(X):S_{k-1}(\mathcal{A}_Y)\rightarrow S_k(\mathcal{A}_X)$ the maps from Definition~\ref{defn:FormalityComplex}.
\end{lem}
\begin{proof}
For simplicity we take $\mathcal{A}_X=\mathcal{A}$, so $\mathcal{A}$ has rank $k$ and $X=\cap_{H\in\mathcal{A}} H$. The tail end of the short exact sequence of complexes $0\rightarrow \mathcal{J}^\bullet\rightarrow \mathcal{S}^\bullet\rightarrow \mathcal{D}^\bullet \rightarrow 0$ is shown below.
\begin{center}
\begin{tikzcd}
\mathcal{J}^{k-2}\ar{r}{\delta^{k-2}_J}\ar{d} & \mathcal{J}^{k-1}=\bigoplus\limits_{Y\in L_{k-1}} J_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y)\ar{r}{\delta^{k-1}_J}\ar{d} & \mathcal{J}^k=J_k(\mathcal{A},\mathbf{m})\ar{d} \\
\mathcal{S}^{k-2}\ar{r}{\delta^{k-2}_S}\ar{d} & \mathcal{S}^{k-1}=\bigoplus\limits_{Y\in L_{k-1}} S_{k-1}(\mathcal{A}_Y)\ar{r}{\delta^{k-1}_S}\ar{d} & \mathcal{S}^k=S_k(\mathcal{A})\ar{d}\\
\mathcal{D}^{k-2}\ar{r}{\delta^{k-2}} & \mathcal{D}^{k-1}=\bigoplus\limits_{Y\in L_{k-1}} D_{k-1}(\mathcal{A}_Y,\mathbf{m}_Y) \ar{r}{\delta^{k-1}} & \mathcal{D}^k=D_k(\mathcal{A},\mathbf{m})\\
\end{tikzcd}
\end{center}
The differentials $\delta_S^{k-2}$ and $\delta^{k-2}$ factor through $S_{k-1}(\mathcal{A})$ and $D_{k-1}(\mathcal{A},\mathbf{m})$, respectively, by Definition~\ref{defn:DerivationComplex}. It follows from Definition~\ref{defn:Dk} that $H^{k-1}(\mathcal{S}^\bullet)=H^{k-1}(\mathcal{D}^\bullet)=H^k(\mathcal{S}^\bullet)=H^k(\mathcal{D}^\bullet)=0$. Hence the long exact sequence in cohomology yields $H^k(\mathcal{J}^\bullet)=0$; in other words, $\delta^{k-1}_J$ is surjective. The first equality follows from commutativity of the diagram. By definition, $\delta^{k-1}_J=\tau^J_{k-1}=\sum_{Y\in L_{k-1}} \phi_{k-1}^J(X)$. Since $\phi_{k-1}^J(X)$ is the restriction of $\phi_{k-1}^S(X)$, this proves the second equality.
\end{proof}
From Lemma~\ref{lem:ComputingJk}, we see that in order to explicitly determine the complexes $\mathcal{J}^\bullet$ and $\mathcal{D}^\bullet$, it suffices to determine the maps $\phi^S_{k}(X)$ for $X\in L_{k+1}$, or equivalently the differential $\delta^k_S$ of the complex $\mathcal{S}^\bullet$. In \S~\ref{sec:Formal}, we saw that $\mathcal{S}^\bullet\cong (\mathcal{R}_\bullet^*)\otimes_{\mathbb{K}} S$, so the differential $\delta^k_S$ is just the transpose of the differential $\delta_k$ in the complex $\mathcal{R}_\bullet$. By examining these matrices as they appear in~\cite{BrandtTerao} and~\cite{StefanFormal}, we obtain the following recipe for constructing $\delta^k_S$.
\begin{lem}\label{lem:SkDifferential}
A matrix for $\delta^k_S$ may be defined inductively as follows. The matrix for $\delta^0_S$ is the coefficient matrix for $\mathcal{A}$, whose rows give the coefficients of the linear forms defining $\mathcal{A}$. Inductively, $\delta^k_S$ may be represented by a matrix whose rows are naturally grouped according to flats $X\in L_{k+1}$. A row corresponding to $X\in L_{k+1}$ encodes a relation among those rows of $\delta^{k-1}_S$ which correspond to flats $Y\in L_k$ so that $Y<X$; the set of all rows corresponding to $X$ is a choice of basis for the space of all relations among the rows of $\delta^{k-1}_S$ corresponding to flats $Y\in L_k$ so that $Y<X$.
\end{lem}
\begin{exm}[Points in $\mathbb{P}^1$]\label{ex:PointsP1}
Consider the arrangement $\mathcal{A}$ of $k+2$ points in $\mathbb{P}^1$, corresponding to the product $xy(x-a_1y)\cdots(x-a_ky)$. Let $H_x=V(x)$, $H_y=V(y)$, and $H_i=V(x-a_iy)$ for $i=1,\ldots,k$. By Lemma~\ref{lem:SkDifferential}, the complex $\mathcal{S}^\bullet$ is
\[
0\rightarrow S^2 \xrightarrow{\delta^0_S} S^{k+2} \xrightarrow{\delta^1_S} S^k \rightarrow 0,
\]
where
\[
\delta^0_S=
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
1 & -a_1\\
\vdots & \vdots\\
1 & -a_k
\end{bmatrix}
\qquad \mbox{and} \qquad
\delta^1_S=
\begin{bmatrix}
-1 & a_1 & 1 & 0 &\cdots & 0\\
-1 & a_2 & 0 & 1 &\cdots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots\\
-1 & a_k & 0 & 0 & \cdots & 1
\end{bmatrix}.
\]
Notice that $S_2(\mathcal{A})\cong S^k$. Write $m_x,m_y$ for $\mathbf{m}(H_x),\mathbf{m}(H_y)$, respectively, and $m_i$ for $\mathbf{m}(H_i)$, $i=1,\ldots,k$. By Lemma~\ref{lem:ComputingJk}, $J_2(\mathcal{A},\mathbf{m})=\mathcal{J}^2(\mathcal{A},\mathbf{m})$ is generated by the columns of the matrix
\[
M=
\begin{bmatrix}
-x^{m_x} & a_1y^{m_y} & (x-a_1y)^{m_1} & 0 &\cdots &0\\
-x^{m_x} & a_2y^{m_y} & 0 & (x-a_2y)^{m_2} & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & & \vdots\\
-x^{m_x} & a_ky^{m_y} & 0 & 0 & \cdots & (x-a_ky)^{m_k}
\end{bmatrix},
\]
so $D^2(\mathcal{A},\mathbf{m})\cong \mbox{coker}(M)$. Notice that $M$ is a matrix for $\delta^1_J$ with the natural choice of basis for $\bigoplus_{H\in L_1} J(H)\cong \bigoplus_{H\in L_1} S(-\mathbf{m}(H))$. Hence, by Remark~\ref{rem:DJiso}, we may identify $D(\mathcal{A},\mathbf{m})$ with $H^1(\mathcal{J}^\bullet(\mathcal{A},\mathbf{m}))$, which is exactly the module of syzygies on the columns of $M$ (this is also straightforward to see from the definition of $D(\mathcal{A},\mathbf{m})$). In particular, if $k=1$, so that $\mathcal{A}$ is the $A_2$ braid arrangement, then $D(A_2,\mathbf{m})$ may be identified with the syzygies on the forms $x^{m_x}$, $y^{m_y}$, and $(x-a_1y)^{m_1}$. This provides an alternative way to identify the generators and exponents of $D(A_2,\mathbf{m})$, which were originally found in~\cite{Wakamiko} (see~\cite{FatPoints} and~\cite[Example~3.6, Lemma~4.5]{A3MultiBraid} for more details).
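For instance, taking $k=1$, $a_1=1$, and the constant multiplicity $\mathbf{m}\equiv 1$, the syzygies on $x$, $y$, and $x-y$ are generated by
\[
(1,-1,-1)\qquad\mbox{and}\qquad (y,-x,0),
\]
since $1\cdot x-1\cdot y-1\cdot(x-y)=0$ and $y\cdot x-x\cdot y=0$. Viewed in $\bigoplus_{H\in L_1}S(-1)$, these generators have degrees $1$ and $2$, recovering the exponents $(1,2)$ of the $A_2$ braid arrangement.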
\end{exm}
For an arrangement defined by the vanishing of forms $\alpha_1,\ldots,\alpha_n$, we will write $H_i$ for $V(\alpha_i)$ and denote the flat $H_{i_1}\cap\cdots\cap H_{i_k}$ by the list of indices $i_1\cdots i_k$. Furthermore, we will denote by $L_2^{\text{\tiny{trip}}}$ the set of rank two flats which are the intersection of at least three hyperplanes.
\begin{exm}[$X_3$ arrangement]\label{ex:X3} Consider the arrangement $\mathcal{A}_t$ defined by the vanishing of the six linear forms
\[
\begin{array}{ll}
\alpha_1=x & \alpha_4=x-t y\\
\alpha_2=y & \alpha_5=x+z\\
\alpha_3=z & \alpha_6=y+z.
\end{array}
\]
The intersection lattice of $\mathcal{A}_t$ is constant as long as $t\neq 0,1$, with six double points and three triple points $L_2^{\text{\tiny{trip}}}=\{124,135,236\}$. Lemma~\ref{lem:SkDifferential} yields
\[
\mathcal{S}^\bullet=0\rightarrow S^3\xrightarrow{\delta^0_S} S^6\xrightarrow{\delta^1_S} S^3 \rightarrow 0,
\]
where
\[
\delta^0_S=\bordermatrix{ & x & y & z\cr
1 & 1 & 0 & 0 \cr
2 & 0 & 1 & 0 \cr
3 & 0 & 0 & 1 \cr
4 & 1 & -t & 0 \cr
5 & 1 & 0 & 1 \cr
6 & 0 & 1 & 1}
\qquad
\delta^1_S=\bordermatrix{ & 1 & 2 & 3 & 4 & 5 & 6 \cr
124 & 1 & -t & 0 & -1 & 0 & 0 \cr
135 & 1 & 0 & 1 & 0 & -1 & 0 \cr
236 & 0 & 1 & 1 & 0 & 0 & -1
}.
\]
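As a check, the row of $\delta^1_S$ labeled $124$ encodes the relation $\alpha_1-t\alpha_2-\alpha_4=x-ty-(x-ty)=0$, and correspondingly
\[
1\cdot(1,0,0)-t\cdot(0,1,0)-1\cdot(1,-t,0)=(0,0,0),
\]
so that $\delta^1_S\circ\delta^0_S=0$, as must hold for every row of $\delta^1_S$ by Lemma~\ref{lem:SkDifferential}.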
This complex is always exact, hence $\mathcal{A}_t$ is formal for every $t\neq 0,1$ by Corollary~\ref{cor:HomologicalCharFormality}. By Remark~\ref{rem:DJiso}, $H^i(\mathcal{D}^\bullet)\cong H^{i+1}(\mathcal{J}^\bullet)$. By Theorem~\ref{thm:Free}, we may check freeness of $D(\mathcal{A}_t,\mathbf{m})$ by determining vanishing of $H^2(\mathcal{J}^\bullet)$.
Now we consider the complex $\mathcal{J}^\bullet$. Write $J(i)$ for $J_1((\mathcal{A}_t)_{H_i},m_i)=\langle \alpha_i^{m_i}\rangle$, and for $ijk\in L_2^{\text{\tiny{trip}}}$ write $J(ijk)$ for the ideal $J(i)+J(j)+J(k)$. Then, by Lemma~\ref{lem:ComputingJk}, $J_2(124,\mathbf{m})=J(1)-tJ(2)-J(4)=J(1)+J(2)+J(4)=J(124)$. The same holds for any triple point, so $J_2(ijk,\mathbf{m})=J(ijk)$ for every $ijk\in L_2^{\text{\tiny{trip}}}$. So $\mathcal{J}^2=\oplus_{ijk\in L_2^{\text{\tiny{trip}}}} J(ijk)$ and
\[
\mathcal{J}^\bullet= 0\rightarrow \bigoplus\limits_{i=1}^6 J(i)\xrightarrow{\delta^1_J} \bigoplus\limits_{ijk\in L_2^{\text{\tiny{trip}}}} J(ijk),
\]
where $\delta^1_J$ is the restriction of $\delta^1_S$. A presentation for $H^2(\mathcal{J}^\bullet)$ is worked out in~\cite{X3} and is used to prove that $(\mathcal{A}_t,\mathbf{m})$ is free if and only if the defining equation has the form $\mathcal{Q}(\mathcal{A},\mathbf{m})=x^ny^nz^n(x-ty)(x+z)(y+z)$, where $t^n=1$. We generalize this result in Theorem~\ref{thm:FreeMultNonFreeTF2}.
\end{exm}
\subsection{Graphic arrangements}
Let $G$ be a simple graph (no loops or multiple edges) on $\ell$ vertices $\{v_1,\ldots,v_\ell\}$ with edge set $E(G)$, let $S=\mathbb{K}[x_1,\ldots,x_\ell]$ (with $x_i$ corresponding to $v_i$), and set $H_{ij}=V(x_i-x_j)$. The \textit{graphic arrangement} associated to $G$ is the arrangement $\mathcal{A}_G=\cup_{\{v_i,v_j\}\in E(G)} H_{ij}$; thus $\mathcal{A}_G$ is a sub-arrangement of the braid arrangement $A_{\ell-1}$. A multiplicity $\mathbf{m}$ on $\mathcal{A}_G$ is determined by the values $m_{ij}=\mathbf{m}(H_{ij})$ corresponding to edges $\{v_i,v_j\}\in E(G)$.
Recall that the \textit{clique complex} (or \textit{flag complex}) of a graph $G$ is the simplicial complex $\Delta=\Delta(G)$ with an $i$-simplex for every complete subgraph of $G$ on $i+1$ vertices.
\begin{lem}\label{lem:simpcochain}
The chain complex $\mathcal{S}^\bullet(\mathcal{A}_G)$ may be identified with the simplicial co-chain complex of $\Delta(G)$ with coefficients in $S$. Hence $\mathcal{A}_G$ is $k$-formal if and only if $H^i(\Delta(G); S)=0$ for $1\le i\le k-1$.
\end{lem}
\begin{proof}
By~\cite[Lemma~3.1]{StefanFormal}, $\mathcal{R}_\bullet(\mathcal{A}_G)$ may be identified with the simplicial chain complex of $\Delta(G)$ with coefficients in $\mathbb{K}$. Now use the isomorphism $\mathcal{S}^\bullet\cong(\mathcal{R}_\bullet)^*\otimes_\mathbb{K} S$.
\end{proof}
\begin{remark}\label{rem:GraphicTotallyFormal}
Using Lemma~\ref{lem:simpcochain} we may easily see how the notions of $k$-formality for various $k$ are distinct; this was part of the intent of~\cite{StefanFormal}. The lemma also makes it clear that the condition that $\mathcal{A}_G$ is $k$-formal for $2\le k\le r-1$ is distinct from the condition of being totally formal. A graphic arrangement $\mathcal{A}_G$ is $k$-formal for $2\le k\le r-1$ if and only if its clique complex $\Delta(G)$ is contractible. On the other hand, $\mathcal{A}_G$ is totally formal if and only if $G$ is chordal, a much stronger condition which coincides with both freeness and supersolvability of $\mathcal{A}_G$~\cite{StanleySupersolvable}.
\end{remark}
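For example, if $G$ is a four-cycle, then $\Delta(G)=G$ and $H^1(\Delta(G);S)\neq 0$, so $\mathcal{A}_G$ is not formal: the length-four relation
\[
(x_1-x_2)+(x_2-x_3)+(x_3-x_4)+(x_4-x_1)=0
\]
cannot be written in terms of length-three relations, since $G$ has no triangles. Adding a chord, say $\{v_1,v_3\}$, yields a chordal graph whose clique complex consists of two triangles glued along an edge, and the resulting graphic arrangement is totally formal (indeed supersolvable and free).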
If $\sigma\in\Delta(G)_k$ is a complete graph on the $(k+1)$ vertices $\{v_{i_0},\ldots,v_{i_k}\}$ (where $k\ge 1$), then write $J(\sigma)$ for the ideal generated by the forms $\{(x_{i_s}-x_{i_t})^{m_{i_si_t}} : 0\le s<t\le k\}$. If $\sigma=\{v_i\}$ is a single vertex, then we take $J(\sigma)=0$.
\begin{prop}\label{prop:GraphicDdescription}
If $G$ is a simple graph, then $\mathcal{D}^\bullet(\mathcal{A}_G,\mathbf{m})$ has modules
\[
\mathcal{D}^i\cong \bigoplus_{\sigma\in\Delta(G)_i} S/J(\sigma)
\]
and differentials $\delta^i$ induced from the simplicial co-chain complex with coefficients in $S$, which may be identified with $\mathcal{S}^\bullet(\mathcal{A}_G)$.
\end{prop}
\begin{proof}
Use the identification of the differentials $\delta^i$ in Lemma~\ref{lem:simpcochain} as the simplicial co-chain differential for $\Delta(G)$ and the construction of $J_k((\mathcal{A}_G)_X,\mathbf{m}_X)$ from Lemma~\ref{lem:ComputingJk}.
\end{proof}
\begin{remark}
The chain complex in Proposition~\ref{prop:GraphicDdescription} was introduced in~\cite{GSplinesGraphicArr} by analogy with a natural class of chain complexes in the context of multivariate spline theory~\cite{Homology,LCoho}. Applying Theorem~\ref{thm:Free} yields the homological characterization of freeness obtained in~\cite[Corollary~5.6]{GSplinesGraphicArr}.
\end{remark}
\begin{remark}
The first non-trivial classification of free multiplicities on a graphic arrangement admitting both free and non-free multiplicities was completed in~\cite{AbeDeletedA3}. Building on work of Abe, Nuida, and Numata~\cite{AbeSignedEliminable}, the classification of free multiplicities on the $A_3$ braid arrangement has been completed in~\cite{A3MultiBraid}. The key is a detailed analysis of $H^2(\mathcal{D}^\bullet(A_3,\mathbf{m}))$, where $\mathcal{D}^\bullet$ is the complex described in Corollary~\ref{prop:GraphicDdescription}.
\end{remark}
\section{$TF_2$ arrangements}\label{sec:TF2}
In this section we introduce a subset of the totally formal arrangements which we shall call $TF_k$ arrangements. These are totally formal arrangements which additionally satisfy that $\mathcal{S}^i(\mathcal{A})=0$ for $i>k$. For instance, every totally formal arrangement is $TF_k$ for $k\ge r(\mathcal{A})$. A graphic arrangement $\mathcal{A}_G$ is $TF_k$ if and only if $G$ is chordal (see Remark~\ref{rem:GraphicTotallyFormal}) and $\dim(\Delta(G))\le k$. By Theorem~\ref{thm:Free} and Remark~\ref{rem:DJiso}, freeness of $TF_k$ arrangements is determined by the vanishing of $H^i(\mathcal{J}^\bullet)$ for $2\le i\le k$. In the rest of this section we will assume that $\mathcal{A}$ is a $TF_2$ arrangement of rank at least three.
\subsection{Free $TF_2$ arrangements}
Recall that an arrangement $\mathcal{A}$ is \textit{supersolvable} if there is a filtration $\mathcal{A}_1\subset\cdots\subset\mathcal{A}_r=\mathcal{A}$ satisfying the following rank property (RP) and intersection property (IP):
\begin{itemize}
\item[(RP)] $r(\mathcal{A}_i)=i$ for $i=1,\ldots,r(\mathcal{A})$.
\item[(IP)] For any $H,H'\in\mathcal{A}_i$ there exists some $H''\in\mathcal{A}_{i-1}$ so that $H\cap H'\subset H''$.
\end{itemize}
\begin{prop}\label{prop:FreeTF2Arrangements}
Let $\mathcal{A}$ be an irreducible $TF_2$ arrangement of rank $r=r(\mathcal{A})$. Then
\begin{itemize}
\item $|\mathcal{A}|=r-\#L_2^{\text{\tiny{trip}}}+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$
\item $|\mathcal{A}|\le 1+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$
\item $\#L_2^{\text{\tiny{trip}}}\ge r-1$
\end{itemize}
Furthermore, the following are equivalent.
\begin{enumerate}
\item $\mathcal{A}$ is free
\item $|\mathcal{A}|=1+\sum_{X\in L_2^{\text{\tiny{trip}}}} (|\mathcal{A}_X|-1)$
\item $\#L_2^{\text{\tiny{trip}}}=r-1$
\item $\mathcal{A}$ is supersolvable
\end{enumerate}
In particular, if $\mathcal{A}$ is $TF_2$, its freeness may be determined from $L(\mathcal{A})$.
\end{prop}
\begin{proof}
The first three bullet points are computed from the Euler characteristic of $\mathcal{S}^\bullet(\mathcal{A})$ and $\mathcal{J}^\bullet(\mathcal{A})_1$ as follows. Since $\mathcal{A}$ is $TF_2$, $\mathcal{S}^\bullet(\mathcal{A})$ is a short exact sequence of the form:
\[
0\rightarrow S^\ell=S^r\rightarrow S^{|\mathcal{A}|} \rightarrow \bigoplus_{X\in L_2^{\text{\tiny{trip}}}} S^{|\mathcal{A}_X|-2}\rightarrow 0,
\]
so the alternating sum of the ranks yields $|\mathcal{A}|=r+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-2)=r-\#L_2^{\text{\tiny{trip}}}+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$. For the second bullet point, $\mathcal{J}^\bullet(\mathcal{A})$ has the form
\[
0\rightarrow \bigoplus\limits_{H\in\mathcal{A}} J(H) \xrightarrow{\delta^1_J} \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} J_2(\mathcal{A}_X) \rightarrow 0.
\]
Since $\ker(\delta^1_J)=D(\mathcal{A})$ and we assumed $\mathcal{A}$ is irreducible, $\ker(\delta^1_J)_1$ is one dimensional, spanned by the Euler derivation. We may easily compute $\dim J_2(\mathcal{A}_X)_1=|\mathcal{A}_X|-1$ for $X\in L_2^{\text{\tiny{trip}}}$, hence
\[
\dim H^2(\mathcal{J}^\bullet)_1=\sum\limits_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)-|\mathcal{A}|+1
\]
by computing the Euler characteristic of $\mathcal{J}^\bullet_1$. This must be non-negative, yielding $|\mathcal{A}|\le 1+\sum\limits_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$. The third bullet point follows from combining the first two.
Now we prove the equivalent conditions for freeness. The implication $(4)\implies(1)$ is a well-known fact. Since supersolvability is determined from $L(\mathcal{A})$, the final statement is immediate from $(4)$. We first prove $(1)\iff (2)$. From Theorem~\ref{thm:Free} and Remark~\ref{rem:DJiso}, $\mathcal{A}$ is free if and only if $H^2(\mathcal{J}^\bullet)=0$. From the explicit description in Example~\ref{ex:PointsP1}, we see that $J_2(\mathcal{A}_X)$ is generated in degree one for every $X\in L_2^{\text{\tiny{trip}}}$, as is $J(H)\cong\langle \alpha_H\rangle$ for every $H\in\mathcal{A}$. So $H^2(\mathcal{J}^\bullet)$ must also be generated in degree one, since it is a quotient of $\sum_{X\in L_2^{\text{\tiny{trip}}}} J_2(\mathcal{A}_X)$. From our above computation,
\[
\dim H^2(\mathcal{J}^\bullet)_1=\sum\limits_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)-|\mathcal{A}|+1,
\]
hence $\mathcal{A}$ is free if and only if this expression vanishes, i.e.\ $|\mathcal{A}|=1+\sum\limits_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$. $(3)$ follows immediately from $(2)$ using the expression $|\mathcal{A}|=r-\#L_2^{\text{\tiny{trip}}}+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$ already proved. Finally, we show $(3)\implies (4)$. First, for any $X,X'\in L_2^{\text{\tiny{trip}}}$, we prove there is a sequence $X=X_1,H_1,X_2,\ldots,H_{k-1},X_k=X'$ satisfying
\begin{enumerate}
\item $H_i\in\mathcal{A}$ for $i=1,\ldots,k-1$
\item $X_i\in L_2^{\text{\tiny{trip}}}$ for $i=1,\ldots,k$.
\item $H_i<X_i$ and $H_i<X_{i+1}$ in $L(\mathcal{A})$ for $i=1,\ldots,k-1$.
\end{enumerate}
To show this, let $H_1,H_2\in\mathcal{A}_X$ and $H'_1,H'_2\in\mathcal{A}_{X'}$ with corresponding linear forms $\alpha_1,\alpha_2,\alpha'_1,\alpha'_2$. Complete $\alpha_1,\alpha_2,\alpha'_1$ to a basis $B$ of $V^*$ using defining forms of $\mathcal{A}$ (this is possible because $\mathcal{A}$ is essential). Adding $\alpha'_2$ to $B$, we see there is a relation of length $r+1$ among the forms $B\cup\{\alpha'_2\}$. Since $\mathcal{A}$ is formal, this relation can be expressed as a linear combination of relations of length three. We then read off the sequence $X=X_1,H_1,\ldots,X_k=X'$ from this linear combination of relations of length three.
Now we construct a filtration $\mathcal{F}=\mathcal{F}(\mathcal{A})=\mathcal{A}_1\subseteq\mathcal{A}_2\subseteq\cdots\subseteq\mathcal{A}_{r}=\mathcal{A}$ of $\mathcal{A}$. Let $\mathcal{A}_1=\{H\}$ for any $H\in\mathcal{A}$, and $\mathcal{A}_2=\mathcal{A}_{X_1}$ for some $X_1\in L_2^{\text{\tiny{trip}}}$ so that $H\in\mathcal{A}_{X_1}$ (by Corollary~\ref{cor:genericHyperplane}, every $H\in\mathcal{A}$ passes through some $X\in L_2^{\text{\tiny{trip}}}$). Build $\mathcal{A}_{i+1}$ from $\mathcal{A}_i$ for $2\le i\le r-1$ inductively as follows. By our above claim, there exists $X_i\in L_2^{\text{\tiny{trip}}}$ so that $\mathcal{A}_i\cap\mathcal{A}_{X_i}\neq\emptyset$. Then set $\mathcal{A}_{i+1}=\mathcal{A}_i\cup\mathcal{A}_{X_i}$. This process finishes with $\mathcal{A}_{(r-1)+1}=\mathcal{A}_r$, when we have exhausted $L_2^{\text{\tiny{trip}}}$. Notice that $\mathcal{F}$ satisfies the intersection property (IP) by construction. Moreover, $r(\mathcal{A}_i)\le r(\mathcal{A}_{i-1})+1$, hence, since the filtration has length $r$ with $\mathcal{A}_r=\mathcal{A}$, we must have $r(\mathcal{A}_i)=i$. Hence $\mathcal{F}(\mathcal{A})$ is a supersolvable filtration.
\end{proof}
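For example, the arrangement $\mathcal{A}_t$ of Example~\ref{ex:X3} (with $t\neq 0,1$) is an irreducible $TF_2$ arrangement with $r=3$, $|\mathcal{A}_t|=6$, and $\#L_2^{\text{\tiny{trip}}}=3$; each triple point lies on three hyperplanes, so the first bullet point of Proposition~\ref{prop:FreeTF2Arrangements} reads
\[
6=3-3+\sum_{X\in L_2^{\text{\tiny{trip}}}}(3-1)=3-3+6.
\]
Since $\#L_2^{\text{\tiny{trip}}}=3\neq 2=r-1$, the equivalence $(1)\iff(3)$ shows that $\mathcal{A}_t$ is not free.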
\subsection{Presentation for $H^2(\mathcal{J}^\bullet)$}
Assuming $\mathcal{A}$ is a $TF_2$ arrangement, we now obtain an explicit presentation for $H^2(\mathcal{J}^\bullet(\mathcal{A},\mathbf{m}))$. Consider the diagram in Figure~\ref{fig:H2Pres}, where the chain complex $\mathcal{J}^\bullet$ appears on the right hand side ($\mathcal{J}^\bullet$ has only two terms since $\mathcal{A}$ is $TF_2$). For book-keeping purposes we use the formal symbols $[H]$ and $[X,H]$ (or $[\alpha_H],[X,\alpha_H]$), of degree $\mathbf{m}(H)$, to denote the generators $\alpha_H^{\mathbf{m}(H)}$ of the summands $J(H)=\langle \alpha_H^{\mathbf{m}(H)}\rangle$ which appear in $\bigoplus_{H\in\mathcal{A}} J(H)$ and $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$, respectively. With this notation, the map $\psi_X:D(\mathcal{A}_X,\mathbf{m}_X)\rightarrow\bigoplus_{H<X}J(H)$ in Figure~\ref{fig:H2Pres} is given by $\psi_X(\theta)=\sum_{H<X} \dfrac{\theta(\alpha_H)}{\alpha_H^{\mathbf{m}(H)}}[X,H]$, and $\iota:\bigoplus_{H\in\mathcal{A}} J(H)\rightarrow \bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$ is the natural inclusion defined by $\iota([H])=\sum_{\substack{X\in L_2^{\text{\tiny{trip}}}\\ H<X}} [X,H]$ and extended linearly. The main thing to check for commutativity is that $(\sum (\delta^1_J)_X)\circ\iota=\delta^1_J$, which follows from the definition.
\begin{figure}
\centering
\begin{tikzcd}
& \bigoplus\limits_{H\in\mathcal{A}} J(H) \ar{r}{\cong} \ar{d}{\iota} & \bigoplus\limits_{H\in\mathcal{A}} J(H) \ar{d}{\delta^1_J}\\
\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \ar{r}{\sum \psi_X} & \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\left[ \bigoplus\limits_{H<X} J(H)\right] \ar{r}{\sum (\delta^1_J)_X} & \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} J_2(\mathcal{A}_X,\mathbf{m}_X)
\end{tikzcd}
\caption{Diagram for Proposition~\ref{prop:H2Pres}}\label{fig:H2Pres}
\end{figure}
\begin{prop}\label{prop:H2Pres}
Suppose $\mathcal{A}$ is an irreducible $TF_2$ arrangement of rank at least three. Then
\[
H^2(\mathcal{J}^\bullet)\cong \mbox{coker}\left(\bigoplus_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X)\xrightarrow{\sum \overline{\psi_X}} \mbox{coker}(\iota)\cong S^\kappa\right),
\]
where $\kappa=(\sum_{X\in L_2^{\text{\tiny{trip}}}} |\mathcal{A}_X|)-|\mathcal{A}|$. Moreover,
\begin{enumerate}
\item $(\mathcal{A},\mathbf{m})$ is free if and only if $\sum \overline{\psi_X}$ is surjective.
\item $\kappa>0$, i.e.\ $|\mathcal{A}|<\sum_{X\in L_2^{\text{\tiny{trip}}}} |\mathcal{A}_X|$.
\item If $|\mathcal{A}|<\sum_{X\in L_2^{\text{\tiny{trip}}}} (|\mathcal{A}_X|-1)$, or equivalently $r<\#L_2^{\text{\tiny{trip}}}$, then $\mathcal{A}$ is totally non-free. Furthermore, in this case every $\mathcal{A}'\in\mathcal{M}(L(\mathcal{A}))$ is totally non-free.
\end{enumerate}
\end{prop}
\begin{remark}
The presentation in Proposition~\ref{prop:H2Pres} is similar in spirit to a presentation derived in~\cite[Lemma~3.8]{LCoho} for a homology module which governs freeness of bivariate splines on triangulations.
\end{remark}
\begin{proof}
Since the commutative diagram in Figure~\ref{fig:H2Pres} has exact rows, the isomorphism
\[
H^2(\mathcal{J}^\bullet)\cong \mbox{coker}\left(\bigoplus_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X)\xrightarrow{\sum \overline{\psi_X}} \mbox{coker}(\iota)\right)
\]
follows from the tail end of the snake lemma. The statement (1) now follows from the isomorphism $H^1(\mathcal{D}^\bullet)\cong H^2(\mathcal{J}^\bullet)$ and Theorem~\ref{thm:Free}.
The ideals $J(H)\cong \langle \alpha_H^{\mathbf{m}(H)}\rangle$ are principal, so each is isomorphic to the polynomial ring $S$ (up to a graded shift). The rank of $\bigoplus_{H\in\mathcal{A}} J(H)$ is $|\mathcal{A}|$, and by the definition of the map $\iota$, its kernel is spanned by the basis elements $[H]$ so that $H$ does not pass through any $X\in L_2^{\text{\tiny{trip}}}$. However, any such hyperplane is a \textit{generic} hyperplane; by Corollary~\ref{cor:genericHyperplane} the existence of such a hyperplane forces $\mathcal{A}$ to be non-formal. Hence if $\mathcal{A}$ is $TF_2$, $\iota$ is injective. Since $\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\left[ \bigoplus\limits_{H<X} J(H)\right]$ is a free module of rank $\sum_{X\in L_2^{\text{\tiny{trip}}}} |\mathcal{A}_X|$, we have proved that $\mbox{coker}(\iota)\cong S^\kappa$. The map $\iota$ is surjective if and only if $\kappa=0$, in which case $H^2(\mathcal{J}^\bullet)=0$ regardless of the multiplicity $\mathbf{m}$. In this case $\mathcal{A}$ is totally free; by~\cite{TeraoTotallyFree}, $\mathcal{A}$ is then a product of one and two dimensional arrangements, violating the assumption that $\mathcal{A}$ is irreducible. This proves (2).
For (3), notice that, in order for $D(\mathcal{A},\mathbf{m})$ to be free, the image of $\sum \psi_X$ and the image of $\iota$ must together span the entire free module $\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\left[ \bigoplus\limits_{H<X} J(H)\right]$. Given (2), the image of $\iota$ does not span this entire free module. This means that there are some basis elements $[X,H]$ of degree $\mathbf{m}(H)$ (for some hyperplane $H$) that remain in $\mbox{coker}(\iota)$. In order to kill such basis elements, there must be a basis element $\theta_X\in D(\mathcal{A}_X,\mathbf{m}_X)$ of degree $\mathbf{m}(H)$ which does not vanish on $\alpha_H$. Notice that for a fixed $X\in L_2^{\text{\tiny{trip}}}$, there cannot be two distinct $H,H'\in\mathcal{A}_X$ so that $\deg(\theta_X)=\mathbf{m}(H)$, $\deg(\psi_X)=\mathbf{m}(H')$, with $\theta_X(\alpha_H)\neq 0$ and $\psi_X(\alpha_{H'})\neq 0$ (see Lemma~\ref{lem:Boolean}). Hence there are at most $\#L_2^{\text{\tiny{trip}}}$ derivations (one per $X\in L_2^{\text{\tiny{trip}}}$) that can have the right form to cancel remaining basis elements of $\mbox{coker}(\iota)$; it follows that if $|\mathcal{A}|+\#L_2^{\text{\tiny{trip}}}<\sum_{X\in L_2^{\text{\tiny{trip}}}}|\mathcal{A}_X|$ then $\mathcal{A}$ is totally non-free, proving the first statement of (3). The equivalent formulation of the inequality follows from the equation $|\mathcal{A}|=r-\#L_2^{\text{\tiny{trip}}}+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$ from Proposition~\ref{prop:FreeTF2Arrangements}. For the final statement of (3), it follows from Lemma~\ref{lem:GenericFormal} that $\mathcal{A}'\in\mathcal{M}(L(\mathcal{A}))$ is $TF_2$ on a Zariski open subset of $\mathcal{M}(L(\mathcal{A}))$. Hence on this open set, total non-freeness of $\mathcal{A}'$ follows from the same computation. Moreover, if $\mathcal{A}'$ is in the complement of this open set, $\mathcal{A}'$ is totally non-free by Corollary~\ref{cor:MultifreeFormal}.
\end{proof}
\begin{cor}\label{cor:TF2Restriction}
Suppose $\mathcal{A}$ is a $TF_2$ arrangement with $r(\mathcal{A})<\#L_2^{\text{\tiny{trip}}}$, and suppose $\mathcal{B}$ is an arrangement of rank four. If $L(\mathcal{B})$ has two flats $X,Y\in L(\mathcal{B})$ so that $L(\mathcal{A})\cong [X,Y]$, then $\mathcal{B}$ is not free.
\end{cor}
\begin{proof}
If $L(\mathcal{A})$ is isomorphic to an interval in $L(\mathcal{B})$, then $\mathcal{B}$ has either a closed sub-arrangement or a restriction which is in $\mathcal{M}(L(\mathcal{A}))$. In either case, the sub-arrangement or restriction is totally non-free by Proposition~\ref{prop:H2Pres}. If $\mathcal{B}$ is free, any closed sub-arrangement is also free. Moreover, the restriction of a free arrangement admits a free multiplicity by Theorem~\ref{thm:Yoshinaga}. Hence $\mathcal{B}$ cannot be free.
\end{proof}
\begin{exm}[Ziegler's Pair]\label{ex:ZieglerPair}
Consider a central arrangement $\mathcal{A}$ of rank three with nine hyperplanes defined by forms $\alpha_1,\ldots,\alpha_9$ whose lattice has $18$ double points and six triple points; explicitly, we assume $L_2^{\text{\tiny{trip}}}=\{145,138,256,289,367,479\}$. This arrangement can be realized as a line arrangement in $\mathbb{P}^2$ consisting of the lines extending the edges of a hexagon, along with the three lines joining opposite vertices (thus the flats in $L_2^{\text{\tiny{trip}}}$ correspond to the vertices of the hexagon). Since there is a non-empty Zariski open subset of $\mathcal{M}(L)$ on which $\mathcal{A}$ is $TF_2$, and $\#L_2^{\text{\tiny{trip}}}=6>3=r(\mathcal{A})$, Proposition~\ref{prop:H2Pres} implies that any $\mathcal{A}\in\mathcal{M}(L)$ is totally non-free. By Corollary~\ref{cor:TF2Restriction}, no $\mathcal{A}\in\mathcal{M}(L)$ can be the restriction of a free arrangement.
This arrangement appears in~\cite{ZieglerMulti} and~\cite{YuzFormal} as an example of the non-combinatorial behavior of the minimal free resolution of $D(\mathcal{A})$ and the formality of $\mathcal{A}$, respectively. More precisely, it is known (due to Yuzvinsky~\cite{YuzFormal}, see also~\cite[Example~13]{SchenckComputationsConjectures}) that $\mathcal{A}$ is formal if and only if the points of $L_2^{\text{\tiny{trip}}}$ do not lie on a conic in $\mathbb{P}^2$. We may compute that $\mathcal{S}^\bullet$ has the form $0\rightarrow S^3\xrightarrow{\delta^0_S} S^9 \xrightarrow{\delta^1_S} S^6\rightarrow 0$ if the six points do not lie on a conic and $0\rightarrow S^3 \xrightarrow{\delta^0_S} S^9 \xrightarrow{\delta^1_S} S^5\xrightarrow{\delta^2_S} S\rightarrow 0$ if the six points of $L_2^{\text{\tiny{trip}}}$ do lie on a conic ($\delta^1_S$ drops rank).
\end{exm}
\subsection{A codimension two incidence graph}
The data in the presentation of $H^2(\mathcal{J}^\bullet)$ in Proposition~\ref{prop:H2Pres} can be combinatorially encoded using the \textit{codimension two incidence graph} of $\mathcal{A}$, which we denote by $G(\mathcal{A})$. The graph $G(\mathcal{A})=(V,E)$ is a bipartite graph whose vertex set is partitioned as $V=L_2^{\text{\tiny{trip}}}\cup\mathcal{A}$. There is an edge $[X,H]$ between $X\in L_2^{\text{\tiny{trip}}}$ and $H\in\mathcal{A}$ if and only if $H<X$ in $L(\mathcal{A})$ (notice that we do not include codimension two flats which are intersections of just two hyperplanes). Moreover, we define the \textit{reduced} codimension two incidence graph $\overline{G}(\mathcal{A})$ by removing the vertices $H\in V(G(\mathcal{A}))$ of valence one (i.e.\ removing vertices corresponding to hyperplanes which pass through only a single flat $X\in L_2^{\text{\tiny{trip}}}$).
Now we describe how $G(\mathcal{A})$ and $\overline{G}(\mathcal{A})$ are useful in the context of Proposition~\ref{prop:H2Pres}. Referring to the diagram in Figure~\ref{fig:H2Pres}, consider the sub-module $N$ of $\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus\limits_{H<X} J(H)$ generated by the image of $\iota$ and the image of $\sum\psi_X$. Since $D(\mathcal{A}_X,\mathbf{m}_X)$ is a free rank two module for every $X\in L_2^{\text{\tiny{trip}}}$, it is generated by two derivations; call these $\theta_X$ and $\psi_X$. Then $N$ is generated by the columns of a matrix we denote $M=M(\theta_X,\psi_X\mid X\in L_2^{\text{\tiny{trip}}})$. The rows of $M$ are naturally indexed by the formal symbols $[X,H]$ corresponding to basis elements of $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$; equivalently, we may assume the rows are indexed by edges of $G(\mathcal{A})$. The columns of $M$ are indexed either by hyperplanes $H'\in\mathcal{A}$ (these represent the image of $\iota$, one for each generator of $\bigoplus_{H\in\mathcal{A}}J(H)$) or by pairs $(X',\theta_{X'})$ or $(X',\psi_{X'})$, where $X'\in L_2^{\text{\tiny{trip}}}$ and $\theta_{X'},\psi_{X'}$ are generators of $D(\mathcal{A}_{X'},\mathbf{m}_{X'})$ (each pair represents the inclusion of a generator of $D(\mathcal{A}_{X'},\mathbf{m}_{X'})$). The entries of $M$ are
\begin{center}
\begin{tabular}{rl}
$M_{[X,H],[H']}$ & $=\left\lbrace
\begin{array}{ll}
1 & H'=H\\
0 & H'\neq H
\end{array}
\right.,$
\\
$M_{[X,H],[X',\theta_{X'}]}$ & $=\left\lbrace
\begin{array}{ll}
\overline{\theta}_{X'}(\alpha_H) & X'=X\\
0 & X'\neq X
\end{array}
\right.,$\\
$\mbox{and }
M_{[X,H],[X',\psi_{X'}]}$ & $=\left\lbrace
\begin{array}{ll}
\overline{\psi}_{X'}(\alpha_H) & X'=X\\
0 & X'\neq X
\end{array}
\right.,$
\end{tabular}
\end{center}
where $\overline{\theta}_{X'}(\alpha_H)=\dfrac{\theta_{X'}(\alpha_H)}{\alpha_H^{\mathbf{m}(H)}}$.
Moreover, we can associate the non-zero entries of $M$ to \textit{oriented} and labeled edges of $G(\mathcal{A})$; the entry $M_{[X,H],[H]}$ corresponds to the orientation $X\to H$ of $[X,H]$, and the entry $M_{[X,H],[X,\theta_X]}$ corresponds to the orientation $H\to X$ of $[X,H]$, along with the label $\theta_X$ on the edge $[X,H]$. If a vertex $H\in G(\mathcal{A})$ has valence one, then the corresponding column of $M$ is a generator of $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$; since we are interested in the cokernel of $M$, we may reduce the matrix $M$ to the matrix $\overline{M}$ whose rows are indexed by pairs $[X,H]$ so that $H$ has valence at least two in $G(\mathcal{A})$. Clearly the rows of $\overline{M}$ are in bijection with edges of the reduced incidence graph $\overline{G}(\mathcal{A})$. Likewise the non-zero entries of $\overline{M}$ correspond to oriented and labeled edges of $\overline{G}(\mathcal{A})$.
By Proposition~\ref{prop:H2Pres}, $D(\mathcal{A},\mathbf{m})$ is free if and only if the columns of $M(\theta_X,\psi_X\mid X\in L_2^{\text{\tiny{trip}}})$ generate the free module $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$. As in the proof of Proposition~\ref{prop:H2Pres}, only one generator for each $D(\mathcal{A}_X,\mathbf{m}_X)$, $X\in L_2^{\text{\tiny{trip}}}$, can map to a generator of $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$. So we will consider sub-matrices of $\overline{M}$ obtained by choosing only a single generator for each $D(\mathcal{A}_X,\mathbf{m}_X)$. We write $M'=M'(\theta_X \mid X\in L_2^{\text{\tiny{trip}}})$ for the sub-matrix of $\overline{M}$ formed by choosing a single generator $\theta_X$ of each $D(\mathcal{A}_X,\mathbf{m}_X)$, $X\in L_2^{\text{\tiny{trip}}}$. Notice that the columns of $M'$ are now in bijection with the vertices of $\overline{G}$. In the two cases we consider, maximal minors of $M'$ will be obtained by deleting at most one column. Thus the terms of a maximal minor of $M'$ are in bijection with orientations of $\overline{G}$ so that every vertex corresponding to a non-deleted column has exactly one incoming edge. We will use this observation in the next section.
\subsection{Characterization of free multiplicities on $TF_2$ arrangements}
Using Proposition~\ref{prop:H2Pres} we now characterize free multiplicities on $TF_2$ arrangements. By Proposition~\ref{prop:H2Pres} and Proposition~\ref{prop:FreeTF2Arrangements} we are restricted to the two cases
\begin{itemize}
\item $|\mathcal{A}|=1+\sum_{X\in L_2^{\text{\tiny{trip}}}} (|\mathcal{A}_X|-1)$ (equivalently $\mathcal{A}$ is a supersolvable $TF_2$ arrangement)
\item $|\mathcal{A}|=\sum_{X\in L_2^{\text{\tiny{trip}}}} (|\mathcal{A}_X|-1)$
\end{itemize}
\begin{thm}[Free multiplicities on free $TF_2$ arrangements]\label{thm:FreeMultFreeTF2}
Suppose $\mathcal{A}$ is a free, hence supersolvable, $TF_2$ arrangement, so that $\overline{G}=\overline{G}(\mathcal{A})$ is a tree by Proposition~\ref{prop:FreeTF2Arrangements}. Then $\mathbf{m}$ is a free multiplicity on $\mathcal{A}$ if and only if there is an orientation of $\overline{G}$ satisfying
\begin{enumerate}
\item Every vertex of $\overline{G}$ has at most one incoming edge.
\item The root vertex (the vertex with no incoming edges) is some $X\in L_2^{\text{\tiny{trip}}}$.
\item Given a directed edge $H\to X$, $\mathbf{m}(H)$ is an exponent of $D(\mathcal{A}_X,\mathbf{m}_X)$.
\end{enumerate}
Equivalently, $\mathbf{m}$ is a free multiplicity if and only if there is an ordering $X_1,\ldots,X_{r-1}$ of $L_2^{\text{\tiny{trip}}}$ and a supersolvable filtration $\mathcal{A}_1\subset\cdots\subset\mathcal{A}_r$ satisfying
\begin{enumerate}
\item $\mathcal{A}_2=\mathcal{A}_{X_1}$ and $\mathcal{A}_i=\mathcal{A}_{i-1}\cup\mathcal{A}_{X_{i-1}}$ for $i\ge 3$
\item $\mathcal{A}_{X_i}\cap \mathcal{A}_i=\{H_i\}$ for some $H_i\in\mathcal{A}$ ($H_1,\ldots,H_{r-1}$ not necessarily distinct)
\item $\mathbf{m}(H_i)$ is an exponent of $D(\mathcal{A}_{X_i},\mathbf{m}_{X_i})$.
\end{enumerate}
\end{thm}
\begin{proof}
By Proposition~\ref{prop:H2Pres} and the preceding discussion, $D(\mathcal{A},\mathbf{m})$ is free if and only if there are derivations $\theta_X\in D(\mathcal{A}_X,\mathbf{m}_X)$ so that the columns of $\overline{M'}=\overline{M'}(\theta_X\mid X\in L_2^{\text{\tiny{trip}}})$ generate $\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus\limits_{H<X} J(H)$; in other words, there should be a maximal minor with determinant equal to a non-zero constant. By Proposition~\ref{prop:FreeTF2Arrangements}, we have $|\mathcal{A}|=1+\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$, or equivalently $|\mathcal{A}|+\#L_2^{\text{\tiny{trip}}}=1+\sum_{X\in L_2^{\text{\tiny{trip}}}}|\mathcal{A}_X|$. It follows that the matrix $\overline{M'}$ has one more column than row, so the maximal minors are obtained by deleting a column of $\overline{M'}$. We may assume that the deleted column corresponds to some $X\in L_2^{\text{\tiny{trip}}}$. Since $\overline{G}$ is a tree, an orientation of $\overline{G}$ in which each vertex has at most one incoming edge is equivalent to a choice of root for the tree. This in turn is equivalent to choosing a maximal minor of $\overline{M'}$ (leave out the column corresponding to the root). The maximal minor chosen in this way has determinant
\[
\prod\limits_{H\to X} \overline{\theta}_X(\alpha_H),
\]
where the product is taken over directed edges $H\to X$ in the directed tree $\overline{G}$. This expression is a non-zero constant if and only if $\overline{\theta}_X(\alpha_H)$ is a non-zero constant (equivalently, $\theta_X(\alpha_H)=\alpha_H^{\mathbf{m}(H)}$ up to constant multiple) for every directed edge $H\to X$. Since $\mathcal{A}_X$ is not boolean for any $X\in L_2^{\text{\tiny{trip}}}$, we see by Lemma~\ref{lem:Boolean} that $(\mathcal{A}_X,\mathbf{m}_X)$ cannot have an exponent smaller than $\mathbf{m}(H)$, so this is in turn equivalent to $(\mathcal{A}_X,\mathbf{m}_X)$ having an exponent equal to $\mathbf{m}(H)$ for every directed edge $H\to X$. This proves the first characterization.
We now show the second characterization in terms of supersolvable filtrations is equivalent to the first. Given an orientation of $\overline{G}$, we can build the required filtration by setting $X_1$ equal to the root vertex and inductively selecting $X_{i+1}$ to satisfy 1) $X_i$ and $X_{i+1}$ are both adjacent to some $H\in\overline{G}$ and 2) $X_i\to H\to X_{i+1}$ is a directed path with respect to the chosen orientation on $\overline{G}$. Conversely, given such a supersolvable filtration, we may orient $\overline{G}$ by taking $X_1$ to be the root.
\end{proof}
\begin{exm}\label{ex:TF2Graphic}
Suppose $\mathcal{A}$ is defined by $xyz(x-y)(y-z)$ (this is the graphic arrangement corresponding to a four-cycle with a chord). Then $\overline{G}$ consists of two vertices corresponding to the triple points $X_1$ and $X_2$ defined by $xy(x-y)$ and $yz(y-z)$, respectively. Clearly $\mathcal{A}$ is a supersolvable $TF_2$ arrangement. By Theorem~\ref{thm:FreeMultFreeTF2}, $(\mathcal{A},\mathbf{m})$ is free if and only if either $D(\mathcal{A}_{X_1},\mathbf{m}_{X_1})$ or $D(\mathcal{A}_{X_2},\mathbf{m}_{X_2})$ has an exponent equal to $\mathbf{m}(y)$.
If $\mathbb{K}$ has characteristic zero, this happens if and only if $\mathbf{m}(y)\ge \mathbf{m}(x)+\mathbf{m}(x-y)-1$ or $\mathbf{m}(y)\ge \mathbf{m}(z)+\mathbf{m}(y-z)-1$ (by~\cite{Wakamiko}), which recovers Abe's classification in~\cite{AbeDeletedA3}. In fact Abe's classification has a natural extension to any graphic $TF_2$ arrangement (these correspond to chordal graphs with two-dimensional clique complex). For instance, suppose $\mathcal{A}$ is defined by $xyzw(x-y)(y-z)(z-w)$. Then $\overline{G}(\mathcal{A})$ has three vertices and Theorem~\ref{thm:FreeMultFreeTF2} combined with the classification in~\cite{Wakamiko} yields that $(\mathcal{A},\mathbf{m})$ is free if and only if
\begin{itemize}
\item $\mathbf{m}(y)\ge \mathbf{m}(x)+\mathbf{m}(x-y)-1$ and $\mathbf{m}(z)\ge \mathbf{m}(y)+\mathbf{m}(y-z)-1$ or
\item $\mathbf{m}(y)\ge \mathbf{m}(z)+\mathbf{m}(y-z)-1$ and $\mathbf{m}(z)\ge \mathbf{m}(w)+\mathbf{m}(z-w)-1$ or
\item $\mathbf{m}(y)\ge \mathbf{m}(x)+\mathbf{m}(x-y)-1$ and $\mathbf{m}(z)\ge \mathbf{m}(w)+\mathbf{m}(z-w)-1$.
\end{itemize}
Each of the three possibilities corresponds to a choice of root for $\overline{G}$.
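Since the three bullet conditions are plain integer inequalities, they are easy to check mechanically. The following sketch (a hypothetical helper, not from the text, assuming characteristic zero and the classification quoted above) tests a multiplicity on the arrangement $xyzw(x-y)(y-z)(z-w)$:

```python
# Hypothetical checker for the three bullet conditions above (char. 0 only).
# m is a dict assigning a positive multiplicity to each defining linear form.
def is_free_multiplicity(m):
    left    = m['y'] >= m['x'] + m['x-y'] - 1   # root toward the xy(x-y) end
    middle  = m['z'] >= m['y'] + m['y-z'] - 1
    middle2 = m['y'] >= m['z'] + m['y-z'] - 1
    right   = m['z'] >= m['w'] + m['z-w'] - 1
    # one disjunct per choice of root for the reduced graph
    return (left and middle) or (middle2 and right) or (left and right)

forms = ['x', 'y', 'z', 'w', 'x-y', 'y-z', 'z-w']
simple = {f: 1 for f in forms}      # the simple arrangement is free
constant2 = {f: 2 for f in forms}   # constant multiplicity 2 fails all three
print(is_free_multiplicity(simple), is_free_multiplicity(constant2))
```

The second test value illustrates the remark below that constant multiplicities greater than one are never free here in characteristic zero.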
By similar arguments it is not difficult to show that a constant multiplicity of value greater than one is never a free multiplicity on a graphic $TF_2$ arrangement of rank at least three over a field of characteristic zero. In fact, if a constant multiplicity is free on a graphic arrangement over a field of characteristic zero, then the arrangement is a product of braid arrangements \cite[Theorem~6.6]{GSplinesGraphicArr}. In contrast, suppose $\mathbb{K}$ is a field of characteristic $p$. Then it is straightforward to check (using Saito's criterion) that
\[
x^{p^k}\frac{\partial}{\partial x}+y^{p^k}\frac{\partial}{\partial y}\qquad\mbox{and}\qquad
x^{2p^k}\frac{\partial}{\partial x}+y^{2p^k}\frac{\partial}{\partial y}
\]
form a basis for the multi-arrangement defined by $x^{p^k}y^{p^k}(x-y)^{p^k}$ (here $k$ is any positive integer). It follows from Theorem~\ref{thm:FreeMultFreeTF2} that the constant multiplicity of value $p^k$ is always free on a graphic $TF_2$ arrangement over a field of characteristic $p$. Ziegler~\cite{ZieglerMatroid} has shown that freeness of simple arrangements may also depend on the characteristic of the field.
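Saito's criterion here amounts to checking that the determinant of the $2\times 2$ matrix of the two derivations applied to $x$ and $y$ equals $\mathcal{Q}$ up to a non-zero constant modulo $p$. A quick sanity check of this determinant identity, assuming sympy is available (shown for $p=2$, $k=2$, where the two candidate degrees agree with the displayed basis):

```python
# Sanity check of Saito's criterion in characteristic p (here p = 2, k = 2)
# for derivations theta = x^q d/dx + y^q d/dy and upsilon with q-th powers
# replaced by 2q-th powers, where q = p^k.
from sympy import symbols, expand, Poly

x, y = symbols('x y')
p, k = 2, 2
q = p**k

# determinant of the Saito matrix [[theta(x), theta(y)], [upsilon(x), upsilon(y)]]
det = expand(x**q * y**(2*q) - y**q * x**(2*q))
# defining polynomial of the multi-arrangement x^q y^q (x-y)^q
Q = expand(x**q * y**q * (x - y)**q)

# det agrees with -Q modulo p, so the two derivations form a basis
assert Poly(det + Q, x, y, modulus=p).is_zero
```

The same identity follows symbolically from $x^qy^{2q}-y^qx^{2q}=x^qy^q(y^q-x^q)$ and $(x-y)^q=x^q-y^q$ in characteristic $p$.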
\end{exm}
\begin{exm}[Example~\ref{ex:multiplicitiessupersolvable}, continued]\label{ex:multiplicitiessupersolvable0}
Consider the arrangement $\mathcal{A}(\alpha,\beta)$ defined by $xyz(x-\alpha z)(x-\beta z)(y-z)$ where $\alpha,\beta\in\mathbb{K}$. This is a $TF_2$ arrangement with two rank two flats in $L_2^{\text{\tiny{trip}}}$: the flat $X_1$ defined by $xz(x-\alpha z)(x-\beta z)$ and the flat $X_2$ defined by $yz(y-z)$. The reduced graph $\overline{G}(\mathcal{A})$ consists of the three vertices $H,X_1,X_2$ joined by the two edges $[H,X_1]$ and $[H,X_2]$. By Theorem~\ref{thm:FreeMultFreeTF2} a multi-arrangement $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ is free if and only if either $D(\mathcal{A}_{X_1},\mathbf{m}_{X_1})$ or $D(\mathcal{A}_{X_2},\mathbf{m}_{X_2})$ has an exponent of $\mathbf{m}(z)$. Example~\ref{ex:multiplicitiessupersolvable} continues the analysis for this multi-arrangement.
\end{exm}
\begin{remark}
The characterization in Theorem~\ref{thm:FreeMultFreeTF2} reduces the problem of determining free multiplicities on free $TF_2$ arrangements to the problem of determining when rank two multi-arrangements have an exponent which is equal to the multiplicity of one of its points, which is a difficult problem in general~\cite{DerProjLine}. Somewhat surprisingly, free multiplicities on non-free $TF_2$ arrangements admit a complete description, at least in characteristic zero.
\end{remark}
Suppose $\mathcal{A}$ is a non-free $TF_2$ arrangement which admits a free multiplicity. As mentioned earlier, $|\mathcal{A}|=\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$, or equivalently $|\mathcal{A}|+\#L_2^{\text{\tiny{trip}}}=\sum |\mathcal{A}_X|$. Since $\overline{G}(\mathcal{A})$ is connected (see the proof of Proposition~\ref{prop:FreeTF2Arrangements}) and $\overline{G}(\mathcal{A})$ has as many vertices as edges, there is a unique cycle in $\overline{G}(\mathcal{A})$. Write $C=\{H_0,X_0,H_1,X_1,\ldots,H_{k-1},X_{k-1},H_0\}$ for this cycle, and let $\alpha_0,\ldots,\alpha_{k-1}$ be the linear forms corresponding to $H_0,\ldots,H_{k-1}$. We observe that the linear forms $\alpha_0,\ldots,\alpha_{k-1}$ must be linearly independent. To see this, define $\mathcal{A}'=\mathcal{A}_{X_0}\cup\mathcal{A}_{X_1}\cup\cdots\cup\mathcal{A}_{X_{k-2}}$. Then $\mathcal{A}'$ has rank $k$, contains all hyperplanes defined by $\alpha_0,\ldots,\alpha_{k-1}$, and every defining form of $\mathcal{A}'$ is expressible using $\alpha_0,\ldots,\alpha_{k-1}$; hence these $k$ forms span a space of rank $k$ and are linearly independent.
\begin{thm}[Free multiplicities on non-free $TF_2$ arrangements]\label{thm:FreeMultNonFreeTF2}
Suppose $\mathcal{A}$ is a non-free $TF_2$ arrangement (over a field of characteristic zero) which admits a free multiplicity. As above, let $C=\{H_0,X_0,H_1,X_1,\ldots,H_{k-1},X_{k-1},H_0\}$ be the unique cycle in $\overline{G}=\overline{G}(\mathcal{A})$. Then $\mathbf{m}$ is a free multiplicity on $\mathcal{A}$ if and only if the following conditions are satisfied:
\begin{enumerate}
\item $\mathbf{m}(H)=1$ for every $H\in\mathcal{A}$ which is not a vertex of $C$
\item There is an integer $n>0$ so that $\mathbf{m}(H)=n$ for every $H\in\mathcal{A}$ which is a vertex of $C$
\item There are $B_1,\ldots,B_k\in\mathbb{K}$ satisfying
\begin{itemize}
\item $B_1\cdots B_k\neq 1$ and
\item for every $H\in\mathcal{A}_{X_i}\setminus\{H_i,H_{i+1}\}$ (indices taken modulo $k$), $\alpha_H$ can be written (up to scalar multiple) as $\alpha_H=\alpha_i+\beta^H_i\alpha_{i+1}$ for some $\beta^H_i\in\mathbb{K}$ satisfying $(\beta^H_i)^{n-1}=B_i$
\end{itemize}
\end{enumerate}
\end{thm}
\begin{proof}
By Proposition~\ref{prop:H2Pres}, we have $|\mathcal{A}|=\sum_{X\in L_2^{\text{\tiny{trip}}}}(|\mathcal{A}_X|-1)$, or equivalently $|\mathcal{A}|+\#L_2^{\text{\tiny{trip}}}=\sum |\mathcal{A}_X|$. So for any choice of $\theta_X$ for every $X\in L_2^{\text{\tiny{trip}}}$ the matrix $\overline{M'}=\overline{M'}(\theta_X\mid X\in L_2^{\text{\tiny{trip}}})$ is a square matrix. We find its determinant. A term of $\det(\overline{M'})$ corresponds to an orientation of $\overline{G}$ in which every vertex has exactly one incoming edge. Since $\overline{G}$ has a unique cycle, such an orientation of $\overline{G}$ is determined by an orientation of the cycle (every other edge must be directed `away' from the cycle). Since there are only two choices of orientation for the cycle $C$ which satisfy that every vertex has exactly one incoming edge, there are only two terms in $\det(\overline{M'})$. In fact, if $C=\{H_0,X_0,H_1,X_1,\ldots,H_{k-1},X_{k-1},H_0\}$,
\begin{equation}\label{eq:det}
\det(\overline{M'})=\left(\prod\limits_{i=0}^{k-1} \overline{\theta}_{X_i}(\alpha_i)-\prod\limits_{i=0}^{k-1} \overline{\theta}_{X_i}(\alpha_{i+1})\right)\prod_{(H\to X)\notin C} \overline{\theta}_X(\alpha_H),
\end{equation}
where the index $i+1$ is taken modulo $k$ and the directed edge $H\to X$ is the unique direction `away' from the cycle $C$. From Proposition~\ref{prop:H2Pres}, $(\mathcal{A},\mathbf{m})$ is free if and only if there is a choice of $\theta_{X}$ for every $X\in L_2^{\text{\tiny{trip}}}$ so that the determinant~\eqref{eq:det} is a non-zero constant. We assume that we have such a choice of $\theta_X$, $X\in L_2^{\text{\tiny{trip}}}$, and deduce the form for $(\mathcal{A},\mathbf{m})$ given in the theorem. Lemma~\ref{lem:Boolean} guarantees that $\overline{\theta}_{X}(\alpha_H)\neq 0$ for any $X\in L_2^{\text{\tiny{trip}}}$ and $H<X$. Now, fixing an arbitrary $X_i$ in the cycle $C$, we must have $\theta_{X_i}(\alpha_i)=s_i\alpha_i^{\mathbf{m}(\alpha_i)}$ and $\theta_{X_i}(\alpha_{i+1})=t_i\alpha_{i+1}^{\mathbf{m}(\alpha_{i+1})}$ for some non-zero constants $s_i$ and $t_i$. Hence $\mathbf{m}(\alpha_i)=\mathbf{m}(\alpha_{i+1})=\deg(\theta_{X_i})$. Reading around the cycle $C$, we see that $\mathbf{m}(\alpha_0)=\mathbf{m}(\alpha_1)=\cdots=\mathbf{m}(\alpha_{k-1})=n$ for some positive integer $n$, proving (2).
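The two-term structure of~\eqref{eq:det} reflects the fact that the bipartite incidence pattern of a cycle admits exactly two permutation matchings, one per orientation. As a sanity check on the signs, the following sketch (hypothetical symbols $a_i=\overline{\theta}_{X_i}(\alpha_i)$ and $b_i=\overline{\theta}_{X_i}(\alpha_{i+1})$, a bare $3$-cycle with no edges off the cycle, assuming sympy) computes the determinant directly:

```python
# Determinant of the cycle block for k = 3: rows are the six edges
# [X_i, H_i] and [X_i, H_{i+1}]; columns are H_0, H_1, H_2, X_0, X_1, X_2.
from sympy import symbols, Matrix, expand

a0, a1, a2, b0, b1, b2 = symbols('a0 a1 a2 b0 b1 b2')
M = Matrix([
    # H0  H1  H2  X0  X1  X2
    [1,   0,  0, a0,  0,  0],   # edge [X0, H0]
    [0,   1,  0, b0,  0,  0],   # edge [X0, H1]
    [0,   1,  0,  0, a1,  0],   # edge [X1, H1]
    [0,   0,  1,  0, b1,  0],   # edge [X1, H2]
    [0,   0,  1,  0,  0, a2],   # edge [X2, H2]
    [1,   0,  0,  0,  0, b2],   # edge [X2, H0]
])
# only the two orientations of the cycle contribute, with opposite signs
assert expand(M.det() - (a0*a1*a2 - b0*b1*b2)) == 0
```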
Next again fix an arbitrary $X_i$ in the cycle $C$ and consider the multi-arrangement $(\mathcal{A}_{X_i},\mathbf{m}_{X_i})$. Since $X_i$ has rank $2$, we may assume $(\mathcal{A}_{X_i},\mathbf{m}_{X_i})$ is defined by $\mathcal{Q}(\mathcal{A}_{X_i},\mathbf{m}_{X_i})=x^ny^n\prod_{j=1}^{\ell}(x-a_jy)^{m_j}$ for some integer $\ell\ge 1$ (since $X_i\in L_2^{\text{\tiny{trip}}}$) and some non-zero constants $a_1,\ldots,a_{\ell}$ (we are writing $m_j$ for $\mathbf{m}(x-a_jy)$). Notice that $m_j\le n$ for all $j=1,\ldots,\ell$ since $\theta_{X_i}$ has degree $n$ (this is easily seen by applying Lemma~\ref{lem:Boolean}). In particular, $(\mathcal{A}_{X_i},\mathbf{m}_{X_i})$ is \textit{balanced}; that is, $2n\le |\mathbf{m}_{X_i}|=2n + \sum_{j=1}^{\ell} m_j$.
Next, a result of Abe~\cite[Theorem~1.6]{AbeFreeness3Arrangements} shows that the exponents of a balanced $2$-multi-arrangement differ by at most $|\mathcal{A}_{X_i}|-2=\ell$. Write $d^{X_i}_1\ge d^{X_i}_2$ for the exponents of $(\mathcal{A}_{X_i},\mathbf{m}_{X_i})$, and recall that we are assuming $d^{X_i}_2=\deg(\theta_{X_i})=n$. From Abe's result we get that $|d^{X_i}_1-d^{X_i}_2|=d^{X_i}_1-n\le \ell$, so $d^{X_i}_1\le n+\ell$. But $|\mathbf{m}_{X_i}|=2n+\sum_{j=1}^{\ell} m_j=n+d^{X_i}_1$, so $d^{X_i}_1=n+\sum_{j=1}^{\ell} m_j\le n+\ell$. Since $m_j\ge 1$ for every $j$, we must have $m_j=1$ for each $j=1,\ldots,\ell$. Now, applying Lemma~\ref{lem:nn11exp} implies that $a_1^{n-1}=\cdots=a_{\ell}^{n-1}$. This yields the second bullet point under (3).
As remarked just prior to the statement of Theorem~\ref{thm:FreeMultNonFreeTF2}, $\alpha_0,\cdots,\alpha_{k-1}$ are linearly independent. Change coordinates so that $\alpha_0=x_0,\ldots,\alpha_{k-1}=x_{k-1}$. Lemma~\ref{lem:nn11exp} again yields that the derivation $\theta_{X_i}$ has the form $\theta_{X_i}=x_i^n\frac{\partial}{\partial x_i}+B_ix_{i+1}^n\frac{\partial}{\partial x_{i+1}}$. Plugging this into equation~\eqref{eq:det} yields
\begin{equation}\label{eq:det2}
\det(\overline{M'})=\left(1-\prod_{i=0}^{k-1} B_i\right)\prod_{(H\to X)\notin C} \overline{\theta}_X(\alpha_H),
\end{equation}
yielding the first bullet point under (3) since this must be a \textit{non-zero} constant.
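The normal form $\theta_{X_i}=x_i^n\frac{\partial}{\partial x_i}+B_ix_{i+1}^n\frac{\partial}{\partial x_{i+1}}$ used above forces the relation $(\beta^H_i)^{n-1}=B_i$: the derivation $x^n\partial_x+By^n\partial_y$ sends $x-ay$ into $\langle x-ay\rangle$ exactly when $B=a^{n-1}$. A sympy sketch of this divisibility check (with $n=4$ as a sample value):

```python
# theta = x^n d/dx + B y^n d/dy applied to the linear form x - a*y:
# the image is divisible by x - a*y exactly when B = a^(n-1).
from sympy import symbols, expand, simplify

x, y, a, B = symbols('x y a B')
n = 4
image = expand(x**n - B*a*y**n)   # theta(x - a*y)
# substituting x = a*y detects divisibility by x - a*y
rem = image.subs(x, a*y)          # a^n y^n - B a y^n
assert simplify(rem.subs(B, a**(n-1))) == 0
assert simplify(rem.subs(B, a**n)) != 0   # a wrong B leaves a remainder
```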
Now we prove (1). If $H\in\mathcal{A}$ is not a vertex of $C$ but there is some $X\in C$ so that $H<X$, then $H\in\mathcal{A}_X$ and $\mathbf{m}(H)=1$ since $H\notin C$. So suppose $H\in\mathcal{A}$ but $H\nless X$ for any $X\in C$. Then $H<X$ for some $X\in L_2^{\text{\tiny{trip}}}$ with $X\notin C$. There is a unique vertex $H'$ adjacent to $X$ in $\overline{G}$ which is closer to $C$ than $X$ is. Thus $H'\to X$ is a directed edge in any orientation of $\overline{G}$ satisfying that every vertex has a unique incoming edge, so $\overline{\theta}_X(\alpha_H)$ appears as a factor in the expression of Equation~\eqref{eq:det2}. Since we assume this expression is a non-zero constant, $\theta_X(\alpha_H)=c\alpha_H^{\mathbf{m}(H)}$ for a non-zero constant $c$. It follows from Lemma~\ref{lem:Boolean} that $(\mathcal{A}_X,\mathbf{m}_X)$ is simple, i.e.\ $\mathbf{m}_X\equiv 1$. Hence $\mathbf{m}(H)=1$ as well.
Finally, suppose $\mathcal{A}$ is a non-free $TF_2$ arrangement and $(\mathcal{A},\mathbf{m})$ has the form indicated in the statement of the theorem. Then $\det(\overline{M'})$ is clearly a non-zero constant by equation~\eqref{eq:det2}, so $(\mathcal{A},\mathbf{m})$ is free by Proposition~\ref{prop:H2Pres}.
\end{proof}
\begin{exm}[Example~\ref{ex:TotallyNonFree}, revisited]\label{ex:TotallyNonfree0}
Consider the arrangement $\mathcal{A}(\alpha,\beta)$ defined by $xyz(x-\alpha y)(x-\beta y)(y-z)(z-x)$, where $\alpha,\beta\in\mathbb{K}$. This is a non-free $TF_2$ arrangement with three rank two flats in $L_2^{\text{\tiny{trip}}}$: the flat $X_0$ defined by $xy(x-\alpha y)(x-\beta y)$, the flat $X_1$ defined by $yz(y-z)$, and the flat $X_2$ defined by $xz(x-z)$. The reduced graph $\overline{G}(\mathcal{A})$ consists of the cycle $C=\{H_0,X_0,H_1,X_1,H_2,X_2,H_0\}$, where $H_0=V(x),H_1=V(y),$ and $H_2=V(z)$. By Theorem~\ref{thm:FreeMultNonFreeTF2}, $(\mathcal{A}(\alpha,\beta),\mathbf{m})$ is free if and only if $\mathcal{Q}(\mathcal{A},\mathbf{m})$ has the form
\[
\mathcal{Q}(\mathcal{A},\mathbf{m})=x^ny^nz^n(x-\alpha y)(x-\beta y)(y-z)(z-x),
\]
where $\alpha^{n-1}=\beta^{n-1}\neq 1$.
\end{exm}
\subsection{Further counterexamples to Orlik's conjecture}
In this section we consider the family of arrangements $\mathcal{A}_{r,t}$ with defining polynomial
\[
\mathcal{Q}(\mathcal{A}_{r,t})=x_0\left(\prod\limits_{i=1}^r (x^2_i-x^2_0) \right) (x_1-x_2)\cdots (x_{r-1}-x_r)(x_r-tx_1),
\]
where $0\neq t\in\mathbb{K}$. Write $H_0=V(x_0)$. The restriction $\mathcal{A}^{H_0}_{r,t}$ has defining polynomial
\[
\mathcal{Q}(\mathcal{A}^{H_0}_{r,t})=\left(\prod\limits_{i=1}^r x_i \right) (x_1-x_2)\cdots (x_{r-1}-x_r)(x_r-tx_1).
\]
Ziegler's multi-restriction has the defining polynomial
\[
\mathcal{Q}(\mathcal{A}^{H_0},\mathbf{m}^{H_0})=\left(\prod\limits_{i=1}^r x_i^2 \right) (x_1-x_2)\cdots (x_{r-1}-x_r)(x_r-tx_1).
\]
\begin{prop}\label{prop:Xr}
If $t\neq 1$ and $\mathbb{K}$ has characteristic zero, the arrangement $\mathcal{A}_{r,t}$ satisfies
\begin{enumerate}
\item $(\mathcal{A}^{H_0}_{r,t},\mathbf{m}^{H_0})$ is free for $t\neq 0,1$,
\item $\mathcal{A}_{r,t}$ is free if and only if $t=-1$,
\item The minimal free resolution of $D(\mathcal{A}^{H_0}_{r,t})$ is a twisted and truncated Koszul complex, $\mbox{\emph{reg} }(D(\mathcal{A}^{H_0}_{r,t}))=3$, and $\mbox{pdim}(D(\mathcal{A}^{H_0}_{r,t}))=r-2$ (the maximum).
\end{enumerate}
\end{prop}
\begin{proof}
Write $X_{r,t}$ for $\mathcal{A}_{r,t}^{H_0}$, $\alpha_i$ for $x_i$ ($i=1,\ldots,r$), $\beta_i$ for $x_i-x_{i+1}$ ($i=1,\ldots,r-1$), and $\beta_r$ for $x_r-tx_1$. The space of all relations on the linear forms of $X_{r,t}$ is an $r$-dimensional space. Write $Y_i$ for the `triple flat' of codimension two given by the vanishing of the forms $\alpha_i,\alpha_{i+1},\beta_i$ for $i=1,\ldots,r-1$, and write $Y_r$ for the flat determined by $\alpha_1,\alpha_r,\beta_r$. Clearly $L_2^{\text{\tiny{trip}}}=\{Y_1,\ldots,Y_r\}$ and it is not difficult to see that each $Y_i$ contributes one relation to the relation space and these relations are linearly independent, hence $X_{r,t}$ is a $TF_2$ arrangement. Since $\#L_2^{\text{\tiny{trip}}}=r$, the rank of $X_{r,t}$, it follows from Theorem~\ref{thm:FreeMultNonFreeTF2} that $\mathbf{m}^{H_0}$ is a free multiplicity on $X_{r,t}$, proving (1).
For (2), we use Theorem~\ref{thm:Yoshinaga}. We already have $(\mathcal{A}^{H_0}_{r,t},\mathbf{m}^{H_0})$ free by (1), so we consider local freeness of $\mathcal{A}_{r,t}$ along $H_0$. If $t\neq -1$, then the closed sub-arrangement with defining equation
\[
(x_1^2-x_0^2)(x_r^2-x_0^2)(x_r-tx_1)x_0
\]
is not free, so neither is $\mathcal{A}_{r,t}$. So we need to prove local freeness when $t=-1$. The closed sub-arrangements of $\mathcal{A}_{r,-1}$ along $H_0$ are isomorphic to $A_1\times A_1\times A_1,A_1\times A_2,A_3$ with a hyperplane removed (the \textit{deleted} $A_3$ arrangement), or $A_3$. Since these are all free, $\mathcal{A}_{r,-1}$ is free by Theorem~\ref{thm:Yoshinaga}.
For (3), we use the presentation from Proposition~\ref{prop:H2Pres}. We consider only the case $\mathbf{m}\equiv 1$. As in Proposition~\ref{prop:H2Pres}, write formal symbols $[H]$ (or $[\alpha_H]$) for the generator of $J(H)=\langle \alpha_H \rangle$ and $[X,H]$ (or $[X,\alpha_H]$) for the generator of $J(H)$ inside the direct sum $\bigoplus_{X\in L_2^{\text{\tiny{trip}}}}\bigoplus_{H<X} J(H)$. In the case of $X_{r,t}$, the map $\iota:\bigoplus J(H)\rightarrow\bigoplus_{X,H} J(H)$ has the form $\iota([\alpha_i])=[Y_{i-1},\alpha_i]+[Y_i,\alpha_i]$ for $i=2,\ldots,r$, $\iota([\alpha_1])=[Y_r,\alpha_1]+[Y_1,\alpha_1]$, and $\iota([\beta_i])=[Y_i,\beta_i]$. Hence in $\mbox{coker}(\iota)$, we may disregard the generators corresponding to $[Y_i,\beta_i]$ and we can choose generators $[Y_1,\alpha_1],\ldots,[Y_r,\alpha_r]$ with $[Y_1,\alpha_2]=-[Y_2,\alpha_2]$, etc. With this choice of basis, we determine that the map $\sum\overline{\psi_X}:\oplus D(\mathcal{A}_X,\mathbf{m}_X)\rightarrow \mbox{coker}(\iota)$ is given on $\theta\in D(\mathcal{A}_{Y_1},\mathbf{m}_{Y_1})$ by $\theta\mapsto \overline{\theta}(\alpha_1)[Y_1,\alpha_1]+\overline{\theta}(\alpha_2)[Y_1,\alpha_2]=\overline{\theta}(\alpha_1)[Y_1,\alpha_1]-\overline{\theta}(\alpha_2)[Y_2,\alpha_2]$, where $\overline{\theta}(\alpha_i)=\theta(\alpha_i)/\alpha_i$ (and similarly for $\theta\in D(\mathcal{A}_{Y_i},\mathbf{m}_{Y_i})$, $i>1$). Thus we may represent the map $\sum \overline{\psi}_X$ by the matrix
\[
\bordermatrix{&\theta_1 &\upsilon_1 & \theta_2 & \upsilon_2& \cdots &\theta_r & \upsilon_r \cr
[Y_1,\alpha_1] &\overline{\theta}_1(x_1) & \overline{\upsilon}_1(x_1) & 0 & 0 &\cdots & -\overline{\theta}_r(x_1) & -\overline{\upsilon}_r(x_1) \cr
[Y_2,\alpha_2] & -\overline{\theta}_1(x_2) & -\overline{\upsilon}_1(x_2) & \overline{\theta}_2(x_2) & \overline{\upsilon}_2(x_2) & \cdots & 0 & 0\cr
[Y_3,\alpha_3] & 0 & 0 & -\overline{\theta}_2(x_3) & -\overline{\upsilon}_2(x_3) & \cdots & 0 & 0 \cr
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots &\vdots & \vdots \cr
[Y_r,\alpha_r] & 0 & 0 & 0 & 0 & \cdots & \overline{\theta}_{r}(x_r) & \overline{\upsilon}_r(x_r)
}
\]
Now, for $i=1,\ldots,r$, $D(\mathcal{A}_{Y_i})$ is generated by the derivations
\[
\begin{array}{l}
\theta_i=x_i\dfrac{\partial}{\partial x_i}+x_{i+1}\dfrac{\partial}{\partial x_{i+1}}\\
\upsilon_i=x_i^2\dfrac{\partial}{\partial x_i}+x_{i+1}^2\dfrac{\partial}{\partial x_{i+1}}
\end{array}
\]
for $i=1,\ldots,r-1$ and $D(Y_r)$ is generated by
\[
\begin{array}{l}
\theta_r=x_r\dfrac{\partial}{\partial x_r}+x_{1}\dfrac{\partial}{\partial x_{1}}\\
\upsilon_r=x_r^2\dfrac{\partial}{\partial x_r}+tx_1^2\dfrac{\partial}{\partial x_{1}}
\end{array}
\]
So the above matrix simplifies to
\[
M=
\bordermatrix{&\theta_1 &\upsilon_1 & \theta_2 & \upsilon_2& \cdots &\theta_r & \upsilon_r \cr
[Y_1,\alpha_1] &1 & x_1 & 0 & 0 &\cdots & -1 & -tx_1 \cr
[Y_2,\alpha_2] & -1 & -x_2 & 1 & x_2 & \cdots & 0 & 0\cr
[Y_3,\alpha_3] & 0 & 0 & -1 & -x_3 & \cdots & 0 & 0 \cr
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots &\vdots & \vdots \cr
[Y_r,\alpha_r] & 0 & 0 & 0 & 0 & \cdots & 1 & x_r
}
\]
Notice that in $\mbox{coker}(M)$, the Euler derivations $\theta_1,\ldots,\theta_r$ identify all basis elements $[Y_1,\alpha_1],\ldots,[Y_r,\alpha_r]$ to a single basis element. Hence
\[
\mbox{coker}(M)\cong H^2(\mathcal{J}^\bullet)\cong \frac{S(-1)}{\langle x_1-x_2,x_2-x_3,\ldots,x_{r-1}-x_r,x_r-tx_1\rangle},
\]
where the $S(-1)$ encodes the fact that the degrees of $[Y_i,\alpha_i]$ are all one. Since $t\neq 0,1$, $H^2(\mathcal{J}^\bullet)\cong S/\mathfrak{m}$, where $\mathfrak{m}$ is the maximal ideal of $S$.
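The last identification uses that the $r$ linear forms $x_1-x_2,\ldots,x_{r-1}-x_r,x_r-tx_1$ span the degree-one part of $\mathfrak{m}$ when $t\neq 1$; their coefficient matrix has determinant $1-t$ up to sign. A sympy sketch of this invertibility check in the small case $r=3$ (an illustration, not part of the argument):

```python
# Coefficient matrix of the linear forms x1-x2, x2-x3, x3-t*x1 (case r = 3):
# invertible iff t != 1, so the forms generate the maximal ideal exactly then.
from sympy import symbols, Matrix, simplify

t = symbols('t')
A = Matrix([
    [1, -1,  0],   # x1 - x2
    [0,  1, -1],   # x2 - x3
    [-t, 0,  1],   # x3 - t*x1
])
assert simplify(A.det() - (1 - t)) == 0
```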
Now, applying the snake lemma to the diagram in Figure~\ref{fig:H2Pres} and using the fact that $\iota$ is injective (see the proof of Proposition~\ref{prop:H2Pres}), we get the four-term exact sequence
\[
0\rightarrow D(X_{r,t}) \rightarrow \bigoplus\limits_{Y\in L_2^{\text{\tiny{trip}}}} D((X_{r,t})_Y,\mathbf{m}_Y) \xrightarrow{M} S(-1)^\kappa \rightarrow H^2(\mathcal{J}(X_{r,t}))\rightarrow 0,
\]
where $S(-1)^{\kappa}=\mbox{coker}(\iota)$. Above we noticed this prunes down to
\[
0\rightarrow D(X_{r,t}) \rightarrow S(-1)\oplus S(-2)^{r} \xrightarrow{T} S(-1) \rightarrow \dfrac{S}{\mathfrak{m}}\rightarrow 0,
\]
where $T=\begin{bmatrix} 0 & x_1-x_2 & \cdots & x_r-tx_1\end{bmatrix}$. It follows that
\[
D(X_{r,t})\cong S(-1) \oplus K_2(\mathfrak{m})(-1),
\]
where $K_2(\mathfrak{m})(-1)$ is the module of second syzygies of $\mathfrak{m}$, twisted by $-1$. It is well-known that $K_2(\mathfrak{m})$ has $\binom{r}{2}$ generators of degree $2$, so $D(X_{r,t})$ is generated by the Euler derivation along with $\binom{r}{2}$ generators of degree $3$. Its minimal free resolution is given by truncating the Koszul complex at $K_2(\mathfrak{m})$, so it is linear of length $r-2$, the maximum possible. Since the resolution is linear, $\mbox{\emph{reg} }(D(X_{r,t}))=3$, where $\mbox{\emph{reg} }$ denotes Castelnuovo--Mumford regularity. This completes the proof of (3).
\end{proof}
\begin{remark}\label{rem:GeneralizedX3}
If $t\neq 1$, then the only non-boolean generic flats of $X_{r,t}$ are the obvious ones of rank two corresponding to the closed circuits of length three. Hence the bound on $\mbox{pdim}(X_{r,t})$ given by Corollary~\ref{cor:pdimcircuit} is zero, while $\mbox{pdim}(X_{r,t})=r-2$. If $t=1$ then we can see that $\beta_1,\ldots,\beta_r$ forms a closed circuit of length $r$, in which case $\mbox{pdim}(D(X_{r,1},\mathbf{m}))\ge r-3$ by Corollary~\ref{cor:pdimcircuit}. In fact, if we introduce the extra variable $x_0$ and change coordinates by the rule $x_i\to x_i-x_0$, we see that $X_{r,1}$ is the graphic arrangement corresponding to a wheel with $r$ spokes. From~\cite[Example~7.1]{GSplinesGraphicArr}, $\mbox{pdim}(D(X_{r,1},\mathbf{m}))=r-3$ for any multiplicity $\mathbf{m}$.
\end{remark}
\section{The case of line arrangements}\label{sec:SyzygiesTeraoConj}
It is well-known that $D(\mathcal{A})$ may be identified with the module of syzygies on the Jacobian ideal $\mbox{Jac}(\mathcal{A})$ of the defining polynomial of $\mathcal{A}$; hence $\mathcal{A}$ is free if and only if $\mbox{Jac}(\mathcal{A})$ has codimension two and is Cohen--Macaulay. In this section we show that, for rank three arrangements, $D(\mathcal{A},\mathbf{m})$ may be identified with potentially higher syzygies of a less geometric object. We use this to give another formulation of Terao's conjecture for lines in $\mathbb{P}^2$.
First, suppose $\mathcal{A}$ is a $TF_2$ arrangement and consider the diagram in Figure~\ref{fig:H2Pres}. Since $\iota$ is injective (see the proof of Proposition~\ref{prop:H2Pres}) and $H^1(\mathcal{J}^\bullet)\cong D(\mathcal{A},\mathbf{m})$, the full snake lemma applied to this diagram yields the exact sequence
\[
0\rightarrow D(\mathcal{A},\mathbf{m}) \rightarrow \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \rightarrow S^{\kappa} \rightarrow H^2(\mathcal{J}^\bullet) \rightarrow 0,
\]
where the inclusion $D(\mathcal{A},\mathbf{m})\rightarrow \bigoplus D(\mathcal{A}_X,\mathbf{m}_X)$ is the sum of the restriction maps $D(\mathcal{A},\mathbf{m})\rightarrow D(\mathcal{A}_X,\mathbf{m}_X)$ (recall that the isomorphism $D(\mathcal{A},\mathbf{m})\cong H^1(\mathcal{J})$ is given by the map $\psi(\theta)=\sum_{H\in L} \theta(\alpha_H)$). By Theorem~\ref{thm:Free}, $D(\mathcal{A},\mathbf{m})$ is free if and only if
\[
0\rightarrow D(\mathcal{A},\mathbf{m}) \rightarrow \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \xrightarrow{\sum\overline{\psi_X}} S^{\kappa}\rightarrow 0
\]
is a short exact sequence. Hence if $D(\mathcal{A},\mathbf{m})$ is free we may identify it with the syzygies on a (necessarily non-minimal) set of generators for the free module $S^\kappa$.
Now suppose $\mathcal{A}$ is rank three, irreducible, and totally formal but not $TF_2$, so $\mathcal{S}^3(\mathcal{A})=S_3(\mathcal{A})\neq 0$. We can set up (see Figure~\ref{fig:H2PresC}) a diagram very similar to the one in Figure~\ref{fig:H2Pres}. All maps in the top two rows of Figure~\ref{fig:H2PresC} are the same as in Figure~\ref{fig:H2Pres}; in particular $\kappa=\sum_{X\in L_2^{\text{\tiny{trip}}}}|\mathcal{A}_X|-|\mathcal{A}|$ just as in Proposition~\ref{prop:H2Pres}. The chain complex $\mathcal{J}^\bullet(\mathcal{A},\mathbf{m})$ appears as the right-most column. The map labeled $q$ is the quotient map. The existence of the bottom right horizontal map $\Delta:\mbox{coker}(\iota)\rightarrow J_3(\mathcal{A},\mathbf{m})$ follows from the commutativity of the upper right square; furthermore $\Delta$ is surjective since $\delta^1_J$ and $\sum (\delta^1_J)_X$ are both surjective. The lower left map $i:\ker(\Delta)\rightarrow S^\kappa$ is the inclusion and the map $\hat{q}$ is lifted from $q$ in the obvious way.
\begin{figure}
\centering
\begin{tikzcd}
& \bigoplus\limits_{i=1}^n J(H_i) \ar{r}{\cong} \ar{d}{\iota} & \bigoplus\limits_{i=1}^n J(H_i) \ar{d}{\delta^1_J}\\
\bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \ar{d}{\hat{q}} \ar{r}{\sum \psi_X} & \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}}\left[ \bigoplus\limits_{H_i<X} J(H_i)\right]\ar{d}{q} \ar{r}{\sum (\delta^1_J)_X} & \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} J_2(\mathcal{A}_X,\mathbf{m}_X)\ar{d}{\delta^2_J}\\
\ker(\Delta)\ar{r}{i} & \mbox{coker}(\iota)\cong S^\kappa \ar{r}{\Delta} & J_3(\mathcal{A},\mathbf{m})
\end{tikzcd}
\caption{Diagram for Proposition~\ref{prop:H2PresC}}\label{fig:H2PresC}
\end{figure}
\begin{prop}\label{prop:H2PresC}
Let $\mathcal{A}$ be an essential, irreducible, formal arrangement of rank $3$ which is not $TF_2$. Then
\[
H^2(\mathcal{J})\cong \mbox{coker}\left( \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X)\xrightarrow{\hat{q}} \ker(\Delta)\right),
\]
and $D(\mathcal{A},\mathbf{m})$ is free if and only if $\hat{q}$ is surjective. Moreover, $D(\mathcal{A},\mathbf{m})$ is free if and only if
\[
0\rightarrow D(\mathcal{A},\mathbf{m}) \rightarrow \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \xrightarrow{i\circ\hat{q}} S^\kappa
\]
is exact in the first two positions and $\mbox{coker}(i\circ\hat{q})=J_3(\mathcal{A},\mathbf{m})$; i.e.\ the above sequence is a free resolution for $J_3(\mathcal{A},\mathbf{m})$. Moreover, the left-most inclusion of $D(\mathcal{A},\mathbf{m})$ into $\bigoplus D(\mathcal{A}_X,\mathbf{m}_X)$ is given by the sum of natural restriction maps.
\end{prop}
\begin{proof}
The identification of $H^2(\mathcal{J})$ with $\mbox{coker}(\hat{q})$ follows from a long exact sequence in homology. More precisely, the rows of the diagram in Figure~\ref{fig:H2PresC} are all exact, so we may view the diagram as a short exact sequence of chain complexes whose terms are the columns. As we saw in the proof of Proposition~\ref{prop:H2Pres}, the map $\iota$ is injective, so the middle column is exact. Thus the long exact sequence in homology splits into three isomorphisms. The first yields $H^1(\mathcal{J})\cong \ker(\hat{q})$, which we may read as $D(\mathcal{A},\mathbf{m})\cong \ker(\hat{q})$ since $H^1(\mathcal{J})\cong D(\mathcal{A},\mathbf{m})$ for essential $\mathcal{A}$. The second yields $H^2(\mathcal{J})\cong \mbox{coker}(\hat{q})$, which is the first statement of the proposition. The third yields $H^3(\mathcal{J})=0$. Hence by Theorem~\ref{thm:Free}, $D(\mathcal{A},\mathbf{m})$ is free if and only if $H^2(\mathcal{J})=0$, if and only if $\mbox{coker}(\hat{q})=0$.
By the above, $\hat{q}$ is surjective if and only if $D(\mathcal{A},\mathbf{m})$ is free, and in that case $\mbox{im}(\hat{q})=\ker(\Delta)$. Combining this with the identification $D(\mathcal{A},\mathbf{m})\cong\ker(\hat{q})$, we conclude that $D(\mathcal{A},\mathbf{m})$ is free if and only if the sequence
\[
0\rightarrow D(\mathcal{A},\mathbf{m}) \rightarrow \bigoplus\limits_{X\in L_2^{\text{\tiny{trip}}}} D(\mathcal{A}_X,\mathbf{m}_X) \xrightarrow{i\circ \hat{q}} S^\kappa \xrightarrow{\Delta} J_3(\mathcal{A},\mathbf{m}) \rightarrow 0
\]
is exact. Chasing the diagram in Figure~\ref{fig:H2PresC}, and using that the map $D(\mathcal{A},\mathbf{m})\rightarrow \bigoplus J(H)$ is given by $\psi(\theta)=\sum \theta(\alpha_H)$, shows that the left-most inclusion is the sum of the natural restriction maps, so we are done.
\end{proof}
Given a matrix for $\Delta$ in the natural choice of basis, we can identify the columns of $\Delta$ with an (often non-minimal) set of generators for $J_3(\mathcal{A},\mathbf{m})$. Thus $\ker(\Delta)$ can be identified with the syzygies on this set of generators, which we denote by $\syz(\Delta)$. In this language, we have the following corollary.
\begin{cor}\label{cor:syzygeticCriterion}
$D(\mathcal{A},\mathbf{m})$ is free if and only if $\sum_{X\in L_2^{\text{\tiny{trip}}}} (i\circ\hat{q})(D(\mathcal{A}_X,\mathbf{m}_X))$ generates $\syz(\Delta)$.
\end{cor}
\begin{remark}
Proposition~\ref{prop:H2PresC} and Corollary~\ref{cor:syzygeticCriterion} generalize Theorem~3.16 and Corollary~6.3 of~\cite{A3MultiBraid}, where the corresponding statements are worked out for $A_3$ multi-braid arrangements.
\end{remark}
Now consider the case $\mathbf{m}\equiv 1$, which is the setting of Terao's question of whether freeness of $\mathcal{A}$ is combinatorial. In this case a special role is again played by the Euler derivations in $D(\mathcal{A}_X)$. In terms of Corollary~\ref{cor:syzygeticCriterion}, Euler derivations represent syzygies of degree one, which in turn express redundant generators of $J_3(\mathcal{A})$ (just like $J_2(\mathcal{A})$, $J_3(\mathcal{A})$ is generated in degree one). Write $\overline{D}(\mathcal{A})$ for $D(\mathcal{A})$ modulo the summand generated by the Euler derivation. Then, for $X\in L_2^{\text{\tiny{trip}}}$, $\overline{D}(\mathcal{A}_X)\cong S(-|\mathcal{A}_X|+1)$ as a graded $S$-module. Also write $e$ for the rank of the free module spanned by the image of the Euler derivations of $D(\mathcal{A}_X)$ inside of $S^\kappa$. Once we have pruned away the Euler derivations, the chain complex from Proposition~\ref{prop:H2PresC} (written as a graded complex of $S$-modules) becomes
\begin{equation}\label{eq:1}
0\rightarrow \overline{D}(\mathcal{A}) \rightarrow \bigoplus\limits_{X\inL_2^{\text{\tiny{trip}}}} S(-|\mathcal{A}_X|+1) \rightarrow S(-1)^{\kappa-e}\rightarrow J_3(\mathcal{A})\rightarrow 0,
\end{equation}
and the first two maps are now \textit{minimal} (matrices for these maps have no nonzero constant entries). Since Proposition~\ref{prop:FreeTF2Arrangements} shows that freeness of $TF_2$ arrangements is combinatorial, Terao's question for line arrangements reduces to:
\begin{ques}[Terao's question for line arrangements]\label{ques}
If $\mathcal{A}$ is a line arrangement in $\mathbb{P}^2$ which is not $TF_2$, is exactness of the chain complex~\eqref{eq:1} combinatorial?
\end{ques}
\begin{exm}[$A_3$ braid arrangement]
Let $\mathcal{A}=A_3$ be the braid arrangement defined by the forms $x,y,z,x-y,x-z,y-z$, so that $J_3(A_3)=\langle x,y,z,x-y,x-z,y-z \rangle$. The $A_3$ arrangement has four triple points. The image of the Euler derivations of $D(\mathcal{A}_X)$, $X\in L_2^{\text{\tiny{trip}}}$, inside of $S^\kappa=S^{12-6}=S^6$ has rank $3$, corresponding to the three redundant generators of $J_3(\mathcal{A})$. Pruning off the Euler derivations yields the chain complex
\[
0\rightarrow \overline{D}(\mathcal{A}) \rightarrow S(-2)^4 \rightarrow S(-1)^3 \rightarrow J_3(\mathcal{A})\rightarrow 0,
\]
which is exact since the Koszul syzygies among $x,y,z$ are obtained from the non-Euler derivations on $D(\mathcal{A}_X)$, $X\in L_2^{\text{\tiny{trip}}}$. This complex is not minimal, since $D(\mathcal{A})$ has a generator of degree $2$ which expresses a relation among the four non-Euler derivations around triple points. Once this degree-$2$ generator is pruned off we obtain the Koszul complex resolving $J_3(\mathcal{A})$,
\[
0\rightarrow S(-3) \rightarrow S(-2)^3\rightarrow S(-1)^3 \rightarrow J_3(\mathcal{A}) \rightarrow 0.
\]
As expected, $D(\mathcal{A})$ is free with exponents $1,2,3$ (the generators of degrees $1$ and $2$ were pruned off to produce the minimal resolution).
\end{exm}
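As a quick independent check of this example's combinatorics, the triple points and the value $\kappa=12-6=6$ can be recomputed directly from the defining forms. The following Python sketch (purely illustrative; it is not the Macaulay2 implementation mentioned in the concluding remarks) treats each hyperplane as a line in $\mathbb{P}^2$:

```python
from itertools import combinations
from fractions import Fraction

# Defining forms of the A_3 braid arrangement, viewed as lines in P^2,
# each recorded by its coefficient vector.
H = {'x': (1, 0, 0), 'y': (0, 1, 0), 'z': (0, 0, 1),
     'x-y': (1, -1, 0), 'x-z': (1, 0, -1), 'y-z': (0, 1, -1)}

def cross(a, b):
    # intersection point of two projective lines = cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def canon(p):
    # canonical representative: divide by the first nonzero coordinate
    for c in p:
        if c != 0:
            return tuple(Fraction(x, c) for x in p)

# group the lines by the point in which each pair meets
points = {}
for (n1, a), (n2, b) in combinations(H.items(), 2):
    points.setdefault(canon(cross(a, b)), set()).update({n1, n2})

triple = [S for S in points.values() if len(S) >= 3]
kappa = sum(len(S) for S in triple) - len(H)
print(len(triple), kappa)   # 4 triple points, kappa = 12 - 6 = 6
```

Grouping the lines by intersection point recovers the rank-two flats; the flats containing at least three lines are exactly the triple points used above.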
\section{Concluding remarks}\label{sec:CR}
We have implemented the construction of the chain complexes $\mathcal{J}^\bullet$, $\mathcal{S}^\bullet$, and $\mathcal{D}^\bullet$ in Macaulay2. Instructions for loading the functions and detailed examples may be found at \href{http://math.okstate.edu/people/mdipasq/}{http://math.okstate.edu/people/mdipasq/} under the Research tab.
So far, we have not studied the behavior of the chain complex $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ under deletion and restriction. In particular, we have the following question.
\begin{ques}\label{ques:1}
Is there a short exact sequence of complexes $0\rightarrow\mathcal{D}^\bullet(\mathcal{A}',\mathbf{m}')\rightarrow\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})\rightarrow \mathcal{D}^\bullet(\mathcal{A}'',\mathbf{m}^*)\rightarrow 0$ corresponding to a triple $(\mathcal{A}',\mathcal{A},\mathcal{A}'')$ of arrangements (in the sense of~\cite[Definition~1.14]{Arrangements}), where $\mathbf{m}^*$ is the Euler multiplicity~\cite{EulerMult}?
\end{ques}
The main difficulty here is to construct the maps between these chain complexes. Constructing such maps would provide a tight relationship to the addition-deletion theorem of~\cite{EulerMult}. We are also not aware of any relationships between the chain complex $\mathcal{D}^\bullet(\mathcal{A},\mathbf{m})$ and the characteristic polynomial of $(\mathcal{A},\mathbf{m})$ or a supersolvable filtration of $\mathcal{A}$.
\section{Introduction}\label{sec:introduction}
In this paper we continue studies of solar atmosphere oscillations based on
analyzing the temporal brightness modulation in image sequences taken with
the \emph{Transition Region and Coronal Explorer} (TRACE) in ultraviolet
passbands which sample the upper solar photosphere and low solar
chromosphere. We again exploit the absence of seeing in TRACE data (apart
from space-weather particle hits) to provide extensive Fourier diagnostics
for quiet-sun network and internetwork areas with excellent sampling
statistics.
In
\citet[%
henceforth \citealias{2001A&A...379.1052K}]
{2001A&A...379.1052K},
these techniques were used in a comprehensive overview of quiet-sun
brightness oscillation properties derived from TRACE image sequences in its
three ultraviolet passbands centered at $\lambda=1700$, $1600$, and
$1550~\mbox{\AA}$. In standard models of the solar atmosphere such as FALC of
\citet{1993ApJ...406..319F}
these passbands sample layers just below, at, and just above the
temperature minimum, respectively. The subsequent paper by
\citet{2003A&A...407..735R}
compared low-frequency ultraviolet brightness modulation at
these wavelengths to the underlying white-light patterns in quiet-sun
areas.
\citet{2003A&A...401..685M}
analyzed TRACE ultraviolet brightness modulation maps containing an active
region.
In this paper we return to the high-frequency aspects of ultraviolet
brightness modulation. The data used in
\citealias{2001A&A...379.1052K}
suffered from irregular timing intervals between successive images,
severely reducing the high-frequency information content. The sequences
used here have strict sampling regularity and are therefore better suited
to search for high-frequency oscillation signatures. We also employ a much
improved alignment method.
The obvious motivation for such searches is given by the long quest for
acoustic heating of outer stellar atmospheres started by
\citet{1948ZA.....25..161B}
and
\citet{1948ApJ...107....1S}.
It is concisely summarized by
\citet{2002A&A...395L..51W},
to whom we refer for further background.
\citet{2002A&A...395L..51W}
employed image sequences from the G\"ottingen Fabry-Perot
spectrometer at the German Vacuum Tower Telescope on Tenerife,
scanning the non-magnetic \FeI\ 5434~\AA\ line which samples layers
around $h=500~\mathrm{km}$ above the white-light surface.
They inferred the presence of sufficient power with
50\,--\,100~s periodicity (10\,--\,20~mHz in frequency) to compensate
the radiative losses of the chromosphere, with apparent spatial power
concentration above intergranular lanes. In this analysis we
use TRACE data to search for corroborative evidence in
ultraviolet brightness modulation from the same layers.
\section{Observations and data reduction}\label{sec:observations}
\figureone{bm_trace3_fig1}{fig:disp1600}{Corrections for residual image
displacements for the 1600-\AA\ sequence, in the solar $X$ (upper panels)
and $Y$ directions (lower panels), plotted against frame number. The
enlargements in the right-hand panels are for the short segments specified
by the bars at left.}
The TRACE mission is described by
\citet{1999SoPh..187..229H}.
We use ultraviolet image sequences downloaded from the TRACE
archive\footnote{\url{http://vestige.lmsal.com/TRACE}}. They were recorded
on June~1, 2003 at the request of M.~Carlsson (Oslo), who suggested strict
cadence regularity and low data compression in order to minimize
high-frequency artifacts, in particular those arising from timing
irregularities as analyzed in Sect.~5 of
\citealias{2001A&A...379.1052K}.
TRACE was programmed to obtain such image sequences in its 1600-\AA\ and
1700-\AA\ ultraviolet passbands from 8:14 to 18:34~UT. We selected
uninterrupted subsequences of 1120~images starting at 11:23:12~UT and
ending at 15:25:56~UT. They have strictly regular cadence at 13~s sampling
interval in both passbands. The corresponding Nyquist frequency is
$f_\mathrm{Ny}=38.46~\mathrm{mHz}$; the frequency resolution is $\Delta
f=34.34~\mu\mathrm{Hz}$. The images sample a quiet area of
$256\arcsec\times192\arcsec$ centered at $X=-2.78\arcsec$, $Y=13.46\arcsec$
near the center of the solar disk, corresponding to a field of view of
$512\times384$ square 0.5\arcsec\ pixels. The 1600-\AA\ and 1700-\AA\
images were alternately exposed for 1.724 and 4.872~s, respectively. The
mid-exposure delay between the closest pairs of 1600-\AA\ and 1700-\AA\
images is 5.574~s.
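For reference, the Nyquist frequency and the total sequence duration follow directly from the cadence (a trivial sketch):

```python
# Sampling parameters of the selected subsequences (both passbands).
dt = 13.0            # s, regular cadence
n_images = 1120      # images per passband

f_nyquist = 1.0 / (2.0 * dt)      # Hz
duration = n_images * dt          # s

print(round(f_nyquist * 1e3, 2))  # 38.46 (mHz)
print(round(duration / 60.0, 1))  # 242.7 (min)
```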
The image sequences were processed with the SolarSoft routine
\texttt{trace\_prep} described in the TRACE Analysis
Guide\footnote{\url{http://moat.nascom.nasa.gov/~bentley/guides/tag}}. It
corrects missing pixels (of which there were none in these data), replaces
saturated pixels (values above 4095), subtracts the dark field, and
corrects for the flat field. The dark and flat fields were recorded on
November~8, 2001 and March~11, 2003, respectively. The image brightness
was normalized by the exposure time.
In the course of this analysis it became clear that precise image
alignment, including corrections for differential solar rotation and for
spacecraft pointing jitter, is crucial to Fourier phase-difference analysis
at high frequencies, and that we should considerably improve on the method
used in
\citealias{2001A&A...379.1052K}.
In that paper, co-aligned subfields of the 1700-\AA\ sequences
were cross-aligned to the corresponding subfields in the 1600-\AA\
sequences. This procedure copies alignment errors from one sequence to the
other and so introduces a high-frequency phase-difference signal at the
retardation set by the timing offset between the exposures at the two
wavelengths. It emerges in Fig.~18 of
\citealias{2001A&A...379.1052K}
as a high-frequency drift of the spatially averaged phase-difference curves
towards the average offset caused by non-simultaneous sampling shown in the
center panel of Fig.~28 of
\citealias{2001A&A...379.1052K}.
The slower cadence of the October 14, 1998 data also analyzed in
\citealias{2001A&A...379.1052K}
caused a correspondingly larger offset (Fig.~26).
In the present analysis such erroneous cross-alignment signals are reduced
by significantly improving the alignment procedure. In order to minimize
the use of interpolation, we measured pointing displacements per image
through an elaborate procedure detailed below and then used these
displacements to resample the original images directly onto an aligned
grid.
We began by shifting every row of each image in
solar $X$ to correct solar rotation including its differential shear,
using the expression of
\citet{1990SoPh..130..295H}.
We then aligned each image of 40-image 1600-\AA\ sub-sequences to
the last one of the previous set, comparably to the procedure in
\citealias{2001A&A...379.1052K}.
Each 1700-\AA\ image was subsequently cross-aligned to the corresponding
coarsely aligned 1600-\AA\ image taken 5.574~s before. We then applied
spatial smoothing through $5\times5$ pixel boxcar averaging to every image,
and, merging the two sequences, applied temporal smoothing per pixel by an
eighteen-image boxcar average. Alignment of each individual image of the
de-rotated sequences to this smoothed average yields displacement vectors
per image. SolarSoft routine \texttt{tr\_get\_disp} was employed in all
alignment computations.
\figureone{bm_trace3_fig2}{fig:avghist}{Brightness histogram of one
80-minute average of the 1600-\AA\ sequence. The dotted lines define the
split between network (right), intermediate (middle), and internetwork
(left).}
\figuretwo{bm_trace3_fig3}{fig:masks}{\emph{First panel}: sample image from
the 1600-\AA\ sequence taken at 11:32:22~UT. The intensity was clipped and
scaled logarithmically in order to gain contrast in the internetwork.
\emph{Second panel}: 80-minute 1600-\AA\ average (12:44\,--\,14:05~UT)
using the same gray scale. \emph{Third panel}: the pixel masks applied to
all images in Sect.~\ref{sec:confusograms}. Dark gray, light gray, and
white respectively denote internetwork (28413~pixels), intermediate
(36914~pixels), and network (7430~pixels). Black pixels are discarded.
The box specifies the subfield selected for the power maps in
Fig.~\ref{fig:powermosaic}.}
Figure~\ref{fig:disp1600} shows the resulting displacement corrections for
the 1600-\AA\ sequence. These are the frame-by-frame residuals after the
initial correction for differential rotation. They primarily describe
pointing errors. Both $X$ and $Y$ components show oscillatory behavior
with about 1.5-pixel amplitude and approximately 100-minute periodicity
caused by the spacecraft's orbital motion. The enlargements at right show
ragged excursions with quarter-pixel amplitudes on short timescales which
reflect pointing jitter. Solar rotation causes a much larger additional
drift in the horizontal direction. The de-rotation correction ranges from
$-73.33~\mathrm{pixels}$ at the equator to $-72.83~\mathrm{pixels}$ at the
bottom of the field of view. Our use of whole-field alignment
automatically corrects for any departures from the initially applied
rotation law except for those in differential shear. The error estimates
of
\citet{1990SoPh..130..295H}
suggest that the remaining shear errors are within $0.004~\mathrm{pixel}$
over our range in solar $Y$ and time.
In the final step of the alignment procedure, the sophisticated algorithm
described by
\citet{2004SoPh..219....3D}
is used to re-sample the original images onto a
$432\times384~\mathrm{pixel}$ grid corrected for differential solar
rotation, for spacecraft orbital motion and jitter, and for the re-mapping
from planar to spherical coordinates. The area of incomplete sampling due
to solar rotation is discarded, as is a vertical strip at the left-hand
edge of the field of view which erroneously appears bright in one 1700-\AA\
image. The resulting images consist of $432\times384$
$0.348~\mathrm{Mm}$-square pixels.
For part of our analysis, i.e., the spatially averaged Fourier spectra
presented in Fig.~\ref{fig:triconfuse} in Sect.~\ref{sec:confusograms}, we
followed the procedure of
\citealias{2001A&A...379.1052K}
to divide the field of view into internetwork, intermediate, and network
areas through classification of the time-averaged 1600-\AA\ brightness per
pixel. Temporal averaging increases the contrast between the rapidly
changing internetwork brightness and the more stable network emission. The
1600-\AA\ sequence was split into three parts of approximately 80~minutes
duration. Figure~\ref{fig:avghist} displays the brightness distribution
after averaging over one 80-minute part. It has a Gaussian peak and an
extended high-brightness tail. A pixel is classified as internetwork if in
all three 80-minute averages its brightness remains below the left-hand
dotted line, which is chosen near the three peaks. Pixels with brightness
above the right-hand dotted cutoff in all three averages are classified as
network. Pixels that fall between the two lines in all three averages are
classified as intermediate. Pixels that change category between averages
are discarded; owing to the long sequence duration, these amount to 56\% of
all pixels. This large rejection fraction avoids any mixing of
internetwork, intermediate, and network behaviour.
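The classification rule can be summarized as follows (an illustrative sketch; \texttt{lo} and \texttt{hi} stand for the two dotted cutoff brightnesses in Fig.~\ref{fig:avghist}, and the treatment of values falling exactly on a cutoff is an assumption):

```python
def classify(averages, lo, hi):
    """Classify one pixel from its three 80-minute mean brightnesses.

    Returns 'internetwork', 'network', 'intermediate', or None when the
    pixel changes category between averages and is discarded.
    """
    if all(a < lo for a in averages):
        return 'internetwork'
    if all(a > hi for a in averages):
        return 'network'
    if all(lo <= a <= hi for a in averages):
        return 'intermediate'
    return None  # category changes between the three averages

print(classify([1.0, 1.2, 0.9], 2.0, 5.0))   # internetwork
print(classify([1.0, 6.0, 3.0], 2.0, 5.0))   # None -> discarded
```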
Particle hits were not corrected by interpolation but were removed on the
basis of their high-frequency signature for the analysis in
Sect.~\ref{sec:confusograms}. Their single-image appearance produces
anomalously strong high-frequency power. We applied a spatial mask to
remove all pixels as well as their immediate neighbors that show Fourier
power in excess of three times the average in the highest 50 frequency bins
(36.7\,--\,38.5~mHz).
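The particle-hit mask can be sketched as follows (illustrative only; \texttt{hf\_power} is assumed to hold, per pixel, the power already averaged over the 50 highest frequency bins):

```python
def hit_mask(hf_power, factor=3.0):
    """Return a boolean keep-mask: False for any pixel whose
    high-frequency power exceeds `factor` times the field average,
    and for its immediate (8-connected) neighbours."""
    ny, nx = len(hf_power), len(hf_power[0])
    mean = sum(map(sum, hf_power)) / (nx * ny)
    keep = [[True] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            if hf_power[y][x] > factor * mean:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if 0 <= y + dy < ny and 0 <= x + dx < nx:
                            keep[y + dy][x + dx] = False
    return keep

# a single particle hit in one corner removes a 2x2 patch there
grid = [[100, 1, 1, 1]] + [[1, 1, 1, 1] for _ in range(3)]
mask = hit_mask(grid)
print(sum(row.count(False) for row in mask))   # 4
```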
Figure~\ref{fig:masks} presents a sample 1600-\AA\ image, one of the three
80-minute averages, and the three masks.
\section{Analysis and results}\label{sec:analysis}
Fourier power, coherence and cross-power spectra were computed per pixel in
the 1600-\AA\ and 1700-\AA\ image sequences over their full 243-minute
duration following the recipes in Sect.~3 of
\citealias{2001A&A...379.1052K}.
We follow
\citealias{2001A&A...379.1052K}
also in the presentation of the resulting power maps, spatially-averaged
temporal Fourier spectra, and two-dimensional ($k_\mathrm{h},f$)\ diagrams. Here, the
emphasis is on high-frequency behavior and its significance.
\subsection{Spatially resolved Fourier power maps} \label{sec:powermaps}
As in
\citealias{2001A&A...379.1052K}
we distinguish three different normalization choices in displaying Fourier
power per pixel as spatially resolved maps, namely plotting the
non-normalized oscillatory energy itself (``power''),
\begin{equation}
P_E(x,y,f)=|\mathcal{I}(x,y,f)|^2\,,
\end{equation}
the fractional modulation signal obtained by dividing the oscillatory
energy by the zero-frequency power (``modulation''),
\begin{equation}
P_f(x,y,f)=\frac{|\mathcal{I}(x,y,f)|^2}{|\mathcal{I}(x,y,0)|^2}\,,
\end{equation}
and ``Leahy'' normalization obtained by dividing the energy by the
zero-frequency amplitude,
\begin{equation}
P_\mathrm{L}(x,y,f)=\frac{|\mathcal{I}(x,y,f)|^2}{|\mathcal{I}(x,y,0)|}\,,
\end{equation}
where $x$ and $y$ are spatial coordinates, $f$ is the temporal frequency,
and $\mathcal{I}(x,y,f)$ denotes the Fourier transform of the intensity
measured by TRACE at location $(x,y)$ at frequency $f$. Leahy
normalization is used in the literature to estimate power-peak significance
(e.g.,
\citealp{1983ApJ...266..160L},
\citealp{1999A&A...347..335D})
but was not used in
\citealias{2001A&A...379.1052K}.
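Per pixel, the three displays differ only in the divisor applied to the raw Fourier power. A self-contained sketch using a plain discrete Fourier transform (illustrative; the actual analysis of course uses an FFT over the full sequences):

```python
import cmath
import math

def power_spectra(series):
    """Raw power P_E, modulation P_f, and Leahy power P_L for one
    pixel's intensity time series, per temporal frequency bin f."""
    n = len(series)
    F = [sum(series[t] * cmath.exp(-2j * math.pi * f * t / n)
             for t in range(n)) for f in range(n)]
    P_E = [abs(z) ** 2 for z in F]          # oscillatory energy
    P_f = [p / P_E[0] for p in P_E]         # divide by |I(0)|^2
    P_L = [p / abs(F[0]) for p in P_E]      # divide by |I(0)|
    return P_E, P_f, P_L
```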
We here add 95\% significance estimation following
\citet{1975ApJS...29..285G}
and first compare this to Fisher's method of randomization described by,
e.g.,
\citet{Bradley1968}
and
\citet{1985AJ.....90.2317L}
and applied to solar data by, e.g.,
\citet{2001A&A...368.1095O}
and
\citet{2003A&A...401..685M}.
Its assumption is that there is no signal at any frequency, so that the
temporal order in which the data were taken becomes irrelevant. Comparison
of the actual power spectrum to the spectra of temporal permutations of the
data sequence then yields a significance estimate. The test is used
iteratively, progressively deleting significant peaks until no new peaks
are found. It puts no constraint on the noise power distribution at any
given frequency, but the assumption that all samples are temporally
uncorrelated implies frequency-independent white noise. For large data
sets it becomes impractical to repeatedly compute all possible
permutations. Actual tests are therefore usually limited to a few hundred
permutations, but even then remain computationally expensive.
The much simpler significance estimation of
\citet{1975ApJS...29..285G}
assumes that at any frequency the real and imaginary parts of the Fourier
power have independent normal distributions. It requires explicit
specification of the noise power as a function of frequency, i.e., the
noise is not assumed to be white.
\figureone{bm_trace3_fig4}{fig:randplot}{Comparison of the randomization
test to significance estimation following
\citet{1975ApJS...29..285G}.
\emph{Upper panel}: network pixel. \emph{Lower panel}: internetwork pixel.
The ragged curves are the temporal Fourier power at 1600~\AA\ using TRACE
data units divided by the exposure time, on logarithmic scales. In each
panel, the top row of tick marks identifies all significant peaks according
to the randomization test. The second and third rows identify significant
peaks at the 99.9999\% and 95\% significance levels using Groth's test
assuming white noise and absence of signal above $f=24~\mathrm{mHz}$. The
dotted lines show the corresponding cutoff levels.}
We compare the randomization test with Groth's test in
Fig.~\ref{fig:randplot} for an internetwork and a network pixel, adopting
95\% confidence levels in both tests. In the randomization test, a power
peak that is above the maximum power of the randomized data in more than
95\% of 500 permutations is considered statistically significant. Such
peaks are subsequently removed in the iterative re-application of this
procedure until no more significant peaks are found. For Groth's test we
decided from visual inspection to assume that the power spectra display
white noise above $f=24~\mathrm{mHz}$. The corresponding 95\% significance
cutoff lies at 2.996~times this noise level.
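Both cutoff factors quoted here follow from the exponential distribution of the power of complex Gaussian noise: the probability that the power in one bin exceeds $c$ times the mean noise level is $e^{-c}$. A quick check:

```python
import math

# 95% confidence: probability 0.05 of a chance peak in one bin
c95 = -math.log(0.05)
print(round(c95, 3))        # 2.996, the 95% cutoff factor

# 99.9999% confidence, the level that matches the randomization test
c999999 = -math.log(1e-6)
print(round(c999999, 1))    # 13.8, i.e. about fourteen times the noise
```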
The top and bottom rows of tick marks in Fig.~\ref{fig:randplot} specify
the positions of the peaks that are estimated to be significant by the two
methods. It is obvious that the randomization test is far more rigorous
than Groth's test, as pointed out by
\citet{2003A&A...401..685M}.
The middle rows of ticks result when the upper dotted line is used as
Groth-test cutoff level, at fourteen times the noise level corresponding to
99.9999\% confidence. It closely matches the peak-finding by the
randomization test. Thus, for these data a stringent Groth test may
replace the randomization test at much smaller computational cost. This is
likely to hold for other data with white noise.
The network pixel in the upper panel of Fig.~\ref{fig:randplot} has larger
low-frequency signal than the internetwork pixel in the lower panel, a
different power hump around 5-min periodicity, and higher high-frequency
noise but rather similar peak survival above the lenient 95\% Groth cutoff
estimate.
\figuretwo{bm_trace3_fig5}{fig:powermosaic}{Spatially resolved power
maps using different methods of normalization for the subfield shown in
Fig.~\ref{fig:masks}. The grayscale displays the logarithm of the temporal
Fourier power, clipped to improve contrast. \emph{Columns}: different
frequency bands as specified at the top. \emph{Rows}: 1600-\AA\ and
1700-\AA\ passbands, with different power normalization as specified in the
first column.}
Figure~\ref{fig:powermosaic} expands such comparison of internetwork versus
network by displaying spatial power maps for the small but illustrative
subfield specified by the rectangle in Fig.~\ref{fig:masks} for both the
1600-\AA\ and 1700-\AA\ passbands. The subfield is shown in different
temporal frequency bands, with the three different normalizations
(power, Leahy, and modulation, respectively), and finally also
without normalization but with all pixels having power below the 95\% Groth
cutoff made black.
The first column shows low-frequency power and brings out the stable
nature of the bright network. The modulation maps behave noisily here
because this frequency band lies close to the zero-frequency power by
which they are normalized.
The second-column frequency range of 2.6\,--\,3.6~mHz corresponds to
periodicities around 5~minutes. The network appears power-bright in the
unnormalized maps, about equal to the internetwork in the
amplitude-normalized Leahy maps, and power-dark in the modulation maps.
Thus, the choice of normalization affects the apparent relative dominance
of network and internetwork oscillations, as discussed extensively in
\citealias{2001A&A...379.1052K}.
Note that in all representations a power-dark moat appears around the
network.
The 5\,--\,7~mHz maps describe the chromospheric
three-minute oscillation which pervades internetwork areas
(e.g., \citealp{Rutten1995b}).
They indeed show the network power-dark in all representations. There are
irregular power-bright ``aureole'' patches near network (cf.
\citealias{2001A&A...379.1052K}).
The two high-frequency columns on which we concentrate here illustrate the
care that must be taken in interpreting such power maps. In the rightmost
column (28\,--\,32~mHz) the network stands out very brightly in the
unnormalized power maps, inviting a claim that the magnetic elements making
up the network display high-frequency wave heating. On the other hand, the
network appears power-dark in the modulation maps, inviting a claim that
high-frequency oscillations are suppressed in magnetic elements. However,
the close spatial correspondence of both these bright and dark features
with the bright network in the unnormalized maps in the first column
suggests strongly that they are simply due to the larger overall network
brightness. A similar power-contrast flip is seen in the 12\,--\,16~mHz
maps for the 1700-\AA\ passband, but the unnormalized 1600-\AA\ map for
these frequencies appears rather featureless. The latter copies directly
into the Groth map, but with considerable pixel deletion wherever the power
averaged over the frequency range drops below the cutoff. In the rightmost
column the Groth maps accept only a minor fraction of the pixels as
significant, both for the network and the internetwork. A more strict
criterion, and certainly the 95\% iterative randomization test, would
reject all. The patterns seen in the other 28\,--\,32~mHz maps thus are
most likely artifacts caused by sources of high-frequency errors with some
sensitivity to the low-frequency power. Note that Leahy normalization
diminishes the apparent structure for 1700~\AA\ but turns it power-dark at
1600~\AA.
The bright specks in the 28\,--\,32~mHz maps are due to particle hits.
They produce high-frequency signal through their instantaneous appearance.
The pattern of horizontal stripes results from TRACE's JPEG data
compression (K.~Muglach, private communication). It appears as a grid
pattern with 8-pixel mesh size in comparable power maps computed from the
original non-aligned image sequences. The compensation for solar rotation
smears out the vertical grid components, leaving only the horizontal ones.
\subsection{Spatially averaged Fourier power, phase difference and
coherence}\label{sec:confusograms}
We now turn to temporal Fourier analysis with spatial averaging over
the different pixel categories defined by the third panel of
Fig.~\ref{fig:masks}. The averaging is performed on the
Fourier measurements per pixel. It reduces the noise in these
measurements and therefore improves the detection of relatively small
modulation signals. Following
\citealias{2001A&A...379.1052K},
the power and coherence are averaged directly over all relevant pixels.
In the present analysis we compute coherence per pixel using frequency
smoothing over 9 bins rather than 5.
The phase differences are again averaged with cross-power weighting as
introduced by
\citet{1979ApJ...231..570L},
i.e., the spatial average over the phase differences of all pixels
transmitted by the mask per frequency bin is defined as the angle of the
vector sum of the cross-powers of all contributing pixels with reference to
the real axis. The advantage of such vector averaging is that it makes
signals stand out even in the presence of much larger noise. For pure
noise the vector mean does not go to zero or some other definite value but
fluctuates randomly over the full $-180$ to $+180$~degree range between
adjacent frequency bins. A small signal, much smaller than the noise, may
therefore emerge as a systematic pattern across multiple bins.
\figureone{bm_trace3_fig6}{fig:cpnoise}{Illustration of vector-averaging
phase differences for 30\,000~pixels. \emph{First panel}: distribution of
the cross-power vector sum for pure Gaussian noise in the complex plane.
\emph{Second panel}: same as the first panel, but with a signal with
amplitude of only 3\% of the rms noise with 0~degree phase difference
added. The vector summation of the 30\,000 samples shifts the scattercloud
significantly to the right. \emph{Third panel}: corresponding
phase-difference distributions for pure noise (dashed, flat) and with the
signal added (solid, peaked). The latter peaks at 18.4 at zero phase
difference, with a 20-degree half-width.}
This is illustrated by Fig.~\ref{fig:cpnoise} which displays
simulation results for vector-averaged cross-power
distributions of pure noise (first panel) and of pure noise with a
much smaller superimposed signal with fixed phase difference (second
panel). In the latter case, the vector addition over 30\,000 pixels
with small but systematic signal shifts the much wider scattercloud to
a location well separated from the origin, making the phase-difference
distribution in the third panel strongly peaked.
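This simulation is easy to reproduce. In the sketch below (assumptions: unit-rms complex Gaussian noise for the per-pixel cross power, and a coherent component of amplitude $0.03$ at zero phase difference, as in the figure caption), the coherent part of the vector sum has length $NA=900$, which dominates the $\sim\!\sqrt{N}\approx 173$ random-walk length of the noise:

```python
import cmath
import math
import random

random.seed(42)
N = 30_000        # pixels contributing to one frequency bin
A = 0.03          # coherent signal, 3 percent of the rms noise, 0 deg phase

def cross_power_sum(signal):
    # per-pixel cross power: unit-rms complex Gaussian noise plus an
    # optional coherent component along the positive real axis
    total = 0j
    for _ in range(N):
        total += complex(random.gauss(0.0, math.sqrt(0.5)),
                         random.gauss(0.0, math.sqrt(0.5))) + signal
    return total

# with the signal included, the vector-averaged phase difference
# locks close to zero degrees despite the much larger per-pixel noise
phase = math.degrees(cmath.phase(cross_power_sum(A)))
print(abs(phase) < 20.0)
```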
\figuretwo{bm_trace3_fig7}{fig:triconfuse}{Temporal Fourier spectra,
spatially averaged over internetwork (left), intermediate
(middle), and network (right), plotted against
frequency up to the Nyquist limit with the corresponding periodicities
shown along the top. The format corresponds to Figs.~18\,--\,19 of
\citealias{2001A&A...379.1052K},
adding dotted lines indicating the sampling time delays and omitting
1$\sigma$ rms estimates for power and coherency to avoid clutter.
\emph{Upper panels}: phase-difference spectra. \emph{Lower panels}:
coherence (upper curves) and power spectra (solid for 1600~\AA, dashed for
1700~\AA). The random-noise estimate for the coherence is $C=0.33$. The
power spectra are on linear scales and are all scaled by the same factor.}
Figure~\ref{fig:triconfuse} presents results from the new TRACE sequences
in the form of power, coherence, and phase-difference spectra with spatial
averaging separated between the internetwork, intermediate, and network
areas. The format is similar to Figs.~18\,--\,22 of
\citealias{2001A&A...379.1052K}
but adds two dotted lines in the phase-difference panels. These are the
phase shifts associated with the temporal sampling delay due to the
non-simultaneous exposures in the two passbands. We have aligned the two
sequences to match the closest pair combinations, corresponding to the
upper dotted lines. All of our phase-difference evaluations are corrected
for this sampling offset, which means that signals that are intrinsically
in phase in the two passbands should indeed end up along the horizontal
$\Delta\phi=0$ axis. Measurements that end up on a dotted line imply
modulation with phase delay exactly matching the corresponding sampling
delay. The grayscaled scattercloud represents individual pixels. In the
case of pure noise the $1\sigma$ rms estimates cover 68\% of the full
figure height around a randomly fluctuating mean.
The results in Fig.~\ref{fig:triconfuse} are similar to those in Fig.~18 of
\citealias{2001A&A...379.1052K}
except for the high-frequency phase differences of interest here. The
present results are more reliable thanks to the regular sampling cadences,
lower data compression, and better image alignment.
The internetwork phase differences reach a wide maximum at
$f\approx7~\mathrm{mHz}$ and then remain well defined at positive values up
to the Nyquist frequency, but with increasing noise above 20~mHz. There is
no drop to negative values as in
\citealias{2001A&A...379.1052K},
which we now attribute to the cross-alignment used there as discussed in
Sect.~\ref{sec:observations} above. However, the present results also show
a drift to the phase difference associated with the timing delay at high
frequencies. The internetwork power spectra show acoustic humps around
4~minutes and become negligible above 12~mHz. The coherence also peaks
around 4~mHz and drops to the 9-bin noise level near 20~mHz.
The network phase differences in the third column of
Fig.~\ref{fig:triconfuse} are much noisier due to the far smaller number of
pixels. Nevertheless, they show a narrow peak of increased phase
difference and reduced coherence around three-minute periodicity which is
not present in Fig.~18 of
\citealias{2001A&A...379.1052K}
and which we deem significant. At high frequencies they become more
erratic and shift to the timing correction line, which points to a
systematic error.
The intermediate-class pixels in the center column of
Fig.~\ref{fig:triconfuse} produce primarily internetwork-like behavior.
\subsection{Two-dimensional Fourier power and phase difference}
\label{sec:komega}
Figure~\ref{fig:komerged} presents two-dimensional Fourier power and phase
difference spectra in the form of ($k_\mathrm{h},f$)\ diagrams. They mix the network,
internetwork and intermediate areas. Particle hits were not removed
because their contribution to the noise is small except at high spatial and
temporal frequencies where the diagrams are noisy anyhow.
The power and phase differences are averaged over rings of constant
$k_\mathrm{h}$, with $k_\mathrm{h}^2=k_\mathrm{x}^2+k_\mathrm{y}^2$,
assuming absence of preferred horizontal propagation directions. The
number of samples per ring increases with $k_\mathrm{h}$ up to the Nyquist
frequency per axis $k_\mathrm{x,Ny} =
k_\mathrm{y,Ny}=9.0~\mathrm{Mm}^{-1}$. Beyond this value, $k_\mathrm{h}$
can still be computed but with fewer samples and increasing loss of
isotropy in each successive bin, up to
$k_\mathrm{h}=\sqrt{2}\,k_\mathrm{x,Ny}=12.8~\mathrm{Mm}^{-1}$ which
samples only oblique propagation.
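The ring averaging itself can be sketched as follows (an illustrative numpy fragment with an arbitrary grid and random stand-in power, not the actual TRACE reduction):

```python
import numpy as np

# Azimuthal averaging over rings of constant k_h = sqrt(k_x^2 + k_y^2).
# Illustrative grid and random stand-in power, not the actual TRACE data.
nx = ny = 64
dx = 0.35                                  # Mm per pixel (hypothetical)
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)  # Mm^-1
ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
kh = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))

power = np.random.default_rng(0).random((nx, ny))   # stand-in for |FFT|^2

k_ny = np.pi / dx                          # per-axis Nyquist wavenumber
edges = np.linspace(0.0, np.sqrt(2) * k_ny * (1 + 1e-9), 33)
ring = np.digitize(kh.ravel(), edges) - 1  # ring index 0..31

counts = np.bincount(ring, minlength=32)
ring_mean = (np.bincount(ring, weights=power.ravel(), minlength=32)
             / np.maximum(counts, 1))

# Sample counts per ring grow with k_h up to the per-axis Nyquist,
# then drop toward sqrt(2)*k_ny where only oblique directions survive.
print(counts[5], counts[22], counts[-1])
```

The count per ring directly shows why the bins beyond the per-axis Nyquist wavenumber become noisier and progressively less isotropic.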
The left-hand panel of Fig.~\ref{fig:komerged} shows the ($k_\mathrm{h},f$)\ diagram for
1600-\AA\ power. The acoustic $p$-mode ridges and pseudo-ridges above the
Lamb line were extensively discussed in
\citealias{2001A&A...379.1052K}.
There is no particular structure evident in the high-frequency regime of
interest here. At low frequencies there is a ridge of enhanced power at
high spatial wavenumbers, approximately corresponding to
$f=(1/2\pi)\,v\,k_\mathrm{h}$ with $v \approx 2~\mathrm{km\,s}^{-1}$, which is caused by the compensation for
solar rotation. Features that are fixed to the CCD camera, such as the
results of an imperfect flat field, or ``hot'' pixels, move apparently with
this speed against the direction of solar rotation and produce power.
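A quick check of where this rotation ridge lies: with $v$ in $\mathrm{km\,s^{-1}}$ and $k_\mathrm{h}$ in $\mathrm{Mm^{-1}}$, $f = v\,k_\mathrm{h}/2\pi$ comes out numerically in mHz (the $2~\mathrm{km\,s^{-1}}$ value is from the text; the sampled wavenumbers are illustrative):

```python
import math

# Apparent temporal frequency of CCD-fixed features dragged across the
# solar scene by the rotation compensation: f = v * k_h / (2*pi).
# With v in km/s and k_h in Mm^-1 the result is numerically in mHz.
v = 2.0                                   # km/s
for k_h in (1.0, 5.0, 9.0):               # Mm^-1
    f_mhz = v * k_h / (2 * math.pi)
    print(f"k_h = {k_h:3.0f} Mm^-1  ->  f = {f_mhz:.2f} mHz")
```

Even at the per-axis Nyquist wavenumber the ridge stays below 3~mHz, consistent with its appearance only at low frequencies.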
\figurethree{bm_trace3_fig8}{fig:komerged}{\emph{Left}: power in the
1600-\AA\ image sequence plotted as function of horizontal wave number
$k_\mathrm{h}$ and temporal frequency $f$. The logarithmic grayscale is
clipped to show the ridges around $1~\mathrm{Mm}^{-1}$ and
$5~\mathrm{mHz}$. The slanted line is the Lamb line
$f=(1/2\pi)\,c_\mathrm{s}k_\mathrm{h}$ with
$c_\mathrm{s}=7~\mathrm{km\,s}^{-1}$. \emph{Right}: corresponding phase
differences between the 1700-\AA\ and 1600-\AA\ image sequences. To avoid
figure clutter, contours are only shown if they lie below the
dashed curve or enclose a large area. The grayscale is clipped at $-9$
and $45~\mathrm{degrees}$ to increase contrast. The white blob peaking at
$k_\mathrm{h}=9~\mathrm{Mm}^{-1}$ and $f=5~\mathrm{mHz}$ reaches 110-degree
difference. The pepper-and-salt regions reflect noise.}
The right-hand panel of Fig.~\ref{fig:komerged} displays the corresponding
($k_\mathrm{h},f$)\ diagram for phase difference between the 1700-\AA\ and 1600-\AA\
sequences. The acoustic ridges stand out through larger phase difference,
as discussed in
\citealias{2001A&A...379.1052K}.
The wedge of negative phase difference at low frequencies and wave numbers
was attributed to atmospheric gravity waves by
\citet{2003A&A...407..735R}.
The effects of solar rotation are again visible as a ridge of slightly
increased phase difference. We attribute this increase to the systematic
$(-0.285, 0.273)$-pixel image offset between the two passbands before
alignment, which causes apparently traveling features fixed to the CCD to
appear at a given solar location with some time delay in the two passbands.
At the highest temporal frequencies of interest here, noise is easily
identified as pepper-and-salt patterning where the phase differences jump
widely from one bin to the next. A large patch of smooth variation extends
up to about 20~mHz in frequency and $4~\mathrm{Mm}^{-1}$ in wavenumber.
This patch contributes most to the definite phase behavior in
Fig.~\ref{fig:triconfuse}, which is therefore set by these spatial scales.
The conspicuous white blob of large positive phase difference located at
$0.75~\mathrm{Mm}$ wavelength and 3-minute periodicity, with an extended
tail upward, is enigmatic. The left-hand diagram suggests enhanced power at
this location. It seems likely that the peak in the network panel of
Fig.~\ref{fig:triconfuse} corresponds to this blob, and that therefore the
source should be sought in the network. It is very tempting to attribute
it to solar three-minute waves with small horizontal extent in magnetic
elements. Its exceedingly large value (up to 110~degree difference at its
center), the clear reduction in coherence, and the lack of such a blob in
comparable phase-difference diagrams sampling the \mbox{Ca\,\sc{ii}\,\,H}\ line core and
inner wing from observations made with the Dutch Open Telescope
(unpublished analysis analogous to Fig.~7 of
\cite{2004A&A...416..333R}),
would then suggest phenomena in the transition region that
leave a signature in these data through the \CIV\ doublet at $\lambda
= 1548~\mbox{\AA}$ and $1550~\mbox{\AA}$
(cf.\ \cite{1998SoPh..183...29H}).
However, the blob lies at the spatial Nyquist frequency per
horizontal axis, and its shape varies with changes in the image alignment
procedure. We reluctantly conclude that the blob is likely a TRACE
artifact, either of instrumental origin or introduced by the data
processing, or a combination of the two.
\section{Discussion}\label{sec:discussion}
We find intriguing high-frequency behavior in all our Fourier displays, but
at the same time also find reasons to disbelieve these signatures above
20~mHz or even lower frequencies. The pixel-by-pixel power maps in
Fig.~\ref{fig:powermosaic} show a disconcerting sensitivity of the
contrast to the type of normalization above 10~mHz. The spatial
averaging in Figs.~\ref{fig:triconfuse} and~\ref{fig:komerged},
respectively over mask-selected pixel types and annuli, improves the
significance of the phase-difference measurement, but the high-frequency
behavior in Fig.~\ref{fig:triconfuse} remains puzzling in its trends
towards the timing-delay lines and in the absence of purely random
behavior even at the highest frequencies. The prominent white blob in the phase-difference
panel of Fig.~\ref{fig:komerged} is presumably an artifact.
The phase-difference averaging with cross-power weighting over the
different pixel categories employed in Sect.~\ref{sec:confusograms} is the
most sensitive method to identify weak oscillation signatures in the
presence of noise. Each phase-difference diagram in
Fig.~\ref{fig:triconfuse} indicates systematic non-random behavior out to
frequencies far beyond the extent of measurable power or even of measurable
coherence. This was already the case in Figs.~18\,--\,19 of
\citealias{2001A&A...379.1052K}
and also in the similar diagrams from groundbased \mbox{Ca\,\sc{ii}\,\,H}\ spectrometry in
Figs.~20\,--\,22 of
\citealias{2001A&A...379.1052K}.
It is attractive to believe that the cross-power weighting indeed enhances
the sensitivity of the phase-difference measurement to very small signals
otherwise drowned in noise out to well above the coherence limit of at most
20~mHz, but it is alarming that even at the highest frequencies the phase
differences do not show the expected randomness, and that in all three
panels of Fig.~\ref{fig:triconfuse} they seem to favor the instrumental
timing correction. It is likely that residual image-alignment errors are
the cause of this anomalous behavior.
It is well known that the increasing lack of response due to wide
contribution functions
(e.g.,
\cite{1975SoPh...43..289B},
\cite{1976A&A....51..189D},
\cite{1980A&A....84...99S},
\cite{1980A&A....91..251D})
hampers the detection of high-frequency signals. This was recently
elaborated in the TRACE context by
\citet{Fossum2004}.
In addition, we have learned from M.~Carlsson (private communication) that
simulations of acoustic waves propagating upward in the solar atmosphere,
as in the well-known \CaII\ \mbox{H$_{2V}$}\ grain simulation of
\citet{1997ApJ...481..500C},
meet unexpected computational problems at low signal-to-noise and high
frequencies when subjected to a computational 1700\,--\,1600-\AA\
phase-difference analysis that emulates the observational analysis
presented here.
On the other hand, we have reproduced our phase-difference results
in tests using double precision computation. Very similar non-random
positive phase-difference behavior also appears up to 20~mHz in Fig.~20 of
\citealias{2001A&A...379.1052K},
based on the \mbox{Ca\,\sc{ii}\,\,H}\ slit spectrometry of
\citet{1993ApJ...414..345L}
and measured from \mbox{Ca\,\sc{ii}\,\,H}\ wing intensities and \FeI\ blend Doppler shifts
formed at heights lower than and similar to those of the ultraviolet
continua sampled by TRACE.
The comparable signature in \IminI\ and \VminV\ diagnostics with
negative \VminI\ lag shown there is in agreement with acoustic waves.
The steep \VminV\ signature of upward propagation in
internetwork areas present in the lower-left panel of Fig.~21 of
\citealias{2001A&A...379.1052K}
also seems significant.
In summary, the coherence spectra in Fig.~\ref{fig:triconfuse}, the close
agreement of the phase differences in that figure with those from \mbox{Ca\,\sc{ii}\,\,H}\
spectrometry in
\citealias{2001A&A...379.1052K},
and the smoothness of the corresponding gray area in
Fig.~\ref{fig:komerged}, all taken together lead us to believe that the
phase-difference signals derived from these new TRACE sequences have a
solar origin up to 20~mHz at least in the internetwork, and are to be
attributed to acoustic waves.
This conclusion supports the detection of high-frequency acoustic waves by
\citet{2002A&A...395L..51W}
as significant Doppler-shift power in the 10\,--\,20~mHz frequency band from
differential \FeI\ measurements addressing similar atmospheric heights.
The drop of power with frequency in our Figs.~\ref{fig:randplot},
\ref{fig:triconfuse} and \ref{fig:komerged} suggests that their detection
is dominated by the lower frequencies in this band. Our results indicate
wave presence also at the higher frequencies.
The ultraviolet continua used here suffer from strong scattering while the
TRACE filter bandwidths are wide and overlap considerably. Numerical
simulations such as those presently underway at Oslo may explain how and why the
phase differences in Figs.~\ref{fig:triconfuse} and \ref{fig:komerged}
level out at positive values. In our opinion, comparison with detailed
numerical simulations is also required to substantiate any claim that
acoustic waves in the 10\,--\,20~mHz regime compensate fully for the
radiative losses of the chromosphere.
\section{Conclusion}\label{sec:conclusion}
New ultraviolet image sequences from TRACE give evidence of brightness
modulation up to 20~mHz in quiet-sun internetwork. We interpret this
signal as a signature of acoustic waves. It is similarly present in
\mbox{Ca\,\sc{ii}\,\,H}\ and \FeI\ \IminI\ and \VminV\ phase-difference spectra in Fig.~20
of
\citealias{2001A&A...379.1052K}
and it supports the detection of acoustic wave power in the
10\,--\,20~mHz frequency band from \FeI\ Doppler-shift measurements by
\citet{2002A&A...395L..51W}.
The evidence for modulation at higher frequencies remains
inconclusive.
TRACE-like ultraviolet imaging will be achieved with the Atmospheric
Imaging Assembly on NASA's Solar Dynamics Observatory, but it is not yet
clear whether its hardware and operation will permit better high-frequency
modulation measurement than with TRACE. New ground-based telescope
technology, in particular large aperture combined with adaptive optics,
will provide accurate Doppler shifts of the same layers from integral-field
spectroscopy at high cadence and low noise, using appropriate spectral
lines in the optical. Numerical simulations may contribute quantification
of the corresponding energy budgets.
\acknowledgements We thank M.~Carlsson for suggesting these TRACE observations
to the third author and for sharing simulation insights into the intricacies of
phase-difference determination with the second author. We also thank
C.E.~DeForest, J.~Leenaarts, C.C.~Kankelborg, J.M.~Krijger, B.W.~Lites,
K.~Muglach and R.A.~Shine for advice and discussions, and the referee
for suggesting many clarifications. A.G. de Wijn and R.J. Rutten acknowledge
travel support from NASA (contract NAS5-38099) and the Leids Kerkhoven-Bosscha
Fonds, and are indebted to the Lockheed Martin Solar and Astrophysics Lab.\ at
Palo Alto, the solar physics group of Montana State University at Bozeman, and
the High Altitude Observatory at Boulder for hospitality.
\section{Introduction}
\subsection{The problem}
Imagine that while you are reading these lines a $\lambda$-phage
injects its DNA into a cell. For the infected cell, this sets off a
race against time: its hope of survival depends entirely on the
ability of the proper restriction enzyme to find and recognize the
specific site on the viral DNA and then cut it, thus rendering the viral
DNA inoperable and harmless. If the restriction enzyme takes too long
to locate its target, the cell is dead.
This is, of course, just an example. Essentially all of molecular
biology is about various enzymes operating on specific sites
on DNA, and each enzyme must locate its target site quickly
and reliably. How do they accomplish this task? It was
recognized very early on that search by free diffusion through
the 3D solution is far too slow, and that proteins somehow do it faster.
Indeed, the rate at which diffusing particles find the target was
determined by M. Smoluchowski as early as 1917
\cite{Smoluchowski}: it equals $4 \pi D_3 b c$, where $b$ is
the target radius, and $D_3$ and $c$ are, respectively, the diffusion
coefficient and concentration of the diffusing particles, in our case
proteins (see also appendix \ref{sec:Smoluchowski} for a simple
derivation). Although the Smoluchowski result sets a rigid upper
bound for any diffusion-controlled rate, proteins, at
least in some instances, manage to beat it by up to about two
orders of magnitude - see, for instance,
\cite{Riggs,Eigen}. The idea to resolve this paradox goes back to
Delbr\"uck \cite{Delbruck}, who suggested that a protein can fairly
quickly adsorb at a non-specific, random place on DNA, after which 1D
sliding along the DNA can be much faster than 3D diffusion. In
fact, the idea that reduced dimensionality speeds up chemical reactions
can be traced even further back to Langmuir \cite{Langmuir}, who
noticed that adsorption of reagents on a 2D surface can facilitate
their finding each other by diffusion.
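For orientation, the Smoluchowski rate is easy to evaluate with textbook-level numbers (the values below are our illustrative assumptions, not taken from the cited experiments):

```python
import math

# Smoluchowski diffusion-limited rate J = 4*pi*D3*b*c for one target.
# Illustrative assumed values: a small protein in water, a single
# target site, and ~10 protein copies in a 1 um^3 volume.
D3 = 5e-11            # m^2/s, 3D protein diffusion coefficient
b = 5e-9              # m, target radius
c = 10 / 1e-18        # m^-3, ten proteins per cubic micron

J = 4 * math.pi * D3 * b * c
print(f"J ~ {J:.1f} s^-1, mean search time ~ {1000 / J:.0f} ms")
```

At these illustrative numbers the diffusion-limited search takes tens of milliseconds; beating this bound by two orders of magnitude, as reported for some proteins, would mean sub-millisecond search.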
The field has attracted intensive attention for many years. Early
studies \cite{Riggs,Eigen} seemed to corroborate the Delbr\"uck
model. A nice recent review of the various strategies employed to
address the problem experimentally can be found in Ref.
\cite{Halford}. Based on the summary of experimental evidence, the
authors of this review conclude that the process is not just
naive 1D sliding, but rather a delicately weighted mixture of 1D
sliding over some distances and 3D diffusion. A theorist could also
have guessed the presence of a cross-over between 1D sliding
and 3D diffusion, because sliding along coiled DNA becomes very
inefficient at large scales: having moved a contour distance of about
$t^{1/2}$ along the DNA after 1D diffusion over some time $t$, the protein
moves in space by only about $t^{1/4}$ if the DNA is a Gaussian coil. This is
very slow subdiffusion. This situation requires theoretical
attention to understand how 3D and 1D diffusion can be combined
and how their combination should be manifested in experiments.
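This scaling argument is easy to verify numerically; the sketch below (our illustration, not taken from the cited works) draws, for each of many independent walkers, a 1D contour displacement after time $t$ and then the corresponding spatial displacement on a fresh Gaussian coil with unit Kuhn length:

```python
import numpy as np

rng = np.random.default_rng(3)
walkers = 20_000   # independent protein trajectories

def msd(t):
    """Mean-square contour and spatial displacements after sliding time t.

    The contour displacement m is Gaussian with variance t (1D diffusion);
    for a Gaussian coil, two monomers a contour distance |m| apart are
    separated in space by a Gaussian vector with variance |m| per
    dimension (unit Kuhn length). Each walker gets its own coil.
    """
    m = rng.normal(0.0, np.sqrt(t), size=walkers)
    r = rng.normal(size=(walkers, 3)) * np.sqrt(np.abs(m))[:, None]
    return np.mean(m**2), np.mean(np.sum(r**2, axis=1))

c1, s1 = msd(64)
c2, s2 = msd(6400)
# Over a factor 100 in time, the contour MSD grows ~t (ratio ~100),
# while the spatial MSD grows only ~t^(1/2) (ratio ~10): the spatial
# displacement itself scales as t^(1/4).
print(c2 / c1, s2 / s1)
```

The factor-of-ten gap between the two MSD ratios is the subdiffusion discussed above.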
On the theoretical front, the major contribution to the field is due
to Berg, Winter and von Hippel (BWH) \cite{BWH}. As an outcome of
their theory, these authors formulated the following nice
prediction, partially confirmed by their later \emph{in vitro}
experiments \cite{BWH2}: the rate at which proteins find their
specific target site on DNA depends in a non-monotonic fashion on
the ionic strength of the solution. In this context, the ionic
strength is believed to tune the strength of non-specific
adsorption of proteins on DNA, presumably because a protein
adsorbs to DNA via a positively charged patch on its surface. Thus,
in essence one should speak of the non-monotonic dependence of
the rate on the energy of non-specific adsorption of proteins on
DNA.
Although qualitatively consistent with experiment, the BWH theory
\cite{BWH} leaves several questions open. First and foremost, how
does the time for proteins to find their target, or the
corresponding rate, depend on the DNA conformation? In
particular, is it important that the DNA is coiled on length
scales larger than the persistence length? Is it important that the
DNA coil may not fit into the available volume, in which case the DNA must be
a globule, as in the nucleoid of a prokaryotic cell \emph{in
vivo} or under experimental conditions \emph{in vitro}
\cite{Odijk}? A second, closely related aspect is that the BWH theory
\cite{BWH} does not answer the experimentally most relevant
question \cite{Halford} of the interplay between 1D sliding and 3D
diffusion. In particular, one of the questions raised by
experiments and not answered by the BWH theory \cite{BWH} concerns
the correlations between the place where a protein departs from
the DNA and the place where it re-adsorbs. A third aspect, although of
lesser importance and more a matter of taste, is that the BWH theory
\cite{BWH} does not yield a simple intuitive explanation for the
non-monotonic dependence of the rate on the strength of
non-specific adsorption, and one may want to know whether a
simple qualitative description of the rate exists at least in some
limits.
A more recent refinement of the theory is given in Ref.
\cite{FrenchGroup}. The authors of this work follow BWH in that
they treat DNA in terms of ``domains'' - a concept having no
unambiguous definition in the physics of DNA. Also, the paper
\cite{FrenchGroup} makes it very explicit that BWH \cite{BWH}
and subsequent theories neglect correlations between the place
where a protein desorbs from the DNA and the place where it adsorbs
again - an approximation that clearly defies the polymeric nature
and fractal properties of DNA. At the same time, this
approximation leaves unanswered the experimentally motivated
question of the interplay between the 1D and 3D components of the
search process.
In recent years, the problem has been revisited by physicists
several times \cite{Bruinsma,Marko,Mirny}, but the disturbing fact
is that they all attribute quite different results and
statements to BWH: the paper \cite{Bruinsma} says that
according to BWH the search time scales as the DNA length $L$ rather
than as $L^2$, as in 1D diffusion along DNA; the work
\cite{Marko} states that proteins slide along DNA a distance
that is independent of the DNA conformation, regardless even of its
fractal properties; the article \cite{Mirny}, although it
concentrates on the role of the non-uniform DNA sequence, claims
that the time for 3D diffusion must be about the same as the time for
1D diffusion along DNA. A further, possibly even more disturbing
fact is that none of the papers
\cite{BWH,FrenchGroup,Bruinsma,Mirny} makes any clearly
articulated explicit assumption about the DNA conformation. Is it
straight, or a Gaussian coil with the proper persistence length, or
something else? Does the result depend on the DNA conformation?
Interestingly, experimenters do discuss in their works (see
\cite{Halford} and references therein) the issue of correlated vs.
uncorrelated re-adsorption; these discussions call for theoretical
attention and a theoretical description in terms of correlations in the
fractal DNA, but so far no proper theory has been suggested.
Motivated by these considerations, in this work we set out to
re-examine the problem from the very beginning. We explicitly
take into account that DNA is fairly straight on length scales
smaller than the persistence length, while it is a Gaussian coil on larger
length scales. We also consider the possibility that the DNA is
confined within a volume in which the Gaussian coil does not fit (as
it does not fit into a typical prokaryotic cell, for instance), in
which case the DNA must be a globule.
\subsection{Model, approach, and limitations}\label{sec:model}
We assume that a (double-helical) DNA with contour length $L$,
persistence length $p$, and a target site of size $b$ is confined
within some volume $v$.
We further assume that a protein can be non-specifically adsorbed at
any place on the DNA, and that the non-specific adsorption energy
$\epsilon$, or the corresponding constant $y = e^{\epsilon/k_BT}$,
is the same everywhere on the DNA and does not depend on the DNA
sequence. We assume that every protein molecule has just one site
capable of adsorbing on the DNA. There are proteins with two such
sites; they can adsorb on two separate pieces of DNA at the same
time and thus serve as cross-linkers for the DNA itself. We do
not consider this possibility in this article.
We assume that there is only one molecule of DNA. In reality, a
macroscopic sample of DNA solution at a certain concentration is
used in any \emph{in vitro} experiment. From the theoretical
standpoint, a DNA solution with concentration $1/v$ (in units of
DNA chains per unit volume) is equivalent to the single-DNA system
considered here. We also assume that the DNA has only one target site
on it, which is not always true in reality \cite{Halford}.
We assume that a non-specifically bound protein can diffuse (slide)
along the DNA with diffusion coefficient $D_1$, while a protein
dissolved in the surrounding water diffuses in 3D with diffusion
coefficient $D_3$. Thus, we have a unitless parameter related to the
diffusion coefficients, $d = D_1/D_3$. In the simpler
version of the theory, which we shall consider first, we assume
$D_1=D_3$, or $d=1$. For simplicity, we assume that while a protein
is diffusing, either in 3D or along the DNA, the DNA itself remains
immobile.
The quantity of interest to us is the time needed for the target
site to be found by a protein (consider, e.g., the example of a
restriction enzyme attacking a viral DNA intruder). One should
imagine a certain concentration $c$ of proteins randomly introduced
into the system, and ask what is the time needed for the
\emph{first} of these proteins to arrive at the target site. In
this paper, we only address the mean time, averaged over both
thermal noise and DNA conformation. For this averaged quantity,
since the DNA is assumed immobile, the problem can be addressed in
a simple way, by looking at the stationary \emph{rate}. Namely,
we consider that there is a sink of proteins at the place
of the specific target site, and that it consumes proteins at a
rate $J$ proportional to the concentration $c$, which is maintained at a
constant level by an influx to ensure stationarity. Obviously then,
the averaged time is just $1/J$. At the end of the paper, in section
\ref{sec:single_protein_view}, we show how to re-derive all our
results in terms of a single protein, thus avoiding the artificial
assumption of a sink of proteins at the place of the target.
In this article, we calculate the rate $J$ assuming the concentration
$c$ to be an arbitrary constant. In order to compare the predicted rate
to the Smoluchowski rate $J_s = 4 \pi D_3 c b$, we shall mainly
look at the ratio
\begin{equation} \frac{J}{J_s} = \frac{J}{4 \pi D_3 c b} \sim \frac{J}{D_3 c b}
\ , \end{equation}
which characterizes the acceleration of the reaction rate achieved
due to the sliding along DNA.
We will be mainly interested in the scaling dependence of the rate $J$
or the acceleration $J/J_s$ on the major system parameters, such as $y$,
$L$, and $v$. In this context, we will use symbol ``$\sim$'' to
mean ``equal up to a numerical coefficient of order one'', while
symbols $>$ and $<$ mean $\gg$ and $\ll$, respectively.
Along with dropping all numerical coefficients in our scaling
estimates, we also make several assumptions driven purely by the desire
to make formulae simpler and to clarify the major physical ideas. We
assume that all the ``microscopic'' length scales are of the same
order, namely about the target size $b$: the protein diameter, the double-helical DNA diameter, and the distance from the DNA at which
non-specific adsorption takes place. These assumptions are easy
to relax.
Throughout this work we disregard the excluded volume of the DNA,
considering the DNA coil as Gaussian and \emph{not} as a swollen coil
described by the Flory exponent $3/5$. This is a reasonable
approximation for most realistic cases \cite{RedBook}. Indeed,
for many real DNAs, such as, e.g., $\lambda$-DNA, it is justified
by the large persistence length-to-diameter ratio of the
double helix: excluded volume in the coil remains unimportant up
to DNA lengths of about $L < p^3/b^2$ (up to about $10^5$ base pairs
under normal, non-exotic ionic conditions). We further assume that
the volume fraction of DNA inside the volume $v$, which is about
$Lb^2/v$, is sufficiently small even when the DNA is a globule. In
particular, we assume $Lb^2/v < b/p$, because in a denser system
liquid-crystalline nematic ordering of DNA segments becomes likely
\cite{RedBook}. Of course, the real nucleoid is a rather complex
structure involving much more sophisticated features than just
orientational ordering; they are caused by structural and other
proteins, by entanglements, etc. - see the recent experimental work
\cite{Odijk} and references therein. In this paper we shall touch on
none of these issues, guided by the prejudice that simple
questions should be addressed first.
\subsection{Outline}
The plan of the article is as follows.
In section \ref{sec:simple_case} we first consider the relatively
simple case in which the DNA is a Gaussian coil and the 1D sliding of
proteins along the DNA involves
only a small part of the DNA length.
Already in this situation we will be able to explain the effect of
correlated re-adsorption and arrive at a number of new results,
such as, for instance, the possibly asymmetric character of the
maximum of the rate as a function of adsorption
strength. These results are also derived through an
electrostatic analogy in appendix
\ref{sec:electrostatic_analogy}. In section
\ref{sec:summary} we present a summary of all possible scaling
regimes. We then discuss them in more detail (section
\ref{sec:more_cases}). We start by looking at the saturation of the
rate when 1D sliding involves the entire DNA length (section
\ref{sec:saturation}). We then consider the delicate case in which the DNA
as a whole is a globule (section \ref{sec:globule}); in this case,
we find that even the 3D transport of proteins is in many cases
realized through the sliding of adsorbed proteins along the DNA,
using the DNA as a network of 1D transport pathways. We continue in
section \ref{sec:d_neq_1} by looking at the situations in which the
diffusion coefficient of the proteins along the DNA is either smaller
or larger than their diffusion coefficient in the surrounding bulk
water. In section \ref{sec:single_protein_view} we re-derive all
our major results using the language of the single-protein search time
instead of a stationary process and flux. Finally, we conclude
with a comparison of our results to those of earlier works and a
discussion of possible further implications of our work (section
\ref{sec:discussion}).
\section{Simple case: straight antenna vs. Gaussian coil
antenna}\label{sec:simple_case}
The reason why non-specific adsorption on DNA can speed up the
finding of the target is illustrated in Fig. \ref{fig:antenna} (a)
and (b): the DNA forms a kind of antenna around the
target, thus increasing the size of the ``effective target''. How
should we determine the size of this antenna? The simplest
argument is as follows. Suppose the antenna size is $\xi$ and the contour
length of DNA inside the antenna is $\lambda$.
It is worth emphasizing that $\xi$ and $\lambda$ do not define
any sharp border, but rather a smooth cross-over, such that
transport outside the antenna is \emph{mainly} due to 3D
diffusion, while inside the antenna transport is \emph{dominated} by
sliding, or 1D diffusion along the DNA. The advantage of thinking
about a \emph{stationary} process is that under stationary
conditions the flux of particles delivered by 3D diffusion
into the $\xi$-sphere of the antenna must be equal to the flux of
particles delivered by 1D diffusion into the target. The former
rate is given by the Smoluchowski formula (see appendix
\ref{sec:Smoluchowski}) for a target of size $\xi$ and for the
concentration of ``free'' (not adsorbed) proteins $c_{\rm free}$:
it is $\sim D_3 c_{\rm free} \xi$. To estimate the latter rate, we
note that the time of 1D diffusion into the target site from a
distance of order $\lambda$ is about $\lambda^2/D_1$; therefore,
the rate can be written as $ \left( \lambda c_{\rm ads} \right) /
\left(\lambda^2 / D_1 \right)$, where $\lambda c_{\rm ads}$ is the
number of proteins non-specifically adsorbed on a piece of DNA
of length $\lambda$. Thus, our main \emph{balance} equation
for the rate $J$ reads
\begin{equation} J \sim D_3 c_{\rm free} \xi \sim \frac{D_1 c_{\rm ads} }{
\lambda } \ . \label{eq:balance} \end{equation}
Formally, this equation follows from the continuity equation,
which says that for a stationary process the divergence of the flux
must vanish everywhere, i.e., the flux is a potential field.
Notice that the balance equation (\ref{eq:balance}) depends on the
relation between $\xi$ and $\lambda$ - between the size of the antenna
measured in space ($\xi$) and measured along the DNA ($\lambda$).
Here we already see why the fractal properties of DNA conformations
enter our problem.
To determine the one-dimensional concentration of non-specifically
adsorbed proteins, $c_{\rm ads}$, and the concentration of proteins
remaining free in solution, $c_{\rm free}$, we now argue that as
long as the antenna is only a small part of the DNA present, every
protein in the system will adsorb and desorb many times on DNA
before it locates the target; therefore, there is statistical
equilibrium between adsorbed and desorbed proteins. Assuming that
we know the adsorption energy $\epsilon$ or the corresponding
constant $y= e^{\epsilon / k_B T}$, and remembering that adsorbed
proteins are confined within a distance of order $b$ from the DNA,
we can write down the equilibrium condition as
\begin{equation} c_{\rm ads} / \left( c_{\rm free} b^2 \right) = y \ , \label{eq:equilibrium}
\end{equation}
which must be complemented by the particle counting condition
\begin{equation} c_{\rm ads} L + c_{\rm free} \left( v - Lb^2 \right) = cv \ .
\label{eq:particle_counting} \end{equation}
Since the volume fraction of DNA is always small, $Lb^2 \ll v$,
standard algebra then yields
\begin{eqnarray} c_{\rm ads} & \simeq & \frac{cvyb^2}{yLb^2+v} \sim \left\{\begin{array}{lcr} cyb^2 & {\rm if} & y < v/Lb^2 \\
cv/L & {\rm if} & y > v/Lb^2
\end{array} \right. \ , \nonumber \\
c_{\rm free} & \simeq & \frac{cv}{yLb^2+v} \sim \left\{\begin{array}{lcr} c & {\rm if} & y < v/Lb^2 \\
cv/Lb^2y & {\rm if} & y > v/Lb^2
\end{array} \right. \ . \label{eq:equilibrium_concentrations} \end{eqnarray}
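These limiting behaviors can be checked with a short numerical sketch; the parameter values below are purely illustrative, not taken from any experiment, and all scaling prefactors are set to one:

```python
def concentrations(c, v, L, b, y):
    """Partition proteins between solution and DNA.

    Combines the equilibrium condition c_ads = y * b**2 * c_free with
    particle counting c_ads * L + c_free * v = c * v (using L*b**2 << v).
    Returns (c_ads, c_free): the 1D concentration on DNA and the 3D
    concentration in solution.
    """
    c_free = c * v / (y * L * b**2 + v)
    c_ads = y * b**2 * c_free
    return c_ads, c_free
```

For weak adsorption ($y \ll v/Lb^2$) this reproduces $c_{\rm ads} \sim cyb^2$ and $c_{\rm free} \sim c$; for strong adsorption ($y \gg v/Lb^2$) it gives $c_{\rm ads} \sim cv/L$ and $c_{\rm free} \sim cv/Lb^2y$, as in the limiting forms above.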
\begin{figure*}
\centerline{\scalebox{0.8}{\includegraphics{antenna_3.eps}}}
\caption{Antenna in a variety of cases. The upper part of every
figure represents a poor man's idea of a prokaryotic cell. In
figures a and b, DNA in the cell is a coil, because the coil size $R$
is smaller than the cell dimension; alternatively, one can think
of a dilute solution of DNA in which $R$ is much smaller than the
distance to other coils (not shown). In figure c, the amount of
DNA is so large that the coil size would have exceeded the cell
diameter, and so DNA is a globule; alternatively, one can think of
a semi-dilute solution \protect\cite{DeGennesBook} of strongly
overlapping DNA coils. The lower figures represent a blown-up view of
the region around the target site on DNA. The antenna part of DNA
around the target is shown in a lighter color than the rest of the DNA.
The space region below the crossover length scale is shaded.
This space region is roughly spherical in cases a and b, and
sausage-shaped in case c. Figure a also shows the averaged flow
lines of the diffusion, which go in 3D far away from the target
and go mostly along DNA within the antenna length scale (they are
equivalent to electric field lines in terms of the electrostatic
analogy, Appendix \protect\ref{sec:electrostatic_analogy}). In
figures b and c flow lines are not shown, simply because it is
difficult to draw them. In figure c, we see that the DNA globule
locally looks like a transient network, with mesh size $r$. In
this case, the antenna might be much longer than one mesh. In the
figure, the mesh size is not larger than the persistence length, so the
length of DNA in the mesh, $g$, is about the same as $r$; at lower
density, the mesh size would be larger, and then DNA in the mesh would
be wiggly, with $g \gg r$.} \label{fig:antenna}
\end{figure*}
Note that at length scales smaller than the persistence length $p$
the DNA double helix is practically straight, while on length
scales greater than $p$, the double helix as a whole is a Gaussian
coil. That means that if we take a piece of double helix of
contour length $\lambda$, then its size in space scales as
\begin{equation} \xi \sim \left\{\begin{array}{lcr} \lambda & {\rm when} & \lambda < p \\
\sqrt{\lambda p} & {\rm when} & \lambda > p
\end{array} \right. \ . \label{eq:fractality} \end{equation}
Substituting this result into the balance equation
(\ref{eq:balance}), we can determine the antenna size and then,
automatically, the rate, the latter being either side of the
balance equation. We have to be careful, because there are already
as many as four different scaling regimes, due
to equations (\ref{eq:equilibrium_concentrations}) and
(\ref{eq:fractality}):
\begin{itemize} \item Regime A - the antenna is straight (upper line
of Eq. (\ref{eq:fractality})), and adsorption is relatively weak
(upper lines in Eq. (\ref{eq:equilibrium_concentrations}));
\item Regime B - the antenna is Gaussian (lower line of Eq.
(\ref{eq:fractality})), but adsorption is still relatively weak;
\item Regime C - the antenna is Gaussian and adsorption is relatively
strong (lower lines in Eqs.
(\ref{eq:equilibrium_concentrations})); \item Regime D - straight
antenna and strong adsorption. \end{itemize} Later we will find
plenty more regimes, but for now let us consider just these,
one by one.
To begin with, suppose the antenna is straight ($\lambda < p$, so
$\lambda \sim \xi$, see Fig. \ref{fig:antenna}, (a)) and
non-specific adsorption is relatively weak ($y<v/Lb^2$, so $c_{\rm
ads} \sim cyb^2$). In this case, the balance equation yields $\lambda
\sim b(y d)^{1/2}$, or for the rate
\begin{equation} J \sim c \sqrt{D_3D_1} y^{1/2} b \ ; \end{equation}
in other words, for the ratio of this rate to the Smoluchowski
rate $J_s \sim D_3 c b$, we obtain
\begin{equation} \frac{J}{J_s} \sim (y d)^{1/2} \ \ \ \ \ ({\rm regime \ A}).
\label{eq:straight} \end{equation}
This result remains correct as long as the antenna remains shorter
than the persistence length, and since we know $\lambda$, we obtain
this condition explicitly: $y < p^2/b^2 d$.
Let us now suppose that non-specific adsorption is still
relatively weak ($y<v/Lb^2$, so $c_{\rm ads} \sim cyb^2$), but
strong enough that the antenna is longer than the persistence
length ($\lambda > p$, so that $\xi \sim \sqrt{\lambda p}$, see
Fig. \ref{fig:antenna}, (b)). Then our balance equation yields
$\lambda \sim \left( y d \right)^{2/3}p^{-1/3}b^{4/3}$ or
\begin{equation} \frac{J}{J_s} \sim \left( \frac{y p d}{b} \right)^{1/3} \ \ \
\ \ ({\rm regime \ B}). \label{eq:gaussian} \end{equation}
One can check that this new result for $\lambda$ implies
$\lambda > p$ at $y > p^2/b^2d$, so $y \sim p^2 /b^2d$ is the
cross-over line between the two regimes, A and B. In both regimes,
as expected, the rate grows with the strength of non-specific
adsorption, $y$, because increasing $y$ increases the size of the
antenna. However, the functional scaling dependence of the rate
on $y$ is significantly different, reflecting the difference in
DNA fractality at different length scales.
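As a quick consistency check, a minimal sketch (in units of $b=1$, with all order-one prefactors set to one) shows that the two expressions for the antenna length join continuously at the A-B crossover $y \sim p^2/b^2d$:

```python
def antenna_length(y, p, d=1.0):
    """Antenna contour length lambda (in units of b) from the balance
    equation: straight antenna (regime A) while lambda < p, Gaussian
    antenna (regime B) beyond. Scaling prefactors are dropped."""
    lam_straight = (y * d) ** 0.5                            # regime A
    lam_gauss = (y * d) ** (2.0 / 3.0) * p ** (-1.0 / 3.0)   # regime B
    return lam_straight if lam_straight < p else lam_gauss
```

At the crossover $y = p^2/d$ both branches give $\lambda = p$, so the A and B results match (up to prefactors) where they meet.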
Before we proceed with the analysis of other scaling regimes, it is
useful to make the following comment. The balance equation
(\ref{eq:balance}) describes the fact that every protein going
through 3D diffusion far away must then also go through 1D
diffusion closer to the target. In other words, the balance equation
(\ref{eq:balance}) describes the self-establishing match between
the 3D and 1D parts of the process. But we can also look at the
situation differently: suppose that one particular protein is
adsorbed on DNA in a random place, and let us estimate the
distance it can diffuse along DNA before it desorbs due to a
thermal fluctuation. Since the probability of thermally activated
desorption is proportional to $e^{-\epsilon/k_BT}=1/y$, the time a
protein spends adsorbed must be about $b^2 y / D_3$. During this
time, the protein diffuses along DNA by a distance of about $\sqrt{D_1
b^2 y /D_3} = b \sqrt{yd}$. Following \cite{FrenchGroup,Marko},
we call it the \emph{sliding distance}, $\ell_{\rm slide} \sim b\sqrt{yd}$.
We see, therefore, that the
antenna length $\lambda$ is just about the sliding distance for
straight DNA, but $\lambda \gg \ell_{\rm slide}$ for coiled
DNA. At first glance this seems like a very strange result:
how can the antenna possibly be longer than the distance over which
a protein can slide? In fact the antenna does become longer than the
bare sliding distance, and this happens because for coiled DNA
every protein, desorbed after sliding a distance of the order of
$\ell_{\rm slide}$, has a significant chance to re-adsorb nearby.
Such correlated re-adsorption gets more likely as we consider more
and more crumpled conformations of DNA. Indeed, if we assume in
general that $\xi \sim \lambda^{\nu}$, then the balance equation yields
$\lambda \sim y^{1/(1+\nu)}$, which means that $\lambda$ grows
with $y$ \emph{faster} than $\ell_{\rm slide} \sim y^{1/2}$ at
every $\nu < 1$. This growth of $\lambda$ with $y$ gets
increasingly fast as $\nu$ decreases, which corresponds to more
crumpled conformations. We should emphasize that this mechanism of
correlated re-adsorption is impossible to see as long as DNA
polymeric and fractal properties are not considered explicitly,
which is why this mechanism has been overlooked in previous works.
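The exponent comparison in this argument can be stated compactly. A sketch in exact rational arithmetic (no fitted numbers; the balance equation $\lambda^{1+\nu} \sim y$ is taken at face value):

```python
from fractions import Fraction

def antenna_exponent(nu):
    """Exponent a in lambda ~ y**a, obtained from the balance equation
    D3 * xi ~ D1 * y / lambda with xi ~ lambda**nu,
    i.e. lambda**(1 + nu) ~ y."""
    return Fraction(1) / (1 + Fraction(nu))
```

For a straight antenna ($\nu=1$) this gives $1/2$, the bare sliding-distance exponent; for a Gaussian antenna ($\nu=1/2$) it gives $2/3$, reproducing regime B; and any $\nu < 1$ beats $y^{1/2}$, which is the correlated re-adsorption effect in one line.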
With further increase of either the non-specific adsorption strength
$y$ or the overall DNA length $L$, we run into the situation when most
of the proteins are adsorbed on the DNA. In other words, if one
prefers to think in terms of single-protein diffusion, then this
single protein molecule spends most of the time adsorbed on DNA
far away from the target. For this case, we have to use the lower
lines of formulae (\ref{eq:equilibrium_concentrations}) and
substitute them into the balance equation (\ref{eq:balance}). Since
the equilibrium condition (\ref{eq:equilibrium}) is still satisfied,
the result $\lambda \xi \sim ydb^2$ remains unchanged. Depending
on whether the antenna length $\lambda$ is longer or shorter than the
persistence length, we obtain regimes C and D.
For regime C, we have $\lambda >p$; the antenna is a Gaussian coil and
$\xi \sim \sqrt{\lambda p}$, yielding $\lambda \sim (y
d)^{2/3}p^{-1/3}b^{4/3}$ and
\begin{equation} \frac{J}{J_s} \sim \frac{v (pd)^{1/3}}{L b^{7/3}y^{2/3}} \ \
\ \ \ ({\rm regime \ C}). \label{eq:falling_rate} \end{equation}
Given our expression for $\lambda$, the condition $\lambda > p$
implies the familiar $y>p^2/b^2d$; the other condition for this
regime is that most proteins are adsorbed, or $y > v/Lb^2$, see
Eqs. (\ref{eq:equilibrium_concentrations}).
For regime D, the antenna is straight, so $\xi \sim \lambda$, and we
get $\lambda \sim b (yd)^{1/2}$, just as in regime A. For the
rate, however, substitution of the lower lines of Eqs.
(\ref{eq:equilibrium_concentrations}) into the balance equation
(\ref{eq:balance}) yields
\begin{equation} \frac{J}{J_s} \sim \frac{v d^{1/2}}{L b^2 y^{1/2}} \ \ \ \ \
({\rm regime \ D}). \label{eq:falling_rate_straight} \end{equation}
According to our discussion, this regime should exist when $y
<p^2/b^2d$ and $y>v/Lb^2$. As we shall see later, in Section
\ref{sec:d_neq_1}, these two conditions can be met together, and
room for this regime exists, only if $d<1$, that is, when 1D
diffusion along DNA is slower than 3D diffusion in space.
In both regimes C and D, the overall rate decreases with increasing
non-specific adsorption strength $y$, because 3D transport to the
antenna is slowed down by the lack of free proteins.
We have so far discussed four of the scaling regimes; our results
are equations (\ref{eq:straight}), (\ref{eq:gaussian}),
(\ref{eq:falling_rate}) and (\ref{eq:falling_rate_straight}).
Already at this stage, we have gained a simple understanding of the
non-monotonic dependence of the rate on $y$, a phenomenon formally
predicted in \cite{BWH} and observed in \cite{BWH2}, but
previously not explained qualitatively: initially,
increasing $y$ helps the process because it leads to increasing
antenna length; further increase of $y$ is detrimental to the
rate because it leads to unproductive adsorption of most of the
proteins. We have also obtained a new feature, absent in previous
works: the shape of the maximum on the $J(y)$ curve is asymmetric,
at least if DNA is not too long: in regimes B and C, the rate
grows as $y^{1/3}$ and then falls off as $y^{-2/3}$.
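This asymmetric maximum is easy to visualize with a piecewise sketch of the regime-B and regime-C results (dimensionless units $b=d=1$, prefactors set to one; the parameter values are purely illustrative):

```python
def rate_ratio(y, L, p, v):
    """J/J_s across the B -> C maximum, in units b = d = 1.
    Below y* = v/L most proteins are free (regime B, J/Js ~ y**(1/3));
    above y* most proteins are adsorbed (regime C, J/Js ~ y**(-2/3))."""
    y_star = v / L
    if y < y_star:
        return (y * p) ** (1.0 / 3.0)                        # regime B
    return v * p ** (1.0 / 3.0) / (L * y ** (2.0 / 3.0))     # regime C
```

The two branches match at $y^* = v/L$, and the slow rise ($y^{1/3}$) against the faster fall ($y^{-2/3}$) is exactly the asymmetry discussed above.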
Since there are quite a few more scaling regimes, it is easier to
understand them if we now pause and offer a summary of all
regimes, as presented in Figure \ref{fig:diagram_d_equal_1} and
Table \ref{tab:rates_table}.
\section{Summary of the results: scaling
regimes}\label{sec:summary}
\begin{table*}
\caption{Summary of the rates and antenna lengths in the various
regimes. In labelling the regimes, we skip J and L to avoid confusion
with the rate and the DNA length.} \label{tab:rates_table}
\begin{tabular}{|l|l|l|l|}
\hline Regime & Description & $J/J_s$ & $\lambda$\\
\hline Axes & Smoluchowski: no antenna & $1$ & $b$\\
\hline A & straight antenna, few proteins adsorbed & $(yd)^{1/2}$
& $b(yd)^{1/2}$\\
\hline B & coiled antenna, few proteins adsorbed & $\left(ypd/b
\right)^{1/3}$ &
$ \left( yd \right)^{2/3} p^{-1/3}b^{4/3}$\\
\hline C & coiled antenna, most proteins adsorbed &
$\frac{v(pd)^{1/3}}{L b^{7/3} y^{2/3}}$ & $\left( yd \right)^{2/3} p^{-1/3} b^{4/3}$\\
\hline D ($d<1$) & straight antenna, most proteins adsorbed &
$\frac{vd^{1/2}}{L b^{2} y^{1/2}}$ & $b(yd)^{1/2}$\\
\hline E & whole DNA as straight antenna, few proteins adsorbed & $L/b$ & $L$\\
\hline F & whole DNA as coiled antenna, few proteins adsorbed & $\left( Lp /b^2 \right)^{1/2}$ & $L$\\
\hline G & whole DNA as antenna, most proteins adsorbed & $\frac{vd}{L^2b}$ & $L$\\
\hline H & antenna with coiled mesh, most proteins adsorbed & $\frac{p}{b^2} \left(\frac{vd}{Ly} \right)^{1/2}$ & $\frac{b}{p}\left(\frac{vyd}{L}\right)^{1/2}$\\
\hline I & antenna with straight mesh, most proteins adsorbed & $\frac{vd^{1/2}}{L b^{2} y^{1/2}}$ & $b \left( yd \right)^{1/2}$\\
\hline K ($d>1$) & antenna with straight mesh, few proteins adsorbed & $\left( yd \right)^{1/2}$ & $b \left( yd \right)^{1/2}$\\
\hline M ($d>1$) & antenna with coiled mesh, few proteins adsorbed & $p \left(\frac{Lyd}{v}\right)^{1/2}$ & $\frac{b}{p}\left(\frac{vyd}{L}\right)^{1/2}$\\
\hline
\end{tabular}
\end{table*}
Our results are summarized in Fig. \ref{fig:diagram_d_equal_1} and
in Table \ref{tab:rates_table}. Figure
\ref{fig:diagram_d_equal_1} represents the log-log plane of the
parameters $L$ and $y$, and each line on this plane marks a
cross-over between scaling regimes. This figure gives the diagram
of scaling regimes for the specific case $d=1$ (or $D_1=D_3$);
later on, in Section \ref{sec:d_neq_1}, we will return to the
more general situation and present the corresponding diagrams for both
the $d<1$ and $d>1$ cases.
To be systematic, let us start our review of scaling regimes with
the two trivial cases, which correspond to the axes in Fig.
\ref{fig:diagram_d_equal_1}. When $y \leq 1$, there is no
non-specific binding of proteins to the DNA, and no sliding along
DNA. Proteins find their specific target at a rate equal
to the Smoluchowski rate, or $J/J_s =1$. Similarly, if the
DNA is very short, as short as the specific target site itself, or
$L \sim b$, then once again $J/J_s =1$ for a trivial reason. Since
we assume that there is some non-specific adsorption, or $y \geq
1$, and since the DNA length is obviously always greater than the
target size $b$, our diagram in Fig. \ref{fig:diagram_d_equal_1}
presents only the $y>1$ and $L/b>1$ region, which is why the pure
Smoluchowski regime is seen only on the axes.
\begin{figure}
\centerline{\scalebox{0.6}{\includegraphics{d_equal_1.eps}}}
\caption{Diagram of scaling regimes for the case $d=1$, when
diffusion along DNA has the same diffusion constant as diffusion
in the surrounding water. Both the $L$ and $y$ axes are on a
logarithmic scale. When DNA is shorter than the persistence length
($b<L<p$), it is essentially a rod; DNA is a Gaussian coil when it
is longer than the persistence length but the coil size is still
smaller than the linear dimension of the restriction volume
($p<L<v^{2/3}/p$); DNA is globular
at $L>v^{2/3}/p$; and we only consider $L$ up to about $v/pb$,
because at larger $L$ DNA segments start forming liquid
crystalline order. A summary of the rates for each regime is found
in Table \protect\ref{tab:rates_table}. Here, as well as in the
other figures, to make the formulae look shorter, all lengths are
measured in units of $b$, meaning that $L$, $p$, and $v$ stand
for $L/b$, $p/b$, and $v/b^3$.} \label{fig:diagram_d_equal_1}
\end{figure}
If we increase $y$ and consider the $y>1$ situation, then we have
significant non-specific adsorption of proteins on DNA, which
increases the rate due to the antenna effect. If $y$ remains
moderate, the antenna is shorter than the DNA persistence length
and is straight. This is the regime labelled A in Fig.
\ref{fig:diagram_d_equal_1} and described by formula
(\ref{eq:straight}). With further increase of $y$, when $y >
p^2/b^2d$, we cross over into the regime labelled B and described
by formula (\ref{eq:gaussian}); in this regime the antenna is so long
that it is a Gaussian coil. From regime B, we can cross
the line $y=v/Lb^2$ and get into the regime labelled C and
described by formula (\ref{eq:falling_rate}). One can
cross over into regime C by either increasing $y$ or
increasing $L$, because increasing either of these variables
promotes unproductive non-specific adsorption of proteins on
faraway pieces of DNA and thus slows down the transport to the
specific target.
From regime A, we can also cross the line $y=v/Lb^2$, but as
long as $d=1$ this does not bring us to regime D; instead we
get to the new regime labelled I, which we will explain a few
lines below.
To understand all the other scaling regimes, we have to remember that
our previous consideration throughout Section
\ref{sec:simple_case} was restricted in two respects. First, we
assumed that the entire DNA, in the form of a Gaussian coil, fits
within volume $v$, which is true only as long as $L < v^{1/3}$ and
$\sqrt{Lp} < v^{1/3}$, where $v^{1/3}$ stands for the linear
dimension of the restriction volume. To relax this assumption, we
will have to consider a long DNA which is reflected many times by
the walls of volume $v$ and inside volume $v$ forms a
globule, locally looking like a semi-dilute solution of separate
DNA pieces, as illustrated in Fig. \ref{fig:antenna} (c). For such
long DNA, we shall find two more regimes, labelled H and I in Fig.
\ref{fig:diagram_d_equal_1}. Second, we assumed that the antenna
length $\lambda$ was smaller than the full DNA length $L$; the
consequence of this was our statement (\ref{eq:equilibrium}) that
there is equilibrium between adsorbed and dissolved proteins.
Relaxing this assumption, we will have to discuss the regimes labelled
E, F, and G in Fig. \ref{fig:diagram_d_equal_1}.
In Figure \ref{fig:rate}, we present the schematic $y$-dependence of
the rate for a number of values of the DNA length $L$. Each curve is
labelled with the corresponding value of $L$. To be specific, we
have chosen the lengths which correspond to various cross-overs
and are marked on the scaling regimes diagram, Figure
\ref{fig:diagram_d_equal_1}. Note that in many cases our result
for the rate exhibits a maximum and saturation beyond the maximum,
features first described in the work BWH, Ref. \cite{BWH}.
Unlike BWH, we find that the maximum is asymmetric and, even more
importantly, that $J/J_s$ can become much smaller than unity, i.e., one
can observe deceleration in comparison with the Smoluchowski rate. We
also find a number of other features, such as specific power-law
scaling behavior of the rate.
Thus, we have to discuss one by one all the new regimes E, F, G,
H, and I. This is what we do in the next section, Section \ref{sec:more_cases}.
\begin{figure}
\centerline{\scalebox{0.4}{\includegraphics{rate-y.eps}}}
\caption{Schematic representation of the rate dependence on $y$. Both
the rate $J$ and $y$ are given on a logarithmic scale. The fraction
next to each curve shows its slope, which is the exponent of the $J(y)$
dependence. Each curve corresponds to the specified value of the DNA
length $L$, also indicated in Figure
\protect\ref{fig:diagram_d_equal_1}; the length $L$ is shown above
the right end of each curve. Experimentally, the value of $y$ can
be controlled through the salt concentration, because non-specific
adsorption of proteins is controlled by the Coulomb interaction
between the negative DNA and a positive patch on the protein surface;
for instance, if the salt is ${\rm KCl}$, then it is believed
\cite{BWH2,Bruinsma} that $y = 10 \left[ {\rm KCl} \right] +2.5$,
where $\left[ {\rm KCl} \right]$ is the molar concentration of the
salt. Note that we recover the possibility, first indicated in
\protect\cite{BWH}, that the rate goes through a maximum and
then saturates, but in our case the maximum is often
asymmetric, while at large $y$ the rate becomes very small, $J/J_s
\ll 1$, particularly for long DNA. Here, as well as in the other
figures, to make the formulae look shorter, all lengths are measured
in units of $b$, meaning that $L$, $p$, and $v$ stand for
$L/b$, $p/b$, and $v/b^3$.} \label{fig:rate}
\end{figure}
\section{Systematic consideration of scaling
regimes}\label{sec:more_cases}
\subsection{DNA is not long enough for full
antenna}\label{sec:saturation}
If the DNA is too short for the full antenna, then proteins already
adsorbed on DNA can find their target faster than new proteins can be
delivered to the DNA from solution. There is no adsorption
equilibrium any longer, and instead of formula
(\ref{eq:equilibrium}) we can only claim that $c_{\rm ads} < y
c_{\rm free}b^2$. Therefore, the amount of adsorbed proteins
under stationary conditions is determined by the
stationarity itself, which means we have to look at formula
(\ref{eq:balance}) as \emph{two} equations. In doing so, we have
to replace $\lambda$ on the right-hand side (the one-dimensional rate)
by $L$, because we do not have more DNA than $L$, and we have to
replace $\xi$ on the left-hand side, which is the antenna size for
3D transport, by $R$, the overall size of the DNA coil. Of course,
the particle counting equation (\ref{eq:particle_counting}) is still
valid; it is the third equation. Thus, our equations read:
\begin{eqnarray} && \frac{J}{J_s} \sim \frac{c_{\rm free } R}{cb} \ ;
\nonumber \\
&& c_{\rm free} R \sim \frac{c_{\rm ads} d}{L} \ ; \nonumber \\
&& c_{\rm ads} L + c_{\rm free} v \sim cv \ .
\label{eq:three_equations} \end{eqnarray}
From here, we find
\begin{equation} \frac{J}{J_s} = \frac{vRd/b}{RL^2+vd} \ . \label{eq:long_DNA}
\end{equation}
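A compact numerical sketch of Eq. (\ref{eq:long_DNA}) shows how the saturation regimes emerge as limits; the numbers below are illustrative only, and prefactors of order one are ignored as usual:

```python
def rate_ratio_whole_dna(L, p, v, d=1.0, b=1.0):
    """J/J_s when the entire DNA serves as the antenna:
    J/Js = (v*R*d/b) / (R*L**2 + v*d), with R the DNA coil size
    (rod R ~ L for L < p, Gaussian R ~ sqrt(L*p) otherwise)."""
    R = L if L < p else (L * p) ** 0.5
    return (v * R * d / b) / (R * L**2 + v * d)
```

When $vd$ dominates the denominator this reduces to $R/b$, i.e. $L/b$ for a rod and $(Lp)^{1/2}/b$ for a Gaussian coil (regimes E and F); when $RL^2$ dominates it reduces to $vd/L^2b$ (regime G).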
We can now easily address all possible scaling regimes in which the
antenna is longer than the DNA.
To begin with, it is possible that the DNA length is shorter than the
persistence length, $L<p$, such that the entire DNA is essentially
straight, and then $R \simeq L$. Assuming also $L^3 < v$, we
arrive at the scaling regime labelled E in Fig.
\ref{fig:diagram_d_equal_1}; in this regime
\begin{equation} \frac{J}{J_s} \sim \frac{L}{b} \ \ \ \ \ ({\rm regime \ E}).
\end{equation}
The borderline of this regime can be established from the
condition that, since the entire DNA is smaller than the ``equilibrium''
antenna, we must expect $c_{\rm ads}$ to be smaller than its
equilibrium value, or $c_{\rm ads}/c_{\rm free}b^2 \leq y$. Since
according to the second of the formulae (\ref{eq:three_equations})
we have $c_{\rm ads}/c_{\rm free} = LR/d$, we obtain the
condition $LR/d < y b^2$; at $L<p$ this yields $y>L^2/b^2d$. We can
also arrive at the same condition from the other side of the
crossover, by noting that regime A continues as long as the antenna is
shorter than the entire DNA, $\lambda < L$; using our result for
$\lambda$ in regime A, this produces the same cross-over line
between regimes A and E.
For longer DNA, when $L>p$, the entire DNA is a Gaussian coil, and its
size is $R \sim (Lp)^{1/2}$. Still assuming that the second term
dominates in the denominator of formula (\ref{eq:long_DNA}), we
arrive at
\begin{equation} \frac{J}{J_s} \sim \left( \frac{L p}{b^2} \right)^{1/2} \ \ \
\ \ ({\rm regime \ F}). \end{equation}
This regime is labelled F in Fig. \ref{fig:diagram_d_equal_1}.
Its borderline with regime E is obviously the vertical line $L=p$. As
regards the cross-over to regime B, it can once again be
established either from $c_{\rm ads}/c_{\rm free} = LR/d <yb^2$ for
regime F or from $\lambda < L$ for regime B. Either way, we
arrive at the cross-over condition $y=L^{3/2}p^{1/2}/b^2 d$.
For even longer DNA, the antenna length becomes equal to the
length of the entire DNA only at such large $y$ that the system is
already in regime C, with the rate falling with increasing
$y$ because of the unproductive adsorption of proteins. Since the
antenna length $\lambda$ in regime C is given by the same
formula as in regime B, the upper border line of regime
C is the continuation of the corresponding line bordering regime
B; it is $y=L^{3/2}p^{1/2}/b^2d$. However, when we cross this
line upwards from regime C, we arrive at a new situation,
because now the first term dominates in the denominator of
equation (\ref{eq:long_DNA}), meaning that most of the proteins
are adsorbed on DNA, such that we obtain
\begin{equation} \frac{J}{J_s} \sim \frac{vd}{L^2b} \ \ \ \ \ ({\rm regime \
G}). \end{equation}
The cross-over between this regime and regime F is the vertical line
at which both terms are comparable in the denominator of equation
(\ref{eq:long_DNA}); it is $L = (vd)^{2/5}/p^{1/5}$. The crossover
line with regime C can once again be established from the
condition $c_{\rm ads}/c_{\rm free} = LR/d <yb^2$.
In all three regimes E, F, and G the rate saturates with increasing $y$.
For regimes E and F this happens after just the initial growth of the
rate; for regime G saturation occurs after the rate goes through
its maximum and starts decreasing. In all cases saturation is due
to the fact that increasing the adsorption strength does not lead to
any increase of the antenna size, because the entire DNA is already
employed as the antenna, and the antenna has nowhere to grow.
\subsection{Cell is not big enough to house DNA Gaussian
coil}\label{sec:globule}
When DNA is too long for a given volume, specifically when
$(Lp)^{1/2} > v^{1/3}$, DNA cannot remain just a coil; it must be
a globule, as it is forced to return many times back into the
volume after touching the walls (see, for instance, \cite{RedBook}).
For the purposes of this work, it is sufficient to keep assuming
that the excluded volume of DNA is not important, because the volume
fraction of DNA within the confinement volume $v$ is still small, and
even small compared to $b/p$. Nevertheless, the system locally
looks like a so-called semi-dilute solution of DNA, or a transient
network with a certain mesh size (see Figure \ref{fig:antenna}c).
Let us recall some basic facts regarding a semi-dilute
solution, or transient network \cite{DeGennesBook,RedBook}. Let us
denote by $r$ the characteristic length scale of a mesh in the
network; in the scaling sense it is the same as the characteristic
radius of density-density correlations (see Figure
\ref{fig:antenna}c). Let us further denote by $g$ the characteristic
length along the polymer corresponding to the spatial distance
$r$. The quantities $r$ and $g$ can be estimated from the following
physical argument \cite{DeGennesBook,RedBook}. Consider a piece of
polymer of length $g$ starting from some particular monomer;
it occupies a region $\sim r^3$ and creates a density of about $
g/r^3$; this density must be about the overall average density, which
for our system is of the order of $L/v$. Thus, $g/r^3 \sim L/v$.
The second relation between $g$ and $r$ is similar to formula
(\ref{eq:fractality}); it depends on whether the mesh size is bigger
or smaller than the persistence length $p$:
\begin{equation} r \sim \left\{ \begin{array}{lcr} g & {\rm if} & g < p \\
\sqrt{gp} & {\rm if} & g >p \end{array} \right. \ . \end{equation}
Accordingly, we obtain after some algebra
\begin{equation} \begin{array}{lccr} g \sim \sqrt{\frac{v}{L}} \ , &
r \sim \sqrt{\frac{v}{L}} & {\rm if} & L > \frac{v}{ p^2} \\
g \sim \frac{v^2}{L^{2}p^{3}} \ , & r \sim \frac{v}{Lp} & {\rm if}
& \frac{v^{2/3}}{p} < L < \frac{v}{p^2}
\end{array} \ . \label{eq:blob_size} \end{equation}
The upper line corresponds to a network so dense that every mesh
is shorter than the persistence length and the polymer is essentially
straight within each mesh. The lower line describes a much less
concentrated network, in which every mesh is represented by a
little Gaussian coil.
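The two lines of formula (\ref{eq:blob_size}) can be packaged as a small sketch (lengths in units of $b$, prefactors dropped, illustrative numbers); note that the two branches join continuously at the border density $L = v/p^2$:

```python
def mesh(L, v, p):
    """Mesh size r and DNA contour length per mesh g in the transient
    network, from g/r**3 ~ L/v together with r ~ g (straight mesh) or
    r ~ (g*p)**0.5 (Gaussian mesh). Units of b, prefactors ~ 1."""
    if L > v / p**2:                    # dense network: straight mesh
        g = r = (v / L) ** 0.5
    else:                               # sparser network: Gaussian mesh
        g = v**2 / (L**2 * p**3)
        r = v / (L * p)
    return g, r
```

At the border both branches give $g \sim r \sim p$, while deep in the dilute branch $g \gg r$, as in the wiggly meshes of Fig. \ref{fig:antenna} (c).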
Returning to our problem, we should realize that the antenna
length $\lambda$ can in fact be longer than the mesh length $g$, as
illustrated in Fig. \ref{fig:antenna} (c). To estimate the
antenna size in this case, we should remember that desorption
from the antenna does not necessarily completely interrupt sliding
along DNA, because the protein can still re-adsorb on a nearby place
on DNA or, more generally, on a \emph{correlated} place on DNA. To
account for this, let us imagine that the antenna part of DNA is
decorated by a tube of radius $r$. Since $r$ is the
correlation length in the DNA solution, the protein remains correlated
with the antenna as long as it remains within this tube around the
antenna. Accordingly, our main balance equation
(\ref{eq:balance}) must be modified to account for the fact that
3D transport on scales larger than $r$ is now realized through the
DNA network and, therefore, the task of regular 3D diffusion is
only to deliver proteins over a length scale of the order of one
mesh size $r$, into any one of the $\lambda / g$ network meshes
along the antenna. The rate of delivery into one such mesh would
be $\sim D_3 c_{\rm free} r$, so the overall delivery rate into the
antenna tube scales as $\sim D_3 c_{\rm free} r \lambda / g$. As
usual, this must be equal to the rate of 1D delivery along the antenna
into the specific target, so instead of (\ref{eq:balance}) we
finally get
\begin{equation} J \sim D_3 c_{\rm free} r \frac{\lambda }{ g} \sim D_1
\frac{c_{\rm ads}}{\lambda} \ . \label{eq:balance_mesh} \end{equation}
As long as the antenna is shorter than the entire DNA, the relation
between $c_{\rm free}$ and $c_{\rm ads}$ equilibrates and obeys
(\ref{eq:equilibrium}-\ref{eq:equilibrium_concentrations}), so we
finally get
\begin{equation} \lambda^2 \sim b^2\frac{g y d}{r} \ , \end{equation}
and
\begin{equation} \frac{J}{J_s} \sim \frac{c_{\rm free}}{c} \frac{r \lambda}{g b}
\sim \frac{v}{Lb^2}\left( \frac{r d}{y g} \right)^{1/2} \ . \end{equation}
What is nice about this formula is that it remains correct in a
variety of circumstances: when the antenna is straight ($\lambda
<p$), when it is Gaussian ($p< \lambda < v^{2/3}/p$), and when
it is a globule ($\lambda > v^{2/3}/p$).
Taking $r$ and $g$ from the formulae (\ref{eq:blob_size}), we
finally obtain two new regimes. When every mesh is Gaussian,
\begin{equation} \frac{J}{J_s} \sim \frac{p}{b^2} \left( \frac{v d}{Ly}
\right)^{1/2} \ \ \ \ \ ({\rm regime \ H}).
\label{eq:Gaussian_mesh}\end{equation}
This regime borders regime C along the line where the antenna size
equals the mesh size, $\lambda = g$, which reads $y =
v^3/(L^3p^4b^2d)$. Regime H also borders regime G along the line
where the antenna is as long as the entire DNA, $\lambda = L$, or
$y = L^3p^2/vb^2d$. Finally, regime H borders another regime,
I, along the vertical line $L = v/p^2$, which corresponds to DNA
within every mesh becoming straight (shorter than the persistence
length). For this regime, we have to use the upper line in formulae
(\ref{eq:blob_size}), thus obtaining
\begin{equation} \frac{J}{J_s} \sim \frac{v d^{1/2}}{Lb^2y^{1/2}} \ \ \ \ \
({\rm regime \ I}). \label{eq:straight_mesh} \end{equation}
This regime borders the saturation regime G along the line
$y=L^2/b^2d$, where $\lambda = L$.
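As a consistency check, the H and I results join continuously along the vertical line $L = v/p^2$; a minimal sketch in units $b = d = 1$, with prefactors dropped and illustrative numbers:

```python
def rate_ratio_globule(y, L, v, p):
    """J/J_s for an antenna threading a DNA globule, units b = d = 1.
    Gaussian mesh (regime H) for L < v/p**2, straight mesh (regime I)
    otherwise; order-one prefactors are dropped."""
    if L < v / p**2:
        return p * (v / (L * y)) ** 0.5      # regime H
    return (v / L) * y ** (-0.5)             # regime I
```

At $L = v/p^2$ both branches give $p^2 y^{-1/2}$, and in either branch the rate falls only as $y^{-1/2}$, slower than the $y^{-2/3}$ of regime C.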
As regards the lower border of the regime I, it corresponds to the
situation when antenna becomes straight, which happens at
$y=v/Lb^2d$. However, as long as $d=1$, which is the case
presented in Figure \ref{fig:diagram_d_equal_1}, this line
coincides with the line $y=v/Lb^2$ below which most proteins are
desorbed and free in solution. That is why at $d=1$, there is no
room for the regime D, in which antenna is straight, but most
proteins adsorbed. Indeed, when $d=1$, then 3D transport is
mostly realized by sliding along the network edges as soon as most
proteins are adsorbed, which precisely means that regime A crosses
over directly to regime I.
As we see, in both the H and I regimes the rate $J$ decreases with
growing $y$, but does so more slowly than in regime C: as
$y^{-1/2}$ instead of $y^{-2/3}$. This happens because adsorbed
proteins are not simply taken out of the process, as in
regime C, but participate in 3D transport through the
network, although this transport is still rather slow.
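The slower decay of the rate in regimes H and I can be checked numerically. The following Python sketch evaluates the scaling forms of Eqs. (\ref{eq:Gaussian_mesh}) and (\ref{eq:straight_mesh}) with all numerical prefactors set to one (scaling analysis does not fix them) and recovers the $y^{-1/2}$ exponent by a finite-difference logarithmic slope; the parameter values are purely illustrative, not taken from any experiment.

```python
import math

def rate_ratio_H(y, p, v, d, L, b):
    # regime H (Gaussian mesh): J/J_s ~ (p/b^2) * sqrt(v*d/(L*y))
    return (p / b**2) * math.sqrt(v * d / (L * y))

def rate_ratio_I(y, v, d, L, b):
    # regime I (straight mesh): J/J_s ~ v*d^(1/2) / (L*b^2*y^(1/2))
    return v * math.sqrt(d) / (L * b**2 * math.sqrt(y))

def log_slope(f, y, eps=1e-6):
    # numerical estimate of d(log f)/d(log y)
    return (math.log(f(y * (1 + eps))) - math.log(f(y))) / math.log(1 + eps)

# illustrative parameters, lengths measured in units of b as in the figures
p, v, L, b, d = 25.0, 1e9, 1e5, 1.0, 1.0
print(log_slope(lambda y: rate_ratio_H(y, p, v, d, L, b), 100.0))  # ≈ -0.5
print(log_slope(lambda y: rate_ratio_I(y, v, d, L, b), 100.0))     # ≈ -0.5
```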
This completes our scaling analysis for the $d=1$ case shown in
Fig. \ref{fig:diagram_d_equal_1}.
\subsection{Diffusion rate along DNA is different from that in surrounding water}\label{sec:d_neq_1}
Let us now relax the $d=1$ condition and examine the cases when
diffusion along DNA is either slower ($d<1$) or faster ($d>1$)
than in surrounding water.
First let us consider the $d<1$ case, when diffusion along DNA is
slower than in the surrounding water ($D_1<D_3$); the
corresponding scaling regimes are summarized in the diagram of Figure
\ref{fig:diagram_d_smaller_1}. Most of the diagram is
topologically similar to that of Figure
\ref{fig:diagram_d_equal_1}, and we do not repeat the corresponding
analysis. Of course, there are now powers of $d$ in all the
equations, but the major qualitative novelty is that there is now
room for regime D, sandwiched between regimes A and I. The
formal reason why this regime now exists in a separate region is
that the line $y=v/Lb^2d$ goes above the line $y=v/Lb^2$. To
understand the more meaningful physical difference, let us recall
that the line $y=v/Lb^2$ marks the cross-over above which most of
the proteins are adsorbed; at $d<1$, however, this is not enough
for the sliding-along-network mechanism to dominate the 3D transport.
Interestingly, the rate in both regimes D and I is given by the
same formula (compare Eqs. (\ref{eq:falling_rate_straight}) and
(\ref{eq:straight_mesh})). This happens because the antenna is
straight in regime D, while in regime I, although the antenna is
not straight, it still consists of a number of essentially
straight pieces, each representing one mesh. The major difference
between regimes D and I, despite the similar scaling of the rate, is
in the mechanism of diffusion: in regime D, proteins diffuse
through the water in the usual manner, while in regime I they
are mostly transported along the network of DNA, with only short
``switches'' on the scale of one mesh size $r$ between sliding
tours. This is why straight pieces of DNA in different meshes
independently add together to yield the same overall formula for
the rate as in regime D.
\begin{figure*}
\centerline{\scalebox{0.6}{\includegraphics{d_smaller_1.eps}}}
\caption{Scaling regimes for the case $d<1$. The major difference
from the $d=1$ case is the presence of regime D, in which the majority
of proteins are adsorbed, but the dominant 3D transport is still
the usual diffusion through the surrounding water, because sliding
along DNA is too slow ($D_1<D_3$). In this figure, as in the
other figures, to make the formulae look shorter, all lengths are
measured in units of $b$, meaning that $L$, $p$, and $v$ stand
for $L/b$, $p/b$, and $v/b^3$.} \label{fig:diagram_d_smaller_1}
\end{figure*}
\begin{figure*}
\centerline{\scalebox{0.6}{\includegraphics{d_larger_1.eps}}}
\caption{Scaling regimes for the case $d>1$. The major new
feature of this diagram compared to the previous ones is the presence
of regimes K and M. In these regimes the majority of proteins are
not adsorbed, but the dominant 3D transport mechanism is still the
sliding of the minority proteins along the DNA network, because it is so
much faster ($D_1>D_3$). We skip J and L in labelling the regimes to
avoid confusion with the rate $J$ and the DNA length $L$. In this figure,
as in the other figures, to make the formulae look shorter,
all lengths are measured in units of $b$, meaning that $L$,
$p$, and $v$ stand for $L/b$, $p/b$, and $v/b^3$.}
\label{fig:diagram_d_larger_1}
\end{figure*}
Let us now switch to the opposite limit and consider the $d>1$
case, for which the results are summarized in Figure
\ref{fig:diagram_d_larger_1}. This diagram is quite similar to
the previously considered ones in Figures
\ref{fig:diagram_d_equal_1} and \ref{fig:diagram_d_smaller_1},
except that there are now two new regimes, labelled K and M (in
the alphabetical labelling of the regimes we skip J and L to avoid
confusion with the rate and the DNA length). These regimes are both below
the line $y=v/Lb^2$, which means that most of the proteins are not
adsorbed. However, since $d>1$, the new physical feature of the
situation is that the adsorbed proteins, although in the
minority, can nevertheless dominate the 3D transport by sliding
along the DNA network, because sliding is now so fast. Thus,
regimes K and M are the ones in which effective diffusion along the
DNA network dominates, so we have to use formula
(\ref{eq:balance_mesh}) for the rate and the antenna size, while for
the concentrations of free and adsorbed proteins we have to use the
upper lines in the formulae (\ref{eq:equilibrium_concentrations}).
In regime K, the local concentration of DNA segments is so high
that every mesh in the DNA network contains an essentially straight
piece of DNA, so we have to use the upper line in formula
(\ref{eq:blob_size}), yielding (after some algebra)
\begin{equation} \frac{J}{J_s} \sim (yd)^{1/2} \ \ \ \ \ ({\rm regime \ K}).
\label{eq:straigh_mesh_large_d} \end{equation}
Similarly, in regime M the mesh of the DNA network is Gaussian, so we
have to use the lower line in equation (\ref{eq:blob_size}), which
produces
\begin{equation} \frac{J}{J_s} \sim p \left( \frac{L y d}{v} \right)^{1/2} \ \
\ \ \ ({\rm regime \ M}). \label{eq:Gauss_mesh_large_d}\end{equation}
Since the majority of proteins are not adsorbed, it is not
surprising that the rate grows with $y$ in both regimes K and M.
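This growth can be illustrated with a short Python sketch of the scaling forms (\ref{eq:straigh_mesh_large_d}) and (\ref{eq:Gauss_mesh_large_d}); as before, all order-one prefactors are dropped and the parameter values are illustrative only.

```python
import math

def rate_ratio_K(y, d):
    # regime K: J/J_s ~ (y*d)^(1/2)
    return math.sqrt(y * d)

def rate_ratio_M(y, d, p, L, v):
    # regime M: J/J_s ~ p * (L*y*d/v)^(1/2)
    return p * math.sqrt(L * y * d / v)

# quadrupling y doubles the rate in both regimes (a y^(1/2) law)
p, L, v, d = 25.0, 1e5, 1e9, 10.0
print(rate_ratio_K(200.0, d) / rate_ratio_K(50.0, d))                    # 2.0
print(rate_ratio_M(200.0, d, p, L, v) / rate_ratio_M(50.0, d, p, L, v))  # 2.0
```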
Notice that the rate is given by the same formula for regimes
A and K (compare Eqs. (\ref{eq:straight}) and
(\ref{eq:straigh_mesh_large_d})). This is similar to the situation
with regimes D and I discussed above: although the rate
is given by the same formula, the underlying diffusion mechanism
is fundamentally different. In both cases, D and I or A and K,
it is possible that although the scaling laws are the same, the
numerical pre-factors are different.
It is also interesting to note that the cross-over between regimes
B and M takes place on the line $y=v^3/p^4L^3b^2d$, where the antenna
length is equal to the DNA length in one mesh: on the B side, the
antenna is shorter than one mesh, and transport to the antenna
must be through water; on the M side, the antenna is longer than
one mesh, and effective transport along the DNA network is at play.
\subsection{Maximal rate}\label{sec:maximal_rate}
To finalize our discussion of scaling regimes, it is reasonable to
ask: what is the maximal possible rate? According to our results,
the maximal rate is achieved on the border between regimes F and
G, that is, at $L \sim (vd)^{2/5}/p^{1/5}$ and at $y \geq
v^{3/5}p^{1/5}/b^2d^{2/5}$. The maximal possible acceleration
compared to the Smoluchowski rate is about $(v p^2 d/b^5)^{1/5}$. It
is interesting to note that the ``optimal strategy'' for achieving
the maximal rate at the minimal possible $y$ requires having the
adsorption strength $y$ right at the level at which the
probability of non-specific adsorption for every protein is about
$1/2$ (on the line $y \sim v/L$).
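For orientation, these optimum estimates are easy to evaluate numerically. The Python sketch below implements the scaling expressions above; the order-one prefactors are omitted, and the parameter values (lengths in units of $b$) are illustrative, not measured.

```python
# Scaling estimates for the maximal rate (order-one prefactors omitted).
def optimal_L(v, d, p):
    # border of regimes F and G: L ~ (v*d)^(2/5) / p^(1/5)
    return (v * d) ** 0.4 / p ** 0.2

def min_adsorption_strength(v, p, b, d):
    # minimal y reaching the plateau: y ~ v^(3/5) p^(1/5) / (b^2 d^(2/5))
    return v ** 0.6 * p ** 0.2 / (b ** 2 * d ** 0.4)

def max_acceleration(v, p, d, b):
    # maximal gain over the Smoluchowski rate: (v p^2 d / b^5)^(1/5)
    return (v * p ** 2 * d / b ** 5) ** 0.2

# illustrative numbers, lengths in units of b as in the figures
v, p, b, d = 1e9, 25.0, 1.0, 1.0
print(f"{optimal_L(v, d, p):.0f}")            # optimal DNA length, ~ 2091
print(f"{max_acceleration(v, p, d, b):.1f}")  # acceleration factor, ~ 228.7
```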
It is interesting that the maximal possible acceleration grows
with the overall volume $v$, which may seem counterintuitive. This
result is due to the fact that the total amount of DNA grows with
increasing $v$, while, according to our assumption, all this DNA
still has just one target.
\section{Discussion}\label{sec:discussion}
\subsection{Single protein view}\label{sec:single_protein_view}
Many of the previous theoretical works
\cite{FrenchGroup,Bruinsma,Mirny,Marko} looked at the situation in
terms of a single protein molecule diffusing to its target. In
this view, one should imagine that a protein molecule is initially
introduced into a random place within the volume $v$, and then
ask what is the first passage time \cite{Redner} needed for
the protein to arrive at the specific target site on DNA. The
mean first passage time $\tau$ can of course be found from our
results for the rate $J$ by inverting the rate and
assuming that on average there is just one protein molecule in the
system at any time: $\left. \tau = 1/J\right|_{c=1/v} $. However,
we want to re-derive all our results directly in terms of $\tau$
in order to build bridges to the works of other authors. The
re-derivation also turns out to be quite illuminating.
First let us consider the case when the DNA is a globule, $L >v^{2/3}/p$ (or
a semi-dilute solution), and look at regimes H, I, K, and M;
unlike the stationary diffusion approach above, in the single-protein
language the derivation for the globular DNA case is actually
simpler. Following \cite{Mirny}, we imagine that the search
process for the given single protein consists of tours of 1D
sliding along DNA followed by diffusion in 3D, followed by 1D
sliding, etc. If in one tour of 1D sliding the protein moves some
distance $\lambda$ along DNA, then it takes a time of about
$\lambda^2/D_1$. The length $\lambda$ here is, of course, our
familiar antenna length, but we will re-derive it here, so we
\emph{do not} assume it known. As regards the tour of 3D
diffusion, it breaks the correlation of the 1D sliding if it carries
the protein over a distance larger than or about the correlation length
in the DNA system, which is $r$, the mesh (or blob) size. Thus, the
duration of one tour of 3D diffusion is about $r^2/D_3$.
The next step of our argument is this. On its way to the target,
the protein will go through a great many adsorption and desorption
cycles; therefore, the ratio of times the protein spends adsorbed and
desorbed should simply follow equilibrium Boltzmann statistics:
\begin{equation} \frac{\lambda^2/D_1}{r^2/D_3} \sim \frac{yLb^2}{v} \ .
\label{eq:ratio_of_times_in_globule} \end{equation}
(Here, we note parenthetically that there is an approximation
underlying our argument: one tour of ``correlated 1D sliding''
does include small 3D excursions of the protein into water, but
they are small in the sense that they do not go beyond the
cross-over correlation distance and, therefore, re-adsorption
after an excursion occurs at a correlated place on DNA. Accordingly,
these excursions make only a marginal contribution to the sliding
time, which is correctly estimated as $\sim \lambda^2/D_1$.)
The final part of the argument is most clearly formulated by
Bruinsma in the work Ref. \cite{Bruinsma}: since subsequent tours
of 1D sliding occur over uncorrelated parts of the DNA, a full search
requires about $L/\lambda$ rounds. Therefore, the total search
time $\tau$ can be written as
\begin{equation} \tau \sim \frac{L}{\lambda} \left[ \frac{\lambda^2}{D_1} +
\frac{r^2}{D_3} \right] \ . \label{eq:full_time_for_globule} \end{equation}
Equations (\ref{eq:ratio_of_times_in_globule}) and
(\ref{eq:full_time_for_globule}) solve the problem for all regimes
of globular DNA if we remember that the mesh (or blob) size $r$ is
given by formula (\ref{eq:blob_size}). Notice that formula
(\ref{eq:ratio_of_times_in_globule}) gives a new interpretation to
the line $y \sim v/Lb^2$ on any of our diagrams, Figs.
\ref{fig:diagram_d_equal_1}, \ref{fig:diagram_d_smaller_1}, and
\ref{fig:diagram_d_larger_1}: for parameters below this line
most of the overall search time is spent in 3D diffusion, while
for a system with parameters above the line the major
time-consuming part is 1D sliding. It is close to this line that the
result of the work Ref. \cite{Mirny} applies and these two times
are of the same order. And let us recall that it is also close to
this line that the maximal possible rate is achieved (see section
\ref{sec:maximal_rate}).
Thus, the four regimes H, I, K, and M result from the two possibilities
for $r$ in Eq. (\ref{eq:blob_size}) (straight or Gaussian DNA
within a mesh) and the two possibilities of either the first or the
second term dominating in formula (\ref{eq:full_time_for_globule}).
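The pair of relations (\ref{eq:ratio_of_times_in_globule}) and (\ref{eq:full_time_for_globule}) is simple enough to put into code. The Python sketch below solves the first relation for the antenna length $\lambda$ and inserts it into the second; prefactors are again omitted, and the parameter values are illustrative.

```python
import math

def antenna_length(r, y, L, b, v, d):
    # Boltzmann balance of tour times: (lambda^2/D1)/(r^2/D3) ~ y*L*b^2/v
    # => lambda ~ r * sqrt(d * y * L * b^2 / v), with d = D1/D3
    return r * math.sqrt(d * y * L * b**2 / v)

def search_time(r, y, L, b, v, D1, D3):
    lam = antenna_length(r, y, L, b, v, D1 / D3)
    # L/lambda uncorrelated rounds, each costing one 1D tour plus one 3D tour
    return (L / lam) * (lam**2 / D1 + r**2 / D3)

# sanity check: the two tour times are equal exactly on the line y ~ v/(L*b^2)
r, L, b, v, D1, D3 = 10.0, 1e5, 1.0, 1e9, 1.0, 1.0
y = v / (L * b**2)
lam = antenna_length(r, y, L, b, v, D1 / D3)
print(math.isclose(lam**2 / D1, r**2 / D3))  # True
```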
Let us now turn to regimes A, B, C, and D, when the DNA is a coil.
In this case, we still essentially rely on equations similar
to (\ref{eq:ratio_of_times_in_globule}) and
(\ref{eq:full_time_for_globule}), except that some effort is now needed
to understand the time of 3D diffusion.
Our argument for this case starts from noticing that there is a
cross-over spatial scale $\xi$, such that correlated sliding takes
place inside the scale $\xi$, while regular 3D diffusion in water
occurs on larger length scales, as it breaks the correlations between
desorption and subsequent re-adsorption. Thus, the time of one
tour of 3D diffusion is the mean first passage time into any one
of the $L/\lambda$ balls of size $\xi$ (here $\lambda$ is the
contour length of DNA accommodated by one ball of size $\xi$;
once again, we \emph{pretend} that we do not know $\xi$ and
$\lambda$, as we will re-derive them in this single-protein
language). The arrival time into one such ball is the
Smoluchowski time (discussed in appendix
\ref{sec:Smoluchowski}) for a target of size $\xi$, which is about
$v/D_3\xi$; the arrival time into any one of the $L/\lambda$ balls
is $L/\lambda$ times smaller: $\sim v\left/ D_3 \xi (L/\lambda)
\right.$.
In order to present our equations for $\lambda$ and the overall search
time $\tau$ in a form similar to Eqs.
(\ref{eq:ratio_of_times_in_globule}) and
(\ref{eq:full_time_for_globule}), we define the distance $r_{\rm eff}$
such that $r_{\rm eff}^2 \sim D_3\left[ v\left/ D_3 \xi
(L/\lambda) \right.\right] = v\lambda/L\xi$, and then we obtain
\begin{equation} \frac{\lambda^2/D_1}{r_{\rm eff}^2/D_3} \sim \frac{y Lb^2}{v}
\ , \label{eq:ratio_of_times_in_coil} \end{equation}
and
\begin{equation} \tau \sim \frac{L}{\lambda} \left[ \frac{\lambda^2}{D_1} +
\frac{r_{\rm eff}^2}{D_3} \right] \ .
\label{eq:full_time_for_coil} \end{equation}
Once again, remembering the two regimes for the relation between
$\lambda$ and $\xi$, formula (\ref{eq:fractality}), and letting
either the first or the second term dominate in the total time
(\ref{eq:full_time_for_coil}), we recover the four regimes A, B, C,
and D.
Finally, the results for all the saturation regimes E, F, and G are
recovered by replacing the antenna length $\lambda$ with $L$ in
equation (\ref{eq:full_time_for_globule}) or
(\ref{eq:full_time_for_coil}), and replacing the equality with an
inequality in conditions (\ref{eq:ratio_of_times_in_globule})
or (\ref{eq:ratio_of_times_in_coil}).
\subsection{Comparison with earlier theoretical works}
Let us now compare our findings with various statements found in
the literature. The most widely known result of the classical
work \cite{BWH} was the prediction, later confirmed experimentally
\cite{BWH2}, that the rate depends on $y$ (controlled by ionic
strength) in a characteristic way, exhibiting a maximum followed
by a plateau. We have recovered this as a possible scenario for
some combinations of parameters (regimes), as shown in Fig.
\ref{fig:rate}. However, we also found a number of additional
features not noticed previously: first, the maximum is in many
cases asymmetric; second, the scaling of the rate dependence on $y$
exhibits rich behavior, with the possibility of crossing over
from $y^{1/2}$ to $y^{1/3}$ on the way to the maximum, or from
$y^{-2/3}$ to $y^{-1/2}$ on the way down; third, there is a
possibility of very strong deceleration, compared to the Smoluchowski
rate, at large adsorption strength $y$. All these
features have a simple qualitative explanation: the rate grows
because increasing $y$ lengthens the antenna; the rate decays when
most of the proteins are fruitlessly adsorbed far from the target (or,
in other language, every protein spends most of its time adsorbed
far away); the rate saturates and comes to the plateau when the
antenna becomes as long as the DNA itself. All of these features
are direct consequences of the fractal properties of DNA, in
either the coil or the globule state.
The work Ref. \cite{Bruinsma} is a review of a variety of
topics related to protein-DNA interactions, and the issue of the
search rate is considered only briefly. In this context, the work
Ref. \cite{Bruinsma} provides an important insight, used above in
presenting formula (\ref{eq:full_time_for_globule}): subsequent
rounds of 1D search are performed on uncorrelated
pieces of DNA. In other words, there exists a cross-over from
mostly correlated events, earlier combined into one ``correlated
sliding length $\lambda$'', to mostly uncorrelated ones. In
accord with this insight, the search time is linear in the DNA length
in regime I.
In the paper Ref. \cite{Marko} the antenna length was explicitly
identified with the sliding distance (that is, with the bare
sliding distance, denoted earlier in this paper as $\ell_{\rm
slide} \sim b \sqrt{yd}$), and then essentially formula
(\ref{eq:full_time_for_globule}) was used to determine the search
time. This approach is perfectly valid as long as the antenna is
straight, $\lambda = \xi$ and $\lambda = \ell_{\rm slide}$; it
predicts the symmetric maximum of the $J(y)$ dependence, but it should
not be used when the DNA antenna is coiled. For globular DNA, the
approximation of a straight antenna, implicit in the identification
of $\lambda$ with the bare $\ell_{\rm slide}$, is valid for the right
end of regime A and for regime D, while the other
globular regimes of course require going beyond this approximation.
The main emphasis of the article Ref. \cite{Mirny} is on the role
of a non-uniform DNA sequence, which may lead to the
non-specific adsorption strength $y$, the 1D diffusion coefficient
$D_1$, or both being ``noisy'' functions of the coordinate on DNA. In
their review of the uniform homopolymer case, Ref. \cite{Mirny}
employs a formula equivalent to our Eqs.
(\ref{eq:full_time_for_globule}) or (\ref{eq:full_time_for_coil}),
but instead of a condition like
(\ref{eq:ratio_of_times_in_globule}) or
(\ref{eq:ratio_of_times_in_coil}) they minimize the overall time with
respect to $\lambda$. As we pointed out before, this approach is
valid within the cross-over corridor around the line $y \sim
v/Lb^2$. In general, the idea of applying a variational principle is
very interesting. It can be generalized beyond the
above-mentioned corridor if one minimizes the overall dissipation, which
is equivalent to energy minimization in terms of the electrostatic
analogy, as we show in appendix \ref{sec:electrostatic_analogy}.
Of course, minimization of dissipation is equivalent to the
diffusion equation as long as diffusion is linear. Alternatively,
one can also think, as emphasized in the work Ref. \cite{Marko},
that the search mechanism was subject to optimization by biological
evolution. To employ this idea, it is obviously necessary first
to understand the possible search scenarios, or regimes, allowed
by physics; then, at the next stage, one could attempt
optimization with respect to the parameters, such as DNA packing
properties etc., which could be subject to selective pressure in
evolution.
BWH \cite{BWH} and some subsequent authors treated the DNA solution in
terms of domains. Although this term was never particularly
clearly defined, it can be understood as the space regions more or
less occupied by separate DNA coils in solution. With this
understanding, the terminology of domains can be used as long as the
DNA coil fits into the volume $v$ or, in other words, as long as the
DNA solution is dilute, such that DNA coils do not overlap; it is
thus better suited for an \emph{in vitro} experiment. The terminology
of DNA domains becomes unsatisfactory at larger DNA concentrations.
The work Ref. \cite{FrenchGroup} took the stochastic approach,
which means they did not look at stationary diffusion, but
rather at the trajectory of a single protein. As we pointed out
before, these approaches must be equivalent as long as one is only
interested in the average arrival time of the first protein.
The important contribution of the work Ref.
\cite{FrenchGroup} was to point out that previous theories crucially
neglected the correlations between the desorption point of a protein
and its re-adsorption point. It is because of this crucial and not
always justified approximation that previous theories appear to have
overlooked the mechanism of correlated re-adsorption, which is
entirely due to the DNA being a polymer and a fractal coil.
Correlated re-adsorption was anticipated in the experimental works
\cite{Halford}.
\subsection{Experimental situation}
Most of the experiments in the field (see the review \cite{Halford}
and references therein) involve various ingenious arrangements of
two or more target sites on linear or ring DNA and observation
of the resulting enzyme processivity. In the light of our theory,
it would be interesting to revive the earlier BWH-style
experiments and to look carefully at the theoretically predicted
multiple features of the $J(y)$ curves, such as the asymmetric maximum,
the various scaling regions, the possible deceleration, etc.
The seeming difficulty is that all our ``interesting'' regimes
start when $y > p^2/b^2d$, when the antenna is longer than the DNA
persistence length. Since the persistence length of dsDNA, $p$, is
fairly large, about 150 base pairs under usual ionic conditions
(say, $\left[ {\rm Na} \right] = 0.2 \ {\rm M}$), and assuming $b$
is about the diameter of the double helix, we get $p/b \approx 25$
for dsDNA. Unless $d$ is large, this seems to require fairly
large non-specific adsorption energies, about $6k_BT$ to $10k_BT$,
which is a lot but not impossible. In any case, we would like to
emphasize that the maximum of $J(y)$ \emph{has} been observed
\cite{BWH2}, which, according to our theory, could have happened
only at $y > p^2/b^2d$, thus assuring that this range is within
reach.
One of the most critical and poorly known parameters of our theory
is $d= D_1/D_3$. Of course, $D_3$, the diffusion coefficient of the
protein in water, is known pretty well, and can be simply
estimated from the protein size using the Stokes-Einstein relation. The
difficult part is $D_1$, which involves the friction of the
protein against DNA in the solvent. It is clear that slow
diffusion along DNA makes the entire mechanism of 1D sliding
less efficient, and indeed decreasing $d$ systematically reduces
the rate we obtain in almost all regimes. There are only two
exceptions to this. One is trivial: the pure Smoluchowski
process, not involving any sliding and realized only when there is
no non-specific adsorption on DNA ($y \leq 1$). The other exception
is regimes E and F, in which the entire DNA, rod-like or
coil-like, serves as an antenna, which means 3D transport to the
DNA is the slowest part, the bottleneck of the whole process, so
that reducing $d$ does no damage, except, of course,
pushing away the corresponding regime boundaries.
Experimental data on the 1D diffusion of proteins along DNA are
scarce and not completely clear \cite{Eber}.
An interesting spin on the whole issue of 1D transport is added by
proteins, such as, e.g., helicases, which, provided with a proper
energy supply, can move actively. For us, in the context of our
present theory, active movement is likely to correspond to a great
increase of $D_1$, or $d$, either for the actively moving proteins
themselves, or for passively diffusing proteins which might
receive a push or pull from active ones. At first glance, this
sounds like a paradoxical statement, because active motion is not
diffusion, in the sense that displacement is linear in time.
However, this is only true up to certain time and length scales.
On larger scales, we can reasonably assume that it would be
diffusion again, albeit with a vastly increased diffusion
coefficient. Indeed, first, there is always a probability of
thermally activated detachment from DNA, and, second, given that
the two strands in DNA are antiparallel, re-adsorption is likely
to lead to a random choice of the direction of further sliding. These
two ingredients surely correspond to diffusion, in the sense that
displacement goes as $t^{1/2}$. Of course, this entire issue of
active transport requires further investigation, which naturally
brings us to the conclusion of this paper.
\section{Conclusion}
Many questions remain open: the role of concurrent protein
species, the role of a non-uniform DNA sequence, the role of DNA
motion \cite{Berg_moving_DNA}, the probability of unusually long
search times, the search on single-stranded DNA or RNA, the role
of superhelical structures, the dependence of the rate (or search
time) on the specific positions of one or more targets on DNA, the
related issue of enzyme processivity, and the role of excluded volume
for very long DNA and the corresponding loop-erasing walks
\cite{loop_erasing_walk}. All of these questions invite
theoretical work.
To conclude, we have analyzed all the scaling regimes of the
diffusion-controlled search by proteins for a specific target
site located on DNA. We found many regimes. The major idea can
be formulated in terms of the cross-over between 1D sliding along
DNA up to a certain length scale and 3D diffusion in the surrounding
space on larger length scales. Overall, this
idea seems to be in qualitative agreement with the intuition expressed in
experimental papers. In addition, we have made several
theoretical predictions which are verifiable and (even more
importantly) falsifiable by experiment. We are looking
forward to such experiments.
\begin{acknowledgments} We gratefully acknowledge very useful discussion with
J.-L. Sikorav. The work of AYG was supported in part by the MRSEC
Program of the National Science Foundation under Award Number
DMR-0212302. \end{acknowledgments}
\section{Introduction}
\thispagestyle{plain}
Single channel measurements offer a unique challenge. When setting up such an experiment, there is one more characteristic to be fixed in advance compared to multichannel measurements: the distribution of measurement time.
Consider a single channel experiment in its design phase. The spectrum to be recorded by the experiment depends on several parameters, and the experimenters are often interested in one of them. Let the mechanical design of the experiment be fixed, and let the measurement points be chosen. It is thus known how to measure and where to measure. However, it is unknown for how long to measure at the particular measurement points; we call this the distribution of measurement time.
The task is then to distribute the measurement time over the measurement points in the way that minimizes the standard deviation of the parameter in which the experimenters are interested. To our knowledge, there are no studies available concerning this aspect. The point is that we cannot optimize the measurement times at all the points simultaneously (it is numerically not feasible to minimize a function of tens or hundreds of parameters). Instead, we can fix the measurement time at the points one by one, approaching the optimal distribution of measurement time iteratively.
\section{Method}
Let $\{E_i\}$, $i = 1\,...\,n$, be a fixed set of measurement points, where $n$ is their number, and let $\tau_{\rm tot}$ be the total measurement time. Further, let $\{T_i\}$ be the initial time distribution, chosen intuitively, e.g. a uniform distribution, meaning that equal time is spent at each point. Let the initial time distribution result in an initial standard deviation of the selected fit parameter, $\sigma_{\rm init} = \sigma(\{T_i\})$, obtained by simulations (creation of pseudo-experimental spectra and their evaluation).
Now, we minimize the standard deviation of the selected parameter by varying the time $T_1$ at the first measurement point only.
(This means that we simulate spectra corresponding to various $T_1$ and evaluate them to find the value of $T_1$ giving the minimum standard deviation $\sigma$ of the selected fit parameter.)
The measurement times at all the remaining points are scaled by the same factor to keep the total measurement time equal to $\tau_{\rm tot}$. As a result we obtain the time distribution $\{T^{(1)}_i\}$ and the corresponding standard deviation of the selected fit parameter, $\sigma^{(1)} = \sigma(\{T^{(1)}_i\})$. So, the measurement time at the first point is $T^{(1)}_1$, and the measurement time at each of the remaining points is equal to
\begin{equation}
T^{(1)}_i = T_i \times \frac{\tau_{\rm tot} - T^{(1)}_1}{\tau_{\rm tot} - T_1} \quad ,
\label{eqn:sc_factor}
\end{equation}
$i = 2\,...\,n$. Then we do the same for all the other measurement points (always starting from the $\{T_i\}$ distribution). We thus obtain $n$ time distributions $\{T^{(k)}_i\}\, , \, k = 1\, ...\, n$, and the corresponding standard deviations $\sigma^{(k)}$.
(For the sake of clarity, we repeat that the $\sigma^{(k)}$ are standard deviations of the {\it same} parameter, but obtained from spectra corresponding to the different time distributions $T_i^{(k)}$.)
Now we construct the best final time distribution from the set $\{T^{(k)}_i\}$ of partial ones. There are several possibilities for how to do this. Empirically, it turned out that a good way is the weighted sum
\begin{equation}
T'_i = {1 \over n} \sum_{k=1}^n \omega_k T^{(k)}_i \quad ,
\label{eqn:time_dist}
\end{equation}
where the weight factors $\omega_k$ may be estimated as
\begin{equation}
\omega_k = \left( {\sigma_{\rm init} - \sigma^{(k)} \over {1\over n} \sum \sigma^{(k)}}\right)^s \quad ,
\label{eqn:weight_fact}
\end{equation}
where $s$ is chosen to minimize $\sigma(\{T'_i\})$, i.e., we simulate spectra corresponding to various $s$ and evaluate them to find the value of $s$ minimizing the standard deviation $\sigma$.
Having the new time distribution $\{T'_i\}$, we renormalize it with respect to $\tau_{\rm tot}$. (Note that the weight factors in eq. (\ref{eqn:weight_fact}) do not preserve $\sum T'_i = \tau_{\rm tot}$.)
Then we use it instead of the original $\{T_i\}$ to repeat the whole process, iteratively.
Based on our extensive simulations, this seems to be the best way to combine the partial time distributions $\{T^{(k)}_i\}$, guaranteeing convergence, numerical stability, and reasonable speed. The partial time distributions that significantly improved the standard deviation of the selected parameter are favored, and the convergence speed is enhanced by the choice of the $s$ factor. (Note that eq. (\ref{eqn:weight_fact}) is well defined because $\sigma_{\rm init} \geq \sigma^{(k)}$ for any $k$.)
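The iteration just described can be sketched in a few lines of Python. In this sketch the objective $\sigma(\{T_i\})$, which in a real application comes from simulating and fitting pseudo-experimental spectra, is an arbitrary user-supplied callable (the toy example below is a hypothetical stand-in, not the real fit), and a plain grid scan stands in for the Minuit2 simplex minimizer; the scan grid is assumed to contain the current value $T_k$, which guarantees $\sigma^{(k)} \leq \sigma_{\rm init}$ and keeps the weights of eq. (\ref{eqn:weight_fact}) well defined.

```python
import math

def rescale(T, tau_tot):
    # renormalize so that the total measurement time is tau_tot
    s = sum(T)
    return [t * tau_tot / s for t in T]

def optimize_point(sigma, T, k, tau_tot, grid):
    # vary T[k] over the grid; scale the other points to keep sum(T) = tau_tot
    best_val, best_T = None, None
    for tk in grid:
        factor = (tau_tot - tk) / (tau_tot - T[k])
        Tk = [tk if i == k else T[i] * factor for i in range(len(T))]
        val = sigma(Tk)
        if best_val is None or val < best_val:
            best_val, best_T = val, Tk
    return best_val, best_T

def iterate(sigma, T, tau_tot, grid, s_exponents=(0.5, 1.0, 2.0)):
    sigma_init = sigma(T)
    partial = [optimize_point(sigma, T, k, tau_tot, grid) for k in range(len(T))]
    mean_sigma = sum(val for val, _ in partial) / len(partial)
    best_T, best_val = T, sigma_init
    for s in s_exponents:  # pick the exponent s that helps most
        w = [((sigma_init - val) / mean_sigma) ** s for val, _ in partial]
        Tp = [sum(w[k] * partial[k][1][i] for k in range(len(T))) / len(T)
              for i in range(len(T))]
        Tp = rescale(Tp, tau_tot)  # the weights do not preserve the total time
        val = sigma(Tp)
        if val < best_val:
            best_val, best_T = val, Tp
    return best_T, best_val

# toy objective: sigma^2 = sum(c_i^2 / T_i), a hypothetical stand-in for the
# standard deviation obtained from simulated spectra
c = (1.0, 2.0, 3.0)
sigma = lambda T: math.sqrt(sum(ci * ci / ti for ci, ti in zip(c, T)))
T0 = [10.0, 10.0, 10.0]
T1, val = iterate(sigma, T0, 30.0, grid=[2.0, 5.0, 10.0, 15.0, 20.0])
print(val < sigma(T0))  # True: one iteration already improves sigma
```

In practice one would repeat `iterate` until $\sigma$ stops improving, exactly as in the scheme above.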
We also tried several other ways to merge the partial time distributions $\{T^{(k)}_i\}$. In particular, the following choice of the final time distribution:
\begin{equation}
T'_i = T_i^{(i)} \times \frac{\sum_i T_i}{\sum_i T_i^{(i)}}
\label{eqn:time_dist2}
\end{equation}
looked feasible and promised fast convergence to the optimal time distribution. Nevertheless, it proved to be highly numerically unstable and practically always oscillated. Finally, if numerical stability is favored, we found that setting all the $\omega_k$ in eq. (\ref{eqn:time_dist}) equal to one is a good choice; however, the convergence is then about 5 times slower. In practice, we implement the method using a simplex minimizer provided by the Minuit2 library wrapped inside the ROOT framework \cite{root}.
\section{Examples}
\begin{figure}
\begin{center}
\includegraphics{fig1_graf_gauss_pos.eps}
\end{center}
\caption{An example of the optimal distribution of measurement time. The time distribution minimizes the standard deviation of the Gaussian line position when fitting four parameters: amplitude, background, position, and width. The initial parameter values are given in the text. For convenience, the Gaussian line is shown as well.}
\label{fig:pos}
\end{figure}
To test the method we chose an idealized Gaussian line on a constant background, as measured by a differential spectrometer:
\begin{equation}
G(E|A,B,E_0,\sigma) = A \cdot \exp\left(\frac{-(E - E_0)^2}{2\sigma^2}\right) + B \quad .
\label{eqn:eqn_gauss}
\end{equation}
The line is described by four fit parameters: amplitude $A$, background $B$, position $E_0$, and width $\sigma$. The initial parameter values were chosen as follows: an amplitude of 10 Hz, a background of 10 Hz, a line position of 25 eV, and a width (sigma) of 5 eV. The distribution of measurement points was chosen to be a uniform one from 0 eV up to 50 eV with a step of 1 eV.
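As a quick illustration, the model of eq. (\ref{eqn:eqn_gauss}) with the initial parameter values and the uniform point grid above can be written as follows (a sketch; the function and variable names are our own):

```python
import numpy as np

def gauss_line(E, A=10.0, B=10.0, E0=25.0, sigma=5.0):
    """Gaussian line on a constant background; rates in Hz, energies in eV."""
    return A * np.exp(-(E - E0) ** 2 / (2.0 * sigma ** 2)) + B

# Uniform grid of measurement points: 0 eV up to 50 eV with a 1 eV step.
E = np.arange(0.0, 51.0, 1.0)
rates = gauss_line(E)
```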
\begin{figure}
\begin{center}
\includegraphics{fig2_graf_gauss_sig.eps}
\end{center}
\caption{An example of the optimal distribution of measurement time. The time distribution minimizes the standard deviation of Gaussian line width fitting four parameters: amplitude, background, position, and width. The initial parameter values are given in text. For convenience the Gaussian line is shown as well.}
\label{fig:sig}
\end{figure}
We studied two cases: the optimal time distribution with respect to the standard deviation of the Gaussian line position, and that with respect to the line width.
As for the line {\it position}, the resulting optimal time distribution is given in fig. \ref{fig:pos}. Surprisingly, it turned out that most of the relevant information in this case is concentrated in only four points. The standard deviation of the line position improved by a factor of 2.0 with respect to the uniform time distribution.
When optimizing the line {\it width} standard deviation, we arrived at the time distribution depicted in fig. \ref{fig:sig}. Here, the corresponding standard deviation improved by a factor of 1.4. (In both cases, the optimization procedure needed 100 iterations.)
As the second example, we have chosen an electron spectroscopy experiment to determine the neutrino mass, in particular, the KATRIN experiment \cite{katrin_loi, katrin_des} aiming at a neutrino mass sensitivity of $0.2\,$eV/$c^2$. The necessity of the single channel measurement originates from an intrinsic property of the applied spectrometer type, which is the only one simultaneously exhibiting the required energy resolution and luminosity. There are four fit parameters in the experiment: the neutrino mass, the beta spectrum endpoint, the amplitude, and the background.
The integrated beta spectrum proposed to be measured by the KATRIN experiment is given in \cite{katrin_des}, as well as the standard distribution of the measurement points, which is non-equidistant.
\begin{figure}
\begin{center}
\includegraphics{fig3_graf_mnu.eps}
\end{center}
\caption{An example of the optimal distribution of measurement time, which could be used by the KATRIN experiment. The time distribution minimizes the standard deviation of the neutrino mass when fitting four parameters: amplitude, background, the tritium $\beta$-spectrum endpoint, and the neutrino mass. Note that an initial free neutrino mass of 0 eV and an initial free endpoint of 18\,575 eV were chosen to run the simulations.}
\label{fig:mnu}
\end{figure}
First, let the neutrino mass be the selected parameter. Then, starting with the uniform time distribution and optimizing the time distribution in 16 iterations, the simulation resulted in the optimal time distribution shown in fig. \ref{fig:mnu}. The standard deviation of the neutrino mass was improved by a factor of $1.18$ compared to the uniform time distribution. The same improvement could be achieved by prolonging the total measurement time by a factor of $1.9$. (The standard deviation of the neutrino mass scales with the fourth root of the total measurement time, since the neutrino mass {\it squared} enters the form of the tritium beta spectrum.)
\begin{figure}
\begin{center}
\includegraphics{fig4_graf_q.eps}
\end{center}
\caption{An optimal time distribution with respect to the minimal standard deviation of the endpoint energy assuming a fixed and known neutrino mass, and fitting amplitude, background, and the endpoint energy. Compare with the time distribution in fig. \ref{fig:mnu} optimized with respect to the standard deviation of the neutrino mass.}
\label{fig:q}
\end{figure}
\section{Other applications}
The method can also be effectively used to show which energy region of the measured spectrum most affects the standard deviation of a selected parameter. As an example, we focused on the beta spectrum endpoint. Since the beta spectrum endpoint and the neutrino mass are strongly correlated \cite{katrin_loi, katrin_des}, we fixed the neutrino mass to $0\,$eV/$c^2$. Then, the optimal time distribution minimizing the standard deviation of the endpoint was derived; it is shown in fig. \ref{fig:q}.
The optimal distribution of measurement time was found following the method described above, using the same standard distribution of the measurement points as in the previous example. Here, we focused on the standard deviation of the tritium endpoint, not that of the neutrino mass. Again, starting with a uniform time distribution, 10 iterations were performed. The method assigns longer measurement times to the measurement points that are sensitive to the beta spectrum endpoint.
We would like to note that the same method can be used to suppress systematics, i.e., to find the region of the spectrum that is most sensitive to systematics and then to exclude that region from the set of measurement points.
\section{Discussion}
A method to distribute the measurement time of single channel measurements among the measurement points so as to minimize the standard deviation of a selected fit parameter was presented and demonstrated on two examples, where it performed as desired. Moreover, the method proved useful for showing which energy region of a measured spectrum is sensitive to a selected parameter.
In place of a summary, we would like to emphasize that the method is purely mathematical, and so are its results. Although the method assumes a fixed distribution of measurement points, this assumption should be reconsidered if necessary. E.g., if the optimal distribution of measurement time aggregates most of the measurement time into a few measurement points, then a denser distribution of points in that measurement region is appropriate. This actually happened in the KATRIN example. The case of measurement points with negligible measurement time can be treated in a similar way. Extraordinary care should be taken in cases where regions sensitive to the selected parameter overlap regions sensitive to systematics.
\begin{ack}
This work was partly supported by the Grant Agency of the Czech Republic under contract No.\,202/06/0002, and by the Ministry of Education, Youth, and Sports of the Czech Republic under contracts No.\,LA 318, and LC 07050.
\end{ack}
\section{Introduction}
\label{sec:intro}
Designing algorithms for solving ill-posed imaging inverse problems that both deliver high precision reconstructions and scale to large data volumes is a significant challenge of imaging sciences.
A problem where such considerations are paramount is aperture synthesis by Radio Interferometry (RI) in astronomy, an iconic example of linear inverse Fourier imaging problems \cite{wiaux2009compressed}.
In this context, the acquisition of an unknown target radio image $\overline{x}\in\mathbb{R}^N$ from observed data $z\in\mathbb{C}^M$ follows the observation model
\begin{equation}
z = H\overline{x} + e,
\label{eq:inv_pb}
\end{equation}
where $e\sim\mathcal{N}(0,\sigma^2)$ is a realisation of complex Gaussian random noise, and where the linear measurement operator $H\colon \mathbb{R}^N \to \mathbb{C}^M$ represents (in its simplest form, see \cite{terris2022image} for details) a non-uniform Fourier transform.
More precisely, $H = UFZ$, where $Z$ is a zero-padding operator, $F$ denotes the 2D discrete Fourier transform, and $U$ is an interpolation matrix mapping the discrete Fourier coefficients onto the measured values of the continuous Fourier transform. The set of sampled values is characterised by a so-called sampling pattern. With the advent of a whole new generation of radio telescope arrays aiming to probe the sky with much higher resolution and sensitivity, such as the flagship MeerKAT (\href{https://www.sarao.ac.za/}{sarao.ac.za}) and SKA (\href{https://www.skatelescope.org/}{skatelescope.org}), extreme image dimensions and data volumes are to be expected, along with dynamic ranges (\emph{i.e.}~the ratio between the brightest and faintest sources in $\overline{x}$) spanning multiple orders of magnitude.
Deep Neural Networks (DNNs) have shown outstanding results in solving linear inverse imaging problems. On the one hand, end-to-end approaches provide extremely fast reconstruction. They are widespread in other imaging communities \cite{ahmad2020plug, muckley2021results, liang2021swinir}, but their use remains limited in RI imaging \cite{terris2022image, gheller2021convolutional, connor2022deep}. This is mainly due to the lack of ground-truth datasets, combined with a wide intrinsic variability of the RI observation model, leading to generalisation issues. While unfolded architectures \cite{adler2018learned, banert2020data} provide necessary robustness to variations of the measurement setting, embedding large-scale measurement operators in DNN architectures is impractical, both for training and inference. On the other hand, Plug-and-Play (PnP) algorithms \cite{venkatakrishnan2013plug, zhang2021plug, pesquet2021learning}, substituting learned DNN denoisers in lieu of proximal regularisation operators in optimisation algorithms,
have shown outstanding performance and robustness, including for high-dynamic range imaging \cite{terris2022image,dabbech2022first}. However, PnP approaches remain highly iterative and will still struggle to scale to the image sizes and data volumes of interest in applications such as RI imaging.
In this paper, a new residual DNN series approach to large-scale high-dynamic range computational imaging is devised, where the reconstructed image is built as a sum of a few residual images progressively increasing the dynamic range, each estimated iteratively as the output of a DNN taking the back-projected data residual of the previous iteration as input. We show on preliminary simulations for RI imaging that such a strategy may make it possible to achieve an imaging quality matching that delivered by the most advanced optimisation and PnP algorithms, while being orders of magnitude faster thanks to its very limited iterative nature.
\section{Methodology}
\label{sec:approach}
\subsection{Proposed approach}
\label{subsec:proposedapproach}
We propose to train a series of DNNs $(G_{k})_{1\leq k \leq K}$ yielding a sequence of estimated reconstructions $(x_k)_{1\leq k \leq K}$ of $\overline{x}$, following the update rule
\begin{equation}
(\forall 1\leq k \leq K) \quad
\left\{
\begin{aligned}
r_{k-1} &= \gamma H^*(z-Hx_{k-1}),\\ x_{k} &= x_{k-1}+G_k(r_{k-1}),
\end{aligned}
\right.
\label{eq:sum_rules}
\end{equation}
where $x_0=0$ and where $(r_{k-1})_{1\leq k \leq K}$ corresponds to the data residuals back-projected in the image domain.
In \eqref{eq:sum_rules},
we set $\gamma = 1/\operatorname{max}(H^*H \delta)$ to ensure the normalisation of back-projected data residuals at the input of the networks ($\delta$ being the Dirac delta).
The last element of the sequence
and final reconstruction
$x_K$ also reads
\begin{equation}
x_K = \sum_{k=1}^K G_k(r_{k-1}),
\label{eq:sum_interp}
\end{equation}
hence the name ``deep network series''. In practice, we aim at keeping $K$ as small as possible. We also note that the first iteration is equivalent to a standard end-to-end learning approach, as $x_1=G_1(r_0)$ with $r_0=H^*z$ the back-projected data. The approach is summarised in Figure~\ref{fig:res}.
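A minimal sketch of the update rule \eqref{eq:sum_rules} is given below. For illustration only: $H$ is a small dense matrix standing in for the actual non-uniform Fourier operator, the trained networks $(G_k)$ are arbitrary callables, and the function name and the Dirac-delta placement are our own choices.

```python
import numpy as np

def network_series_reconstruct(z, H, networks):
    """Deep network series update: x_0 = 0 and
    x_k = x_{k-1} + G_k(gamma * H^*(z - H x_{k-1}))."""
    N = H.shape[1]
    # gamma normalises the back-projected residual: 1 / max(H^* H delta),
    # with delta the Dirac delta (image with a single unit pixel).
    delta = np.zeros(N)
    delta[N // 2] = 1.0
    gamma = 1.0 / np.max(np.real(H.conj().T @ (H @ delta)))
    x = np.zeros(N)
    for G_k in networks:
        r = gamma * np.real(H.conj().T @ (z - H @ x))  # back-projected residual
        x = x + G_k(r)                                  # add predicted image residual
    return x
```

With $K$ identity networks and $H$ the identity, a single pass already reproduces the data, which mirrors the remark that $x_1 = G_1(r_0)$ is a standard end-to-end reconstruction.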
\begin{figure}[t]
\includegraphics[width=.48\textwidth, trim=0cm 4cm 0cm 4cm, clip]{images/series_summary}
\vspace{-2.25em}
\caption{\small Summary of the proposed approach. Top row shows the successive image estimates $x_k$ for $1\leq k \leq K$ while the bottom row shows the back-projected data residuals $r_{k-1}$, input to $G_k$, and the predicted image residuals $x_k-x_{k-1}$, output of $G_k$. Each network is shown in a different colour to stress that it is trained separately. In blue, the computation of $r_{k-1}$. Note that $x_0=0$ and $r_0=H^*z$.}
\vspace{-1.25em}
\label{fig:res}
\end{figure}
In order to train the sequence of DNNs $(G_k)_{1\leq k \leq K}$, we start from a dataset of $L$ synthetic ground-truth images $(\overline{x}_{l})_{1\leq l \leq L}$ from which we simulate measurements $(z_{l})_{1\leq l \leq L}$ as per \eqref{eq:inv_pb}. Then, for every $1\leq k \leq K$, we generate a dataset of residuals $(r_{k-1,l})_{1\leq l\leq L}$ as per \eqref{eq:sum_rules} and train $G_k$ to
\begin{equation}
\underset{\theta_k}{\text{minimise}}\,\, \frac{1}{L}\sum_{l=1}^{L} \|G_{k,\theta_k}(r_{k-1,l})+x_{k-1,l}-\overline{x}_l\|_1,
\end{equation}
where $\theta_k$ denotes the learnable parameters of $G_k$.
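The sequential training procedure first generates, for each $k$, the dataset of back-projected residuals $(r_{k-1,l})$ by running the already-trained networks $G_1,\dots,G_{k-1}$, and then minimises the $\ell_1$ objective. A schematic numpy version is sketched below (the function names are our own; a real implementation would use a deep learning framework and the Adam optimiser):

```python
import numpy as np

def residual_dataset(k, images, data, H, gamma, trained_nets):
    """Inputs for training G_k: run the first k-1 trained networks on each
    training pair (x_l, z_l) and return the back-projected data residuals
    r_{k-1,l} together with the current estimates x_{k-1,l}."""
    residuals, estimates = [], []
    for x_true, z in zip(images, data):
        x = np.zeros_like(x_true)
        r = gamma * np.real(H.conj().T @ (z - H @ x))
        for G in trained_nets[: k - 1]:
            x = x + G(r)
            r = gamma * np.real(H.conj().T @ (z - H @ x))
        residuals.append(r)
        estimates.append(x)
    return residuals, estimates

def l1_loss(G_out, x_prev, x_true):
    """Training objective for G_k: || G_k(r) + x_{k-1} - x_true ||_1."""
    return np.abs(G_out + x_prev - x_true).sum()
```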
\subsection{Relation to unfolding and PnP}
The proposed approach is reminiscent of both unfolded and PnP algorithms based on the Forward-Backward optimisation algorithm \cite{bauschke2017convex}, as the back-projected data residuals in \eqref{eq:sum_rules} can be written as $r_{k-1}=-\gamma\nabla f(x_{k-1})$ for $f(\cdot) = \frac{1}{2}\|H\cdot-z\|_2^2$. There are however important differences with such schemes. Firstly, instead of applying $G_k$ to $\operatorname{Id}-\gamma \nabla f$, $G_k$ is applied to the residual $-\gamma \nabla f$ only. Secondly, unfolded architectures are trained in an end-to-end fashion, whereas our approach proposes to train $G_k$ sequentially, with a dataset of back-projected data residuals $(r_{k-1,l})_{1\leq l\leq L}$ as input. We stress that this procedure,
involving the training of $K$ DNNs, is dictated by the large-scale nature of $H$, precluding its direct embedding into a DNN architecture. Thirdly, DNNs in PnP algorithms
are generally the same across iterations and trained as simple denoisers.
In its series expression \eqref{eq:sum_interp}, our approach is reminiscent of Neumann networks \cite{gilton2019neumann}. But the latter are unfolded architectures, embedding a single DNN regulariser across iterations.
\section{Experiments}
\label{sec:experiments}
\subsection{Training RI dataset \& training}
\label{dataset}
To circumvent the absence of an appropriate dataset of target radio images, we rely on a synthetic dataset from \cite{terris2022image}, where noisy, low-dynamic range optical images have been processed through denoising and exponentiation procedures to create a dataset of $L=2235$ clean images $(\overline{x}_{l})_{1\leq l \leq L}$, of size $N = 512\times 512$, and with nominal dynamic range slightly above $10^4$. The full dataset is normalised so that the maximum pixel value across all images is equal to 1.
For each image $\overline{x}_l$ we create data $z_l = H_l\overline{x}_l+e_l$, with a measurement operator $H_l$ resulting from simulating a sampling pattern of the MeerKAT telescope, with fixed integration and observation times, and randomly chosen pointing direction, leading to data vectors of size $M = 1.6\times 10^6$. The measurement noise $e_l$ is generated with fixed standard deviation set to ensure a back-projected noise level $\sigma/ \sqrt{2\|H_l^*H_l\|_2}\lesssim 10^{-4}$, thus leading to a target dynamic range of the order of, or slightly above, $10^4$. Following standard practice in the field, the data $z_l$, operator $H_l$, and noise $e_l$ are weighted component-wise by the inverse square-root of the sampling density at each point to mitigate the side-lobes of the point spread function $H_l^*H_l\delta$ (see Figure~\ref{fig:step_visualisation}.c).
We train the sequence of networks $(G_k)_{1\leq k \leq K}$ following the procedure detailed in Section~\ref{subsec:proposedapproach}. Each network $G_k$ has the same architecture, namely a UNet \cite{zhang2021plug} in which convolutional blocks were replaced by WDSR blocks \cite{yu2018wide}. The networks also include an initial normalisation layer, ensuring that any input image effectively has zero mean and unit standard deviation, mitigating potential generalisation issues related to variations of statistics between the training samples and any test sample. The output image is de-normalised as part of a last layer, using the same mean and standard deviation as computed at the input. Each network is trained with the Adam algorithm for approximately 200 epochs. In practice, we noticed that the reconstruction quality saturates for $K>4$ on the training dataset, and therefore chose $K=4$ for all our experiments.
\subsection{Test RI dataset \& simulation set up}
We validate the proposed methodology on a test dataset of 3 radio images (3c353, Hercules A, and Centaurus A, see \cite{terris2022image} for more details) of size $N = 512\times 512$, with peak value normalised to 1, and dynamic range slightly above $10^4$. For each of them, 5 RI data vectors are simulated from 5 MeerKAT sampling patterns generated as in Section \ref{dataset}, using the same integration and observation time, same noise standard deviation, and random pointing directions. This leads to a total of 15 inverse problems with data vectors of size $M = 1.6\times 10^6$.
Reconstruction quality is evaluated with the signal-to-noise ratio metric, as $\text{SNR}(x,\overline{x}) = 20\operatorname{log}_{10}(\|\overline{x}\|_2/\|\overline{x}-x\|_2)$ for an estimate $x$ of $\overline{x}$.
Since we aim at reconstructing high-dynamic range images, we visualise reconstructions in logarithmic scale (\emph{i.e.}~after applying the transform $\operatorname{rlog}\colon x \mapsto \operatorname{log}_{10}(10^3x+1)/3$), which helps visualising very faint features. We also compute the SNR in logarithmic scale as $\text{logSNR}(x,\overline{x}) = \text{SNR}(\operatorname{rlog}(x),\operatorname{rlog}(\overline{x}))$. When evaluating the reconstruction quality of the proposed method across iterations, we also use the relative norm of the back-projected data residuals $\eta_k=\|r_k\|_2/\|r_0\|_2$, measuring the relative amount of data left to be processed after each iteration.
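The evaluation metrics defined above translate directly into code (a sketch with our own function names):

```python
import numpy as np

def snr(x, x_true):
    """Reconstruction SNR in dB: 20 log10(||x_true|| / ||x_true - x||)."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x))

def rlog(x):
    """Logarithmic transform used to visualise high dynamic range:
    rlog(x) = log10(1e3 * x + 1) / 3."""
    return np.log10(1e3 * x + 1.0) / 3.0

def log_snr(x, x_true):
    """SNR computed in logarithmic scale."""
    return snr(rlog(x), rlog(x_true))
```

The relative residual norm $\eta_k=\|r_k\|_2/\|r_0\|_2$ follows the same pattern with `np.linalg.norm`.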
\subsection{Benchmark RI algorithms}
We compare the proposed method to various RI imaging algorithms, from the recent and advanced sparsity-promoting optimisation algorithm SARA \cite{onose2016scalable, thouvenin2022parallel}, to its PnP counterpart AIRI \cite{terris2022image, dabbech2022first}, to the traditional RI imaging algorithm CLEAN \cite{offringa2014wsclean}, and to an end-to-end approach resulting from cutting the network series to $K=1$ in the proposed approach.
Interestingly, CLEAN relies on a matching pursuit approach iteratively identifying model components from a back-projected data residual. From this perspective our approach is conceptually close to CLEAN. However, while the simplicity of CLEAN's implicit sparsity image model makes it scalable to large image dimensions and data volumes, this comes at the cost of a now commonly accepted severe sub-optimality in reconstruction quality. Our approach leverages a fundamentally different, DNN-based, component identification process from the back-projected data residual at each iteration.
\subsection{Results}
\begin{figure}[t] \small %
\centering %
\begin{tabular}{@{\hspace{-0.\tabcolsep}} c @{\hspace{0.1\tabcolsep}} c @{\hspace{0.1\tabcolsep}} c @{\hspace{0.1\tabcolsep}} c @{\hspace{0.\tabcolsep}} l @{\hspace{0.\tabcolsep}}}
\includegraphics[width=0.11\textwidth]{images/results/zoom_3c353_ter.pdf} &
\includegraphics[width=0.11\textwidth]{images/results/zoom_dirty_bis.pdf} &
\includegraphics[height=0.11\textwidth]{images/results/sampling_pattern.png}
&
&
\\
(a) $\overline{x}$ & (b) $H^*z$ & (c) sampling & & \\
\includegraphics[width=0.11\textwidth]{images/results/dirty.png} &
\includegraphics[width=0.11\textwidth]{images/results/res1.png} &
\includegraphics[width=0.11\textwidth]{images/results/res2.png} &
\includegraphics[width=0.11\textwidth]{images/results/res3.png} &
\raisebox{-0.\height}[0pt][0pt]{\includegraphics[width=0.025\textwidth]{images/results/errorbar_vertical.png}} \\
(d) $r_0=H^*z$ & (e) $r_1$ & (f) $r_2$ & (g) $r_3$ \\
\includegraphics[width=0.11\textwidth]{images/results/zoom_G1.pdf} &
\includegraphics[width=0.11\textwidth]{images/results/zoom_G2.pdf} &
\includegraphics[width=0.11\textwidth]{images/results/zoom_G3.pdf} &
\includegraphics[width=0.11\textwidth]{images/results/zoom_G4.pdf} &
\raisebox{-0.\height}[0pt][0pt]{\includegraphics[width=0.025\textwidth]{images/results/colorbar_vertical.png}} \\
(h) $x_1$ & (i) $x_2$ & (j) $x_3$ & (k) $x_4$ & \\
$(21.0, 11.0)$ & $(27.9, 22.3)$ & $(28.1, 24.2)$ &
$(28.2, 25.1)$
\end{tabular}
\vspace{-0.5em}
\caption{\small Different stages of the proposed approach illustrated on one of the three images (3c353) and for one of the five sampling patterns of the test dataset. Top row shows (a) the ground truth $\overline{x}$, (b) the back-projected data $H^*z$, and (c) the weighted Fourier sampling pattern (colorbar displaying weight values). The second row (panels (d)-(g)) and third row (panels (h)-(k)) show the back-projected data residuals $r_{k-1}$ and reconstructions $x_k$ for $1\leq k \leq 4$, below which associated (SNR, logSNR) metrics are displayed.}
\label{fig:step_visualisation}
\end{figure}
\begin{figure}[t] %
\begin{tabular}{@{\hspace{-0.\tabcolsep}} c @{\hspace{-0.\tabcolsep}} c @{\hspace{-0.\tabcolsep}}} %
\includegraphics[width=0.235\textwidth, trim=0cm 0.25cm 0cm 0.2cm, clip]{images/results/metrics.png} &
\includegraphics[width=0.24\textwidth, trim=0cm 0.25cm 0cm 0.2cm, clip]{images/results/residual_norms_vert.png}
\end{tabular}%
\vspace{-0.75em}
\caption{\small Evolution of average reconstruction quality across iterations $1\leq k \leq 4$ over the test dataset, with error bars indicating $95\%$ confidence intervals. Left: SNR and logSNR metrics for $x_k$. Right: Back-projected data residual norm $\eta_k$.}
\vspace{-0.75em}
\label{fig:metrics_average}%
\end{figure}%
\begin{figure*}[t] \small
\centering
\begin{tabular}{@{\hspace{0.\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{0.05\tabcolsep}} c @{\hspace{-0.0\tabcolsep}} l @{\hspace{0.\tabcolsep}}}
(a) Ground truth & (b) Back-projected & (c) CLEAN & (d) SARA & (e) AIRI & (f) $x_1$ & (g) $x_4$ \\
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_true} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_dirty} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_wsclean} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_SARA} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_uPnP_astro} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_G1} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_hercA_4_G4} &
\raisebox{-0.5\height}[0pt][0pt]{\includegraphics[width=0.05\textwidth]{images/results/colorbar_vertical.png}} \\
& $(-11.0, -6.9)$ & $(-2.7, 0.2)$ & $(\mathbf{27.0}, 24.6)$ & $(25.0, 24.7)$ & $(20.8, 11.8)$ & $(26.0, \mathbf{25.1})$\\
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_true} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_dirty} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_wsclean.pdf} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_SARA} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_uPnP_astro} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_G1} &
\includegraphics[width=0.135\textwidth]{images/results_bis/zoom_cena_2_G4} &
\\
& $(-13.5, -6.8)$ & $(-4.6, 0.4)$ & $(26.8, \mathbf{24.7})$ & $(25.3, 24.3)$ & $(18.0, 16.9)$ & $(\mathbf{27.7}, 23.8)$ \\
\end{tabular}
\vspace{-1em}
\caption{\small Reconstruction results for the various algorithms considered, for the Hercules A (top) and Centaurus A (bottom) test images, each probed through one of the 5 test MeerKAT sampling patterns. (SNR, logSNR) metrics (in dB) are displayed below each reconstruction.}
\vspace{-0.75em}
\label{fig:comparison_3c353_dt8}
\end{figure*}
\begin{table}[t]
\centering
\small
\begin{tabular}{@{\hspace{0.1\tabcolsep}}l@{\hspace{1.1\tabcolsep}}c@{\hspace{1.1\tabcolsep}}c@{\hspace{1.1\tabcolsep}}c@{\hspace{1.1\tabcolsep}}c@{\hspace{0.1\tabcolsep}}}
\toprule
Algorithm & SNR & logSNR & time (s.) & iteration \#\\
\midrule
CLEAN & $5.3 \pm 0.1$ & $8.2 \pm 0.4$ & $217 \pm 5$ & $8.8\pm 0.2$\\
SARA & $\mathbf{26.6} \pm 0.8$ & $\mathbf{24.9} \pm 0.3$ & $ 11291\pm 563$ & $4825\pm190$ \\
AIRI & $25.1 \pm 0.8$ & $\mathbf{25.0} \pm 0.4$ & $4258 \pm 140$ & $4515\pm103$ \\
\midrule
$x_1$ & $20.3 \pm 1.1$ & $13.3 \pm 1.2$ & $\mathbf{1.0}\pm 0.2$ & $1$\\
$x_4$ & $\mathbf{26.8} \pm 1.1$ & $24.5 \pm 0.4$ & $3.9 \pm 0.6$ & $4$\\
\bottomrule
\end{tabular}
\caption{\small Reconstruction SNR, logSNR, times, and number of iterations, for the different imaging algorithms considered. Values reported are averages over the test dataset, with associated 95\% confidence intervals.}
\vspace{-1.25em}
\label{tab:comparison_table}
\end{table}
Figure~\ref{fig:step_visualisation} shows the different stages of the proposed reconstruction method, on 3c353 and for the sampling pattern shown in panel (c). Figure~\ref{fig:metrics_average} shows the average SNR and logSNR metrics and residual norms on the 15 inverse problems of the test dataset. The first estimate $x_1 = G_1(r_0)$ provides good results at high intensities but is not accurate at faint intensities. This translates into relatively high SNR, but poor logSNR values. The associated residual $r_1$ contains important information and is far from random noise. The output $x_2$ is much more accurate than $x_1$, not only at high intensities (improvement of the SNR), but also at fainter intensities (improvement of the logSNR).
It is also associated with a residual $r_2$ containing much less information than $r_1$, also evidenced by $\eta_2<\eta_1$, supporting the observation that the $x_2$ reconstruction is more faithful to the measurements than $x_1$. While the SNR does not evolve significantly between $x_2$ and $x_3$, with also $\eta_3\simeq\eta_2$, it appears clearly that $r_3$ contains less residual signal structure than $r_2$, suggesting that $x_3$ provides further improvement at faint intensities. This is evidenced by a noticeable improvement in logSNR. A final logSNR increase is brought in $x_4$.
Table~\ref{tab:comparison_table} shows the average SNR, logSNR, reconstruction times, and number of iterations, as well as associated 95\% confidence intervals over the 15 test inverse problems, with comparison to the benchmark algorithms. Figure~\ref{fig:comparison_3c353_dt8} provides visual reconstruction results of Hercules A and Centaurus A, for 2 sampling patterns different from the one in Figure~\ref{fig:step_visualisation}. In terms of imaging quality, the reported statistics confirm that the proposed approach is on par with SARA and AIRI, all three methods achieving high resolution and high dynamic range. The end-to-end approach falls short of delivering a similar resolution and dynamic range, but is still significantly better than CLEAN. In terms of reconstruction times, the proposed approach is two orders of magnitude faster than CLEAN, and three orders of magnitude faster than AIRI, itself more than twice as fast as SARA. The computation time of each method scales roughly proportionally to its number of iterations. More precisely, the proposed method, the end-to-end approach, and AIRI share a very similar computation time per iteration, due to a similar iteration structure consisting of computing a residual followed by the application of a DNN. SARA's iteration cost is heavier due to costly proximal operations associated with its regularisation approach. CLEAN's iteration cost is also heavier, mainly because it does not keep the measurement operator in memory.
We finally note that preliminary results, not reported here, suggest that the proposed hybrid DNN architecture outperforms architectures such as the UNet and WDSR, both in the end-to-end and residual network series approaches.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed a novel residual DNN series approach for solving large-scale high-dynamic range computational imaging problems, circumventing scalability challenges encountered by both unfolded end-to-end architectures and PnP approaches. Each network of the series, trained to transform a back-projected data residual into an image residual, improves the dynamic range from the previous iteration, with the sum of the output residual images across iterations representing the final reconstruction.
Simulation results for RI imaging show that a series of only a few terms achieves similar reconstruction quality to the state-of-the-art optimisation algorithm SARA and PnP method AIRI, whilst being orders of magnitude faster.
Future work should investigate the robustness of the approach to the wide variations of measurement settings characterising RI observations, and its practical effectiveness on real large-scale high-dynamic range data. The suggested superiority of the proposed DNN architecture over state-of-the-art architectures should also be confirmed. The application of the proposed methodology to other computational imaging modalities is naturally an open question.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}
The spin properties of gluons and quarks are rather different. In particular,
this is manifested in the fact that there is no analog of the twist-two
transversity distributions for gluons, whose contribution
to transverse asymmetries starts at the twist-three level. This leads,
generally speaking, to a relative suppression
of gluon transverse asymmetries with respect to the quark ones,
which was recently used to formulate a selection rule \cite{JaSa}
in QCD that is of direct relevance for the physics programme
of the future polarized $pp$ collider at RHIC.
However, a detailed analysis of the quark contribution to the
double transverse spin asymmetries $A_{TT}$
using Monte-Carlo simulation \cite{MS} resulted in rather small numbers,
of the order of $1\%$, and it therefore seems natural to
question the role of gluon corrections.
This is the subject of the present paper.
Transverse polarization effects arise from two basic
sources: the leading-twist transversity distribution, resulting
in the correlation of transverse polarizations, and the twist-three
parton correlations, suppressed by the hadron mass. While the former is
absent for gluons, the gluon correlations are, generally speaking,
rather complicated \cite{Ji}.
At the same time, the experimental data on the $g_2$ structure
function \cite{g2}
do not deviate strongly from the twist-two approximation,
suggested by Wandzura and Wilczek (WW) \cite{WW}, whose physical meaning
is just the dominance of the effect of transverse motion of the quark
over that of the gluon field \cite{kt}.
This is the reason why we suggest here
a generalization of the WW approximation to
the case of gluons.
To do this, let us start from the light-cone density matrix of the gluon,
where we ignore the twist-four term, namely:
\begin{eqnarray}
\label{dm}
\int{{d\lambda \over2\pi}}
e^{i\lambda x}
\langle p,s|A^\rho(0) A^\sigma (\lambda n) |p,s \rangle =
d^{\rho \sigma}G(x)+M[\Delta G (x) (sn) \epsilon ^{\rho \sigma p n}
+\Delta G_T (x)
\epsilon ^{\rho \sigma s_T n}],
\end{eqnarray}
where $n$ is the gauge-fixing light-cone vector such that $np=1$,
and we define the two transverse tensors
$d_{\rho \sigma}=g_{\rho \sigma}-p_\rho n_\sigma-n_\rho p_\sigma$ and
$\epsilon ^{\rho \sigma p n}=\epsilon ^{\rho \sigma \alpha \beta }
p_{\alpha }n_{\beta }$. We denote by
$s_{\mu }$ the covariant
polarization vector of the proton of momentum
$p$ and mass $M$; we have $s^2=-1$, $sp=0$, and $s_T=s-p(sn)$, which
corresponds to
the transverse polarization.
Here $G(x)$ and $\Delta G(x)$ are the familiar unpolarized gluon distribution
and gluon helicity distribution, respectively.
The transverse gluon distribution $\Delta G_T$
is the most natural measure of transverse polarization,
analogous to the quark structure function $g_T=g_1+g_2$,
since in the quark case we have:
\begin{eqnarray}
\label{quark}
{1\over{2M}}\int{{d\lambda \over2\pi}}e^{i\lambda x}
\langle p,s|\bar \psi (0)\gamma^{\mu}
\gamma ^5\psi (\lambda n)|p,s \rangle=g_1(x)(sn)p^{\mu}+g_T (x) s_T^{\mu}.
\end{eqnarray}
The quantity
$g_T$ was shown
to be a good variable to study the generalized Gerasimov-Drell-Hearn
sum rule and the $x$-dependence of the
anomalous gluon contribution \cite{ST95}.
The latter result was recently confirmed \cite{rav}.
The light-cone distributions $\Delta G$ and $\Delta G_T$ can be
easily obtained
by projecting the gluon density matrix, so we have
\begin{eqnarray}
\label{distr}
\Delta G(x)={1\over{4M(sn)}} \int{{d\lambda \over2\pi}}
e^{i\lambda x}
\langle p,s|A_\rho(0) A_\sigma (\lambda n) |p,s \rangle
\epsilon ^{\rho \sigma p n}, \nonumber \\
\Delta G_T(x)={1\over{4M(s^2)}} \int{{d\lambda \over2\pi}}
e^{i\lambda x}
\langle p,s|A_\rho(0) A_\sigma (\lambda n) |p,s \rangle
\epsilon ^{\rho \sigma p s}.
\end{eqnarray}
Now by making use of the axial gauge $An=0$, one may express their moments
\footnote {The first moment requires taking into account a non-local
operator \cite{ET88,BB}. At the same time, the
non-local operators found in the renormalization of
the gluon contribution to $g_2$ \cite{BET94}
should vanish when one uses gauge invariance and the
equations of motion.}
in terms of gluon field
strength $G_{\mu \nu}$, according to
\begin{eqnarray}
\label{distrG}
\int_0^1 dx x^k \Delta G(x)={1\over{4M(sn)}}
\langle p,s|G_{\rho n}(0) (i \partial n)^{k-1} G_{\sigma n}(0) |p,s \rangle
\epsilon ^{\rho \sigma p n}, \nonumber \\
\int_0^1 dx x^k \Delta G_T(x)={1\over{4M(s^2)}}
\langle p,s|G_{\rho n}(0) (i \partial n)^{k-1} G_{\sigma n}(0) |p,s \rangle
\epsilon ^{\rho \sigma p s}.
\end{eqnarray}
We denote here $G^{\mu n}=G^{\mu \nu} n_{\nu}$ and
$\partial n=n_{\mu}\partial^{\mu}$, and we recall that the moment factor
$x^k$ corresponds to the operator $(i \partial n)^k$ in configuration space.
The kinematical identities, implied by the vanishing of the
totally antisymmetric tensor of rank 5 in four-dimensional space,
\begin{eqnarray}
\label{kin}
n^{\mu}\epsilon_{\rho \sigma p n}-
n^{\sigma}\epsilon_{\rho \mu p n}+
n^{\rho}\epsilon_{\sigma \mu p n}=\epsilon_{\rho \sigma \mu n} \
\mbox{and}
\ n^{\mu}\epsilon_{\rho \sigma p s}-
n^{\sigma}\epsilon_{\rho \mu p s}+
n^{\rho}\epsilon_{\sigma \mu p s}=\epsilon_{\rho \sigma \mu s_T}
\end{eqnarray}
allow one to arrive at the standard gluonic operators
used in the operator product expansion for the spin-dependent
case \cite{AR}
\begin{eqnarray}
\label{momG}
\int_0^1 dx x^k \Delta G(x)=
{i^{k-1}\over{2M(sn)}}
\langle p,s|\tilde G_{\sigma \alpha}(0)
\partial ^{\mu_1}...\partial ^{\mu_{k-1}}
G_{\sigma \beta}(0) |p,s \rangle
n^{\alpha} n_{\beta} n_{\mu_1}...n_{\mu_{k-1}}, \nonumber \\
\int_0^1 dx x^k \Delta G_T(x)=
{i^{k-1}\over{2M(s^2)}}
\langle p,s|\tilde G_{\sigma \alpha}(0)
\partial ^{\mu_1}...\partial ^{\mu_{k-1}}
G_{\sigma \beta}(0) |p,s \rangle
s_T^{ \alpha} n_{\beta} n_{\mu_1}...n_{\mu_{k-1}},
\end{eqnarray}
where $\tilde G_{\sigma \alpha}={1 \over 2} \epsilon_{\sigma \alpha \mu \nu}
G_{\mu \nu}$.
Taking the totally symmetric part of the matrix element
\begin{eqnarray}
\label{matr}
i^{k-1}\langle p,s|\tilde G_{\sigma \alpha}(0)
\partial ^{\mu_1}...\partial ^{\mu_{k-1}}
G_{\sigma \beta}(0) |p,s \rangle =a_k S_{\alpha \beta \mu_1...\mu_{k-1}}
s^{\alpha}
p^{\beta} p^{\mu_1}... p^{\mu_{k-1}},
\end{eqnarray}
where $S$ denotes total symmetrization and $a_k$ is a scalar constant,
one immediately obtains the relation
\begin{eqnarray}
\label{momWW}
\int_0^1 dx x^k \Delta G(x)=(k+1)\int_0^1 dx x^k \Delta G_T(x),
\end{eqnarray}
which is equivalent to the WW formula:
\begin{eqnarray}
\label{xWW}
\Delta G_T(x)=\int_x^1 {\Delta G (z)\over z}dz.
\end{eqnarray}
The existence of this relation is very natural because of the
similarity between the quark and gluon
density matrices (see eqs. (\ref{quark}) and (\ref{dm})).
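As a quick numerical sanity check of the equivalence between the moment relation (\ref{momWW}) and the WW formula (\ref{xWW}), the following sketch integrates a toy helicity distribution, whose shape is hypothetical and chosen purely for illustration:

```python
import math

# Toy helicity distribution (a hypothetical shape, for illustration only)
def delta_g(x):
    return 30.0 * x * x * (1.0 - x) ** 2

def trapz(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# WW-type relation: Delta G_T(x) = \int_x^1 Delta G(z) dz / z
def delta_g_t(x):
    return trapz(lambda z: delta_g(z) / z, x, 1.0, 400)

# Moment relation: \int_0^1 x^k Delta G dx = (k+1) \int_0^1 x^k Delta G_T dx
rel_errors = []
for k in (1, 2, 3):
    lhs = trapz(lambda x: x ** k * delta_g(x), 1e-9, 1.0, 1000)
    rhs = (k + 1) * trapz(lambda x: x ** k * delta_g_t(x), 1e-9, 1.0, 1000)
    rel_errors.append(abs(lhs - rhs) / abs(lhs))
print(rel_errors)
```

The relative discrepancies are at the level of the quadrature error, consistent with the two forms of the relation being exactly equivalent.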
Our present knowledge of $\Delta G(x)$, which is not very precise,
allows a great freedom, so several different parametrizations have been
proposed in the literature \cite{B1,B2,GS,GRSV}. We show in Figs.~1a and 2a
some possible gluon helicity distributions $x \Delta G(x)$, and in Figs.~1b and
2b the corresponding $x \Delta G_T(x)$ obtained by using (\ref{xWW}).
It is worth
noting from these pictures that, in all cases, $x \Delta G(x)$ and
$x \Delta G_T(x)$ are rather similar in shape and magnitude.
Let us now move to the calculation of the short-distance subprocess.
For this, it is instructive to compare the two terms in the
gluon density matrix (\ref{dm}).
While the longitudinal term is in fact a two-dimensional transverse
antisymmetric tensor and corresponds to the density matrix
of a circularly polarized gluon
\begin{eqnarray}
\label{londen}
\Delta G(x) \epsilon ^{\rho \sigma p n}=
\Delta G(x) \epsilon_{TT} ^{\rho \sigma} ,
\end{eqnarray}
the transverse polarization term generates a circular
polarization in the plane defined by one transverse and one
longitudinal direction
\begin{eqnarray}
\label{trden}
M \Delta G_T(x) \epsilon ^{\rho \sigma s_T n}=
\Delta G_T(x) \epsilon_{TL} ^{\rho \sigma} ,
\end{eqnarray}
and therefore corresponds to the circular {\it transverse} polarization
of the gluon. Such a polarization state is clearly impossible for
on-shell collinear gluons: they should have either a nonzero virtuality
or a nonzero transverse momentum. Note that one of these effects is required to
have a nonzero anomalous contribution to the first moment of the structure
function $g_1$ \cite{EST,CCM}.
One may consider this similarity as supporting the
aforementioned relations between $\Delta G_T$ and the anomalous gluon
contribution \cite{ST95,rav}.
We adopt the second possibility, namely a nonzero
transverse momentum, because the gluon then remains on-shell
and explicit gauge invariance is preserved. In this case,
the transverse polarization of the nucleon may be converted into the
longitudinal circular polarization of the gluon. A similar effect for
quarks was discussed earlier \cite{Ratcl,kt}.
To calculate the asymmetry
in the short-distance subprocess, it is enough to find
the effective longitudinal polarization by projecting the
transverse polarization onto the gluon momentum:
\begin{eqnarray}
\label{pro}
s_L={\vec s_T \cdot \vec k \over{|\vec k|}} =s_T {k_T \over k_L}.
\end{eqnarray}
The partonic double transverse asymmetry can then be easily obtained from the
longitudinal one according to
\begin{eqnarray}
\label{att}
\hat A_{TT}={ {k_{T1} k_{T2}}\over {k_{L1} k_{L2}}} \hat A_{LL}.
\end{eqnarray}
By neglecting the transverse momentum dependence of $\hat A_{LL}$ one has
\begin{eqnarray}
\label{attq}
\hat A_{TT}={2\langle k_T^2\rangle \over {\hat s}} \hat A_{LL},
\end{eqnarray}
where $\hat s$ is the partonic c.m. energy.
Here we see that $\hat A_{TT}$ is strongly suppressed with respect to
$\hat A_{LL}$, even more so than in the case of partonic processes
with initial quarks and antiquarks \cite{JaSa}. If we consider
the hadronic double transverse asymmetry $A_{TT}$ for two-jet production,
we can simply relate it, within some approximation, to the corresponding
double helicity asymmetry $A_{LL}$ as follows
\begin{eqnarray}
\label{atth}
A_{TT}={2\langle k_T^2\rangle \over M^2_{JJ}} {\Delta G_T (x_1) \over
\Delta G (x_1) }
{\Delta G_T (x_2)\over \Delta G (x_2) }A_{LL},
\end{eqnarray}
where $M_{JJ}$ is the invariant mass of the dijet.
Since $\Delta G_T (x)/ \Delta G (x)$ is of order unity,
and assuming the
average transverse momentum to be of the order of $1$~GeV,
we see that for $M_{JJ}=10$~GeV, where $A_{LL}$ is at most $10\%$
at the maximum energy of RHIC, this
leads to $A_{TT} \sim 0.1\%$.
Actually, this small number is due to a large extent
to the small longitudinal asymmetry and to the suppression factor
$(k_T/M_{JJ})^2$. One may note that the double transverse asymmetry of
direct photons with moderate $p_T \sim 5$~GeV$/c$ should be
of order $A_{TT} \sim A_{LL} (k_T/p_T)^2 \sim 1\%$.
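The arithmetic behind these order-of-magnitude estimates can be checked directly; the inputs below are the illustrative values assumed in the text, not fitted numbers:

```python
# Illustrative inputs from the text (not fitted values)
kT2 = 1.0      # <k_T^2> in GeV^2, assumed average intrinsic transverse momentum
M_JJ = 10.0    # dijet invariant mass in GeV
A_LL = 0.10    # double helicity asymmetry, taken at its quoted maximum
ratio = 1.0    # Delta G_T / Delta G, of order unity by the WW-type relation

# Hadronic dijet estimate, with both distribution ratios set to one
A_TT_dijet = 2.0 * kT2 / M_JJ ** 2 * ratio ** 2 * A_LL   # order 10^-3

# Direct photons with moderate p_T: suppression factor (k_T / p_T)^2
p_T = 5.0
photon_suppression = kT2 / p_T ** 2                      # = 0.04
print(A_TT_dijet, photon_suppression)
```

The dijet estimate lands at the $10^{-3}$ level quoted above, while the much milder $(k_T/p_T)^2$ suppression for direct photons explains the $10^{-2}$ expectation.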
In conclusion, we have obtained the analog of the
Wandzura-Wilczek relation for
the transverse circular gluon polarization, which has the same order of
magnitude and sign as the gluon helicity distribution.
A transversely polarized nucleon
picks up the intrinsic transverse momentum of the gluon,
which naturally defines the mass parameter of transverse asymmetries
for subprocesses with initial gluons (see eq.~(\ref{attq})).
This leads to a strong suppression of the hadronic double transverse
asymmetry $A_{TT}$, which is expected to be of order $10^{-3}$ for
dijet production and $10^{-2}$ for direct photon production at RHIC energies.
We are indebted to V.M. Braun, A.V. Efremov and J.M. Virey
for useful discussions.
O.T. is grateful to Centre de Physique Th\'eorique for warm hospitality
and to the Universit\'e de Provence for financial support. He was
partially supported by Russian Foundation of Fundamental Investigation
under Grant 96-02-17361.
This investigation was supported in part by INTAS Grant 93-1180.
\newpage
\begin{figure}[ht]
\hfill
\begin{minipage}{6.5in}
\caption{Fig. 1a. The gluon helicity distributions versus $x$ from
ref.~\protect\cite{B1} (soft gluon: dotted curve; hard gluon:
dashed curve)
and from
ref.~\protect\cite{B2} (solid curve).
Fig. 1b. The transverse polarization of the gluon obtained using
eq.~(9), with labels corresponding to Fig.~1a.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\hfill
\begin{minipage}{6.5in}
\label{fig2a}
\caption{Fig. 2a. The gluon helicity distributions versus $x$ from
ref.~\protect\cite{GS} (gluon A: solid curve; gluon B:
dashed curve; gluon C:
dotted curve)
and from
ref.~\protect\cite{GRSV} (standard scenario: dot-dashed curve).
Fig. 2b. The transverse polarization of the gluon obtained using
eq.~(9), with labels corresponding to Fig.~2a.}
\end{minipage}
\end{figure}
\section{Introduction}
Deep convolutional neural networks (CNN) \cite{fukushima1982neocognitron,lecun1998gradient} are the backbone of vision-related tasks, including image classification \cite{krizhevsky2012imagenet}, object detection \cite{redmon2016you}, image generation \cite{goodfellow2014generative}, video recognition \cite{simonyan2014two}, and audio classification \cite{hershey2017cnn}. A closer look at the dominating success of deep CNNs reveals that it rests on two factors.
The first factor is the strong capacity of convolutional neural networks, usually characterized by an enormous model size. Recent state-of-the-art progress usually results from very large models, with millions to billions of trainable parameters. For example, ImageNet \cite{deng2009imagenet} accuracy grows when larger VGGs \cite{simonyan2014very} (increasing 11 to 19 layers$\Longrightarrow$69\% to 74\% accuracy) or ResNets \cite{he2016deep} (increasing 18 to 152 layers$\Longrightarrow$70\% to 78\% accuracy) are used \cite{pytorchvision}. Consequently, the eagerness to use larger models for better accuracy naturally draws attention to the scalability and efficiency of training.
The second factor is the availability of big data, which oftentimes contain private and sensitive information. The usage of such data demands rigorous protection against potential privacy attacks. In fact, one standard approach to guarantee such protection is the differentially private (DP) \cite{dwork2006calibrating,dwork2014algorithmic} training of models. Since \cite{abadi2016deep}, CNNs have achieved promising results under strong DP guarantees: CIFAR10 reaches 92.4\% accuracy in \cite{tramer2020differentially} and ImageNet reaches 81.1\% accuracy in \cite{de2022unlocking}.
Unifying the two driving factors of CNNs leads to the DP training of large CNNs. However, the following challenges hinder the application of large and private CNNs in practice.
\textbf{Challenge 1: Time and space efficiency in DP training.}
DP training can be extremely inefficient in memory and speed. For example, a straightforward implementation in the Tensorflow Privacy library shows that DP training can be $1000\times$ slower than non-DP training, even on a small RNN \cite{bu2021fast}; other standard DP libraries, Opacus \cite{opacus} and JAX \cite{kurakin2022toward,subramani2021enabling}, which trade off memory for speed, could not fit a single datapoint into a GPU on GPT2-large \cite{li2021large}; additionally, a $3\sim 9\times$ slowdown of DP training has been reported in \cite{kurakin2022toward,de2022unlocking,subramani2021enabling} using JAX.
The computational bottleneck comes from the per-sample gradient clipping at each iteration (see \eqref{eq:privatized grad}), a necessary step in DP deep learning. That is, denoting the loss as $\sum_i\mathcal{L}_i$, we need to clip each per-sample gradient $\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}$ individually. This computational issue is even more severe when we apply a large batch size, which is necessary to achieve high accuracy with DP neural networks. In \cite{li2021large,kurakin2022toward,mehta2022large}, it is shown that the optimal batch size for DP training is significantly larger than for regular training. For instance, DP ResNet18 achieves its best performance on ImageNet with a batch size of 64*1024 \cite{kurakin2022toward}, and DP ResNet152 and ViT-Large use a batch size of $2^{20}$ in \cite{mehta2022large}. As a result, an efficient implementation of per-sample gradient clipping is much needed to fully leverage the benefit of large-batch training.
\textbf{Challenge 2: Do large DP vision models necessarily harm accuracy?}
An upsetting observation about DP vision models is that, beyond a certain relatively small model size, larger DP CNNs seem to underperform smaller ones. This is observed in models that are either pre-trained or trained from scratch \cite{klause2022differentially,abadi2016deep}. As an example of the pre-trained case, the previously state-of-the-art CIFAR10 accuracy was obtained from a small DP linear model \cite{tramer2020differentially}, and fine-tuned DP ResNet50 underperforms DP ResNet18 on ImageNet \cite{kurakin2022toward}. On the contrary, the empirical evidence on DP language models shows that larger models can consistently achieve better accuracy \cite{li2021large}. Interestingly, we empirically demonstrate that the trend in \cite{li2021large} can hold in vision models as well.
\subsection{Contributions}
In this work, we propose new algorithms to efficiently train large-scale CNNs with DP optimizers. To be specific, our contributions are as follows.
\begin{enumerate}
\item We propose a novel implementation, termed the \textbf{\textit{mixed ghost clipping}}, of the per-sample gradient clipping for 1D$\sim$3D convolutional layers. The mixed ghost clipping is the first method that can \textbf{\textit{implement per-sample gradient clipping without per-sample gradients}} of the convolutional layers. It works with any DP optimizer and any clipping function, almost as memory-efficiently as standard training, thus significantly outperforming existing implementations like Opacus \cite{opacus}.
\item In some tasks, mixed ghost clipping also claims supremacy in speed at a fixed batch size. The speed can be further boosted (say, $1.7\times$ faster than the fastest alternative DP algorithms and only $2\times$ slower than non-private training) when the memory saved by our method is used to fit the largest possible batch size.
\item We provide the first complexity analysis of mixed ghost clipping in comparison to other training algorithms. This analysis clearly indicates the necessity of our layerwise decision principle, without which existing methods suffer from a high memory burden.
\item Leveraging our algorithms, we can efficiently train large DP models, such as VGG, ResNet, Wide-ResNet, DenseNet, and Vision Transformer (ViT).
Using DP ViTs at ImageNet scale, we are the first to train convolutional ViTs under DP, and we achieve the dominating SOTA on the CIFAR10/100 datasets, bringing the new insight that larger vision models can consistently achieve better accuracy under DP.
\end{enumerate}
\subsection{Previous arts}
The straightforward yet highly inefficient way of per-sample gradient clipping is to use batch size 1 and compute the gradient with respect to each individual loss. Recently, more advanced methods have significantly boosted the efficiency by avoiding such a naive approach. The most widely applied method is implemented in the Opacus library \cite{opacus}, which is fast but memory-costly, as the per-sample gradients $\bm{g}_i=\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}$ are instantiated to compute the weighted gradient $\sum_i C_i\cdot \bm{g}_i$ in \eqref{eq:privatized grad}. A more efficient method, FastGradClip \cite{lee2020scaling}, uses a second back-propagation with the weighted loss $\sum_i C_i\cdot \mathcal{L}_i$ to indirectly derive the weighted gradient.
In all the above-mentioned methods and \cite{rochette2019efficient}, the per-sample gradients are instantiated, whereas this can be quite inefficient and is in fact not necessary, according to the `ghost clipping' technique detailed in \Cref{sec:ghost clipping}. In other words, ghost clipping proves that the claim `DP optimizers require access to the per-sample gradients' is wrong. Note that ghost clipping was first proposed by \cite{goodfellow2015efficient} for linear layers, and then extended by \cite{li2021large} to sequential data and embedding layers for language models. However, ghost clipping has not been extended to convolutional layers, due to the complication of the convolution operation and the high dimension of the data (text data are mostly 2D, yet image data are 3D and videos are 4D). In fact, we will show that even ghost clipping alone is not satisfactory for CNNs, and we therefore propose the mixed ghost clipping, which narrows the efficiency gap between DP training and regular training.
\section{Preliminaries}
\subsection{Differential privacy}
Differential privacy (DP) has become the standard approach to provide privacy guarantee for modern machine learning models. The privacy level is characterized through a pair of privacy quantities $(\epsilon,\delta)$, where smaller $(\epsilon,\delta)$ means stronger protection.
\begin{definition}[\cite{dwork2014algorithmic}]\label{def:DP}
A randomized algorithm $M$ is $ (\varepsilon, \delta)$-DP if for any neighboring datasets $ S, S^{\prime} $ that differ by one arbitrary sample, and for any event $E$, it holds that
\begin{align*}
\mathbb{P}[M(S) \in E] \leqslant \mathrm{e}^{\varepsilon} \mathbb{P}\left[M\left(S^{\prime}\right) \in E\right]+\delta.
\end{align*}
\end{definition}
In deep learning, where the number of parameters is large, the Gaussian mechanism \cite[Theorem A.1]{dwork2014algorithmic} is generally applied to achieve DP at each training iteration, i.e., we use regular optimizers on the following privatized gradient:
\begin{align}
\widetilde \bm{g}=\sum_i C(\|\bm{g}_i\|;R) \cdot \bm{g}_i+\sigma R\cdot\mathcal{N}(0,\mathbf{I})=\sum_i C_i\bm{g}_i+\sigma R\cdot\mathcal{N}(0,\mathbf{I})
\label{eq:privatized grad}
\end{align}
where $C$ is any function whose output is upper bounded by $R/\|\bm{g}_i\|$ and $R$ is known as the clipping norm. To name a few examples of $C$, we have the Abadi's clipping $\min(R/\|\bm{g}_i\|,1)$ in \cite{abadi2016deep} and the global clipping $\mathbb{I}(\|\bm{g}_i\|<Z)\cdot R/Z$ for any constant $Z$ in \cite{bu2021convergence}. Here $\sigma$ is the noise multiplier that affects the privacy loss $(\epsilon,\delta)$, but $R$ only affects the convergence, not the privacy.
In words, DP training switches from updating with $\sum_i \bm{g}_i$ to updating with the private gradient $\widetilde \bm{g}$: SGD with private gradient is known as DP-SGD; Adam with private gradient is known as DP-Adam.
Algorithmically speaking, the Gaussian mechanism can be decomposed into two parts: the per-sample gradient clipping and the Gaussian noise addition. From the viewpoint of computational complexity, the per-sample gradient clipping is the bottleneck, while the noise addition costs negligible overhead.
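The decomposition above can be sketched in a few lines. The snippet below is a minimal illustration of \eqref{eq:privatized grad} with Abadi's clipping function, operating on flat gradient vectors; it is not the implementation of any particular library, and all shapes are toy values:

```python
import math
import random

def privatize(per_sample_grads, R, sigma, rng=random):
    """Sketch of the privatized gradient: clip each per-sample gradient
    with Abadi's clipping factor min(R / ||g_i||, 1), sum the clipped
    gradients, then add Gaussian noise with per-coordinate std sigma * R."""
    dim = len(per_sample_grads[0])
    g_tilde = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(v * v for v in g))
        c = min(R / norm, 1.0) if norm > 0 else 1.0
        for j in range(dim):
            g_tilde[j] += c * g[j]
    return [v + sigma * R * rng.gauss(0.0, 1.0) for v in g_tilde]

# With sigma = 0, each clipped per-sample gradient has norm at most R:
g = privatize([[3.0, 4.0], [0.3, 0.4]], R=1.0, sigma=0.0)
print(g)  # first sample is scaled by 1/5, the second is left untouched
```

Other clipping functions, such as the global clipping mentioned above, only change how the per-sample factor $C_i$ is computed from the norm.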
In this work, our focus is the implementation of per-sample gradient clipping \eqref{eq:privatized grad}. We emphasize that our implementation is only on the algorithmic level, not affecting the mathematics and thus not the performance of DP optimizers. That is, our mixed ghost clipping provides exactly the same accuracy results as Opacus, FastGradClip, etc.
\vspace{-0.2cm}
\subsection{Per-sample gradient for free during standard back-propagation}
In DP training, the per-sample gradient is a key quantity that can be derived for free from the standard back-propagation. We briefly introduce back-propagation for linear layers, following the analysis of \cite{goodfellow2015efficient,li2021large}, so as to prepare our new clipping implementation for convolutional layers. Note that convolutional layers can be viewed as equivalent to linear layers, as shown in \Cref{sec:conv as linear}.
Let the input of a hidden layer be $\a\in\ensuremath{\mathbb{R}}^{B\times \cdots\times d}$ (a.k.a. post-activation). Here $\a$ can be in high dimension: for sequential data such as text, $\a\in\ensuremath{\mathbb{R}}^{B\times T\times d}$ where $T$ is the sequence length; for image data, $\a\in\ensuremath{\mathbb{R}}^{B\times H\times W\times d}$ where $(H,W)$ is the dimension of image and $d$ is number of channels; for 3D objects or video data, $\a\in\ensuremath{\mathbb{R}}^{B\times H\times W\times D\times d}$ where $D$ is the depth or time length, respectively.
Denote the weight of a linear layer as $\mathbf{W}\in \ensuremath{\mathbb{R}}^{d\times p}$, its bias as $\b \in \ensuremath{\mathbb{R}}^{p}$ and its output (a.k.a. pre-activation) as $\mathbf{s} \in \ensuremath{\mathbb{R}}^{B\times \cdots\times p}$, where $B$ is the batch size and $p$ is the output dimension.
In the $l$-th layer of a neural network with $L$ layers in total, we denote its weight, bias, input and output as $\mathbf{W}_{(l)}, \b_{(l)},\a_{(l)}, \mathbf{s}_{(l)}$ respectively, and the activation function as $\phi$. Consider
\begin{align}
\a_{(l+1),i} = \phi (\mathbf{s}_{(l),i}) = \phi(\a_{(l),i}\mathbf{W}_{(l)}+\b_{(l)}).
\label{eq:linear forward}
\end{align}
Clearly the $i$-th sample's \textit{hidden feature} $\a_{(l),i}$ at layer $l$ is freely extractable during the forward pass.
Let $\mathcal{L}=\sum_{i=1}^n \mathcal{L}_i$ be the total loss and $\mathcal{L}_i$ be the per-sample loss with respect to the $i$-th sample. During a standard back-propagation, the following \textit{partial product} is maintained,
\begin{align}
\frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l),i}}
=&
\frac{\partial \mathcal{L}}{\partial \a_{(L),i}}\circ \frac{\partial \a_{(L),i}}{\partial \mathbf{s}_{(L-1),i}}\cdot
\frac{\partial \mathbf{s}_{(L-1),i}}{\partial \a_{(L-1),i}}\circ\cdots\frac{\partial \a_{(l+1),i}}{\partial \mathbf{s}_{(l),i}}= \frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l+1),i}}\mathbf{W}_{(l+1)}\circ\phi^{\prime}(\mathbf{s}_{(l),i})
\label{eq:back prop1}
\end{align}
so as to compute the standard gradient $\frac{\partial \mathcal{L}}{\partial \mathbf{W}_{(l)}}=\sum_i\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}_{(l)}}$ in \eqref{eq:outer product linear}. Here $\circ$ is the Hadamard product and $\cdot$ is the matrix product. Therefore, $\frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l),i}}$ is also available for free from \eqref{eq:back prop1} and extractable by Pytorch hooks, which allows us to compute the per-sample gradient by
\begin{align}
\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}_{(l)}}=\frac{\partial \mathcal{L}_i}{\partial \mathbf{s}_{(l),i}}^\top\frac{\partial \mathbf{s}_{(l),i}}{\partial \mathbf{W}_{(l)}}=\frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l),i}}^\top\a_{(l),i},
\quad\frac{\partial \mathcal{L}_i}{\partial \b_{(l)}}=\frac{\partial \mathcal{L}_i}{\partial \mathbf{s}_{(l),i}}^\top\frac{\partial \mathbf{s}_{(l),i}}{\partial \b_{(l)}}=\frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l),i}}^\top\mathbf{1}.
\label{eq:outer product linear}
\end{align}
\vspace{-0.2cm}
\subsection{Equivalence between convolutional and linear layer}
\label{sec:conv as linear}
In a convolutional layer\footnote{See a detailed explanation in \Cref{app:explain conv} for the $U,F$ operation and the dimension formulae in convolution.}, the forward pass is
\begin{align}
\a_{(l+1),i} = \phi (\mathbf{s}_{(l),i}) = \phi(F(U(\a_{(l),i})\mathbf{W}_{(l)}+\b_{(l)}))
\label{eq:conv forward}
\end{align}
in which $F$ is the folding operation and $U$ is the unfolding operation. To be concrete, we consider a 2D convolution, where $\a_{(l),i}\in\ensuremath{\mathbb{R}}^{ H_\text{in}\times W_\text{in}\times d_{(l)}}$ is the input hidden feature, $(H_\text{in},W_\text{in})$ is the input dimension, and $d_{(l)}$ is the number of input channels. Then $U$ unfolds the hidden feature from dimension $(H_\text{in}, W_\text{in}, d_{(l)})$ to $(H_\text{out} W_\text{out}, d_{(l)}k_H k_W)$, where $k_H,k_W$ are the kernel sizes and $(H_\text{out},W_\text{out})$ is the output dimension. After the matrix multiplication with $\mathbf{W}_{(l)}\in\ensuremath{\mathbb{R}}^{d_{(l)}k_H k_W\times p_{(l)}}$, the intermediate output $\mathbf{s}_{(l),i}$ is folded by $F$ from dimension $(H_\text{out}W_\text{out}, p_{(l)})$ to $(H_\text{out},W_\text{out}, p_{(l)})$.
To present concisely, we ignore the layer index $l$ and write the per-sample gradient of the weight for the convolutional layer, in analogy to the linear layer in \eqref{eq:outer product linear},
\begin{align}
\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}=\frac{\partial \mathcal{L}}{\partial F^{-1}(\mathbf{s}_i)}^\top U(\a_i)=F^{-1}\left(\frac{\partial \mathcal{L}}{\partial\mathbf{s}_i}\right)^\top U(\a_i).
\label{eq:outer product conv}
\end{align}
Here $F^{-1}$ is the inverse operation of $F$ and simply flattens all dimensions except the last one: from $(H_\text{out},W_\text{out}, p_{(l)})$ to $(H_\text{out}W_\text{out}, p_{(l)})$. From \eqref{eq:outer product conv}, we derive the per-sample gradient norm for the convolutional layers from the same formula as in \cite[Appendix F]{li2021large},
\begin{align}
\left\|\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}\right\|_\text{Fro}^2=\text{vec}(U(\a_i )U(\a_i)^\top)\text{vec}\left(F^{-1}\left(\frac{\partial \mathcal{L}}{\partial \mathbf{s}_i}\right)F^{-1}\left(\frac{\partial \mathcal{L}}{\partial \mathbf{s}_i}\right)^\top\right).
\label{eq:ghost norm conv}
\end{align}
\vspace{-0.2cm}
\section{Ghost clipping for Convolutional Layers}
\label{sec:ghost clipping}
\vspace{-0.2cm}
Leveraging our derivation in \eqref{eq:ghost norm conv}, we propose the ghost clipping to compute the clipped gradient without ever generating the per-sample gradients $\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}$. The entire procedure comprises the ghost norm computation and a second back-propagation, as demonstrated in \Cref{fig:flowcharts}.
\begin{figure}[!htp]
\centering
\includegraphics[scale=0.32]{images/diagram1.pdf}
\caption{Per-sample gradient clipping for convolutional layers. \textbf{Left: Opacus} = Back-propagation + Gradient instantiation + Weighted gradient. \textbf{Middle: FastGradClip} = Back-propagation + Gradient instantiation + Second back-propagation. \textbf{Right: Ghost clipping} = Back-propagation + Ghost norm + Second back-propagation. See \Cref{sec:complexity} for their complexity analysis.}
\label{fig:flowcharts}
\end{figure}
\vspace{-0.2cm}
\subsection{Ghost norm: computing gradient norm without the gradient}
The per-sample gradient norm is required to compute the per-sample $C_i$ in \eqref{eq:privatized grad}. While it is natural to instantiate the per-sample gradients and then compute their norms \cite{rochette2019efficient,lee2020scaling,opacus,de2022unlocking,mehta2022large}, this is neither always optimal nor necessary. Instead, we can leverage \eqref{eq:ghost norm conv}, the ghost norm, to compute the per-sample gradient norm while avoiding the possibly expensive per-sample gradient. Put differently, when $T=H_\text{out}W_\text{out}$ is small, computing $U(\a_i)U(\a_i)^\top$ and $F^{-1}\left(\frac{\partial \mathcal{L}}{\partial \mathbf{s}_i}\right)F^{-1}\left(\frac{\partial \mathcal{L}}{\partial \mathbf{s}_i}\right)^\top$ is cheap, but computing $F^{-1}\left(\frac{\partial\mathcal{L}}{\partial\mathbf{s}_i}\right)^\top U(\a_i)$ is expensive. We demonstrate the ghost clipping's superiority in complexity empirically in \Cref{tab:cifar10 fixed 256} and theoretically in \Cref{tab:complexity}.
\subsection{Second back-propagation: weighted loss leads to weight gradient}
We conduct a second back-propagation with the weighted loss $\sum_i C_i\mathcal{L}_i$ to derive the weighted gradient $\sum_i C_i\bm{g}_i$ in \eqref{eq:privatized grad}, which costs extra time. In contrast, Opacus \cite{opacus} and JAX \cite{de2022unlocking,subramani2021enabling} generate and store the per-sample gradients $\bm{g}_i$ for all $i\in[B]$, so that the weighted gradient is directly computable from the $\bm{g}_i$, trading memory for faster computation. However, in some cases, like \Cref{tab:imagenet} on ImageNet and \Cref{tab:cifar100ViT} on CIFAR100, we can use a larger batch size to compensate for the slowdown of the second back-propagation.
\vspace{-0.2cm}
\section{Mixed Ghost Clipping: To be a ghost or not, that is the question}
\vspace{-0.5cm}
\begin{algorithm}[H]
\caption{Mixed Ghost Clipping (single iteration)}\label{alg:dpsgd1}
\textbf{Parameters:} number of layers $L$, gradient clipping norm $R$.
\begin{algorithmic}[0]
\vspace{-0.4cm}
\Statex\tikzmark{begin}
\For{$l = 1,2, \ldots, L$}
\If{$2T_{(l)}^2<p_{(l)}d_{(l)}$}
\Statex {\hspace{1.1cm}}
$\mathbf{W}_{(l)}.\text{ghost\_norm}=\text{True}$
\Comment \textcolor{red}{Forward pass}
\EndIf
\Statex {\hspace{0.55cm}}
Compute $\a_{(l+1),i} = \phi(F(U(\a_{(l),i})\mathbf{W}_{(l)}+\b_{(l)}))$
\EndFor
\tikzmark{end}
\drawredbox
\Statex
Compute per-sample losses $\mathcal{L}_i$.
\vspace{-0.3cm}
\tikzmark{begin}
\For{$l = L,L-1, \ldots, 1$}
\Statex {\hspace{0.55cm}}
Compute $\frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l),i}}
= \frac{\partial \mathcal{L}}{\partial \mathbf{s}_{(l+1),i}}\mathbf{W}_{(l+1)}\circ\phi^{\prime}(\mathbf{s}_{(l),i})$
\Comment{\textcolor{cyan}{Mixed ghost norm in first back-propagation}}
\If{$\mathbf{W}_{(l)}.\text{ghost\_norm}=\text{True}$}
\Statex {\hspace{1.1cm}}
$\|\frac{\partial \mathcal{L}_i}{\partial W_{(l)}}\|_\text{Fro}^2=\text{vec}(U(\a_{(l),i} )U(\a_{(l),i})^\top)\text{vec}\left(\frac{\partial \mathcal{L}}{\partial F^{-1}(\mathbf{s}_{(l),i})}\frac{\partial \mathcal{L}}{\partial F^{-1}(\mathbf{s}_{(l),i})}^\top\right)$
\Else
\Statex {\hspace{1.1cm}}
$\frac{\partial \mathcal{L}_i}{\partial W_{(l)}}=F^{-1}(\frac{\partial \mathcal{L}}{\partial\mathbf{s}_{(l),i}})U(\a_{(l),i}) \longrightarrow \|\frac{\partial \mathcal{L}_i}{\partial W_{(l)}}\|_\text{Fro}^2$
\EndIf
\EndFor
\tikzmark{end}
\drawcyanbox
\Statex
Compute per-sample gradient norm $\|\frac{\partial \mathcal{L}_i}{\partial W}\|_\text{Fro}^2=\sum_l \|\frac{\partial \mathcal{L}_i}{\partial W_{(l)}}\|_\text{Fro}^2$
\Statex
Compute weighted loss $\mathcal{L}_\text{weighted}=\sum_i C(\|\frac{\partial \mathcal{L}_i}{\partial W}\|_\text{Fro};R) \cdot\mathcal{L}_i$
\Statex \textcolor{blue}{Second back-propagation} with $\mathcal{L}_\text{weighted}$ to generate $\sum_i C_i\frac{\partial \mathcal{L}_i}{\partial \mathbf{W}}$
\end{algorithmic}
\label{alg:mixed ghost}
\end{algorithm}
While the ghost norm offers a direct computation of the gradient norm at the cost of an indirect computation of the weighted gradient, ghost clipping alone may not be sufficient for efficient DP training, as we demonstrate in \Cref{tab:cifar10 fixed 256} and \Cref{fig:max bs and throughput}. In \Cref{tab:complexity}, we give the first fine-grained analysis of the space and time complexity of DP training algorithms. Our analysis gives the precise condition under which the per-sample gradient instantiation (adopted in Opacus \cite{opacus}) is more or less efficient than our ghost norm method. To take advantage of both methods, we propose the mixed ghost clipping method in \Cref{alg:mixed ghost}, which applies ghost or non-ghost clipping by a layerwise decision.
We highlight that the key reason for the success of the mixed ghost clipping method is its layerwise adaptivity to the dimension parameters $(p_{(l)}, d_{(l)}, T_{(l)},k_{H},k_{W})$, which vary widely across layers (see \Cref{fig:vgg11}). This variation results from the fact that images are non-sequential data, and that convolution and pooling can drastically change the size ($T=H_\text{out}W_\text{out}$) of the hidden features.
In the next two sections, we rigorously analyze the time and memory complexities of regular training and of DP training with ghost or non-ghost clipping.
\begin{remark}
In \Cref{alg:mixed ghost}, we present the mixed ghost clipping that prioritizes the space complexity by \eqref{eq:memory priority}. We also derive and implement a speed-priority version by comparing the time complexity of ghost norm and gradient instantiation in \Cref{tab:block complexity}. However, the efficiency difference is empirically insignificant.
\end{remark}
\subsection{Complexity analysis}
\label{sec:complexity}
We now break each clipping method into operation modules and analyze their complexities. A similar but coarser analysis in \cite{li2021large} only claims, for sequential layers, $O(BT^2)$ space complexity with ghost clipping and $O(Bpd)$ without it. Neither the time complexity nor convolutional layers were analyzed prior to this work.
\begin{table}[!htb]
\centering
\setlength\tabcolsep{2.5pt}
\begin{tabular}{c|c|c|c|c}
\hline
Complexity &Back-propagation&Ghost norm &Grad instantiation&Weighted grad \\\hline
Time &$2BTD(2p+1)$&$2BT^2(D+p+1)-B$&$2B(T+1)pD$&$2BpD$ \\\hline
Space&$BTp+2BTD+pD$&$B(2T^2+1)$&$B(pD+1)$&0 \\\hline
\end{tabular}
\caption{Complexities of operation modules in per-sample gradient clipping methods, contributed by a single 2D convolutional layer.}
\label{tab:block complexity}
\end{table}
Here $B$ is the batch size, $D=dk_H k_W$ where $d$ is the number of input channels, $k_H$ and $k_W$ are the kernel sizes, $p$ is the number of output channels, and $T=H_\text{out}W_\text{out}$. We leave the detailed complexity computation to \Cref{app:complexity}. Leveraging \Cref{tab:block complexity}, we give the complexities of the different clipping algorithms in \Cref{tab:complexity}.
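The per-module costs in \Cref{tab:block complexity} can be tabulated programmatically. The sketch below (the function name and the dictionary layout are our own illustration) simply evaluates the four expressions from the table for one convolutional layer:

```python
def module_complexities(B, T, p, d, k_H, k_W):
    """Leading-order (time, space) costs per operation module of per-sample
    gradient clipping on one 2D convolutional layer, following the table:
    D = d * k_H * k_W is the unfolded input dimension."""
    D = d * k_H * k_W
    return {
        "back_propagation":   (2 * B * T * D * (2 * p + 1), B * T * p + 2 * B * T * D + p * D),
        "ghost_norm":         (2 * B * T ** 2 * (D + p + 1) - B, B * (2 * T ** 2 + 1)),
        "grad_instantiation": (2 * B * (T + 1) * p * D, B * (p * D + 1)),
        "weighted_grad":      (2 * B * p * D, 0),
    }

# conv1 of VGG-11 on ImageNet: T = 224*224 is huge, so the ghost norm's
# O(2T^2) space cost dwarfs gradient instantiation's O(pD).
c = module_complexities(B=1, T=224 * 224, p=64, d=3, k_H=3, k_W=3)
assert c["ghost_norm"][1] > c["grad_instantiation"][1]
assert c["weighted_grad"][1] == 0  # weighted gradient needs no extra storage
```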
\begin{table}[!htb]
\centering
\begin{tabular}{c|c|c}
\hline
Complexity &Time&Space \\\hline\hline
Opacus \cite{opacus}&$6BTpD$&$B(pd+Tp+2TD)$*
\\\hline
FastGradClip \cite{lee2020scaling}&$10BTpD$&$B(pD+Tp+2TD)$\\\hline
Ghost clipping (ours) &$8BTpD+2BT^2(p+D)$&$B(2T^2+Tp+2TD)$ \\\hline
Mixed ghost clipping (ours)&see caption&$B(\min(2T^2, pD)+Tp+2TD)$
\\\hline
\end{tabular}
\caption{Complexity of different implementations of per-sample gradient clipping for a single 2D convolutional layer. Only the highest-order terms are listed. * indicates that Opacus stores the per-sample gradients of all layers, so a per-layer space complexity does not accurately characterize its memory burden, whereas the other methods only store the intermediate variables one layer at a time. The time complexity of mixed ghost clipping is between those of FastGradClip and ghost clipping, depending on which of $(2T^2,pD)$ is smaller.}
\label{tab:complexity}
\end{table}
\subsection{Layerwise decision in mixed clipping}
From the space complexity in \Cref{tab:complexity}, we derive the layerwise decision that selects the more memory-efficient of FastGradClip (gradient instantiation) and ghost clipping (ghost norm):
\begin{align}
\text{Choose ghost norm over per-sample gradient instantiation if } 2T^2<pD=pd k_H k_W.
\label{eq:memory priority}
\end{align}
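The decision rule above is a one-line comparison per layer. The sketch below (the function name is our own; the two checked layers use the conv1 and fc9 shapes of VGG-11 on ImageNet from \Cref{tab:layerwise decision}) illustrates it:

```python
def use_ghost_norm(H_out, W_out, p, d, k_H, k_W):
    """Memory-priority layerwise decision: prefer the ghost norm when its
    O(2T^2) per-sample space cost beats gradient instantiation's O(pD)."""
    T = H_out * W_out      # size of the output feature map
    D = d * k_H * k_W      # unfolded input dimension
    return 2 * T ** 2 < p * D

# conv1 of VGG-11 (224x224 output, 3 -> 64 channels, 3x3 kernel):
# 2T^2 ~ 5.0e9 vs pD ~ 1.7e3, so gradient instantiation is chosen.
assert not use_ghost_norm(224, 224, p=64, d=3, k_H=3, k_W=3)
# fc9 (T = 1): ghost norm cost is 2, far below pD ~ 1.0e8.
assert use_ghost_norm(1, 1, p=4096, d=25088, k_H=1, k_W=1)
```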
\begin{minipage}[b]{.35\textwidth}
\hspace{1cm}
\centering
\includegraphics[width=2.5cm,height=7cm]{images/vgg11.pdf}
\captionof{figure}{VGG-11 architecture on ImageNet ($224\times 224$).}
\label{fig:vgg11}
\end{minipage}
\hspace{0.3cm}
\begin{minipage}[b]{.64\textwidth}
\begin{tabular}{c|c|c}
\hline
& Ghost norm & Non-ghost norm \\ \hline
Space& \multirow{2}{*}{$2T_{(l)}^2=2H_\text{out}^2W_\text{out}^2$}& \multirow{2}{*}{$p_{(l)} d_{(l)}k_{H}k_{W}$} \\
Complexity&&\\\hline\hline
conv1&$5.0\times 10^9$ & \cellcolor{green!25}$\bm{1.7\times 10^3}$ \\ \hline
conv2&$3.0\times 10^8$ & \cellcolor{green!25}$\bm{7.3\times 10^4}$ \\ \hline
conv3&$2.0\times 10^7$ & \cellcolor{green!25}$\bm{2.9\times 10^5}$ \\ \hline
conv4&$2.0\times 10^7$ & \cellcolor{green!25}$\bm{5.8\times 10^5}$ \\ \hline
conv5&$1.2\times 10^6$ & \cellcolor{green!25}$\bm{1.1\times 10^6}$ \\ \hline
conv6&\cellcolor{green!25}$\bm{1.2\times 10^6}$ & $2.3\times 10^6$ \\ \hline
conv7&\cellcolor{green!25}$\bm{7.6\times 10^4}$ & $2.3\times 10^6$ \\ \hline
conv8&\cellcolor{green!25}$\bm{7.6\times 10^4}$ & $2.3\times 10^6$ \\ \hline
fc9&\cellcolor{green!25}$\bm{2}$ & $1.0\times 10^8$ \\ \hline
fc10&\cellcolor{green!25}$\bm{2}$ & $1.6\times 10^7$ \\ \hline
fc11&\cellcolor{green!25}$\bm{2}$ & $4.1\times 10^6$\\ \hline
\end{tabular}
\captionof{table}{Layerwise decision of mixed ghost clipping on VGG-11. A green background indicates the selected method.}
\label{tab:layerwise decision}
\end{minipage}
Therefore, our mixed ghost clipping is a hybrid of FastGradClip and ghost clipping (c.f. \Cref{fig:flowcharts}). We note that the decision \eqref{eq:memory priority} of the mixed ghost clipping depends on different dimensions: the ghost norm cost depends on the size of the hidden features (height $H$ and width $W$), which in turn depends on the kernel size, stride, dilation and padding (see \Cref{app:explain conv} for an introduction to convolution), while only the non-ghost clipping cost depends on the number of channels. In ResNet and VGG, the hidden feature size decreases with the layer depth, due to the shrinkage from the convolution and pooling operations; conversely, the number of channels increases in deeper layers.
\begin{remark}[Ghost clipping favors bottom layers]
As a consequence of the decreasing hidden feature size and increasing number of channels, there exists a depth threshold beyond which ghost clipping is preferred, i.e.\ in the bottom layers, where the savings in complexity are substantial. In \Cref{fig:vgg11} and \Cref{tab:layerwise decision}, as the layers of VGG-11 go deeper, the hidden feature size shrinks from $224\to 112\to\cdots\to 14$ while the number of channels increases from $3\to 64\to\cdots\to 512$.
\end{remark}
\vspace{-0.3cm}
\section{Performance}
\vspace{-0.3cm}
We compare our ghost clipping and mixed ghost clipping methods to state-of-the-art clipping algorithms, namely Opacus \cite{opacus} and FastGradClip \cite{lee2020scaling}, which are implemented in PyTorch. We are aware of, but do not compare to, implementations of these two algorithms in JAX \cite{jax2018github}, e.g. \cite{kurakin2022toward,subramani2021enabling,de2022unlocking}, so as to focus on the algorithms rather than the operational framework.
All experiments run on one Tesla V100 GPU (16GB RAM).
We highlight that switching from the regular training to DP training only needs a few lines of code using our privacy engine (see \Cref{app:privacy engine}). For CNNs, we use models from \url{https://github.com/kuangliu/pytorch-cifar} on CIFAR10 \cite{krizhevsky2009learning} and models from Torchvision \cite{pytorchvision} on ImageNet \cite{deng2009imagenet}. For ViTs, regardless of datasets, we use models from PyTorch Image Models \cite{rw2019timm}.
\vspace{-0.3cm}
\subsection{Time and memory efficiency (fixed batch size)}
\vspace{-0.2cm}
We first measure the time and space complexities when the physical batch size is fixed. Here we define the physical batch size (or virtual batch size) as the number of samples actually fed into memory, which differs from the logical batch size. For example, if we train with batch size 1000 but can only feed 40 samples to the GPU at a time, we back-propagate 25 times before updating the weights once. This technique is known as gradient accumulation and is widely applied in large-batch training, which particularly benefits the accuracy of DP training \cite{li2021large,kurakin2022toward,de2022unlocking,mehta2022large}.
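Gradient accumulation works because the gradient of the mean loss over the logical batch is the sum of suitably rescaled micro-batch contributions. A minimal framework-free sketch (least-squares loss, numpy; the function name and shapes are our own illustration) verifies this identity:

```python
import numpy as np

def accumulate_gradients(X, y, w, physical_bs):
    """Gradient accumulation: process the logical batch (X, y) in chunks of
    `physical_bs` samples, summing each micro-batch's share of the mean-loss
    gradient; one optimizer step would follow the full loop."""
    n = len(X)
    grad = np.zeros_like(w)
    for i in range(0, n, physical_bs):
        Xb, yb = X[i:i + physical_bs], y[i:i + physical_bs]
        grad += Xb.T @ (Xb @ w - yb) / n   # micro-batch contribution
    return grad

rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(1000, 5)), rng.normal(size=1000), np.zeros(5)
full = X.T @ (X @ w - y) / 1000            # one pass over the whole batch
assert np.allclose(accumulate_gradients(X, y, w, 40), full)
```

The same pattern applies in deep-learning frameworks, where repeated backward passes accumulate into the parameter gradients before a single weight update.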
\begin{table}[!htb]
\setlength\tabcolsep{3pt}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Dataset& Model \& \# Params&Package& Time (sec) / Epoch& Active Memory (GB)
\\\hline
\multirow{15}{*}{CIFAR10} &\multirow{5}{*}{CNN \cite{tramer2020differentially,papernot2020tempered}}&Opacus&12&1.37
\\\cline{4-5}
&&FastGradClip&11&0.94
\\\cline{4-5}
&&Ghost (ours)&11&2.47
\\\cline{4-5}
&0.551M &Mixed (ours)&7&0.79
\\\cline{4-5}
&&Non-DP&5&0.66
\\\cline{2-5}
&&Opacus&OOM / \textcolor{blue}{OOM} / \textcolor{red}{OOM}&OOM / \textcolor{blue}{OOM} / \textcolor{red}{OOM}
\\\cline{4-5}
&&FastGradClip& 45 / \textcolor{blue}{80} / \textcolor{red}{OOM}&6.32 / \textcolor{blue}{7.65} / \textcolor{red}{OOM}
\\\cline{4-5}
&ResNet 18 / \textcolor{blue}{34} / \textcolor{red}{50}&Ghost (ours)&59 / \textcolor{blue}{98} / \textcolor{red}{158}&4.00 / \textcolor{blue}{4.90} / \textcolor{red}{9.62}
\\\cline{4-5}
&11M / \textcolor{blue}{21M} / \textcolor{red}{23.5M}&Mixed (ours)&37 / \textcolor{blue}{66} / \textcolor{red}{119}&3.31 / \textcolor{blue}{4.13} / \textcolor{red}{9.62}
\\\cline{4-5}
&&Non-DP&14 / \textcolor{blue}{24} / \textcolor{red}{49}&3.30 / \textcolor{blue}{4.12} / \textcolor{red}{9.56}
\\\cline{2-5}
&&Opacus&OOM / \textcolor{blue}{OOM} / \textcolor{red}{OOM}&OOM / \textcolor{blue}{OOM} / \textcolor{red}{OOM}
\\\cline{4-5}
&&FastGradClip&18 / \textcolor{blue}{25} / \textcolor{red}{33}&5.17 / \textcolor{blue}{5.45} / \textcolor{red}{5.61}
\\\cline{4-5}
&VGG 11 / \textcolor{blue}{13} / \textcolor{red}{16}&Ghost (ours)&14 / \textcolor{blue}{25} / \textcolor{red}{29}&2.58 / \textcolor{blue}{3.30} / \textcolor{red}{3.41}
\\\cline{4-5}
&9M / \textcolor{blue}{9.4M} / \textcolor{red}{14.7M}
&Mixed (ours)&13 / \textcolor{blue}{18} / \textcolor{red}{23}&2.58 / \textcolor{blue}{2.76} / \textcolor{red}{2.84}
\\\cline{4-5}
&&Non-DP&5 / \textcolor{blue}{6} / \textcolor{red}{8}&2.54 / \textcolor{blue}{2.73} / \textcolor{red}{2.81}
\\\hline
\end{tabular}
\caption{Time and memory of selected models on CIFAR10, with physical batch size 256. Additional models are in \Cref{tab:cifar10 extend}. Out of memory (OOM) means the total memory exceeds 16GB.}
\label{tab:cifar10 fixed 256}
\end{table}
From \Cref{tab:cifar10 fixed 256} and the extended \Cref{tab:cifar10 extend}, we see a clear advantage of mixed ghost clipping: it incurs $\leq 1\%$ memory overhead over regular training, and is the fastest DP training algorithm. In contrast, on ResNet18, Opacus uses $5\times$ the memory and FastGradClip uses $2\times$. Even ghost clipping uses $1.2\times$ the memory of regular training, while being slower than both Opacus and FastGradClip. A similar phenomenon is observed on ImageNet in \Cref{tab:imagenet}.
\subsection{Maximum batch size and throughput}
Importantly, the speed efficiency in \Cref{tab:cifar10 fixed 256} can be further boosted if we use the saved memory to increase the batch size. To stress test the maximum physical batch size and the throughput of each clipping method, we train ResNet \cite{he2016deep}, VGG \cite{simonyan2014very}, MobileNet \cite{howard2017mobilenets}, ResNeXt \cite{xie2017aggregated}, AlexNet \cite{krizhevsky2014one}, Wide-ResNet \cite{zagoruyko2016wide}, DenseNet \cite{huang2017densely} and ViTs on CIFAR10 and ImageNet, as summarized partially in \Cref{fig:max bs and throughput} and \Cref{tab:imagenet}, respectively. For example, on VGG19 and CIFAR10, mixed ghost clipping has a maximum batch size $18\times$ bigger (thus $3\times$ faster) than Opacus, $3\times$ bigger (thus $1.7\times$ faster) than FastGradClip, and $2\times$ bigger (thus $1.3\times$ faster) than ghost clipping.
Similarly, on Wide-ResNet50 and ImageNet, mixed ghost clipping has a maximum batch size $5\times$ bigger than Opacus, and costs $<0.3\%$ more memory than non-private training.
\vspace{-0.2cm}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.48\linewidth]{cifar_resnet_memory.pdf}
\includegraphics[width=0.48\linewidth]{cifar_resnet_speed.pdf}
\vspace{-0.2cm}
\caption{Memory (left) and speed (right) comparison of DP clipping algorithms on CIFAR10.
}
\label{fig:max bs and throughput}
\end{figure}
\vspace{-0.3cm}
\subsection{Vision transformers with convolution on ImageNet scale}
\label{sec:vit}
\vspace{-0.2cm}
In addition to training large-scale CNNs such as ResNet152, we apply our mixed ghost clipping to train ViTs, which substantially outperform the existing SOTA on CIFAR10 and CIFAR100. Note that the ViTs are pretrained at the ImageNet scale, so we resize the CIFAR images (from $32\times 32$ pixels to $224\times 224$ pixels).
It is worth mentioning that ViT \cite{dosovitskiy2020image} was originally proposed as a substitute for CNNs; hence it and many of its variants contain no convolutional layers. Here we specifically consider the convolutional ViTs, including ScalableViT \cite{yang2022scalablevit}, XCiT \cite{ali2021xcit}, Visformer \cite{chen2021visformer}, CrossViT \cite{chen2021crossvit}, NesT \cite{zhang2022nested}, CaiT \cite{touvron2021going}, DeiT \cite{touvron2021training}, BEiT \cite{bao2021beit}, PiT \cite{heo2021rethinking}, and ConViT \cite{d2021convit}.
The performance of these ViTs on CIFAR10 and CIFAR100 is listed in \Cref{app:ViT} for a single-epoch DP training; several ViTs already beat the previous SOTA, even though we do not apply additional techniques as in \cite{de2022unlocking,mehta2022large} (e.g. learning rate schedules or random data augmentation).
\begin{table}[!htb]
\centering
\def0.9{0.9}
\begin{tabular}{lc@{\hskip 15pt}c@{\hskip 15pt}c}
\toprule [0.15em]
&
\multirow{2}{*}{$\varepsilon$}
&
\multicolumn{2}{c}{Test Accuracy (\%)} \\
\cmidrule{3-4}
& & CIFAR-10 & CIFAR-100 \\
\midrule [0.1em]
\multirow{2}{*}{Yu et al. \cite{yu2021not} (ImageNet1k)} & 1 & 94.3 & -- \\
& 2 & 94.8 & -- \\
\midrule
Tramer et al. \cite{tramer2020differentially} (ImageNet1k)& 2 & 92.7 & -- \\
\midrule
\multirow{2}{*}{De et al. \cite{de2022unlocking}} & 1 & 94.8 & 67.4\\
& 2 & 95.4 & 74.7 \\
\multirow{2}{*}{(ImageNet1k)}& 4 & 96.1 & 79.2 \\
& 8 & 96.6 & 81.8 \\
\midrule
\multirow{2}{*}{Our CrossViT base (104M params)} & 1 & 95.5 & 71.9 \\
& 2 & 96.1 & 74.3\\
\multirow{2}{*}{(ImageNet1k)}& 4 & 96.2 & 76.7\\
& 8 & 96.5 &77.8 \\
\midrule
\multirow{2}{*}{Our BEiT large (303M params)} & 1 & 96.7& 83.0 \\
& 2 & 97.1&86.2 \\
\multirow{2}{*}{(ImageNet21k)}& 4 & 97.2& 87.7 \\
& 8 & 97.4 &88.4 \\
\bottomrule [0.15em]
\end{tabular}
\caption{CIFAR-10 and CIFAR-100 test accuracies when fine-tuning with DP-SGD. We train CrossViT base for 5 epochs, lr 0.002, optim. We train BEiT large for 3 epochs lr 0.001. Here batch size is 1000 and clipping norm is 0.1. `( )' indicates the pretrained datasets.}
\label{table:imagenet_cifar_transfer}
\vspace{-0.7cm}
\end{table}
By training the best-performing ViTs in \Cref{tab:cifar10ViT} and \Cref{tab:cifar100ViT} for multiple epochs, we achieve a new SOTA under DP in \Cref{table:imagenet_cifar_transfer}, with substantial improvement especially under strong privacy guarantees (e.g. $\epsilon<2$). Our DP training is at most $2\times$ slower and $10\%$ more memory-expensive than non-private training, even on BEiT large, thus significantly improving on the $9\times$ slowdown reported in \cite{de2022unlocking}.
\vspace{-0.4cm}
\begin{figure}[!htb]
\centering
\hspace{-0.8cm}
\includegraphics[width=0.52\linewidth]{images/CIFAR100_ViT_memory.pdf}
\hspace{-0.7cm}
\includegraphics[width=0.52\linewidth]{images/CIFAR100_ViT_speed.pdf}
\hspace{-0.8cm}
\vspace{-0.2cm}
\caption{Memory (left) and speed (right) comparison of DP and non-DP training on CIFAR100 with convolutional ViTs. Note that CIFAR10 has an almost identical pattern.}
\label{fig:CIFAR ViT}
\end{figure}
\section{Discussion}
\vspace{-0.2cm}
We have shown that DP training can be efficient for large CNNs and ViTs with convolutional layers. For example, in comparison to non-private training, we reduce the training time to $<2\times$ and the memory overhead to $<10\%$ for all vision models examined (up to 303.4 million parameters), including BEiT, which achieves SOTA accuracy on CIFAR100 ($+15.6\%$ absolute at $\epsilon=1$). We have observed that for many tasks and large CNNs and ViTs, the memory overhead of DP training can be below $1\%$.
We emphasize that our DP training only improves the efficiency without affecting the accuracy, and is therefore generally applicable, e.g. with the SOTA data augmentations in \cite{de2022unlocking}. With efficient training algorithms, we look forward to applying DP CNNs to generation tasks \cite{goodfellow2014generative}, seq-to-seq learning \cite{gehring2017convolutional}, text classification \cite{zhang2015character}, reinforcement learning \cite{mnih2013playing}, and multi-modal learning. Further reducing the time complexity and prioritizing speed in DP training is another future direction.
In particular, our layerwise decision principle in \eqref{eq:memory priority} highlights the advantages of ghost clipping when $T=HW$ is small. This advocates the use of large kernel sizes in DP learning, as they shrink the hidden feature aggressively, and have been shown to be highly accurate on non-private tasks \cite{he2016deep,ding2022scaling,peng2017large}.
\bibliographystyle{abbrv}
\section{I. Introduction}
Recently, non-Hermitian physics has gained widespread attention, in contexts such as non-Hermitian linear response theory \cite{PCCZ}, non-Hermitian topological systems \cite{SZF,Ge:2019crj} and non-Hermitian holography \cite{Arean:2019pom,Liu:2020abb}. One of the primary motivations for these studies is that probability effectively becomes non-conserving in nature, due to the exchange of energy, particles, and information with external degrees of freedom outside the Hilbert space. In a non-Hermitian system passing through an exceptional point in the wave momentum, the corresponding eigenfrequencies change from real to complex numbers \cite{Bender:2007nj,NHOPTICS1,NHOPTICS2012,NHOPTICS18,NHOPTICS17}.
This strongly contradicts Hermiticity, one of the key principles of quantum mechanics, which ensures the conservation of probability in an isolated quantum system and validates the expectation value of the energy of a quantum state.
However, seminal work by C. Bender and S. Boettcher demonstrated that in non-Hermitian systems, a large class of non-conservative Hamiltonians can exhibit entirely real spectra as long as they commute with the parity-time (PT) operator \cite{Bender:1998ke}. Furthermore, it was found that a Hamiltonian with a real spectrum is pseudo-Hermitian.
Moreover, all the PT-symmetric Hamiltonians reported in the literature exhibit this property \cite{mostafa1,mostafa2,mostafa3}. Similarity transformations also enable one to construct non-Hermitian Hamiltonians with real spectra \cite{mostafa3,Zhao:2020xrt, Zhao:2020khc}.
In a quantum many-body system, the quantum two-level system can be simulated by coupling two copies of the Sachdev-Ye-Kitaev (SYK) model. The SYK model, well known as a disordered and strongly-coupled quantum system composed of Majorana fermions \cite{SY,K,Maldacena:2016upp}, has recently emerged as an exemplary model providing insight into the nature of non-Fermi liquids \cite{Phillips:2019qva}, quantum chaos \cite{Jensen:2016pah}, holography \cite{Maldacena:2016hyu,He:2021dhr}, strange metallic transport \cite{Sachdev:2015efa,Ge:2018lzo} and high-temperature superconductivity \cite{Cai:2018lsr,Salvati:2021eos}. SYK is closely related to two-dimensional dilaton gravity describing excitations above the near-horizon extremal black hole \cite{almheiri14,mertens16}. Therefore, an eternal traversable wormhole can be constructed by considering two copies of the SYK model coupled by a simple interaction. This model demonstrates that at low temperature, the coupling can drive a phase transition to a phase holographically dual to an eternal traversable wormhole with an $AdS_2$ throat \cite{Maldacena:2018lmt}. Conversely, at higher temperature, the system reduces to the gapless phase of two black holes \cite{Maldacena:2018lmt}. In a non-Hermitian setup, however, one may expect this gapped-gapless physical picture to change drastically. We will prove that, as far as the thermodynamic structure is concerned, non-Hermiticity nevertheless strengthens the wormhole-black hole physical picture.
Thus, combining non-Hermiticity with two coupled SYK models, we develop a novel non-Hermitian two-site SYK model. We first prove that the system yields a real energy spectrum. Furthermore, we show that the degree of entanglement, the low-energy effective action and the phase structure are independent of the non-Hermiticity.
As illustrated in Fig.\ref{fig:Figure phase}, even though the two SYK sites approach a ``ground state/excited state'' picture in the strongly non-Hermitian limit, the thermodynamic phase structure indicates three distinct regimes at different temperatures.
\begin{figure}[!t]
\label{figureone}
\centerline{\includegraphics[width=7.0cm]{phase.png}}
\caption{\label{fig:Figure phase} Sketched phase diagram for the physical picture. Left: We have a geometry connecting the two sides in the low temperature regime. The left Sachdev-Ye-Kitaev (SYK) and right SYK are in different states because of the non-Hermitian parameter.
Middle: An unstable geometry connecting two SYK sites. Right: Two separated SYK sites at high temperature, representing the gapless two black hole phases. The non-Hermitian parameter can change the states of the left and right SYK sites, which are marked in orange and green colors.}
\end{figure}
\section{II. Non-Hermitian two coupled SYK model}
We consider the non-Hermitian two coupled SYK model with the Hamiltonian
\begin{align}
H&=-\sum^N_{ijkl}J_{ijkl}\sum_{A=R,L}(c_1 C^{A\dag}_iC^{A\dag}_jC^{A}_kC^{A}_l\nonumber\\
&+ c_2 C^{A\dag}_iC^{A}_jC^{A\dag}_kC^{A}_l)+H_{int},\nonumber\\
H_{\rm int}&=i\mu\sum^N_i(e^{-2\alpha}C^{L\dag}_iC^{R}_i-e^{2\alpha}C^{R\dag}_iC^{L}_i),
\end{align}
where the couplings $J_{ijkl}$ are random real numbers and $c_1$ and $c_2$ are two constants. $A=L,R$ refers to the ``left" and ``right" sides of the two identical copies. We choose $c_1=2$ and $c_2=4$ in what follows. The random real numbers obey a Gaussian distribution and satisfy $J_{ijkl}=-J_{jikl}=-J_{ijlk}=J_{klij}$ with
$
\langle J_{ijkl}\rangle=0\,,~\,\langle J^2_{ijkl}\rangle=\frac{J^2}{8N^3}\,.
$
The parameter $\alpha$ is a real number controlling the strength of the non-Hermiticity, which is introduced by a non-Hermitian particle-hole similarity transformation; see Appendix A for details. By analytic continuation to an imaginary value $\alpha\rightarrow i\alpha$, one recovers the Hermitian Hamiltonian.
This Hamiltonian can be obtained by performing a non-Hermitian particle-hole similarity transformation on the original MQ model and dropping all the nonphysical terms. In the absence of the interaction term, the Hamiltonian describes two complex SYK (cSYK) models at zero chemical potential, with the two identical copies of Dirac fermions $L,~R$ referring to the ``left" and ``right" sides of the system, respectively. Also without $H_{int}$, the gravity dual of this Hamiltonian describes a two-sided $AdS_2$ space.
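The mechanism behind the real spectrum is that a similarity transformation leaves the spectrum invariant. A toy numerical check (not the SYK Hamiltonian itself; the $2\times 2$ matrix and the value of $a$ are arbitrary illustrations) conjugates a Hermitian matrix by a non-unitary $S=\mathrm{diag}(e^{a},e^{-a})$:

```python
import numpy as np

# A Hermitian 2x2 "Hamiltonian" conjugated by a non-unitary similarity
# transformation stays isospectral, so the resulting non-Hermitian H
# still has an entirely real spectrum.
a = 3.0
H0 = np.array([[1.0, 0.5], [0.5, -1.0]])      # Hermitian seed
S = np.diag([np.exp(a), np.exp(-a)])
H = S @ H0 @ np.linalg.inv(S)                  # non-Hermitian for a != 0

assert not np.allclose(H, H.conj().T)          # indeed non-Hermitian
eig = np.linalg.eigvals(H)
assert np.allclose(eig.imag, 0.0, atol=1e-6)   # spectrum is still real
assert np.allclose(np.sort(eig.real), np.sort(np.linalg.eigvals(H0).real),
                   atol=1e-6)                  # same eigenvalues as H0
```

Analytically continuing $a\rightarrow ia$ makes $S$ unitary and $H$ Hermitian, mirroring the $\alpha\rightarrow i\alpha$ limit above.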
\section{III. Energy spectrum and degree of entanglement}
The energy spectrum is an important feature of a non-Hermitian quantum system. We compute the energy spectrum using exact diagonalization techniques. The energy spectrum is real and independent of the non-Hermitian parameter $\alpha$, as demonstrated in Fig. \ref{fig:Figure 2}.
\begin{figure}[!t]
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{E1.png}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{E2.png}}
\centerline{(b)}
\end{minipage}
\caption{\label{fig:Figure 2} Plots of the spectrum with $\mu=0.15,N=8$ for the values of $\alpha=0$ (a) and for the values of $\alpha=3$ (b).}
\end{figure}
For non-Hermitian systems, we need to construct the ground state by introducing a biorthogonal set $\{|\psi^l_n\rangle,|\psi^r_n\rangle\}$ \cite{mostafa3,Zhao:2020xrt,Zhao:2020khc,Matzkin:2006}.
The right/left eigenstates are defined as
\begin{equation}
H|\psi^r_n\rangle=E_n|\psi^r_n\rangle\,,\,H^\dag|\psi^l_m\rangle=E^*_m|\psi^l_m\rangle\,.
\end{equation}
The eigenstates satisfy the following properties
\begin{equation}
\sum_n|\psi^l_n\rangle\langle\psi^r_n|=I\,,\,\langle\psi^l_n|\psi^r_m\rangle=\delta_{nm}\,.
\end{equation}
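The biorthogonality and completeness relations above can be verified numerically for any diagonalizable non-Hermitian matrix. A minimal numpy sketch (the $2\times 2$ matrix is an arbitrary illustration) builds the left eigenvectors from the inverse of the right-eigenvector matrix, which enforces $\langle\psi^l_n|\psi^r_m\rangle=\delta_{nm}$ by construction:

```python
import numpy as np

H = np.array([[1.0, 2.0], [0.5, -1.0]])   # toy non-Hermitian matrix
w, vr = np.linalg.eig(H)                   # right eigenvectors: H vr = vr diag(w)
vl = np.linalg.inv(vr).conj().T            # columns are left eigenvectors

# biorthonormality: <psi^l_n | psi^r_m> = delta_nm
assert np.allclose(vl.conj().T @ vr, np.eye(2))
# completeness: sum_n |psi^r_n><psi^l_n| = I
assert np.allclose(vr @ vl.conj().T, np.eye(2))
# left eigenvalue equation: H^dagger |psi^l_n> = E_n^* |psi^l_n>
for n in range(2):
    assert np.allclose(H.conj().T @ vl[:, n], w[n].conj() * vl[:, n])
```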
We impose constraints such that the system yields the same ground-state energy as that of \cite{Sahoo:2020unu}, by taking $H^\dag_{int}|\psi_0^l\rangle=-\mu N|\psi_0^l\rangle$, $H_{int}|\psi_0^r\rangle=-\mu N|\psi_0^r\rangle$ and $\langle \psi_0^l|\psi_0^r\rangle=1$.
Without loss of generality, the ground states are proposed as
\begin{eqnarray}
|\psi_0^r\rangle&=&\prod^N_{j}\frac{1}{\sqrt{2}}(\tilde{A}|1\rangle_{L,j}|0\rangle_{R,j}+i \tilde{B}|0\rangle_{L,j}|1\rangle_{R,j})\,,\label{eq:gs1}\\
|\psi_0^l\rangle&=&\prod^N_{j}\frac{1}{\sqrt{2}}(\tilde{C}|1\rangle_{L,j}|0\rangle_{R,j}+i \tilde{D}|0\rangle_{L,j}|1\rangle_{R,j})\,,\label{eq:gs2}
\end{eqnarray}
where the coefficients $\tilde{A}, \tilde{B}, \tilde{C}$ and $\tilde{D}$ satisfy the relations $\tilde{B}=\tilde{A} e^{2\alpha}$ and $\tilde{D}=\tilde{C} e^{-2\alpha}$. The constraint $\langle \psi_0^l|\psi_0^r\rangle=1$ further leads to $\tilde{A} \tilde{C}=\tilde{B} \tilde{D}=\frac{1}{2}$. The ground states consistently reduce to the ground state of the Hermitian case ($\alpha=0$) if we set
\begin{equation}
\tilde{A}=\frac{1}{\sqrt{2}}\,,\,\tilde{B}=\frac{e^{2\alpha}}{\sqrt{2}}\,,\,\tilde{C}=\frac{1}{\sqrt{2}}\,,\,\tilde{D}=\frac{e^{-2\alpha}}{\sqrt{2}}\,.
\end{equation}
If $\alpha\neq0$, $\langle \psi_0^l|\psi_0^l\rangle\neq1$ and $\langle \psi_0^r|\psi_0^r\rangle\neq1$.
In our non-Hermitian model, the degree of entanglement turns out to be
\begin{align}
P_E&=-{\rm tr}(\rho_L\log_2 \rho_L)=1\,,
\end{align}
which is the von Neumann entropy (in base 2) of the reduced density matrix
\begin{align}
\rho_{LR}&=|\psi_0^r\rangle\langle \psi_0^l|=\tilde{A} \tilde{C}|10\rangle\langle 01|+i \tilde{B} \tilde{C}|01\rangle\langle 01|\nonumber\\
&-i\tilde{A} \tilde{D}|10\rangle\langle 10|+\tilde{B} \tilde{D}|01\rangle\langle 10|\,.\\
\rho_{L}&=tr_R(\rho_{LR})=\left(
\begin{array}{cc}
\tilde{A} \tilde{C} & 0 \\
0 & \tilde{B} \tilde{D} \\
\end{array}
\right)
\,.
\end{align}
Since $P_E=1$ independently of $\alpha$, the ground states are maximally entangled between the two systems. Note that for large $|\alpha|$, the ground states (\ref{eq:gs1}) and (\ref{eq:gs2}) approach product states, but the degree of entanglement remains unchanged. That is to say
\begin{align}
&|\psi_0^r\rangle\rightarrow\prod^N_{j}\frac{1}{\sqrt{2}}|1\rangle_{L,j}|0\rangle_{R,j}\,,\,|\psi_0^l\rangle\rightarrow\prod^N_{j}\frac{i e^{-2\alpha}}{\sqrt{2}}|0\rangle_{L,j}|1\rangle_{R,j}\nonumber\\
&\rm{as}\,\alpha\rightarrow -\infty\, {\rm and} \nonumber\\
&|\psi_0^l\rangle\rightarrow\prod^N_{j}\frac{i}{\sqrt{2}}|0\rangle_{L,j}|1\rangle_{R,j}\,,\,|\psi_0^r\rangle\rightarrow\prod^N_{j}\frac{i e^{2\alpha}}{\sqrt{2}}|0\rangle_{L,j}|1\rangle_{R,j}\nonumber\\
&\rm{as}\,\alpha\rightarrow +\infty\,.
\end{align}
Therefore, the left and right SYK sites are no longer symmetric, as illustrated in Fig.\ref{fig:Figure phase}.
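The $\alpha$-independence of $P_E$ follows directly from $\tilde A\tilde C=\tilde B\tilde D=\frac{1}{2}$. A short numerical check (using the base-2 logarithm, so that a maximally entangled pair gives $P_E=1$) sweeps several values of $\alpha$:

```python
import numpy as np

# rho_L = diag(A*C, B*D); with B = A exp(2a), D = C exp(-2a) and A = C = 1/sqrt(2),
# the products A*C and B*D both equal 1/2 for EVERY alpha, so P_E = 1.
for alpha in [0.0, 1.0, 5.0, -3.0]:
    A = C = 1 / np.sqrt(2)
    B, D = A * np.exp(2 * alpha), C * np.exp(-2 * alpha)
    rho_L = np.diag([A * C, B * D])
    assert np.isclose(np.trace(rho_L), 1.0)            # properly normalized
    P_E = -np.sum(np.diag(rho_L) * np.log2(np.diag(rho_L)))
    assert np.isclose(P_E, 1.0)                        # maximal entanglement
```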
\section{IV. Low energy effective action}
In the low energy limit, the model simplifies due to the emergence of a conformal symmetry.
We demonstrate that the low-energy physics of our non-Hermitian Hamiltonian is also independent of the non-Hermitian parameter $\alpha$.
The retarded Green function of the non-Hermitian system is defined as
\begin{equation}
G_{AB}(\tau_1,\tau_2)=\frac{1}{N}\sum_n\langle\psi^l_n|TC^{A\dag}_i(\tau_1)C^{B}_i(\tau_2)|\psi^r_n\rangle\,,
\end{equation}
where $A,B=L,R$.
The saddle-point equations are invariant under the time reparametrization $\tau\rightarrow h(\tau)$ and the $U(1)$ symmetry, as in the complex SYK model \cite{Cai:2017vyk,Davison:2016ngz,Jian:2017unn}:
\begin{align}
\tilde G_{AB}(\tau_1,\tau_2)&=[h_A^\prime(\tau_1)h_B^\prime(\tau_2)]^{\Delta}G_{AB}\big(h(\tau_1),h(\tau_2)\big)\nonumber\\
&e^{i\phi_A(\tau_1)-i\phi_B(\tau_2)}\,,\nonumber\\
\tilde \Sigma_{AB}(\tau_1,\tau_2)&=[h_A^\prime(\tau_1)h_B^\prime(\tau_2)]^{1-\Delta}\Sigma_{AB}\big(h(\tau_1),h(\tau_2)\big)\nonumber\\
&e^{i\phi_B(\tau_2)-i\phi_A(\tau_1)}\,.\nonumber
\end{align}
In the absence of the interaction term, the Schwarzian effective action of the left or right copy turns out to be
\begin{align}
S_{A}&=-N\alpha_S\int d\tau\{\tanh\frac{h_A(\tau)}{2},\tau\}\nonumber\\
&+\frac{NK}{2}\int d\tau\bigg(\phi^\prime_A(\tau)+i\varepsilon_A h^\prime_A(\tau)\bigg)^2\,.
\label{eq:SA}
\end{align}
where $\varepsilon_A$ is related to the charge $Q_A$ with $A=L,R$, and $\alpha_S$ is determined by the four-point function calculation of the SYK model. Note that
\begin{equation}
\{h,\tau\}=\frac{h^{\prime\prime\prime}(\tau)}{h^\prime(\tau)}-\frac{3}{2}\bigg(\frac{h^{\prime\prime}(\tau)}{h^\prime(\tau)}\bigg)^2\,.\nonumber
\end{equation}
The effective action of the coupled part is written as
\begin{align}
S_{int}&=\frac{N\mu}{2}\int d\tau\bigg[\frac{bh^\prime_L(\tau)h^\prime_R(\tau)}{\cosh^2\frac{h_L(\tau)-h_R(\tau)}{2}}\bigg]^\Delta\nonumber\\
&\cosh(\varepsilon h_L(\tau)-\varepsilon h_R(\tau))\nonumber\\
&[e^{i(\phi_L-\phi_R)-2\alpha}+e^{-i(\phi_L-\phi_R)+2\alpha}]\,.\label{eq:Sint1}
\end{align}
The action has the global $SL(2)\times U(1)$ symmetry generated by
\begin{align}
&\delta h_L=\epsilon^0+\epsilon^+e^{ih_L}+\epsilon^-e^{-ih_L}\,,\nonumber\\
&\delta h_R=\epsilon^0-\epsilon^+e^{ih_R}-\epsilon^-e^{-ih_R}\,,\nonumber\\
&\delta\phi_A=-i\varepsilon\delta h_A+\epsilon\,.
\label{eq:generator}
\end{align}
The total action could be simplified to
\begin{align}
\frac{S}{N}&=-2\alpha_S\int d\tau\{\tan\frac{h(\tau)}{2},\tau\}+K\int d\tau\bigg(\phi^\prime(\tau)^2-\varepsilon^2h^\prime(\tau)^2\bigg)\nonumber\\
&+\frac{\mu}{2^{2(\Delta-1)}}\int d\tau\big(h^\prime(\tau)\big)^{2\Delta}\,,
\end{align}
with the solution
\begin{equation}
h_L=h_R=h\,,\,\phi_L=\phi_R-2i\alpha=\rm{const}\,.\label{sol}
\end{equation}
We derive the $SL(2)$ Noether charges in Appendix B. The $SL(2)$ Noether charge vanishes for the simple solution $h(\tau)=t^\prime\tau$, so the symmetry can be treated as a gauge symmetry. The solution (\ref{sol}) leads to $Q_{\pm}=0$, and
\begin{equation}
Q_0/N=2e^{-\phi}[-\phi^{\prime\prime}-e^{2\phi}+\Delta\mu e^{2\Delta\phi}]=0\,,
\end{equation}
by introducing $\phi=\log h^\prime$. We can derive the equations of motion from the action of a non-relativistic particle in a potential
\begin{equation}
S=N\int du\bigg[(\phi^\prime)^2-\big(e^{2\phi}-\mu e^{\phi/2}\big)\bigg]\,.
\end{equation}
The effective potential is independent of the non-Hermitian parameter $\alpha$ and is the same as that of the MQ model \cite{Maldacena:2018lmt}. One can therefore conclude that there is an $\alpha$-independent energy gap at low energies.
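As a quick check (assuming the conformal dimension $\Delta=1/4$, the value for which $\Delta\mu e^{2\Delta\phi}$ in the $Q_0$ constraint matches the $\mu e^{\phi/2}$ term of the particle action above), minimizing the effective potential reproduces the familiar gap scaling:

```latex
V(\phi)=e^{2\phi}-\mu e^{\phi/2},\qquad
V'(\phi_*)=2e^{2\phi_*}-\frac{\mu}{2}\,e^{\phi_*/2}=0
\;\Rightarrow\; e^{3\phi_*/2}=\frac{\mu}{4}\,,
```

so the static solution sits at $h'=e^{\phi_*}=(\mu/4)^{2/3}$, consistent with the $Q_0$ constraint at $\phi''=0$. The low-energy gap therefore scales as $\mu^{2/3}$ at small $\mu$, with no dependence on $\alpha$.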
We can thus add a boundary interaction to the bulk action
\begin{equation}
S_{int}=g\sum^{N}_{i=1}\int du\bigg(e^{-2\alpha}O^{i\dag}_L(u)O^i_R(u)-e^{2\alpha}O^{i\dag}_R(u)O^i_L(u)\bigg)\,,\label{eq:Sint2}
\end{equation}
where $O$ is a set of $N$ operators with dimension $\Delta$ and $g$ is proportional to the coupling $\mu$. When $\alpha$ and $g$ are small, the interaction (\ref{eq:Sint2}) corresponds to the interaction term of the low energy effective action in Eq. (\ref{eq:Sint1}). The couplings of the left and right black holes, $(1-2\alpha)O^{i\dag}_LO^{i}_R$ and $(1+2\alpha)O^{i\dag}_RO^{i}_L$, are not symmetric. The two sides of $AdS_2$ are directly coupled by the double trace deformation. Since $e^{2\alpha}$ and $e^{-2\alpha}$ are always positive, the double trace interaction generates negative null energy in the bulk without violating causality, as in \cite{Gao:2016bin}. Therefore the quantum entangled states at the left and right boundaries remain connected, and a pair of infalling particles can traverse the wormhole from one side of the horizon to the other.
\section{V. Thermodynamic phase structure beyond the low energy limit}
At finite temperature, the retarded Green function receives a significant contribution from the non-Hermitian parameter $\alpha$, but the overall thermodynamic phase structure is unchanged by $\alpha$.
The effective action can be obtained as,
\begin{align}
\frac{S_{eff}}{N}&=-\log\det(\sigma_{AB}-\Sigma_{AB})-\int d\tau_1d\tau_2\bigg(\Sigma_{BA}(\tau_2,\tau_1)\nonumber\\
&G_{AB}(\tau_1,\tau_2)+\frac{36}{4}J^2G^2_{AB}(\tau_1,\tau_2)G^2_{BA}(\tau_2,\tau_1)\bigg), \label{eq:Seff}
\end{align}
where
\begin{equation}
\sigma_{AB}=\begin{pmatrix}
\partial_\tau & i\mu e^{-2\alpha} \\
-i\mu e^{2\alpha} & \partial_\tau
\end{pmatrix}.
\end{equation}
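As a consistency check (this step is not spelled out in the text), inverting $\sigma_{AB}$ in Matsubara-frequency space, where $\partial_\tau\rightarrow -i\omega_n$, gives the free propagators
\begin{align*}
G^{(0)}(i\omega_n)=\sigma^{-1}(i\omega_n)=\frac{1}{-\omega_n^2-\mu^2}
\begin{pmatrix}
-i\omega_n & -i\mu e^{-2\alpha} \\
i\mu e^{2\alpha} & -i\omega_n
\end{pmatrix},
\end{align*}
which coincide with the $\Sigma_{AB}=0$ limit of the saddle-point equations (\ref{eq:SDeq}).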
After performing a Fourier transformation
\begin{equation}
f(\tau)=\frac{1}{\beta}\sum_{\omega_n}e^{-i\omega_n\tau}f(\omega_n)\,,\,f(\omega_n)=\int^\beta_0d\tau e^{i\omega_n\tau}f(\tau)\,\nonumber,
\end{equation}
the saddle-point equations can be written as,
\begin{align}
&\Sigma_{AB}(\tau_1,\tau_2)=-36J^2G^2_{AB}(\tau_1,\tau_2)G_{BA}(\tau_2,\tau_1)\,,\nonumber\\
&G_{LL}(i\omega_n,\alpha)=\frac{-i\omega_n-\Sigma_{LL}(i\omega_n,\alpha)}{D(i\omega_n,\alpha)}\,,\nonumber\\
&G_{RR}(i\omega_n,\alpha)=\frac{-i\omega_n-\Sigma_{RR}(i\omega_n,\alpha)}{D(i\omega_n,\alpha)}\,,\nonumber\\
&G_{LR}(i\omega_n,\alpha)=\frac{-i\mu e^{-2\alpha}+\Sigma_{LR}(i\omega_n,\alpha)}{D(i\omega_n,\alpha)}\,,\nonumber\\
&G_{RL}(i\omega_n,\alpha)=\frac{i\mu e^{2\alpha}+\Sigma_{RL}(i\omega_n,\alpha)}{D(i\omega_n,\alpha)}\,,\nonumber\\
&D(i\omega_n,\alpha)=\bigg(-i\omega_n-\Sigma_{LL}\bigg)\bigg(-i\omega_n-\Sigma_{RR}\bigg)\nonumber\\
&+\bigg(i\mu e^{-2\alpha}-\Sigma_{LR}\bigg)\bigg(i\mu e^{2\alpha}+\Sigma_{RL}\bigg)\,,\label{eq:SDeq}
\end{align}
with the Matsubara frequency $\omega_n=2\pi(n+\frac{1}{2})/\beta$.
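For concreteness, here is a minimal sketch of how Eq. (\ref{eq:SDeq}) can be solved numerically (a standard damped fixed-point iteration between the Matsubara-frequency and imaginary-time representations; the grid size, damping, temperature and couplings below are illustrative choices, not the values used for the figures):

```python
import numpy as np

def solve_sd(beta=2.0, J=1/6, mu=0.2, alpha=0.2, Nw=64, iters=400, damp=0.3):
    """Damped fixed-point iteration for the saddle-point equations:
    invert for G(i w_n) at fixed Sigma, transform to imaginary time,
    update Sigma_AB(tau) = -36 J^2 G_AB(tau)^2 G_BA(-tau), transform back."""
    n = np.arange(-Nw // 2, Nw // 2)
    wn = 2 * np.pi * (n + 0.5) / beta               # fermionic Matsubara frequencies
    tau = beta * (np.arange(Nw) + 0.5) / Nw         # imaginary-time grid on (0, beta)
    to_tau = np.exp(-1j * np.outer(tau, wn)) / beta      # G(i w_n) -> G(tau)
    to_w = np.exp(1j * np.outer(wn, tau)) * (beta / Nw)  # Sigma(tau) -> Sigma(i w_n)
    S = {k: np.zeros(Nw, complex) for k in ("LL", "RR", "LR", "RL")}
    G_old, res = None, np.inf
    for _ in range(iters):
        D = ((-1j * wn - S["LL"]) * (-1j * wn - S["RR"])
             + (1j * mu * np.exp(-2 * alpha) - S["LR"])
             * (1j * mu * np.exp(2 * alpha) + S["RL"]))
        G = {"LL": (-1j * wn - S["LL"]) / D,
             "RR": (-1j * wn - S["RR"]) / D,
             "LR": (-1j * mu * np.exp(-2 * alpha) + S["LR"]) / D,
             "RL": (1j * mu * np.exp(2 * alpha) + S["RL"]) / D}
        if G_old is not None:
            res = max(np.max(np.abs(G[k] - G_old[k])) for k in G)
        G_old = G
        g = {k: to_tau @ G[k] for k in G}            # G_AB(tau)
        # antiperiodicity: G(-tau_j) = -G(beta - tau_j) = -g[::-1][j] on this grid
        sig = {ab: -36 * J**2 * g[ab]**2 * (-g[ba][::-1])
               for ab, ba in (("LL", "LL"), ("RR", "RR"), ("LR", "RL"), ("RL", "LR"))}
        S = {k: (1 - damp) * S[k] + damp * (to_w @ sig[k]) for k in S}
    return wn, G, res
```

On the output one can verify directly the exact relations $G_{LL}=G_{RR}$ and $G_{LR}(i\omega_n,\alpha)=-G_{RL}(i\omega_n,-\alpha)$; lower temperatures (larger $\beta$), as used in the figures, require more frequencies and iterations.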
\begin{figure}[!t]
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{GF1.pdf}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{GF2.pdf}}
\centerline{(b)}
\end{minipage}
\vfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.6cm]{GF3.pdf}}
\centerline{(c)}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.6cm]{GF4.pdf}}
\centerline{(d)}
\end{minipage}
\caption{\label{fig:Figure 3} (a) Green function with $\alpha=0,\pm 0.3$ at temperature $T=0.01$. (b) Green function with $\alpha=0.2$ at low temperature $T=0.005$ or high temperature $T=0.05$. (c) Green function with $\alpha=10$ at low temperature $T=0.005$ or high temperature $T=0.05$. (d) Green function with $\alpha=-10$ at low temperature $T=0.005$ or high temperature $T=0.05$.}
\end{figure}
The numerical results in Fig. \ref{fig:Figure 3}(a) show that $G_{RR}(i\omega_n,\alpha)=G_{LL}(i\omega_n,\alpha)$ and $G_{LR}(i\omega_n,\alpha)=-G_{RL}(i\omega_n,-\alpha)$. When $\alpha=0$, the model recovers the pseudo-complex SYK model at zero chemical potential \cite{Sahoo:2020unu}. The Green functions decay exponentially
\begin{equation}
G_{ab}(\tau)\sim e^{-E_{gap}\tau}
\end{equation}
(see appendix C) within a certain $\alpha$ region at low temperature, while at high temperatures the correlators show power-law, SYK-like decay, as shown in Fig. \ref{fig:Figure 3}(b). The Green functions can be considered as an order parameter. When $\alpha$ is large enough, the off-diagonal Green function decays exponentially at both low ($T=0.005$) and high ($T=0.05$) temperature, and $E_{gap}$ decreases as the temperature increases from $T=0.005$ to $T=0.05$ (see Fig. \ref{fig:Figure 3}(c) and \ref{fig:Figure 3}(d)). According to the approximate behavior of the saddle-point equations Eq. (\ref{eq:SDeq}), as $\alpha\rightarrow+\infty$ the off-diagonal Green function $G_{RL}\thicksim-\frac{1}{\Sigma_{LR}}$ dominates, while as $\alpha\rightarrow-\infty$ the term $G_{LR}\thicksim-\frac{1}{\Sigma_{RL}}$ dominates. The approximate solutions are indicative of decoupled SYK behavior in the IR limit ($G\thicksim-\frac{1}{\Sigma}$). The results with $\alpha=\pm 10$ support this statement numerically in Fig. \ref{fig:Figure 3}(c) and \ref{fig:Figure 3}(d).
We now evaluate the free energy of this non-Hermitian coupled model. Substituting the saddle-point solutions into the action in Eq. (\ref{eq:Seff}), following the method of \cite{Cao:2021upq}, we obtain the free energy
\begin{align}
\frac{F}{N}=&-T\frac{\log Z}{N}=T\frac{S_{eff}}{N}\nonumber\\
=&-T\bigg[2\log 2+\sum_{\omega_n}\log\frac{D(i\omega_n,\alpha)}{(i\omega_n)^2}+\sum_{\omega_n}\bigg(\frac{3}{4}\Sigma_{LL}(i\omega_n,\alpha)\nonumber\\
&G_{LL}(i\omega_n,\alpha)+\frac{3}{4}\Sigma_{RR}(i\omega_n,\alpha)G_{RR}(i\omega_n,\alpha)+\frac{3}{4}\Sigma_{LR}(i\omega_n,\alpha)\nonumber\\
&G_{RL}(i\omega_n,\alpha)+\frac{3}{4}\Sigma_{RL}(i\omega_n,\alpha)G_{LR}(i\omega_n,\alpha)\bigg)\bigg]\,.\nonumber
\end{align}
We calculate the free energy by decreasing the temperature from a high to a low value and then increasing it back to the high value, at non-zero $|\alpha|$.
The free energy as a function of temperature is plotted in Fig. \ref{fig:Figure 5}.
\begin{figure}[!t]
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{F1.png}}
\centerline{(a)}
\end{minipage}
\hfill
\begin{minipage}{0.48\linewidth}
\centerline{\includegraphics[width=4.5cm]{F2.png}}
\centerline{(b)}
\end{minipage}
\caption{\label{fig:Figure 5} The free energy as a function of $T$ for different couplings $\mu$, with $|\alpha|=0,10$ and $J=1/6$. Starting from high temperature, we decrease the temperature to a low value and then increase it back to the high-temperature value.}
\end{figure}
The free energy obtained in Fig. \ref{fig:Figure 5} is analogous to the free energy of the pseudo-complex SYK model with a Hermitian coupling term in Refs.~\cite{Sahoo:2020unu,Ferrari:2019ogc,Garcia-Garcia:2020vyr}. Noticeably, our numerical results show that the free energy is independent of the non-Hermitian parameter $\alpha$. As mentioned previously, the term $G_{LR}\thicksim-\frac{1}{\Sigma_{RL}}$ does not vanish when $\alpha\rightarrow-\infty$, and $G_{RL}\thicksim-\frac{1}{\Sigma_{LR}}$ does not vanish when $\alpha\rightarrow+\infty$. Moreover, the Green functions and the self-energies satisfy $\Sigma_{LR}(i\omega_n,\alpha)=-\Sigma_{RL}(i\omega_n,-\alpha)$. Consequently, the free energy is unchanged even as $\alpha\rightarrow\pm\infty$. However, for non-zero $\alpha$, the coupling of the left and right black holes is not symmetric.
There is a first-order phase transition from the low-temperature traversable wormhole phase to the high-temperature two-black-hole phase, ending at a second-order critical point $(T_c=0.25, \mu_c=0.7)$, which is not influenced by the non-Hermitian parameter. The almost vanishing slope of the free energy is a signature of the gapped wormhole phase, while the constant negative slope is a signature of the black hole phase \cite{Garcia-Garcia:2020vyr}.
\section{VI. Conclusion and discussion}
We have constructed a novel non-Hermitian two coupled SYK model yielding a real energy spectrum. This is a pseudo-Hermitian Hamiltonian, where the non-Hermiticity is reflected in the coupling between two copies of the pseudo-complex SYK model. The ground states of the total Hamiltonian receive contributions from the non-Hermitian parameter. In the strong non-Hermitian limit, the wave function $|\psi_0^r\rangle$ approaches the ``ground states'' on the left and ``excited states'' on the right, or vice versa. The effective action and the free energy are $\alpha$-independent, although the left and right states are actually $\alpha$-dependent. The low energy analysis further reveals an $\alpha$-independent energy gap.
However, a key observation is that the off-diagonal Green's functions $G_{LR}$ and $G_{RL}$ are no longer symmetric and strongly $\alpha$-dependent. Note that the transmission amplitude of particles across the traversable wormhole is proportional to the retarded Green's function. Thus, this may elucidate the observable aspects of the dynamics.
\section*{Acknowledgements}
We would like to thank Matteo Baggioli, A.~M.~Garc\'\i{}a-Garc\'\i{}a and Su-Peng Kou for valuable comments and discussions. This work is supported by NSFC China (Grants No. 11805117 and No. 11875184).
\section{Introduction}
Stochastic partial differential equations (SPDEs) form a flexible class of models for space-time data. They combine phenomena such as diffusion and transport that occur naturally in many processes, but also include random forcing terms, which may arise from microscopic scaling limits or account for model uncertainty. Quantifying the size of these different effects and testing for their presence from data is an important step in model validation.
Suppose that $X=(X(t))_{0\leq t\leq T}$ solves the linear parabolic SPDE
\begin{equation}
\label{eq:SPDE}
dX(t) = A_\theta X(t)dt+ dW(t), \quad 0\leq t\leq T,
\end{equation}
on an open, bounded and smooth domain $\Lambda\subset\mathbb{R}^d$ with some initial value $X_0$, a space-time white noise $dW$ and a second order elliptic operator
\begin{align}
A_{\theta} = \sum_{i=1}^p \theta_i A_{i} + A_0\label{eq:generalA}
\end{align}
satisfying zero Dirichlet boundary conditions. The $A_i$ are known differential operators of differential order $n_i$ and we aim at recovering the unknown parameter $\theta\in\R^p$. A prototypical example is
\begin{align}\label{eq:canonical:model}
A_{\theta}=\theta_1\Delta+\theta_2(b\cdot\nabla)+\theta_3,\qquad \theta\in(0,\infty)\times\R\times(-\infty,0],
\end{align}
with diffusivity, transport and reaction coefficients $\theta_1$, $\theta_2$, $\theta_3$ and a known unit vector $b\in\R^d$. Equations such as \eqref{eq:SPDE} are also called stochastic advection–diffusion equations and often serve as building blocks for more complex models, with applications in different areas such as neuroscience \cite{ Tuckwell2013, sauer_analysis_2016, walsh_stochastic_1981}, biology \cite{altmeyer_parameter_2020, alonso2018modeling}, spatial statistics \cite{sigrist_stochastic_2015, liu_statistical_2021} and data assimilation \cite{llopis2018particle}.
While the estimation of a scalar parameter in front of the highest order operator $A_i$ is well studied in the literature \cite{huebner_asymptotic_1995, kriz_central_2018, cialenco_drift_2019, cialenco2021statistical, gugushvili2020bayesian}, little is known about estimating the lower order coefficients or the full multivariate parameter $\theta$. Relying on discrete space-time observations $X(t_k,x_j)$ in case of \eqref{eq:canonical:model} and in dimension $d=1$, \cite{bibinger_volatility_2020, hildebrandt_parameter_2019, tonaki2022parameter} have analysed power variations and contrast estimators. For two parameters in front of operators $A_1$ and $A_2$, \cite{lototsky_parameter_2003} computed the maximum likelihood estimator from $M$ spectral measurements $(\sc{X(t)}{e_j})_{0\leq t\leq T}$, $j=1,\dots,M$, where the $e_j$ are the eigenfunctions of $A_{\theta}$. This leads as $M\rightarrow\infty$ to rates of convergence depending on the differential order of the operators $A_1$, $A_2$, but is restricted to domains and diagonalisable operators with known $e_j$, independent of $\theta$. In particular, in the spectral approach there is no known estimator for the transport coefficient $\theta_2$ in \eqref{eq:canonical:model}. Estimators for nonlinearities or noise specifications are studied e.g. by \cite{chong_high-frequency_2020, hildebrandt2021nonparametric, gaudlitz2022estimation, benth2022weak}.
In contrast, we construct an estimator $\hat{\theta}_{\delta}$ of $\theta$ on general domains and arbitrary possibly anisotropic $A_{\theta}$ from local measurement processes $X_{\delta,k}=(\sc{X(t)}{K_{\delta,x_k}})_{0\leq t\leq T}$, $X^{A_i}_{\delta,k}=(\sc{X(t)}{A^*_i K_{\delta,x_k}})_{0\leq t\leq T}$ for $i=0,\dots,p$ and locations $x_1,\dots,x_M\in\Lambda$. The $K_{\delta,x_k}$, also known as \emph{point spread functions} in optical systems \cite{aspelmeier_modern_2015,backer_extending_2014}, are compactly supported functions on subsets of $\Lambda$ with radius $\delta>0$ and centered at the $x_k$. They are part of the observation scheme and describe the physical limitation that point sources $X(t_k,x_j)$ typically can only be measured up to a convolution with the point spread function. Local measurements were introduced in a recent contribution by \cite{altmeyer_nonparametric_2020} to demonstrate that a nonparametric diffusivity can already be identified at $x_k$ from the spatially localised information $X_{\delta,k}$ as $\delta\rightarrow 0$ with $T>0$ fixed. See also \cite{altmeyer_parameterSemi_2020, altmeyer_parameter_2020} for robustness to semilinear equations and different noise configurations besides space-time white noise.
Let us briefly describe our main contributions. Our first result extends the estimator $\hat{\theta}_{\delta}$ and the CLT of \cite{altmeyer_nonparametric_2020} to obtain asymptotic normality of
\begin{equation*}
(M^{1/2}\delta^{1-n_i}(\hat{\theta}_{\delta,i}-\theta_{i}))_{i=1}^p,\quad \delta\rightarrow 0,
\end{equation*}
with $M= M(\delta)$ measurements. In particular, this yields the convergence rates $M^{1/2}\delta^{1-n_i}$ for $\theta_i$, with the best rate for diffusivity terms and the worst rate for reaction terms. We then turn to the problem of establishing optimality of these rates in case of \eqref{eq:canonical:model}. To achieve this, we compute the reproducing kernel Hilbert space (RKHS) of the Gaussian measures induced by the laws of $X$ and of the local measurements. These results are used to derive minimax lower bounds, implying that the rates in the CLT are indeed optimal. In addition, our lower bounds also provide conditions under which consistent estimation is impossible. Combining these conditions with our CLT, we deduce that for general point spread functions $K_{\delta,x_k}$ with non-intersecting supports, consistent estimation is possible if and only if $M^{1/2}\delta^{1-n_i}\rightarrow\infty$. In \eqref{eq:canonical:model}, we have $n_2=1$ and $n_3=0$, meaning that consistent estimation of $\theta_2$ necessarily requires $M\rightarrow\infty$, and $\theta_3$ cannot be estimated consistently unless $M^{1/2}\delta\rightarrow\infty$. In particular, $\theta_3$ cannot be estimated consistently in $d=1$, while $d=2$ appears as an interesting boundary case where consistency depends on regularity properties of point spread functions. Conceptually, spectral measurements can be obtained from local measurements approximately by a discrete Fourier transform and we indeed recover the rates of convergence of \cite{huebner_asymptotic_1995} by taking $M$ of maximal size $\delta^{-d}$.
The proofs for the CLT and the lower bound are involved due to the complex information geometry arising from non-standard spatial asymptotics $\delta\rightarrow 0$ and the correlations between measurements at different locations. For instance, we introduce a novel lower bound scheme for Gaussian measures by relating the Hellinger distance of their laws to properties of their RKHS. This is different from the lower bound approach of \cite{altmeyer_nonparametric_2020} for $M=1$ and paves the way to rigorous lower bounds for each coefficient and an arbitrary number of measurements. One of our key results states that the RKHS of the Gaussian measure induced by $X$ with $A_\theta=\Delta$ consists of all $h\in L^2([0,T];L^2(\Lambda))$ with $\Delta h,h'\in L^2([0,T];L^2(\Lambda))$ and its squared RKHS norm is upper bounded by an absolute constant times
\begin{align*}
\|\Delta h\|_{L^2([0,T];L^2(\Lambda))}^2 +\|h\|_{L^2([0,T];L^2(\Lambda))}^2+\|h'\|_{L^2([0,T];L^2(\Lambda))}^2.
\end{align*}
This formula generalises the finite-dimensional Ornstein-Uhlenbeck case \cite{MR3024389}, and provides a route to obtain the RKHS of local measurements as linear transformations of $X$. To the best of our knowledge this result has not been stated before in the literature, and may be of independent interest, e.g. in constructing Bayesian procedures with Gaussian process priors, cf. \cite{van_der_vaart_rates_2008}.
The paper is organised as follows. Section \ref{sec:main} deals with the local measurement scheme, the construction of our estimator and the CLT. Section \ref{sec:optimality} presents the lower bounds for the rates established in the CLT, while Section \ref{Sec:RKHS:SPDE} addresses the RKHS of $X$ and of the local measurements. Section \ref{sec:examples} covers model examples, applications to inference and the boundary case for estimating zero order terms in $d=2$. All proofs are deferred to Section \ref{sec:proofs} and Appendix \ref{app:additional:proofs}.
\subsection{Basic notation}
Throughout, we work on a filtered probability space $(\Omega, \mathcal{F},(\mathcal{F}_t)_{0\leq t\leq T},\P)$. We write $a\lesssim b$ if $a\leq Cb$ for a universal constant $C$ not depending on $\delta$, but possibly depending on other quantities such as $T$ and $\Lambda$. Unless stated otherwise, all limits are understood as $\delta\rightarrow 0$ with non-decreasing $M=M(\delta)$ possibly depending on~$\delta$.
The Euclidean inner product and distance of two vectors $a,b\in\R^p$ is denoted by $a\cdot b$ and $|b-a|$. We write $\|\cdot\|_{\operatorname{op}}$ for the operator norm of a matrix. For a multi-index $\alpha=(\alpha_1,\dots,\alpha_d)$ let $D^{\alpha}$ denote the $\alpha$-th weak partial derivative operator of order $|\alpha|=\alpha_1+\dots+\alpha_d$. The gradient, divergence and Laplace operators are denoted by $\nabla$, $\nabla\cdot$ and $\Delta$.
For an open set $U\subset\R^d$, $L^2(U)$ is the usual $L^2$-space with inner product $\sc{\cdot}{\cdot}=\sc{\cdot}{\cdot}_{L^2(U)}$. Let $H^k(U)$ denote the usual Sobolev spaces and let $H_0^1(U)$ be the completion of the space of smooth compactly supported functions $C_c^{\infty}(U)$ relative to the $H^1(U)$-norm.
For a Hilbert space $\mathcal{H}$, the space $L^2([0,T];\mathcal{H})$ consists of all measurable functions $h:[0,T]\rightarrow \mathcal{H}$ with $\int_0^T\|h(t)\|_{\mathcal{H}}^2dt<\infty$. We write $\norm{T}_{\operatorname{HS}(\mathcal{H}_1,\mathcal{H}_2)}$ for the Hilbert-Schmidt norm of a linear operator $T:\mathcal{H}_1\rightarrow \mathcal{H}_2$ between two Hilbert spaces $\mathcal{H}_1,\mathcal{H}_2$.
\section{Joint parameter estimation}\label{sec:main}
\subsection{Setup}
For an unknown parameter $\theta\in\Theta\subseteq\R^p$ let the operator $A_{\theta}$ be as in \eqref{eq:generalA} and suppose for $i=0,\dots,p$ that $A_i = \sum_{|\alpha|\leq 2} a^{(i)}_{\alpha}D^{\alpha}$ with known $a^{(i)}_{\alpha}\in\R$. Let $n_i= \operatorname{ord}(A_i)\in\{0,1,2\}$ be their differential orders. We suppose that $A_{\theta}$ has domain $H^1_0(\Lambda)\cap H^2(\Lambda)$ and is strongly elliptic, that is, for some $C_{\theta}>0$ and with $x^\alpha=x^{\alpha_1}_1\cdots x^{\alpha_d}_d$
\begin{align}\label{eq:differential:operators}
\sum_{|\alpha|=2} \left(\sum_{i=1}^p\theta_i a^{(i)}_{\alpha} + a^{(0)}_{\alpha}\right)x^\alpha \geq C_{\theta}|x|^{2},\quad x\in\R^d.
\end{align}
The operator $A_{\theta}$ generates an analytic semigroup on $L^2(\Lambda)$, denoted by $(S_{\theta}(t))_{t\geq 0}$.
With an $\mathcal{F}_0$-measurable initial value $X_0$ and a cylindrical Wiener process $W$ on $L^2(\Lambda)$ define a process $X=(X(t))_{0\leq t\leq T}$ by
\begin{align}
X(t) = S_{\theta}(t)X_0 + \int_0^t S_{\theta}(t-t')dW(t'),\quad 0\leq t\leq T.\label{eq:weakSolution}
\end{align}
Due to the low spatial regularity of $W$ this process is understood as a random element with values in $L^2(\Lambda)\subset\mathcal{H}_1$ almost surely for any larger Hilbert space $\mathcal{H}_1$ with an embedding $\iota:L^2(\Lambda)\rightarrow\mathcal{H}_1$ such that $\int_0^t\norm{\iota S_{\theta}(t')}_{\operatorname{HS}(L^2(\Lambda),\mathcal{H}_1)}dt'<\infty$ \cite[Remark 5.6]{hairer_introduction_2009}. Such an embedding always exists. For example, $\mathcal{H}_1$ can be realised as a negative Sobolev space (see Section \ref{sec:RKHS:computations} below). Let $\mathcal{H}_1'$ denote the dual space of $\mathcal{H}_1$ with the associated dual pairing $\sc{\cdot}{\cdot}_{\mathcal{H}_1\times\mathcal{H}_1'}$. Let $(e_k)_{k\geq 1}$ be an orthonormal basis of $L^2(\Lambda)$ and let $\beta_k$ be independent scalar Brownian motions. Then, realising the Wiener process as $W=\sum_{k\geq 1}e_k\beta_k$, we find for all $z\in\mathcal{H}_1'\subset L^2(\Lambda)$, $0\leq t\leq T$, that (see, e.g.~\cite[Lemma 2.4.1 and Proposition 2.4.5]{liu_stochastic_2015})
\begin{align*}
&\sc{X(t)-S_{\theta}(t)X_0}{z}_{\mathcal{H}_1\times\mathcal{H}_1'}
= \sum_{k\geq 1} \int_{0}^t\sc{S_{\theta}(t-t')e_k}{z}_{\mathcal{H}_1\times\mathcal{H}_1'}d\beta_k(t')\\
&\quad\quad = \sum_{k\geq 1} \int_{0}^t\sc{S_{\theta}(t-t')e_k}{z}d\beta_k(t')= \int_{0}^t\sc{S^*_{\theta}(t-t')z}{dW(t')}.
\end{align*}
According to \cite[Proposition 2.1]{altmeyer_nonparametric_2020} and \cite[Lemma 2.4.2]{liu_stochastic_2015} this allows us to extend the dual pairings $\sc{X(t)}{z}_{\mathcal{H}_1\times\mathcal{H}_1'}$ to a real-valued Gaussian process $(\sc{X(t)}{z})_{0\leq t\leq T, z\in L^2(\Lambda)}$ (the notation $\sc{\cdot}{\cdot}$ is used for convenience and indicates that the process does not depend on the embedding space $\mathcal{H}_1$). This process solves the SPDE \eqref{eq:SPDE} in the sense that for all $z\in H^1_0(\Lambda)\cap H^2(\Lambda)$ and $0\leq t\leq T$
\begin{align}
\sc{X(t)}{z} = \sc{X_0}{z} +\int_0^t\sc{X(t')}{A_{\theta}^*z}dt'+\sc{W(t)}{z}, \label{eq:Ito}
\end{align}
where $\sc{W(t)}{z}/\norm{z}_{L^2(\Lambda)}$ is a scalar Brownian motion, and where
\begin{align*}
A_{\theta}^* = \sum_{i=1}^p\theta_i A_i^*+A_0^*,\quad A_i^* = \sum_{|\alpha|\leq 2}(-1)^{|\alpha|}a_{\alpha}^{(i)}D^{\alpha},
\end{align*}
is the adjoint operator of $A_{\theta}$ with the same domain $H^1_0(\Lambda)\cap H^2(\Lambda)$.
\subsection{Local measurements, construction of the estimator}
Introduce for $z\in L^2(\R^d)$ the scale and shift operation \begin{align}
z_{\delta,x}(y) & =\delta^{-d/2}z(\delta^{-1}(y-x)),\quad x,y\in\Lambda,\quad \delta>0.\label{eq:scaling}
\end{align}
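For later use, note (a one-line computation, spelled out here for convenience) that the scaling \eqref{eq:scaling} preserves the $L^2$-norm, while each derivative produces a factor $\delta^{-1}$:
\begin{align*}
\|z_{\delta,x}\|_{L^2(\R^d)}=\|z\|_{L^2(\R^d)},\qquad D^{\alpha}(z_{\delta,x})=\delta^{-|\alpha|}(D^{\alpha}z)_{\delta,x}.
\end{align*}
In particular, $A_i^*K_{\delta,x}=\sum_{|\alpha|\leq 2}(-1)^{|\alpha|}a^{(i)}_{\alpha}\delta^{-|\alpha|}(D^{\alpha}K)_{\delta,x}$, so that, provided $\bar{A}_i^*K\neq 0$, the leading contribution is of order $\delta^{-n_i}$; this is the source of the scaling factors $\delta^{n_i-1}$ appearing below.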
Suppose that a function $K\in H^2(\R^d)$ with compact support is fixed, and consider locations $x_1,\dots,x_M\in\Lambda$, $M\in\N$, and a resolution level $\delta>0$, which is small enough to ensure that the functions $K_{\delta,x_k}$ are supported on $\Lambda$. Local measurements of $X$ at the locations $x_1,\dots,x_M$ at resolution $\delta$ correspond to the continuously observed processes $X_{\delta}, X^{A_0}_\delta\in L^2([0,T];\R^M)$, $X^A_{\delta}\in L^2([0,T];\R^{p\times M})$, where for $i=1,\dots,p$, $k=1,\dots,M$
\begin{align*}
(X_{\delta})_k & = X_{\delta,k} = (\sc{X(t)}{K_{\delta,x_k}})_{0\leq t\leq T},\\
(X^{A_0}_{\delta})_{k}&=X_{\delta,k}^{A_0}=(\sc{X(t)}{A^*_0 K_{\delta,x_k}})_{0\leq t\leq T},\\
(X^A_{\delta})_{ik} & = X^{A_i}_{\delta,k} = (\sc{X(t)}{A^*_i K_{\delta,x_k}})_{0\leq t\leq T}.
\end{align*}
Let $W_k(t)=\sc{W(t)}{K_{\delta,x_k}}/\norm{ K}_{L^2(\R^d)}$ be scalar Brownian motions. According to \eqref{eq:Ito}, every local measurement is an Itô process with initial value $X_{\delta,k}(0)=\sc{X_0}{K_{\delta,x_k}}$ and
\begin{equation}
\label{eq:SPDEk}
dX_{\delta,k}(t)=\left(\sum_{i=1}^p \theta_i X^{A_i}_{\delta,k}(t) + X^{A_0}_{\delta,k}(t)\right) dt+\norm{ K}_{L^2(\R^d)} dW_k(t).
\end{equation}
We construct an estimator for $\theta$ by a generalised likelihood principle. Neither \eqref{eq:SPDEk} nor the system of equations augmented with $X^A_{\delta}$, $X^{A_0}_{\delta}$ is a Markov process, because the time evolution at $x_k$ depends on the spatial structure of the whole process $X$, and not only on $X_{\delta}$. Therefore standard results for estimating the parameters $\theta_i$ from continuously observed diffusion processes by the maximum likelihood estimator (e.g., \cite{kutoyants_statistical_2013}) do not apply here. Instead, a general Girsanov theorem for multivariate It{\^o} processes, cf. \cite[Section 7.6]{liptser_statistics_2001}, yields, after ignoring conditional expectations, the initial value and possible correlations between measurements, the modified log-likelihood function
\begin{equation}
\begin{split}
\ell_{\delta}(\theta)= \norm{K}^{-1}_{L^2(\R^d)}\sum_{k=1}^{M}\Bigg(\int_{0}^{T} \left(\sum_{i=1}^p \theta_i X^{A_i}_{\delta,k}(t)+X^{A_0}_{\delta,k}(t)\right)dX_{\delta,k}(t)\\
-\frac{1}{2}\int_{0}^{T} \left(\sum_{i=1}^p \theta_i X^{A_i}_{\delta,k}(t)+X^{A_0}_{\delta,k}(t)\right)^2dt\Bigg).\label{eq:logLik}
\end{split}
\end{equation}
Maximising $\ell_\delta(\theta)$ with respect to $\theta$ leads to the estimator
\begin{align}
\hat{\theta}_{\delta} & = \mathcal{I}_{\delta}^{-1} \sum_{k=1}^{M} \left(\int_{0}^{T}X^A_{\delta,k}(t)dX_{\delta,k}(t)-\int_{0}^{T}X^A_{\delta,k}(t) X^{A_0}_{\delta,k}(t)dt\right),\label{eq:augMLE}
\end{align}
which we call \emph{augmented MLE} in analogy to
\cite[Section 4.1]{altmeyer_parameter_2020}, with the \emph{observed Fisher information}
\begin{equation}\label{eq:Fisher}
\mathcal{I}_{\delta}=\sum_{k=1}^{M}\int_{0}^{T}X^A_{\delta,k}(t) X^A_{\delta,k}(t)^\top dt.
\end{equation}
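To make the construction concrete, here is a self-contained toy simulation (a sketch, not code from the paper: it assumes $p=1$, $A_{\theta}=\theta\Delta$ on $\Lambda=(0,1)$ with $A_0=0$, truncates the solution to finitely many spectral modes updated by exact Ornstein--Uhlenbeck transitions, and uses the bump kernel $K(y)=(1-y^2)^3$; all numerical choices are illustrative). The augmented MLE \eqref{eq:augMLE} then reduces to a ratio of a discretised It{\^o} integral and the scalar observed Fisher information:

```python
import numpy as np

# Toy model: dX = theta * Laplacian X dt + dW on (0,1) with Dirichlet boundary,
# simulated in the spectral basis e_j(y) = sqrt(2) sin(j pi y).
rng = np.random.default_rng(0)
theta, T, dt, Jmax, delta = 1.0, 1.0, 1e-4, 200, 0.1
xs = np.array([0.3, 0.5, 0.7])                      # measurement locations x_k
lam = theta * (np.pi * np.arange(1, Jmax + 1))**2   # rates of the OU modes

# bump kernel K(y) = (1 - y^2)^3 on [-1,1]; K_{delta,x}(y) = delta^{-1/2} K((y-x)/delta)
yq = np.linspace(-1.0, 1.0, 2001)
Kq = (1.0 - yq**2)**3

def coeffs(x):
    """<e_j, K_{delta,x}>, j = 1..Jmax, by Riemann-sum quadrature over the support."""
    y = x + delta * yq
    ej = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, Jmax + 1), y))
    return (ej * Kq).sum(axis=1) * (y[1] - y[0]) / np.sqrt(delta)

C = np.stack([coeffs(x) for x in xs])    # C[k, j] = <e_j, K_{delta,x_k}>
CD = -(lam / theta) * C                  # <e_j, Laplacian K_{delta,x_k}> = -(j pi)^2 C[k, j]

# stationary start and exact OU transition over one time step dt
u = rng.standard_normal(Jmax) / np.sqrt(2.0 * lam)
a = np.exp(-lam * dt)
s = np.sqrt((1.0 - a**2) / (2.0 * lam))
num = den = 0.0
X, XD = C @ u, CD @ u                    # X_{delta,k}(0), X^{Delta}_{delta,k}(0)
for _ in range(int(T / dt)):
    u = a * u + s * rng.standard_normal(Jmax)
    Xnew = C @ u
    num += np.sum(XD * (Xnew - X))       # left-point Ito sum   sum_k int X^Delta dX
    den += np.sum(XD**2) * dt            # observed Fisher info sum_k int (X^Delta)^2 dt
    X, XD = Xnew, CD @ u
theta_hat = num / den                    # augmented MLE with p = 1, A_0 = 0
print(theta_hat)
```

With these choices the estimate typically lands within a few percent of the true $\theta=1$; note that the It{\^o} integral is discretised with the left endpoint, as required.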
\subsection{A central limit theorem}
We show now that the augmented MLE $\hat{\theta}_{\delta}$ satisfies a CLT as $\delta\rightarrow0$. Replacing $dX_{\delta,k}(t)$ in the definition of the augmented MLE by the right hand side in \eqref{eq:SPDEk} yields the basic decomposition
\begin{align}
\hat{\theta}_{\delta} = \theta + \mathcal{I}_{\delta}^{-1} \norm{K}_{L^2(\R^d)}\mathcal{M}_{\delta}\label{eq:basicError}
\end{align}
with the martingale term
\begin{align}
\mathcal{M}_{\delta} = \sum_{k=1}^{M} \left(\int_{0}^{T}X^A_{\delta,k}(t)dW_{k}(t)\right)\label{eq:martingale}.
\end{align}
If the Brownian motions $W_k$ are independent, then the matrix $\mathcal{I}_{\delta}$ corresponds to the quadratic co-variation process of $\mathcal{M}_{\delta}$ and we therefore expect $\mathcal{I}_{\delta}^{-1/2}\mathcal{M}_{\delta}$ to follow approximately a multivariate normal distribution. The rate at which the estimation error in \eqref{eq:basicError} vanishes corresponds to the speed at which the components of the observed Fisher information diverge. Exploiting scaling properties of the underlying semigroup we will see that this depends on the action of the `highest order' operators
\begin{align}
\bar{A}_i^*=\sum_{|\alpha|=n_i}(-1)^{|\alpha|}a_{\alpha}^{(i)}D^{\alpha},\quad \bar{A}_{\theta}= \sum_{|\alpha|=2} \left(\sum_{i=1}^p \theta_i a^{(i)}_{\alpha} + a^{(0)}_{\alpha}\right)D^{\alpha},\label{eq:highestOrder}
\end{align}
on the point spread functions $K_{\delta,x_k}$. Let us define a diagonal matrix of scaling coefficients $\rho_{\delta}\in \R^{p\times p}$,
\begin{align}
(\rho_{\delta})_{ii} = M^{-1/2}\delta^{n_i-1}.
\end{align}
We consider $\bar{A}_{\theta}$ on the full space with domain $H^2(\R^d)$ and make the following structural assumptions.
\begin{myassumption}{H}\label{assump:mainAssump}\,
\begin{enumerate}[label=(\roman*)]
\item The functions $\bar{A}^*_i K$ are linearly independent for all $i=1,\dots,p$.
\item If $d\leq 2$, then $n_i>0$ for all $i=1,\dots,p$.
\item The locations $x_k$, $k=1,\dots,M$, belong to a fixed compact set $\mathcal{J}\subset\Lambda$, which is independent of $\delta$ and $M$. There exists $\delta'>0$ such that $\mathrm{supp}(K_{\delta,x_k})\cap\mathrm{supp}(K_{\delta,x_l})=\emptyset$ for $k\neq l$ and all $\delta\leq\delta'$.
\item $\sup_{x\in\mathcal{J}} \int_{0}^{T}\E[\sc{X_0}{S^*_{\theta}(t)A_i^* K_{\delta,x}}^2] dt=o(\delta^{2-2n_i})$ for all $i=1,\dots,p$.
\end{enumerate}
\end{myassumption}
Assumption \ref{assump:mainAssump}(i) guarantees invertibility of the observed Fisher information, for a proof see Section \ref{sec:remainingProofs}.
\begin{lemma}\label{lem:FisherInvertible}
Under Assumption \ref{assump:mainAssump}(i), $\mathcal{I}_{\delta}$ is $\P$-almost surely invertible.
\end{lemma}
The support condition in Assumption \ref{assump:mainAssump}(iii) is natural in view of applications in microscopy. It guarantees that the Brownian motions $W_k$ in \eqref{eq:SPDEk} are independent as $\delta\rightarrow 0$, while the processes $X_{\delta,k}$ are \emph{not} independent due to the infinite speed of propagation in the solution to the deterministic PDE \eqref{eq:SPDE} without space-time white noise. The support condition requires the $x_k$ to be separated by a Euclidean distance of at least $C\delta$ for a fixed constant $C$, which means that $M$ grows at most as $M = O(\delta^{-d})$. The next lemma shows that Assumption \ref{assump:mainAssump}(iv) on the initial value is satisfied in most relevant situations. For a proof see again Section \ref{sec:remainingProofs}.
\begin{lemma}\label{lem:X0}
Given Assumption \ref{assump:mainAssump}(ii), Assumption \ref{assump:mainAssump}(iv) holds for any $X_0$ taking values in $L^q(\Lambda)$, $q>2$, and if $\sum_{|\alpha|=0} \left(\sum_{i=1}^p\theta_i a^{(i)}_{\alpha} + a^{(0)}_{\alpha}\right) \leq 0$ also for the stationary initial condition $X_0=\int_{-\infty}^0 S_{\theta}(-t')dW(t')$.
\end{lemma}
We now establish the asymptotic behavior of the observed Fisher information and a CLT for the augmented MLE as the resolution $\delta$ tends to zero.
\begin{theorem}
\label{thm:clt}
Under Assumption \ref{assump:mainAssump} the matrix $\Sigma_{\theta}\in\R^{p\times p}$ with entries
\begin{align*}
(\Sigma_{\theta})_{ij}=(T/2)\sc{(-\bar{A}_{\theta})^{-1/2}\bar{A}_i^* K}{(-\bar{A}_{\theta})^{-1/2}\bar{A}_j^* K}_{L^2(\R^d)}
\end{align*}
is well-defined and invertible, and we have $\rho_{\delta} \mathcal{I}_{\delta}\rho_{\delta} \stackrel{\P}\rightarrow \Sigma_{\theta}$ as $\delta\rightarrow 0$. Moreover, the augmented MLE satisfies the CLT
\begin{equation*}
(\rho_\delta\mathcal{I}_{\delta}\rho_\delta)^{1/2}\rho_\delta^{-1}(\hat{\theta}_{\delta}-\theta)\stackrel{d}{\rightarrow}\mathcal{N}(0, \norm{K}^{2}_{L^2(\R^d)} I_p),\quad\delta\rightarrow 0,
\end{equation*}
or equivalently,
\begin{equation*}
(M^{1/2}\delta^{1-n_i}(\hat{\theta}_{\delta,i}-\theta_{i}))_{i=1}^p\stackrel{d}{\rightarrow}\mathcal{N}(0, \norm{K}^{2}_{L^2(\R^d)} \Sigma^{-1}_{\theta}).
\end{equation*}
\end{theorem}
Theorem \ref{thm:clt} shows that parameters $\theta_i$ of an operator $A_i$ with differential order $n_i$ can be estimated at the rate of convergence $M^{1/2}\delta^{1-n_i}$. The estimators of two parameters $\theta_i$, $\theta_j$ are asymptotically independent if the leading order terms $\bar{A}^*_iK$ and $\bar{A}^*_jK$ are orthogonal in the geometry induced by $\norm{(-\bar{A}_{\theta})^{-1/2}\cdot}_{L^2(\R^d)}$. The theorem generalises \cite[Theorem 5.3]{altmeyer_nonparametric_2020} (in the parametric case and with the identity operator for $B$) to the anisotropic setting with $M$ measurement locations, without requiring their kernel condition $\int Kdx=0$. Concrete examples and applications are studied in Section \ref{sec:examples}.
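As a concrete illustration (a direct specialisation of the theorem, worked out here): for $p=1$ and $A_{\theta}=\theta\Delta$ we have $\bar{A}_1^*=\Delta$ and $\bar{A}_{\theta}=\theta\Delta$, so that
\begin{align*}
\Sigma_{\theta}=\frac{T}{2}\,\norm{(-\theta\Delta)^{-1/2}\Delta K}^2_{L^2(\R^d)}=\frac{T}{2\theta}\,\norm{\nabla K}^2_{L^2(\R^d)},
\end{align*}
and the CLT reads $M^{1/2}\delta^{-1}(\hat{\theta}_{\delta}-\theta)\stackrel{d}{\rightarrow}\mathcal{N}\big(0,2\theta\norm{K}^{2}_{L^2(\R^d)}/(T\norm{\nabla K}^2_{L^2(\R^d)})\big)$.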
\section{Optimality}\label{sec:optimality}
In this section we show that the rates of convergence $M^{1/2}\delta^{1-n_i}$ achieved by the augmented MLE for parameters $\theta_i$ with respect to operators $A_i$ of order $n_i=\operatorname{ord}(A_i)$ are indeed optimal and cannot be improved in our general setup.
The proof strategy (presented in Section \ref{sec:proof:optimality:result}) relies on a novel lower bound scheme for Gaussian measures, relating the Hellinger distance of their laws to properties of their RKHS. The Gaussian lower bound is then applied to one-dimensional submodels $(\P_{\vartheta})_{\vartheta\in\Theta_i}$ with $A_{\theta}$ from \eqref{eq:canonical:model}, assuming a sufficiently regular kernel function $K$ and a stationary initial condition to simplify some computations. Extensions to general initial values $X_0$ are possible. Note that this strategy also yields lower bounds for nonparametric models, e.g.~for estimating the local diffusivity as in \cite{altmeyer_nonparametric_2020}.
\begin{myassumption}{L}\label{assu:lowerBound} Suppose that $\P_{\theta}$ corresponds to the law of the stationary solution $X$ to the SPDE \eqref{eq:SPDE} and assume that the following conditions hold:
\begin{enumerate}[label=(\roman*)]
\item The kernel function satisfies $K=\Delta^2\tilde{K}$ with $\tilde{K}\in C_c^{\infty}(\R^d)$.
\item The models are $A_{\theta}=\theta_1\Delta+\theta_2(b\cdot\nabla)+\theta_3$ for $\theta\in\R^3$, a fixed unit vector $b\in\R^d$, and where $\theta$ lies in one of the parameter classes
\begin{align}
\Theta_1&=\{\vartheta=(\vartheta_1,0,0):\vartheta_1\geq 1\},\nonumber\\
\Theta_2&=\{\vartheta=(1,\vartheta_2,0):\vartheta_2\in[0,1]\},\nonumber\\
\Theta_3&=\{\vartheta=(1,0,\vartheta_3):\vartheta_3\leq 0\}.\label{eq:lower:bound:models}
\end{align}
\item Let $x_1,\dots,x_M$ be $\delta$-separated points in $\Lambda$, that is, $|x_k-x_l|> \delta$ for all $1\leq k\neq l\leq M$. Moreover, suppose that $\mathrm{supp}(K_{\delta,x_k})\subset\Lambda$ for all $k=1,\dots,M$ and $\mathrm{supp}(K_{\delta,x_k})\cap\mathrm{supp}(K_{\delta,x_l})=\emptyset$ for all $1\leq k\neq l\leq M$.
\end{enumerate}
\end{myassumption}
The parameter classes $\Theta_i$ correspond to the cases of estimating the diffusivity $\theta_1$, transport coefficient $\theta_2$ and reaction coefficient $\theta_3$ in front of operators $A_i$ with differential orders $n_1=2$, $n_2=1$, $n_3=0$.
We start with a non-asymptotic lower bound when only $X_{\delta}$ is observed.
\begin{theorem}\label{thm:lower:bound:M>1}
Grant Assumption \ref{assu:lowerBound} with $M\geq 1$, $T\geq 1$ and $i\in\{1,2,3\}$. Then there exist constants $c_1,c_2>0$ depending only on $K$ and an absolute constant $c_3>0$ such that the following assertions hold:
\begin{itemize}
\item[(i)] If $\delta^{n_i-1}/\sqrt{TM}< 1$ and $\delta\leq c_1$, then
\begin{align*}
\inf_{\hat\vartheta_i}\sup_{\substack{\vartheta\in\Theta_i\\ |\vartheta-(1,0,0)^\top|\leq c_2\frac {\delta^{n_i-1}}{\sqrt{TM}}}}
\P_\vartheta\left(|\hat\vartheta_i-\vartheta_i|\geq \frac{c_2}{2}\frac{\delta^{n_i-1}}{\sqrt{TM}}\right)>c_3.
\end{align*}
\item[(ii)] If $\delta^{n_i-1}/\sqrt{TM}\geq 1$ and $\delta\leq c_1$, then
\begin{align*}
\inf_{\hat\vartheta_i}\sup_{\substack{\vartheta\in\Theta_i\\|\vartheta-(1,0,0)^\top|\leq c_2}}\P_\vartheta\left(|\hat\vartheta_i-\vartheta_i|\geq c_2/2\right)>c_3.
\end{align*}
\end{itemize}
In (i) and (ii), the infimum is taken over all real-valued estimators $\hat\vartheta_i=\hat\vartheta_i(X_{\delta})$.
\end{theorem}
Several comments are in order for the above result. First, by Markov's inequality Theorem \ref{thm:lower:bound:M>1} also implies lower bounds for the squared risk. Second, part (ii) identifies settings in which consistent estimation is impossible. For instance, if $i=2$, then consistent estimation is impossible for $T=1$ (resp.~$T$ bounded) and $M=1$, that is, if only a single spatial measurement is observed on a bounded time interval. A similar conclusion holds for $i=3$, where consistent estimation is impossible even in a full observation scheme with $M=\lceil c\delta^{-d}\rceil$ locations for $d\leq 2$ and $T$ bounded. Third, part (i) of Theorem \ref{thm:lower:bound:M>1} shows that the different rates in our CLT are minimax optimal. In particular, it easily implies an asymptotic minimax lower bound as $\delta\rightarrow 0$. A first important case is $M=1$ and $i=1$, where Theorem \ref{thm:lower:bound:M>1} also follows from Proposition 5.12 in \cite{altmeyer_nonparametric_2020} and gives the rate of convergence $\delta$. Another important case is the full measurement scheme $M=\lceil c\delta^{-d}\rceil$, for which we obtain the following corollary.
\begin{corollary}\label{cor:lower:bound:M>1}
Grant Assumption \ref{assu:lowerBound} with $M=\lceil c\delta^{-d}\rceil$, $T\geq 1$ and $i\in\{1,2,3\}$.
If $n_i-1+d/2>0$, then
\[
\liminf_{\delta\rightarrow 0}\inf_{\hat{\theta}_{i}}\sup_{|\theta-(1,0,0)^\top|\leq c_1}\P_{\theta}\left(\delta^{1-n_i-d/2}|\hat{\theta}_{i}-\theta_{i}|\geq c_{2}\right)>0,
\]
where the infimum is taken over all real-valued estimators $\hat{\theta}_i=\hat{\theta}_i(X_{\delta})$. On the other hand, if $n_i-1+d/2\leq 0$, then consistent estimation is impossible.
\end{corollary}
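For the reaction coefficient ($i=3$, $n_3=0$) the boundary condition is simple to read off: with $M=\lceil c\delta^{-d}\rceil$ and $T$ bounded,
\begin{align*}
\frac{\delta^{n_3-1}}{\sqrt{TM}}\asymp\delta^{-1}\delta^{d/2}=\delta^{d/2-1},
\end{align*}
which is bounded away from zero precisely when $n_3-1+d/2\leq 0$, that is, for $d\leq 2$, recovering the impossibility of consistent estimation stated in Theorem \ref{thm:lower:bound:M>1}(ii) and Corollary \ref{cor:lower:bound:M>1}.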
Similar optimality results have been derived in \cite{huebner_asymptotic_1995} for the case of $M$ spectral measurements. Provided there exists an orthonormal basis of eigenfunctions $(e_j)_{j=1}^\infty$ of $A_{\theta}$ independent of $\vartheta$ (e.g., in the case $i=1$ or $i=3$), it is possible to estimate $\vartheta_i$ from $M$ spectral measurements $(\sc{X(t)}{e_j})_{0\leq t\leq T,1\leq j\leq M}$ with rates $M^{-\tau}$ or $\log M$ if $\tau=n_i/d-1/d+1/2>0$ or $\tau=0$, respectively. Consistent estimation fails to hold for $\tau<0$.
While \cite{huebner_asymptotic_1995} obtained asymptotic efficiency by combining Girsanov's theorem with LAN techniques, these rates can also be easily derived from our RKHS results (cf.~Lemma~\ref{lem:RKHS:projected:SPDE}) combined with a version of Lemma \ref{lem:seriesBound:M>1}.
For $\delta=cM^{-1/d}$ the rate in Corollary \ref{cor:lower:bound:M>1} and Theorem \ref{thm:clt} coincides with $M^{-\tau}$ if $\tau>0$, and $\tau=0$ is again a boundary case. Regarding the latter case, we briefly discuss in Section \ref{sec:examples} that a non-negative point spread function achieves the $\log M$-rate when $i=3$ and $d=2$.
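Indeed, for $\delta=cM^{-1/d}$ a direct computation gives
\begin{align*}
M^{1/2}\delta^{1-n_i}=c^{1-n_i}M^{1/2+(n_i-1)/d}=c^{1-n_i}M^{n_i/d-1/d+1/2}=c^{1-n_i}M^{\tau},
\end{align*}
so the estimation error $\delta^{n_i-1}M^{-1/2}$ from Theorem \ref{thm:clt} is of order $M^{-\tau}$ whenever $\tau>0$.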
Recall that the augmented MLE $\hat{\theta}_{\delta}$ depends also on the measurements $X^A_{\delta}$. We show next that including them in the lower bounds does not change the optimal rates of convergence.
\begin{theorem}\label{thm:lower:bound:M>1:add:measurements}
Theorem \ref{thm:lower:bound:M>1} remains valid when the infimum is taken over all real-valued estimators $\hat\vartheta_i=\hat\vartheta_i(X_{\delta},X_{\delta}^\Delta,X_{\delta}^{b\cdot \nabla})$, provided that $K$, $\Delta K$ and $(b\cdot \nabla) K$ are linearly independent and Assumption \ref{assu:lowerBound}(i) holds for $K$, $\Delta K$ and $(b\cdot \nabla) K$.
\end{theorem}
\section{The RKHS}\label{Sec:RKHS:SPDE}
A crucial ingredient for the proofs of the lower bounds in the last section is a good understanding of the RKHS of the Gaussian measure induced by the law of the observations when $A_{\theta}=\Delta$. For some background on the RKHS of a Gaussian measure see Section \ref{sec:general_lower_bound_setting}.
We first derive the RKHS of the stochastic convolution \eqref{eq:weakSolution} in a more general setting. Suppose that $A_{\theta}$ is an (unbounded) negative self-adjoint closed operator on a Hilbert space $(\mathcal{H},\norm{\cdot}_{\mathcal{H}})$ with domain $\mathcal{D}(A_{\theta})\subset\mathcal{H}$ such that $A_{\theta}e_j=-\lambda_j e_j$ for a sequence $(\lambda_j)_{j\geq 1}$ of positive real numbers and an orthonormal basis $(e_j)_{j\geq 1}$ of $\mathcal{H}$, and such that $A_{\theta}$ generates a strongly continuous semigroup $(S_{\theta}(t))_{t\geq 0}$ on $\mathcal{H}$. With a cylindrical Wiener process $W$, consider the stationary stochastic convolution
\begin{align}
X(t)=\int_{-\infty}^t S_{\theta}(t-t')dW(t'),\quad t\geq 0.\label{eq:stochConv}
\end{align}
As discussed after \eqref{eq:weakSolution} the process $X=(X(t))_{0\leq t\leq T}$ is understood as a random element with values in $\mathcal{H}\subset\mathcal{H}_1$ almost surely for some larger Hilbert space $\mathcal{H}_1$.
Since the RKHS of a Gaussian measure depends only on its distribution, the RKHS, as well as its norm, in the next theorem are independent of the embedding space $\mathcal{H}_1$ (see, e.g.,~Exercise 2.6.5 in \cite{gine_mathematical_2016}).
\begin{theorem}\label{thm:RKHS:SPDE}
Let $(H_{X},\|\cdot\|_{X})$ be the RKHS of the process $X$ in \eqref{eq:stochConv}. Then
\begin{align*}
H_X
& = \{h\in L^2([0,T];\mathcal{H}):h\text{ absolutely continuous, } A_{\theta} h,h'\in L^2([0,T];\mathcal{H})\}
\end{align*}
and
\begin{align*}
&\|h\|_{X}^2
=\|A_{\theta} h\|_{L^2([0,T];\mathcal{H})}^2 +\|h'\|_{L^2([0,T];\mathcal{H})}^2\\ &\quad +\|(-A_{\theta})^{1/2} h(0)\|^2_{\mathcal{H}}+\|(-A_{\theta})^{1/2} h(T)\|^2_{\mathcal{H}},\quad h\in H_X.
\end{align*}
Moreover, we have
\begin{align*}
\|h\|_{X}^2\leq 3\|A_{\theta} h\|_{L^2([0,T];\mathcal{H})}^2 +\|h\|_{L^2([0,T];\mathcal{H})}^2+2\|h'\|_{L^2([0,T];\mathcal{H})}^2,\quad h\in H_X.
\end{align*}
\end{theorem}
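To illustrate, specialising to $\mathcal{H}=\R$ and $A_{\theta}=-\lambda$ for some $\lambda>0$, the process $X$ in \eqref{eq:stochConv} is a stationary scalar Ornstein-Uhlenbeck process, and Theorem \ref{thm:RKHS:SPDE} yields
\begin{align*}
\|h\|_{X}^2=\lambda^2\|h\|_{L^2([0,T])}^2+\|h'\|_{L^2([0,T])}^2+\lambda h(0)^2+\lambda h(T)^2
\end{align*}
for absolutely continuous $h:[0,T]\rightarrow\R$ with $h'\in L^2([0,T])$.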
The theorem generalises the corresponding result for scalar Ornstein-Uhlenbeck processes to the infinite dimensional process $X$, cf. Lemma \ref{lemma:RKHS_norm_OU_process} below. Next, as in \eqref{eq:Ito}, consider the Gaussian process $(\sc{X(t)}{z}_{\mathcal{H}})_{t\geq 0,z\in\mathcal{H}}$, where the `inner product' here corresponds to
\begin{align*}
\sc{X(t)}{z}_{\mathcal{H}} = \int_{-\infty}^t\sc{S_{\theta}(t-t')z}{dW(t')}_{\mathcal{H}},
\end{align*}
satisfying \eqref{eq:Ito} for $z\in\mathcal{D}(A_\theta^*)=\mathcal{D}(A_{\theta})$ by analogous arguments. We study the RKHS of $(\sc{X(t)}{z}_{\mathcal{H}})_{0\leq t\leq T}$ for finitely many $z$. We will deduce the result directly from Theorem \ref{thm:RKHS:SPDE} by realising $\sc{X}{z}_{\mathcal{H}}$ as a linear transformation of $X$ by a bounded linear map $L$ from $L^2([0,T];\mathcal{H}_1)$ to $L^2([0,T])^M$ and using the machinery described in Section \ref{sec:RKHS:computations}. This restricts $z$ to lie in the dual space of $\mathcal{H}_1$. Another proof presented in Appendix \ref{app:RKHS:measurements:approximation:argument} circumvents this by an approximation argument.
\begin{theorem}\label{thm:upper:bound:RKHS:norm:M>1}
For $K_1,\dots,K_M\in \mathcal{D}(A_{\theta})$ and with $X$ in \eqref{eq:stochConv} consider the process $X_K$ with $X_{K}(t)=(\sc{X(t)}{K_k}_{\mathcal{H}})_{k=1}^M$. Suppose that the Gram matrix $G=(\sc{K_k}{K_l}_{\mathcal{H}})_{1\leq k,l\leq M}$ is non-singular. Then the RKHS $(H_{X_K},\|\cdot\|_{X_{K}})$ of $X_K$ satisfies $H_{X_K}=H^M$, where
\begin{align*}
H=\{h\in L^2([0,T]):h\text{ absolutely continuous}, h'\in L^2([0,T])\},
\end{align*}
and
\begin{align*}
\norm{h}_{X_K}^2&\leq (3\|G^{-1}\|_{\operatorname{op}}^2\|G_{A_{\vartheta}} \|_{\operatorname{op}}+\|G^{-1}\|_{\operatorname{op}})\sum_{k=1}^M\norm{h_k}^2_{L^2([0,T])}\\
&+2\|G^{-1}\|_{\operatorname{op}}\sum_{k=1}^M\norm{h_k'}^2_{L^2([0,T])}
\end{align*}
for all $h=(h_k)_{k=1}^M\in H_{X_K}$, where $G_{A_{\vartheta}}=(\sc{A_{\theta} K_k}{A_{\theta}K_l}_{\mathcal{H}})_{1\leq k,l\leq M}$.
\end{theorem}
In the specific case $A_{\theta}=\Delta$ and local measurements with $K_k=K_{\delta,x_k}$ we obtain the following.
\begin{corollary}\label{cor:upper:bound:RKHS:norm:M>1_Laplace}
Let $(H_{X_\delta},\|\cdot\|_{X_\delta})$ be the RKHS of $X_\delta$ with respect to $A_{\theta}=\Delta$, let $K\in H^2(\R^d)$, $\norm{K}_{L^2(\R^d)}=1$, satisfy Assumption \ref{assu:lowerBound}(iii) and suppose $\delta^2\leq \norm{\Delta K}_{L^2(\R^d)}$, $T\geq 1$. Then $ H_{X_\delta}=H^M$
and for $h=(h_k)_{k=1}^M\in H_{X_{\delta}}$
\begin{align*}
\|h\|_{X_\delta}^2&\leq 4\frac{\norm{\Delta K}^2_{L^2(\R^d)}}{\delta^{4}}\sum_{k=1}^M\|h_k\|_{L^2([0,T])}^2+2\sum_{k=1}^M\|h_k'\|_{L^2([0,T])}^2.
\end{align*}
\end{corollary}
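The constant in Corollary \ref{cor:upper:bound:RKHS:norm:M>1_Laplace} can be traced back to Theorem \ref{thm:upper:bound:RKHS:norm:M>1}: by the disjoint supports in Assumption \ref{assu:lowerBound}(iii) and $\norm{K}_{L^2(\R^d)}=1$ the Gram matrix is $G=I_M$, while $\Delta K_{\delta,x_k}=\delta^{-2}(\Delta K)_{\delta,x_k}$ yields $G_{A_{\vartheta}}=\delta^{-4}\norm{\Delta K}^2_{L^2(\R^d)}I_M$, so the bound of the theorem becomes
\begin{align*}
\norm{h}_{X_\delta}^2\leq \Big(3\frac{\norm{\Delta K}^2_{L^2(\R^d)}}{\delta^{4}}+1\Big)\sum_{k=1}^M\norm{h_k}^2_{L^2([0,T])}+2\sum_{k=1}^M\norm{h_k'}^2_{L^2([0,T])},
\end{align*}
and $\delta^2\leq\norm{\Delta K}_{L^2(\R^d)}$ allows the constant $1$ to be absorbed into the first term, giving the factor $4$.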
Similar results hold for the RKHS of the jointly Gaussian process $(X_\delta, X^A_{\delta})$, see e.g.~Corollary~\ref{cor:lower:bound:RKHS:norm:M>1:mult:measurement} below.
\section{Applications and extensions}\label{sec:examples}
\subsection{Examples}
Let us illustrate the main results in a few examples.
\begin{example}
\label{ex:1}
Suppose $A_\theta=\theta_1\Delta+\theta_2b\cdot\nabla+c$. This corresponds to \eqref{eq:generalA} with $A_0=c$, $A_1=\Delta$, $A_2=b\cdot\nabla$ for $c\in\R$ and a unit vector $b$, and with differential orders $n_1=2$, $n_2=1$. A typical realisation of the solution $X$ in $d=1$ can be seen in Figure \ref{fig: heatmap_RMSE}(left). The ellipticity condition \eqref{eq:differential:operators} holds if $\theta_1>0$. We know from Theorem \ref{thm:lower:bound:M>1} and Proposition \ref{prop: reactiond2} that $c$ cannot be estimated consistently unless $d\geq 2$. For known $c$, the augmented MLE $\hat{\theta}_{\delta}$ is a consistent estimator of $\theta\in\R^2$ by Theorem \ref{thm:clt}, attaining the optimal rates of convergence $M^{1/2}\delta^{-1}$ and $M^{1/2}$ for the diffusivity and the transport term, respectively, according to the lower bounds in Theorem \ref{thm:lower:bound:M>1}. If we suppose for simplicity $\norm{K}_{L^2(\R^d)}=1$, then the CLT holds with a diagonal matrix $$\Sigma_{\theta} = \frac{T}{2\theta_1}\operatorname{diag}\left(\norm{\nabla K}^2_{L^2(\R^d)}, \norm{(-\Delta)^{-1/2}(b\cdot\nabla) K}^2_{L^2(\R^d)}\right),$$implying that $\hat{\theta}_{\delta,1}$ and $\hat{\theta}_{\delta,2}$ are asymptotically independent. Interestingly, in $d=1$ we have $\norm{(-\Delta)^{-1/2}(b\cdot\nabla) K}^2_{L^2(\R^d)}=\norm{K}^2_{L^2(\R^d)}$, and so the asymptotic variance of $\hat{\theta}_{\delta,2}$ is independent of $K$.
Figure \ref{fig: heatmap_RMSE}(right) presents root mean squared estimation errors in $d=1$ for local measurements obtained from the data displayed in the left part of the figure, with $K(x)=\exp(-5/(1-\norm{x}^2))\mathbf{1}(-1<x<1)$ and the maximal choice of $M\asymp\delta^{-1}$. We see that the optimal rates of convergence, and even the exact asymptotic variances (blue dashed lines), are approached quickly as $\delta\rightarrow 0$. For comparison, Figure \ref{fig: heatmap_RMSE}(right) also includes the estimation errors for an estimator $\bar{\theta}_{\delta}$ without the correction factor depending on the lower order `nuisance operator' $A_0$ in \eqref{eq:augMLE}. We can see that this introduces only a small bias, which is negligible as $\delta\rightarrow 0$.
\end{example}
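The $d=1$ identity used in Example \ref{ex:1} can be verified by Plancherel's theorem, writing $\mathcal{F}$ for the Fourier transform with the normalisation $\norm{f}^2_{L^2(\R)}=(2\pi)^{-1}\norm{\mathcal{F}f}^2_{L^2(\R)}$: the operator $(-\Delta)^{-1/2}(b\cdot\nabla)$ with $b=\pm1$ has Fourier multiplier $\pm i\xi/|\xi|$ of modulus one, so
\begin{align*}
\norm{(-\Delta)^{-1/2}(b\cdot\nabla)K}^2_{L^2(\R)}=\frac{1}{2\pi}\int_{\R}\Big|\frac{i\xi}{|\xi|}\Big|^2|\mathcal{F}K(\xi)|^2\,d\xi=\norm{K}^2_{L^2(\R)}.
\end{align*}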
\begin{example}\label{ex:2}
Consider the operator $A_{\theta}=\theta_1\Delta + \theta_2b\cdot\nabla + \theta_3$ from \eqref{eq:canonical:model} such that $A_0 =0$, $A_1 = \Delta$, $A_2=b\cdot\nabla$ and $A_3 = 1$, with differential orders $n_1=2$, $n_2=1$, $n_3=0$. If $d\geq 3$ and $M^{1/2}\delta\rightarrow\infty$, then the CLT in Theorem \ref{thm:clt} applies again with optimal rates of convergence as in the last example for $\hat{\theta}_{\delta,1}$, $\hat{\theta}_{\delta,2}$ and $M^{1/2}\delta$ for the reaction term. Using integration by parts we find this time
\begin{equation*}
\Sigma_{\theta} = \frac{T}{2\theta_1}\begin{pmatrix}
\norm{\nabla K}^2_{L^2(\R^d)} & 0 & -1\\
0 & \norm{(-\Delta)^{-1/2}(b\cdot\nabla) K}^2_{L^2(\R^d)} & 0\\
-1 & 0 & \norm{(-\Delta)^{-1/2}K}^2_{L^2(\R^d)}
\end{pmatrix}.
\end{equation*}
\end{example}
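To see where the entries of $\Sigma_{\theta}$ come from, consider for instance the sub-case $\theta_2=0$, so that $\bar{A}_{\theta}=\theta_1\Delta$, and recall $\norm{K}_{L^2(\R^d)}=1$ and $\bar{A}_2^*=-(b\cdot\nabla)$. Then integration by parts gives
\begin{align*}
(\Sigma_{\theta})_{13}&=\frac{T}{2\theta_1}\sc{\Delta K}{(-\Delta)^{-1}K}_{L^2(\R^d)}=-\frac{T}{2\theta_1}\sc{K}{K}_{L^2(\R^d)}=-\frac{T}{2\theta_1},\\
(\Sigma_{\theta})_{12}&=-\frac{T}{2\theta_1}\sc{\Delta K}{(-\Delta)^{-1}(b\cdot\nabla)K}_{L^2(\R^d)}=\frac{T}{2\theta_1}\sc{K}{(b\cdot\nabla)K}_{L^2(\R^d)}=0,
\end{align*}
since $\sc{K}{(b\cdot\nabla)K}_{L^2(\R^d)}=\frac{1}{2}\int_{\R^d}b\cdot\nabla(K^2)\,dx=0$.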
\begin{example}
The results of the last example apply with the same asymptotic distribution to $A_{\theta}=\theta_1(\Delta + b'\cdot\nabla + c')+ 2\theta_2(b\cdot\nabla + c'')+ \theta_3$ for $b'\in\R^d$, $c',c''\in\R$ with lower order perturbations, because the leading-order operators $\bar{A}_i$ remain unchanged.
\end{example}
\begin{figure}
\centering
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=1.\linewidth]{heatmap_002_03_02_initialbumb2_update.pdf}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=1.\linewidth]{Diff002_Trans03_RMSE_update3.pdf}
\end{subfigure}
\caption{(left) heat map for a typical realisation of $X(t,x)$ corresponding to \eqref{eq:SPDE} in $d=1$ with domain $\Lambda=(0,1)$ in Example \ref{ex:1}; (right) $\log_{10}\log_{10}$ plot of the root mean squared errors for estimating $\theta_1$ and $\theta_2$ in Example \ref{ex:1}.}
\label{fig: heatmap_RMSE}
\end{figure}
\subsection{Statistical inference}
The CLT in Theorem \ref{thm:clt} easily yields statistical inference procedures. For instance, if $\alpha\in (0,1)$, $q_{1-\alpha}$ is the $1-\alpha$ quantile of the $\chi^2_p$-distribution and if $\norm{K}_{L^2(\R^d)}=1$ for simplicity, then
\begin{equation*}
C_{\alpha} = \{\theta\in\mathbb{R}^p:(\hat{\vartheta}_{\delta}-\theta)^\top \mathcal{I}_{\delta}(\hat{\vartheta}_{\delta}-\theta)\leq q_{1-\alpha}\}
\end{equation*}
is an asymptotic joint $1-\alpha$ confidence set for the parameter $\theta$.
Moreover, we can test for the presence of lower order terms. In Example \ref{ex:2}, with the $1-\alpha$ quantile $z_{1-\alpha}$ of the standard normal distribution,
\begin{align*}
\phi_{\alpha} = \mathbf{1}\left(\left(\frac{TM\delta^2}{2\hat{\theta}_{\delta,1}}\norm{(-\Delta)^{-1/2}K}^2_{L^2(\R^d)}\right)^{1/2}\hat{\theta}_{\delta,3}<-z_{1-\alpha}\right)
\end{align*}
is an asymptotic level $\alpha$ test for testing the null hypothesis $H_0:\theta_3=0$ against the alternative $H_1:\theta_3<0$.
\subsection{A boundary case: estimation in $d=2$}
The augmented MLE generally does not satisfy the CLT in Theorem \ref{thm:clt} in $d\leq 2$ for reaction terms $\theta_i$ with differential order $n_i=0$. We now show that in $d=2$, for the maximal choice of measurements $M=\lceil c\delta^{-2}\rceil$, the CLT can be recovered under integrability conditions on $K$, but consistency is lost, while for non-negative $K$ a logarithmic rate holds. This is consistent with results for the MLE from spectral observations in $d=2$, cf. \cite{huebner_asymptotic_1995}. For a proof see Section \ref{sec: reaction}.
\begin{proposition}
Suppose that $d=2$, $A_{\theta}=\Delta+\theta$ for $\theta\in\R$, $K\neq 0$, $X(0)=0$ and $M\delta^2\rightarrow 1$ as $\delta\rightarrow 0$. Then the following holds:
\begin{enumerate}[label=(\roman*)]
\item If $\int_{\R^2}K(x)dx=0$, $\int_{\R^2}xK(x)dx=0$, then
$\hat{\theta}_\delta\stackrel{d}{\rightarrow}\mathcal{N}(\theta,\norm{K}^{2}_{L^2(\R^d)} \Sigma^{-1}_{\theta})$ with $\Sigma_{\theta}=(T/2)\norm{(-\Delta)^{-1/2}K}_{L^2(\R^2)}^2$.
\item If $K\geq 0$, then
$\hat{\theta}_\delta=\theta+O_{\P}((\log \delta^{-1})^{-1/2})$.
\end{enumerate}
\label{prop: reactiond2}
\end{proposition}
\section{Proofs}\label{sec:proofs}
Let us first define some notation. For convenience, in the proofs we write the elliptic operator as $A_{\theta} = \nabla \cdot a_{\theta} \nabla + b_{\theta}\cdot \nabla + c_{\theta}$ with a symmetric positive definite matrix $a_{\theta}\in\R^{d\times d}$, $b_{\theta}\in\R^d$, $c_{\theta}\in\R$, and let $\tilde{A}_{\theta}=\nabla\cdot a_{\theta}\nabla$. The operators $A_{\theta}$ and $\tilde{A}_{\theta}$ with domain $H^1_0(\Lambda)\cap H^2(\Lambda)$ generate the analytic semigroups $(S_{\theta}(t))_{t\geq 0}$ and $(\tilde S_{\theta}(t))_{t\geq 0}$. By a standard PDE result (see, e.g., \cite[Example III.6.11]{kato2013perturbation} or \cite[equation (5.1)]{reddy1994pseudospectra}) the operator $A_{\theta}$, and by \cite[Example 2.1 in Section II.2]{EngNag00} also the semigroup it generates, are diagonalizable, which yields the useful representations
\begin{align}
A_\theta=U_\theta(\tilde{A}_\theta +\tilde{c}_{\theta})U_\theta^{-1},\quad S_{\theta}(t)=e^{t\tilde{c}_{\theta}}U_{\theta}\tilde{S}_{\theta}(t)U_{\theta}^{-1}\label{eq:diagonalizable}
\end{align}
with the multiplication operator $(U_\theta z)(x)= \exp\left(-\frac{1}{2}(a_\theta^{-1}b_\theta)\cdot x\right)z(x)$ and with $\tilde{c}_\theta=c_\theta -\frac{1}{4}b_\theta\cdot (a_\theta^{-1}b_\theta)$. Below we will use $U_{\theta}$ to denote both the multiplication operator and the function $x\mapsto \exp\left(-\frac{1}{2}(a_\theta^{-1}b_\theta)\cdot x\right)$.
\subsection{The rescaled semigroup}\label{sec:rescaledSemigroup}
In this section we collect a number of results on the semigroup operators $S_{\theta}(t)$ and their actions on localised functions $z_{\delta,x}(\cdot)=\delta^{-d/2}z(\delta^{-1}(\cdot-x))$. Write $\Lambda_{\delta,x} = \{\delta^{-1}(y-x):y\in\Lambda\}$, $\Lambda_{0,x}=\mathbb{R}^d$ and introduce the operators with domains $H_0^1(\Lambda_{\delta,x})\cap H^2(\Lambda_{\delta,x})$
\begin{align}
A_{\theta,\delta,x} &= \nabla \cdot a_{\theta} \nabla + \delta b_{\theta}\cdot \nabla + \delta^2 c_{\theta}, \quad \tilde{A}_{\theta,\delta,x} = \nabla \cdot a_{\theta} \nabla,\label{eq:A_local}\\
A_{i,\delta,x} & = \delta^{n_i}\sum_{|\alpha|\leq n_i} \delta^{-|\alpha|}a^{(i)}_{\alpha}D^{\alpha}.\nonumber
\end{align}
The operators in \eqref{eq:A_local} generate the analytic semigroups $(S_{\theta,\delta,x}(t))_{t\geq 0}$ and $(\tilde{S}_{\theta,\delta,x}(t))_{t\geq 0}$ on $L^2(\Lambda_{\delta,x})$. Let $(\bar{S}_{\theta}(t))_{t\geq 0}$ be the semigroup on $L^2(\R^d)$ generated by $\bar{A}_{\theta}$. We also write $e^{t\Delta_0}=\bar{S}_{\theta}(t)$ when $a_{\theta}$ is the identity matrix.
\begin{lemma}\label{rescaledsemigroup}
Let $\delta'\geq \delta\geq 0$, $x\in\Lambda$, $i=1,\dots,p$, $t\geq 0$.
\begin{enumerate}[label=(\roman*)]
\item If $z\in H_0^1(\Lambda_{\delta,x})\cap H^2(\Lambda_{\delta,x})$, then $A_i^*z_{\delta,x}=\delta^{-n_i}(A^*_{i,\delta,x}z)_{\delta,x}$, $A^*_{\theta}z_{\delta,x}=\delta^{-2}(A^*_{\theta,\delta,x}z)_{\delta,x}$.
\item If $z\in H^2(\R^d)$ is compactly supported in $\bigcap_{x\in\mathcal{J}}\Lambda_{\delta',x}$, then $\sup_{x\in\mathcal{J}}\norm{A^*_{i,\delta,x}z-\bar{A}_iz}_{L^2(\R^d)}\rightarrow 0$, $\sup_{x\in\mathcal{J}}\norm{A^*_{\theta,\delta,x}z- \bar{A}_{\theta}z}_{L^2(\R^d)}\rightarrow 0$ as $\delta\rightarrow 0$.
\item If $z\in L^2(\Lambda_{\delta,x})$ then $S_{\theta}^*(t)z_{\delta,x}=(S_{\theta,\delta,x}^*(t\delta^{-2})z)_{\delta,x}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (i) and (ii) are clear; part (iii) follows analogously to \cite[Lemma 3.1]{altmeyer_nonparametric_2020}.
\end{proof}
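To spell out the first identity in part (i) for the constant coefficients $a^{(i)}_{\alpha}$: by the chain rule, $D^{\alpha}z_{\delta,x}=\delta^{-|\alpha|}(D^{\alpha}z)_{\delta,x}$, and therefore
\begin{align*}
A_i^*z_{\delta,x}=\sum_{|\alpha|\leq n_i}(-1)^{|\alpha|}a^{(i)}_{\alpha}\delta^{-|\alpha|}(D^{\alpha}z)_{\delta,x}=\delta^{-n_i}\big(A^*_{i,\delta,x}z\big)_{\delta,x}
\end{align*}
with $A^*_{i,\delta,x}=\delta^{n_i}\sum_{|\alpha|\leq n_i}\delta^{-|\alpha|}(-1)^{|\alpha|}a^{(i)}_{\alpha}D^{\alpha}$, matching the definition of $A_{i,\delta,x}$ above.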
After zooming in, the semigroup on the bounded domain $\Lambda_{\delta,x}$ is intuitively close to the semigroup on $\R^d$ as $\delta\rightarrow 0$. The next result states precisely in which sense this holds, uniformly in $x\in\mathcal{J}$.
\begin{lemma}
\label{FeynmanKac}
Under Assumption \ref{assump:mainAssump}(iii) the following holds:
\begin{itemize}
\item [(i)] There exists $C>0$ such that if $z\in C_c(\mathbb{R}^d)$ is supported in $\bigcap_{x\in \mathcal{J}}\Lambda_{\delta,x}$ for some $\delta\geq 0$, then for all $t\geq 0$
\begin{equation*}
\sup_{x\in \mathcal{J}}\left|(S^*_{\vartheta,\delta,x}(t)z)(y)\right|\leq Ce^{\tilde{c}_\theta t\delta^2}(\bar{S}_{\theta}(t)|z|)(y),\quad y\in\R^d.
\end{equation*}
\item [(ii)] If $z\in L^2(\mathbb{R}^d)$, then as $\delta \rightarrow 0$ for all $t>0$
\begin{equation*}
\sup_{x\in \mathcal{J}}\Norml S^*_{\vartheta,\delta,x}(t)(z|_{\Lambda_{\delta,x}})-\bar{S}_{\vartheta}(t)z\Normr_{L^2(\mathbb{R}^d)}\rightarrow0.
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
(i). Apply first the scaling in Lemma \ref{rescaledsemigroup} in reverse order such that $(S^*_{\theta,\delta,x}(t)z)_{\delta,x}=S^*_{\theta}(t\delta^2)z_{\delta,x}$. By \eqref{eq:diagonalizable} and direct computation, noting that the function $x\mapsto U_{\theta}(x)$ is uniformly upper and lower bounded on $\Lambda$, we get
\begin{align*}
|S^*_{\theta}(t\delta^2)z_{\delta,x}|
&=|U_\theta^{-1} \tilde{S}_{\vartheta}(t\delta^2) \exp(t\delta^2\tilde{c}_{\theta})U_{\theta}z_{\delta,x}| \lesssim e^{t\delta^2\tilde{c}_{\theta}}\tilde{S}_{\vartheta}(t\delta^2)|z_{\delta,x}|.
\end{align*}
Applying the $\delta$-scaling again, it is therefore enough to prove the claim with respect to $\tilde{S}_{\theta}$ and with $|z|$ instead of $z$. By the classical Feynman-Kac formulas (cf. \cite[Chapter 4.4]{karatzas_brownian_1998}; the anisotropic case is a straightforward generalisation, obtained for instance by a change of variables that diagonalises the diffusivity matrix $a_{\theta}$ and reduces the problem to $d$ scalar heat equations) we have, with the process $Y_t=y+a_{\theta}^{1/2}\tilde{W}_t$ for a $d$-dimensional Brownian motion $\tilde{W}$, all defined on another probability space with expectation and probability operators $\tilde{\E}_y$, $\tilde{\P}_y$, that $(\bar{S}_{\vartheta}(t)z)(y)=\tilde{\mathbb{E}}_y[z(Y_t)]$ and $(\tilde{S}_{\vartheta,\delta,x}(t)z)(y)=\tilde{\mathbb{E}}_y\left[z(Y_t)\mathbf{1}\left(t<\tau_{\delta,x}\right)\right]$ with the stopping times $\tau_{\delta,x}:=\inf\{t\geq0:Y_t\notin\Lambda_{\delta,x}\}$. The claim now follows from
\begin{equation*}
\sup_{x\in \mathcal{J}}(\tilde{S}_{\theta,\delta,x}(t)|z|)(y)\leq \tilde{\E}_y[|z(Y_t)|]=(\bar{S}_{\theta}(t)|z|)(y),
\end{equation*}
where the right hand side is $L^2(\R^d)$-integrable.
(ii). By an approximation argument it is enough to consider $z\in C_c(\bar{\Lambda})$ and $0<\delta\leq\delta'$ such that $z$ is supported in $\Lambda_{\delta',x}$, hence $z|_{\Lambda_{\delta,x}}=z$ for all such $\delta$. Compactness of $\mathcal{J}$ according to Assumption \ref{assump:mainAssump}(iii) guarantees for sufficiently small $\delta$ the existence of a ball $B_{\rho\delta^{-1}}\subset\bigcap_{x\in\mathcal{J}}\Lambda_{\delta,x}$ with center $0$ and radius $\rho\delta^{-1}$ for some $\rho>0$. With this and the representation formulas in (i), combined with the Cauchy-Schwarz inequality, we have for all $y\in\R^d$
\begin{align}
& \sup_{x\in\mathcal{J}}|(\tilde{S}_{\vartheta,\delta,x}(t)z)(y)-(\bar{S}_{\vartheta}(t)z)(y)|^2=\sup_{x\in\mathcal{J}}|\tilde{\mathbb{E}}_y\left[z(Y_t)\mathbf{1}(\tau_{\delta,x}\leq t)\right]|^2\nonumber\\
& \leq \sup_{x\in\mathcal{J}}\tilde{\E}_y[z^2(Y_t)]\tilde{\mathbb{P}}_{y}(\tau_{\delta,x}\leq t)\leq (\bar{S}_{\theta}(t)z^2)(y)\tilde{\mathbb{P}}_{y}(\max_{0\leq s\leq t} |Y_s| \geq \rho\delta^{-1})\nonumber\\
& \leq (\bar{S}_{\theta}(t)z^2)(y)\tilde{\mathbb{P}}_{y}(\max_{0\leq s\leq t} |\tilde{W}_s| \geq \tilde{\rho}\delta^{-1})\lesssim (\bar{S}_{\theta}(t)z^2)(y)(\delta t^{1/2} e^{-\delta^{-2}t^{-1}})\rightarrow 0\label{eq: pointwFeyn}
\end{align}
as $\delta\rightarrow 0$ for a modified constant $\tilde{\rho}$, concluding by \cite[equation (2.8.4)]{karatzas_brownian_1998}.
As a consequence of \eqref{eq:diagonalizable} and the $\delta$-scaling we have
$(S^*_{\theta,\delta,x}(t)z)(y)=(U_{\theta,\delta,x}^{-1} \tilde{S}_{\theta,\delta,x}(t) e^{t\delta^2\tilde{c}_\theta}U_{\theta,\delta,x}z)(y)$
with $U_{\theta,\delta,x}z(y)=\exp(-(a_\theta^{-1}b_\theta)\cdot (\delta y+x)/2)z(y)$. This converges to $z(y)$ uniformly in $x\in\mathcal{J}$ as $\delta\rightarrow 0$, and we also have $e^{t\delta^2\tilde{c}_\theta}\rightarrow 1$. With this we conclude
\begin{align*}
&\sup_{x\in\mathcal{J}}|(S^*_{\theta,\delta,x}(t)z)(y)-(\bar{S}_{\theta}(t)z)(y)|\\&=\sup_{x\in\mathcal{J}}|(U_{\theta,\delta,x}^{-1} \tilde{S}_{\theta,\delta,x}(t) e^{t\delta^2\tilde{c}_\theta}U_{\theta,\delta,x}z)(y)-(\bar{S}_{\theta}(t)z)(y)|\rightarrow0,
\end{align*}
by the uniform pointwise convergence in \eqref{eq: pointwFeyn}. From (i) we know that $\sup_{0\leq \delta\leq\delta'}\sup_{x\in\mathcal{J}}|(S^*_{\theta,\delta,x}(t)z)(y)|$ is $L^2(\R^d)$-integrable. Dominated convergence yields the claim.
\end{proof}
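The Gaussian tail bound in \eqref{eq: pointwFeyn} can be made explicit: by the reflection principle and a standard Gaussian tail estimate, a scalar Brownian motion satisfies
\begin{align*}
\tilde{\P}\Big(\max_{0\leq s\leq t}W_s\geq\beta\Big)=2\tilde{\P}(W_t\geq\beta)\leq\sqrt{\frac{2t}{\pi}}\,\frac{1}{\beta}\,e^{-\beta^2/(2t)},\quad \beta>0.
\end{align*}
Applying this coordinatewise with $\beta$ proportional to $\tilde{\rho}\delta^{-1}$ yields, up to constants depending on $d$ and $\tilde{\rho}$ in front and in the exponent, the stated bound of order $\delta t^{1/2}e^{-\delta^{-2}t^{-1}}$.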
We frequently require quantitative statements on the decay of the action of the semigroup operators $S^*_{\theta,\delta,x}(t)$ as $t\rightarrow\infty$ when applied to functions of a certain smoothness and integrability. This is well known for a fixed analytic semigroup, but is shown here to hold true uniformly in $\delta$ and in $x\in\mathcal{J}$.
\begin{lemma}\label{lem:semigroupProp}
Let $0\leq \delta \leq 1$, $t>0$, $x\in \mathcal{J}$ and $1< p\leq \infty$. Moreover, let $z\in L^p(\Lambda_{\delta,x})$ if $1<p<\infty$ and $z\in C(\Lambda_{\delta,x})$ with $z=0$ on $\partial \Lambda_{\delta,x}$ if $p=\infty$. Then it holds with implied constants not depending on $x$:
\begin{align*}
\norm{A^*_{\theta,\delta,x}S^*_{\theta,\delta,x}(t)z}_{L^p(\Lambda_{\delta,x})}\lesssim t^{-1}\norm{z}_{L^p(\Lambda_{\delta,x})}.
\end{align*}
\end{lemma}
\begin{proof}
Apply first the scaling in Lemma \ref{rescaledsemigroup} in reverse order such that with $1<p\leq \infty$
\begin{align*}
\norm{A^*_{\theta,\delta,x}S^*_{\theta,\delta,x}(t)z}_{L^p(\Lambda_{\delta,x})} = \delta^{d(1/2-1/p)-2} \Norml{A^*_{\theta}S^*_{\theta}(t\delta^{-2})z_{\delta,x}}\Normr_{L^{p}(\Lambda)}.
\end{align*}
If $p<\infty$, by the semigroup property for analytic semigroups in \cite[Theorem V.2.1.3]{amann1995linear}, the $L^p(\Lambda)$-norm is up to a constant upper bounded by $(t\delta^{-2})^{-1}\norm{z_{\delta,x}}_{L^p(\Lambda)}$, and the claim follows. The same proof applies to $p=\infty$, noting that $A^*_{\theta}$ generates an analytic semigroup on $\{u\in C(\Lambda),u=0$ on $\partial\Lambda\}$, cf. \cite[Theorem 7.3.7]{pazy_semigroups_1983}.
\end{proof}
The proof of the next result relies on the Bessel-potential spaces $H_0^{s,p}(\Lambda_{\delta,x})$, $1<p<\infty$, $s\in\R$, defined for $\delta>0$ as the domains of the fractional weighted Dirichlet-Laplacian $(-\tilde{A}_{\theta,\delta,x})^{s/2}$ of order $s/2$ on $\Lambda_{\delta,x}$ with norms $\norm{\cdot}_{H^{s,p}(\Lambda_{\delta,x})}=\norm{(-\tilde{A}_{\theta,\delta,x})^{s/2}\cdot}_{L^p(\Lambda_{\delta,x})}$, see \cite{debussche_regularity_2015} for details and also Section \ref{sec:RKHS:computations} below.
\begin{lemma}
\label{boundS*u}
Let $0< \delta\leq 1$, $t>0$, $1<p\leq 2$ and grant Assumption \ref{assump:mainAssump}(iii). Let $z\in H_0^{s}(\mathbb{R}^d)$, $s\geq 0$, be compactly supported in $\bigcap_{x\in \mathcal{J}}\Lambda_{\delta,x}$, suppose that $V_{\delta,x}:L^p(\Lambda_{\delta,x})\rightarrow H_0^{-s,p}(\Lambda_{\delta,x})$ are bounded linear operators with $\norm{V_{\delta,x}z}_{H^{-s,p}(\Lambda_{\delta,x})}\leq V_{\operatorname{op}}\norm{z}_{L^p(\Lambda_{\delta,x})}$ for some $V_{\operatorname{op}}$ independent of $\delta$, $x$. Then there exists a universal constant $C>0$ such that for $1< p\leq2$ and $\gamma=(1/p-1/2)d/2$
\begin{align*}
\sup_{x\in \mathcal{J}}\Norml S^*_{\vartheta,\delta,x}(t) V_{\delta,x}z \Normr_{L^2(\Lambda_{\delta,x})}
& \leq Ce^{\tilde{c}_{\theta}t\delta^2}\sup_{x\in \mathcal{J}}\left(\norm{V_{\delta,x}z}_{L^2(\Lambda_{\delta,x})}\wedge (V_{\operatorname{op}}t^{-s/2-\gamma}\norm{z}_{L^{p}(\Lambda_{\delta,x})})\right).
\end{align*}
If $s=0$, then this holds also for $p=1$.
\begin{proof}
Let us write $u=V_{\delta,x}z$ and recall the multiplication operators $U_{\theta,\delta,x}$ from the proof of Lemma \ref{FeynmanKac}(ii). They are bounded operators on $L^2(\Lambda_{\delta,x})$, uniformly in $\delta\geq 0$ and $x\in\mathcal{J}$, and thus by \eqref{eq:diagonalizable}
\begin{align}
\Norml S^*_{\vartheta,\delta,x}(t) u \Normr_{L^2(\Lambda_{\delta,x})}
& \lesssim e^{\tilde{c}_{\theta}t\delta^2}\Norml \tilde{S}_{\vartheta,\delta,x}(t) U_{\theta,\delta,x}u \Normr_{L^2(\Lambda_{\delta,x})}.\label{eq:bound_1}
\end{align}
Consider first $s=0$, so that $H_0^{-s,p}(\R^d)=L^p(\R^d)$. Approximating $u$ by continuous and compactly supported functions, we obtain by Lemma \ref{FeynmanKac}(i), ellipticity of $a_\theta$ and hypercontractivity of the heat kernel on $\mathbb{R}^d$, uniformly in $x\in\mathcal{J}$,
\begin{align*}
\Norml S^*_{\vartheta,\delta,x}(t) u \Normr_{L^2(\Lambda_{\delta,x})}
& \lesssim e^{\tilde{c}_{\theta}t\delta^2} \Norml e^{Ct\Delta}|U_{\theta,\delta,x}u| \Normr_{L^2(\R^d)}\\
& \lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-\gamma}\norm{u}_{L^p(\mathbb{R}^d)}\lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-\gamma}\norm{z}_{L^p(\R^d)}.
\end{align*}
This yields the result for $s=0$. These inequalities also hold for $p=1$, proving the supplementary claim. For $s>0$ and $p>1$ note first that by \cite[Proposition 17(i)]{altmeyer_parameterSemi_2020} we have $\norm{(-t\tilde{A}_{\theta,\delta,x})^{s/2}\tilde{S}_{\theta,\delta,x}(t)z}_{L^2(\Lambda_{\delta,x})}\lesssim_{s} \norm{z}_{L^2(\Lambda_{\delta,x})}$. Inserting this and then the last display with $u$ replaced by $(-\tilde{A}_{\theta,\delta,x})^{-s/2} U_{\theta,\delta,x}u$ into \eqref{eq:bound_1}, we get
\begin{align*}
& \Norml S^*_{\vartheta,\delta,x}(t) u \Normr_{L^2(\Lambda_{\delta,x})}
\lesssim e^{\tilde{c}_{\theta}t\delta^2}\Norml{(-\tilde{A}_{\theta,\delta,x})^{s/2}\tilde{S}_{\theta,\delta,x}(t)(-\tilde{A}_{\theta,\delta,x})^{-s/2}U_{\theta,\delta,x}u}\Normr_{L^2(\Lambda_{\delta,x})}\\
& \quad \lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-s/2}\Norml{\tilde{S}_{\theta,\delta,x}(t/2)(-\tilde{A}_{\theta,\delta,x})^{-s/2}U_{\theta,\delta,x}u}\Normr_{L^2(\Lambda_{\delta,x})}\\
& \quad\lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-s/2-\gamma} \Norml{ U_{\theta,\delta,x}u}\Normr_{H^{-s,p}(\Lambda_{\delta,x})},
\end{align*}
uniformly in $x\in\mathcal{J}$. Since the functions $U_{\theta,\delta,x}$ are smooth and uniformly bounded on $\R^d$ for $x\in\mathcal{J}$, they induce a family of multiplication operators on $H^{s,p}_0(\R^d)$ for $s\geq 0$ with operator norms uniformly bounded in $x\in\mathcal{J}$, cf. \cite[Theorem 2.8.2]{Triebel1983book}. By duality and restriction this transfers to $H^{s,p}_0(\Lambda_{\delta,x})$ for general $s$ according to \cite[Theorem 3.3.2]{Triebel1983book}. Hence,
\begin{align*}
\Norml S^*_{\vartheta,\delta,x}(t) u \Normr_{L^2(\Lambda_{\delta,x})} \lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-s/2-\gamma}\Norml{u}\Normr_{H^{-s,p}(\Lambda_{\delta,x})}\lesssim e^{\tilde{c}_{\theta}t\delta^2}t^{-s/2-\gamma} V_{\operatorname{op}}\norm{z}_{L^{p}(\Lambda_{\delta,x})}.
\end{align*}
\end{proof}
\end{lemma}
\subsection{Proof of the central limit theorem}
\subsubsection{Covariance structure of multiple local measurements}
\begin{lemma}\label{lem:covFun}
\begin{enumerate}[label=(\roman*)]
\item If $X_0=0$, then the Gaussian process from \eqref{eq:Ito} has mean zero and covariance function
\begin{align*}
&\operatorname{Cov}(\sc{X(t)}{z},\sc{X(t')}{z'}) = \int_0^{t\wedge t'} \sc{ S^*_{\theta}(t-s)z}{ S^*_{\theta}(t'-s)z'}ds.
\end{align*}
\item If $X_0$ is the stationary initial condition from Lemma \ref{lem:X0}, then the Gaussian process from \eqref{eq:Ito} has mean zero and covariance function
\begin{align*}
&\operatorname{Cov}(\sc{X(t)}{z},\sc{X(t')}{z'}) = \int_{-(t\wedge t')}^{\infty} \sc{ S^*_{\theta}(t+s)z}{ S^*_{\theta}(t'+s)z'}ds.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) follows from \eqref{eq:weakSolution} and Itô's isometry \cite[Proposition 4.28]{da_prato_stochastic_2014}. For part (ii) we conclude in the same way, noting that the stationary solution given by $\sc{X(t)}{z}=\int_{-\infty}^{t}\sc{S^*_{\theta}(t-s)z}{dW(s)}$ has mean zero.
\end{proof}
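To make part (i) explicit: for $X_0=0$, the weak solution representation \eqref{eq:weakSolution} yields $\sc{X(t)}{z}=\int_0^t\sc{S^*_{\theta}(t-s)z}{dW(s)}$, and Itô's isometry for the cylindrical Wiener process gives the covariance as a single time integral,
\begin{align*}
\operatorname{Cov}(\sc{X(t)}{z},\sc{X(t')}{z'})
&=\E\left[\int_0^{t}\sc{S^*_{\theta}(t-s)z}{dW(s)}\int_0^{t'}\sc{S^*_{\theta}(t'-s)z'}{dW(s)}\right]\\
&=\int_0^{t\wedge t'}\sc{S^*_{\theta}(t-s)z}{S^*_{\theta}(t'-s)z'}ds.
\end{align*}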
\begin{lemma}
\label{ConvFisher} Grant Assumption \ref{assump:mainAssump} and let $X_0=0$. Introduce for $z,z'\in L^2(\R^d)$
\begin{align*}
\Psi_{\theta}(z,z') &= \int_0^\infty\sc{\bar{S}_{\theta}(t')z}{\bar{S}_{\theta}(t')z'}_{L^2(\R^d)} dt'.
\end{align*}
If $n_i+n_j>2-d$ for some $i,j=1,\dots,p$, then $\Psi_{\theta}(\bar{A}_i^*K,\bar{A}_j^*K)$ is well-defined and we have as $\delta\rightarrow 0$
\begin{align*}
\delta^{-2+n_i+n_j}(MT)^{-1}\sum_{k=1}^M\int_{0}^{T}\E\left[\sc{X(t)}{A_i^*K_{\delta,x_k}}\sc{X(t)}{A_j^*K_{\delta,x_k}}\right]dt \rightarrow \Psi_{\theta}(\bar{A}_i^*K,\bar{A}_j^*K).
\end{align*}
\begin{proof}
Fix $i,j$ with $n_i+n_j>2-d$. Then, applying Lemma \ref{lem:covFun}(i), the scaling from Lemma \ref{rescaledsemigroup} and changing variables give \begin{align*}
\delta^{-2+n_i+n_j}(MT)^{-1}\sum_{k=1}^M\int_{0}^{T}\E\left[\sc{X(t)}{A_i^*K_{\delta,x_k}}\sc{X(t)}{A_j^*K_{\delta,x_k}}\right]dt =
\int_0^{\infty}f_{\delta}(t')dt'
\end{align*}
with
\begin{align*}
f_{\delta}(t')=(MT)^{-1}\sum_{k=1}^M\sc{S^*_{\theta,\delta,x_k}(t')A^*_{i,\delta,x_k}K}{S^*_{\theta,\delta,x_k}(t')A^*_{j,\delta,x_k}K}_{L^2(\Lambda_{\delta,x_k})}\int_0^T\mathbf{1}_{\{0\leq t'\leq t\delta^{-2}\}}dt.
\end{align*}
Consider now the differential operators $V_{\delta,x_k}=A^*_{i,\delta,x_k}$. If $D^m$ is a composition of $m$ partial differential operators, then Theorem 1.43 of \cite{yagi_abstract_2009} yields that $D^m$ is a bounded linear operator from $L^{p}(\Lambda)$ to $H_0^{-m,p}(\Lambda)$, implying $\Norml{D^m K_{\delta,x_k}}\Normr_{H^{-m,p}(\Lambda)}\lesssim \delta^{-m}\Norml{K_{\delta,x_k}}\Normr_{L^{p}(\Lambda)}$. Since $(D^m K)_{\delta,x_k}=\delta^m D^m K_{\delta,x_k}$, changing variables gives $\Norml{D^m K}\Normr_{H^{-m,p}(\Lambda_{\delta,x_k})}\lesssim \Norml{K}\Normr_{L^{p}(\Lambda_{\delta,x_k})}$. From this we find $\norm{V_{\delta,x_k}K}_{H^{-n_i,p}(\Lambda_{\delta,x_k})}\lesssim \norm{K}_{L^p(\Lambda_{\delta,x_k})}$, $\norm{V_{\delta,x_k}K}_{L^2(\Lambda_{\delta,x_k})}\lesssim \norm{K}_{H^{n_i}(\R^d)}$, and Lemma \ref{boundS*u} shows for $0\leq t'\leq T\delta^{-2}$, $\varepsilon>0$ and all sufficiently small $\delta> 0$
\begin{equation}
\sup_{x\in\mathcal{J}}\norm{S^*_{\vartheta,\delta,x}(t') A^*_{i,\delta,x}K}_{L^2(\Lambda_{\delta,x})} \lesssim 1\wedge (t')^{-n_i/2-d/4+\varepsilon}.
\label{eq:fisher_5}
\end{equation}
This allows us to infer by the Cauchy-Schwarz inequality that $|f_{\delta}(t')|\lesssim 1\wedge (t')^{-n_i/2-n_j/2-d/2+2\varepsilon}$. In particular, taking $\varepsilon$ so small that $n_i+n_j>2-d-4\varepsilon$ yields $\sup_{\delta> 0}|f_{\delta}|\in L^1([0,\infty))$. Proposition \ref{FeynmanKac}(ii), Lemma \ref{rescaledsemigroup}(ii) and continuity of the $L^2$-scalar product now show pointwise for all $t'>0$ that $f_{\delta}(t')\rightarrow \sc{\bar{S}_{\theta}(t')\bar{A}^*_iK}{\bar{S}_{\theta}(t')\bar{A}^*_jK}_{L^2(\R^d)}$. The desired convergence follows from the dominated convergence theorem, which also shows that $\Psi_{\theta}(\bar{A}_i^*K,\bar{A}^*_jK)$ is well-defined.
\end{proof}
\end{lemma}
\begin{lemma}
\label{ConvouterVar}
Grant Assumption \ref{assump:mainAssump} and let $X_0=0$. If $n_i+n_j> 2-d$ for $i,j=1,\dots,p$, then $\sup_{x\in\mathcal{J}}\operatorname{Var}\left(\int_0^T\sc{X(t)}{A^*_iK_{\delta,x}}\sc{X(t)}{A^*_jK_{\delta,x}}dt\right)=o(\delta^{4-2n_i-2n_j})$.
\begin{proof}
Applying the scaling from Lemma \ref{rescaledsemigroup} and using Wick's theorem \cite[Theorem 1.28]{janson_gaussian_1997}, we have for $x\in\mathcal{J}$
\begin{align*}
&\delta^{2n_i+2n_j}\operatorname{Var}\left(\int_0^T\sc{X(t)}{A_i^*K_{\delta,x}}\sc{X(t)}{A_j^* K_{\delta,x}}dt\right)\\
&\quad =\operatorname{Var}\left(\int_0^T\sc{X(t)}{(A^*_{i,\delta,x}K)_{\delta,x}}\sc{X(t)}{(A^*_{j,\delta,x}K)_{\delta,x}}dt\right) = V_1 + V_2
\end{align*}
with
\begin{align*}
V_1 &= V_{\delta,x}(A^*_{i,\delta,x}K, A^*_{i,\delta,x}K, A^*_{j,\delta,x}K, A^*_{j,\delta,x}K),\\
V_2 &= V_{\delta,x}(A^*_{i,\delta,x} K, A^*_{j,\delta,x} K, A^*_{j,\delta,x} K, A^*_{i,\delta,x} K),
\end{align*}
and where for $v,v',z,z'\in L^2(\Lambda_{\delta,x})$
\begin{align*}
V_{\delta,x}(v,v',z,z') = \int_0^T \int _0^{T} \E[\sc{X(t)}{v_{\delta,x}}\sc{X(t')}{v'_{\delta,x}}]\E[\sc{X(t)}{z_{\delta,x}}\sc{X(t')}{z'_{\delta,x}}]dt'dt.
\end{align*}
We only bound $V_1$ from above; the arguments for $V_2$ are similar. Set $f_{i,j}(s,s')=\sc{S^*_{\theta,\delta,x}(s+s')A^*_{i,\delta,x}K}{S^*_{\theta,\delta,x}(s')A^*_{j,\delta,x}K}_{L^2(\Lambda_{\delta,x})}$. Using Lemma \ref{lem:covFun}(i) and the scaling from Lemma \ref{rescaledsemigroup} we have
\begin{align*}
V_1 = 2\delta^6\int_0^{T}\int_0^{t\delta^{-2}}&\left(\int_0^{t\delta^{-2}-s}f_{i,i}(s,s')ds'\right)\left(\int_0^{t\delta^{-2}-s}f_{j,j}(s,s'')ds''\right)dsdt,
\end{align*}
cf. \cite[Proof of Proposition A.9]{altmeyer_nonparametric_2020}.
From \eqref{eq:fisher_5} and the Cauchy-Schwarz inequality we infer for $\varepsilon>0$
\begin{align*}
\sup_{x\in\mathcal{J}}\left|f_{i,i}(s,s')f_{j,j}(s,s'')\right| \lesssim &\left(1\wedge s^{-n_i/2-n_j/2-d/2+2\varepsilon}\right) \left(1\wedge s'^{-n_i/2-d/4+\varepsilon}\right)\\
&\quad\cdot\left(1\wedge s''^{-n_j/2-d/4+\varepsilon}\right),
\end{align*}
which gives
\begin{align*}
\sup_{x\in\mathcal{J}} \left|V_1\right|
&\lesssim \delta^6 \int_0^{T\delta^{-2}} \left(1\wedge s^{-n_i/2-n_j/2-d/2+2\varepsilon}\right) ds \int_0^{T\delta^{-2}} \left(1\wedge s'^{-n_i/2-d/4+\varepsilon}\right)ds'\\
&\quad\quad\cdot\int_0^{T\delta^{-2}} \left(1\wedge s''^{-n_j/2-d/4+\varepsilon}\right)ds''\\ &\lesssim \delta^6\left(1\vee \delta^{n_i+n_j+d-2-4\varepsilon}\right)\left(1\vee\delta^{n_i+d/2-2-2\varepsilon}\right)\left(1\vee\delta^{n_j+d/2-2-2\varepsilon}\right).
\end{align*}
Without loss of generality let $n_i\leq n_j$. Taking $\varepsilon$ small enough, we can ensure $1\vee \delta^{n_i+n_j+d-2-4\varepsilon}= 1$, as $n_i+n_j>2-d$. The remaining factors then give $\sup_{x\in\mathcal{J}}|V_1|\lesssim \delta^6\vee\delta^{4+n_i+d/2-2\varepsilon}\vee\delta^{2+n_i+n_j+d-4\varepsilon}$, and for sufficiently small $\varepsilon$ each of these exponents exceeds $4$, again by $n_i+n_j>2-d$, so that $\sup_{x\in\mathcal{J}}|V_1|=o(\delta^4)$ as claimed. Note that in $d\leq 2$ the condition $n_i+n_j>2-d$ only excludes the pairs $(n_i,n_j)\in\{(0,0),(0,1)\}$ (up to ordering), while in $d\geq 3$ all pairs are admissible.
\end{proof}
\end{lemma}
\subsubsection{Proof of Theorem \ref{thm:clt}}
\begin{proof}
We begin with the observed Fisher information. Suppose first $X_0=0$. Under Assumption \ref{assump:mainAssump} we find that $n_i+n_j >2-d$ for all $i,j=1,\dots,p$ in all dimensions $d\geq 1$. It follows from Lemmas \ref{ConvFisher} and \ref{ConvouterVar} that
\begin{align*}
(\rho_{\delta}\mathcal{I}_{\delta}\rho_{\delta})_{ij} &=\delta^{-2+n_i+n_j}M^{-1}\sum_{k=1}^M\int_{0}^{T}\sc{X(t)}{A_i^*K_{\delta,x_k}}\sc{X(t)}{A_j^*K_{\delta,x_k}}dt\\
&=T\Psi_{\theta}(\bar{A}^*_iK,\bar{A}^*_jK) + o_{\P}(1) = (\Sigma_{\theta})_{ij} + o_{\P}(1),
\end{align*}
where we conclude using the identity $\Psi_{\theta}(z,z')=\frac{1}{2}\sc{(-\bar{A}_{\theta})^{-1/2}z}{(-\bar{A}_{\theta})^{-1/2}z'}_{L^2(\R^d)}$.
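When $\bar{S}_{\theta}(t')=e^{t'\bar{A}_{\theta}}$ with $\bar{A}_{\theta}$ self-adjoint and strictly negative, this identity follows from a short spectral computation (a sketch under this assumption):
\begin{align*}
\Psi_{\theta}(z,z')=\int_0^\infty\sc{e^{t'\bar{A}_{\theta}}z}{e^{t'\bar{A}_{\theta}}z'}_{L^2(\R^d)}\,dt'
=\int_0^\infty\sc{e^{2t'\bar{A}_{\theta}}z}{z'}_{L^2(\R^d)}\,dt'
=\frac{1}{2}\sc{(-\bar{A}_{\theta})^{-1}z}{z'}_{L^2(\R^d)},
\end{align*}
and the last expression equals $\frac{1}{2}\sc{(-\bar{A}_{\theta})^{-1/2}z}{(-\bar{A}_{\theta})^{-1/2}z'}_{L^2(\R^d)}$.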
This yields for $X_0=0$ the desired convergence \begin{align}
\rho_{\delta} \mathcal{I}_{\delta}\rho_{\delta} \stackrel{\P}\rightarrow \Sigma_{\theta}.\label{eq:Fisher_info_conv}
\end{align}
In order to extend this to the general $X_0$ from Assumption \ref{assump:mainAssump}, let $\bar{X}$ be defined as $X$, but starting at $\bar{X}(0)=0$, so that for $v\in L^2(\Lambda)$, $\sc{X(t)}{v}=\sc{\bar{X}(t)}{v}+\sc{S_{\vartheta}(t)X_0}{v}$. If $\bar{\mathcal{I}}_{\delta}$ is the observed Fisher information corresponding to $\bar{X}$, then by the Cauchy-Schwarz inequality, with $v_{i}=\sup_{k}\delta^{2n_i-2}\int_{0}^{T}\sc{X_0}{S^*_{\theta}(t)A^*_i K_{\delta,x_k}}^2 dt$,
\begin{align*}
& |(\rho_{\delta}\mathcal{I}_{\delta}\rho_{\delta})_{ij}-(\rho_{\delta}\bar{\mathcal{I}}_{\delta}\rho_{\delta})_{ij}|
\lesssim (\rho_{\delta}\bar{\mathcal{I}}_{\delta}\rho_{\delta})_{ii}^{1/2}
v_{j}^{1/2}
+ (\rho_{\delta}\bar{\mathcal{I}}_{\delta}\rho_{\delta})_{jj}^{1/2}
v_{i}^{1/2}
+
v_{i}^{1/2}v_{j}^{1/2}.
\end{align*}
By the first part, $(\rho_{\delta}\bar{\mathcal{I}}_{\delta}\rho_{\delta})_{ii}$ is bounded in probability, and Assumption \ref{assump:mainAssump}(iv) gives $v_{i}=o_{\P}(1)$ for all $i$. From this we obtain \eqref{eq:Fisher_info_conv} again.
Regarding the invertibility of $\Sigma_{\theta}$, let $\lambda\in\R^p$ be such that
\begin{align*}
0 = \sum_{i,j=1}^p \lambda_i\lambda_j(\Sigma_{\theta})_{ij} & = T \Psi_{\theta}\left(\sum_{i=1}^p\lambda_i\bar{A}^*_{i}K,\sum_{i=1}^p\lambda_i\bar{A}^*_iK\right).
\end{align*}
By the definition of $\Psi_{\theta}$ this implies $\bar{S}_{\theta}(t)(\sum_{i=1}^p\lambda_i\bar{A}^*_iK)=0$ for all $t\geq 0$ and thus $\sum_{i=1}^p\lambda_i\bar{A}^*_iK=0$. Since the functions $\bar{A}^*_iK$ are linearly independent by Assumption \ref{assump:mainAssump}(i), we conclude that $\lambda=0$ and hence that $\Sigma_{\theta}$ is invertible.
We proceed next to the proof of the CLT. The augmented MLE and the statement of the limit theorem remain unchanged when $K$ is multiplied by a scalar factor. We can therefore assume without loss of generality that $\norm{K}_{L^2(\R^d)}=1$. By the basic error decomposition \eqref{eq:basicError} and because $\Sigma_{\theta}$ is invertible, this means
\begin{align}
(\rho_{\delta} \mathcal{I}_{\delta}\rho_{\delta})^{1/2}\rho_{\delta}^{-1} (\hat{\theta}_{\delta}-\theta) = (\rho_{\delta}\mathcal{I}_{\delta}\rho_{\delta})^{-1/2} \Sigma_{\theta}^{1/2}(\Sigma_{\theta}^{-1/2}\rho_{\delta}\mathcal{M}_{\delta}).\label{eq:CLT}
\end{align}
Note that $\mathcal{M}_{\delta}=\mathcal{M}_{\delta}(T)$ corresponds to a $p$-dimensional continuous and square integrable martingale $(\mathcal{M}_{\delta}(t))_{0\leq t\leq T}$ with respect to the filtration $(\mathcal{F}_t)_{0\leq t\leq T}$, evaluated at $t=T$. In view of the support condition in Assumption \ref{assump:mainAssump}(i), let $\delta\leq\delta'$ be such that, with the Kronecker delta $\delta_{k,k'}$, for all $k,k'$
\begin{equation*}
\E[W_k(t)W_{k'}(t)] = \E[\sc{W(t)}{K_{\delta,x_k}}\sc{W(t)}{K_{\delta,x_{k'}}}] =\sc{K_{\delta,x_k}}{K_{\delta,x_{k'}}}=\delta_{k,k'}.
\end{equation*}
This means that the Brownian motions $W_k$ and $W_{k'}$ are independent for $k\neq k'$ and thus their quadratic co-variation process satisfies $[W_k,W_{k'}]_t=t\delta_{k,k'}$. From this we infer that the quadratic co-variation process of the martingale $(\mathcal{M}_{\delta}(t))_{0\leq t\leq T}$ at $t=T$ for $\delta\leq\delta'$ is equal to
\begin{align*}
[\mathcal{M}_{\delta}]_T = \sum_{k,k'=1}^M \int_0^T X^A_{\delta,k}(t)X^A_{\delta,k'}(t)d[W_k,W_{k'}]_t=\mathcal{I}_{\delta}.
\end{align*}
Theorem \ref{thm:MartingaleCLT} now implies $\Sigma_{\theta}^{-1/2}\rho_{\delta}\mathcal{M}_{\delta}\stackrel{d}\rightarrow \mathcal{N}(0,I_p)$. Conclude in \eqref{eq:CLT} by \eqref{eq:Fisher_info_conv} and Slutsky's lemma.
\end{proof}
\subsection{Proof of Theorem \ref{thm:lower:bound:M>1}}\label{sec:proof:optimality:result}
In this section, we give the main steps of the proof of Theorem \ref{thm:lower:bound:M>1}. The RKHS computations and the proofs of two key lemmas are deferred to Section~\ref{sec:RKHS:computations} and the appendix.
\subsubsection{Gaussian minimax lower bounds}\label{sec:general_lower_bound_setting}
Let $(\mathbb{P}_{\vartheta})_{\vartheta\in\Theta}$ be a family of probability measures defined on the same measurable space with a parameter set $\Theta\subseteq \mathbb{R}^p$. For $\theta^0,\theta^1\in\Theta$, the (squared) Hellinger distance between $\mathbb{P}_{\theta^0}$ and $\mathbb{P}_{\theta^1}$ is defined by $H^2(\mathbb{P}_{\theta^0},\mathbb{P}_{\theta^1})=\int (\sqrt{d\mathbb{P}_{\theta^0}}-\sqrt{d\mathbb{P}_{\theta^1}})^2$ (see, e.g., \cite[Definition 2.3]{tsybakov_introduction_2008}). Moreover, if $\theta^0,\theta^1\in\Theta$ satisfy
\begin{align}\label{eq_Hellinger_lower_bound_cond}
H^2(\mathbb{P}_{\theta^0},\mathbb{P}_{\theta^1})\leq 1,
\end{align}
then we have the lower bound
\begin{align}\label{eq_Hellinger_lower_bound}
\inf_{\hat\vartheta}\max_{\vartheta\in\{\vartheta^0,\vartheta^1\}}\mathbb{P}_\vartheta\left(|\hat\vartheta-\vartheta|\geq \frac{|\vartheta^0-\vartheta^1|}{2}\right)\geq \frac{1}{4}\frac{2-\sqrt{3}}{4}=:c_3,
\end{align}
where the infimum is taken over all $\R^p$-valued estimators~$\hat\vartheta$ and $|\cdot|$ denotes the Euclidean norm. For a proof of this lower bound, see~\cite[Theorem~2.2(ii)]{tsybakov_introduction_2008}.
Next, let $\mathbb{P}_{\vartheta^0}$ and $\mathbb{P}_{\vartheta^1}$ be two Gaussian measures defined on a separable Hilbert space $\mathcal{H}$ with expectation zero and positive self-adjoint trace-class covariance operators $C_{\vartheta^0}$ and $C_{\vartheta^1}$, respectively. By the spectral theorem, there exist (strictly) positive eigenvalues $(\sigma_j^2)_{j\geq 1}$ and an associated orthonormal system of eigenvectors $(u_j)_{j\geq 1}$ such that $C_{\vartheta^0}=\sum_{j\geq 1}\sigma_j^2(u_j\otimes u_j)$. Given the Gaussian measure $\mathbb{P}_{\vartheta^0}$, we associate the so-called kernel or RKHS $(H_{\vartheta^0},\|\cdot\|_{H_{\vartheta^0}})$ of $\mathbb{P}_{\vartheta^0}$ given by
\begin{align}\label{eq:RKHS:Hilbert:space}
H_{\vartheta^0}=\{h\in \mathcal{H}:\|h\|_{H_{\vartheta^0}}<\infty\},\qquad \|h\|_{H_{\vartheta^0}}^2=\sum_{j\geq 1}\frac{\sc{u_j}{h}_{\mathcal{H}}^2}{\sigma_j^2}
\end{align}
(see, e.g., \cite[Example 4.2]{MR3024389} and also \cite[Chapters 4.1 and 4.3]{MR3024389} and \cite[Chapter 3.6]{gine_mathematical_2016} for other characterizations of the RKHS of a Gaussian measure or process).
Alternatively, we have $H_{\vartheta^0}=C_{\vartheta^0}^{1/2}\mathcal{H}$ and $\|h\|_{H_{\vartheta^0}}=\norm{C_{\vartheta^0}^{-1/2}h}_{\mathcal{H}}$ for $h\in H_{\vartheta^0}$. A useful tool for computing the RKHS is the fact that it behaves well under linear transformations. More precisely, if $L:\mathcal{H}\rightarrow \mathcal{H}'$ is a bounded linear operator between Hilbert spaces, then the image measure $Q_{\vartheta^0}=\mathbb{P}_{\vartheta^0}\circ L^{-1}$ is a centered Gaussian measure having RKHS $L(H_{\vartheta^0})$ with norm $\norm{h}_{L(H_{\vartheta^0})}=\inf\{\norm{f}_{H_{\vartheta^0}}:f\in H_{\vartheta^0},\, Lf=h\}$ (see Proposition 4.1 in \cite{MR3024389} and also Chapter 3.6 in \cite{gine_mathematical_2016}).
Finally, combining \eqref{eq_Hellinger_lower_bound_cond} with the RKHS machinery, we get the following lower bound.
\begin{lemma}\label{lem:Gaussian:lower:bound}
In the above Gaussian setting, suppose that $(u_j)_{j\geq 1}$ is an orthonormal basis of $\mathcal{H}$ and that
\begin{align}\label{eq:Gaussian:lower:bound:condition}
\sum_{j\geq 1}\sigma_j^{-2}\|(C_{\vartheta^1}-C_{\vartheta^0})u_{j}\|_{H_{\vartheta^0}}^2
\leq 1/2.
\end{align}
Then the lower bound in \eqref{eq_Hellinger_lower_bound} holds, that is
\begin{align*}
\inf_{\hat\vartheta}\max_{\vartheta\in\{\vartheta^0,\vartheta^1\}}\mathbb{P}_\vartheta\left(|\hat\vartheta-\vartheta|\geq \frac{|\vartheta^0-\vartheta^1|}{2}\right)\geq c_3.
\end{align*}
\end{lemma}
Lemma \ref{lem:Gaussian:lower:bound} is a consequence of the proof of the Feldman-Hajek theorem \cite[Theorem 2.25]{da_prato_stochastic_2014} in combination with basic properties of the Hellinger distance and the minimax risk. A proof is given in Appendix \ref{proof:lem:Gaussian:lower:bound}.
\subsubsection{Proof of Theorem \ref{thm:lower:bound:M>1}}
Our goal is to apply Lemma \ref{lem:Gaussian:lower:bound} and Corollary \ref{cor:upper:bound:RKHS:norm:M>1_Laplace} to the Gaussian process $X_{\delta}$ under Assumption \ref{assu:lowerBound}. We assume without loss of generality that $\norm{K}_{L^2(\R^d)}=1$.
We choose $\theta^0=(1,0,0)$ and $\theta^1\in\Theta_1\cup\Theta_2\cup\Theta_3$, meaning that the null model is $A_{\vartheta^0}=\Delta$ and the alternatives are $A_{\theta^1}=\theta^1_1\Delta+\theta^1_2(b\cdot\nabla)+\theta^1_3$ for $\theta^1\in\R^3$, where $\theta^1$ lies in one of the parameter classes $\Theta_1$, $\Theta_2$ or $\Theta_3$. For $\vartheta\in\{\vartheta^0,\vartheta^1\}$, let $\P_{\theta,\delta}$ be the law of $X_{\delta}$ on $\mathcal{H}=L^2([0,T])^M$, let $C_{\vartheta,\delta}$ be its covariance operator, and let $(H_{\vartheta,\delta},\norm{\cdot}_{H_{\vartheta,\delta}})$ be the associated RKHS. For $(f_k)_{k=1}^M\in\mathcal{H}$, we have $C_{\theta,\delta}(f_k)_{k=1}^M=(\sum_{l=1}^MC_{\theta,\delta,k,l}f_l)_{k=1}^M$ with (cross-)covariance operators defined by
\begin{align*}
&C_{\theta,\delta,k,l}:L^2([0,T])\rightarrow L^2([0,T]),\\ &C_{\theta,\delta,k,l}f_l(t)=\E_{\theta} [\sc{X_{\delta,l}}{f_l}_{L^2([0,T])}X_{\delta,k}(t)],\qquad 0\leq t\leq T
\end{align*}
(see, e.g.,~Appendix \ref{proof:lem:seriesBound:M>1} for more background on bounded linear operators on $\mathcal{H}$). Because of the stationarity of $X_\delta$ under Assumption \ref{assu:lowerBound}, we have
\begin{align*}
C_{\theta,\delta,k,l}f_l(t)
&=\int_0^t c_{\theta,\delta,k,l}(t-t')f_l(t')\,dt'+\int_t^T c_{\theta,\delta,l,k}(t'-t)f_l(t')\,dt',\quad 0\leq t\leq T
\end{align*}
with covariance kernels $$c_{\theta,\delta,k,l}(t)= \E_{\theta} [X_{\delta,k}(t)X_{\delta,l}(0)],\qquad 0\leq t\leq T.$$
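Indeed, by stationarity of $X_\delta$ we have for $0\leq t'\leq t\leq T$
\begin{align*}
\E_{\theta}[X_{\delta,k}(t)X_{\delta,l}(t')]=\E_{\theta}[X_{\delta,k}(t-t')X_{\delta,l}(0)]=c_{\theta,\delta,k,l}(t-t'),
\end{align*}
while for $t'>t$ the same argument gives $\E_{\theta}[X_{\delta,k}(t)X_{\delta,l}(t')]=c_{\theta,\delta,l,k}(t'-t)$, which explains the two integrals in the previous display.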
Following the notation of Section \ref{sec:general_lower_bound_setting}, let $(\sigma_j^2)_{j\geq 1}$ be the strictly positive eigenvalues of $C_{\vartheta^0,\delta}$ and let $(u_j)_{j\geq 1}$ with $u_j=(u_{j,k})_{k=1}^M\in \mathcal{H}$ be a corresponding orthonormal system of eigenvectors. By Corollary \ref{cor:upper:bound:RKHS:norm:M>1_Laplace}, we have $H_{\vartheta^0,\delta}=H^M$ as sets. Since $H^M$ is dense in $\mathcal{H}$, $(u_j)_{j\geq 1}$ forms an orthonormal basis of $\mathcal{H}$. This means that the first assumption of Lemma \ref{lem:Gaussian:lower:bound} is satisfied. To verify the second assumption in \eqref{eq:Gaussian:lower:bound:condition}, we will use the bound for the RKHS norm in Corollary \ref{cor:upper:bound:RKHS:norm:M>1_Laplace} to turn the left-hand side of \eqref{eq:Gaussian:lower:bound:condition} into a more accessible expression.
\begin{lemma}\label{lem:seriesBound:M>1}
In the above setting, we have
\begin{align*}
&\sum_{j=1}^\infty \sigma_j^{-2}\norm{(C_{\theta^0,\delta}-C_{\theta^1,\delta}) u_{j}}^2_{H_{\vartheta^0,\delta}}\\
&\leq cT \sum_{k,l=1}^M\Big(\frac{\norm{\Delta K}^4_{L^2(\R^d)}}{\delta^{8}}\norm{c_{\theta^0,\delta,k,l}-c_{\theta^1,\delta,k,l}}^2_{L^2([0,T])}+\norm{c_{\theta^0,\delta,k,l}''-c_{\theta^1,\delta,k,l}''}^2_{L^2([0,T])}\Big)
\end{align*}
for all $\delta^2\leq \norm{\Delta K}_{L^2(\R^d)}$ and all $T\geq 1$, where $c>0$ is an absolute constant.
\end{lemma}
The proof of Lemma \ref{lem:seriesBound:M>1} can be found in Appendix \ref{proof:lem:seriesBound:M>1}. Moreover, combining Lemma \ref{lem:covFun}(ii) with perturbation arguments for semigroups, we prove the following upper bound in Section \ref{proof:lem:concrete_lower_bound}.
\begin{lemma}\label{lem:concrete_lower_bound}
In the above setting, let $\theta^1=(\theta_1,\theta_2,\theta_3)\in\Theta_1\cup\Theta_2\cup\Theta_3$ with $M\geq 1$. Then there exists a constant $c>0$, depending only on $K$, such that
\begin{align*}
&\sum_{k, l=1}^M\left(\delta^{-8}\norm{c_{\theta^0,\delta,k,l}-c_{\theta^1,\delta,k,l}}^2_{L^2([0,T])}+\norm{c_{\theta^0,\delta,k,l}''-c_{\theta^1,\delta,k,l}''}^2_{L^2([0,T])}\right)\\
&\qquad\leq cM(\delta^{-2}(1-\theta_1)^2 + \theta_2^2 + \delta^2 \theta_3^2).
\end{align*}
\end{lemma}
Choosing consecutively
\begin{align*}
\vartheta^1&=(\theta_1,0,0)\in\Theta_1, \qquad \theta_1=1+c_2\frac{\delta}{\sqrt{TM}},\nonumber\\
\vartheta^1&=(1,\theta_2,0)\in\Theta_2, \qquad \theta_2=c_2\frac{1}{\sqrt{TM}},\nonumber\\
\vartheta^1&=(1,0,\theta_3)\in\Theta_3, \qquad \theta_3=c_2\min\Big(1,\frac{\delta^{-1}}{\sqrt{TM}}\Big),
\end{align*}
claims (i) and (ii) of Theorem \ref{thm:lower:bound:M>1} follow from Lemma \ref{lem:Gaussian:lower:bound} in combination with Lemmas \ref{lem:seriesBound:M>1} and \ref{lem:concrete_lower_bound}.
\qed
\subsection{RKHS computations}\label{sec:RKHS:computations}
The proofs of the RKHS results from Section \ref{Sec:RKHS:SPDE} rely on basic operations on RKHSs, in particular their behavior under linear transformations (see, e.g., Section \ref{sec:general_lower_bound_setting} and \cite[Chapter 4]{MR3024389} or \cite[Chapter 12]{MR3967104}).
Recall the stationary process $X$ in \eqref{eq:stochConv} and that $A_{\theta}e_j=-\lambda_je_j$. The cylindrical Wiener process can be realised as $W=\sum_{j\geq 1}e_j\beta_j$ for independent scalar Brownian motions $\beta_j$ and we obtain the decomposition
\begin{align}
X(t)=\sum_{j\geq 1}\int_{-\infty}^t e^{-\lambda_j(t-t')}d\beta_j(t')e_j=\sum_{j\geq 1}Y_j(t)e_j,\label{eq:series_decomp}
\end{align}
with independent stationary Ornstein-Uhlenbeck processes $Y_j$ satisfying
\begin{align}\label{eq:OU:SDE}
dY_{j}(t) = -\lambda_jY_j(t)dt+d\beta_j(t).
\end{align}
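Each $Y_j$ is a centered stationary Gaussian process; a direct computation from \eqref{eq:series_decomp} gives the covariance
\begin{align*}
\E[Y_j(t)Y_j(t')]=\int_{-\infty}^{t\wedge t'}e^{-\lambda_j(t-s)}e^{-\lambda_j(t'-s)}\,ds=\frac{e^{-\lambda_j|t-t'|}}{2\lambda_j},
\end{align*}
so that in particular $\E[Y_j(t)^2]=(2\lambda_j)^{-1}$.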
For a non-decreasing sequence $(\mu_j)$ of positive real numbers, take $\mathcal{H}_1$ to be the closure of $\mathcal{H}$ under the norm
\begin{align*}
\norm{z}^2_{\mathcal{H}_1} = \sum_{j\geq 1}\frac{1}{\mu_j^2}\sc{z}{e_j}^2_{\mathcal{H}},
\end{align*}
such that $\mathcal{H}$ is continuously embedded in $\mathcal{H}_1$. If
\begin{align}\label{eq:existence:SPDE}
\int_{0}^{\infty}\norm{S_{\theta}(t')}^2_{\operatorname{HS}(\mathcal{H},\mathcal{H}_1)}dt'
& = \sum_{j\geq 1}\int_{0}^{\infty} \norm{S_{\theta}(t')e_j}^2_{\mathcal{H}_1}dt'\nonumber\\
&= \sum_{j\geq 1}\frac{1}{\mu_j^2}\int_{0}^{\infty} e^{-2\lambda_jt'}dt'=\sum_{j\geq 1}\frac{1}{2\lambda_j\mu_j^2}<\infty,
\end{align}
then we conclude by \cite[Theorem 5.2]{da_prato_stochastic_2014} that the law of $X$ induces a Gaussian measure on $L^2([0,T];\mathcal{H}_1)$. A first universal choice is given by $\mu_j=j$ for all $j\geq 1$. Moreover, if $A_{\vartheta}$ is a differential operator, then Weyl's law \cite[Lemma 2.3]{shimakura_partial_1992} says that the $\lambda_j$ are positive real numbers of the order $j^{2/d}$, meaning that the choice $\mu_j=\lambda_j^{s/2}$ is possible whenever $s\geq 0$ and $s+1>d/2$. In this case, $\mathcal{H}_1$ corresponds to a Sobolev space of negative order $-s$ induced by the eigensequence $(\lambda_j,e_j)_{j\geq 1}$.
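The admissibility of the choice $\mu_j=\lambda_j^{s/2}$ can be checked directly; the following computation is a sketch using Weyl's law $\lambda_j\asymp j^{2/d}$:
\begin{align*}
\sum_{j\geq 1}\frac{1}{\mu_j^2}\int_0^\infty e^{-2\lambda_jt'}\,dt'=\sum_{j\geq 1}\frac{1}{2\lambda_j^{s+1}}\asymp\sum_{j\geq 1}j^{-2(s+1)/d},
\end{align*}
which is finite if and only if $2(s+1)/d>1$, that is, $s+1>d/2$.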
\subsubsection{RKHS of an Ornstein-Uhlenbeck process}
We start by computing the RKHS $(H_{Y_j},\norm{\cdot}_{Y_{j}})$ of the processes $Y_j$. We show that the RKHS is, as a set, independent of $j$ and equal to
\begin{align*}
H=\{h\in L^2([0,T]):h\text{ absolutely continuous}, h'\in L^2([0,T])\}
\end{align*}
from Theorem \ref{thm:upper:bound:RKHS:norm:M>1}, while the corresponding norm depends on the parameter $\lambda_j$.
\begin{lemma}\label{lemma:RKHS_norm_OU_process}
For every $j\geq 1$ we have $H_{Y_j}=H$ and
\begin{align}\label{eq:RKHS_norm_OU_process}
\norm{h}^2_{Y_j}
&= \lambda_j^2 \norm{h}^2_{L^2([0,T])} + \lambda_j(h^2(T)+h^2(0)) + \norm{h'}^2_{L^2([0,T])}.
\end{align}
\end{lemma}
\begin{proof}
By Example 4.4 in \cite{MR3024389}, a scalar Brownian motion $(\beta(t))_{0\leq t\leq T}$ starting in zero has RKHS $H_{\beta}=\{h:h(0)=0, h\text{ absolutely continuous, }h,h'\in L^2([0,T])\}$ with norm $\norm{h}_{H_\beta}^2=\int_0^T(h'(t))^2\,dt$. Moreover, the Brownian motion $(\bar \beta(t))_{0\leq t\leq T}$ with $\bar \beta(t)=X_0+\beta(t)$, where $X_0$ is a standard Gaussian random variable independent of $(\beta(t))_{0\leq t\leq T}$, has RKHS $$H_{\bar \beta}=\{\alpha+h:\alpha\in\R,h\in H_{\beta}\}=H,\quad \norm{h}_{H_{\bar \beta}}^2=\int_0^T(h'(t))^2\,dt+h^2(0),$$
as can be seen from Proposition 4.1 in \cite{MR3024389} or Example 12.28 in \cite{MR3967104}. To compute the RKHS of $Y_j$ we now proceed similarly as in Example 4.8 in \cite{MR3024389}. Define the bounded linear map $L:L^2([0,e^{2\lambda_j T}-1])\rightarrow L^2([0,T])$, $(Lf)(t)=(2\lambda_j)^{-1/2}e^{-\lambda_j t}f(e^{2\lambda_j t}-1)$. Then we have $L\bar \beta=Y_j$ in distribution and $L$ is bijective with inverse $L^{-1}h(s)=\sqrt{2\lambda_j (s+1)}h((2\lambda_j)^{-1}\log (s+1))$, $0\leq s\leq e^{2\lambda_j T}-1$. By Proposition 4.1 in \cite{MR3024389} (see also the discussion after \eqref{eq:RKHS:Hilbert:space}), we conclude that $H_{Y_j}=L(H_{\bar \beta})=L(H)=H$ with
\begin{align*}
\|h\|_{Y_j}^2&=\|L^{-1}h\|_{H_{\bar \beta}}^2\\
&=\int_{0}^{e^{2\lambda_j T}-1}\Big(\frac{d}{ds}\sqrt{2\lambda_j (s+1)}h\Big(\frac{1}{2\lambda_j}\log (s+1)\Big)\Big)^2\,ds+2\lambda_j h^2(0)\\
&=\int_0^T(\lambda_j h(t)+h'(t))^2\,dt+2\lambda_j h^2(0)\\
&=\lambda_j^2\int_0^Th^2(t)\,dt+\lambda_j(h^2(T)+h^2(0))+\int_0^T(h'(t))^2\,dt.
\end{align*}
This completes the proof.
\end{proof}
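For the last step in the above computation, note that expanding the square and integrating the cross term gives
\begin{align*}
\int_0^T(\lambda_j h(t)+h'(t))^2\,dt=\lambda_j^2\int_0^Th^2(t)\,dt+\lambda_j\big(h^2(T)-h^2(0)\big)+\int_0^T(h'(t))^2\,dt,
\end{align*}
since $2\lambda_j\int_0^T h(t)h'(t)\,dt=\lambda_j\int_0^T (h^2)'(t)\,dt$; together with the additional term $2\lambda_j h^2(0)$ this yields \eqref{eq:RKHS_norm_OU_process}.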
\subsubsection{RKHS of the SPDE}
We compute next the RKHS of the process $X$. Let us start with the following series representation, which is independent of $\mathcal{H}_1$.
\begin{lemma}\label{lem:RKHS:SPDE}
The RKHS $(H_{X},\|\cdot\|_{X})$ of the process $X$ in \eqref{eq:stochConv} satisfies $$H_X=\Big\{h=\sum_{j\geq 1}h_je_j:h_j\in H,\|h\|_{X}<\infty\Big\}\quad\text{and}\quad \|h\|_{X}^2=\sum_{j\geq 1}\|h_j\|_{Y_j}^2.$$
\end{lemma}
\begin{proof}
For simplicity, choose $\mu_j=j$ for all $j\geq 1$. Since $X(t)=\sum_{j\geq 1}j^{-1}Y_j(t)\tilde e_j$ with orthonormal basis $\tilde e_j=je_j$ of $\mathcal{H}_1$, the covariance operator $C_X$ of $X$ is isomorphic to $\bigoplus_{j\geq 1}j^{-2}C_{Y_j}$ with $C_{Y_j}:L^2([0,T])\rightarrow L^2([0,T])$ being the covariance operator of $Y_j$. Hence, using the definition of the RKHS given after \eqref{eq:RKHS:Hilbert:space}, the RKHS of $X$ consists of all elements of the form
\begin{align*}
h=C_X^{1/2}f=\sum_{j\geq 1}j^{-1}(C_{Y_j}^{1/2}f_j)\tilde e_j=\sum_{j\geq 1}(C_{Y_j}^{1/2}f_j) e_j
\end{align*}
with $f=\sum_{j\geq 1}f_j\tilde e_j\in L^2([0,T];\mathcal{H}_1)$ and we have
\begin{align}\label{eq:RKHS:SPDE:norm}
\|h\|_{X}^2&=\|C_X^{-1/2}h\|_{L^2([0,T];\mathcal{H}_1)}^2\nonumber\\
&=\|f\|_{L^2([0,T];\mathcal{H}_1)}^2=\int_0^T\|f(t)\|_{\mathcal{H}_1}^2dt=\sum_{j\geq 1}\|f_j\|_{L^2([0,T])}^2<\infty.
\end{align}
Using Lemma \ref{lemma:RKHS_norm_OU_process}, we can write $h=\sum_{j\geq 1}h_je_j$ with $h_j=C_{Y_j}^{1/2}f_j\in H$ and $$\|h_j\|_{Y_j}^2=\|C_{Y_{j}}^{-1/2}h_j\|_{L^2([0,T])}^2=\|f_j\|_{L^2([0,T])}^2.$$
Inserting this into \eqref{eq:RKHS:SPDE:norm}, we conclude that
\begin{align*}
\|h\|_{X}^2=\sum_{j\geq 1}\|f_j\|_{L^2([0,T])}^2=\sum_{j\geq 1}\|h_j\|_{Y_j}^2,
\end{align*}
which completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:RKHS:SPDE}] In the proof we write
\begin{align*}
\tilde H_X
& = \{h\in L^2([0,T];\mathcal{H}):A_{\theta} h,h'\in L^2([0,T];\mathcal{H})\}
\end{align*}
and
\begin{align*}
\|h\|_{\tilde{H}_X}^2
&=\|A_{\theta} h\|_{L^2([0,T];\mathcal{H})}^2
+\|h'\|_{L^2([0,T];\mathcal{H})}^2\\
&\quad\quad+\|(-A_{\theta})^{1/2} h(0)\|^2_{\mathcal{H}}+\|(-A_{\theta})^{1/2} h(T)\|^2_{\mathcal{H}}.
\end{align*}
By the RKHS computations in Lemma \ref{lem:RKHS:SPDE} it remains to check that $H_X=\tilde{H}_X$ and $\norm{h}_X=\norm{h}_{\tilde H_X}$ for all $h\in H_X$. First, let $h=\sum_{j\geq 1}h_je_j\in H_X$. Then $h$ is absolutely continuous and we have
\begin{align}
h'&=\sum_{j\geq 1}h_j'e_j\in L^2([0,T];\mathcal{H}),\nonumber\\
A_{\vartheta}h&=-\sum_{j\geq 1}\lambda_jh_je_j\in L^2([0,T];\mathcal{H}).\label{eq:characterization:RKHS:SPDE}
\end{align}
Hence, $h\in\tilde H_X$ and therefore $H_X\subseteq\tilde{H}_X$. To see the second claim in \eqref{eq:characterization:RKHS:SPDE}, set $h_m=\sum_{j= 1}^mh_je_j$ and $g_m=-\sum_{j= 1}^m\lambda_jh_je_j$ for $m\geq 1$. Then, $h_m(t)$ and $g_m(t)$ are in $\mathcal{H}$ for all $t\in[0,T]$ and we have $A_{\vartheta}h_m=g_m$ because $(\lambda_j,e_j)_{j\geq 1}$ is an eigensequence of $-A_{\theta}$. Moreover, $h_m(t)\rightarrow h(t)$ and $A_{\vartheta}h_m(t)=g_m(t)\rightarrow g(t)=-\sum_{j\geq 1}\lambda_jh_j(t)e_j$ for a.e.~$t$. Since $A_{\vartheta}$ is closed, we conclude that $A_{\vartheta}h(t)=g(t)$ for a.e.~$t$. Next, let $h\in\tilde H_X$. Since $(-A_{\theta})^{1/2}$ is self-adjoint, we have for all $0\leq t\leq T$ (cf.~the proof of \cite[Theorem 5.9.3]{evans_partial_2010})
\begin{align*}
2\norm{(-A_{\theta})^{1/2}h(t)}^2_{\mathcal{H}}&=2\sc{(-A_{\theta})^{1/2}h(t)}{(-A_{\theta})^{1/2}h(t)}_{\mathcal{H}}= 2\sc{(-A_{\theta})h(t)}{h(t)}_{\mathcal{H}}\\
&\leq 2\norm{A_{\theta} h(t)}_{\mathcal{H}}\norm{h(t)}_{\mathcal{H}}\leq \norm{A_{\theta} h(t)}_{\mathcal{H}}^2+\norm{h(t)}_{\mathcal{H}}^2
\end{align*}
and
\begin{align*}
\frac{d}{dt}\norm{(-A_{\theta})^{1/2}h(t)}^2_{\mathcal{H}} &= 2\sc{(-A_{\theta})^{1/2}h(t)}{(-A_{\theta})^{1/2}h'(t)}_{\mathcal{H}} = 2\sc{(-A_{\theta})h(t)}{h'(t)}_{\mathcal{H}},
\end{align*}
where the absolute value of the latter term is bounded by $\norm{A_{\theta} h(t)}_{\mathcal{H}}^2+\norm{h'(t)}_{\mathcal{H}}^2$.
Letting $t_0\in[0,T]$ be such that $\norm{(-A_{\theta})^{1/2}h(t_0)}^2_{\mathcal{H}}=T^{-1}\int_0^T\norm{(-A_{\theta})^{1/2}h(t)}^2_{\mathcal{H}}dt$, we deduce that
\begin{align*}
&\norm{(-A_{\theta})^{1/2}h(0)}^2_{\mathcal{H}}+\norm{(-A_{\theta})^{1/2}h(T)}^2_{\mathcal{H}}\\
&=\Big(\int_{t_0}^0+\int_{t_0}^T\Big)\frac{d}{dt}\norm{(-A_{\theta})^{1/2}h(t)}^2_{\mathcal{H}}dt+2T^{-1}\int_0^T\norm{(-A_{\theta})^{1/2}h(t)}^2_{\mathcal{H}}dt\\
&\leq 2\int_0^T\norm{A_{\theta} h(t)}_{\mathcal{H}}^2dt+\int_0^T\norm{h(t)}_{\mathcal{H}}^2dt+\int_0^T\norm{h'(t)}_{\mathcal{H}}^2dt,
\end{align*}
where we also used that $T\geq 1$. The latter can be written as
\begin{align}
&\norm{(-A_{\theta})^{1/2}h(0)}^2_{\mathcal{H}}+\norm{(-A_{\theta})^{1/2}h(T)}^2_{\mathcal{H}}\nonumber\\
&\leq 2\norm{A_{\theta}h}^2_{L^2([0,T];\mathcal{H})} + \norm{h}^2_{L^2([0,T];\mathcal{H})} + \norm{h'}^2_{L^2([0,T];\mathcal{H})},\label{eq:upper:bound:RKHS:norm:SPDE}
\end{align}
so that $\norm{h}_{\tilde H_X}<\infty$. Moreover, writing $h=\sum_{j\geq 1}h_je_j$, we have $(-A_{\theta})^{1/2}h(t)=\sum_{j\geq 1}\lambda_j^{1/2}h_j(t)e_j$ and the relations in \eqref{eq:characterization:RKHS:SPDE} continue to hold, as can be seen from the identities $\sc{(-A_{\vartheta})^{1/2}h(t)}{e_j}=\sc{h(t)}{(-A_{\vartheta})^{1/2}e_j}=\lambda_j^{1/2}h_j(t)$, $\sc{A_{\vartheta}h(t)}{e_j}=\lambda_jh_j(t)$ and $\sc{h'}{e_j}=\sc{h}{e_j}'=h_j'$ (for the latter, see also \cite[Proposition A.22]{liu_stochastic_2015}). It follows that
\begin{align*}
\norm{h}_{X}^2=\sum_{j\geq 1}(\lambda_j^2 \norm{h_j}^2_{L^2([0,T])} + \lambda_j(h_j^2(T)+h_j^2(0)) + \norm{h'_j}^2_{L^2([0,T])})=\|h\|_{\tilde{H}_X}^2<\infty.
\end{align*}
Hence, $h\in H_X$ and therefore $\tilde H_X\subseteq H_X$. Since we have also shown that the norms are equal, this completes the proof. The upper bound for the RKHS norm follows from inserting \eqref{eq:upper:bound:RKHS:norm:SPDE}.
\end{proof}
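As an illustrative numerical sanity check (not part of the proof), the bound obtained by combining \eqref{eq:upper:bound:RKHS:norm:SPDE} with the spectral form of the norm, $\norm{h}_{X}^2\leq 3\norm{A_{\theta}h}^2_{L^2([0,T];\mathcal{H})}+\norm{h}^2_{L^2([0,T];\mathcal{H})}+2\norm{h'}^2_{L^2([0,T];\mathcal{H})}$, can be verified for the Dirichlet Laplacian on $(0,1)$, where $\lambda_j=(j\pi)^2$; the coefficient functions $h_j(t)$ below are arbitrary smooth choices for illustration.

```python
import numpy as np

# Sanity check of the RKHS-norm bound
#   ||h||_X^2 <= 3||A h||^2 + ||h||^2 + 2||h'||^2   (L^2([0,T];H) norms)
# for the Dirichlet Laplacian on (0,1): lambda_j = (j*pi)^2.
T = 1.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def l2sq(f):
    """Trapezoidal approximation of int_0^T f(t)^2 dt along the last axis."""
    g = f ** 2
    return (0.5 * (g[..., 1:] + g[..., :-1]) * dt).sum(axis=-1)

lam = (np.array([1.0, 2.0, 3.0]) * np.pi) ** 2
h = np.array([np.sin(t), 0.5 * t, 0.2 * t ** 2])    # h_j(t), arbitrary choices
hp = np.array([np.cos(t), 0.5 + 0.0 * t, 0.4 * t])  # derivatives h_j'(t)

I_h, I_hp = l2sq(h), l2sq(hp)
norm_X_sq = np.sum(lam ** 2 * I_h + lam * (h[:, -1] ** 2 + h[:, 0] ** 2) + I_hp)
bound = 3.0 * np.sum(lam ** 2 * I_h) + np.sum(I_h) + 2.0 * np.sum(I_hp)

print(norm_X_sq, bound)  # the bound dominates
assert norm_X_sq <= bound
```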
\subsubsection{RKHS of multiple measurements}
In this section, we deduce Theorem \ref{thm:upper:bound:RKHS:norm:M>1} from Theorem \ref{thm:RKHS:SPDE}. This requires $K_1,\dots,K_M$ to lie in the dual space of $\mathcal{H}_1$; in the case $A_{\vartheta}=\Delta$, this means that $K_1,\dots,K_M$ lie in a Sobolev space of order $s>d/2-1$ (see the beginning of Section \ref{sec:RKHS:computations}). In Appendix \ref{app:RKHS:measurements:approximation:argument}, we give a second, slightly more technical proof based on an approximation argument, which establishes the claim under the weaker assumption $K_1,\dots,K_M\in \mathcal{D}(A_{\theta})$.
\begin{proof}[First proof of Theorem~\ref{thm:upper:bound:RKHS:norm:M>1}]
For a non-decreasing sequence $(\mu_j)$ of positive real numbers, take
$$\mathcal{H}_\mu=\{f\in \mathcal{H}:\norm{f}_{\mathcal{H}_\mu}^2=\sum_{j\geq 1}\mu_j^2\sc{f}{e_j}^2_{\mathcal{H}}<\infty\},$$
and take $\mathcal{H}_1=\mathcal{H}_\mu'$ to be the closure of $\mathcal{H}$ under the norm
\begin{align*}
\norm{z}^2_{\mathcal{H}_\mu'} = \sum_{j\geq 1}\frac{1}{\mu_j^2}\sc{z}{e_j}^2_{\mathcal{H}}.
\end{align*}
Then, $\mathcal{H}_\mu$ is continuously embedded in $\mathcal{H}$ and $\mathcal{H}_\mu'$ is indeed the dual of $\mathcal{H}_\mu$. Moreover, we can extend $\sc{f}{g}=\sc{f}{g}_{\mathcal{H}}$ to pairs $f\in \mathcal{H}_\mu$ and $g\in \mathcal{H}_\mu'$ and we have the (generalised) Cauchy-Schwarz inequality
\begin{align}\label{eq:generalized:CS}
|\sc{f}{g}|\leq \norm{f}_{\mathcal{H}_\mu}\norm{g}_{\mathcal{H}_\mu'}.
\end{align}
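In coordinates, \eqref{eq:generalized:CS} is the ordinary Cauchy-Schwarz inequality applied to the rescaled coefficient sequences $(\mu_jf_j)$ and $(g_j/\mu_j)$; a minimal numerical illustration (with an arbitrary weight sequence and random coefficients):

```python
import numpy as np

# Weighted Cauchy-Schwarz: |<f,g>| <= ||f||_mu * ||g||_mu', where
# ||f||_mu^2 = sum mu_j^2 f_j^2 and ||g||_mu'^2 = sum g_j^2 / mu_j^2,
# since <f,g> = sum f_j g_j = sum (mu_j f_j)(g_j / mu_j).
rng = np.random.default_rng(1)
mu = np.arange(1, 51, dtype=float)   # a non-decreasing positive sequence
f = rng.standard_normal(50) / mu     # decaying coefficients, so f lies in H_mu
g = rng.standard_normal(50)

lhs = abs(np.dot(f, g))
rhs = np.sqrt(np.sum(mu ** 2 * f ** 2)) * np.sqrt(np.sum(g ** 2 / mu ** 2))
assert lhs <= rhs + 1e-12
```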
We choose the sequence $(\mu_j)$ such that \eqref{eq:existence:SPDE} holds, meaning that $X$ can be considered as a Gaussian random variable in $L^2([0,T];\mathcal{H}_\mu')$. For $K_1,\dots,K_M\in \mathcal{H}_\mu$, consider now the linear map
\begin{align*}
&L:L^2([0,T];\mathcal{H}_\mu')\rightarrow L^2([0,T])^M,\\
&Lf(t)= (\sc{K_k}{f(t)})_{k=1}^M, t\in[0,T].
\end{align*}
Then, $LX=X_K$ in distribution. Using \eqref{eq:generalized:CS}, it is easy to see that $L$ is a bounded operator with norm bounded by $(\sum_{k=1}^M\norm{K_k}_{\mathcal{H}_\mu}^2)^{1/2}$:
\begin{align*}
\sum_{k=1}^M\int_0^T\sc{K_k}{f(t)}^2dt&\leq
\Big(\sum_{k=1}^M\norm{K_k}_{\mathcal{H}_\mu}^2\Big)\norm{f}_{L^2([0,T];\mathcal{H}_\mu')}^2.
\end{align*}
Next, we show that $L(H_X)=H^M$. First, for $(h_k)_{k=1}^M\in H^M$, the function
\begin{align}\label{eq:inverse:image}
f=\sum_{k,l=1}^M G^{-1}_{k,l} K_kh_l\in H_{X}\quad\text{satisfies}\quad Lf=(h_k)_{k=1}^M.
\end{align}
Hence $H^M \subseteq L(H_X)$. To see the reverse inclusion, let $f\in H_X$ and set $(h_k)_{k=1}^M=Lf$, so that $h_k(t)= \sc{K_k}{f(t)}$. By the definition of $H_X$ and properties of the Bochner integral (see, e.g., \cite[Proposition A.22]{liu_stochastic_2015}), the $h_k$ are absolutely continuous with derivatives $h_k'(t)=\sc{K_k}{f'(t)}$, and we have
\begin{align*}
\int_0^T (h_k'(t))^2dt\leq \norm{K_k}_{\mathcal{H}}^2\int_0^T\norm{f'(t)}_{\mathcal{H}}^2dt=\norm{K_k}_{\mathcal{H}}^2\norm{f'}_{L^2([0,T];\mathcal{H})}^2<\infty.
\end{align*}
We get $h_k\in H$ for all $k=1,\dots,M$. Hence, $L(H_X)\subseteq H^M$ and therefore $L(H_{X})=H^M$. It remains to prove the bound for the norm. Using \eqref{eq:inverse:image} and the behavior of the RKHS under linear transformation (see Proposition 4.1 in \cite{MR3024389}), we have
\begin{align*}
&\norm{(h_k)_{k=1}^M}_{X_K}^2\leq \norm{\sum_{k,l=1}^M G^{-1}_{k,l} K_kh_l}_{X}^2\\
&\leq 3\|\sum_{k,l=1}^M G^{-1}_{k,l}A_{\theta} K_kh_l\|_{L^2([0,T];L^2(\Lambda))}^2+\|\sum_{k,l=1}^M G^{-1}_{k,l} K_kh_l\|_{L^2([0,T];L^2(\Lambda))}^2\\
&+2\| \sum_{k,l=1}^M G^{-1}_{k,l}K_kh_l'\|_{L^2([0,T];L^2(\Lambda))}^2.
\end{align*}
Using the definition of $G_{A_{\theta}}$, the last display becomes
\begin{align*}
&\norm{(h_k)_{k=1}^M}_{X_K}^2\\
&\leq 3\int_0^T \sum_{k,l=1}^M(G^{-1}G_{A_{\theta}} G^{-1})_{kl}h_k(t)h_l(t)\,dt+\int_0^T \sum_{k,l=1}^M(G^{-1})_{kl}h_k(t)h_l(t)\,dt\\
&+2\int_0^T \sum_{k,l=1}^M(G^{-1})_{kl}h_k'(t)h_l'(t)\,dt,
\end{align*}
and the claim follows from standard results for the operator norm of symmetric matrices.
\end{proof}
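The final step of the proof bounds quadratic forms $\sum_{k,l}B_{kl}x_kx_l$ by the operator norm of the symmetric matrix $B$; a minimal numerical check with a random Gram matrix (all data below are illustrative):

```python
import numpy as np

# Quadratic-form bound for a symmetric matrix B:
#   sum_{k,l} B_{kl} x_k x_l <= ||B||_op * sum_k x_k^2,
# here with B = G^{-1} for a randomly generated Gram matrix G.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))       # 4 illustrative "kernels" in R^3
G = A @ A.T + 0.1 * np.eye(4)         # Gram matrix, symmetric positive definite
B = np.linalg.inv(G)
op_norm = np.linalg.eigvalsh(B).max() # operator norm of a symmetric matrix

x = rng.standard_normal(4)
quad = x @ B @ x
assert quad <= op_norm * np.dot(x, x) + 1e-9
```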
\begin{proof}[Proof of Corollary \ref{cor:upper:bound:RKHS:norm:M>1_Laplace}]
Since the Laplace operator is negative and self-adjoint, the stochastic convolution \eqref{eq:stochConv} is just the weak solution in \eqref{eq:weakSolution} and $\mathcal{H}=L^2(\Lambda)$. If $(K_k)_{k=1}^M=(K_{\delta,x_k})_{k=1}^M$ with $\norm{K}_{L^2(\R^d)}=1$, then $K_1,\dots,K_M$ have disjoint supports and satisfy the assumptions of Theorem \ref{thm:upper:bound:RKHS:norm:M>1} with $G=I_M$ being the identity matrix and $G_\Delta$ being a diagonal matrix with $(G_\Delta)_{kk}=\norm{\Delta K_{\delta,x_k}}^2_{L^2(\R^d)}$. By construction and the Cauchy-Schwarz inequality, we have $\norm{K_{\delta,x_k}}=1$ and $\norm{\Delta K_{\delta,x_k}}\leq \delta^{-2}\norm{\Delta K}_{L^2(\R^d)}$. From Theorem \ref{thm:upper:bound:RKHS:norm:M>1}, we obtain the RKHS $H_{X_K}=H^M$ of $X_K$ with the claimed upper bound on its norm, where we also used that $\delta^2\leq \norm{\Delta K}_{L^2(\R^d)}$ by assumption.
\end{proof}
\section{Turbulent kinetic energy in the convection zone}
We define the turbulent kinetic energy
in terms of the turbulent velocity fluctuations
$\delta v$ and the density $\rho $ as
\begin{equation}
E_\text{kin}^\text{turb} = \frac{1}{2}\int_\text{PCS} \delta v^2 \rho \mathrm{d}V,
\end{equation}
where the volume element $\mathrm{d}V$ includes the appropriate relativistic metric factors\footnote{The deviation of the Lorentz factor from unity is negligible in the proto-compact star because the fluid velocities are small.}, i.e., $\mathrm{d} V=2\pi \phi^6 r^2 \sin\theta \,\mathrm{d}r\, \mathrm{d}\theta$ in axisymmetry with the conformal factor $\phi$. The integral is performed over the entire proto-compact star ($\mathrm{PCS}$), i.e., for densities above $10^{11}\, \mathrm{g}\,\mathrm{cm}^{-3}$. The turbulent velocity fluctuations are given by
\begin{equation}
\delta v^2=(v_r-\langle v_r\rangle)^2+
v_\theta^2,
\end{equation}
where $\langle v_r\rangle$ is the spherical Favre average of the radial velocity.
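A schematic evaluation of this integral on an axisymmetric $(r,\theta)$ grid might look like the sketch below; all fields (density, velocities, conformal factor) are toy placeholders standing in for simulation data, not output of the actual code.

```python
import numpy as np

# Sketch: E_kin^turb = (1/2) * int_PCS delta_v^2 rho dV, with
# dV = 2*pi*phi^6 r^2 sin(theta) dr dtheta and
# delta_v^2 = (v_r - <v_r>)^2 + v_theta^2, where <v_r> is the spherical
# Favre (density-weighted) average of v_r at each radius.
nr, nth = 200, 64
r = np.linspace(1.0, 60.0, nr)               # radial grid (illustrative units)
th = np.linspace(0.0, np.pi, nth)
R, TH = np.meshgrid(r, th, indexing="ij")

rho = 1e13 * np.exp(-R / 15.0)               # toy density profile
phi6 = np.ones_like(R)                       # conformal factor ~ 1 for the sketch
v_r = 0.02 * np.sin(3 * TH) + 0.01           # toy velocity fields
v_th = 0.015 * np.cos(2 * TH)

dV = 2 * np.pi * phi6 * R ** 2 * np.sin(TH)  # volume weight (per dr dtheta)

# Favre average of v_r on each spherical shell
w = rho * dV
vr_mean = (v_r * w).sum(axis=1) / w.sum(axis=1)

dv2 = (v_r - vr_mean[:, None]) ** 2 + v_th ** 2
mask = rho > 1e11                            # restrict to the PCS
dr, dth = r[1] - r[0], th[1] - th[0]
E_turb = 0.5 * np.sum(dv2 * rho * dV * mask) * dr * dth
assert E_turb > 0
```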
\begin{figure}[H]
\centering
\includegraphics[width=0.8\columnwidth]{plots/ekin_turb.pdf}
\caption{Turbulent kinetic energy of the PCS as function of post-bounce time for the progenitors \texttt{z35} (grey) and \texttt{z85} (black) using the \texttt{SFHx} EoS (dashed lines) and \texttt{CMF} EoS (solid lines).}
\label{fig:ekin turb}
\end{figure}
We plot $E_\text{kin}^\text{turb}$ for all four models as function of time in Figure~\ref{fig:ekin turb}. The \texttt{CMF} models show higher turbulent kinetic energies compared to the \texttt{SFHx} models, especially for the lighter \texttt{z35} progenitor. In this case, the turbulent kinetic energy in the PCS convection zone is about an order of magnitude higher in the \texttt{CMF} EoS compared to \texttt{SFHx}. For \texttt{z85}, the difference is modest but robust, and the rise of the turbulent kinetic energy to the plateau at $\sim 5\times 10^{50}\, \mathrm{erg}$ occurs significantly earlier.
\section{Electron fraction gradient}
\begin{figure}[H]
\centering
\includegraphics[width=\columnwidth]{plots/dydr_evolv.pdf}
\caption{Evolution of the gradient $\mathrm{d}Y_e/\mathrm{d}r$
of the spherically-averaged electron fraction
as function of post-bounce time for the \texttt{CMF} (left) and \texttt{SFHx} (right) models, for \texttt{z85} (upper panel) and \texttt{z35} (lower panel). The black circles approximately track the PCS convection zone. Red dashes show buoyantly stable regions where $\omega_\mathrm{BV}^2 > 0$. Those areas are susceptible to quadrupolar perturbations at the frequency of the $g^2_1$-mode. The black-dashed vertical line roughly corresponds to the onset of the $g^2_1$-signal in the GW spectrograms.}
\label{fig:dydr}
\end{figure}
The differences in the Brunt-V\"ais\"al\"a frequency in the region below the PCS convection zone in the various models affect the frequency of the $g^2_1$-mode and may also have some bearing on its excitation by turbulent motions in the overlying convection zone. Aside from the different sound speed above nuclear saturation density, differences in the Brunt-V\"ais\"al\"a frequency between the \texttt{CMF} and \texttt{SFHx} models also arise because the electron fraction evolves differently during the post-bounce phase. To illustrate this effect we show
the gradient of the spherically-averaged electron fraction.
Initially, all models show a slight negative electron fraction gradient in the PCS core inside $\mathord{\sim}0.7\,\mathrm{M}_\odot$ (inner blue region in
Figure~\ref{fig:dydr}), and then a small ``hump'' that is visible as a red and a blue stripe further outside in Figure~\ref{fig:dydr}. With time, neutrino diffusion erases the hump, and the two blue regions merge in the \texttt{CMF} models. This process takes considerably longer in the \texttt{SFHx} models, and in model \texttt{z85:SFHx} the hump is still clearly present at the end of the simulation. The evolution of the electron fraction hump in the code is affected by a combination of factors that determine the neutrino opacities in this region, i.e., density, temperature, and neutron, proton, and neutrino chemical potentials. Hence there is no straightforward explanation for the faster disappearance of the hump in the \texttt{CMF} models at this time.
\newpage
\section{Decomposition of Terms Contributing to the Brunt-V\"ais\"al\"a frequency}
To elucidate how differences in the entropy and electron fraction gradients and various thermodynamic derivatives affect the Brunt-V\"ais\"al\"a frequency at the inner boundary of the PCS convection zone, we rewrite the relativistic
Brunt-V\"ais\"al\"a frequency as
\begin{align}\label{eq:bv}
\omega_\text{BV}^2= \dv{\alpha}{r}\frac{\alpha}{\rho h \phi^4}\frac{1}{c_s^2}
\left[\left(\pdv{ P}{s}\right)_{\!\!\tilde{\rho},Y_e}\dv{s}{r}+\left(\pdv{P}{Y_e}\right)_{\!\tilde{\rho},s}\dv{Y_e}{r}\right].
\end{align}
Here, $\alpha$ is the lapse function, $\phi$ is the conformal factor, $\rho$ is the baryonic mass density, $\tilde{\rho}$ is the total mass-energy density, $P$ is the pressure, $h=(\tilde{\rho}+P/c^2)/\rho$ is the relativistic enthalpy, $s$ is the specific entropy per baryon, $Y_\mathrm{e}$ is the electron fraction, and $c_\mathrm{s}^2$ is the (adiabatic) sound speed\footnote{Note that the small effect of neutrino pressure and energy density is not included in Equation~(\ref{eq:bv}).}.
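Schematically, Equation~(\ref{eq:bv}) can be evaluated on a radial grid by finite differences; in the sketch below, all profiles and EoS derivatives are toy placeholders (not tabulated EoS data or simulation output), chosen only to show the structure of the computation.

```python
import numpy as np

# Sketch of Eq. (bv):
#   omega_BV^2 = (d alpha/dr) * alpha / (rho*h*phi^4) * (1/c_s^2)
#                * [ (dP/ds)_{rho,Ye} ds/dr + (dP/dYe)_{rho,s} dYe/dr ]
r = np.linspace(5.0, 40.0, 400)                  # radial grid (illustrative)

alpha = 0.6 + 0.01 * r                           # lapse (toy profile)
phi4 = (1.4 - 0.005 * r) ** 4                    # conformal factor^4 (toy)
rho_h = 1e14 * np.exp(-r / 10.0)                 # rho * h (toy)
cs2 = 0.1 * np.exp(-r / 30.0)                    # sound speed squared, c = 1
s = 5.0 + 0.1 * r                                # entropy per baryon (toy)
Ye = 0.30 - 0.003 * r                            # electron fraction (toy)
dPds = 1e32 * np.ones_like(r)                    # EoS derivatives (toy values)
dPdYe = -2e33 * np.ones_like(r)

def grad(f):
    return np.gradient(f, r)                     # finite-difference d/dr

Sigma = dPds * grad(s) + dPdYe * grad(Ye)        # the bracketed Sigma-term
omega_bv2 = grad(alpha) * alpha / (rho_h * phi4) / cs2 * Sigma

# negative omega_bv2 would flag convectively unstable regions
assert omega_bv2.shape == r.shape
```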
The individual terms in Equation~(\ref{eq:bv}) and their products are plotted in Figure~\ref{fig:z85} for the \texttt{z85} models and in Figure~\ref{fig:z35} for the \texttt{z35} models. Each figure shows the results for the \texttt{CMF} models (solid lines) and the \texttt{SFHx} models (dotted) at 4 different times.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\columnwidth]{plots/z35_0.pdf}
\caption{Individual terms in Equation~(\ref{eq:bv}) for \texttt{z35:CMF} (solid lines) and \texttt{z35:SFHx} (dotted) at four different post-bounce times. The left panel in the fourth row shows the term in square brackets of Equation~(\ref{eq:bv}), i.e.\ $\displaystyle{\Sigma \equiv \left(\pdv{ P}{s}\right)_{\!\!\tilde{\rho},Y_\mathrm{e}}\dv{s}{r}+\left(\pdv{P}{Y_\mathrm{e}}\right)_{\!\tilde{\rho},s}\dv{Y_\mathrm{e}}{r}}$.}
\label{fig:z35}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.4\columnwidth]{plots/z85_0.pdf}
\caption{Individual terms for \texttt{z85:CMF} (solid lines) and \texttt{z85:SFHx} (dotted) at four different post-bounce times. See Figure~\ref{fig:z35}.}
\label{fig:z85}
\end{figure}
The figures show that
$\displaystyle{\left(\pdv{ P}{s}\right)_{\!\!\tilde{\rho},Y_\mathrm{e}}}$ and
$\displaystyle{\dv{s}{r}}$ are quite different for the \texttt{CMF} and \texttt{SFHx} models, but the product
$\displaystyle{\left(\pdv{ P}{s}\right)_{\!\!\tilde{\rho},Y_e} \dv{s}{r}}$ is rather similar. The electron fraction gradient becomes steeper in the \texttt{CMF} models, which pushes the $\Sigma$-term in square brackets in Equation~(\ref{eq:bv}),
$$\Sigma=\left(\pdv{ P}{s}\right)_{\!\!\tilde{\rho},Y_e}\dv{s}{r}+\left(\pdv{P}{Y_e}\right)_{\!\tilde{\rho},s}\dv{Y_e}{r}$$
to slightly lower values, compared to \texttt{z85:SFHx} where the radial profile of $\left(\pdv{P}{Y_e}\right)_{\!\tilde{\rho},s}\dv{Y_e}{r}$ is flatter and largely positive. The effect is less pronounced for the more massive progenitor \texttt{z85:CMF}; the $\Sigma$-term here is dominated by a larger entropy gradient. For both progenitors, \texttt{z85} and \texttt{z35}, the sound speed in the region below the PCS convection zone is increasingly higher in the \texttt{CMF} models, which systematically pushes $\omega_\mathrm{BV}$ to lower values. In the case of model \texttt{z85}, this effect explains the difference in the Brunt-V\"ais\"al\"a frequency, as the $\Sigma$-term is similar for the
\texttt{SFHx} and \texttt{CMF} EoS.
\section{Radially resolved GW signals}
For completeness, we show plots analogous to Figure~2 in the main text. Figures~\ref{fig:rgwz85CMF}, \ref{fig:rgwz85SFHx}, and \ref{fig:rgwz35SFHx}
present the results for models \texttt{z85:CMF}, \texttt{z85:SFHx}, and \texttt{z35:SFHx}, respectively.
The figures show the radius- and frequency-dependent amplitude $\tilde{q}(r,f)$ of quadrupolar perturbations (first two panels from the left) for two different time intervals $\Delta t_1$ and $\Delta t_2$, which are indicated in the third panel by black and red dash-dotted vertical lines. The third panel also shows the quark fraction (background color; only present in Figure~\ref{fig:rgwz85CMF}) and the adiabatic index (contour lines). The fourth panel shows the squared Brunt-V\"ais\"al\"a frequency and the spherically averaged specific entropy per baryon.
\begin{figure}[H]
\centering \includegraphics[width=1\columnwidth]{plots/radial_prof_z85_cmf_gamma_cmf_NEW.pdf}
\caption{Quadrupolar perturbations, quark fraction, adiabatic index, Brunt-V\"ais\"al\"a frequency and spherically averaged entropy for model
\texttt{z85:CMF}; see detailed description in text.}
\label{fig:rgwz85CMF}
\end{figure}
\begin{figure}[H]
\centering \includegraphics[width=1\columnwidth]{plots/radial_prof_z85_sfhx.pdf}
\caption{Quadrupolar perturbations, adiabatic index, Brunt-V\"ais\"al\"a frequency and spherically averaged entropy for model \texttt{z85:SFHx}; see detailed description in text. Note that the \texttt{SFHx} EoS is purely hadronic.}
\label{fig:rgwz85SFHx}
\end{figure}
\begin{figure}[H]
\centering \includegraphics[width=1\columnwidth]{plots/radial_prof_z35_sfhx.pdf}
\caption{Same as Figure~\ref{fig:rgwz85SFHx}, but for model \texttt{z35:SFHx}, where no signal is observed.}
\label{fig:rgwz35SFHx}
\end{figure}
\end{document}
\section{Introduction}
The gravitational self-force (GSF) formalism deals with the two-body problem in general relativity by computing the
deviation from geodesic motion due to the gravitational field of the smaller object.
A recent work~\cite{Wardell:2021fyy} presented the first calculation of the waveforms obtained by
solving Einstein's equations in second-order gravitational self-force (2GSF) theory~\cite{Pound:2015tma, Pound:2017psq, Barack:2018yvs}.
This new result complements other recent achievements regarding the 2GSF calculation of the binding
energy of a particle around a Schwarzschild black hole~\cite{Pound:2019lzj} and the calculation
of the gravitational wave (GW) energy fluxes using the same approach~\cite{Warburton:2021kwk}.
Technically, 2GSF means expanding the metric up to second order in the
small mass ratio\footnote{GSF results are typically
obtained via expansions in $\epsilon$, but are often re-expressed as expansions in the symmetric mass ratio
$\nu = m_1 m_2/(m_1 + m_2)^2$.}
$\epsilon = m_2/m_1$ (with $m_2 \ll m_1$) and solving the Einstein equations order-by-order to obtain the metric perturbations while also solving for the motion of the black holes. This is often supplemented by an efficient method for handling the disparity in scales between the slow radiation-reaction timescale on which
the orbit gradually shrinks and a fast timescale connected to orbital motion. This can be done, for instance, by employing osculating geodesics and applying
near-identity transformations to remove the dependence on orbital phases from the equations of motion,
a scheme recently adopted to obtain the evolution of quasi-circular
and eccentric inspirals driven by the first-order self-force~\cite{VanDeMeent:2018cgn,Lynch:2021ogr}.
A different approach (also relying on near-identity averaging transformations) is the two-timescale approximation~\cite{Hinderer:2008dm, Miller:2020bft,Pound:2021qin}, which takes
explicit advantage of the fact that the binary evolution naturally involves two different timescales. Although 2GSF theory is designed for extreme
mass ratios, both Refs.~\cite{Warburton:2021kwk,Wardell:2021fyy} showed the consistency, to some extent,
between 2GSF results and highly accurate numerical relativity (NR) simulations for comparable-mass binaries.
Long-inspiral, highly accurate NR simulations, such as those obtained
using the SpEC code and made public via the Simulating eXtreme Spacetimes (SXS) catalog~\cite{SXS:catalog}, are currently limited to mass ratios\footnote{Actually, the $q>10$ simulations presented in Ref.~\cite{Yoo:2022erv} are not yet public,
but the $q=15$ simulation has been compared already to the EOB model we consider in this work~\cite{Nagar:2022icd},
and we also present a comparison to the GSF waveform in the following.} $q\lesssim 15$ (where $q \equiv m_1/m_2 \ge 1$).
Larger mass ratios are typically challenging
for NR methods, making NR simulations difficult to push into the natural domain of validity of the GSF approach.
However, the RIT NR group~\cite{Healy:2019jyf,Healy:2020vre,Healy:2022wdn} has recently started to explore
larger mass ratios via NR simulations~\cite{Rosato:2021jsq}, notably achieving the successful computation
of waveforms in the intermediate-mass-ratio (IMR) regime up to $q=128$~\cite{Lousto:2020tnb}.
The effective one body (EOB) approach~\cite{Buonanno:1998gg,Buonanno:2000ef,Damour:2000we,Damour:2001tu,Damour:2015isa}
to the general-relativistic two-body dynamics is a powerful analytical formalism that resums post-Newtonian (PN) results,
obtained and strictly valid in the low-velocity, weak-field regime, to make them robust and predictive also
in the strong-field, fast-velocity regime. The model is (i) additionally informed by NR simulations to improve its
behavior through merger and ringdown and (ii) similarly benchmarked to NR data to test its accuracy all over the
parameter space. Within the EOB approach, the two-body dynamics is a deformation of the dynamics of
a test-particle on a Schwarzschild (or Kerr) black hole.
In particular, the spin-aligned \texttt{TEOBResumS}{} is currently the waveform model that presents the highest level
of faithfulness\footnote{Another widely used model for quasi-circular binaries, though less NR faithful by approximately
an order of magnitude, is {\tt SEOBNRv4HM}~\cite{Bohe:2016gbl,Cotesta:2018fcv}.}
with the largest set of NR simulations available~\cite{Nagar:2020pcj, Riemenschneider:2021ppj, Albertini:2021tbt}.
Exact results in the test-mass limit have been broadly exploited in the development
of EOB models. Historically, the first highly accurate EOB waveform templates
were validated using Regge-Wheeler-Zerilli (RWZ) perturbation theory~\cite{Nagar:2006xv,Damour:2007xr, Damour:2008gu, Bernuzzi:2010xj},
and EOB dynamics in the small or extreme-mass-ratio limit were used in the computation
of test-mass waveforms by numerically solving the RWZ or Teukolsky equations~\cite{Nagar:2006xv, Bernuzzi:2010ty, Bernuzzi:2011aj,
Harms:2014dqa, Nagar:2014kha, Harms:2015ixa, Harms:2016ctx, Lukes-Gerakopoulos:2017vkj}.
The results from such numerical waveforms have since been especially useful in testing the resummation
choices of EOB functions and in checking some crucial elements of the EOB models~\cite{Bernuzzi:2012ku,
Nagar:2016ayt, Messina:2018ghh, Nagar:2019wrt, Chiaramello:2020ehz}.
However, all the above-mentioned test-mass studies are limited by the fact that the underlying metric is always the Schwarzschild
or the Kerr one. In effect, the motion of the particle is driven only by the time-averaged dissipative part of the
self-force (i.e., the fluxes), ignoring conservative contributions. In this sense, results coming from GSF theory
would be extremely useful to further tune EOB quantities in the small and extreme-mass-ratio regime.
In particular, the conservative part of the self-force allows the evaluation of several quantities that may
inform the EOB conservative sector, for instance the ISCO shift~\cite{Isoyama:2014mja} or Detweiler's
redshift variable. The latter has already been exploited to extract higher-order PN
information~\cite{Bini:2013rfa, Bini:2014nfa, Bini:2015bfb, Bini:2016cje}, and those
results have been already incorporated into EOB potentials~\cite{Barausse:2011dq, Antonelli:2019fmq,Akcay:2015pjz}.
The flexibility of the EOB approach thus makes it well-suited, in principle, to giving a faithful description of extreme-mass-ratio
inspirals (EMRIs)~\cite{Yunes:2009ef, Yunes:2010zj, Albanesi:2021rby},
modulo increasing the speed and the accuracy of current models in order to meet the needs of future
space-based detectors like LISA~\cite{LISA:2017pwj} and TianQin~\cite{TianQin:2015yph}.
In this paper we present a comprehensive analysis comparing the recently computed 2GSF waveforms\footnote{Although we can compute all waveform multipoles~\cite{Wardell:2021fyy}, we focus our analysis primarily on the dominant $\ell=m=2$ mode.} of~\cite{Wardell:2021fyy}
and EOB waveforms obtained with the state-of-the-art model \texttt{TEOBResumS}{}.
The analysis spans from comparable-mass binaries to the IMR regime.
In particular, we present explicit comparisons for $q = (7, 10, 15, 32, 64, 128)$. To benchmark these results,
we also revisit the 2GSF/NR phasing comparisons of Ref.~\cite{Wardell:2021fyy} when needed. To avoid possible systematics that
may arise when comparing waveforms in the time domain, we make crucial use of the gauge-invariant description
of the phasing provided by the $Q_{\omega}\equiv \omega^2/\dot{\omega}$ function (the inverse of the adiabaticity parameter),
where $\omega$ is the GW frequency. This kind of analysis was introduced in the context of comparing EOB and
NR waveforms during the late inspiral of binary neutron stars (BNS) systems, with the goal of understanding the
relevance of tidal effects during the last orbits~\cite{Baiotti:2010xh,Baiotti:2011am}. The precise calculation of this quantity
for NR simulations proved to be challenging for BNS~\cite{Bernuzzi:2012ci,Bernuzzi:2014owa}, while it was relatively
straightforward for binary black hole (BBH) simulations produced by the SXS collaboration~\cite{Damour:2012ky}. The $Q_{\omega}$ diagnostics were useful
for understanding precisely the impact of spin-spin effects in BNS~\cite{Dietrich:2018uni} as well as the origin of other effects
coming from systematics in waveform models~\cite{Messina:2019uby}. In this work, the use of a well-controlled $Q_{\omega}$ is
crucial in obtaining an improved quantitative understanding of the 2GSF/NR comparisons originally presented in Ref.~\cite{Wardell:2021fyy}.
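As a toy illustration of this diagnostic (using an assumed leading-order chirp, not an actual EOB/NR/GSF waveform), $Q_\omega$ can be computed by finite-differencing a frequency time series:

```python
import numpy as np

# Sketch: the gauge-invariant phasing diagnostic Q_omega = omega^2 / (d omega/dt),
# the inverse of the adiabaticity parameter.  The toy chirp below follows the
# leading-order quadrupole scaling omega(t) = omega0 * (1 - t/tc)^(-3/8).
t = np.linspace(0.0, 900.0, 20000)
tc, omega0 = 1000.0, 0.02
omega = omega0 * (1.0 - t / tc) ** (-3.0 / 8.0)   # GW frequency

omega_dot = np.gradient(omega, t)
Q_omega = omega ** 2 / omega_dot                  # inverse adiabaticity parameter

# Q_omega >> 1 early in the (adiabatic) inspiral, decreasing toward merger
assert Q_omega[100] > Q_omega[-100] > 0
```

For this toy chirp, the exact result is $Q_\omega=(8t_c/3)\,\omega_0(1-t/t_c)^{5/8}$, against which the finite-difference estimate can be checked.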
This paper is organized as follows. Section~\ref{sec:GSF} outlines in some detail the basic elements of the 2GSF time-domain
waveform model 1PAT1{} introduced in Ref.~\cite{Wardell:2021fyy}, along with an internal analysis of the model's errors and domain of validity. The structure of the EOB model \texttt{TEOBResumS}{} is
briefly reviewed in Sec.~\ref{sec:EOB}. In Sec.~\ref{sec:gsf_nr} we present a novel 2GSF/NR comparison that updates
the results of Ref.~\cite{Wardell:2021fyy}: the analysis is based on the gauge-invariant phasing description provided by $Q_{\omega}$
and uses EOB waveforms as a benchmark. In Sec.~\ref{sec:waveform} we provide a comprehensive 2GSF/NR/EOB waveform
comparison up to $q=128$. Finally, Sec.~\ref{sec:nu_dependence} digs deeper into the origin of the 2GSF/EOB differences,
clearly pointing to an (expected) lack of 1GSF information within the EOB model. Conclusions are collected in Sec.~\ref{sec:conclusions}.
The paper is then completed by a few Appendices. In Appendix~\ref{sec:Qomg_clean} we report technical details
about the procedure for removing low and high-frequency oscillations from the NR $Q_{\omega}$. Appendix~\ref{sec:exactQ0andQ1} derives an asymptotic expansion of $Q_\omega$ in the small-mass-ratio limit. In Appendix~\ref{sec:eobnrgsf_q}
we complement and update the 2GSF/NR analysis of Ref.~\cite{Wardell:2021fyy} for comparable-mass binaries with
mass ratios from $q=1$ to $q=6$. Finally, in Appendix~\ref{sec:flux} we perform a comprehensive EOB/GSF/NR analysis
of the energy fluxes, also complementing the findings of Ref.~\cite{Warburton:2021kwk}.
We use natural units with $c=G=1$. In terms of our conventions for the individual masses, we denote the total mass
and symmetric mass ratio as $M\equiv m_1+m_2$ and $\nu = m_1 m_2/M^2$.
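For reference, these conventions can be tabulated for the mass ratios compared below; a minimal helper (the specific $q$ values are those listed in the introduction):

```python
# Mass-ratio conventions used throughout: q = m1/m2 >= 1, small mass ratio
# eps = m2/m1 = 1/q, and symmetric mass ratio nu = m1*m2/M^2 = q/(1+q)^2.
def ratios(q):
    return 1.0 / q, q / (1.0 + q) ** 2

for q in (7, 10, 15, 32, 64, 128):
    eps, nu = ratios(q)
    print(f"q={q:4d}  eps={eps:.5f}  nu={nu:.5f}")
```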
\section{GSF dynamics and the 1PAT1 model}
\label{sec:GSF}
We compute 2GSF waveforms following Ref.~\cite{Wardell:2021fyy}. The approach is based on the multiscale (or two-timescale) expansion of the Einstein equations in Ref.~\cite{Miller:2020bft} (specifically Appendix A of that reference) with three additional approximations described below. To help explain the additional approximations, we first review the exact 1PA formalism in Sec.~\ref{sec:exact 1PA}. The additional approximations are then described in Sec.~\ref{sec:1PA approximations}. Section~\ref{sec:1PA accuracy} discusses a 1PA model's intrinsic level of error and domain of validity, and Sec.~\ref{sec:1PAT1 errors} discusses the expected error from the additional approximations.
\subsection{Exact 1PA waveforms}\label{sec:exact 1PA}
The multiscale expansion assumes the binary's metric, in the limit $\epsilon\to0$, only depends on time through its dependence on the binary's mechanical variables: the two black holes' trajectories, masses, spins, etc. All functions, including the metric, are treated as functions of spatial coordinates $x^i$ and of the mechanical phase space coordinates, and they are all expanded in powers of $\epsilon$ at fixed values of those coordinates~\cite{Pound:2021qin}.
Restricted to the case of quasicircular orbits, with a slowly spinning primary and nonspinning secondary, this corresponds to the following expansion (through order $\epsilon^2$)~\cite{Miller:2020bft}:
\begin{multline}
\label{eq:metric}
\textbf{g}_{\alpha\beta} = g_{\alpha\beta} + \epsilon h^{(1)}_{\alpha\beta}(\phi_p,J_A, x^i) + \epsilon^2 h^{(2)}_{\alpha\beta} (\phi_p,J_A, x^i).
\end{multline}
Here $g_{\alpha\beta}$ represents the spacetime of the primary as if it were in isolation, meaning a Schwarzschild metric with constant mass $m_1^0$ and vanishing spin $s^0_1=0$. The variables $x^i=(r,\theta,\phi)$ are the usual Schwarzschild spatial coordinates, and $(\phi_p,J_A)$ are the phase space coordinates. Concretely, $\phi_p$ is the orbital azimuthal angle of the secondary (with the subscript $p$ denoting it as the ``particle''), and $J_A=(\Omega,\delta m_1,\delta s_1)$ are the binary's slowly evolving parameters: the orbital frequency $\Omega \equiv d\phi_p/dt$, a correction $\delta m_1\equiv (m_1-m_1^0)/\epsilon$ to the primary's mass, and the primary's rescaled spin $\delta s_1\equiv s_1/\epsilon$. Because the mass and spin only change by an amount $\sim\epsilon$ over the inspiral time $\sim 1/\epsilon$, they are treated perturbatively rather than altering $g_{\alpha\beta}$, and the parameters $\delta m_1$ and $\delta s_1$ are scaled by $\epsilon$ to make them order unity. In this section only, we use $\epsilon=m_2/m_1^0$ and work in units with $m^0_1=1$.
Since $\phi_p$ is a periodic coordinate on phase space, the metric is assumed to be periodic in it, allowing us to use a discrete Fourier series
\begin{equation}\label{eq:metric Fourier}
h^{(n)}_{\alpha\beta} = \sum_{m=-\infty}^\infty h^{(n,m)}_{\alpha\beta}(J_A, x^i)e^{-im\phi_p}.
\end{equation}
This expansion divides the metric perturbation into slowly evolving amplitudes and rapidly oscillating phase factors. The amplitudes $h^{(n,m)}_{\alpha\beta}$, orbital frequency, and orbital radius evolve on the radiation-reaction time $t_{rr} \sim 1/(\epsilon\Omega)$, while $\phi_p$ evolves on the orbital timescale $t\sim 1/\Omega$.
In Eqs.~\eqref{eq:metric} and \eqref{eq:metric Fourier}, $\phi_p$ and $J_A$ are functions of a hyperboloidal time $s$ that is equal to Schwarzschild time $t$ at the secondary's worldline, advanced time $v$ at the large black hole's horizon, and retarded time $u$ at future null infinity. The binary's evolution, through order $\epsilon^2$, is then given by expansions of the form
\begin{align}
\frac{d\phi_p}{ds} &= \Omega,\label{phidot}\\
\frac{d \Omega}{ds} &= \epsilon\left[F_0^\Omega(\Omega) + \epsilon F_1^\Omega(J_A)\right],\label{eq:Omegadot-v1}\\
\frac{d \delta m_1}{ds} &= \epsilon\mathcal{F}^{(1)}_{\cal H}(\Omega), \qquad \frac{d\delta s_1}{ds} = \epsilon\,\Omega^{-1}\,\mathcal{F}^{(1)}_{\cal H}(\Omega),\label{s1dot}
\end{align}
where $\mathcal{F}^{(1)}_{\cal H}$ is the leading-order energy flux through the black hole's horizon (i.e., the flux due to $h^{(1)}_{\mu\nu}$). The orbital radius is given in terms of $J_A$ as $r_p = r_0(\Omega) + \epsilon r_1(J_A)$, where $r_0=m_1^0(m_1^0\Omega)^{-2/3}$ is the test-mass relationship. It follows from these equations that $dr_p/ds$ has an expansion of the form $dr_p/ds=\epsilon[F^r_0(\Omega)+\epsilon F^r_1(J_A)]$.
Within this framework, the $n^{\rm th}$ PA order includes all terms contributing up to $\epsilon^{n+1}$ to the evolution of the orbital frequency, consistently with the terminology introduced in Ref.~\cite{Hinderer:2008dm}. $F^\Omega_0$ is the adiabatic (0PA) dissipation-driven rate of change, determined by the first-order dissipative GSF or energy flux, and the 1PA term $F^\Omega_1$ is determined by the full (conservative and dissipative) first-order GSF and second-order dissipation.
Substituting the expansions~\eqref{eq:metric}--\eqref{s1dot} into Einstein's equations,
one finds Fourier-domain equations for the amplitudes $h^{(n,m)}_{\alpha\beta}$~\cite{Miller:2020bft},
which are solved in the Lorenz gauge, order by order in $\epsilon$ for fixed values of $J_A$. (Note that in this process we never set $dJ_A/ds=0$; the nonzero $dJ_A/ds$ is fully accounted for everywhere it appears.) The amplitudes are further decomposed
on a basis of tensor spherical harmonics to reduce the Einstein equations to radial ordinary differential equations for each $\ell m$ mode. At future null infinity, the $\ell m$ mode of the waveform is extracted by transforming from the Lorenz gauge (in which $h^{(2)}_{\alpha\beta}$ is singular at null infinity~\cite{Pound:2015wva}) to a Bondi-Sachs gauge (in which $h^{(2)}_{\alpha\beta}$ is smooth there). In the usual basis of $s=-2$ spin-weighted harmonics ${}_{-2}Y_{\ell m}$, the $\ell m$ mode of the resulting (dimensionless) waveform can be written as $H_{\ell m}= R_{\ell m}(J_A,\epsilon)e^{-im\phi_p}$, or
\begin{align}\label{eq:1PA waveform}
H_{\ell m}=\left[\epsilon R^{(1)}_{\ell m}(\Omega)+\epsilon^2 R^{(2)}_{\ell m}(J_A)\right]e^{-im\phi_p}.
\end{align}
In this waveform construction, one first computes the amplitudes $h^{(1,m)}_{\alpha\beta}$ for a set of $\Omega$ values; from $h^{(1,m)}_{\alpha\beta}$, one computes $F^\Omega_0$ and $r_1$; from $F^\Omega_0$, $r_1$, and $h^{(1,m)}_{\alpha\beta}$, one computes $h^{(2,m)}_{\alpha\beta}$; and from all of the above, one computes $F^\Omega_1$. These are all computed and stored as functions of $\Omega$ prior to solving for $\phi_p(s)$ and $\Omega(s)$. Using the stored amplitudes $R^{(n)}_{\ell m}$ and driving forces $F^\Omega_n$, one can then rapidly generate the waveform modes \eqref{eq:1PA waveform} by solving the evolution equations~\eqref{phidot}--\eqref{s1dot}.
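The offline/online split described above can be sketched in a few lines. The driving forces and amplitude below are stand-ins, not GSF data: $F_0$ is the Newtonian quadrupole rate (in units $m_1^0=1$), while the 1PA correction and the mode amplitude are hypothetical placeholders chosen only to illustrate the structure of the online stage.

```python
# Illustrative sketch of the online stage: "stored" driving forces and
# amplitudes as functions of Omega, then a cheap ODE integration of the
# system dphi/ds = Omega, dOmega/ds = eps*(F0 + eps*F1).
import math

eps = 1e-3                                    # mass ratio epsilon = m2/m1

F0 = lambda Om: (96.0 / 5.0) * Om ** (11.0 / 3.0)   # Newtonian quadrupole rate
F1 = lambda Om: -10.0 * Om ** (13.0 / 3.0)          # hypothetical 1PA correction
R1_22 = lambda Om: (2.0 * Om) ** (2.0 / 3.0)        # hypothetical amplitude

def rhs(y):
    phi, Om = y
    return (Om, eps * (F0(Om) + eps * F1(Om)))

def rk4_step(y, h):
    k1 = rhs(y)
    k2 = rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = rhs(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = rhs(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6.0 * (a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

y = (0.0, 0.02)                               # (phi_p, Omega) at s = 0
h, n_steps = 10.0, 5000                       # integrate to s = 5e4
for _ in range(n_steps):
    y = rk4_step(y, h)
phi_p, Omega = y

# Leading term of the (2,2) waveform mode, H = eps*R1*exp(-2i*phi_p):
H22 = eps * R1_22(Omega) * complex(math.cos(2 * phi_p), -math.sin(2 * phi_p))
```

The expensive field-equation solves all live in the "stored function" stage; the integration above is the only step repeated per binary configuration, which is what makes this construction fast.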
\subsection{The approximate 1PAT1 model}\label{sec:1PA approximations}
The 1PAT1 model in Ref.~\cite{Wardell:2021fyy} closely follows the exact 1PA waveform construction but with three simplifying approximations.
We start by expressing $F^\Omega_0$ and $F^\Omega_1$ in terms of energy fluxes rather than the local GSF. We define the binding energy as a function of $s$,
\begin{equation}\label{Ebind def}
E_{\rm bind}(s)\equiv M_{\rm Bondi}(s)-m_1(s)-m_2.
\end{equation}
The Bondi mass $M_{\rm Bondi}$ and primary mass $m_1$ can be directly calculated as functions of $J_A$ from the amplitudes $h^{(n,m)}_{\mu\nu}$ as described in Ref.~\cite{Pound:2019lzj} (see also \cite{Bonetto:2021exn}). Differentiating Eq.~\eqref{Ebind def} with respect to $s$, using the Bondi-Sachs mass-loss formula $dM_{\rm Bondi}/ds = -{\cal F}_\infty$ and the flux-balance law $dm_1/ds = {\cal F}_{\cal H}$~\cite{Ashtekar:2004cn}, and applying the chain rule $dE_{\rm bind}/ds = (\partial E_{\rm bind}/\partial J_A)dJ_A/ds$, we can rearrange for $d\Omega/ds$ to find
\begin{equation}
\frac{d\Omega}{ds} = -\frac{{\cal F}_\infty+{\cal F}_{\cal H}+\frac{\partial E_{\rm bind}}{\partial \delta m_1}\frac{d\delta m_1}{ds}+\frac{\partial E_{\rm bind}}{\partial \delta s_1}\frac{d\delta s_1}{ds}}{\partial E_{\rm bind}/\partial\Omega}.\label{Omegadot-exact}
\end{equation}
Here ${\cal F}_\infty$ is the energy flux to infinity, which is given in terms of the asymptotic amplitudes as
\begin{align}
\mathcal{F}_{\infty} &= \frac{1}{16\pi}\sum_{\ell m}\left|\frac{d}{ds}(R_{\ell m}e^{-im\phi_p})\right|^2 \nonumber\\
&= \frac{1}{16\pi}\sum_{\ell m}\left\{\epsilon^2 |m \Omega R^{(1)}_{\ell m}|^2+2\epsilon^3 {\rm Re}\left[m^2\Omega^2 R^{(2)}_{\ell m}R^{(1)*}_{\ell m}\right.\right.\nonumber\\
&\qquad\qquad\quad\left.\left. +im\Omega F^\Omega_0 R^{(1)*}_{\ell m}\partial_\Omega R^{(1)}_{\ell m}\right] + O(\epsilon^4) \right\}\label{flux}
\end{align}
(still in units $m_1^0=1$). This computation of $\mathcal{F}_{\infty}$ was carried out in Ref.~\cite{Warburton:2021kwk}. The other terms on the right-hand side of Eq.~\eqref{Omegadot-exact} can also be straightforwardly expanded, leading to an expression of the form~\eqref{eq:Omegadot-v1}.
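The $\epsilon$ expansion in the flux formula can be checked numerically. The sketch below uses synthetic complex amplitudes $R^{(1)}_{\ell m}$, $R^{(2)}_{\ell m}$ and a stand-in value of $F^\Omega_0$ (none of these are GSF data); it verifies that the difference between the exact $|d/ds(R e^{-im\phi_p})|^2$ and its $\epsilon^2$-plus-$\epsilon^3$ truncation scales as $\epsilon^4$, as the $O(\epsilon^4)$ error term requires.

```python
# Scaling check of the truncation error in the flux expansion: halving
# epsilon should shrink (exact - truncated) by a factor ~2^4 = 16.
m = 2
Om = 0.05
F0 = 1.3                                            # stand-in F^Omega_0(Om)
R1 = lambda Om: (1.0 + 0.3j) * Om ** (2.0 / 3.0)    # hypothetical amplitudes
R2 = lambda Om: (0.5 - 0.2j) * Om
dR1 = lambda Om: (2.0 / 3.0) * (1.0 + 0.3j) * Om ** (-1.0 / 3.0)
dR2 = lambda Om: (0.5 - 0.2j)

def exact_sq(eps):
    # d/ds acts as (dOmega/ds) d/dOmega - i*m*Omega on R*exp(-i*m*phi);
    # the overall phase factor drops out of the modulus.
    R = eps * R1(Om) + eps ** 2 * R2(Om)
    dR = eps * dR1(Om) + eps ** 2 * dR2(Om)
    Omdot = eps * F0                   # adiabatic rate, as in the expansion
    return abs(Omdot * dR - 1j * m * Om * R) ** 2

def truncated_sq(eps):
    term2 = (m * Om) ** 2 * abs(R1(Om)) ** 2
    term3 = 2.0 * ((m * Om) ** 2 * (R2(Om) * R1(Om).conjugate()).real
                   + (1j * m * Om * F0 * R1(Om).conjugate() * dR1(Om)).real)
    return eps ** 2 * term2 + eps ** 3 * term3

r = [exact_sq(e) - truncated_sq(e) for e in (1e-2, 5e-3)]
assert 12.0 < r[0] / r[1] < 20.0       # consistent with an O(eps^4) residual
```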
So far we have made no approximations. The flux-based evolution equation for $\Omega$ follows directly from exact laws of GR together with our multiscale expansion; the formulas for $F^\Omega_0$ and $F^\Omega_1$ in terms of fluxes must necessarily agree with the formulas in terms of the local GSF (though numerically verifying the equality of the two formulas for $F^\Omega_1$ will be a crucial check in the future).
We now apply our three approximations:
\begin{enumerate}[label=(\roman*)]
\item We neglect 1PA terms involving $dm_1/ds$ and $d\delta s_1/ds$ in Eq.~\eqref{Omegadot-exact}. Specifically, we use only the leading-order horizon flux, ${\cal F}_{\cal H}=\epsilon^2{\cal F}^0_{\cal H}(\Omega)$, and we discard $\frac{\partial E_{\rm bind}}{\partial \delta m_1}\frac{d\delta m_1}{ds}$ and $\frac{\partial E_{\rm bind}}{\partial \delta s_1}\frac{d\delta s_1}{ds}$. This is motivated by the facts that (i) the subleading horizon flux has not yet been computed, and (ii) the horizon flux is numerically small compared to the flux to infinity.
\item We neglect the evolution of the black hole, setting $\delta m_1=\delta s_1=0$ and ignoring the evolution equations~\eqref{s1dot}, such that $m^0_1=m_1$ and $s_1=0$. This is motivated by the change in the black hole parameters having negligible effect on the asymptotic fluxes in Ref.~\cite{Warburton:2021kwk}.
\item Rather than using Ref.~\cite{Pound:2019lzj}'s direct measurement of $E_{\rm bind}$ from the Bondi mass and black hole mass, we use the binding energy obtained from the first law of compact binary mechanics~\cite{LeTiec:2011dp, LeTiec:2011ab}. This is motivated by the facts that (i) the $E_{\rm bind}$ computed in Ref.~\cite{Pound:2019lzj} was calculated for a different choice of time function $s$ than the fluxes computed in Ref.~\cite{Warburton:2021kwk}, and (ii) the first-law binding energy was found to be numerically very close to Ref.~\cite{Pound:2019lzj}'s directly measured binding energy.
\end{enumerate}
In addition to applying these approximations, we also rewrite $m_1$ and $m_2$ as functions of the total mass $M$ and symmetric mass ratio $\nu$, and then re-expand all quantities in powers of $\nu$ at fixed dimensionless frequency $\hat\Omega\equiv M\Omega$, truncating the re-expansion at 1PA order. This enforces the system's symmetry under interchange of the two masses, and it substantially improves the accuracy of the small-mass-ratio expansion for non-extreme mass ratios. It is unrelated to the three approximations above; the re-expansions could equally well be done in the exact 1PA formulas. To facilitate the re-expansion, we restore factors of $m_1$ and make all dependence on the masses explicit [such that $R^{(n)}_{\ell m}(\Omega)$ becomes $R^{(n)}_{\ell m}(m_1\Omega)$ before re-expansion, for example].
After these steps, the full set of evolution equations~\eqref{phidot}--\eqref{s1dot} is replaced by the simplified set
\begin{align}
\frac{d \phi_p}{ds} &= \Omega ,\\
\frac{d \Omega}{ds} &= \frac{\nu}{M^2} \left[ F_0(x) + \nu F_1(x) \right] ,\label{eq:Omegadot}
\end{align}
where $x \equiv (M\Omega)^{2/3} = \hat{\Omega}^{2/3}$
and
\begin{align}
F_0(x) &= a(x) \mathcal{F}^{(1)}(x) = a(x) \left[ \mathcal{F}^{(1)}_{\infty}(x) + \mathcal{F}^{(1)}_{\cal{H}}(x) \right] , \\
F_1(x) &= a(x) \mathcal{F}^{(2)}_{\infty} (x) - a^2(x) \mathcal{F}^{(1)}(x) \partial_{\hat{\Omega}} \hat{E}_{\rm SF}.
\end{align}
Here all functions of $x$ are dimensionless functions of the dimensionless variable $x$. We have expanded the flux~\eqref{flux} as ${\cal F}_\infty = \nu^2\mathcal{F}^{(1)}_{\infty}(x)+\nu^3\mathcal{F}^{(2)}_{\infty}(x)+O(\nu^4)$, where the superscripts indicate if the quantities are computed from the first-order amplitudes $h^{(1, m)}_{\mu\nu}$ or from the second-order ones, $h^{(2, m)}_{\mu\nu}$. We have similarly expanded the binding energy as $E_{\rm bind}=\nu M[\hat E_0(x)+\nu\hat E_{\rm SF}(x)+O(\nu^2)]$ and defined
\begin{align}
a(x) &\equiv - \left( \frac{\partial \hat{E}_0}{\partial \hat{\Omega}} \right)^{-1}
= \frac{3 x^{1/2} (1 - 3x)^{3/2}}{1-6x}.\label{a(x)}
\end{align}
The leading-order specific binding energy $\hat{E}_0(x)=\frac{1-2x}{\sqrt{1-3x}}-1$ is identical to the circular geodesic orbital energy of a test particle on a Schwarzschild background of mass $M$ (dependence on the nonzero $\dot r$ enters into the binding energy at subleading orders in $\nu$). We have changed notation from $F^\Omega_n$ to $F_n$ to distinguish between coefficients of $\epsilon$ and coefficients of $\nu$, but we note $F_0(x)=F^\Omega_0(\hat\Omega(x))$.
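The closed form in Eq.~\eqref{a(x)} can be spot-checked against a finite-difference derivative of $\hat E_0$ with respect to $\hat\Omega=x^{3/2}$; the sketch below does this at a few sample frequencies (the values are arbitrary test points, not data from the paper).

```python
# Spot check of a(x) = -(dE0/dOmegahat)^{-1} = 3 x^(1/2) (1-3x)^(3/2)/(1-6x)
# with E0(x) = (1-2x)/sqrt(1-3x) - 1 and x = Omegahat^(2/3).
import math

def E0(Om):
    x = Om ** (2.0 / 3.0)
    return (1.0 - 2.0 * x) / math.sqrt(1.0 - 3.0 * x) - 1.0

def a_closed(x):
    return 3.0 * math.sqrt(x) * (1.0 - 3.0 * x) ** 1.5 / (1.0 - 6.0 * x)

for x in (0.05, 0.10, 0.15):
    Om, h = x ** 1.5, 1e-6
    dE_dOm = (E0(Om + h) - E0(Om - h)) / (2.0 * h)   # central difference
    assert abs(-1.0 / dE_dOm - a_closed(x)) < 1e-6
```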
In Ref.~\cite{Wardell:2021fyy}, two additional 1PA waveform models were presented: a second time-domain model, 1PAT2, and a frequency-domain model, 1PAF1. They used the same three approximations but alternative expansions. Here we restrict our attention to 1PAT1 as the more accurate of the two time-domain models.
Before moving to the next section, to fix conventions we write the final strain waveform as
\be
h_+ - i h_\times = \dfrac{M}{D_L}\sum_\ell \sum_{m=-\ell}^{\ell}h_{\ell m}{}_{-2}Y_{\ell m}(\theta,\phi),
\ee
where $D_L$ indicates the luminosity distance.
As an expansion in powers of $\nu$ at fixed $(\phi_p,x)$, the modes read
\begin{equation}\label{1PAT1 hlm - nu}
h_{\ell m} = \left[\nu R^{(1)}_{{\ell m}}+\nu^2\left(R^{(2)}_{\ell m}+R^{(1)}_{\ell m}-\hat\Omega \partial_{\hat\Omega}R^{(1)}_{\ell m}\right)\right]e^{-im\phi_p},
\end{equation}
where $R^{(n)}_{\ell m}=R^{(n)}_{\ell m}(\hat\Omega)$ are the amplitudes in Eq.~\eqref{eq:1PA waveform}.\footnote{To derive this, note that $H_{\ell m}$ in Eq.~\eqref{eq:1PA waveform} is defined with the luminosity distance in units of $m_1$, as the $\ell m$ mode of a component of $\lim_{r\to\infty}\frac{r}{m_1} (\epsilon h^{(1)}_{\mu\nu} +\epsilon^2 h^{(2)}_{\mu\nu})$, while $h_{\ell m}$ is instead defined from the analogous limit $\lim_{r\to\infty}\frac{r}{M}(\epsilon h^{(1)}_{\mu\nu} +\epsilon^2 h^{(2)}_{\mu\nu})$. This implies $h_{\ell m} = \frac{m_1}{M}H_{\ell m}$.} In practice, we work with
the Regge-Wheeler-Zerilli normalization convention and denote the waveform by $\Psi_{\ell m}\equiv h_{\ell m}/\sqrt{(\ell+2)(\ell+1)\ell(\ell-1)}$. This waveform is separated into amplitude and phase with the convention
\be
\label{eq:RWZnorm}
\Psi_{\ell m} = A_{\ell m} e^{-i \phi_{\ell m}},
\ee
and we define the frequency as $\omega_{\ell m} \equiv \dot{\phi}_{\ell m}$. In practice we will always state values of $\omega_{\ell m}$ in units of $1/M$, but for clarity we will sometimes distinguish between the dimensionful quantity $\omega_{\ell m}$ and the dimensionless quantity $M\omega_{\ell m}$.
Although we can compute all waveform multipoles~\cite{Wardell:2021fyy}, in this paper we restrict our analysis to $\ell=m=2$. In our computation of the fluxes, we include modes up to $\ell=30$ in ${\cal F}^{(1)}$ and up to $\ell=5$ in ${\cal F}^{(2)}_\infty$. We use a large-$\ell$ fit to approximate the ($\lesssim 1\%$) contribution of higher-$\ell$ modes to ${\cal F}^{(2)}_\infty$. Specifically, since the flux modes fall off exponentially with $\ell$ \cite{Barack:2007tm}, we consider a model of the form $\alpha e^{\beta \ell}$ and determine the constants $\alpha$ and $\beta$ by fitting this model to the $\ell=\{3,4,5\}$ modes of ${\cal F}^{(2)}_\infty$. We then verify the robustness of our fit by computing modes $\ell=\{6, \ldots, 10\}$ in a couple of representative test cases and comparing against the model. The net result of using this fit is that ${\cal F}^{(2)}_\infty$ is at least an order of magnitude more accurate than it would be without the fit.
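The tail estimate described above can be sketched as follows. The flux-mode values are synthetic placeholders (an exact exponential, not the actual ${\cal F}^{(2)}_\infty$ data), chosen so the fit and the geometric tail sum can be verified exactly.

```python
# Sketch of the large-ell tail estimate: fit alpha*exp(beta*ell) to the
# ell = 3,4,5 modes and sum the extrapolated tail from ell = 6 upward.
import numpy as np

ells = np.array([3, 4, 5])
F_modes = 1.0e-3 * np.exp(-1.1 * ells)   # synthetic placeholder modes

# A linear fit of log(F) vs ell gives beta (slope) and log(alpha) (intercept).
beta, log_alpha = np.polyfit(ells, np.log(F_modes), 1)
alpha = np.exp(log_alpha)

# Geometric tail sum over ell >= 6 (converges since beta < 0).
ratio = np.exp(beta)
tail = alpha * ratio ** 6 / (1.0 - ratio)
```

In practice the fit coefficients would of course be determined from the computed flux modes at each frequency, and the tail added to the truncated mode sum.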
\subsection{Intrinsic error and domain of validity}\label{sec:1PA accuracy}
We first assess the domain of validity of an exact 1PA model before discussing the uncertainty that arises from our three additional approximations.
By construction, a complete 1PA model, expressed in terms of $(\hat\Omega,\nu)$, has errors $O(\nu^3)$ in the waveform amplitude and $O(\nu)$ in the waveform phase. This error estimate follows immediately from the structure of the expansions~\eqref{eq:metric}--\eqref{s1dot} (re-expanded in powers of $\nu$). It applies both pointwise at each fixed frequency and uniformly on any fixed interval $[\hat\Omega_i,\hat\Omega_f]$ with $0<\hat\Omega_i<\hat\Omega_f<\hat\Omega_{\rm LSO}$. Here $\hat\Omega_{\rm LSO}= 6^{-3/2}$ is the Schwarzschild geodesic frequency of the last stable orbit.
However, a 1PA model is {\it not} uniformly accurate over the whole interval $(0,\hat\Omega_{\rm LSO})$. Near $\hat\Omega_{\rm LSO}$, $d\Omega/dt$ grows large due to the divergent factor in Eq.~\eqref{a(x)}, and the particle transitions into a plunge trajectory; near $\hat\Omega=0$, missing small-$\hat\Omega$ terms will cause large cumulative phase shifts. This lack of uniformity can have a significant impact at finite $\nu$, particularly due to the transition to plunge.
We determine the domain of validity of a complete 1PA model by excluding the boundary regions where missing PN or transition-to-plunge effects dominate over 1PA effects. More precisely, we define the domain of validity $(\hat\Omega^*_{i},\hat\Omega^*_{f})$ as the interval in which (i) the error is small compared to 1PA terms, and (ii) all omitted terms in the phase vanish in the limit $\nu\to0$. Note that these two conditions are distinct because condition (i) on its own could allow a large error in the phase as long as that error remained small compared to the 1PA contribution. Also note that, importantly, $\hat\Omega^*_i$ and $\hat\Omega^*_f$ will depend on $\nu$.
To find $\hat\Omega^*_{i}$ and $\hat\Omega^*_{f}$, we write $\phi_p$ as a function of $\hat\Omega$, $\phi_p = \int\frac{\hat\Omega}{d\hat\Omega/d\hat s}d\hat\Omega$, with $\hat s\equiv s/M$. The integrand can be expanded to 2PA order as
\begin{align}
\phi'&\equiv\frac{\hat\Omega}{d\hat\Omega/d\hat s}\nonumber\\
&= \frac{\hat\Omega}{F_0 \nu}-\frac{\hat\Omega F_1}{(F_0)^2}+\frac{\left[(F_1)^2-F_0 F_2\right]\hat\Omega \nu}{(F_0)^3}+O(\nu^2)\nonumber\\
&\equiv \frac{1}{\nu}\phi'_0(\hat\Omega)+\phi'_1(\hat\Omega)+\nu\phi'_2(\hat\Omega)+O(\nu^2),\label{phiprime}
\end{align}
and the associated phase as
\begin{equation}\label{phi expansion}
\phi_p = \frac{1}{\nu}\phi^0_p(\hat\Omega) + \phi_p^1(\hat\Omega) + \nu\phi_p^2(\hat\Omega)+O(\nu^2),
\end{equation}
where $\phi^n_p=\int \phi'_n d\hat\Omega$ and we have started from an expansion of the form $d\hat\Omega/d\hat s = \nu\sum_{n\geq0}\nu^n F_n(x)$ (but assumed none of the three additional approximations). In our 1PA approximation, the error should be dominated by the 2PA term $\nu\phi'_2$ in $\phi'$ and associated $\nu\phi_p^2$ term in $\phi_p$. We are therefore interested in the size of $\phi'_2$ near $\hat\Omega=0$ and $\hat\Omega_{\rm LSO}$.
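The expansion~\eqref{phiprime} is a straightforward geometric-series re-expansion, which can be verified symbolically. In the sketch below the driving forces $F_0$, $F_1$, $F_2$ are treated as constants, since their $\hat\Omega$ dependence plays no role in an expansion in $\nu$ at fixed $\hat\Omega$.

```python
# Symbolic check (sympy) of the expansion of phi' = Omegahat/(dOmegahat/dshat)
# through 2PA order.
import sympy as sp

nu, Om, F0, F1, F2 = sp.symbols('nu Omega F0 F1 F2', positive=True)

phi_prime = Om / (nu * (F0 + nu * F1 + nu ** 2 * F2))

# Expand the regular quantity nu*phi' and divide, to avoid the 1/nu pole.
reg = sp.series(nu * phi_prime, nu, 0, 3).removeO()
expansion = sp.expand(reg / nu)

expected = (Om / (nu * F0) - Om * F1 / F0 ** 2
            + (F1 ** 2 - F0 * F2) * Om * nu / F0 ** 3)
assert sp.simplify(expansion - expected) == 0
```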
Near $\hat\Omega=0$, we use a PN expansion to estimate the behaviour of the 1PA and 2PA terms.
It is straightforward to derive $\phi'$, via the balance law, from Eqs.~(232) and (313) of Ref.~\cite{Blanchet:2013haa}.
Explicitly,
\begin{multline}\label{dphidOmega - small omega}
\phi' = \frac{5}{96 \nu x^4}\left\{\nu^0\left[1+O(x)\right] +\nu\left[\dfrac{11}{4}x+O(x^2)\right]\right.\\
\left.+\nu^2\left[\dfrac{617}{144} x^2+O(x^{5/2})\right]+O(\nu^3)\right\}.
\end{multline}
The 2PA term begins at 2PN order, behaving as $\nu\phi'_2 \sim \nu x^{-2}=\nu \hat\Omega^{-4/3}$, while the 1PA term begins at 1PN order, behaving as $\phi'_1\sim x^{-3}=\hat\Omega^{-2}$. We can see that the conditions $\hat\Omega\ll1$ and $\nu\ll1$ automatically enforce $\nu\phi'_2\ll \phi'_1$. However, we {\it cannot} take $\hat\Omega^*_i$ to be arbitrarily small: the phase error behaves as $\int^{\hat\Omega}_{\hat\Omega^*_i}\nu\phi'_2d\hat\Omega\sim \nu\int^{\hat\Omega}_{\hat\Omega^*_i} \hat\Omega^{-4/3} d\hat\Omega$, implying that it diverges in the limit $\hat\Omega^*_i\to 0$. Our requirement that the phase error vanishes when $\nu\to0$ implies $\nu(\hat\Omega^*_i)^{-1/3}\xrightarrow{\nu\to0}0$, or
\begin{align}\label{Omega_i}
\hat\Omega^*_i\sim \nu^{\delta_i} \quad \text{with } 0<\delta_i<3.
\end{align}
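The scaling argument behind Eq.~\eqref{Omega_i} can be illustrated numerically: with the closed-form error integral $\nu\int_{\hat\Omega^*_i}^{\hat\Omega}\hat\Omega'^{-4/3}d\hat\Omega' = 3\nu[(\hat\Omega^*_i)^{-1/3}-\hat\Omega^{-1/3}]$ and $\hat\Omega^*_i = c\,\nu^{\delta_i}$, the accumulated 2PA phase error scales as $\nu^{1-\delta_i/3}$, so it vanishes as $\nu\to0$ only for $\delta_i<3$. The constants below are arbitrary illustrative choices.

```python
# Numerical illustration: the 2PA phase error shrinks with nu for
# delta_i < 3 and grows for delta_i > 3.
def phase_error(nu, delta, Om_f=0.05, c=1.0):
    Om_i = c * nu ** delta
    # closed form of nu * int_{Om_i}^{Om_f} Om^(-4/3) dOm
    return 3.0 * nu * (Om_i ** (-1.0 / 3.0) - Om_f ** (-1.0 / 3.0))

for delta, vanishes in ((2.0, True), (3.5, False)):
    errs = [phase_error(nu, delta) for nu in (1e-3, 1e-5, 1e-7)]
    shrinking = errs[0] > errs[1] > errs[2]
    assert shrinking == vanishes
```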
\begin{table*}[t]
\begin{center}
\begin{ruledtabular}
\begin{tabular}{c c c c c c}
& $F^{\Delta\Omega}_{0}$ & $F^{\Delta\Omega}_{1}$ & $F^{\Delta\Omega}_{2}$ & $F^{\Delta\Omega}_{3}$ & $F^{\Delta\Omega}_{4}$ \\
\hline
\hline
$F_0$ & $(\Omega-\Omega_{\rm LSO})^{-1}$ & 0 & $(\Omega-\Omega_{\rm LSO})^{0}$ & 0 & $(\Omega-\Omega_{\rm LSO})$\\
$F_1$ & 0 & $0\times (\Omega-\Omega_{\rm LSO})^{-3}$ & 0 & $(\Omega-\Omega_{\rm LSO})^{-2}$ & 0\\
$F_2$ & $(\Omega-\Omega_{\rm LSO})^{-6}$ & 0 & $(\Omega-\Omega_{\rm LSO})^{-5}$ & 0 & $(\Omega-\Omega_{\rm LSO})^{-4}$\\
$F_3$ & 0 & $0\times (\Omega-\Omega_{\rm LSO})^{-8}$ & 0 & $(\Omega-\Omega_{\rm LSO})^{-7}$ & 0
\end{tabular}
\end{ruledtabular}
\end{center}
\caption{\label{tab:near-LSO}Near-LSO behavior of the first few $n$PA driving forces $F_{n}$. In the near-LSO limit, each $F_n$ is a sum of the terms in its row. In the $\nu\ll(\hat\Omega-\hat\Omega_{\rm LSO})$ limit, each $F^{\Delta\Omega}_n$ is a sum of the terms in its column. $F^{\Delta\Omega}_1$ identically vanishes, but we keep the terms in its column to illustrate the generic structure of the expansions.}
\end{table*}
Near $\hat\Omega_{\rm LSO}$, we carry out a similar analysis. The dynamics during the transition to plunge is well known and has recently been developed with a systematic asymptotic expansion~\cite{Compere:2021zfj}. We consider a variant of that expansion that allows us to directly examine the dependence on $\hat\Omega$. In a region of width $|\hat\Omega-\hat\Omega_{\rm LSO}|\sim\nu^{2/5}$, the evolution timescale changes from the long radiation-reaction time $\sim 1/(\nu\Omega)$ of the inspiral to the much shorter transition time $\sim 1/(\nu^{1/5}\Omega)$~\cite{Buonanno:2000ef,Ori:2000zn,Compere:2021zfj}. We can therefore change the frequency variable to $\Delta\hat\Omega\equiv \nu^{-2/5}(\hat\Omega-\hat\Omega_{\rm LSO})$, which is $O(\nu^0)$ in the transition region, and adopt an expansion
\begin{equation}\label{transition expansion}
\frac{d\Delta\hat\Omega}{d\hat s} = \nu^{1/5}\sum_{n\geq0}\nu^{n/5} F_n^{\Delta\Omega}(\Delta\hat\Omega) ,
\end{equation}
along with, e.g.,
\begin{equation}
r_p=6M+\nu^{2/5}\sum_{n\geq0}\nu^{n/5}R_n(\Delta\hat\Omega).
\end{equation}
We will only require a small amount of information from this expansion, leaving a complete development to a separate paper. Specifically, we will appeal to the equation governing $F_0^{\Delta\Omega}$:
\begin{equation}\label{FDOmega0}
\left[9 \sqrt{6}\frac{d}{d\Delta\hat\Omega}\left(F^{\Delta\Omega}_0 \frac{dF^{\Delta\Omega}_0}{d\Delta\hat\Omega}\right)-\Delta\hat\Omega\right]\! F^{\Delta\Omega}_0 = - \frac{f^t_{(1)}}{48},
\end{equation}
where $f^\mu_{(1)}$ is the self-force due to $h^{(1)}_{\mu\nu}$ evaluated at the LSO. This equation is straightforwardly found by substituting the above expansions into the self-forced equation of motion $\frac{D^2x^\mu_p}{d\tau^2}=\nu f^\mu_{(1)}$, with $x^\mu_p=\{t,r_p(\Delta\Omega,\nu^{1/5}),\pi/2,\phi_p\}$.
To extract the relevant information from the expansion~\eqref{transition expansion}, we note that it must agree with the inspiral expansion~\eqref{eq:Omegadot-v1} in the following sense: If we re-express Eq.~\eqref{transition expansion} in terms of $\nu$ and $\hat\Omega$ and re-expand it for small $\nu$ at fixed $\hat\Omega$, and if we expand Eq.~\eqref{eq:Omegadot-v1} near the LSO, then in both cases we arrive at a double expansion for small $\nu$ and small $(\hat\Omega-\hat\Omega_{\rm LSO})$. Since they are both expansions of the same function, these two double expansions must agree term by term.
The re-expansion of~\eqref{transition expansion} for small $\nu$ at fixed $\hat\Omega$, written as an expansion of $d\hat\Omega/d\hat s=\nu^{2/5}d\Delta\hat\Omega/d\hat s$, has the structure
\begin{align}
\frac{d\hat\Omega}{d\hat s} &=\nu^{3/5}\sum_{n\geq0}\nu^{n/5}F_n^{\Delta\Omega}(\Delta\hat\Omega)\nonumber\\
&=\nu^{3/5}\sum_{n\geq0}\nu^{n/5}\sum_{k}\frac{\nu^{2k/5}}{(\hat\Omega-\hat\Omega_{\rm LSO})^k}F^{\Delta\Omega}_{n,k},
\end{align}
where the coefficients $F^{\Delta\Omega}_{n,k}$ are constants. On the other hand, the re-expansion of $d\hat\Omega/d\hat s = \nu\sum_{n_{\rm PA}\geq0}\nu^{n_{\rm PA}} F_{n_{\rm PA}}(x)$ for small $(\hat\Omega-\hat\Omega_{\rm LSO})$ has the structure
\begin{equation}
d\hat\Omega/d\hat s = \nu\sum_{n_{\rm PA}\geq0}\nu^{n_{\rm PA}}\sum_{k}\frac{F_{n_{\rm PA},k}}{(\hat \Omega-\hat\Omega_{\rm LSO})^k}.
\end{equation}
Comparing the powers of $\nu$ in the two double expansions, we read off the relationship $3+n+2k = 5(n_{\rm PA}+1)$, or $n+2k = 5n_{\rm PA}+2$. This implies that for an even $n$PA order, all terms near the LSO must match terms in $F^{\Delta\Omega}_n$ with $n$ even; and for an odd $n$PA order, they must match terms with $n$ odd. We can also rearrange the relationship to obtain $k = 1+\frac{1}{2}(5n_{\rm PA}-n)$, which tells us the power of $(\hat\Omega-\hat\Omega_{\rm LSO})$ that can be identified with a particular $n$PA order and a given order in the transition expansion~\eqref{transition expansion}. This structure is summarized in Table~\ref{tab:near-LSO}. In the table, we have highlighted that $F^{\Delta\Omega}_1=0$. We can establish that the leading term in $F^{\Delta\Omega}_1$, $\sim (\Omega-\Omega_{\rm LSO})^{-3}$, vanishes by directly comparing to our numerical results for $F_1$; the complete analysis to be presented elsewhere shows $F^{\Delta\Omega}_1$ identically vanishes.
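The bookkeeping in Table~\ref{tab:near-LSO} can be checked mechanically: for each entry of the table, the relation $n+2k=5n_{\rm PA}+2$ (equivalently $k=1+\tfrac{1}{2}(5n_{\rm PA}-n)$) must reproduce the tabulated power $-k$ of $(\hat\Omega-\hat\Omega_{\rm LSO})$, and $n$ must share the parity of $n_{\rm PA}$.

```python
# Consistency check of the matching condition against Table (near-LSO).
table = {  # n_PA -> {n: exponent of (Omega - Omega_LSO)}, read off the table
    0: {0: -1, 2: 0, 4: 1},
    1: {1: -3, 3: -2},
    2: {0: -6, 2: -5, 4: -4},
    3: {1: -8, 3: -7},
}

for n_PA, row in table.items():
    for n, exponent in row.items():
        k = 1 + (5 * n_PA - n) / 2              # rearranged matching condition
        assert k == -exponent                   # table entry is (..)^(-k)
        assert n + 2 * k == 5 * n_PA + 2        # original relation
        assert (n % 2) == (n_PA % 2)            # parity rule
```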
It is clear from the expansion~\eqref{phiprime} and the scalings in Table~\ref{tab:near-LSO} that near the LSO, the condition $\nu\phi'_2\ll \phi'_1$ is equivalent to $\nu^3 F_2\ll \nu^2 F_1$. Substituting the near-LSO behavior, this becomes $\nu (\hat\Omega - \hat\Omega_{\rm LSO})^{-6}F_{2,6}\ll (\hat\Omega - \hat\Omega_{\rm LSO})^{-2}F_{1,2}$, or
\begin{equation}\label{near-LSO Omega}
|\hat\Omega - \hat\Omega_{\rm LSO}|\gg \left(\frac{F_{2,6}\nu}{F_{1,2}}\right)^{1/4}.
\end{equation}
Unlike in the PN limit, this constraint is stronger than the condition that the error in $\phi_p$ vanishes for $\nu\to0$: for $\int^{\hat\Omega^*_f}\nu\phi'_2 d\hat\Omega\sim \nu(\hat\Omega^*_f-\hat\Omega_{\rm LSO})^{-3}$ to vanish in the limit, we require $|\hat\Omega^*_f- \hat\Omega_{\rm LSO}|\gg \nu^{1/3}$, which is automatically satisfied if Eq.~\eqref{near-LSO Omega} is satisfied. Therefore, the upper limit on the frequency should satisfy
\begin{align}\label{Omega_f}
|\hat\Omega^*_f - \hat\Omega_{\rm LSO}|\sim \nu^{\delta_f} \quad \text{with } 0<\delta_f<1/4.
\end{align}
Combining the above results, we conclude that a 1PA approximation is uniformly accurate, with $o(\nu^0)$ phase errors, on a domain $(c_i\nu^{\delta_i},\hat\Omega_{\rm LSO}-c_f\nu^{\delta_f})$, where $0<\delta_i<3$ and $0<\delta_f<1/4$ and $c_i$ and $c_f$ are constants. Strictly speaking, this is a statement about scaling rather than a statement about absolute error. It says that if we can determine that the phase error is acceptably small (through comparison with NR, for example) for one mass ratio on a specific frequency interval, then for smaller mass ratios the error will remain acceptable (and tend toward zero) not just on that interval, but on a larger interval that tends toward $(0,\hat\Omega_{\rm LSO})$ at the rates $\nu^{\delta_i}$ and $\nu^{\delta_f}$. However, the main qualitative takeaway is that because the exponent $\delta_f$ is at most $1/4$, the upper limit tends toward $\hat\Omega_{\rm LSO}$ extremely slowly: for an equal-mass system, the factor $\nu^{\delta_f}$ is 0.7 or larger; decreasing the mass ratio to $\nu=1/100$ only reduces this factor to 0.3. In other words, the effects of the transition to plunge appear to be significant in a far larger frequency interval than one might expect.
For our EOB-GSF-NR comparisons, we will need to choose a more definite frequency cutoff prior to the LSO. We will consider two options: (i) the critical frequency $\Omega_{\rm critical}$ at which the evolution stops (i.e., $d\Omega/ds=0$) in the 1PAT1 model, and (ii) a frequency $\Omega_{\rm break}$ at which the two-timescale approximation has broken down. The critical frequency exists because $F_0$ and $F_1$ have opposite sign, such that they cancel when $|F_1|=|\nu^{-1}F_0|$. This always occurs prior to $\Omega_{\rm LSO}$ but after $\Omega_{\rm break}$. For the latter, we will say the two-timescale expansion has broken down when the dominant phase error $\nu\phi'_2$ becomes equal in magnitude to the 1PA term $\phi'_1$. We can estimate this frequency following the analysis that led to Eq.~\eqref{near-LSO Omega}, which implies
\begin{equation}\label{Omega break unevaluated}
\hat\Omega_{\rm break} = \hat\Omega_{\rm LSO} - \left(\frac{F_{2,6}\nu}{F_{1,2}}\right)^{1/4}.
\end{equation}
Since $F_{2,6}=F^{\Delta\Omega}_{0,6}$, we can find this coefficient from the leading-order transition dynamics. Substituting $F^{\Delta\Omega}_0=\frac{F^{\Delta\Omega}_{0,1}}{\Delta\hat\Omega}+\frac{F^{\Delta\Omega}_{0,6}}{\Delta\hat\Omega^6}+O(\Delta\hat\Omega^{-11})$ into Eq.~\eqref{FDOmega0} and solving for the coefficients, one quickly finds $F^{\Delta\Omega}_{0,6}=\sqrt{\frac{3}{2}}\frac{M^3(f^t_{(1)})^3}{2048}$, which evaluates to $F^{\Delta\Omega}_{0,6}\approx -1.7\times10^{-12}$. We find $F_{1,2}$ by observing that in Eq.~\eqref{eq:Omegadot}, the contribution to $F_1$ from the ${\cal F}^{(1)}$ term is more than an order of magnitude larger than the contribution from the ${\cal F}^{(2)}_\infty$ term near the LSO. Since ${\cal F}^{(1)}$ is finite at the LSO, this allows us to easily read off the coefficient of $\frac{1}{(\hat\Omega-\hat\Omega_{\rm LSO})^2}$, which we find to be $F_{1,2}\approx -3.5\times 10^{-6}$. Combining these results in Eq.~\eqref{Omega break unevaluated} gives the breakdown frequency
\begin{equation}\label{Omega break}
\hat\Omega_{\rm break} \approx \hat\Omega_{\rm LSO} - 0.026\nu^{1/4}.
\end{equation}
The corresponding waveform frequencies $\omega_{\rm critical}\approx 2\hat\Omega_{\rm critical}$ and $\omega_{\rm break}\approx 2\hat\Omega_{\rm break}$
are displayed in the second and the third column of Table~\ref{tab:Dphi}.
For the time-domain phasing (and the corresponding $Q_{\omega}$ analysis
developed in the following) we will consider the GSF evolution only up to the breakdown frequency. We stress that these frequencies represent aggressive choices of cutoff: both $\hat\Omega_{\rm break}$ and (especially) $\hat\Omega_{\rm critical}$ fall outside the interval allowed by the condition~\eqref{near-LSO Omega}. More conservative choices of cutoff might be preferable, but we opt for what appears to us to be the cleanest option. Similarly, while we caution that the breakdown frequency~\eqref{Omega break} is an asymptotic approximation in the small-$\nu$ limit, not a statement based on absolute error at specific mass ratios, we find it to be a convenient choice that also correctly predicts the frequency at which divergent terms in ${\cal F}^{(2)}$ begin to qualitatively change the total flux's behaviour; see the plots in Appendix~\ref{sec:flux}, where this qualitative change is clearly visible.
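The numerical coefficient in Eq.~\eqref{Omega break} follows directly from the quoted values of $F_{2,6}$ and $F_{1,2}$, as the short check below confirms.

```python
# Numerical evaluation of Eq. (Omega break unevaluated) with the quoted
# coefficient values.
Om_LSO = 6.0 ** (-1.5)              # hat-Omega_LSO = 6^(-3/2) ~ 0.0680
F26 = -1.7e-12                      # F_{2,6} = F^{DeltaOmega}_{0,6}
F12 = -3.5e-6                       # F_{1,2}

coeff = (F26 / F12) ** 0.25
assert abs(coeff - 0.026) < 1e-3    # reproduces the 0.026 nu^(1/4) prefactor

Om_break_q1 = Om_LSO - coeff * 0.25 ** 0.25     # equal mass, nu = 1/4
Om_break_q100 = Om_LSO - coeff * 0.01 ** 0.25   # nu = 1/100
assert 0.0 < Om_break_q1 < Om_break_q100 < Om_LSO
```

Note how slowly the cutoff approaches the LSO: even at $\nu=1/100$ the offset $0.026\,\nu^{1/4}$ is still about an eighth of $\hat\Omega_{\rm LSO}$ itself.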
Our focus here has been on the near-LSO behaviour. No analogous breakdown frequency is available in the low-frequency limit because, as explained above, the error terms in the 1PA approximation of $\phi'$ remain small compared to the 1PA terms for arbitrarily low frequencies. However, one must be cautious because the phase error diverges in the limit $\Omega^*_i\to0$ (at fixed $\nu$). In general we must first ensure the 1PA approximation is sufficiently accurate on some finite interval for some values of $\nu$ before expanding the interval for smaller $\nu$, following Eq.~\eqref{Omega_i}.
\subsection{Uncontrolled and numerical errors}\label{sec:1PAT1 errors}
The errors discussed above are intrinsic to a 1PA model. They are controlled in the usual sense of perturbation theory: we understand their scalings with the small parameter and can, in principle, reduce the small-$\nu$ error by proceeding to 2PA order. (Errors due to the transition to plunge can likewise be eliminated by developing a complete inspiral-merger-ringdown model.)
We now consider the three additional approximations described above, which are sources of {\it un}controlled errors in our 1PAT1 model. These are errors in the 1PA terms themselves, and we do not have a precise estimate of their magnitude (though of course, like all 1PA terms, they make an order-$\nu^0$ contribution to the phase). Comparisons with NR suggest that these errors are numerically small, but their precise impact cannot be assessed without reference to a complete 1PA model.
We first consider the approximations related to ignoring 1PA terms that arise from the black hole's evolution. These terms enter into the frequency evolution in three ways: through the terms $\frac{\partial E_{\rm bind}}{\partial \delta m_1}\frac{d\delta m_1}{ds}$ and $\frac{\partial E_{\rm bind}}{\partial \delta s_1}\frac{d\delta s_1}{ds}$ in Eq.~\eqref{Omegadot-exact}, which behave as $\sim \nu^3 {\cal F}_{\cal H}^{(1)}$; through corrections $\propto\delta m_1$ and $\propto\delta s_1$ to the leading-order binding energy and leading-order flux (at both the horizon and infinity); and through the contribution of the subleading horizon flux $\nu^3{\cal F}^{(2)}_{\cal H}$. The first two of these could be almost immediately included in our evolution, but they have negligible impact. The terms proportional to ${\cal F}^{(1)}_{\cal H}$ are suppressed by a factor $\sim{\cal F}^{(1)}_{\cal H}/{\cal F}^{(1)}_\infty$ relative to the other 1PA terms; this factor is very small, reaching $\approx3.3\times10^{-4}$ at the LSO and decaying rapidly away from the LSO, with a PN scaling $\sim x^4$. Similarly, the terms directly proportional to $\delta m_1$ and $\delta s_1$ are highly suppressed. Over the entire inspiral up to the LSO, the change in the black hole mass is $\nu\delta m_1\approx\nu^2\int_0^{\Omega_{\rm LSO}} ({\cal F}^{(1)}_{\cal H}/\dot\Omega)d\Omega\approx 1.2\times10^{-5} \nu M$, and if the black hole starts with zero spin then it accumulates a spin $s_1\approx 2.7\times 10^{-4} \nu M^2$. Hence, the resulting 1PA terms are suppressed by a factor $\lesssim 10^{-4}$ relative to other 1PA terms.
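The quoted mass change can be reproduced at leading PN order. Using the Newtonian rate $d\hat\Omega/d\hat s = \frac{96}{5}\nu\hat\Omega^{11/3}$ and the leading horizon flux ${\cal F}_{\cal H}\approx\frac{32}{5}\nu^2\hat\Omega^6$ (a rough stand-in for the paper's numerical fluxes), the integrand reduces to $\frac{\nu}{3}\hat\Omega^{7/3}$ and the integral evaluates to $\frac{\nu}{10}\hat\Omega_{\rm LSO}^{10/3}$.

```python
# Leading-PN order-of-magnitude check of delta m1 ~ 1.2e-5 nu M.
Om_LSO = 6.0 ** (-1.5)

# Midpoint-rule integral of (1/3) Om^(7/3) from 0 to Om_LSO (per power of nu).
N = 100000
dOm = Om_LSO / N
dm1_over_nu = sum((1.0 / 3.0) * ((i + 0.5) * dOm) ** (7.0 / 3.0)
                  for i in range(N)) * dOm

assert abs(dm1_over_nu - 0.1 * Om_LSO ** (10.0 / 3.0)) < 1e-10  # closed form
assert abs(dm1_over_nu - 1.2e-5) < 2e-6   # within ~10% of the quoted value
```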
The last neglected black-hole-evolution term in Eq.~\eqref{Omegadot-exact}, stemming from $\nu^3{\cal F}^{(2)}_{\cal H}$, is harder to estimate and would require a significant new calculation to incorporate into our model. However, we can obtain a rough estimate from a PN analysis. At the first two orders in the mass ratio, the known PN terms in the horizon flux are~\cite{Taylor:2008xy}
\begin{multline}
{\cal F}_{\cal H} = \frac{32}{5}\nu^2 x^9\left\{\nu^0[1-5x+O(x^2)]\right.\\
\left.-2\nu[2-9x+O(x^2)]+O(\nu^2)\right\}.
\end{multline}
The fluxes to infinity, restricted to the same number of PN terms, are~\cite{Blanchet:2013haa}
\begin{multline}
{\cal F}_\infty=\frac{32}{5} \nu^2 x^5\left\{\nu^0\left[1-\frac{1247}{336}x+O(x^2)\right]\right. \\
\left.- \nu\left[\frac{35}{12} x - \frac{9271}{504}x^2+O(x^3)\right]+O(\nu^2)\right\}.
\end{multline}
We can gain some confidence in the accuracy of these expressions by noting that they correctly predict the ratio of the leading-order-in-$\nu$ fluxes to one digit at the LSO, ${\cal F}^{(1)}_{\cal H}/{\cal F}^{(1)}_{\infty}\approx 3\times 10^{-4}$. At the first subleading order in $\nu$, they predict ${\cal F}^{(2)}_{\cal H}/{\cal F}^{(2)}_{\infty}\approx 3\times 10^{-2}$ at the LSO, decaying rapidly to $\approx 2\times 10^{-4}$ at $x=1/20$. Moreover, as we pointed out in the previous section, the contribution of ${\cal F}^{(2)}_{\infty}$ is already significantly smaller than other 1PA contributions to the phase evolution in the strong field.
We therefore conclude that all 1PA effects of the black hole evolution would not materially impact any of our comparisons in this paper.
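The flux ratios quoted above follow from the two displayed PN series alone, as the short check below shows (using only those truncated series, not the exact fluxes).

```python
# Cross-check of the horizon-to-infinity flux ratios at each order in nu,
# from the PN series in the two preceding equations (common factor 32/5
# nu^2 dropped).
def FH_1(x):   return x ** 9 * (1.0 - 5.0 * x)
def FH_2(x):   return x ** 9 * (-2.0) * (2.0 - 9.0 * x)
def Finf_1(x): return x ** 5 * (1.0 - 1247.0 / 336.0 * x)
def Finf_2(x): return x ** 5 * (-1.0) * (35.0 / 12.0 * x - 9271.0 / 504.0 * x ** 2)

x_LSO = 1.0 / 6.0
assert abs(FH_1(x_LSO) / Finf_1(x_LSO) - 3e-4) < 1e-4       # ~3e-4 at the LSO
assert abs(abs(FH_2(x_LSO) / Finf_2(x_LSO)) - 3e-2) < 1e-2  # ~3e-2 at the LSO
assert abs(abs(FH_2(0.05) / Finf_2(0.05)) - 2e-4) < 1e-4    # ~2e-4 at x = 1/20
```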
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig01.pdf}
\caption{\label{fig:Q1-EFL-ESF} Difference between $Q_{\omega}^1$ computed using the first-law binding energy and the second-order self-force binding energy from Ref.~\cite{Pound:2019lzj}. The frequency range is chosen for easy comparison with Fig.~\ref{fig:Q0andQ1}.}
\end{figure}
Our use of the first-law binding energy may have a substantially larger effect. This is the last of the three approximations outlined in Sec.~\ref{sec:1PA approximations}. Assessing its impact is difficult. As a rough guide, we compare the phase evolution using two versions of the binding energy (while noting it is unclear which of them lies closer to the true result): (i) the first-law binding energy, and (ii) the direct calculation from the Bondi mass in Ref.~\cite{Pound:2019lzj}, specifically the ``conservative'' approximant calculated there. Rather than using $\phi'$ to estimate errors in the phase evolution, as we did in the previous section, we use the dimensionless quantity $Q_\omega=\omega^2/\dot\omega\approx \omega \phi'$, which will facilitate the comparisons in later sections. This has an asymptotic expansion
\be\label{Q expansion}
Q_{\omega}(\omega,\,\nu) = \frac{Q_{\omega}^{0}(\omega)}{\nu} + Q_{\omega}^1(\omega) + Q_{\omega}^2(\omega) \nu + O(\nu^2),
\ee
with the first two terms derived explicitly in Appendix~\ref{sec:exactQ0andQ1}. The different binding energies first enter in the 1PA term, $Q^1_\omega$. Figure~\ref{fig:Q1-EFL-ESF} displays the absolute difference between the two results for $Q^1_\omega$ (the relative difference, for comparison, is $<2\%$ across all frequencies considered). As we discuss later, this difference is appreciable, and it is comparable to the EOB-GSF difference shown later in Fig.~\ref{fig:Q0andQ1}. Only in the late inspiral does the EOB-GSF difference grow significantly larger.
In addition to the above approximations, our 1PAT1 model also contains numerical error. This enters primarily in the calculation of ${\cal F}^{(2)}_\infty$; all other sources of numerical error are negligible. We estimate our error in ${\cal F}^{(2)}_\infty$ to vary from $\sim0.01\%$ (near the LSO) to $\lesssim 1\%$ (for $x\approx 0.02$). This may be comparable to the 1PA effects of the black hole's evolution but is subdominant compared to the uncertainty due to the binding energy.
All of the errors described in this section have an impact comparable to or smaller than other differences considered in Ref.~\cite{Wardell:2021fyy}. In particular, for mass ratios $q\lesssim10$ there are larger differences between the various formulations of the 1PA phase evolution: 1PAT1, 1PAT2, 1PAF1, or leaving the fraction in Eq.~\eqref{Omegadot-exact} unexpanded. However, those differences all vanish in the limit $\nu\to0$, while the 1PA sources of error described here leave a $\nu$-independent impact on the phase.
\section{EOB dynamics and waveform}
\label{sec:EOB}
We work here with the most advanced version of the \texttt{TEOBResumS}{}~\cite{Nagar:2020pcj,Riemenschneider:2021ppj}
EOB waveform model for nonprecessing quasi-circular binaries (see Ref.~\cite{Gamba:2021ydi} for
the spin-precessing version).
All technical details of the model are discussed extensively in Refs.~\cite{Nagar:2018zoe,Nagar:2020pcj,Riemenschneider:2021ppj},
so here we report only the main conceptual elements needed to orient the reader. The conservative dynamics
is described by a Hamiltonian $H_{\rm EOB}$~\cite{Nagar:2018zoe},
depending on the EOB potentials $A(R)$ and $B(R)$ (which include spin-spin interactions~\cite{Damour:2014sva}),
given as a function of the EOB mass-reduced phase-space variables $(r,\varphi,p_\varphi,p_{r_*})$.
These are related to the physical variables by $r=R/M$ (relative separation), $p_{r_*}=P_{R_*}/\mu$
(radial momentum), $\varphi$ (orbital phase),
$p_\varphi=P_\varphi/(\mu M)$ (angular momentum) and $t=T/M$ (time),
where the conjugate momentum $p_r$ is replaced by the
\virg{tortoise}-rescaled variable
$p_{r_*}\equiv (A/B)^{1/2}p_r$.
The Hamiltonian equations for the relative dynamics read
\begin{subequations}
\begin{align}
\dot{\varphi} &= \Omega = \partial_{p_\varphi} \hat{H}_{\rm EOB}, \\
\dot{r} &= \left( \frac{A}{B} \right)^{1/2} \partial_{p_{r_*}} \hat{H}_{\rm EOB}, \\
\dot{p}_\varphi &= \hat{{\cal F}}_\varphi , \\
\dot{p}_{r_*} &= - \left( \frac{A}{B} \right)^{1/2} \partial_{r} \hat{H}_{\rm EOB},
\end{align}
\end{subequations}
where $\Omega$ is the orbital frequency
and $\hat{{\cal F}}_\varphi$ is the radiation reaction force accounting for mechanical
angular momentum losses due to GW emission\footnote{Note that within this context
we assume that the radial force $\hat{{\cal F}}_r=0$, which is equivalent to a
gauge choice for circular orbits~\cite{Buonanno:2000ef}.}, notably including
both the asymptotic and the horizon contribution~\cite{Nagar:2011aa, Damour:2012ky}.
The flux at infinity includes all multipoles up to $\ell=8$ in a special factorized
and resummed form~\cite{Damour:2008gu,Nagar:2019wds, Nagar:2020pcj} so as to improve
the behavior of the original PN series in the strong-field, fast-velocity regime.
The complete quadrupole EOB waveform is written as
\be
\label{eq:h_eob}
h_{22} = h_{22}^{\rm Newt} \hat{h}_{22} \hat{h}_{22}^{\rm NQC} ,
\ee
where $h_{22}^{\rm Newt}$ is the Newtonian contribution, $\hat{h}_{22}$ the higher-order
PN correction in factorized and resummed form~\cite{Damour:2008gu}
and $\hat{h}_{22}^{\rm NQC}$ the next-to-quasi-circular factor informed by NR simulations.
We do not give additional details on $(\hat{h}_{22},\hat{h}_{22}^{\rm NQC})$ but rather
direct the reader to Refs.~\cite{Nagar:2021gss,Riemenschneider:2021ppj}.
Here it is sufficient to recall that the purpose of the NR-informed NQC factor is to correct
the purely analytical waveform so that it is consistent with the NR one around merger,
an approach originally introduced in the extreme-mass-ratio limit~\cite{Damour:2007xr}.
Although our focus here is on the inspiral rather than the ringdown, we recall
that the model also provides a complete analytical description of the ringdown waveform that is
informed by NR simulations~\cite{Damour:2014yha,DelPozzo:2016kmd,Nagar:2021gss}.
For the purposes of this paper, we use a private {\tt MATLAB} implementation of \texttt{TEOBResumS}{},
instead of the publicly available {\tt C} one~\cite{teobresums}, in which the NQC corrections
are usually determined by iterating the evolution three times~\cite{Damour:2009kr}.
\section{Numerical relativity, gravitational self-force and the $Q_\omega$ diagnostic}
\label{sec:gsf_nr}
Before comparing EOB and GSF results it is useful to discuss direct GSF/NR phasing
comparisons, complementing the discussion of Ref.~\cite{Wardell:2021fyy}.
To do so, we focus here on two specific NR datasets from the SXS catalog: $q=7$, SXS:BBH:0298, and
a 20-orbit-long $q=10$ binary, SXS:BBH:0303, which has a {\it rather small} initial eccentricity ($\sim 10^{-5}$).
Note that Ref.~\cite{Wardell:2021fyy} selected the SXS:BBH:1107 dataset for $q=10$, which
is 30 orbits long but is marred by a larger eccentricity.
The main purpose of this section is to use an intrinsic measure of the NR phase evolution to
obtain careful GSF/NR comparisons. To do so, we use the $Q_\omega$ function, defined as
\begin{equation}
\label{eq:Qomg}
Q_{\omega} = \frac{\omega^2}{\dot{\omega}},
\end{equation}
where $\omega\equiv \omega_{22}$ is the waveform frequency.
From this, the accumulated phase difference
in the time-domain in the frequency interval $(\omega_1,\omega_2)$ is given by the integral
\be
\label{eq:Dphi_from_Q}
\Delta\phi_{(\omega_1,\omega_2)}=\int_{\omega_1}^{\omega_2}Q_\omega d\log\omega \ .
\ee
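As a sanity check of Eq.~\eqref{eq:Dphi_from_Q}, note the identity $Q_\omega\,d\log\omega=\omega\,dt=d\phi$, which can be verified numerically on a toy chirp. The short Python sketch below uses a synthetic Newtonian-like frequency evolution; the coalescence time, sampling and units are illustrative assumptions, not values from our data.

```python
import numpy as np

# Toy check of Delta phi = int Q_omega dlog(omega): the chirp below is a
# synthetic Newtonian-like stand-in for real waveform data; the coalescence
# time tc and the sampling are illustrative choices only.

t = np.linspace(0.0, 900.0, 200_001)
dt = t[1] - t[0]
tc = 1000.0
omega = (tc - t) ** (-3.0 / 8.0)          # monotonically increasing frequency
# phase from trapezoidal integration of omega(t)
phi = np.concatenate([[0.0], np.cumsum(0.5 * (omega[1:] + omega[:-1])) * dt])

Q = omega**2 / np.gradient(omega, t)      # Q_omega = omega^2 / omega_dot

# integrate Q_omega dlog(omega) over an interior sub-interval
i1, i2 = 1_000, 199_000
log_om = np.log(omega[i1 : i2 + 1])
Qs = Q[i1 : i2 + 1]
dphi_Q = float(np.sum(0.5 * (Qs[1:] + Qs[:-1]) * np.diff(log_om)))
dphi_direct = float(phi[i2] - phi[i1])
```

The two phase measures agree to the level of the finite-difference error, illustrating why $Q_\omega$ differences on a frequency interval translate directly into accumulated phase.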
The use of this
diagnostic was essential to produce reliable EOB/NR phasing comparisons for coalescing binary
neutron star systems~\cite{Baiotti:2010xh, Baiotti:2011am, Bernuzzi:2014owa}. In that particular
case, the $Q_\omega$ analysis was an important check on the reliability of standard time-domain
phasing comparisons that depend on two shift ambiguities: an arbitrary phase shift and an arbitrary
time shift.
For BBH systems, a systematic phasing analysis involving the $Q_\omega$ function dates
back to Ref.~\cite{Damour:2012ky}, which focused on EOB/NR phasing comparisons for nonspinning
binaries. For the highly accurate SXS data, Ref.~\cite{Damour:2012ky} demonstrated the equivalence
of the time-domain and frequency-domain analyses. In particular it showed that one can rely on
a time-domain phasing analysis to inform the EOB model using SXS data because of the excellent
EOB/NR agreement found during the inspiral.
At the technical level, Ref.~\cite{Damour:2012ky} pointed out that the extraction of a quantitatively
useful $Q_\omega$ function from NR data is a challenging process. In particular, one has to remove
the {\it many} numerical oscillations of spurious origin (either high-frequency or low-frequency) that
prevent any quantitatively reliable comparison with alternative representations of the binary
phasing (for example, EOB or GSF).
\begin{figure*}[t]
\center
\includegraphics[width=0.42\textwidth]{fig02a.pdf}
\hspace{5 mm}
\includegraphics[width=0.42\textwidth]{fig02b.pdf}
\caption{\label{fig:Qomg_all}NR, GSF and EOB comparison of $Q_\omega$ functions obtained from the
phase of $\psi_4^{22}$. Note that the GSF curve is consistently {\it above} both the NR and EOB ones. For comparison we also include the small-$\nu$ expansion truncated at 1PA order, $Q_\omega=\nu^{-1}Q_\omega^0(\omega)+Q_\omega^1(\omega)$, with the exact coefficients calculated from 1GSF and 2GSF data.}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.42\textwidth]{fig03a.pdf}
\hspace{5 mm}
\includegraphics[width=0.42\textwidth]{fig03b.pdf}
\caption{\label{fig:DQomg}Differences between the $Q_\omega$ curves from Fig.~\ref{fig:Qomg_all}.
The logarithmic integral of these yields the phase differences given in Table~\ref{tab:Dphi_Q}.
Note that below $\omega\approx0.055$ the computation of $Q_{\omega}^{\rm NR}$
is not reliable.}
\end{figure*}
The successful computation of $Q_{\omega}$ in Ref.~\cite{Damour:2012ky} was based not on the strain quadrupole
waveform, but rather on the {\it curvature} waveform, i.e. the Weyl scalar\footnote{Note that we simplify here
the notation and define $\psi_4^{22}\equiv R\psi_4^{22}$, where $R$ is the extraction radius.} $\psi_4^{22}$.
We use $\psi_4^{22}$ instead
of $h$ because the former is less affected by various kinds of high-frequency and low-frequency noise,
making it simpler to obtain a $Q_{\omega}$ that is qualitatively and quantitatively reliable.
For our $Q_\omega$ analysis\footnote{Later, we will revert to using $\omega$ to denote the frequency of the $\ell = m = 2$ strain multipole.} we thus adopt the phase convention
\be\label{psi4 phase}
\psi_4^{22}=|\psi_4^{22}|e^{-i\phi_{22}} ,
\ee
and define the corresponding frequency $\omega\equiv \dot{\phi}_{22}$.
For each SXS dataset, we take $\psi_4^{22}$ data from the SXS catalog, corrected for the spurious motion
of the center of mass and extrapolated to infinity with extrapolation order $N=3$.
Although $N=4$ extrapolation order would be the ideal choice for the inspiral, we work with $N=3$
to be consistent with the time-domain phasings shown in Figs.~\ref{fig:phasings_q7} and \ref{fig:phasings_q10},
for which the choice is always $N=3$ as a compromise between the early evolution and the merger.
The NR $\psi_4^{22}$ $Q_{\omega}$ is computed using the technique described in Sec.~IIIB of Ref.~\cite{Damour:2012ky},
which aims to remove the various kinds of spurious oscillations that emerge when taking finite-difference time derivatives of $\phi_{22}$.
More precisely, after the successive application of Savitzky-Golay filters on $\omega$ and $\dot{\omega}$ to
remove the high-frequency noise, the final result is obtained by fitting the Newton-normalized $Q_\omega$
with a suitably chosen rational function.
Following Ref.~\cite{Damour:2012ky}, we define the Newtonian part of $Q_\omega$ as
\be
Q_\omega^N(\omega)=\dfrac{5}{3\nu}2^{-7/3}\omega^{-5/3} \ ,
\ee
and the Newton-normalized function reads
\be
\label{eq:hatQ}
\hat{Q}_\omega(\omega) = Q_\omega/Q_\omega^N \ .
\ee
The function $\hat{Q}_\omega$ is finally fitted over
a given frequency interval with a rational function of the form
\be
\label{eq:Qom_ratio}
\hat{Q}_\omega^{\rm fit}=\dfrac{1 + n_1 x + n_2 x^{3/2} + n_3 x^2 + n_4 x^{5/2}+n_5 x^3}{1 + d_1 x + d_2 x^2 + d_3 x^3}
\ee
where $x\equiv (\omega/2)^{2/3}$. Although we follow the
technique of Ref.~\cite{Damour:2012ky} step by step, for completeness we collect
all the relevant technical details in Appendix~\ref{sec:Qomg_clean}.
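To make the fitting step concrete, the following Python sketch applies the Newton normalization of Eq.~\eqref{eq:hatQ} and a low-order rational fit in the spirit of Eq.~\eqref{eq:Qom_ratio} to synthetic data. It is a simplified stand-in for the Savitzky-Golay-plus-fit pipeline of Ref.~\cite{Damour:2012ky}: only the leading coefficients $(n_1,n_2,d_1)$ are kept so that the linearized least-squares problem stays well conditioned on this narrow toy window, and the mass ratio, frequency range, noise level and seed are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the Newton normalization and a rational fit.
# The "data" are synthetic (a smooth hat{Q} plus small noise); all numbers
# below are illustrative choices, not values from the paper.

nu = 7.0 / 64.0                                   # symmetric mass ratio, q = 7
omega = np.linspace(0.055, 0.095, 400)
x = (omega / 2.0) ** (2.0 / 3.0)

# Newtonian part Q^N; the Newton-normalized function is hat{Q} = Q / Q^N
QN = 5.0 / (3.0 * nu) * 2.0 ** (-7.0 / 3.0) * omega ** (-5.0 / 3.0)
rng = np.random.default_rng(0)
hatQ = 1.0 - 2.3 * x + 1.1 * x**1.5 + 1e-5 * rng.standard_normal(x.size)

# hatQ = (1 + n1 x + n2 x^{3/2}) / (1 + d1 x): multiply through and solve the
# linearized system  n1 x + n2 x^{3/2} - hatQ d1 x = hatQ - 1  by least squares
A = np.column_stack([x, x**1.5, -hatQ * x])
coeffs, *_ = np.linalg.lstsq(A, hatQ - 1.0, rcond=None)
n1, n2, d1 = (float(c) for c in coeffs)

hatQ_fit = (1.0 + n1 * x + n2 * x**1.5) / (1.0 + d1 * x)
max_resid = float(np.max(np.abs(hatQ_fit - hatQ)))
```

The fit residual sits at the level of the injected noise, which is the qualitative behavior one wants from the rational representation of the (much less smooth) NR data.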
Figure~\ref{fig:Qomg_all} compares the results of computing three different $Q_\omega$'s for $q=7$ (left) and $q=10$ (right): (i) the NR one computed
using the technique described thus far (black solid line); (ii) the GSF one, simply obtained by taking
the time-derivatives of the strain waveform and applying a low-pass filter to remove high-frequency
noise (blue solid line) and (iii) the EOB one (red dashed line). We also display the small-$\nu$ expansion truncated at 1PA order, $Q_\omega=\nu^{-1}Q_\omega^0(\omega)+Q_\omega^1(\omega)$, calculated from GSF data using Eqs.~\eqref{Q0 exact} and \eqref{Q1 exact}. The main panel of the figure shows the full $Q_\omega$
functions, while the inset focuses on a smaller frequency interval in order to highlight the difference between
the three curves. The figure is quantitatively complemented by Fig.~\ref{fig:DQomg}, which shows
various differences between $Q_\omega$'s, that is: $\Delta Q_\omega^{XY}\equiv Q_\omega^X-Q_\omega^Y$
where $(X,Y)$ can be EOB, GSF or NR.
Fig.~\ref{fig:DQomg} illustrates that the estimate of the NR $Q_\omega$ is not reliable before
$\omega\sim 0.055$, due to boundary effects related to the fitting procedure.
If we focus on the part of the plot for $\omega>0.055$, the GSF description
yields a $Q_\omega$ that is noticeably different from the other two, with a somewhat smaller difference for $q=10$ than for $q=7$. Even for the $q=10$ case, $\Delta Q^{\rm GSFNR}_\omega$ remains of order unity on a large frequency interval. By contrast,
$\Delta Q^{\rm EOBNR}_\omega$ remains consistently close to zero across all frequencies. Finally, Fig.~\ref{fig:DQomg} also indicates that, although agreement between $Q_\omega^{\rm GSF}$ and $Q_\omega^{\rm EOB}$ improves at lower frequencies, there is still a noticeable difference, even outside the frequency interval where it was possible to reliably compute $Q_\omega^{\rm NR}$.
The effect that all of this has on the waveform phasing is made quantitative in Table~\ref{tab:Dphi_Q}, which lists the phase differences accumulated
on the $(\omega_1,\omega_2)$ frequency interval evaluated using Eq.~\eqref{eq:Dphi_from_Q}. These dephasings can be compared to (and are compatible with) Fig.~4 in Ref.~\cite{Wardell:2021fyy}; the frequency interval $(0.055,0.095)$ we use here roughly corresponds to the interval between the square and the circle in the third panel of that figure or between the downward and upward triangle in the fourth panel, for example.
The fact that $Q_\omega^{\rm GSF}$ is always above $Q_\omega^{\rm NR}$ (or $Q_\omega^{\rm EOB}$) physically
means that the system is inspiralling {\it more slowly} than it should according to the NR prediction; this is reflected in the positive phase differences. One should be careful not to read too much into this: for example, a similar analysis with the 1PAF1 model yields the opposite result. In that case $Q_\omega^{\rm GSF}$ {\it underestimates} the true value, making the system inspiral {\it more quickly} than it should. In the next section we will rephrase this finding
also in terms of more intuitive waveform comparisons in the time-domain.
The data for the truncated expansion $\nu^{-1}Q_\omega^0+Q_\omega^1$ in these plots also reveal valuable information. Because $Q_\omega$ is a nonlinear function of $\dot\Omega$, $Q_\omega$ as calculated from the 1PAT1 model contains contributions at all orders in $\nu$. The difference between $Q^{\rm GSF}_\omega\equiv Q^{\rm 1PAT1}_\omega$ and $\nu^{-1}Q_\omega^0+Q_\omega^1$ in these figures suggests that these higher-order effects in $Q^{\rm GSF}_\omega$ are significant at these mass ratios; in particular, the behaviour of $\nu^{-1}Q_\omega^0(\omega)+Q_\omega^1(\omega)$ near the LSO tells us that the higher-order effects are entirely responsible for the divergence of $Q^{\rm GSF}_\omega$ at the LSO. On the other hand, these higher-order contributions in $Q^{\rm GSF}_\omega$ will differ from the true values of $Q^n_\omega$ for $n>1$, as the true values will receive contributions from $n$PA terms in $\dot\Omega$. The difference between $Q^{\rm NR}_\omega$ and $\nu^{-1}Q_\omega^0+Q_\omega^1$ tells us that these true higher-order terms are also significant (and significantly different from those in 1PAT1) at these mass ratios. We return to these points in Sec.~\ref{sec:nu_dependence}.
\begin{table}[t]
\begin{center}
\begin{ruledtabular}
\begin{tabular}{c c c c c}
$q$ & $(\omega_1,\omega_2)$ &$\Delta\phi^{\rm GSFNR}$ & $\Delta\phi^{\rm GSFEOB}$ & $\Delta\phi^{\rm EOBNR}$ \\
\hline
\hline
7 & $(0.055,0.095)$ & 0.8703 & 0.8486 & 0.0217 \\
10 & $(0.055,0.095)$ & 0.4627 & 0.4581 & 0.00463 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\caption{\label{tab:Dphi_Q}Accumulated phase differences (in radians) obtained by integrating the $\psi_4^{22}$ $Q_{\omega}$
curves of Fig.~\ref{fig:DQomg} between frequencies $(\omega_1,\omega_2)$.
Note that for these mass ratios and over this frequency range the EOB/NR phase differences are smaller than GSF/NR by
more than an order of magnitude.}
\end{table}
\begin{figure*}[t]
\center
\includegraphics[width=0.32\textwidth]{fig04a.pdf}
\includegraphics[width=0.32\textwidth]{fig04b.pdf}
\includegraphics[width=0.32\textwidth]{fig04c.pdf}
\caption{\label{fig:phasings_q7}
Triple EOB/NR/GSF comparison for $q=7$. Left: EOB/NR phasing using SXS:BBH:0298.
The alignment frequency window is $[\omega_{\rm L}, \omega_{\rm R}] = [0.044, 0.05]$ (indicated by vertical dash-dotted lines),
and the phase difference accumulated at the NR merger (dashed blue line) is $\Delta\phi^{\rm EOBNR}_{22}=-0.27$.
The dotted vertical grey line indicates the time at which $\omega_{22}^{\rm NR} = \omega_{22}^{\rm GSF_{break}}$,
and the EOB/NR phase difference at that point is $-0.02$.
Middle: GSF/NR phasing comparison using the {\it same} alignment window. One gets $\Delta\phi^{\rm GSFNR}_{22}\simeq -0.55$
at $\omega_{22}^{\rm GSF_{break}}$, and here the dotted vertical grey line indicates the time at which
$\omega_{22}^{\rm GSF} = \omega_{22}^{\rm GSF_{break}}$. Note that we show in grey the last part of the GSF waveform up to the critical frequency,
but evaluate the phase difference at a time corresponding to the breakdown frequency (see Table~\ref{tab:Dphi}).
Right: EOB/GSF phasing with alignment window $[\omega_{\rm L}, \omega_{\rm R}] = [0.023, 0.025]$,
which yields $\Delta\phi^{\rm EOBGSF}_{22}=1.26$ at $\omega_{22}^{\rm GSF_{break}}$.
Here the dashed grey line indicates the EOB last stable orbit (LSO).}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.32\textwidth]{fig05a.pdf}
\includegraphics[width=0.32\textwidth]{fig05b.pdf}
\includegraphics[width=0.32\textwidth]{fig05c.pdf}
\caption{\label{fig:phasings_q10}
Triple EOB/NR/GSF comparison for $q=10$. Left: EOB/NR phasing using SXS:BBH:0303.
The alignment frequency window is $[\omega_{\rm L}, \omega_{\rm R}] = [0.049, 0.059]$ (indicated by vertical dash-dotted lines),
and the phase difference accumulated at the NR merger (dashed blue line) is $\Delta\phi^{\rm EOBNR}_{22}=-0.24$.
The dotted vertical grey line indicates the time at which $\omega_{22}^{\rm NR} = \omega_{22}^{\rm GSF_{break}}$,
and the EOB/NR phase difference at that point is $-0.03$.
Middle: GSF/NR phasing comparison using the {\it same} alignment window. One gets $\Delta\phi^{\rm GSFNR}_{22}\simeq -0.27$
at $\omega_{22}^{\rm GSF_{break}}$, and here the dotted vertical grey line indicates the time at which
$\omega_{22}^{\rm GSF} = \omega_{22}^{\rm GSF_{break}}$. Note that we show in grey the last part of the GSF waveform up to the critical frequency,
but evaluate the phase difference at a time corresponding to the breakdown frequency (see Table~\ref{tab:Dphi}).
Right: EOB/GSF phasing with alignment window $[\omega_{\rm L}, \omega_{\rm R}] = [0.023, 0.028]$,
which yields $\Delta\phi^{\rm EOBGSF}_{22}=0.75$ at $\omega_{22}^{\rm GSF_{break}}$.
Here the dashed grey line indicates the EOB last stable orbit (LSO).}
\end{figure*}
\begin{figure}[t]
\center
\includegraphics[width=0.42\textwidth]{fig06.pdf}
\caption{\label{fig:phasing_q10_1107} Comparison against the $q=10$ simulation SXS:BBH:1107,
using the same alignment interval as was used for SXS:BBH:0303, namely $[\omega_{\rm L}, \omega_{\rm R}] = [0.049, 0.059]$.
The final accumulated phase difference is $-0.23$. If the alignment interval is moved to lower frequencies,
$[\omega_{\rm L}, \omega_{\rm R}] = [0.040, 0.047]$, we get $\Delta\phi^{\rm GSFNR}_{22}= -0.26$.
As previously, we show in grey the last part of the GSF waveform up to the critical frequency,
but evaluate the phase difference at a time corresponding to the breakdown frequency.}
\end{figure}
\section{Comparing waveforms in the time domain}
\label{sec:waveform}
\subsection{EOB/NR/GSF: comparable-mass case}
Let us now complement the $Q_\omega$-based analysis with additional information
obtained using more standard phasing comparisons in the time domain. Unlike
the gauge-invariant $Q_\omega$ phase analysis, to align the two waveforms in
the time-domain we need to specify an arbitrary phase shift and an arbitrary time shift.
We follow here a well-tested procedure analogous to the one described in Sec.~V A of Ref.~\cite{Baiotti:2011am},
which in turn stems from Sec.~VI A of Ref.~\cite{Boyle:2008ge}. In the latter it was pointed out
that by simply matching the GW phase and frequency at a fiducial time in an NR simulation, one does not obtain a robust
estimate of the phase difference, especially when the chosen time corresponds to a low frequency where the NR waveform
is contaminated by noise and residual eccentricity.
One needs instead to consider an interval and to minimize the phase difference over this interval.
Given two $\ell =m =2$ waveform strain multipoles in the form~\eqref{eq:RWZnorm},
and considering the frequency $\omega = \omega_{22}$, we choose a frequency interval $[\omega_L, \omega_R]$ which we use to define
a common time interval $[t_L, t_R]$ for the two waveforms. Since a given frequency interval will not necessarily correspond to the same time interval in two different waveforms, we set the time interval using the NR waveform when comparing EOB or GSF to NR,
and using the GSF waveform when comparing EOB to GSF. We then interpolate the other waveforms onto a common grid of time steps within this interval. Given that the time interval is made up of $N$ numerical points, we have two timeseries
of the phase $\phi_1(t_i)$ and $\phi_2(t_i)$, where $i = 1, ..., N$, which allow us to define the quantity
\be
\Delta \phi (t_i , \tau, \alpha) = \left[ \phi_2 (t_i - \tau) - \alpha \right] - \phi_1(t_i) \, .
\ee
We then determine $\tau$ and $\alpha$, respectively the time and the phase shift, so that they minimize
the root-mean-square deviation of $\Delta \phi$ over $[t_L, t_R]$,
\be
\sigma = \sqrt{\frac{1}{N} \sum_{i = 1}^{N}\left[\Delta \phi (t_i , \tau, \alpha)\right]^2} \, .
\ee
For a given value of $\tau$, the minimization of $\sigma$ is faster if one optimizes $\alpha$ by defining it analytically as
$\alpha = \frac{1}{N} \sum_{i=1}^{N} \phi_2 (t_i - \tau) - \phi_1(t_i)$.
We note in passing that $\sigma$ also gives a useful estimate of phase errors, and in the
waveform alignment considered in the following it is always of order $10^{-4}$.
Finally, the two obtained waveforms are
\begin{subequations}
\begin{align}
\Psi_{22}^1 &= A_{22}^1(t_1) e^{- i \phi_1(t_1)} \, , \\
\Psi_{22}^2 &= A_{22}^2(t_2 - \tau) e^{- i [\phi_2(t_2 - \tau) - \alpha ]} \, ,
\end{align}
\end{subequations}
and the second one is again interpolated onto the time grid of the first.
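The alignment procedure above can be sketched in a few lines of Python. Here the two ``waveform'' phases are synthetic and offset by a known $(\tau,\alpha)$, so the recovered shifts can be checked; the functional form, window and grids are illustrative assumptions, not taken from our actual EOB/NR/GSF data.

```python
import numpy as np

# Minimal alignment sketch: scan the time shift tau on a grid, fix the phase
# shift alpha analytically at each tau, and minimize the rms deviation sigma.
# The phases below are synthetic; tau_true and alpha_true are known inputs.

t = np.linspace(100.0, 200.0, 2001)            # alignment window [t_L, t_R]
phi1 = 0.02 * t**1.5                           # reference phase (illustrative)
tau_true, alpha_true = 3.0, 1.2

def phi2(s):
    """Second phase: a copy of phi1 shifted by (tau_true, alpha_true)."""
    return 0.02 * (s + tau_true) ** 1.5 + alpha_true

def sigma_and_alpha(tau):
    """rms of Delta phi after the analytic optimization of alpha."""
    p2 = phi2(t - tau)
    alpha = float(np.mean(p2 - phi1))          # analytic optimum for alpha
    sigma = float(np.sqrt(np.mean((p2 - alpha - phi1) ** 2)))
    return sigma, alpha

taus = np.linspace(0.0, 6.0, 6001)
sigmas = np.array([sigma_and_alpha(tau)[0] for tau in taus])
tau_best = float(taus[np.argmin(sigmas)])
sigma_best, alpha_best = sigma_and_alpha(tau_best)
```

In real comparisons the minimization over $\tau$ is of course done on measured phases rather than an analytic shift of the reference, but the structure (grid or 1D minimization over $\tau$, analytic $\alpha$, rms figure of merit) is the same.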
Evidently, any computation of the phase difference between two waveforms will depend on the frequency interval over which the comparison is made. For the purposes of GW data analysis, it is the phase error over a fixed frequency interval that is most relevant. One may also wish to compute a {\it total} accumulated phase difference by aligning the waveforms in the infinite past and computing the phase difference at some time near the end of the waveform. However, as described in Sec.~\ref{sec:1PA accuracy}, this is not sensible when using a 1PA GSF model: the phase error in the model will be larger for larger frequency intervals, and it will ultimately become infinite if the frequency interval starts in the infinite past, at $\omega=0$. Restating the discussion in Sec.~\ref{sec:1PA accuracy} in terms of $Q_\omega$, we can say that the phase error $\int_0^\omega \Delta Q_\omega d \log \omega$ will diverge unless $\Delta Q_\omega$ tends to zero as $\omega \to 0$. From the analysis around Eq.~\eqref{dphidOmega - small omega}, we find $\Delta Q_\omega\sim \omega \nu\phi'_2\sim \nu \omega^{-1/3}$, blowing up in the $\omega\to0$ limit.
We therefore focus here on computing phase differences over a finite portion of the inspiral and consider how those differences depend on how much of the inspiral is included.
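The divergence argument above can also be checked numerically: taking a residual $\Delta Q_\omega\propto \omega^{-1/3}$, the accumulated phase error grows as $\omega_{\min}^{-1/3}$ when the lower limit of the integral is pushed toward zero. The short Python sketch below uses an illustrative prefactor and illustrative frequency values.

```python
import numpy as np

# Numerical illustration of the divergence: with Delta Q_omega = nu*omega^(-1/3),
# the phase error int Delta Q_omega dlog(omega) on (omega_min, omega_max) grows
# without bound as omega_min -> 0; analytically it equals
# 3*nu*(omega_min^(-1/3) - omega_max^(-1/3)). nu here is illustrative.

nu = 1.0 / 64.0
omega_max = 0.1

def phase_error(omega_min, n=200_001):
    """Trapezoidal integral of nu*omega^(-1/3) dlog(omega)."""
    lx = np.linspace(np.log(omega_min), np.log(omega_max), n)
    dQ = nu * np.exp(-lx / 3.0)                # nu * omega^(-1/3)
    return float(np.sum(0.5 * (dQ[1:] + dQ[:-1]) * np.diff(lx)))

errs = [phase_error(w) for w in (1e-2, 1e-4, 1e-6)]
```

Each two-decade decrease of $\omega_{\min}$ multiplies the error by roughly $10^{2/3}\approx 4.6$ (exactly so once $\omega_{\min}^{-1/3}$ dominates), confirming that a total accumulated dephasing from the infinite past is not a meaningful figure of merit for a 1PA model.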
Figure~\ref{fig:phasings_q7} focuses on the $q=7$ binary and shows the time-domain
phasing comparison between EOB, NR and GSF, where the alignment frequency interval
is $[\omega_L,\omega_R]=[0.044,0.05]$ for the EOB/NR and GSF/NR comparisons (left and middle panels),
while $[\omega_L,\omega_R]=[0.023,0.025]$ for the EOB/GSF one (rightmost panel). The
dotted line in the part of the figure including the EOB and NR mergers indicates the point
corresponding to the breakdown of the two-timescale approximation that GSF calculations are based on (see Table~\ref{tab:Dphi}).
The left panel of Fig.~\ref{fig:phasings_q7} illustrates the EOB/NR phase agreement. We see that $\Delta\phi^{\rm EOBNR}_{22}\equiv \phi^{\rm EOB}_{22}-\phi^{\rm NR}_{22}$ remains {\it flat} (oscillating around zero) for most of the inspiral, then it is
$-0.02~\text{rad}$ when $\omega_{22}^{\rm NR} = \omega_{22}^{\rm GSF_{break}}$
and it remains less than $0.5~\text{rad}$ through plunge. Note that the estimated NR phase uncertainty at merger\footnote{This uncertainty is estimated by comparing the simulation with the highest available resolution
to the one with next-to-highest resolution.} for this dataset is rather small, $\delta\phi_{\rm mrg}^{\rm NR}=-0.0775$~rad. The middle panel of Fig.~\ref{fig:phasings_q7} displays
the corresponding 2GSF/NR phase comparison, obtained using the same alignment window.
We see that $\Delta \phi_{22}^{\rm GSFNR}$ oscillates around zero initially, but then
decreases to reach $\simeq -0.55$~rad at $\omg_b$, a value significantly larger than the EOB/NR dephasing.
Since we do not have longer NR simulations at hand, we use a longer EOB waveform to gain
more insight into the dephasing over a larger portion of the inspiral. The left panel of Fig.~\ref{fig:phasings_q7}
indicates that the \texttt{TEOBResumS}{} model offers an excellent description of the phasing over the full
inspiral of SXS:BBH:0298, suggesting that it will give a similarly good representation
of the true waveform at lower frequencies as well.
In the right panel of Fig.~\ref{fig:phasings_q7} we show an EOB/GSF comparison with
the alignment interval chosen in the very early inspiral, $[\omega_L,\omega_R]=[0.023,0.025]$.
In this case the phase difference accumulated up to $\omg_b=0.109$
is $1.2646$ (see Table~\ref{tab:Dphi}).
\begin{table*}[t]
\begin{center}
\begin{ruledtabular}
\begin{tabular}{c c c c | c c c | c}
$q$ & $\omega_{22}^{\rm GSF_{break}}$ & $\omega_{22}^{\rm GSF_{critical}}$ & $\omega_{22}^{\rm EOB_{LSO}}$
& $[\omega_L, \omega_R]$
& $\Delta\phi_{22,t}^{\rm EOBGSF}$ & $\Delta\phi^{\rm EOBGSF}_{22,Q_\omega}$
& $\Delta\phi_{22,t}^{\rm EOB_{6PN}GSF}$ \\
\hline
\hline
7 & 0.10618 & 0.12032 & 0.15707 & [0.023, 0.025] & 1.2646 & 1.2639 & \dots \\
10 & 0.10820 &0.12360 &0.15127 & [0.023, 0.028] & 0.7455 & 0.7438 & \dots \\
15 & 0.11050 & 0.12678 & 0.14644 & [0.023, 0.028] & 0.3782 & 0.3775 & 0.4772 \\
32 & 0.11455 & 0.12747 & 0.14104 & [0.023, 0.033] & $-0.1267$ & $-0.1266$ & 0.0656 \\
64 & 0.11784 & 0.12743 &0.13858 & [0.023, 0.033] & $-0.5091$ & $-0.5085$ & $-0.1213$ \\
128 & 0.12068 & 0.12778 & 0.13733 & [0.023, 0.027] & $-1.1287$ & $-1.1278$ & $-0.2677$
\end{tabular}
\end{ruledtabular}
\end{center}
\caption{\label{tab:Dphi}From left to right: the mass ratio $q$;
the frequency related to the breakdown of the two-timescale approximation, $\omega_{22}^{\rm GSF_{break}}$;
the frequency at which the first-order and second-order forcing terms in the GSF evolution cancel each other, $\omega_{22}^{\rm GSF_{critical}}$;
the adiabatic LSO GW frequency $\omega_{22}^{\rm EOB_{LSO}}$; the phase difference, computed up to $\omega_{22}^{\rm GSF_{break}}$, either
using the time-domain alignment or the $Q_\omega$ analysis. The consistency between the two values confirms the robustness of the EOB/GSF phasings.
The last column shows the time-domain phase difference obtained by improving the $\ell=m=2$ \texttt{TEOBResumS}{} resummed radiation reaction
with a 6PN test-mass term~\cite{Nagar:2022icd}.}
\end{table*}
To check the possible presence of systematics related to the alignment ambiguities, we also computed the
corresponding dephasing using the EOB and GSF $Q_{\omega}$'s.
Our interest, as per the right panel of Fig.~\ref{fig:phasings_q7}, is in the EOB/GSF phase difference
between the initial time, $t_1$, and the final time, $t_2$, corresponding to $\omg_b$.
With both $Q_{\omega}^{\rm EOB}$ and $Q_{\omega}^{\rm GSF}$ at hand, the equivalent of the time-domain
phasing up to $t_2$ is obtained as
\begin{align}\label{eq:Qint - fixed t interval}
\Delta\phi_{Q_{\omega}}^{\rm EOBGSF} &= \int_{\omega^{\rm EOB}(t_1)}^{\omega^{\rm EOB}(t_2)} Q_{\omega}^{\rm EOB} d\log \omega^{\rm EOB} \nonumber\\
& \qquad - \int_{\omega^{\rm GSF}(t_1)}^{\omega^{\rm GSF}(t_2)} Q_{\omega}^{\rm GSF} d\log \omega^{\rm GSF}.
\end{align}
The result of this calculation is shown in Table~\ref{tab:Dphi} and is in excellent agreement with the time-domain dephasing
also given in the same table. This confirms, a posteriori, the reliability of our dephasing estimates.
The same procedure and conclusions we drew for $q =7$ also hold for the $q=10$ case: the various time-domain phasings are
shown in Fig.~\ref{fig:phasings_q10}.
Here the accumulated GSF phase difference, compared to either EOB or NR (see middle panel of Fig.~\ref{fig:phasings_q10})
is a factor of $\sim 2$ smaller than in the $q=7$ case.
To benchmark our analysis, we can also check the robustness of our conclusions using a different $q=10$ dataset
available in the SXS catalog, SXS:BBH:1107. This simulation was also considered in Ref.~\cite{Wardell:2021fyy};
it has a larger initial eccentricity but also starts from a larger initial separation than SXS:BBH:0303. Figure~\ref{fig:phasing_q10_1107}
shows the GSF/NR phasing comparison using the same alignment interval as for SXS:BBH:0303.
If the alignment interval is lowered to $[\omega_L,\omega_R]=[0.040,0.047]$, the phase difference up
to $\omg_b$ increases by $\sim 10\%$, from $-0.23$ to $-0.26$. This supports our previous understanding
that the accumulated phase difference increases as a larger portion of the inspiral is considered.
A reader might note that the GSF-NR dephasings reported in this section are substantially smaller than those in the previous section. This difference is not due to our use of $Q_\omega$ in one analysis and direct measurements of $\phi(t)$ in the other. Instead the distinction is between dephasings on a fixed \textit{time} interval or on a fixed \textit{frequency} interval. If we integrate $Q_\omega$ over a fixed time interval, as in Eq.~\eqref{eq:Qint - fixed t interval}, then the resulting dephasing will agree with a direct measurement of $\phi(t)$ on that interval.
This equivalence is shown by the results
in Table~\ref{tab:Dphi}, where the time interval corresponds to the one used for the waveforms (after the alignment).
But the EOB and GSF frequency intervals are not the same on this time interval, namely $\omega^{\rm EOB}(t_1) \ne \omega^{\rm GSF}(t_1)$
and $\omega^{\rm EOB}(t_2) \ne \omega^{\rm GSF}(t_2)$. The phase difference obtained by this integration
can be compared to the one yielded by the waveforms aligned in the time domain, and correspondingly
carries information about the waveform dephasing. By contrast, integrating the difference in $Q_{\omega}$ on a fixed
\textit{frequency interval}, as done in Table~\ref{tab:Dphi_Q}, yields an accumulated phase that gives information
about the adiabaticity of the models on that frequency interval. Since for each model a fixed frequency
interval corresponds to a different time interval, namely $t^{\rm EOB}(\omega_1) \ne t^{\rm GSF}(\omega_1)$
and $t^{\rm EOB}(\omega_2) \ne t^{\rm GSF}(\omega_2)$, the phase differences evaluated in this way \textit{cannot}
be compared to those of the time-domain alignment.
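The inequivalence of the two diagnostics can be made concrete with a simple toy model. The sketch below uses an illustrative Newtonian-like chirp (not either waveform model, and with arbitrary parameter values) to verify that integrating $Q_\omega$ over a model's \textit{own} frequency range reproduces its time-domain phase accumulation, while integrating $\Delta Q_\omega$ over a \textit{fixed} frequency interval yields a different number:

```python
import numpy as np

# Toy Newtonian-like chirp, purely for illustration (not a GSF/EOB model):
# omega(t) = c*(tc - t)**(-3/8)  =>  Q_omega = omega**2/omegadot = (8/3)*c**(8/3)*omega**(-5/3)
tc = 1000.0

def omega(t, c):
    return c * (tc - t) ** (-3.0 / 8.0)

def phi(t, c):
    # phase up to a constant: phi(t) = int omega dt = -(8*c/5)*(tc - t)**(5/8)
    return -(8.0 * c / 5.0) * (tc - t) ** (5.0 / 8.0)

def Q(om, c):
    return (8.0 / 3.0) * c ** (8.0 / 3.0) * om ** (-5.0 / 3.0)

def integrate(y, x):
    # trapezoidal rule for int y dx
    return 0.5 * ((y[1:] + y[:-1]) * (x[1:] - x[:-1])).sum()

cA, cB = 0.05, 0.05 * 1.001   # two slightly different "models"
t1, t2 = 0.0, 900.0           # fixed *time* interval, common to both models

# Dephasing accumulated on the fixed time interval
dphi_time = (phi(t2, cB) - phi(t1, cB)) - (phi(t2, cA) - phi(t1, cA))

# Check: integrating Q_omega over each model's *own* frequency interval
# reproduces its time-domain phase accumulation
for c in (cA, cB):
    om = np.linspace(omega(t1, c), omega(t2, c), 200001)
    assert abs(integrate(Q(om, c), np.log(om)) - (phi(t2, c) - phi(t1, c))) < 1e-6

# Integrating Delta Q_omega over a *fixed* frequency interval instead
om = np.linspace(omega(t1, cA), omega(t2, cA), 200001)
dphi_freq = integrate(Q(om, cB) - Q(om, cA), np.log(om))

print(dphi_time, dphi_freq)  # the two numbers differ (here by a factor ~8/3)
```

The precise factor between the two numbers depends on the toy model and is not meaningful in itself; the point is only that the two diagnostics are not interchangeable.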
In conclusion, our comprehensive analysis here complements Ref.~\cite{Wardell:2021fyy}, and it (i) demonstrates the limitations of the 1PAT1 model for comparable mass binaries, and (ii) reaffirms the high fidelity of \texttt{TEOBResumS}{} for these mass ratios.
\begin{figure*}[t]
\center
\includegraphics[width=0.43\textwidth]{fig07a.pdf}
\hspace{8mm}
\includegraphics[width=0.43\textwidth]{fig07b.pdf}\\
\vspace{8mm}
\includegraphics[width=0.43\textwidth]{fig07c.pdf}
\hspace{8mm}
\includegraphics[width=0.43\textwidth]{fig07d.pdf}
\caption{\label{fig:phasingsEOBGSF} EOB/GSF phasings for $q = \{15, 32, 64, 128\}$ binaries.
The vertical dash-dotted lines in the left panels indicate the times corresponding to the $[\omega_L,\omega_R]$
alignment interval. In the right panels the dotted line corresponds to $\omega_{22}^{\rm GSF_{\rm break}}$, while the dashed line
indicates the adiabatic EOB LSO. The part of the GSF waveforms past the breakdown frequency is colored in light grey.}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.42\textwidth]{fig09a.pdf}
\qquad
\includegraphics[width=0.42\textwidth]{fig09b.pdf}
\caption{\label{fig:phasing_q15}
EOB/NR and GSF/NR comparisons for $q=15$. Left: EOB/NR phasing using SXS:BBH:2247~\cite{Yoo:2022erv}.
The alignment frequency window is $[\omega_{\rm L}, \omega_{\rm R}] = [0.048, 0.063]$ (indicated by vertical dash-dotted lines),
and the phase difference accumulated at the NR merger (dashed blue line) is $\Delta\phi^{\rm EOBNR}_{22}=-0.61$.
The dotted vertical grey line indicates the time at which $\omega_{22}^{\rm NR} = \omega_{22}^{\rm GSF_{break}}$,
and the EOB/NR phase difference at that point is $-0.09$.
Right: GSF/NR phasing comparison using the {\it same} alignment window. One gets $\Delta\phi^{\rm GSFNR}_{22}\simeq -0.20$
at $\omega_{22}^{\rm GSF_{break}}$; here the dotted vertical grey line instead indicates the time at which
$\omega_{22}^{\rm GSF} = \omega_{22}^{\rm GSF_{break}}$. Note that we show in grey the last part of the GSF waveform up to the critical frequency,
but evaluate the phase difference at a time corresponding to the breakdown frequency (see Table~\ref{tab:Dphi}).}
\end{figure*}
\subsection{EOB/GSF comparisons for intermediate-mass-ratio binaries, $q\geq 15$}
\label{sec:EOB-GSF-IMRI}
Let us turn now to {\it larger} mass ratios, in a regime that should be closer to the natural
domain of validity of 2GSF calculations and thus of the 1PAT1{} model.
We focus here on four illustrative mass ratios, $q = (15,\, 32, \, 64, \,128)$.
These mass ratios are chosen for consistency with Ref.~\cite{Nagar:2022icd}, which presents
direct EOB/NR phasing comparisons in the IMR regime using the recent, breakthrough NR
simulations of Refs.~\cite{Lousto:2020tnb} and~\cite{Yoo:2022erv}. We want to investigate here whether the 1PAT1{}
model can give us complementary information to the one obtained in~\cite{Nagar:2022icd}.
Reference~\cite{Nagar:2022icd} probed two things. On the one hand, using RIT~\cite{Lousto:2020tnb} data,
it showed an excellent EOB/NR agreement, within the NR uncertainty, in the transition
from late inspiral to plunge for $q=15$ and $q=32$, and similarly the consistency of late plunge
and merger for $q=64$ and $q=128$. On the other hand, the use of a $q=15$ SXS long-inspiral
simulation~\cite{Yoo:2022erv} allowed us to probe the \texttt{TEOBResumS}{} waveform through the full inspiral up to
merger, finding a $\simeq -0.6$~rad dephasing at merger (see Figs.~15 and~16 of~\cite{Nagar:2022icd}).
In addition, Ref.~\cite{Nagar:2022icd} also pointed out that the EOB/NR phasing disagreement
at merger can be reduced by $50\%$ by only incorporating an additional 6PN test-particle
correction in the $\ell=m=2$ multipole of the radiation reaction.
\texttt{TEOBResumS}{} thus provides a baseline test of the 1PAT1 waveforms, given that it is inherently
accurate in the early inspiral (automatically recovering a high-order PN expansion there) and is NR-tested
for $q=15$ and $q=32$. A priori, since \texttt{TEOBResumS}{} naturally incorporates a certain amount of test-particle information,
we expect that the differences between 2GSF and EOB waveforms will reduce as $q$ is increased, until $q$ is
sufficiently large that high-order-in-$\nu$ information becomes insignificant while small errors in low-order-in-$\nu$
terms become significant; for $q$ beyond that point, we expect the dephasing between EOB and 2GSF waveforms
to increase due to any failure of the current version of \texttt{TEOBResumS}{} to precisely capture 0PA
and 1PA effects (e.g., due to the lack of the test-particle $\ell=m=2$ term pointed out above).
Our comparisons will bear out these expectations, consistently with the analysis of~\cite{Nagar:2022icd}.
We consider waveforms that start at rather low frequency and have many cycles. As discussed above, this will lead to
larger cumulative errors in the GSF waveforms (as compared to the frequency interval used in our comparisons
for comparable masses). But it provides the most stringent tests of our waveform models, and the errors in the GSF
model can in any case be expected to decrease with increasing $q$.
Therefore, when aligning EOB to GSF, $\omega_L$ is always chosen very low. Then, $\omega_R$ (corresponding to
the second vertical line in Fig.~\ref{fig:phasingsEOBGSF}) is increased progressively until the phase difference
remains substantially flat on the scale of the plot. The resulting alignment intervals for each mass ratio are
displayed in Table~\ref{tab:Dphi}.
Figure~\ref{fig:phasingsEOBGSF} illustrates the high EOB/GSF consistency during the full inspiral, with phase differences accumulated at the time
corresponding to $\omg_b$ of $(+0.38, -0.13,-0.52,-1.13)$~rad for $q=(15,32,64,128)$, respectively.
These numbers are substantially confirmed by the $Q_{\omega}$ analysis, as shown in Table~\ref{tab:Dphi}.
In addition to the absolute magnitude of the phase differences reported in Table~\ref{tab:Dphi}, there is important
information in their {\it sign}: $\Delta\phi_{22}^{\rm EOBGSF}$ (computed either way) at $\omg_b$
is positive up to $q=15$, but it becomes negative for all other values of $q$. By simply inspecting
the values of $\Delta\phi_{22}^{\rm EOBGSF}$ at $\omg_b$ one deduces that
$\Delta\phi_{22}^{\rm EOBGSF}\sim 0$ should occur at $q\simeq 26$.
Physically this means that up to $q\sim 26$ the gravitational interaction encoded within the EOB model is,
loosely speaking, {\it more attractive} (the phase acceleration is larger) than the one predicted by the GSF
model. For $q>26$ it is the opposite.
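As a cross-check of this deduction, one can interpolate the quoted dephasings at $\omg_b$, $+0.38$~rad at $q=15$ and $-0.13$~rad at $q=32$, linearly in $\nu$. This is only a rough sketch (the true $\nu$ dependence is not linear), but it lands close to the quoted crossing:

```python
import math

def nu(q):
    # symmetric mass ratio for q = m1/m2 >= 1
    return q / (1.0 + q) ** 2

def q_of_nu(n):
    # invert nu = q/(1+q)^2 on the q >= 1 branch
    return (1.0 - 2.0 * n + math.sqrt(1.0 - 4.0 * n)) / (2.0 * n)

dphi15, dphi32 = 0.38, -0.13   # Delta phi_22^{EOBGSF} at the breakdown frequency

# linear-in-nu interpolation to the zero crossing
nu_zero = nu(15) + (nu(32) - nu(15)) * dphi15 / (dphi15 - dphi32)
print(q_of_nu(nu_zero))  # ~25, consistent with the quoted q ~ 26
```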
The dephasings in Table~\ref{tab:Dphi} can be compared against the internal error estimates in the 1PAT1 model.
If we assume that for $q\lesssim10$ a 1PA model's error is dominated by 2PA contributions, $\propto \nu$ [cf. Eq.~\eqref{phi expansion}],
then we can estimate the error at larger $q$ as $\delta\phi^{\rm GSF}_{22}\approx \frac{\nu}{\nu_{q_0}}\delta\phi^{\rm GSF}_{22,q_0}$,
where $q>q_0$. Using $q_0=7$ and $\delta\phi^{\rm GSF}_{22,q_0}=\Delta\phi_{22,q_0}^{\rm EOBGSF}$
(since the error in EOB is very small at this mass ratio), we obtain the
error estimates $\delta\phi^{\rm GSF}_{22}=(0.68, 0.34, 0.17, 0.089)$~rad for $q=(15,32,64,128)$;
using $q_0=10$, we obtain the broadly compatible estimates $\delta\phi^{\rm GSF}_{22}=(0.53, 0.26, 0.14, 0.069)$~rad.
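This scaling estimate is elementary to reproduce. In the sketch below, the anchor value $\delta\phi^{\rm GSF}_{22,q_0}\approx 1.27$~rad at $q_0=7$ is a fiducial number chosen to be consistent with the quoted estimates (the actual anchor comes from Table~\ref{tab:Dphi}); the results match the numbers above to within rounding:

```python
def nu(q):
    # symmetric mass ratio for q = m1/m2 >= 1
    return q / (1.0 + q) ** 2

q0 = 7
dphi_q0 = 1.27  # fiducial anchor in rad, consistent with the quoted estimates

# delta_phi scales linearly with nu if the 1PA error is dominated by 2PA terms
estimates = {q: nu(q) / nu(q0) * dphi_q0 for q in (15, 32, 64, 128)}
print(estimates)  # close to the quoted (0.68, 0.34, 0.17, 0.089) rad
```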
Crucially, these estimates assume the first-law binding energy is the correct one to use in the 1PA energy balance law.
They also assume the same frequency interval is used for all mass ratios, while our dephasing measurements
$\Delta\phi_{22}^{\rm EOBGSF}$ use different breakdown frequencies. But we can show, using the near-LSO approximations
from Sec.~\ref{sec:1PA accuracy}, that the accumulated error between two breakdown frequencies
[approximately $ 2\int^{\omega^{\rm break}/2}_{\omega^{\rm break}_{q_0}/2}\phi'_2 d\hat\Omega$, from
Eq.~\eqref{phi expansion}] is several orders of magnitude smaller than our estimated total cumulative
error $\delta\phi^{\rm GSF}_{22}$. Based on our estimates of $\delta\phi^{\rm GSF}_{22}$,
we can therefore say that the EOB-GSF dephasing $\Delta\phi_{22}^{\rm EOBGSF}$ may
be smaller than 1PAT1's error for $q=15$ and $q=32$, but $\Delta\phi_{22}^{\rm EOBGSF}$ is
substantially larger than $\delta\phi^{\rm GSF}_{22}$ for $q=64$ and $q=128$. This, combined
with our observations above, suggests that the turnover where 1PAT1 becomes more accurate
than \texttt{TEOBResumS}{} likely lies somewhere in the range $26\lesssim q<64$.
We can glean more information by comparison with error-controlled NR simulations at mass ratios where they are available.
This can be done for $q=15$ and, to a certain extent, for $q=32$, building upon the results of Ref.~\cite{Nagar:2022icd}.
For $q=15$, Fig.~\ref{fig:phasing_q15} compares the SXS waveform to the \texttt{TEOBResumS}{} and 1PAT1{} ones,
using the same alignment window of Fig.~15 of Ref.~\cite{Nagar:2022icd}. We find $\Delta\phi_{22}^{\rm EOBNR}\approx -0.09$~rad and $\Delta\phi_{22}^{\rm GSFNR}\approx -0.20$~rad
at $\omg_b$ (dotted line in the right panels of the figure), which occurs approximately 2.5 orbits before merger.
From this, we conclude that the 1PAT1{} model is a less faithful representation of the phasing up to $\omg_b$
than \texttt{TEOBResumS}{}, in line with our expectation above. The error in both models is small, but we note that this dephasing is on a narrower frequency interval than the interval of our error estimate $\delta\phi^{\rm GSF}_{22}\approx 0.53$--$0.68$~rad obtained above.
This is analogous to the $q=(7,10)$ cases discussed above, where one has to be careful to compare dephasings over a consistent (frequency or time) interval.
The situation looks different for the $q=32$ case. Here, $\Delta\phi_{22}^{\rm EOBGSF}$ is globally smaller,
reaching only $\sim -0.13$~rad 5~GW cycles ($\sim 2.5$ orbits) before merger. This value is consistent with our
estimated error in 1PAT1{} as well as with the $\Delta\phi^{\rm EOBNR}_{22}$ phase difference for
$q=32$ shown in Fig.~5 of Ref.~\cite{Nagar:2022icd}.
We can therefore say that NR, GSF, and EOB are all consistent with one another at this mass ratio.
Moreover, it appears that at this mass ratio \texttt{TEOBResumS}{} {\it correctly bridges the gap} between the two
very different approaches to the solution of Einstein's equations: GSF and NR. The 1PAT1 model can
in principle provide very accurate inspirals (modulo the uncertainty in the binding energy), but only for sufficiently large mass ratios.
Conversely, the RIT $q=32$ NR simulation of~\cite{Lousto:2020tnb}, compared with \texttt{TEOBResumS}{}
in Ref.~\cite{Nagar:2022icd}, delivers a robust and accurate description of the transition to merger and ringdown,
but currently suffers from a rather large phase uncertainty ($\sim 0.6$~rad in total) during the whole simulated
inspiral of $\sim 12$~orbits. \texttt{TEOBResumS}{} matches both of these models within their internal error estimates
in their respective domains of validity, as well as providing the only complete inspiral-merger-ringdown model of the three at this mass ratio.
Although this mutual consistency of the three approaches around $q\approx 30$ is reassuring,
a more precise assessment of the numerical errors is needed. This is probably only possible with higher-accuracy,
longer NR waveforms, as mentioned in Ref.~\cite{Nagar:2022icd}.
At present, for $q=64$ and $q=128$, we can say that $\Delta\phi_{22}^{\rm EOBGSF}$ is larger than
$\delta\phi_{22}^{\rm GSF}$ and comparable to $\Delta\phi_{22}^{\rm EOBNR}$. For $q=64$, $\Delta\phi_{22}^{\rm EOBGSF}$
is loosely consistent with $\Delta\phi_{22}^{\rm EOBNR}$ reported in the bottom-left panel of Fig.~5 of Ref.~\cite{Nagar:2022icd},
although in this case (and in the $q=128$ case as well) it was not possible to obtain a robust estimate of the
NR phase uncertainty because a complete convergence series of resolutions was not available.
Moreover, for $q=128$ the EOB/NR phase difference at a point corresponding to $\omg_b$ ($\sim 8$ cycles before
merger) is already too large (see again Fig.~5 of~\cite{Nagar:2022icd}) to allow us any additional
quantitative assessment. Therefore, though we can estimate that 1PAT1{} is more accurate than \texttt{TEOBResumS}{}
for these mass ratios, and increasingly so for higher $q$, we cannot precisely quantify the accuracy of
the 1PAT1{} waveforms beyond our rough internal error estimates. This is further complicated by the
uncertainty in the 1PAT1{} model arising from the choice of binding energy.
As a prelude to the next section, and building upon the finding of Sec.~VA of Ref.~\cite{Nagar:2022icd}, it is interesting
to investigate how the EOB/GSF results above change when the $\ell=m=2$ 6PN test-mass coefficient is included in
the radiation reaction. The corresponding EOB/GSF dephasings are listed in the last column of Table~\ref{tab:Dphi}.
The interesting finding is that the EOB/GSF dephasing {\it increases} for $q=15$, while it {\it decreases} for the other
mass ratios. This further corroborates our reasoning up to now, supporting the idea that
the EOB/GSF discrepancy for large mass ratios (say $\gtrsim 32$) is due to the analytical incompleteness
of \texttt{TEOBResumS}{}, while for smaller mass ratios (e.g., $q=15$) the EOB/GSF difference is due to errors in the 1PAT1 model.
Besides the sensitivity to the correction to the radiation reaction,
\texttt{TEOBResumS}{} incorporates only part of the known linear-in-$\nu$ analytical contributions, since it was designed
primarily for comparable-mass binaries. This incompleteness essentially lies in the EOB potentials, $(A,D,Q)$.
The $A$ function includes analytical information {\it only} up to 4PN, while both $D$ and $Q$ contain information
only up to 3PN. These functions are thus {\it different} from the {\it exact} GSF ones, which incorporate the
complete linear-in-$\nu$ information and were calculated in Ref.~\cite{Akcay:2015pjz}. The analysis
of the next section will find evidence that this is likely among the causes of the EOB/GSF differences for large
values of $q$, together with the need to upgrade the dissipative sector of the model, as the last column of
Table~\ref{tab:Dphi} already indicates.
\section{On the origin of the GSF/EOB differences}
\label{sec:nu_dependence}
We have assessed, using two different methods, the existence of a nonnegligible phase
difference between GSF and \texttt{TEOBResumS}{} waveforms up to the GSF breakdown frequency.
Thanks to several EOB/NR/GSF comparisons, we can safely state that the 1PAT1{} description
of the inspiral is a less accurate representation of the true waveform than \texttt{TEOBResumS}{} for $q\leq 15$.
By contrast, there seems to exist a region of mutual EOB/NR/GSF consistency in the range $25\lesssim q \lesssim 32$.
For larger values of $q$, the GSF model becomes increasingly more accurate than the EOB model.
Let us now attempt to investigate the origin of these differences by analyzing the structure of $Q_{\omega}$
as a function of $\nu$. We return to the asymptotic expansion~\eqref{Q expansion}, which we restate here for convenience:
\be
\label{eq:Qo_exp}
Q_{\omega}(\omega;\nu) = \frac{Q_{\omega}^{0}(\omega)}{\nu} + Q_{\omega}^1(\omega) + Q_{\omega}^2(\omega) \nu + O(\nu^2) .
\ee
Here the 0PA term, $Q_{\omega}^{0}$, is identical to $Q_{\omega}$ for a test-mass on a Schwarzschild background subject to
leading-order dissipation (i.e., the order-$\nu$ dissipative self-acceleration or order-$\nu^2$ energy flux).
The 1PA term, $Q_{\omega}^{1}$, incorporates the conservative contributions of the first-order self-acceleration as well
as the first subleading dissipative contribution (i.e., the order-$\nu^2$ dissipative self-acceleration or order-$\nu^3$
energy flux, both of which are themselves affected by the full order-$\nu$ self-acceleration). Finally, the 2PA term,
$Q_{\omega}^2$, contains the conservative contribution of the order-$\nu^2$ self-acceleration and third-order dissipative
information (i.e., the order-$\nu^3$ dissipative self-acceleration or order-$\nu^4$ energy flux).
Given the resummed structure of the EOB Hamiltonian, the actual $Q_{\omega}^{\rm EOB}$ has in fact
an {\it infinite} number of $\nu$-dependent terms and Eq.~\eqref{eq:Qo_exp} is formally obtained by expanding in $\nu$.
As discussed previously, $Q_{\omega}^{\rm GSF}$ also has
non-zero contributions from all higher-order $Q_{\omega}^n$ when expanded in powers of $\nu$, but it only exactly captures $Q_{\omega}^0$ and $Q_{\omega}^1$; this is straightforwardly seen from the expansion in Appendix~\ref{sec:exactQ0andQ1}.
Our aim here is to extract the three functions $Q_{\omega}^0$, $Q_{\omega}^1$ and $Q_{\omega}^2$ from 1PAT1{}
and \texttt{TEOBResumS}{} and compare them. This will give us a more precise quantitative understanding
of the differences between the two models.
To do so, we proceed as follows. We consider mass ratios\footnote{The $q = (26, 36)$ datasets are included to obtain a more robust estimate of the fit coefficients. The $q = 36$ dataset is not considered
elsewhere in this work since it does not add significant information to the other comparisons.} $q=(7,10,15,26,32,36,64,128)$ and
a range $[\omega_{\rm min},\omega_{\rm max}] = [0.023, 0.09]$ with spacing $\Delta \omega = 0.001$.
Here, the maximum value is chosen so as to be sufficiently far from the
possible breakdown of the underlying approximations in the 1PAT1{} model.
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{fig10.pdf}
\caption{\label{fig:nuQomg} Numerical data and fits of $\nu Q_{\omega}$ for a set of mass ratios $q = (7,10,15,26,32,36,64, 128)$ at
a fixed frequency, $\omega = 0.055$. One sees here how EOB and GSF (and the corresponding fits) are in agreement
for $ q \gtrsim 26$ ($\nu \lesssim 0.036$).}
\end{figure}
For each value of $\omega$ we fit $Q_\omega(\omega;\nu)$ using Eq.~\eqref{eq:Qo_exp}.
Figure~\ref{fig:nuQomg} shows the outcome of the fit versus $\nu$ for $\omega=0.055$.
The same procedure is repeated for each value of $\omega$ within $[0.023, 0.09]$.
This eventually gives the functions $\{Q_{\omega}^0(\omega),Q_{\omega}^1(\omega),Q_{\omega}^2(\omega)\}$, which are shown in Fig.~\ref{fig:Q0andQ1}.
We also show in the same figure the ``exact'' $Q_{\omega}^0(\omega)$ and $Q_{\omega}^1(\omega)$,
computed from 1GSF and 2GSF quantities using the formulas derived in Appendix~\ref{sec:exactQ0andQ1}.
The fitted values of 1PAT1's $Q_{\omega}^0(\omega)$ and $Q_{\omega}^1(\omega)$ lie close to the exact values,
broadly validating the fitting procedure, but they do begin to noticeably deviate at high frequencies.
This might suggest that the fits are contaminated by the more complicated $\nu$ dependence of the transition to plunge,
even significantly below $\omega^{\rm break}$. This is further supported by Fig.~\ref{fig:diffQ1GSF},
which shows that excluding the mass ratios $q = \{7, 10\}$ from the fit brings the result closer to the exact one.
However, the deviation is sufficiently small that it cannot alter our conclusions.
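Since Eq.~\eqref{eq:Qo_exp} is linear in the coefficients, the extraction at each fixed $\omega$ is an ordinary linear least-squares problem. A minimal sketch with synthetic data follows; the values of $Q_\omega^{0,1,2}$ below are placeholders for illustration, not the fitted ones:

```python
import numpy as np

# Least-squares extraction of (Q0, Q1, Q2) at fixed omega from samples of
# Q_omega(omega; nu) across mass ratios, as in Eq. (eq:Qo_exp). Synthetic
# data here; in the paper the samples come from the 1PAT1 and TEOBResumS models.

def nu(q):
    # symmetric mass ratio for q = m1/m2 >= 1
    return q / (1.0 + q) ** 2

qs = np.array([7, 10, 15, 26, 32, 36, 64, 128], dtype=float)
nus = nu(qs)

Q0_true, Q1_true, Q2_true = 60.0, 20.0, -35.0   # illustrative values only
Q_data = Q0_true / nus + Q1_true + Q2_true * nus

# design matrix for the fit model Q = Q0/nu + Q1 + Q2*nu
A = np.column_stack([1.0 / nus, np.ones_like(nus), nus])
coeffs, *_ = np.linalg.lstsq(A, Q_data, rcond=None)
print(coeffs)  # recovers [60., 20., -35.] on noise-free data
```

In practice the data are not exactly of this form (higher $Q_\omega^n$ terms and the transition to plunge contaminate the samples), which is why excluding the smallest mass ratios changes the fitted coefficients.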
The left panel of Fig.~\ref{fig:Q0andQ1} indicates that there is very good EOB/GSF agreement in the $Q_{\omega}^0$ part.
This is not surprising given the highly accurate energy flux incorporated within \texttt{TEOBResumS}{},
which builds upon~\cite{Damour:2008gu,Damour:2009kr}. The EOB flux includes all multipoles up to $\ell=8$.
Each multipole is factorized and resummed following Ref.~\cite{Damour:2008gu} and currently includes up to (relative)
test-mass 6PN information~\cite{Messina:2018ghh,Nagar:2019wds,Nagar:2020pcj}. The GSF $Q_{\omega}^0$ is fully
determined by the first-order GSF flux through the horizon and infinity. In the 1PAT1{} model this was computed to
machine precision by summing the fully relativistic modes up to $\ell=30$. Since the GSF calculation is
effectively exact,\footnote{In practice, the GSF flux is only evaluated to a given number of digits. In this case
we evaluated it to machine precision, but this can be pushed further by increasing the numerical accuracy
to which the 1SF fluxes are computed.} we can be confident that the residual EOB/GSF difference is associated
with the fact that \texttt{TEOBResumS}{} is {\it not} analytically complete, as already pointed out above;
as explained in Sec.~\ref{sec:EOB-GSF-IMRI} it is missing higher-order PN information and
higher-$\ell$ contributions. The smallness of the difference means it will only become significant
when it is comparable to $Q_{\omega}^1$ in absolute terms. Given that $\Delta Q_{\omega}^0 \sim 10^{-2}$
and $Q_{\omega}^1 \sim 20$ this will only happen for large mass ratios $q \gtrsim 10^3$.
By contrast, the $Q_{\omega}^1$ and $Q_{\omega}^2$ terms point to more significant differences between \texttt{TEOBResumS}{} and 1PAT1{}.
This is not unexpected given that there are approximations present in $Q_{\omega}^1$ in both models, and that 1PAT1{} does not
directly control the error in $Q_{\omega}^2$ since it neglects potentially important 2GSF conservative and 3GSF
dissipative contributions. We note that over much of the frequency range considered, the difference $\Delta Q^1_\omega$
is comparable to (or smaller than) the uncertainty in $Q^1_\omega$ stemming from the choice of binding energy,
as shown in Fig.~\ref{fig:Q1-EFL-ESF}. It is therefore impossible to conclude which result lies closer to the
true $Q^1_\omega$ for $\omega\lesssim 0.07$. For $\omega\gtrsim 0.07$, the picture is clearer, as the
difference $\Delta Q^1_\omega$ becomes significantly larger than the uncertainty in the 1PAT1{} result.
For $Q^2_\omega$, it is again not entirely clear which of the two models is more accurate, but in this case
no credence should be given to the 1PAT1{} result: since all 2PA terms in $\dot\Omega$ are missing, 1PAT1{}'s $Q^2_\omega$
could be entirely incorrect. Similarly, since \texttt{TEOBResumS}{} has been optimized for comparable-mass binaries,
it is possible that \texttt{TEOBResumS}{} contains significant errors in both $Q_{\omega}^1$ and $Q_{\omega}^2$ that effectively
cancel one another for $q\lesssim 10$. On the other hand, \texttt{TEOBResumS}{} should at least represent the
correct $Q^2_\omega$ in the small-frequency limit, where it reduces to the PN value, leading us to
infer that \texttt{TEOBResumS}{}'s $Q^2_\omega$ is probably more reliable than 1PAT1{}'s.
Interestingly, Fig.~\ref{fig:diffQomg_GSF} shows that both models have only small contributions
from $Q_\omega^n$ beyond $n=2$. In other words, in the range of masses and frequencies considered in this
section, $Q_\omega^{\rm EOB}$ and $Q_\omega^{\rm GSF}$ are well represented by only
the first three terms in the expansion~\eqref{eq:Qo_exp}. Therefore our analysis of those three
terms should provide a fairly complete picture of the two models.
\begin{figure*}[t]
\center
\includegraphics[width=0.32\textwidth]{fig11a.pdf}
\includegraphics[width=0.32\textwidth]{fig11b.pdf}
\includegraphics[width=0.32\textwidth]{fig11c.pdf}
\caption{\label{fig:Q0andQ1} The coefficients $Q_{\omega}^0$ (left), $Q_{\omega}^1$ (center) and $Q_{\omega}^2$ (right) from the expansion
\eqref{eq:Qo_exp}, fitted for a set of values of $\omega$ and $\nu$.
The lower panels display the EOB-GSF difference. The two models show good agreement for $Q_{\omega}^0$,
but differ in $Q_{\omega}^1$ and $Q_{\omega}^2$. The slight disagreement in $Q_{\omega}^0$ is related to the fact that the EOB potentials implemented in \texttt{TEOBResumS}{} do not incorporate the full linear-in-$\nu$ knowledge.}
\end{figure*}
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{fig12.pdf}
\caption{\label{fig:diffQ1GSF} Changing the mass ratios included in the GSF fit: the inclusion of $q=(7,10)$ makes the fit less reliable
and drives it away from the exact result at higher frequencies.}
\end{figure}
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{fig13a.pdf} \\
\vspace{0.5mm}
\includegraphics[width=0.45\textwidth]{fig13b.pdf}
\caption{\label{fig:diffQomg_GSF} The relative difference between the $Q_{\omega}$ function and the corresponding fits
evaluated from the 1PAT1{} waveforms (top) and the \texttt{TEOBResumS}{} ones (bottom).
Interestingly, the magnitude of the difference is small, suggesting that in both models
the contribution to $Q_{\omega}$ beyond $Q_{\omega}^2$ is small. }
\end{figure}
To assess how much each of these three terms in the expansion of $Q_{\omega}$ impacts the phasing,
we can estimate three contributions to the phase difference on the frequency interval $(\omega_1,\omega_2)$:
\begin{align}
\label{eq:dphis}
\Delta \phi_0 &\equiv \frac{1}{\nu} \int_{\omega_1}^{\omega_2} \left( Q_{\omega}^{0, \rm EOB} - Q_{\omega}^{0, \rm GSF} \right) d\log\omega,\\
\Delta \phi_1 &\equiv \int_{\omega_1}^{\omega_2} \left( Q_{\omega}^{1, \rm EOB} - Q_{\omega}^{1, \rm GSF}\right) d\log\omega, \\
\Delta \phi_2 &\equiv \nu \int_{\omega_1}^{\omega_2} \left( Q_{\omega}^{2, \rm EOB} - Q_{\omega}^{2, \rm GSF}\right) d\log\omega,
\end{align}
so that the total phase difference between $(\omega_1,\omega_2)$ is
\be
\Delta\phi^{\rm EOBGSF}_{(\omega_1,\omega_2)}=\Delta\phi_0 + \Delta\phi_1 + \Delta\phi_2.
\ee
The result of this calculation over the frequency interval $(\omega_1,\omega_2)=(0.023,0.09)$ is displayed
in Table~\ref{tab:dphi_Qis}. For comparison, we note that the uncertainty in the choice of binding energy contributes an uncertainty $\Delta\phi_1^{\rm GSF}\approx 0.45$~rad in 1PAT1's 1PA phase $\phi_1$ on this frequency interval, which is not dramatically smaller than the EOB-GSF difference $\Delta \phi_1$. We also stress again that these phase differences \textit{cannot} be compared to those
obtained integrating $Q_{\omega}$ on a fixed time interval, namely they should not be contrasted to the ones in Table~\ref{tab:Dphi}.
They can instead be compared to the ones in Table~\ref{tab:Dphi_Q} for $q = (7,10)$, with which they are consistent, given the larger frequency interval used here.
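The $\nu$ scaling underlying Table~\ref{tab:dphi_Qis} is easy to make explicit: once the three frequency integrals are fixed, each row follows from Eq.~\eqref{eq:dphis}. In the sketch below the integral values are fiducial numbers inferred from the table itself (e.g., $\Delta\phi_1=0.538$~rad is $\nu$-independent by construction), not independent data:

```python
def nu(q):
    # symmetric mass ratio for q = m1/m2 >= 1
    return q / (1.0 + q) ** 2

# Fiducial values of the integrals int (Q^{n,EOB} - Q^{n,GSF}) dlog(omega)
# for n = 0, 1, 2, chosen to be consistent with the table
I0, I1, I2 = 0.00123, 0.538, -15.45

for q in (7, 10, 15, 26, 32, 64, 128):
    n = nu(q)
    dphi0, dphi1, dphi2 = I0 / n, I1, I2 * n   # 1/nu, constant, nu scalings
    print(q, round(dphi0, 3), dphi1, round(dphi2, 3), round(dphi0 + dphi1 + dphi2, 3))
```

With these inputs the total changes sign between $q=15$ and $q=26$, reproducing the near-cancellation of $\Delta\phi_1$ and $\Delta\phi_2$ discussed in the text.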
However, Table~\ref{tab:dphi_Qis} yields a deeper understanding of why EOB and GSF apparently agree best
around $q \sim 26$, as was seen within the time-domain analysis\footnote{We also verify this conclusion in Fig.~\ref{fig:phasing26}, which shows the time-domain phasing for $q = 26$. The accumulated phase difference between \texttt{TEOBResumS}{} and 1PAT1{} waveforms at the GSF breakdown
frequency ($\omega_{\rm break} = 0.11050$) is $\Delta \phi^{\rm EOBGSF}_{22, t} = -0.0194$.
The alignment interval we use here is $[0.025, 0.033]$, and the integration of $Q_{\omega}$ yields $\Delta \phi^{\rm EOBGSF}_{22, Q_{\omega}} = -0.0204$. When adding the 6PN term in the EOB flux, the accumulated phase difference becomes $\Delta \phi^{\rm EOBGSF}_{22, t} =0.1321$.}. In fact, this empirical
deduction is a simple consequence of the fact that the contributions
$\Delta \phi_1$ and $\Delta \phi_2$ largely cancel for this mass ratio; for smaller mass ratios the dephasing is dominated
by $\Delta \phi_2$ while for larger mass ratios it is dominated by $\Delta \phi_1$. From the perspective of EOB, this corresponds to errors in EOB's 1PA term fortuitously cancelling higher-PA terms. From the perspective of GSF, it corresponds to the 1PA model's 2PA error terms becoming sufficiently small that they are comparable to the errors in EOB and NR (in line with the discussion in the previous section). The cancellation point will change if the first-law binding energy turns out to be the incorrect choice for the 1PA evolution, but this overall picture should remain the same.
Quite generally, then, we learn from Table~\ref{tab:dphi_Qis} that the impact of $Q_{\omega}^2$
decreases as $q$ increases, which is of course expected since it is multiplied by $\nu$. We also learn that
the errors in $Q_{\omega}^0$ and $Q_{\omega}^1$ contribute more than $Q_{\omega}^2$ when $q\gtrsim 26$.
Therefore, the takeaway messages are (i) that EOB can be improved for $q \gtrsim 26$ by including more information in
$Q_{\omega}^0$ and $Q_{\omega}^1$; (ii) that the error in the GSF model is probably dominated by its incorrect 2PA term $Q_{\omega}^2$, even for $q\lesssim10$ where 3PA and higher terms might have been significant; (iii) that the two formalisms approximately meet each other at $q \sim 26$ as a result of a fortuitous cancellation of the dephasings coming from $Q_{\omega}^1$ and $Q_{\omega}^2$; and (iv) that, to ensure the 1PA model's accuracy in the small-mass-ratio regime and to obtain more reliable internal estimates of its error, we must determine the correct binding energy.
Point (i) is specifically useful on the EOB side, since it pinpoints the weaknesses of
the current model when pushing it to the IMR regime. In particular, the improvement at 1PA can be
achieved by the implementation of GSF-informed potentials. As the mass ratio increases past $q \gtrsim 10^3$, however, the impact of the 0PA term on the dephasing
will prevail over all others. This means that for EMRI systems the most relevant and urgent update of the EOB model concerns the 0PA flux,
implying the need to incorporate more test-mass information into the radiation reaction. Both the implementation of
GSF-informed potentials and of a different flux in an EOB model specifically targeted for EMRIs will be presented and compared to
2GSF in an upcoming work~\cite{Albertini:2022dmc}.
Because of the uncertainty in 1PAT1's value of $Q^1_\omega$, it is hard to say to what extent \texttt{TEOBResumS}{}'s value of $Q^1_\omega$ must be improved. However, we note that this uncertainty does not affect our ability to incorporate 1PA information into the EOB model. The flux ${\cal F}^{(2)}_\infty$, for example, does not make use of the binding energy (or utilize the other two approximations described in Sec.~\ref{sec:1PAT1 errors}); it should therefore be exact up to numerical error. Similarly, the EOB potentials can be informed by independent, conservative 1GSF information without appealing directly to the 1PAT1 model.
Point (ii) gives analogous useful insight on the GSF side. The effective $Q_{\omega}^2$ in 1PAT1 is probably a significant overestimate of the true value, and this overestimate might dominate the model's error.
This could suggest that alternative formulations of the 1PA evolution equations with smaller contributions to $Q_{\omega}^2$ may significantly improve the phase accuracy at lower mass ratios. However, Figs.~\ref{fig:Qomg_all} and \ref{fig:DQomg} clearly show that the true value of $Q_{\omega}^2$ is not negligible, meaning a model that simply sets it to zero may incur similar levels of error as 1PAT1. We leave a more detailed study of this for future work.
Ultimately, we return again to the need for longer, higher-accuracy, lower-eccentricity, smaller-mass-ratio NR simulations. With such simulations, one could hope to obtain independent estimates of the true values of $Q_\omega^1$ and $Q_\omega^2$, helping to lift the uncertainties discussed in this section.
We note, finally, that the considerations above hold assuming that a small mass ratio expansion yields a faithful representation
of the waveform. Interestingly, Fig.~\ref{fig:diffQomg_GSF} suggests that this may be the case.
The figure shows
that for both GSF and EOB models $Q_{\omega}$ is largely encapsulated in the three coefficients $Q_{\omega}^0$, $Q_{\omega}^1$, and $Q_{\omega}^2$, with only a
small residual accounted for by higher-order terms.
\begin{table}[t]
\begin{center}
\begin{ruledtabular}
\begin{tabular}{c c c c c}
$q$ & $\Delta \phi_0$ & $\Delta \phi_1$ & $\Delta \phi_2$ & $\Delta \phi_{(\omega_1, \omega_2)}^{\rm EOBGSF}$\\
\hline
\hline
7 & 0.011 & 0.538 & $-1.690$ & $-1.141$ \\
10 & 0.015 & 0.538 & $-1.277$ & $-0.724$ \\
15 & 0.021 & 0.538 & $-0.905$ & $-0.347$ \\
\hline
26 & 0.034 & 0.538 & $-0.551$ & $0.021$ \\
32 & 0.042 & 0.538 & $-0.454$ & $0.125$ \\
64 & 0.081 & 0.538 & $-0.234$ & $0.384$ \\
128 & 0.159 & 0.538 & $-0.119$ & $0.578$ \\
\end{tabular}
\end{ruledtabular}
\end{center}
\caption{From left to right, the columns report: the mass ratio $q$, the phase differences due to the first three terms in
the expansion of $Q_{\omega}$, and their sum. The $\Delta \phi$'s are obtained using the definition~\eqref{eq:dphis},
integrating over the frequency interval $(\omega_1, \omega_2) = (0.023, 0.09)$.}
\label{tab:dphi_Qis}
\end{table}
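The $\Delta\phi$'s in Table~\ref{tab:dphi_Qis} rest on the relation $\phi=\int Q_\omega\,\mathrm{d}\ln\omega$, so each term in the $\nu$-expansion of $Q_\omega$ contributes a separate integral over $(\omega_1,\omega_2)$. A minimal numerical sketch, with a toy power-law stand-in for a $Q_\omega$ term (the actual $Q^n_\omega$ come from the GSF and EOB models):

```python
import numpy as np

# Frequency interval used in the table: (omega_1, omega_2) = (0.023, 0.09)
omega = np.linspace(0.023, 0.09, 4001)

# Toy Q_omega term with a power-law falloff (an assumption for illustration)
Q = omega ** (-5.0 / 3.0)

# phi = int Q_omega dln(omega), evaluated with the trapezoidal rule
integrand = Q / omega
dphi = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega))
print(dphi)
```

The same quadrature, applied to $Q^0_\omega/\nu$, $Q^1_\omega$, and $\nu Q^2_\omega$ separately, yields the columns $\Delta\phi_0$, $\Delta\phi_1$, and $\Delta\phi_2$.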
\begin{figure}[t]
\center
\includegraphics[width=0.45\textwidth]{fig14.pdf}
\caption{\label{fig:phasing26} EOB/GSF time-domain phasing for $q = 26$. The alignment interval
is indicated by the dash-dotted lines in the left panels. The accumulated dephasing up to the GSF breakdown frequency is $\Delta \phi^{\rm EOBGSF}_{22, t} = -0.0194$,
confirming the excellent agreement between EOB and GSF for this mass ratio.}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have provided a comprehensive comparison between $\ell=m=2$ gravitational waveforms obtained
with a 2GSF-based approach~\cite{Wardell:2021fyy} and the state-of-the-art EOB model \texttt{TEOBResumS}{}~\cite{Nagar:2020pcj,Riemenschneider:2021ppj,Albertini:2021tbt}.
Of the two available EOB models (the other being {\tt SEOBNRv4HM}~\cite{Cotesta:2018fcv}),
\texttt{TEOBResumS}{} shows the highest level of NR faithfulness and has been checked to be consistent
with the plunge and ringdown phase of state-of-the-art NR simulations~\cite{Lousto:2020tnb} for large mass ratios up to $q=128$~\cite{Nagar:2022icd}.
On the 2GSF side, we work with the time-domain 1PAT1{} model introduced in~\cite{Wardell:2021fyy}.
This model is limited to the inspiral phase, since it does not yet incorporate a model for the transition from inspiral
to plunge. The 1PAT1{} waveforms are reliable up to some frequency below the Schwarzschild LSO GW frequency,
$\omega_{\rm LSO}^{\rm Schw}=0.136$, where the two-timescale approximation on which the model is
built ceases to be valid. Our analysis is thus limited to the inspiral waveform only, up to dimensionless
GW frequency $\omega\lesssim 0.1$. Note that this frequency is always smaller than the LSO frequency $\omega_{\rm LSO}^{\rm EOB}(\nu)$
predicted by the EOB model for any mass ratio, and one has $\omega_{\rm LSO}^{\rm EOB}(\nu)>\omega_{\rm LSO}^{\rm Schw}$~\cite{Buonanno:2000ef}.
We also benchmarked our findings against NR waveform data, similarly to, but more thoroughly than, Ref.~\cite{Wardell:2021fyy}, and we provided a detailed analysis of the 1PAT1 model's sources of error and domain of validity.
Our conclusions are as follows:
\begin{enumerate}[label=(\roman*)]
\item We have found that effects of the transition to plunge are significant over a larger frequency interval than one might expect, restricting 1PAT1's domain of validity to orbital frequencies much smaller than the ``breakdown frequency'' $\approx\Omega^{\rm Schw}_{\rm LSO}-0.026\nu^{1/4}$. Similarly, we have stressed that GSF models should not be pushed too far into the weak-field regime, as they will accumulate arbitrarily large error when the initial frequency approaches zero (though the frequency interval can be broadened for smaller $\nu$). We have also highlighted the use of the first-law binding energy as a source of significant uncertainty in 1PAT1's phasing, $\sim 0.5$~rad for all mass ratios.
\item We have revisited the 2GSF/NR comparison of Ref.~\cite{Wardell:2021fyy} in more detail using the gauge-invariant description
of the gravitational phase provided by the function $Q_{\omega}=\omega^2/\dot{\omega}$ extracted from the Weyl scalar $\psi_4^{22}$.
The use of this quantity is {\it crucial} for obtaining a reliable description of the NR $Q_{\omega}$, as noted long ago
in Ref.~\cite{Damour:2012ky}.
We focus on mass ratios $q=7$ and $q=10$, and
our novel analysis allows us to conclude that for these mass ratios the 1PAT1{} waveform introduces accumulated dephasings $\lesssim 1$~rad up to frequency $\sim 0.1$. As expected, larger phase differences are found for smaller values of $q$, as described in detail
in Appendix~\ref{sec:eobnrgsf_q}.
\item Focusing again on mass ratios $q=7$ and $q=10$, we have carried out similarly extensive time-domain and
frequency-domain phasing analyses using 1PAT1{}, NR, and \texttt{TEOBResumS}{}, in order to eliminate possible systematics that may arise when choosing the alignment window.
\item We have explored the level of agreement between 1PAT1{} and \texttt{TEOBResumS}{} for mass ratios $q=(15,32,64,128)$.
We performed several types of EOB/GSF phasing comparison both in the time domain and using $Q_{\omega}$, notably carefully
cross-checking the results obtained with the two approaches.
Thanks to complementary information gained from a recent EOB/NR comparison~\cite{Nagar:2022icd}, and also considering a long-inspiral $q = 15$
SXS dataset, we concluded that 1PAT1{}
is less accurate than \texttt{TEOBResumS}{} up to frequency $\sim 0.1$ for $q=15$ as well, in analogy with
the $q\leq 10$ mass ratios mentioned above, though in this case the dephasing between the two models is much smaller, 0.38~rad over a long inspiral (i.e., a large frequency interval).
\item By contrast, we found a region of excellent EOB/GSF phase agreement around $q\sim 26$, although the 2GSF/EOB
differences are found to increase again for larger mass ratios up to $q = 128$. Simple error estimates suggest that the 1PA model's error should be significantly below the disagreement between the two models for $q=64$ and $q=128$, implying that the 1PAT1 waveform should be more accurate than the EOB model for these mass ratios, and increasingly so for higher mass ratios. However, this is complicated by (i) the uncertainty in 1PAT1 due to its choice of binding energy, and (ii) our limited knowledge of the magnitude of the true 2PA coefficient in the phase. Since this is a region where no long-inspiral, error-controlled NR simulations are available, it is therefore difficult to state precisely the
limitations of both \texttt{TEOBResumS}{} and 1PAT1{}.
\item To attempt a partial clarification of these issues, we provided a novel analysis of the contributions to the phasing, analyzing both
the 2GSF and the EOB $Q_{\omega}$'s as expansions in $\nu$. This allowed us to single out quantitatively the main differences between the
two approaches in the small-$\nu$ regime. We found that the two models do differ ($\sim 0.5$~rad) already at the level of $Q_{\omega}^1$, but this is again complicated by the uncertainty due to the choice of binding energy; the difference in $Q_{\omega}^1$ between the two models only becomes larger than the uncertainty at high frequencies.
For $q\sim 26$, the difference in $Q_{\omega}^1$ is largely cancelled by a contribution of $\sim -0.5$~rad from $Q_{\omega}^2$, giving overall good 2GSF/EOB agreement.
\item For larger values of $q\gtrsim 10^3$, small differences in $Q_{\omega}^0$ that are negligible for comparable mass ratios become
more and more relevant. These can be attributed to incomplete analytical information in \texttt{TEOBResumS}{} and point to an important area for future improvement.
\item 2PA terms in $Q_{\omega}$ are significant at least for mass ratios $q\lesssim30$. While the 1PAT1 model includes an effective $Q_{\omega}^2$, its value appears to be a large overestimate. If the model's choice of binding energy is shown to be correct, then this overestimate of $Q_{\omega}^2$ is likely the dominant source of error for all mass ratios, up to the point at sufficiently large $q$ where small numerical errors in the 0PA or 1PA terms dominate.
\end{enumerate}
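The breakdown-frequency estimate quoted in point (i) above is simple to evaluate. The following sketch assumes the formula refers to the orbital frequency, with $\Omega^{\rm Schw}_{\rm LSO}=6^{-3/2}\approx0.068$ (half the GW frequency $0.136$ quoted earlier):

```python
def breakdown_frequency(q):
    """Estimated orbital frequency where the two-timescale expansion breaks
    down, Omega ~ Omega_LSO^Schw - 0.026 nu^(1/4), with nu = q/(1+q)^2."""
    nu = q / (1.0 + q) ** 2
    omega_lso = 6.0 ** (-1.5)  # Schwarzschild LSO orbital frequency ~ 0.0680
    return omega_lso - 0.026 * nu ** 0.25

for q in (7, 10, 26, 64, 128):
    print(q, round(breakdown_frequency(q), 4))
```

Since $\nu$ decreases with $q$, the breakdown frequency approaches the LSO frequency from below as the mass ratio grows, consistent with the 1PA approximation improving in the extreme-mass-ratio limit.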
\begin{table*}[t]
\begin{center}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c c c c}
$q$ & $\omega_1$ & $\omega_2$ & $n_1$ & $n_2$ & $n_3$ & $n_4$ & $n_5$ & $d_1$ & $d_2$ & $d_3$ \\
\hline
\hline
7 & 0.04522 & 0.10477 & $-219.98419$ & 2265.56464 & $-890.00345$ & $-22823.02252$ & $35185.66602$ & $86.79492$ & 289.17836 & $-6440.91349$ \\
10 & 0.04522 & 0.10477 & 0.10756 & $-160.50258$ & 165.53483 & 1726.39116 & $-3008.63838$ & $-21.20574$ & 92.70890 & 247.75797 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\caption{Coefficients entering the fitting function for the Newton-rescaled $\hat{Q}_\omega$ function obtained from the curvature waveform $\psi_4^{22}$
for the SXS:BBH:0298 ($q=7$) and SXS:BBH:0303 ($q=10$) NR datasets. We use waveforms extrapolated to infinity with $N=3$ extrapolation order.
In the second and third columns we report the values of the boundaries of the frequency interval $(\omega_1,\omega_2)$ on which the fits are performed.
See Figs.~\ref{fig:Qomg_clean_7}-\ref{fig:Qomg_clean_10} for the visual behavior of the fit.}
\label{tab:Qomg_coeff}
\end{table*}
\begin{figure*}[t]
\center
\includegraphics[width=0.35\textwidth]{fig15a.pdf}
\hspace{5 mm}
\includegraphics[width=0.35\textwidth]{fig15b.pdf}
\caption{\label{fig:Qomg_clean_7} Removal of high-frequency and low-frequency oscillations from the $Q_\omega$ function for
SXS:BBH:0298 ($q=7$). The residual low-frequency
oscillations in $\Delta \hat{Q}_\omega$ average to zero. The coefficients of the fit to $\hat{Q}_\omega$ are listed in Table~\ref{tab:Qomg_coeff}.}
\end{figure*}
\begin{figure*}[t]
\center
\includegraphics[width=0.35\textwidth]{fig16a.pdf}
\hspace{5 mm}
\includegraphics[width=0.35\textwidth]{fig16b.pdf}
\caption{\label{fig:Qomg_clean_10}Removal of high-frequency and low-frequency oscillations from the $Q_\omega$ function for
SXS:BBH:0303 ($q=10$). The residual low-frequency
oscillations in $\Delta \hat{Q}_\omega$ average to zero. The coefficients of the fit to $\hat{Q}_\omega$ are listed in Table~\ref{tab:Qomg_coeff}.}
\end{figure*}
In broad terms, our analysis gives cause for optimism that EOB and 2GSF models can ultimately provide reliable waveforms in the entire $q\gtrsim 10$ regime, both for the LVK collaboration and for use in third-generation ground-based detectors such as Einstein Telescope~\cite{Hild:2009ns} or Cosmic Explorer~\cite{Evans:2021gyd}. The two models we considered currently agree within $\sim 0.5$~rad over a large frequency interval for mass ratios in the range $15\lesssim q\lesssim 64$, and there are clear paths to improvement both within and beyond that range.
On the EOB waveform modeling side, the next challenge will be to improve \texttt{TEOBResumS}{} into a new, GSF-faithful EOB
model that is closer in phasing to the 1PAT1{} model for large mass ratios. This will largely be achieved by including 1GSF information
in the conservative nonspinning dynamics, building upon the results of Ref.~\cite{Akcay:2015pjz}. Note, however, that our
$Q_{\omega}$ analysis indicates that improvements in $Q_{\omega}^0$ are {\it also} needed, i.e., concerning the 0PA flux, and these improvements will become
progressively more important as the mass ratio approaches the extreme-mass-ratio regime. The development of such a GSF-informed
model and the evaluation of its performance against 1PAT1{} will be presented in an upcoming work~\cite{Albertini:2022dmc}.
Our work also suggests several needed improvements on the GSF waveform-modeling side. The most critical is the inclusion of the final plunge, merger, and ringdown. Similarly, to model waveforms of any length, the model must incorporate low-frequency, PN information (though this is a lower priority, as EOB already provides a robust framework for combining weak- and strong-field information). It is also clear that the 1PAT1 model has an unnecessarily large (and incorrect) $Q_{\omega}^2$, and that this is likely the model's dominant source of phase error. Alternatives to this model that include a more faithful $Q_{\omega}^2$ will be considered in future work. To make the model fully reliable in the inspiral phase, we must also calculate the internally consistent binding energy, revisiting the calculation in Ref.~\cite{Pound:2019lzj}, or calculate the 1PA term in $\dot\Omega$ using the local second-order self-force; ultimately, to be entirely confident in these calculations, we should obtain consistent values for $\dot\Omega$ using both methods. However, we note that the improved accuracy of the 1PAT1 model at larger $q$ (e.g., when compared to a $q=15$ SXS waveform) suggests that the first-law binding energy probably lies very close to the true value.
Finally, we stress that long-inspiral, highly accurate NR simulations with $q\geq 10$ are needed to achieve a precise evaluation of the accuracy of GSF and EOB models in this regime. Only a sparse set of
simulations of typical SXS accuracy is required. All that is needed is sufficient data to clearly see the behaviour $Q_\omega=Q^0_\omega(\omega)/\nu+Q^1_\omega(\omega)+\nu Q^2_\omega(\omega)+O(\nu^2)$ and to determine an order-of-magnitude estimate of $Q^2_\omega$, which should enable sufficiently precise estimates of the error in a 1PA approximation. This kind of procedure is well established and already possible using simulations with $q\leq10$~\cite{vandeMeent:2020xgc,Albalat:2022vcy}, but the conclusions would be far more robust with higher-$q$ data; as our analysis has shown, small-$\nu$ fits of $q\lesssim 10$ data can be problematic at high frequencies. Simulations of longer inspirals for $q\sim 10$ would also provide an important additional check of GSF's low-frequency behaviour. For now, EOB waveforms provide the only independent benchmark for the early inspiral phase of 2GSF waveforms; conversely, 2GSF waveforms provide the only benchmark for the large-$q$, strong-field inspiral phase of EOB waveforms.
\acknowledgements
We are grateful to Rossella Gamba for critical observations and comments
on the manuscript.
We thank J. Yoo, V. Varma, M. Giesler, M. Scheel, C. Haster, H. Pfeiffer, L. Kidder, and
M. Boyle for sharing with us the $q=15$ waveform of Ref.~\cite{Yoo:2022erv} before having it available through the SXS catalog.
A.A. has been supported by the fellowship Lumina Quaeruntur No.
LQ100032102 of the Czech Academy of Sciences.
A.P. acknowledges the support of a Royal Society University Research Fellowship. N.W. acknowledges support from a Royal Society - Science Foundation Ireland University Research Fellowship via grants UF160093 and RGF\textbackslash R1\textbackslash180022.
This work makes use of the Black Hole Perturbation Toolkit \cite{BHPToolkit} and Simulation Tools \cite{SimulationToolsWeb}.
\section{Motivation}
The properties of magnetic atoms arranged in a face-centered cubic (fcc) lattice have been a cornerstone for the development of an understanding of antiferromagnetic order in solids. Anticipated theoretically on the basis of measurements of the magnetic susceptibility~\cite{1932:Neel:AnnPhysParis, 1938:Bitter:PhysRev, 1941:vanVleck:JChemPhys, 1950:Anderson:PhysRev, 1971:Neel:Science}, neutron scattering in the transition metal oxides MnO, NiO, and CoO~\cite{1949:Shull:PhysRev, 1951:Shull:PhysRev} marked the starting point of the microscopic discovery of antiferromagnetism~\cite{1954:Lidiard:RepProgPhys, 1955:Nagamiya:AdvPhys}. Crystallizing in a NaCl structure, the transition metal ions in these systems form a fcc sublattice, supporting $\langle111\rangle$ magnetic order, now also referred to as fcc type-II antiferromagnetism. With the development of the notion of super-exchange interactions in transition metal oxides~\cite{1934:Kramers:Physica, 1959:Anderson:PhysRev, 1959:Kanamori:JPhysChemSolids, 2001:Pask:PhysRevB}, a well-founded qualitative and quantitative account of the underlying interactions driving antiferromagnetism on a more general level was initiated. In turn, materials featuring fcc sublattices of the magnetic atoms offer an important point of reference for the identification of new facets of antiferromagnetic ordering phenomena.
A materials class in which the physical properties of fcc sublattices are at the heart of an exceptionally wide range of properties are Heusler compounds, cf.\ Ref.~\onlinecite{2011:Graf:ProgSolidStateCh} for a comprehensive review. Two major subclasses may be distinguished. Full-Heusler compounds, $X_{2}YZ$, crystallize in the L2$_{1}$ structure (space group $Fm\bar{3}m$), comprising four fcc sublattices at $(0,0,0)$, $(1/4,1/4,1/4)$, $(1/2,1/2,1/2)$, and $(3/4,3/4,3/4)$. In comparison, half-Heusler compounds, $XYZ$, crystallize in the non-centrosymmetric C1$_{\textrm{b}}$ structure (space group $F\bar{4}3m$), where one of the fcc sublattices is vacant. Quaternary as well as so-called inverse Heusler compounds also belong to space group $F\bar{4}3m$. The constituent elements $X$, $Y$, and $Z$ may be selected from large parts of the periodic table, which makes it possible to realize a wide range of physical properties, reaching from metallic to insulating behavior and from diamagnetism to ferromagnetism. Further examples include superconductors, thermoelectrics, heavy-fermion compounds, and shape memory alloys; even topological insulators are predicted. Since all of these properties may be achieved within the same crystal structure, all-Heusler devices are a promising goal for future research.
Antiferromagnetism in Heusler compounds is rather rare, in particular when compared to prevalent ferromagnetism. Known examples are typically based on $4f$ or $5f$ elements~\cite{1991:Adroja:JMagnMagnMater, 1996:Seaman:PhysRevB, 2005:Gofryk:SolidStateCommun, 2005:Gofryk:PhysRevB, 2008:Casper:ZanorgallgChem}, in which the magnetism is carried by localized $f$ electrons, or heavy $4d$ or $5d$ transition metals~\cite{1968:Webster:JApplPhys, 1971:Hames:JApplPhys, 1972:Masumoto:JPhysSocJpn, 1975:Campbell:JPhysFMetPhys, 1986:Helmholdt:JLessCommonMet}.
Among half-Heusler compounds based on $3d$ transition metals, CuMnSb may be the only example exhibiting antiferromagnetism~\cite{1968:Endo:JPhysSocJpn}. At low temperatures, Forster \textit{et al.} reported antiferromagnetic order in polycrystalline samples with large ordered moments of $(3.9\pm0.1)~\mu_{\mathrm{B}}/\mathrm{f.u.}$, in which ferromagnetic $\{111\}$ planes of $\langle111\rangle$-oriented manganese moments couple antiferromagnetically with their neighboring planes~\cite{1968:Forster:JPhysChemSolids}. Thus, the antiferromagnetism in CuMnSb represents precisely the type-II form observed in the transition metal oxides. In view of the large ordered moments on the manganese site this behavior suggests local-moment magnetism. However, as the C1$_b$ crystal structure lacks inversion symmetry, additional weak spin--orbit interactions may be present. Further, CuMnSb is metallic.
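As an aside, the type-II configuration described above, ferromagnetic $\{111\}$ sheets of $\langle111\rangle$-oriented moments stacked antiferromagnetically, is easy to generate explicitly. A small illustrative sketch (site conventions and normalizations are our own, not taken from the cited work):

```python
import numpy as np

# fcc sublattice sites (conventional cell, lattice constant a = 1),
# replicated over a 2x2x2 block of cells
basis = np.array([[0, 0, 0], [0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]])
sites = np.array([b + (i, j, k) for i in range(2) for j in range(2)
                  for k in range(2) for b in basis])

# Type-II order: the (111) plane index of an fcc site is x + y + z (an
# integer); moments point along [111] and alternate sign between planes
n111 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
plane = np.rint(sites.sum(axis=1)).astype(int)
spins = np.where(plane % 2 == 0, 1.0, -1.0)[:, None] * n111

print(spins.sum(axis=0))  # net moment vanishes, as expected for an AFM
```

Each $\{111\}$ sheet is ferromagnetic by construction, while neighboring sheets carry opposite moments, reproducing the fcc type-II pattern.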
Recently, ab initio calculations by M\'{a}ca \textit{et al.} based on a Heisenberg model demonstrated that type-II antiferromagnetism may in fact not be the ground state of perfectly ordered CuMnSb~\cite{2016:Maca:PhysRevB}. Instead, antiferromagnetic states with moments oriented along $\langle100\rangle$, a tetragonal arrangement with alternating double layers of opposite spins along the $\langle210\rangle$ directions, or even more complex spin states are expected. Point defects, however, may stabilize the antiferromagnetic structure observed experimentally. The authors determine that in particular Mn antisites on the Cu lattice and Mn interstitials favor ordered moments along $\langle111\rangle$ already at concentrations of a few percent, i.e., values that are, for instance, consistent with the relatively large residual resistivity of ${\sim}50~\mu\Omega$cm reported in the literature~\cite{1982:Schreiner:SolidStateCommun, 1989:Otto:JPhysCondensMatter, 1995:Kirillova:PhysStatusSolidiB, 2006:Boeuf:PhysRevB, 2012:Maca:JMagnMagnMater}. Such pronounced sensitivity of the physical properties on defects and disorder is a quite common phenomenon in Heusler compounds, resulting in modified or even drastically altered physical properties as compared to the expectations for the perfectly ordered host material~\cite{1999:Orgassa:PhysRevB, 2004:Miura:PhysRevB, 2004:Picozzi:PhysRevB, 2011:Graf:ProgSolidStateCh}. Unraveling this intimate connection, in turn, seems to be essential for both technological applications and the fundamental understanding of the underlying physics.
In this context, the nature and origin of the antiferromagnetism in CuMnSb is also puzzling insofar as the low-temperature properties combine characteristics that are believed to be hallmarks of either local-moment or itinerant magnetism~\cite{2003:Pfleiderer:PhysicaB, 2006:Boeuf:PhysRevB}. Notably, large ordered moments and the commensurate magnetic structure are contrasted by metallic electrical resistivity, a relatively low transition temperature, and a peculiar stability of the magnetic order in magnetic fields~\cite{2004:Doerr:PhysicaB}. In addition, distinctly different values of the N\'{e}el temperature $T_{\textrm{N}}$ (50~K, 55~K, 62~K), the fluctuating moment $m_{\textrm{eff}}$ ($6.3~\mu_{\mathrm{B}}/\mathrm{f.u.}$, $5.6~\mu_{\textrm{B}}/\mathrm{f.u.}$, $5.2~\mu_{\textrm{B}}/\mathrm{f.u.}$), and the Curie-Weiss temperature $\mathit{\Theta}_{\mathrm{CW}}$ ($-250$~K, $-160$~K, $-120$~K) were reported by B{\oe}uf \textit{et al.}~\cite{2006:Boeuf:PhysRevB}, Endo~\cite{1970:Endo:JPhysSocJpn}, and Helmholdt \textit{et al.}~\cite{1984:Helmholdt:JMagnMagnMater}. Presumably, these discrepancies are also related to structural disorder or the presence of a magnetic impurity phase~\cite{1970:Endo:JPhysSocJpn}.
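The fluctuating moments and Curie-Weiss temperatures quoted above follow from fits of the high-temperature susceptibility to $\chi=C/(T-\mathit{\Theta}_{\mathrm{CW}})$, with $m_{\mathrm{eff}}/\mu_{\mathrm{B}}\approx\sqrt{8C}$ for $C$ in CGS molar units. A sketch with synthetic data; the chosen $C$ and $\mathit{\Theta}_{\mathrm{CW}}$ are illustrative, not measured values:

```python
import numpy as np

# Synthetic molar susceptibility obeying chi = C / (T - Theta_CW)
C_true, theta_true = 1.95, -160.0      # emu K/mol and K (assumed values)
T = np.linspace(150.0, 300.0, 50)
chi = C_true / (T - theta_true)

# Curie-Weiss fit: 1/chi is linear in T, slope 1/C, intercept -Theta_CW/C
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit

# Effective fluctuating moment in Bohr magnetons (CGS molar units)
m_eff = np.sqrt(8.0 * C_fit)
print(C_fit, theta_fit, m_eff)
```

For these assumed parameters the fit returns an effective moment of about $3.9~\mu_{\mathrm{B}}$, illustrating how sensitive the extracted $m_{\mathrm{eff}}$ and $\mathit{\Theta}_{\mathrm{CW}}$ are to the Curie constant of the fitted interval.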
Speculations about a half-metallic character, in analogy to isostructural NiMnSb~\cite{1983:deGroot:PhysRevLett, 1995:vanLeuken:PhysRevLett}, were clarified by electronic structure calculations that reproduced the antiferromagnetically ordered moments of $4~\mu_{\mathrm{B}}/\mathrm{f.u.}$ and identified CuMnSb as a compensated semi-metallic compound~\cite{2005:Jeong:PhysRevB}. According to Jeong \textit{et al.}, the antiferromagnetic phase may be pictured heuristically as self-doped Cu$^{1+}$Mn$^{2+}$Sb$^{3-}$ and is characterized by a small, semi-metallic density of states at the Fermi level with a finite minority band occupation. A large electron mass enhancement is predicted due to spin fluctuations while the hole masses are expected to stay normal~\cite{2005:Jeong:PhysRevB, 2007:Podgornykh:JMagnMagnMater}.
These spin fluctuations may also be associated with the comparatively low transition temperature in a compound that is in close chemical and structural proximity to high-temperature ferromagnets, such as NiMnSb, and promising candidates for antiferromagnetic spintronics, such as CuMnAs~\cite{2016:Wadley:Science}. Thus, understanding the nature of magnetism in CuMnSb may not only contribute to the design of tailored antiferromagnetic Heusler compounds, but also help to capture the mutual interplay of local-moment and itinerant magnetism and provide fresh input to the wide field of research in which spin fluctuations play an important role. Despite the multitude of unresolved issues, in particular concerning the origin of the antiferromagnetism, and the apparent sensitivity of the physical properties to sample quality, however, only polycrystalline specimens have been studied in the literature so far. Open questions concern whether the puzzling combination of properties persists in phase-pure single crystals and whether a commensurate $\langle111\rangle$-oriented antiferromagnetic structure is the magnetic ground state.
In our paper, we report single-crystal growth of CuMnSb by means of the optical floating-zone technique. Careful X-ray powder diffraction allows us to identify the tetragonal ferrimagnet Mn$_{2}$Sb as an impurity phase in polycrystalline specimens. Its formation may be suppressed by a small excess of antimony in the initial starting composition, as reported in Ref.~\onlinecite{1970:Endo:JPhysSocJpn}. Measurements on our phase-pure single crystals permit us to resolve several ambiguities arising from earlier reports. We find fluctuating moments of nearly 4~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ inferred from the Curie-Weiss-like temperature dependence of the magnetization, ordered moments of the same size inferred from neutron scattering, and a corresponding contribution to the entropy of $R\,\ln9$, consistently implying a local-moment character of the magnetism in CuMnSb.
As an unexpected new aspect, magnetization, specific heat, and transport data further identify a change in the electronic or magnetic structure at a temperature $T^{*} \approx 34$~K, well below the onset of antiferromagnetic order at the N\'{e}el temperature $T_{\mathrm{N}} = 55$~K. Using powder and single-crystal neutron diffraction, we are able to unambiguously attribute this observation to a canting of the commensurate antiferromagnetic state without a uniform magnetic moment. The magnetic space group thereby changes from $R[I]3c$ for $T^{*} < T < T_{\mathrm{N}}$ to $C[B]c$ for $T < T^{*}$. Thus, the canted antiferromagnetism below $T^{*}$ is consistent with the predictions by M\'{a}ca \textit{et al.}~\cite{2016:Maca:PhysRevB}.
Our paper is organized as follows. In Sec.~\ref{Methods}, we describe the sample preparation and the experimental methods used in this study. In Sec.~\ref{Results}, we address the low-temperature properties of CuMnSb. We begin with the magnetization and a discussion of the magnetic anisotropy, before we continue with data on the specific heat and entropy. Next, we turn to the transport properties, namely electrical resistivity, thermal conductivity, Seebeck coefficient, and Hall effect. As a central aspect, we resolve the magnetic structure of phase-pure CuMnSb and its change as a function of temperature by means of powder and single-crystal neutron diffraction data. Finally, we summarize our findings in Sec.~\ref{Conclusions}.
\section{Experimental methods}
\label{Methods}
\subsection{Single-crystal growth and characterization}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure01}
\caption{Signatures of a Mn$_{2}$Sb impurity phase in CuMnSb prepared from stoichiometric weight ratio of the starting elements. (a)~Real part of the susceptibility as a function of temperature for two float-zoned samples prepared from stoichiometric initial weight (gray curve) and with slight antimony excess (blue curve), respectively. The broad maximum around 200~K indicates the presence of a magnetically ordering impurity phase. (b)~High-resolution X-ray powder diffractogram obtained through long counting times. Measuring where no Bragg peaks are expected in the space group $F\bar{4}3m$ of CuMnSb, we identify the impurity phase as tetragonal Mn$_{2}$Sb.}
\label{figure01}
\end{figure}
The objective of our study was the preparation of phase-pure high-quality single crystals of CuMnSb. In a seminal study, Endo reported the observation of a broad maximum in the susceptibility around 200~K in arc-melted CuMnSb samples prepared from a stoichiometric weight ratio of the starting elements~\cite{1970:Endo:JPhysSocJpn}. This maximum was attributed to the presence of a Mn$_{2}$Sb impurity phase. In pure form, Mn$_{2}$Sb crystallizes in the tetragonal space group $P4/nmm$ and displays ferrimagnetic order with a Curie temperature $T_{\mathrm{C}} = 550$~K, followed by a change of the easy direction from the $c$-axis at high temperatures to the basal plane around 240~K~\cite{1957:Wilkinson:JPhysChemSolids}. While the metallurgical details of this putative Mn$_{2}$Sb phase could not be clarified, it was noted that the formation of the impurity phase could be suppressed empirically by a small antimony excess in the starting elements, while the lattice constant of CuMnSb was found to be unchanged.
For our study, single crystals of CuMnSb were grown by means of optical float-zoning using an ultra-high vacuum compatible preparation chain~\cite{2016:Bauer:RevSciInstrum2}. The preparation started from high-purity elements (6N copper, precast 4N manganese, and 6N antimony). Polycrystalline feed rods were cast by means of an inductively heated rod casting furnace~\cite{2016:Bauer:RevSciInstrum}. The feed rods were optically float-zoned at a rate of 5~mm/h while counter-rotating at 6~rpm in a high-purity argon atmosphere of 2.5~bar~\cite{2011:Neubauer:RevSciInstrum}. No evaporation losses were observed during sample preparation, presumably due to the relatively low melting point of CuMnSb of ${\sim}800~^{\circ}\mathrm{C}$.
Figure~\ref{figure01}(a) shows the ac susceptibility of float-zoned CuMnSb prepared from a stoichiometric ratio of the starting elements (gray symbols). A pronounced maximum around 200~K is observed, characteristic of the impurity phase. Standard X-ray diffraction with a Siemens D5000 diffractometer using copper $K_{\alpha1}$ radiation, cf.\ open symbols and solid line in Fig.~\ref{figure01}(b), suggested at first sight phase-pure CuMnSb with space group $F\bar{4}3m$, with volume fractions of impurity phases below the detection limit. However, for long exposure times at diffraction angles where CuMnSb possesses no intensity maxima (solid symbols), tiny additional peaks could be observed. The positions of these peaks were characteristic of tetragonal Mn$_{2}$Sb, suggesting a volume fraction well below 1\%. This finding highlights the ac susceptibility as an exceptionally sensitive probe for the detection of magnetically ordering impurity phases in systems with a very low intrinsic susceptibility, such as CuMnSb.
Further, as shown in Fig.~\ref{figure01}(a), for a preparation process with a starting weight ratio of Cu : Mn : Sb of 1 : 1 : 1.035 the susceptibility no longer displays the anomalous contribution at high temperatures (blue symbols). Comparison with the susceptibility data of the sample prepared from the stoichiometric starting composition suggests an upper bound for the volume fraction of Mn$_{2}$Sb of less than 0.01\%. Hence, consistent with the results of Endo~\cite{1970:Endo:JPhysSocJpn}, we find that the formation of Mn$_{2}$Sb may be suppressed by a small antimony excess in the starting composition, permitting the preparation of phase-pure CuMnSb. All results presented in the following were obtained on such phase-pure samples.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure02}
\caption{Key characteristics of the single crystal grown for this study. (a)~Optically float-zoned single crystal of CuMnSb. The scale on the bottom is given in millimeter. Growth direction was from right to left. (b)~Threefold X-ray Laue pattern of a cubic $\langle111\rangle$ axis.}
\label{figure02}
\end{figure}
Shown in Fig.~\ref{figure02}(a) is a photograph of the float-zoned ingot. Using X-ray Laue diffraction, the final 15~mm of the ingot were identified as a single crystal across the entire cross-section, cf.\ Fig.~\ref{figure02}(b). We note that copper fluorescence limited the quality of the Laue pictures somewhat. From the end of the single crystal, a disc of 1~mm thickness was cut with a $\langle110\rangle$ direction along its axis. Starting with this disc, a bar of $6\times1\times1~\mathrm{mm}^{3}$ and a platelet of $6\times1\times0.2~\mathrm{mm}^{3}$ were prepared, both with their longest edge parallel to $\langle100\rangle$ and their shorter edges parallel to $\langle110\rangle$. Measurements of the ac susceptibility, magnetization, specific heat, and thermal transport were carried out on the bar-shaped sample. The electrical transport properties were studied on the platelet. In addition, a cuboid of $3\times3\times2~\mathrm{mm}^{3}$ with two faces oriented along $\langle100\rangle$ and four along $\langle110\rangle$ was prepared from the bottom of the single-crystal ingot. This sample was used in our neutron diffraction studies at HEiDi. The remaining tilted cylinder with a diameter and height of 6~mm and 11~mm, respectively, was used for neutron diffraction at RESI, MIRA, and DNS. All data presented in the following are shown as recorded, without correcting for the effects of demagnetizing fields, since the absolute value of the magnetization in CuMnSb is tiny.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure03}
\caption{Salient properties observed in X-ray powder diffraction. (a)~Powder diffractogram of float-zoned material. Measured data (open symbols) and a Rietveld refinement based on the cubic half-Heusler space group $F\bar{4}3m$ (solid red line) are in excellent agreement. (b)~Schematic depiction of the crystal structure. (c)~Lattice constant $a$ as a function of temperature. (d)~Isotropic displacement parameters $B_{\mathrm{iso}}$ as a function of temperature. The dashed lines are linear guides to the eye.}
\label{figure03}
\end{figure}
X-ray powder diffraction at different temperatures was carried out on a Bruker D8 Advance with a closed-cycle cryostat using copper $K_{\alpha1}$ and $K_{\alpha2}$ radiation. Shown in Fig.~\ref{figure03}(a) are typical data collected near room temperature. A Rietveld refinement (solid red line) is in excellent agreement with phase-pure CuMnSb with the half-Heusler space group $F\bar{4}3m$, cf.\ Fig.~\ref{figure03}(b). Down to the lowest temperatures studied, no hints suggesting a structural phase transition were observed. As shown in Fig.~\ref{figure03}(c), the cubic lattice constant $a$ monotonically decreases from $6.095~\textrm{\AA}$ to $6.074~\textrm{\AA}$, consistent with values reported in the literature~\cite{1952:Nowotny:MonatshChem, 1952:Castelliz:MonatshChem}. The isotropic displacement parameters for the three elements, see Fig.~\ref{figure03}(d), also decrease linearly as a function of decreasing temperature. Their relatively large absolute values suggest considerable structural disorder as discussed in the following.
Simultaneous refinement of the X-ray and neutron powder data, cf.\ Sec.~\ref{Powder}, permitted us to obtain a quantitative estimate of the dominant type of defects. In our assessment we considered antisite disorder in the form of mixed occupancies at two atomic sites at a time. The data were refined with the degree of mixing as a free parameter. As the main result we find 1.5\% antisite disorder of the Cu and Mn atoms, in excellent agreement with reports of 1.6\% of Cu/Mn antisite disorder in samples investigated by M\'{a}ca \textit{et al.}~\cite{2016:Maca:PhysRevB}. Such defect concentrations may seem high for intermetallic compounds, but in fact are rather low for half-Heusler compounds.
\subsection{Low-temperature bulk and transport properties}
The ac susceptibility was measured in a 9~T Quantum Design physical properties measurement system (PPMS) at an excitation frequency of 911~Hz and an excitation amplitude of 1~mT. The magnetization was determined in a 9~T Oxford Instruments vibrating sample magnetometer, using an excitation amplitude of about 1~mm at a frequency of 62.35~Hz. The specific heat was measured in a 14~T PPMS using a small-pulse method, where the heat pulses had a typical size of 0.5\% of the current sample temperature.
For measurements of the thermal conductivity and the Seebeck coefficient, flattened silver wires of 0.25~mm diameter were glued onto the bar-shaped sample in a four-terminal configuration (heater, hot thermometer, cold thermometer, cold bath) using silver epoxy~\cite{2013:Gangl:Master}. Utilizing the thermal transport option of the 14~T PPMS, heat pulses of ${\sim}3\%$ of the current temperature were applied while continuously sweeping the temperature from 300~K to 2~K at a rate of 0.5~K/min.
For measurements of the electrical resistivity and the Hall effect, gold wires of $25~\mu$m diameter were spot-welded onto the platelet sample in a six-terminal configuration. The measurements were performed in a 14~T Oxford Instruments magnet system using a standard low-current digital lock-in technique at an excitation frequency of 22.08~Hz. The electrical resistivity, $\rho_{xx}$, and the Hall resistivity, $\rho_{xy}$, were inferred from the longitudinal and transverse voltage pick-up following symmetrization and anti-symmetrization, respectively~\cite{2013:Ritz:PhysRevB}. All geometry factors were determined from digital photographs of the samples and sample contacts recorded with an optical microscope.
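The symmetrization and anti-symmetrization step mentioned above can be sketched as follows; this is a minimal illustration of separating the field-even (longitudinal) and field-odd (Hall) parts of the voltage pick-up, with hypothetical voltage values chosen purely for demonstration.

```python
# Sketch of the (anti-)symmetrization of transport data recorded at opposite
# field polarities; the numerical values are hypothetical.

def symmetrize(v_pos, v_neg):
    """Field-even part of the signal, used for the longitudinal voltage."""
    return 0.5 * (v_pos + v_neg)

def antisymmetrize(v_pos, v_neg):
    """Field-odd part of the signal, used for the transverse (Hall) voltage."""
    return 0.5 * (v_pos - v_neg)

# Example: a pick-up voltage containing both even and odd contributions.
v_plus = 1.30e-6   # V, measured at +B (hypothetical)
v_minus = 1.10e-6  # V, measured at -B (hypothetical)

v_even = symmetrize(v_plus, v_minus)
v_odd = antisymmetrize(v_plus, v_minus)
```

Multiplying the resulting voltages by the geometry factors then yields $\rho_{xx}$ and $\rho_{xy}$, respectively.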
\subsection{Neutron diffraction}
All neutron scattering data presented in this paper were collected at the Heinz Maier-Leibnitz Zentrum~(MLZ). Powder diffraction was performed on the high-resolution powder diffractometer SPODI~\cite{2012:Hoelzel:NuclInstrumMethA, 2015:Hoelzel:JLSFR} in Debye-Scherrer geometry at an incident neutron wavelength of $1.548~\textrm{\AA}$. The sample was prepared from float-zoned material by means of a cryomill and filled into a thin-walled vanadium cylinder. The diffraction data were corrected for geometrical aberrations and the curvature of the Debye-Scherrer rings.
Single-crystal diffraction was carried out on the 4-circle single-crystal diffractometers RESI~\cite{2015:Pedersen:JLSFR} and HEiDi~\cite{2015:Meven:JLSFR}. On RESI thermal neutrons with a wavelength of $1.041~\textrm{\AA}$ were used. A MAR345 image plate detector allowed for fast scans of reciprocal space planes, where the scattering intensities were integrated by means of the EVAL-14 method~\cite{2003:Duisenberg:JApplCrystallogr}. The temperature dependence of selected Bragg peaks was studied using a counting tube. On HEiDi hot neutrons with a wavelength of $0.794~\textrm{\AA}$ were used. All data were recorded with a counting tube and scattering intensities were integrated using the Lehmann-Larsen algorithm~\cite{1977:Will:JMagnMagnMater}.
The multi-purpose instrument MIRA~\cite{2015:Georgii:JLSFR, 2018:Georgii:NuclInstrumMethodsPhysResA} was used as a triple-axis spectrometer at an incident neutron wavevector of $1.396~\textrm{\AA}^{-1}$. We show data inferred from energy scans at the $\frac{1}{2}(111)$ position in reciprocal space. On the diffuse neutron scattering spectrometer with polarization analysis DNS~\cite{2001:Schweika:PhysicaB, 2015:Su:JLSFR}, we carried out diffraction experiments at an incident neutron wavelength of $4.2~\textrm{\AA}$, with the neutron polarization parallel to the momentum transfer, referred to as $x$-polarization, for the middle part of the detector bank. Additionally, in these diffraction experiments the clear separation in reciprocal space as well as the distinct evolution as a function of temperature allowed us to distinguish between nuclear (non-spin-flip) and magnetic (spin-flip) reflections. Intensity maxima stemming from a $\lambda/2$ contamination of the incoming neutron beam were manually removed from the data.
In all neutron scattering experiments, low temperatures were provided by top-loading closed-cycle cryostats. Unless stated otherwise, the samples were glued on top of bespoke sample holders made of aluminum using a small amount of GE varnish. Data collected at SPODI, RESI, and HEiDi were analyzed by means of Rietveld and least-squares refinements, respectively, using the software packages FullProf~\cite{1993:RodriguezCarvajal:PhysicaB} and Jana2006~\cite{2014:Petricek:ZKristallogr}.
\section{Experimental results}
\label{Results}
\subsection{Magnetization and specific heat}
\label{Magnetization}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure04}
\caption{Typical magnetization and susceptibility data. (a)~Temperature dependence of the magnetization for typical magnetic fields applied along the major crystallographic axes. A kink marks the N\'{e}el temperature $T_{\mathrm{N}}$. A cusp below $T_{\mathrm{N}}$ is referred to as $T^{*}$. (b)~Inverse normalized magnetization, $H/M$, as a function of temperature. The solid gray line indicates Curie-Weiss-like behavior at high temperatures. (c)~Temperature derivative of the normalized magnetization, $\mathrm{d}M/H\mathrm{d}T$, as a function of temperature illustrating how $T^{*}$ and $T_{\mathrm{N}}$ are inferred. Inset: Close-up view of the regime around $T^{*}$.}
\label{figure04}
\end{figure}
We begin the presentation of our experimental results with the magnetization shown in Fig.~\ref{figure04}(a). As a function of decreasing temperature, the magnetization increases up to a maximum that may be attributed to the onset of antiferromagnetic order at the N\'{e}el temperature $T_{\mathrm{N}}$, consistent with neutron diffraction presented below. Increasing the applied magnetic field increases the absolute value of the magnetization but leaves the shape of the curve unchanged. In particular, $T_{\mathrm{N}}$ is unchanged up to 9~T. A maximum value of the magnetization of 0.25~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ at 9~T corresponds to $1/16$ of the ordered moment reported from neutron scattering~\cite{1968:Forster:JPhysChemSolids}. For different magnetic field directions, namely field parallel to $\langle100\rangle$ (solid line), $\langle110\rangle$ (dotted line), and $\langle111\rangle$ (dashed line), we observe essentially the same behavior, characteristic of isotropic magnetic properties. The absolute value of the magnetization varies by only ${\sim}5\%$ with $\langle100\rangle$ being magnetically softest and $\langle110\rangle$ being hardest.
The lack of field dependence of $T_{\mathrm{N}}$ and the small absolute value of the magnetization were previously discussed in view of an itinerant character of the magnetism in CuMnSb. In the context of the weak anisotropy of the system, however, we rather consider it as the hallmark of strong isotropic exchange interactions. Consequently, we expect that the magnetic moments smoothly rotate towards the field direction as a function of increasing magnetic field with a saturation field in excess of 100~T. This assumption is also corroborated by Ref.~\onlinecite{2004:Doerr:PhysicaB}, reporting an essentially linear increase of the magnetization as a function of fields up to 50~T.
In addition to the maximum at $T_{\mathrm{N}}$, we observe a small cusp at $T^{*} < T_{\mathrm{N}}$ for all field values and directions studied. Such an anomaly was not reported in earlier studies. As will be established below by means of neutron scattering, this signature may be attributed to an antiferromagnetic spin canting without emergence of a uniform magnetic moment. In previous studies, this delicate rearrangement of the magnetic structure, which in fact represents the magnetic ground state of CuMnSb, may have been suppressed by an abundance of point defects, as suggested in a recent ab initio study~\cite{2016:Maca:PhysRevB}.
Shown in Fig.~\ref{figure04}(b) is the inverse normalized magnetization, $H/M$, as a function of temperature. Data for different magnetic field values collapse on a single curve, illustrating the essentially linear field dependence of the magnetization in the field range studied. Consequently, $H/M$ provides a valid estimate of the inverse susceptibility $(\mathrm{d}M/\mathrm{d}H)^{-1}$. We observe Curie-Weiss-like behavior above $T_{\mathrm{N}}$ (dark solid line) characterized by a fluctuating Curie-Weiss moment of $m_{\mathrm{CW}} = (3.95\pm0.05)~\mu_{\mathrm{B}}/\mathrm{f.u.}$ and a Curie-Weiss temperature of $\mathit{\Theta}_{\mathrm{CW}} = -(160\pm8)$~K.
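The Curie-Weiss analysis described above amounts to a linear fit of the inverse susceptibility. The following minimal sketch illustrates this with synthetic, noise-free data generated to mimic the quoted values; the conversion $m_{\mathrm{CW}} = \sqrt{8C}$ assumes the Curie constant $C$ in molar cgs units (emu\,K/mol) and is not part of the original text.

```python
import numpy as np

# Synthetic Curie-Weiss data mimicking the parameters quoted in the text.
m_cw_true = 3.95             # mu_B per f.u. (target fluctuating moment)
theta_true = -160.0          # Curie-Weiss temperature, K
C_true = m_cw_true**2 / 8.0  # Curie constant, emu K / mol (cgs convention)

T = np.linspace(100.0, 300.0, 50)      # fit window above T_N
inv_chi = (T - theta_true) / C_true    # ideal 1/chi = (T - theta)/C

# Linear fit: slope = 1/C, intercept = -theta/C.
slope, intercept = np.polyfit(T, inv_chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
m_cw_fit = np.sqrt(8.0 * C_fit)        # effective moment in mu_B
```

On real data the fit window above $T_{\mathrm{N}}$ must of course be chosen where $H/M$ is actually linear in $T$.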
The fluctuating moment is distinctly smaller than the values reported in the literature~\cite{2006:Boeuf:PhysRevB, 1970:Endo:JPhysSocJpn, 1984:Helmholdt:JMagnMagnMater}. Note, however, that in samples incorporating magnetic impurity phases, such as Mn$_{2}$Sb, temperature sweeps of the susceptibility or $H/M$ typically exhibit a pronounced evolution as a function of field. The latter, in turn, may lead to an erroneous extrapolation of $m_{\mathrm{CW}}$ and $\mathit{\Theta}_{\mathrm{CW}}$. In our phase-pure specimens, the excellent agreement of the size of the fluctuating moment and the ordered moment, inferred from the neutron scattering experiments presented below and the literature~\cite{1968:Forster:JPhysChemSolids}, strongly implies a local-moment character of the magnetism in CuMnSb. Furthermore, the ratio $f = -\mathit{\Theta}_{\mathrm{CW}} / T_{\mathrm{N}} \approx 3$ suggests that geometric frustration may play a role~\cite{1994:Ramirez:AnnuRevMaterSci}.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure05}
\caption{Magnetic phase diagram of CuMnSb. The temperatures $T^{*}$ and $T_{\mathrm{N}}$ are inferred from the temperature derivative of the normalized magnetization, $\mathrm{d}M/H\mathrm{d}T$. We distinguish three regimes: paramagnet (PM), antiferromagnet (AFM), and canted antiferromagnet.}
\label{figure05}
\end{figure}
The N\'{e}el temperature $T_{\mathrm{N}}$ and the temperature of the cusp at $T^{*}$ are tracked best in the temperature derivative of the normalized magnetization, $\mathrm{d}M/H\mathrm{d}T$, depicted in Fig.~\ref{figure04}(c). Here, we associate $T_{\mathrm{N}}$ with a zero crossing and $T^{*}$ with a local maximum. The resulting magnetic phase diagram is shown in Fig.~\ref{figure05}. While $T_{\mathrm{N}}$ is independent of the magnetic field, $T^{*}$ is suppressed by a few Kelvin within the field range studied. Three regimes may be distinguished: a paramagnet (PM) at high temperatures, a commensurate antiferromagnet (AFM) at intermediate temperatures, and a canted antiferromagnet at low temperatures.
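The extraction of $T_{\mathrm{N}}$ (zero crossing) and $T^{*}$ (local maximum) from the derivative curve can be sketched numerically as follows; the synthetic $\mathrm{d}M/H\mathrm{d}T$ curve and the transition temperatures used here are purely illustrative.

```python
import numpy as np

# Synthetic derivative curve: a decreasing background crossing zero at T_N,
# plus a narrow bump at T* (all values hypothetical).
T = np.linspace(2.0, 80.0, 500)
TN_true, Tstar_true = 55.0, 34.0
dMdT = 0.2 * (TN_true - T) + 6.0 * np.exp(-((T - Tstar_true) / 3.0) ** 2)

# T_N: zero crossing, refined by linear interpolation between grid points.
i = np.where(np.diff(np.sign(dMdT)) < 0)[0][0]
T_N = T[i] - dMdT[i] * (T[i + 1] - T[i]) / (dMdT[i + 1] - dMdT[i])

# T*: first interior local maximum of the derivative curve.
j = np.where((dMdT[1:-1] > dMdT[:-2]) & (dMdT[1:-1] > dMdT[2:]))[0][0] + 1
T_star = T[j]
```

On measured data, smoothing or binning of the numerical derivative is typically required before locating these features.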
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure06}
\caption{Specific heat and entropy of CuMnSb. (a)~Specific heat as a function of temperature. A clear lambda anomaly is observed at the N\'{e}el transition. The phonon contribution $C_{\mathrm{ph}}$ may be approximated by a Debye model with the Debye temperature $\mathit{\Theta}_{\mathrm{D}} = 275$~K. Magnetic fields up to 14~T show no substantial effect. Inset: Determination of the Sommerfeld coefficient $\gamma_{0}$. (b)~Specific heat after subtraction of the phonon contribution divided by temperature, $C_{\mathrm{el}}/T$, as a function of temperature. The N\'{e}el temperature is inferred by means of an entropy-conserving construction. Around $T^{*}$ a broad maximum is observed. Inset: Non-phonon contribution to the entropy $S_{\mathrm{el}}$.}
\label{figure06}
\end{figure}
These findings are corroborated by the specific heat shown in Fig.~\ref{figure06}(a). At high temperatures, the specific heat approaches the Dulong-Petit limit of $C_{\mathrm{DP}} = 9R$ and is dominated by phonon contributions. The latter may be approximated very well in terms of a Debye model with a Debye temperature $\mathit{\Theta}_{\mathrm{D}} = 275$~K (solid gray line). As illustrated in the inset, when the phonons freeze out at low temperatures, we extract a Sommerfeld coefficient $\gamma_{0} = 3~\mathrm{mJ\,mol}^{-1}\mathrm{K}^{-2}$. This small value is characteristic of a material with only weak electronic correlations and hence a local-moment nature of the magnetism. At the N\'{e}el temperature $T_{\mathrm{N}}$, a clear lambda anomaly is observed, indicating a second-order phase transition. A faint bulge may be perceived around $T^{*}$. Magnetic fields up to 14~T have no significant influence on the specific heat curve.
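The Debye approximation of the phonon contribution can be evaluated numerically; the sketch below uses the standard Debye integral with $\mathit{\Theta}_{\mathrm{D}} = 275$~K and three atoms per formula unit, so that the high-temperature limit is the Dulong-Petit value $9R$ quoted above.

```python
import numpy as np

R = 8.314462618   # molar gas constant, J mol^-1 K^-1
theta_D = 275.0   # Debye temperature from the fit in the text, K
n_atoms = 3       # Cu, Mn, Sb per formula unit -> Dulong-Petit limit 9R

def debye_heat_capacity(T, theta=theta_D, n=n_atoms):
    """Debye phonon specific heat per mole of formula units, J mol^-1 K^-1."""
    x = np.linspace(1e-6, theta / T, 2000)
    f = x**4 * np.exp(x) / (np.exp(x) - 1.0) ** 2
    dx = x[1] - x[0]
    integral = dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
    return 9.0 * n * R * (T / theta) ** 3 * integral
```

At high temperature the result approaches $9R \approx 74.8~\mathrm{J\,mol^{-1}K^{-1}}$, while at low temperature it exhibits the usual $T^{3}$ behavior.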
For further analysis, we subtract the phonon contribution from the measured data. The remaining contribution to the specific heat divided by temperature, $C_{\mathrm{el}}/T$, is depicted in Fig.~\ref{figure06}(b). The value of $T_{\mathrm{N}}$ as inferred from an entropy-conserving construction (gray shading) is in excellent agreement with the value of $T_{\mathrm{N}}$ inferred from the magnetization. Around $T^{*}$ a broad maximum provides evidence of considerable contributions to the specific heat associated with canting of the magnetic structure. Numerically integrating $C_{\mathrm{el}}/T$ yields the associated entropy $S_{\mathrm{el}}$ as shown in the inset of Fig.~\ref{figure06}(b). Around $T_{\mathrm{N}}$ most of the entropy has been released and the slope distinctly changes. At high temperatures, $S_{\mathrm{el}}$ approaches a value of $R\,\ln9$, consistent with a local moment of $4~\mu_{\mathrm{B}}$.
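The numerical integration of $C_{\mathrm{el}}/T$ can be sketched with a cumulative trapezoidal rule; the toy specific heat below is a hypothetical linear-in-$T$ model constructed only so that the released entropy totals $R\ln9$, illustrating the bookkeeping rather than the measured curve.

```python
import numpy as np

R = 8.314462618  # J mol^-1 K^-1

def entropy_from_cp(T, C_el):
    """Cumulative entropy S(T) = int C_el/T' dT' via the trapezoidal rule."""
    y = C_el / T
    increments = 0.5 * (y[1:] + y[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Toy magnetic specific heat releasing R ln 9 up to T_N (hypothetical shape).
T_N = 55.0
T = np.linspace(0.1, T_N, 400)
C_el = (R * np.log(9.0) / T_N) * T
S = entropy_from_cp(T, C_el)   # S[-1] is close to R ln 9
```

On measured data, the small unresolved contribution below the lowest data point must be estimated separately, e.g., by extrapolating $C_{\mathrm{el}}/T$ to zero temperature.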
\subsection{Electrical and thermal transport properties}
\label{Transport}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure07}
\caption{Electrical resistivity of CuMnSb. (a)~Electrical resistivity as a function of temperature for current along $\langle100\rangle$. A kink marks the N\'{e}el temperature $T_{\mathrm{N}}$. Inset: Temperature dependence of the exponent $\alpha$ of a polynomial description of the measured data. (b)~Temperature derivative of the electrical resistivity, $\mathrm{d}\rho_{xx}/\mathrm{d}T$, as a function of temperature. The N\'{e}el temperature is inferred in analogy to the entropy-conserving construction in specific heat data. Around $T^{*}$ a minimum is observed.}
\label{figure07}
\end{figure}
Shown in Fig.~\ref{figure07}(a) is the electrical resistivity, $\rho_{xx}$, of single-crystal CuMnSb as a function of temperature for current along $\langle100\rangle$. As a function of decreasing temperature the electrical resistivity decreases monotonically. A distinct kink is associated with the onset of antiferromagnetic order at $T_{\mathrm{N}}$. We infer a residual resistivity of $\rho_{0} = 37~\mu\Omega\,\mathrm{cm}$, which is slightly smaller than the values reported in the literature~\cite{1982:Schreiner:SolidStateCommun, 1989:Otto:JPhysCondensMatter, 1995:Kirillova:PhysStatusSolidiB, 2006:Boeuf:PhysRevB, 2012:Maca:JMagnMagnMater}. The residual resistivity ratio is 4.2. While the value of $\rho_{0}$ is comparatively high for a transition metal compound, it is rather typical for Heusler compounds with their inherent proneness to structural disorder.
Similar to the magnetization, the detailed position of $T_{\mathrm{N}}$ and $T^{*}$ may be inferred most accurately from the temperature derivative of the resistivity depicted in Fig.~\ref{figure07}(b). Note that the general shape of $\mathrm{d}\rho_{xx}/\mathrm{d}T$ strongly resembles $C_{\mathrm{el}}/T$. This suggests that the scattering observed in the electrical resistivity follows Fermi's golden rule with the corresponding density of states dominating the specific heat. The N\'{e}el temperature $T_{\mathrm{N}}$ is defined in analogy to the entropy-conserving construction of the specific heat. In comparison, in the vicinity of $T^{*}$, $\mathrm{d}\rho_{xx}/\mathrm{d}T$ displays a local minimum, while a broad maximum is observed in $C_{\mathrm{el}}/T$, indicating that this anomaly may be associated with a different type of scattering mechanism. The values of both $T_{\mathrm{N}}$ and $T^{*}$ are in excellent agreement with the values extracted from other quantities.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure08}
\caption{Thermal transport properties of CuMnSb. Thermal conductivity (solid symbols) and Seebeck coefficient (open symbols) as a function of temperature for a temperature gradient along $\langle100\rangle$. A weak kink marks the N\'{e}el temperature $T_{\mathrm{N}}$ in the thermal conductivity.}
\label{figure08}
\end{figure}
In contrast to the electrical resistivity, the signatures of the magnetic ordering transitions are less pronounced in the thermal transport properties of CuMnSb. As shown in Fig.~\ref{figure08}, the thermal conductivity $\kappa$ decreases monotonically as a function of decreasing temperature. Both the general shape of the curve and the absolute value of the conductivity are characteristic of a bad metal. Around $T_{\mathrm{N}}$ a weak change of slope is observed. The Seebeck coefficient $S_{\mathrm{Seebeck}}$ is positive and decreases monotonically as a function of decreasing temperature, exhibiting no anomalies at $T_{\mathrm{N}}$ or $T^{*}$.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure09}
\caption{Magnetoresistance and Hall effect of CuMnSb. (a)~Relative change of the electrical resistivity as a function of field for typical temperatures. (b)~Field dependence of the Hall effect for typical temperatures. (c)~Temperature dependence of the normal Hall coefficient $R_{0}$. The solid line is determined from temperature sweeps at $\pm3$~T, the open symbols are inferred from field sweeps.}
\label{figure09}
\end{figure}
Figure~\ref{figure09}(a) shows the magnetoresistance, $\Delta\rho_{xx} = (\rho_{xx}/\rho_{xx}^{H=0}) - 1$. Under increasing field the resistivity increases quadratically at low temperatures, reaching about $+6\%$ at 14~T. With increasing temperature the magnetoresistance decreases and turns very weakly negative between 60~K and 100~K with very little curvature.
The Hall resistivity $\rho_{xy}$ is essentially linear as a function of field as shown in Fig.~\ref{figure09}(b). The slope is negative, resulting in a negative Hall constant $-R_{0}$ (we note that the minus sign arises from the definition of the Hall constant in the Hall conductivity). The temperature dependence of $R_{0}$ is best extracted from temperature sweeps of the Hall resistivity in fixed fields via $R_{0} = -\rho_{xy}/(\mu_{0}H)$. Corresponding data inferred from measurements at $\pm3$~T are shown as solid line in Fig.~\ref{figure09}(c). Open symbols denote values derived from field sweeps. The dashed line is a guide to the eye. The N\'{e}el temperature $T_{\mathrm{N}}$ is associated with a maximum and a distinct kink in the Hall constant. At $T^{*}$ we observe a weak change of slope. From $R_{0}$ we estimate an averaged charge carrier concentration $n = (R_{0}e)^{-1}$ of the order of $2\cdot10^{21}~\mathrm{cm}^{-3}$. This value is typical for a bad metal, suggesting a small density of states at the Fermi level, and the positive sign indicates dominant hole-like conduction. Our results are consistent with the calculations of Jeong \textit{et al.} predicting a semi-metallic state with heavy electron masses and normal hole masses~\cite{2005:Jeong:PhysRevB}.
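The single-band estimate $n = (R_{0}e)^{-1}$ quoted above is a one-line calculation; the sketch below uses a hypothetical value of $R_{0}$ chosen only to reproduce the stated order of magnitude.

```python
# One-band carrier-density estimate from the normal Hall coefficient,
# n = 1/(R_0 * e). The R_0 value is hypothetical, picked to give the
# order of magnitude quoted in the text.
e = 1.602176634e-19   # elementary charge, C
R0 = 3.1e-9           # m^3 C^-1 (hypothetical)

n_m3 = 1.0 / (R0 * e)   # carriers per m^3
n_cm3 = n_m3 * 1e-6     # carriers per cm^3, of order 2e21
```

Note that in a semi-metal with both electron and hole pockets this single-band value is only an effective, averaged carrier concentration.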
\subsection{Powder neutron diffraction}
\label{Powder}
\begin{figure*}
\includegraphics[width=1.0\linewidth]{figure10}
\caption{Powder neutron diffraction as recorded at SPODI. Diffraction data are shown for temperatures between 85~K (red curve) and 4~K (blue curve). In addition to the intensity maxima due to nuclear scattering, a set of magnetic intensity maxima emerges with decreasing temperature below $T_{\mathrm{N}}$ (orange arrows), followed by one additional peak at small angles below $T^{*}$ (green arrow at $2\mathit{\Theta} = 12.7^{\circ}$).}
\label{figure10}
\end{figure*}
Typical data observed in powder neutron diffraction at SPODI for temperatures between 4~K and 85~K and small scattering angles are shown in Fig.~\ref{figure10}. At temperatures above the N\'{e}el temperature $T_{\mathrm{N}}$ (red curves), only Bragg peaks corresponding to the nuclear scattering in space group $F\bar{4}3m$ are observed, in excellent agreement with the X-ray diffraction data, cf.\ Fig.~\ref{figure03}(a). Below $T_{\mathrm{N}}$ (purple curves), a set of additional maxima emerges (marked by orange arrows) that increase with decreasing temperature, characteristic of magnetic order. Below $T^{*}$ (blue curves), an additional intensity maximum appears at a small angle, $2\mathit{\Theta} = 12.7^{\circ}$. Its intensity is rather weak but clearly discernible, increasing as a function of decreasing temperature.
In order to account for the observed diffraction patterns, we superimpose the nuclear structure in space group $F\bar{4}3m$ with a magnetic structure. For these refinements, we attribute the magnetic moment exclusively to the Mn sites and use the magnetic form factor of the Mn$^{2+}$ ion, according to the description of CuMnSb as self-doped Cu$^{1+}$Mn$^{2+}$Sb$^{3-}$~\cite{2005:Jeong:PhysRevB}. Assuming that the magnetic propagation vector is of the form $\bm{k} = \frac{1}{2}\langle111\rangle$, consistent with previous reports~\cite{1968:Forster:JPhysChemSolids}, we find excellent agreement with our experimental data. Note that there are four independent domains $\bm{k}_{i}$ as described by $\bm{k}_{1} = \frac{1}{2}[111]$, $\bm{k}_{2} = \frac{1}{2}[\bar{1}\bar{1}1]$, $\bm{k}_{3} = \frac{1}{2}[\bar{1}1\bar{1}]$, and $\bm{k}_{4} = \frac{1}{2}[1\bar{1}\bar{1}]$.
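For bookkeeping of the four $\bm{k}$-domains, the magnetic satellite positions around a given nuclear reflection $\bm{G}$ are simply $\bm{G} \pm \bm{k}_{i}$; the following short sketch enumerates them.

```python
import numpy as np

# The four 1/2<111>-type propagation-vector domains listed in the text.
k_domains = 0.5 * np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])

def satellites(G):
    """Magnetic satellite positions G +/- k_i around a nuclear reflection G."""
    G = np.asarray(G, dtype=float)
    return np.array([G + s * k for k in k_domains for s in (+1.0, -1.0)])

# Around the origin this generates the eight corners of the 1/2<111> star.
sats = satellites([0, 0, 0])
```

In reduced-zone terms all eight positions are equivalent to the $\frac{1}{2}\{111\}$ star probed in the single-crystal experiments below.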
For the half-Heusler environment of CuMnSb, the irreducible representations leaving $\bm{k}$ invariant yield five non-centrosymmetric magnetic space groups. Group $R[I]3m$ is non-magnetic and may be ignored, while fits using group $P[I]1$ did not converge. The three remaining groups are $R[I]3c$, $C[B]m$, and $C[B]c$. In our cubic setting, $R[I]3c$ corresponds to moments that are collinear to $\bm{k}$, i.e., oriented along $\langle111\rangle$. In $C[B]m$, the moments point along a $\langle110\rangle$ axis perpendicular to $\bm{k}$. In $C[B]c$, the moments may possess finite components both parallel and perpendicular to the propagation vector.
In Ref.~\onlinecite{1968:Forster:JPhysChemSolids}, commensurate type-II antiferromagnetic order was suggested comprising magnetic moments parallel to the $\langle111\rangle$ axes. These moments are ferromagnetically aligned within the respective $\{111\}$ plane, while neighboring planes couple antiferromagnetically with each other. This spin arrangement is consistent with space group $R[I]3c$ and schematically illustrated by the rose arrows in Figs.~\ref{figure11}(a) and \ref{figure11}(b). We find that for temperatures $T^{*} < T < T_{\mathrm{N}}$ Rietveld refinements of our diffraction data are in excellent agreement with this magnetic structure. In particular, the lack of a magnetic $\frac{1}{2}(111)$ satellite at $2\mathit{\Theta} = 12.7^{\circ}$, cf.\ Fig.~\ref{figure10}, implies that the magnetic moments are aligned parallel to $\langle111\rangle$.
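The absence of the $\frac{1}{2}(111)$ satellite for moments parallel to $\langle111\rangle$ follows from the fact that magnetic neutron diffraction is only sensitive to the moment component perpendicular to the scattering vector. The sketch below illustrates this geometric selection rule; the tilt direction and the moment and angle values are taken from the refinements discussed in this paper, but the construction itself is only a schematic check.

```python
import numpy as np

def perp_component(m, Q):
    """Moment component perpendicular to Q; only this part scatters neutrons."""
    Qhat = np.asarray(Q, dtype=float) / np.linalg.norm(Q)
    m = np.asarray(m, dtype=float)
    return m - np.dot(m, Qhat) * Qhat

Q = np.array([1.0, 1.0, 1.0])                       # along k = 1/2[111]
nhat = Q / np.linalg.norm(Q)
uhat = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)    # a direction perp. to Q

m_s, delta = 3.8, np.radians(11.0)
m_collinear = m_s * nhat                                       # R[I]3c case
m_canted = m_s * (np.cos(delta) * nhat + np.sin(delta) * uhat) # canted case

I_collinear = np.linalg.norm(perp_component(m_collinear, Q)) ** 2  # zero
I_canted = np.linalg.norm(perp_component(m_canted, Q)) ** 2        # m_s^2 sin^2(delta)
```

Hence the satellite intensity at $\frac{1}{2}(111)$ scales as $m_{s}^{2}\sin^{2}\delta$ and vanishes for $\delta = 0$, consistent with its absence for $T^{*} < T < T_{\mathrm{N}}$.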
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure11}
\caption{Refinement of powder neutron diffraction data of CuMnSb. \mbox{(a),(b)}~Schematic illustration of the relevant magnetic space groups. In $R[I]3c$ the moments are perpendicular to the nuclear $(111)$ plane (rose arrows), in $C[B]c$ they are canted by the angle $\delta$ (red arrows). (c)~Low-temperature diffraction data ($T = 4$~K) and Rietveld refinement based on the nuclear space group $F\bar{4}3m$ and the magnetic space group $C[B]c$. Inset: Enlarged view illustrating the difference between the magnetic space groups $R[I]3c$ and $C[B]c$.}
\label{figure11}
\end{figure}
In turn, the emergence of the latter maximum for temperatures $T < T^{*}$ indicates that a canting away from the $\langle111\rangle$ direction occurs, described by the angle $\delta$. The resulting magnetic space group is $C[B]c$, where $R[I]3c$ corresponds to $C[B]c$ with $\delta = 0$. Neighboring moments cant in opposite directions as sketched by the red arrows in Figs.~\ref{figure11}(a) and \ref{figure11}(b). Consequently, no net ferrimagnetic moment is expected, consistent with the magnetization data. As shown in Fig.~\ref{figure11}(c), a Rietveld refinement based on this canted antiferromagnetic structure is in excellent agreement with the experimental low-temperature diffraction data up to high diffraction angles. The inset highlights the salient difference between the refinements for the magnetic space groups $R[I]3c$ and $C[B]c$, i.e., that the maximum at $12.7^{\circ}$ may only be explained by the latter. Refinements using the magnetic space group $C[B]m$ are significantly worse than those using $R[I]3c$ or $C[B]c$ (not shown).
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure12}
\caption{Ordered magnetic moment, tilting angle, and lattice parameter inferred from the refinement of the powder neutron data. (a)~Temperature dependence of the magnetic moment $m_{s}$ and the angle $\delta$. (b)~Lattice constant $a$ as a function of temperature. The solid red lines are fits to the data. The dashed lines represent guides to the eye.}
\label{figure12}
\end{figure}
Figure~\ref{figure12}(a) shows the temperature dependence of the ordered magnetic moment $m_{s}$ as inferred from the refinements described above. We extract a value of 3.8~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ at low temperatures, in good agreement with both our magnetization data and previous neutron scattering studies~\cite{1968:Forster:JPhysChemSolids}. As a function of increasing temperature, the moment decreases according to $m_{s} / m_{s,0} = \sqrt{1 - (T/T_{\mathrm{N}})^{2}}$ and vanishes at $T_{\mathrm{N}}$, as indicated by the red solid line. Within the resolution of the present data, the canting angle $\delta$ remains at about $11^{\circ}$ for $T < T^{*}$.
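The temperature dependence $m_{s}/m_{s,0} = \sqrt{1 - (T/T_{\mathrm{N}})^{2}}$ can be fitted conveniently after linearization, since $m_{s}^{2}$ is linear in $T^{2}$. The sketch below demonstrates this on synthetic, noise-free data; the parameter values are chosen to match those quoted in the text.

```python
import numpy as np

# Synthetic order-parameter data following the form used in the text.
m0_true, TN_true = 3.8, 55.0
T = np.linspace(4.0, 50.0, 30)
m = m0_true * np.sqrt(1.0 - (T / TN_true) ** 2)

# Linearization: m^2 = m0^2 - (m0/TN)^2 * T^2, a straight line in T^2.
slope, intercept = np.polyfit(T**2, m**2, 1)
m0_fit = np.sqrt(intercept)
TN_fit = np.sqrt(-intercept / slope)
```

For noisy data a nonlinear least-squares fit of the original square-root form would be preferable, since the linearization distorts the error weighting.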
The temperature dependence of the nuclear lattice constant inferred from the same refinements is depicted in Fig.~\ref{figure12}(b). The absolute values are in good agreement with the results of our X-ray diffraction. Moreover, due to the finer temperature steps recorded in neutron scattering, we are able to resolve a weak change of slope around $T_{\mathrm{N}}$. We attribute this finding to magnetostriction, where the lattice constant increases with the third power of the temperature for $T < T_{\mathrm{N}}$ while exhibiting an essentially linear behavior for larger temperatures.
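The cubic low-temperature behavior of the lattice constant can likewise be fitted after linearization, since $a(T) = a_{0} + c_{3}T^{3}$ is a straight line in $T^{3}$. The numbers below are hypothetical, chosen only to illustrate the procedure for the regime $T < T_{\mathrm{N}}$.

```python
import numpy as np

# Hypothetical magnetostriction sketch: below T_N, model a(T) = a0 + c3*T^3.
a0_true, c3_true = 6.074, 2.0e-8   # Angstrom, Angstrom/K^3 (hypothetical)
T = np.linspace(4.0, 50.0, 24)
a = a0_true + c3_true * T**3

# Fitting a versus T^3 is a straight line: slope = c3, intercept = a0.
c3_fit, a0_fit = np.polyfit(T**3, a, 1)
```

Above $T_{\mathrm{N}}$ the same data would instead be described by a linear fit $a(T) = a_{1} + bT$.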
\subsection{Single-crystal neutron diffraction}
\label{SingleCrystal}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure13}
\caption{Single-crystal neutron diffraction data recorded at RESI and HEiDi. (a)~Intensity distribution in the $(hk0)$ plane at low temperature. Maxima (dark contrast) arise from the nuclear structure. (b)~Intensity distribution in the $(hk\frac{1}{2})$ plane. Maxima at half reciprocal lattice spacing are attributed to the commensurate antiferromagnetic structure. (c)~Rocking scans for typical nuclear Bragg peaks. Gaussian fits (solid lines) indicate a full width at half maximum of about $1^{\circ}$. (d)~Rocking scans for a typical magnetic reflection at two temperatures. (e)~Normalized intensity for typical nuclear (gray symbols) and magnetic (colored symbols) Bragg peaks as a function of temperature. The solid red line is a fit to the magnetic data. At $T^{*}$ a weak change of slope is observed.}
\label{figure13}
\end{figure}
The magnetic structure proposed above based on the refinement of the powder diffraction data is corroborated by single-crystal neutron diffraction data. Shown in Figs.~\ref{figure13}(a) and \ref{figure13}(b) are maps of the reciprocal space planes $(hk0)$ and $(hk\frac{1}{2})$, recorded on the diffractometer RESI at low temperatures. Maxima in the $(hk0)$ plane are attributed to the half-Heusler nuclear structure. In contrast, maxima in the $(hk\frac{1}{2})$ plane are characteristic of the commensurate antiferromagnetic order with a doubling of the magnetic unit cell in real space.
Rocking scans of a large number of nuclear and magnetic Bragg peaks were carried out on the diffractometer HEiDi. Typical data for three different reflections are shown in Fig.~\ref{figure13}(c). The maxima are symmetric and very well described by Gaussian fits (solid lines). We obtain full widths at half maximum of about $1^{\circ}$, indicating a small mosaicity for this type of compound and in turn an excellent sample quality. Rocking scans on magnetic reflections, illustrated for $(1\bar{3}1) + \bm{k}_{3}$ in Fig.~\ref{figure13}(d), may also be described by Gaussians of similar full width at half maximum. In contrast to nuclear peaks, however, their intensity strongly depends on temperature as presented in Fig.~\ref{figure13}(e).
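The Gaussian description of the rocking scans can be sketched with a standard nonlinear least-squares fit; the rocking curve below is synthetic, constructed with the ${\sim}1^{\circ}$ full width at half maximum quoted above, and the count level is arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(omega, A, center, sigma):
    """Gaussian line shape for a rocking scan."""
    return A * np.exp(-0.5 * ((omega - center) / sigma) ** 2)

# Synthetic rocking scan with FWHM = 1 degree (illustrative values).
fwhm_to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
omega = np.linspace(-3.0, 3.0, 121)          # rocking angle, degrees
counts = gaussian(omega, 1000.0, 0.0, 1.0 * fwhm_to_sigma)

popt, _ = curve_fit(gaussian, omega, counts, p0=[800.0, 0.2, 0.6])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
```

For measured scans a constant background term would normally be added to the model, and Poisson counting errors passed to the fit as weights.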
In order to compare the different maxima, their intensity has been normalized by the intensity inferred from an extrapolation to zero temperature. While the nuclear Bragg peaks are essentially independent of temperature in the temperature range studied, the magnetic intensity vanishes at the N\'{e}el temperature $T_{\mathrm{N}}$. As indicated by the red solid line, with increasing temperature the intensity decreases as $I / I_{0} = 1 - (T/T_{\mathrm{N}})^{2}$, i.e., it scales with the square of the ordered magnetic moment. At $T^{*}$, a distinct change of slope is observed for all magnetic reflections.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure14}
\caption{Single-crystal neutron diffraction data of the canting of the antiferromagnetic order as recorded at HEiDi and MIRA. \mbox{(a),(b)}~Rocking scans around the $\frac{1}{2}\{111\}$ positions characteristic of the canted antiferromagnetism in the magnetic space group $C[B]c$ at temperatures well below $T^{*}$ and $T^{*} < T < T_{\mathrm{N}}$, respectively. (c)~Temperature dependence of the intensity of the $\frac{1}{2}(111)$ maximum on a logarithmic scale. The dashed lines are guides to the eye.}
\label{figure14}
\end{figure}
The change of slope is a consequence of the spin canting at low temperatures, $T < T^{*}$. The most prominent hallmark of the latter, however, is the emergence of weak magnetic intensity maxima at the $\frac{1}{2}\{111\}$ positions illustrated in Fig.~\ref{figure14}. For the cubic half-Heusler structure four antiferromagnetic domains may be expected, denoted $\bm{k}_{i}$, with a total of eight magnetic satellites at $\pm\bm{k}_{1}$, $\pm\bm{k}_{2}$, $\pm\bm{k}_{3}$, and $\pm\bm{k}_{4}$. We carefully checked the corresponding positions in reciprocal space on HEiDi and observed all six satellites that were accessible experimentally. As shown in Fig.~\ref{figure14}(a), the maxima may be described by Gaussians of a full width at half maximum of about $1^{\circ}$, i.e., akin to all other Bragg peaks. The observation of essentially identical intensities of all satellites indicates that the four antiferromagnetic domains are populated equally.
Refinements of the data recorded at HEiDi at low temperature, taking into account a large number of magnetic Bragg peaks (not shown), are also in excellent agreement with a magnetic structure in space group $C[B]c$. We obtain ordered moments of 3.8~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ and a canting angle $\delta = 14^{\circ}$, both perfectly consistent with our powder neutron data within the error bars. As shown in Fig.~\ref{figure14}(b), there are no $\frac{1}{2}\{111\}$ satellites at temperatures between $T^{*}$ and $T_{\mathrm{N}}$, consistent with zero canting angle or the magnetic space group $R[I]3c$, respectively.
The detailed temperature dependence of the $\frac{1}{2}(111)$ intensity is finally depicted in Fig.~\ref{figure14}(c). These data were recorded at the spectrometer MIRA in triple-axis geometry by integrating the intensity around zero energy transfer. Note the logarithmic intensity scale. As a function of increasing temperature the intensity decreases monotonically, where three regimes may be distinguished. At low temperatures, the satellite intensity decreases as $I \propto T^{3.2}$, essentially vanishing at $T^{*}$. At intermediate temperatures $T^{*} < T < T_{\mathrm{N}}$, small but finite intensity is observed, putatively indicating spin fluctuations that preempt the spin canting at lower temperatures. For high temperatures $T > T_{\mathrm{N}}$, CuMnSb is paramagnetic and no significant intensity is observed at $\frac{1}{2}(111)$ or at any other magnetic Bragg position.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure15}
\caption{Single-crystal neutron diffraction data of CuMnSb recorded at DNS. \mbox{(a)--(c)}~Intensity distribution of nuclear contributions in the $(hkk)$ plane for three different temperatures. (d)~Magnetic intensity distribution for $T > T_{\mathrm{N}}$. No maxima are observed. (e)~For $T < T_{\mathrm{N}}$ maxima appear at positions consistent with commensurate antiferromagnetic order. (f)~For $T < T^{*}$ an additional maximum at $\frac{1}{2}(\bar{1}\bar{1}\bar{1})$ is characteristic of finite spin canting (red square). Note that the peak intensities of the nuclear reflections correspond to ${\sim}10^{2}$ on the linear color scale. Remnants from the nuclear maxima are attributed to insufficient flipping ratio corrections and were manually masked.}
\label{figure15}
\end{figure}
Single-crystal neutron diffraction data as recorded at DNS, depicted in Fig.~\ref{figure15}, corroborate and summarize our findings nicely. The very weak diffuse rings of intensity may be attributed to powder-like contributions stemming from the surface of the large single crystal. As shown in the left column, the nuclear contribution exhibits intense peaks at $(hkl)$ positions consistent with the half-Heusler structure, namely $(\bar{1}\bar{1}\bar{1})$ and $(\bar{2}00)$. As a function of decreasing temperature (top to bottom), the scattering pattern does not change.
In contrast, the magnetic contribution (right column) exhibits a distinct evolution as a function of temperature. Above $T_{\mathrm{N}}$, see Fig.~\ref{figure15}(d), no magnetic signal is observed. Below $T_{\mathrm{N}}$, see Fig.~\ref{figure15}(e), peaks emerge that are consistent with the commensurate antiferromagnetic structure and $\bm{k} = \frac{1}{2}\langle111\rangle$. Note the characteristic doubling of the magnetic compared to the nuclear unit cell. Below $T^{*}$, see Fig.~\ref{figure15}(f), a weak additional maximum appears at the $\frac{1}{2}(\bar{1}\bar{1}\bar{1})$ position, marked by the red square. This maximum is characteristic of finite components of the magnetic moment perpendicular to $\bm{k}$, i.e., the canting of the moment direction away from $\bm{k}$.
\section{Conclusions}
\label{Conclusions}
In summary, to the best of our knowledge, we have grown large single crystals of the half-Heusler compound CuMnSb for the first time. Using a tiny Sb excess in the starting composition, phase-pure single crystals were obtained. Magnetization, specific heat, electrical resistivity, and Hall effect measurements on these phase-pure specimens consistently suggest a local-moment character of the antiferromagnetism in a metallic environment. These thermodynamic and transport quantities clearly exhibit anomalies at the onset of magnetic order at the N\'{e}el temperature $T_{\mathrm{N}} = 55$~K, as well as a second anomaly at $T^{*} \approx 34$~K, well below $T_{\mathrm{N}}$, that has not been reported before.
Below the N\'{e}el temperature $T_{\mathrm{N}} = 55$~K, our neutron scattering data identify commensurate type-II antiferromagnetic order with propagation vectors and magnetic moments aligned along the $\langle111\rangle$ directions, corresponding to the magnetic space group $R[I]3c$. This form of magnetic order is consistent with the fcc structure of the Mn sublattice and the well-understood antiferromagnetism in the transition metal oxides MnO, NiO, and CoO.
However, using powder and single-crystal neutron diffraction, we unambiguously connect the anomaly at $T^{*}$ with a spin canting, where the moments tilt away from the $\langle111\rangle$ axes by a finite angle $\delta \approx 11^{\circ}$. Neither the data recorded at RESI nor those recorded at DNS suggest further diffraction peaks at other locations. Thus the scattering information appears to be complete. Taken together, a canted antiferromagnetic structure without a uniform magnetic moment is stabilized, described by the magnetic space group $C[B]c$. This result appears rather surprising in view of the fcc Mn sublattice and the large ordered moments.
Based on the fundamental symmetries of the crystal structure it is hard to reconcile the observed canting with subleading interactions. In contrast, our results are in excellent agreement with the calculations of the magnetic ground state by M\'{a}ca \textit{et al.}~\cite{2016:Maca:PhysRevB} and underscore that high-quality samples are crucial when trying to resolve the intrinsic properties of compounds that are sensitive to disorder and defects. Therefore, our findings will be of great interest not only for a wide range of Heusler compounds but also for other materials supporting magnetic order on fcc sublattices.
\begin{acknowledgments}
We wish to thank P.~B\"{o}ni, G.~Brandl, M.~Gangl, F.~Kortmann, J.~K\"{u}bler, D.~Mallinger, S.~Mayr, V.~Pet\v{r}\'{i}\v{c}ek, J.~Schilling, and A.~Schneidewind for fruitful discussions and assistance with the experiments. Parts of the data were collected on HEiDi, jointly operated by RWTH Aachen and Forschungszentrum J\"{u}lich GmbH (JARA collaboration). Financial support by the Deutsche Forschungsgemeinschaft (DFG) through TRR80 (projects E1 and F2) and by the European Research Council (ERC) through Advanced Grant 291079 (TOPFIT) is gratefully acknowledged. A.R.\ and A.B.\ acknowledge financial support through the TUM graduate school.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
Although image recognition has been actively studied recently, the
performance in challenging environments still needs improvement \cite{Hong2021Crafting}.
In sensitive applications such as mobility sensing and head-mounted
wearables, the devices have to be robust against various kinds of
difficulties such as low-light, high dynamic range (HDR) illuminance,
motion blur, and camera shake. One possible solution is to use image
enhancement and restoration methods. A lot of DNN-based low-light
image enhancement \cite{lv2018mbllen,zhang2019kindling,jiang2021enlightengan,wei2018deep,guo2020zero,ma2022toward},
denoising \cite{zhang2022idr,monakhova2022dancing,tu2022maxim}, and
deblurring \cite{zhang2022pixel,whang2022deblurring,tu2022maxim} methods
were proposed and improved the pre-captured sRGB image quality. They
are quite useful for improving pre-captured image quality, but a recent
work \cite{Hong2021Crafting} showed that the accuracy gain from using
them as a preprocessing step for image recognition models was limited, since
information lost during capture is difficult to restore.
\begin{figure}
\centering
\def\svgwidth{1.0\columnwidth}
\scriptsize\import{figs/}{raw_aug.pdf_tex}
\caption{The concept of the proposed noise-accounted RAW augmentation. The
conventional augmentation (a) is applied to the output of an ISP.
It generates images that cannot be taken with any ambient light intensities
due to the nonlinear operations in the ISP. Instead, ours (b) apply
augmentation before an ISP. It generates realistic pixel intensity
distribution that can be taken when the light intensity is different.
Moreover, the noise amount is also corrected to minimize the domain
gap between real and augmented ones.}
\label{fig:rawaug}
\end{figure}
Another possible solution is to prepare a dataset of the difficult
environment \cite{morawski2022genisp,chen2018learning}. However,
the mentioned datasets only cover one or a few difficulties, and creating
datasets for various environments is too expensive. In particular,
manual annotation of challenging scenes is difficult and time-consuming.
For example, we can see almost nothing in usual sRGB images under
extremely low-light environments due to heavy noise. In addition,
some regions in HDR scenes suffer from halation or blocked-up shadows
because the 8-bit dynamic range of usual sRGB images cannot fully
capture the real world, e.g., 0.000001 $[cd/m^{2}]$ under starlight
and 1.6 billion $[cd/m^{2}]$ under direct sunlight \cite{reinhard2010high}.
Heavy motion blur and camera shake also make annotation difficult.
Some works took paired short- and long-exposure images, and the long-exposure
clean image was used for annotation or ground truth \cite{Hong2021Crafting,ignatov2017dslr,ignatov2020replacing,jiang2019learning}.
The limitation is that the target scene needed to be motionless if
the pairs are taken sequentially with one camera \cite{Hong2021Crafting}
or positional calibration is needed if the pairs are taken with synchronized
cameras \cite{ignatov2017dslr,ignatov2020replacing}. Some works used
a beam splitter to capture challenging images and their references
without calibration \cite{wang2022neural,jiang2019learning}. However,
this approach is difficult to apply to dark and blurry scenes because one of
the paired images becomes blurry or dark. Moreover, HDR images cannot be taken
in the same way because some regions become overexposed or underexposed
in both of the cameras.
To this end, we aim to train image recognition models that work in
various environments only using a training dataset in simple environments
like bright, low dynamic range, or blur-less. In this case, image
augmentation or domain adaptation is important to overcome the domain
gap between easy training data and difficult test data. However, we
believe usual augmentation on sRGB space is ineffective because it
does not take nonlinear mapping of an ISP into consideration. Especially,
tone mapping drastically changes the RAW image values that are roughly
proportional to physical brightness \cite{wang2020practical}. Contrast,
brightness, and hue augmentation on sRGB space result in unrealistic
images that cannot be taken under any different ambient light intensity
as shown in Fig. \ref{fig:rawaug}(a). In contrast, we propose augmentation
on RAW images. In other words, augmentation is applied before ISP
to diminish the domain shift as shown in Fig. \ref{fig:rawaug}(b).
Another possible source of domain gap is the noise amount and noise
distribution difference. To tackle these problems, we propose a method
to align both light intensity and noise domain. In fact, recent works
showed that adding physics-based realistic noise improves the performance
of DNN-based denoisers \cite{wang2020practical,wei2020physics,brooks2019unprocessing,zamir2020cycleisp}
and dark image recognition \cite{Hong2021Crafting,cui2021multitask}.
Although their proposed sensor noise modelings were accurate, they
assumed that original bright images were noise free. In contrast,
we propose to modify the noise amount after contrast, brightness,
and hue conversion considering the noise amount of the original images.
It enables a more accurate alignment of the noise domain. Even in
bright images, there might be some dark parts due to shadow or the
color of objects and their prior noise can not be ignored. Another
merit of our method is that it can take an input of a dark image which
already contains a lot of noise. In addition to noise amount alignment
after color jitter augmentation, we show the importance of noise alignment
after blur augmentation, which is first proposed in this paper.
Our contributions are as follows:
\begin{itemize}
\item It is the first work to emphasize the importance of augmentation before
ISP for image recognition to the best of our knowledge.
\item Noise amount alignment method is proposed to reduce the noise domain
gap after RAW image augmentation. In contrast to previous works, our
proposed method takes into account prior noise in the input image.
It enables more accurate alignment and use of any strength of augmentation
and even already noisy input.
\item We present a qualitative analysis of the validity of our sensor noise
modeling and the corresponding noise-accounted augmentation. We show
that our proposed noise-accounted RAW image augmentation outperforms
previous methods.
\end{itemize}
\section{Related Works}
\subsection{Recognition in Difficult Environment}
Many works have tackled image recognition in difficult environments.
For low-light environments, several works improved the accuracy by
replacing a traditional ISP with a powerful DNN-based ISP to create
clean images for downstream image recognition models \cite{diamond2021dirty,morawski2022genisp,liu2022deep}.
Even though these methods are promising because there is no information
loss, computational cost is the problem. Another approach is direct
RAW image recognition without ISP \cite{schwartz2021isp,Hong2021Crafting}.
Their image recognition models benefit from the richest information
and improve the accuracy under low light. However, several works reported
that ISP operations, especially tone mapping, are helpful for machine vision \cite{wu2019visionisp,hansen2021isp4ml}.
We argue that direct RAW image recognition works well if the images
have low dynamic range. Another approach is domain adaptation or related
methods which support low-light recognition with bright images \cite{sasagawa2020yolo,Hong2021Crafting,cui2021multitask}.
For HDR environments, some works have proposed DNN-based auto-exposure
control \cite{tomasi2021learned,onzon2021neural} to improve downstream
recognition. Also, multi-frame HDR synthesis methods \cite{dudhane2022burst,bhat2021deep}
can be used as a preprocessing, but camera motion makes them challenging.
A luminance normalization method was also introduced to improve recognition
performance under varying illumination conditions \cite{jenicek2019no}.
For blurry environments, deblurring methods were actively studied
\cite{whang2022deblurring,zhang2019deep}. These DNN-based methods
successfully restored clear images from heavily blurred ones.
Different from the above, we aim to perform image recognition under all the
above difficulties using simple scene training data with our proposed
augmentation method. We did not use domain adaptation methods since
these methods were usually used in a setting where the target domain
is equal to or smaller than the source domain \cite{goodfellow2014generative}.
On the contrary, in our setting, the distribution of the target domain
is much wider than that of the source domain.
\subsection{Image Conversion on RAW}
Recently, several methods \cite{brooks2019unprocessing,lv2018mbllen,cui2021multitask}
converted light sRGB images into realistic dark images by the following
procedures. First, they inverted an ISP pipeline to generate RAW-like
images followed by illumination change on the RAW data space with
plausible sensor noise. Afterward, degraded sRGB was generated by
applying the forward ISP pipeline. By this operation, ISP's non-linear
operation could be avoided and short exposure or dark environment
could be simulated. With a similar intention, we propose to apply
augmentation before ISPs to train image recognition models.
\subsection{Noise Modeling and Noise Amount Alignment}
In the electronic imaging sensor community, detailed noise modelings
based on electric current and circuit have been studied \cite{suh2010column,el1998modeling,konnik2014high,gow2007comprehensive}.
They are precise but difficult to apply to image-to-image
conversion. Thus, in the machine vision community, simplified pixel
value-based noise modelings were proposed based on electric noise
modelings \cite{wang2020practical,wei2020physics,brooks2019unprocessing}.
Although the noise model of \cite{wei2020physics} was well designed
with a high degree of freedom Tukey lambda distribution \cite{joiner1971some},
we are based on the well-established heteroscedastic Gaussian model
\cite{brooks2019unprocessing,punnappurath2022day,foi2008practical,zamir2020cycleisp}
because it still fits real sensor noise well and allows us to account for
prior noise in the original images, as explained later.
Recently, adding realistic, model-based sensor noise to the ground
truth clean images was proved to be helpful to train DNN-based denoiser
\cite{wang2020practical,zamir2020cycleisp,wei2020physics,brooks2019unprocessing}
and low-light object detection models \cite{Hong2021Crafting,cui2021multitask}.
Although they use highly consistent noise models, they regarded original
images as noise-free. In contrast, we propose to modify the noise
amount after image conversion considering the noise amount of the
original images. It enables a more accurate alignment of the noise
domain and enables the use of any intensity of augmentation and already
noisy images as input.
\section{Methodology}
In this section, we introduce our noise model, calibration procedure,
and proposed noise-accounted RAW image augmentation.
\subsection{Noise Model}
First of all, we briefly introduce our noise model for later explanation
although it is based on the well-established heteroscedastic Gaussian
model \cite{brooks2019unprocessing,punnappurath2022day,foi2008practical,zamir2020cycleisp}.
The number of photons $u$ hitting the photodiode of each pixel is converted
to a voltage with quantum efficiency $\alpha$. This is followed
by some processes to read out the voltage, in which noise $n_{d}$
is inevitably mixed. Then, analog gain $g$ is multiplied to amplify
the value. Lastly, the voltage is converted to a digital value. We
simplify and summarize the noise after analog gain as $n_{r}$. Since
it is common to use analog gain which has better signal to noise ratio
(SNR), we omit the digital gain term in our noise model. To sum up,
the photon-to-RAW pixel value conversion can be formulated as,
\begin{equation}
x=g\left(\alpha u+n_{d}\right)+n_{r}.\label{eq:1}
\end{equation}
We approximate $n_{d}$ and $n_{r}$ as Gaussian noise $\mathcal{N}\left(0,\sigma_{d}^{2}\right)$
and $\mathcal{N}\left(0,\sigma_{r}^{2}\right)$, and the number of photons
$u$ itself obeys the Poisson distribution $\mathcal{P}\left(\bar{u}\right)$
where $\bar{u}$ is the expected number of photons. If $\bar{u}$
is large enough, we can approximate as $\mathcal{P}\left(\bar{u}\right)\fallingdotseq\mathcal{N}\left(\bar{u},\bar{u}\right)$
\cite{foi2008practical}. Thus, our noise model is as follows:
\begin{equation}
x\sim g\left(\alpha\mathcal{N}\left(\bar{u},\bar{u}\right)+\mathcal{N}\left(0,\sigma_{d}^{2}\right)\right)+\mathcal{N}\left(0,\sigma_{r}^{2}\right).\label{eq:2}
\end{equation}
We show the validity of the Gaussian approximation of $n_{d}$, $n_{r}$,
and $\mathcal{P}\left(\bar{u}\right)$ in Section \ref{subsec:Calibration-of-the}.
We do not follow the further development of the formula in \cite{foi2008practical},
as it is not needed for our purpose.
The Gaussian distribution has the following convenient properties:
\begin{equation}
\begin{cases}
X\sim\mathcal{N}\left(\mu_{X},\sigma_{X}^{2}\right)\\
Y\sim\mathcal{N}\left(\mu_{Y},\sigma_{Y}^{2}\right)\\
X+Y\sim\mathcal{N}\left(\mu_{X}+\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2}\right)\\
cX\sim\mathcal{N}\left(c\mu_{X},c^{2}\sigma_{X}^{2}\right)
\end{cases},\label{eq:gauss}
\end{equation}
if $X$ and $Y$ are independent, which is why we choose the simple
Gaussian approximation instead of the recently proposed, more expressive
noise model \cite{wei2020physics}. These properties enable the proposed
noise-accounted RAW augmentation to account for prior noise in input
images. Furthermore, they simplify our noise model as,
\begin{equation}
x\sim\mathcal{N}\left(g\alpha\bar{u},g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).\label{eq:4}
\end{equation}
Because the expected number of photons $\bar{u}$ is inconvenient
to use in image-to-image conversion, we replace it with the expected
pixel value $\mu_{x}=g\alpha\bar{u}$ and our final noise model is
defined as
\begin{equation}
x\sim\mathcal{N}\left(\mu_{x},\sigma_{x}^{2}=g\alpha\mu_{x}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right).\label{eq:noise_model}
\end{equation}
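For illustration, Eq. (\ref{eq:noise_model}) can be sampled directly in a few lines. The following sketch is not from the paper; all parameter values in the example call are uncalibrated placeholders.

```python
import numpy as np

def sample_raw_pixel(mu_x, g, alpha, sigma_d2, sigma_r2, rng):
    """Draw RAW pixel values from the heteroscedastic Gaussian model
    x ~ N(mu_x, g*alpha*mu_x + g^2*sigma_d2 + sigma_r2)."""
    mu_x = np.asarray(mu_x, dtype=float)
    var = g * alpha * mu_x + g**2 * sigma_d2 + sigma_r2
    return rng.normal(mu_x, np.sqrt(var))

# placeholder (uncalibrated) parameters, for illustration only
rng = np.random.default_rng(0)
x = sample_raw_pixel(np.full(100_000, 200.0), g=2.0, alpha=0.5,
                     sigma_d2=1.0, sigma_r2=4.0, rng=rng)
```

Note how the variance grows linearly with the expected pixel value (shot noise term) and quadratically with the analog gain (read noise before the gain stage).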
\subsection{Noise Model Calibration}
\begin{figure*}
\centering
\def\svgwidth{1.85\columnwidth}
\scriptsize\import{figs/}{calibration.pdf_tex}
\caption{Our noise model calibration procedures to a target sensor.}
\label{fig:calibration}
\end{figure*}
Our sensor noise model shown in Eq. (\ref{eq:noise_model}) has three
parameters, $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$, which
have to be calibrated per target sensor. We capture a series of raw
images of a color checker as shown in Fig. \ref{fig:calibration}(a).
We then calculate the mean $\mu_{x}$ and variance $\sigma_{x}^{2}$
along the time direction of each pixel position. We calculate them
along the time direction instead of the spatial direction as performed
in \cite{wang2020practical} since lens distortion changes the luminance
of the same color patch. These operations are performed several times
by changing the analog gain and exposure time. Eventually, we get
various sets of $\left\{ \mu_{x},\sigma_{x}^{2}\right\} $ for each
analog gain. Note that we calculate mean and variance without separating
RGB channels because there is no significant difference in noise properties.
In Eq. (\ref{eq:noise_model}), $\mu_{x}$ and $\sigma_{x}^{2}$ have
a linear relationship per analog gain $g_{n}$,
\begin{equation}
\sigma_{x}^{2}=a_{g_{n}}\mu_{x}+b_{g_{n}}.\label{eq:6}
\end{equation}
Therefore, we solve linear regression to estimate $a_{g_{n}}$ and
$b_{g_{n}}$ per gain like Fig. \ref{fig:calibration}(b). In addition,
we use RANSAC \cite{fischler1981random} to robustly handle
outlier $\left\{ \mu_{x},\sigma_{x}^{2}\right\} $ pairs.
Finally, we estimate $\alpha$, $\sigma_{d}^{2}$, and $\sigma_{r}^{2}$
from the following redundant simultaneous equations by least-squares
method,
\begin{equation}
\begin{cases}
a_{g_{1}}=g_{1}\alpha\\
\:\:\:\vdots\\
a_{g_{n}}=g_{n}\alpha
\end{cases}\label{eq:7-1}
\end{equation}
\begin{equation}
\begin{cases}
b_{g_{1}}=g_{1}^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\\
\:\:\:\vdots\\
b_{g_{n}}=g_{n}^{2}\sigma_{d}^{2}+\sigma_{r}^{2}
\end{cases}.\label{eq:8}
\end{equation}
Following the procedure above, we can calibrate the sensor noise model without
using any special devices. We later show that our sensor model and
the calibration method represent the real sensor noise with sufficient
precision.
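The two-stage calibration above (per-gain linear regression of Eq. (\ref{eq:6}), then least squares across gains via Eqs. (\ref{eq:7-1}) and (\ref{eq:8})) can be sketched as follows. The measurements in any call are placeholders, and a plain least-squares line fit stands in for the RANSAC regression used in the paper.

```python
import numpy as np

def calibrate_noise_model(per_gain_stats, gains):
    """Estimate (alpha, sigma_d^2, sigma_r^2) of the noise model
    var = g*alpha*mu + g^2*sigma_d^2 + sigma_r^2 from per-gain
    (mu, var) measurements."""
    a, b = [], []
    for mu, var in per_gain_stats:
        slope, intercept = np.polyfit(mu, var, 1)  # var = a_g * mu + b_g
        a.append(slope)
        b.append(intercept)
    g = np.asarray(gains, dtype=float)
    a = np.asarray(a)
    b = np.asarray(b)
    # a_g = g * alpha                     -> one-parameter least squares
    alpha = g @ a / (g @ g)
    # b_g = g^2 sigma_d^2 + sigma_r^2    -> two-parameter least squares
    A = np.stack([g**2, np.ones_like(g)], axis=1)
    sigma_d2, sigma_r2 = np.linalg.lstsq(A, b, rcond=None)[0]
    return alpha, sigma_d2, sigma_r2
```

With noise-free synthetic measurements generated from known parameters, this routine recovers them exactly, which is a convenient sanity check before calibrating a real sensor.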
\subsection{Noise-Accounted RAW Augmentation}
We propose augmentation before ISP instead of the usual augmentation
after ISP to generate realistic images. Furthermore, we improve the
reality of the augmented images by considering the sensor noise model.
Unlike the previous works \cite{wang2020practical,wei2020physics,cui2021multitask,Hong2021Crafting,brooks2019unprocessing,zamir2020cycleisp},
ours takes the prior noise amount of input images into account. It
generates more realistic noise since even bright images have some
extent of noise. Especially, dark parts due to shadow or the color
of objects might have a non-negligible amount of noise. Moreover,
it allows any brightness of input images different from previous works.
Specifically, we introduce how to adjust noise amount after contrast,
brightness, hue, and blur augmentation.
\subsubsection{Color Jitter Augmentation}
Contrast, brightness, and hue augmentation simulate different exposure
time, light intensity, and analog gain. Hence, we first assume that
the exposure time, light intensity, and analog gain are multiplied by $p_{e}$,
$p_{i}$, and $p_{g}$, respectively. Because $p_{e}$ and $p_{i}$
equally change the number of photons $u$ in the case of our noise
model, we rewrite them as $p_{u}=p_{e}p_{i}$. Then, images in the
above environment settings $x_{new}$ can be rewritten as,
\begin{equation}
x_{new}\sim\mathcal{N}\left(\begin{split}(p_{g}g)\alpha(p_{u}\bar{u}),\\
(p_{g}g)^{2}\alpha^{2}(p_{u}\bar{u})+(p_{g}g)^{2}\sigma_{d}^{2}+\sigma_{r}^{2}
\end{split}
\right).\label{eq:9}
\end{equation}
Based on Eq. (\ref{eq:gauss}), it can be expanded as
\begin{align}
x_{new} & \sim\mathcal{N}\left((p_{g}p_{u})\alpha g\bar{u},\:(p_{g}p_{u})^{2}(g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\right)\nonumber \\
& \:\:\:\;+\mathcal{N}\left(\right.0,\:-(p_{g}p_{u})^{2}(g^{2}\alpha^{2}\bar{u}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\label{eq:10}\\
& \:\:\:\:\;\:\:\;\:\:\:\:\:\:\;\:\:\:\:\:\:\:+(p_{g}g)^{2}\alpha^{2}(p_{u}\bar{u})+(p_{g}g)^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\left.\right).\nonumber
\end{align}
By inserting $\mu_{x}=g\alpha\bar{u}$ and original pixel value,
$x_{pre}\sim\mathcal{N}\left(\mu_{x},g\alpha\mu_{x}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right)$,
it can be expressed with a pixel value-based equation as follows;
\begin{align}
x_{new} & \sim p_{u}p_{g}x_{pre}+\nonumber \\
& \:\:\:\;\mathcal{N}(0,\:p_{u}(1-p_{u})p_{g}^{2}g\alpha\mu_{x}\label{eq:11}\\
& \:\;\:\:\;\:\:\:\:\:\:\;\:\:+(1-p_{u}^{2})p_{g}^{2}g^{2}\sigma_{d}^{2}+(1-p_{u}^{2}p_{g}^{2})\sigma_{r}^{2}).\nonumber
\end{align}
Because the expected original pixel value $\mu_{x}$ in the Gaussian
term is impossible to obtain, we approximate it as $\mu_{x}=x_{pre}$.
Based on this equation, we can precisely simulate as if exposure time,
light intensity, and analog gain were $p_{e}$, $p_{i}$, and $p_{g}$
times. We now return to contrast, brightness, and hue augmentation.
When contrast is multiplied by $p_{c}$ and brightness is changed
by $p_{b}$, it can be expressed as,
\begin{equation}
x_{new}=p_{c}x_{pre}+p_{b}.\label{eq:12}
\end{equation}
This conversion is equivalent to multiplying $x_{pre}$ by $\frac{\left(p_{c}x_{pre}+p_{b}\right)}{x_{pre}}$.
Therefore, noise-accounted contrast and brightness augmentation is
finally defined as,
\begin{equation}
\begin{cases}
random\:p_{c},\:p_{b}\\
random\:p_{u},\:p_{g}\:(where\:p_{u}p_{g}=\frac{\left(p_{c}x_{pre}+p_{b}\right)}{x_{pre}},\:p_{u},p_{g}>0)\\
Eq.(\ref{eq:11})\:(\mu_{x}\leftarrow x_{pre})
\end{cases}.\label{eq:contrast}
\end{equation}
We can also convert hue by changing $p_{c}$ and $p_{b}$ per color
filter position in the RAW Bayer pattern.
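A minimal pixel-wise sketch of the noise-accounted color jitter of Eq. (\ref{eq:contrast}), with the approximation $\mu_{x} \approx x_{pre}$: the image is scaled and only the extra noise variance of Eq. (\ref{eq:11}) is added. The epsilon guard against zero-valued pixels and the parameter values in the example are implementation choices, not values from the paper; negative variance (over-brightening) is clipped to zero.

```python
import numpy as np

def noise_accounted_color_jitter(x_pre, g, alpha, sigma_d2, sigma_r2,
                                 p_c, p_b, p_g, rng):
    """Contrast p_c / brightness p_b augmentation on RAW values: the
    per-pixel factor (p_c*x + p_b)/x is split as p_u * p_g with the
    gain part p_g chosen by the caller, and the extra noise of
    Eq. (11) is added assuming mu_x ~= x_pre."""
    x = np.asarray(x_pre, dtype=float)
    p_u = (p_c * x + p_b) / (np.maximum(x, 1e-6) * p_g)  # per-pixel p_u
    var = (p_u * (1.0 - p_u) * p_g**2 * g * alpha * x
           + (1.0 - p_u**2) * p_g**2 * g**2 * sigma_d2
           + (1.0 - (p_u * p_g)**2) * sigma_r2)
    var = np.clip(var, 0.0, None)  # brightening can make the term negative
    return p_c * x + p_b + rng.normal(0.0, np.sqrt(var))
```

For darkening (e.g., $p_{c}=0.5$, $p_{b}=0$, $p_{g}=1$), the output mean is $p_{c}x_{pre}$ and the output variance matches the noise model evaluated at the darker pixel value.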
\subsubsection{Blur Augmentation}
Next, we introduce noise-accounted blur augmentation. Usual blur augmentation
makes the noise smaller than real blur does, because the noise $n_{d}$ and
$n_{r}$ are smoothed, although in reality their amounts do not depend
on how fast the camera shakes or on object movements. Only
the photon-number-related noise is smoothed in real motion blur. The
blurred pixel can be expressed as,
\begin{equation}
x_{new}\sim\mathcal{N}\left(g\alpha\sum_{k}w_{k}\bar{u_{k}},\:g^{2}\alpha^{2}\sum_{k}w_{k}\bar{u_{k}}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2}\right),\label{eq:14}
\end{equation}
where $w_{k}$ is the blur kernel with $\sum_{k}w_{k}=1$. With similar equation
manipulations as from Eq. (\ref{eq:9}) to Eq. (\ref{eq:11}), the noise-accounted
blur augmentation is
\begin{align}
x_{new} & \sim\mathcal{N}\left(\sum_{k}w_{k}g\alpha\bar{u_{k}},\:\sum_{k}w_{k}^{2}(g^{2}\alpha^{2}\bar{u_{k}}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\right)\nonumber \\
& \:\;\:\:+\mathcal{N}(0,\:-\sum_{k}w_{k}^{2}(g^{2}\alpha^{2}\bar{u_{k}}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\nonumber \\
& \:\;\:\:\;\:\:\:\:\:\:\;\:\:\;\:\:\:\:+g^{2}\alpha^{2}\sum_{k}w_{k}\bar{u_{k}}+g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})\label{eq:blur}\\
& =\sum_{k}w_{k}x_{pre}+\mathcal{N}(0,\:g\alpha\sum_{k}(1-w_{k})w_{k}x_{pre,k}\nonumber \\
& \:\;\:\;\:\:\;\:\:\:\:\:\:\;\:\:\;\:\:\:\:\:\:\;\:\:\;\:\:\;\:\:\:\;\:\:\;\:\:+(1-\sum_{k}w_{k}^{2})(g^{2}\sigma_{d}^{2}+\sigma_{r}^{2})).\nonumber
\end{align}
We account for prior noise but not for prior blur because
most images in usual datasets are only slightly blurred. Furthermore,
estimating the prior blur amount is difficult.
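A one-dimensional sketch of the noise-accounted blur of Eq. (\ref{eq:blur}), again with $\mu_{x} \approx x_{pre}$; the kernel, the noise parameters, and the `same`-padding choice are placeholders for illustration.

```python
import numpy as np

def noise_accounted_blur(x_pre, w, g, alpha, sigma_d2, sigma_r2, rng):
    """Blur RAW values x_pre with kernel w (sum(w) == 1) and add the
    extra noise of Eq. (15), so that only the photon term is smoothed
    as in a real motion blur."""
    x = np.asarray(x_pre, dtype=float)
    w = np.asarray(w, dtype=float)
    blurred = np.convolve(x, w, mode="same")
    # photon term: g*alpha * sum_k (1 - w_k) w_k x_{pre,k}
    photon_var = g * alpha * np.convolve(x, (1.0 - w) * w, mode="same")
    # read/dark terms are not smoothed by a real blur
    readout_var = (1.0 - np.sum(w**2)) * (g**2 * sigma_d2 + sigma_r2)
    var = np.clip(photon_var + readout_var, 0.0, None)
    return blurred + rng.normal(0.0, np.sqrt(var))
```

A 2-D implementation would replace the convolutions with their 2-D counterparts; the variance bookkeeping is unchanged.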
In addition, please note that augmentations that make images cleaner (brighter
or deblurred) are inevitably difficult with this noise-accounted
RAW image augmentation. Clipping the noise variance in Eq. (\ref{eq:11})
to zero forcibly enables brightening, but brightening too much causes
a mismatch of the noise domain.
\section{Evaluation}
\subsection{Dataset}
\begin{figure*}
\centering
\def\svgwidth{1.9\columnwidth}
\scriptsize\import{figs/}{dataset.pdf_tex}
\caption{Some examples of our introduced dataset. The upper row shows the training
dataset collected in simple environments, while the lower row shows
the challenging test dataset collected in various environments, from
dark to bright, HDR, and with or without handshake blur.}
\label{fig:dataset}
\end{figure*}
Although our method is applicable to any computer vision task, we
chose human detection as the target task because of its wide usage.
We prepared a RAW image dataset for human detection captured
with an internally developed sensor. As mentioned earlier, our objective
is to train image recognition models that work in various environments
despite only using a training dataset in simple environments. So,
most of the training images were taken under normal light conditions with
a fixed camera position in several environments. Note that moderately
dark and HDR images are also included to some extent in the training
dataset. The analog gain was set to 6dB for outdoors, 12dB for indoors,
and 32dB for moderately dark nights to generate realistic easy images
without auto-exposure. On the other hand, the test images were taken
under HDR or extremely dark environments. In addition, about 50\% of
them were taken with strong camera shake. Moreover, the analog gain
was chosen from 3dB, 6dB, 12dB, and 24dB regardless of the environment.
Both datasets were taken at around 1 fps to increase diversity
between images.
We manually annotated the human bounding boxes of both training and
test data. Because precise annotation of test data on sRGB was impossible
due to the noise and blur, we applied an offline ISP per image and
then annotated the bounding boxes. We manually set adequate ISP parameters
per image and had to change the parameters several times to grasp
the entire image. To avoid annotating a large training dataset in this
manner, it is desirable to train the model with a simple dataset. In
total, we collected 18,880 images for the training and 2,800 images
for the test. The examples are shown in Fig. \ref{fig:dataset}.
\subsection{Implementation Details}
We mainly tested with TTFNet \cite{liu2020training} whose backbone
was ResNet18 \cite{he2016deep}. The network was trained for 48 epochs
from scratch using the Adam optimizer \cite{kingma2014adam} and a cosine
decay learning rate scheduler with a linear warmup \cite{loshchilov2016sgdr}
for the first 1,000 iterations, with maximum and minimum learning
rates of $10^{-3}$ and $10^{-4}$. We implemented a simple software ISP consisting
of only a gamma tone mapping, in two variants. One was the simplest gamma tone
mapping, $y=x^{\frac{1}{\gamma}}\:(0\leq x\leq1)$.
The $\gamma$ was set to 5 after tuning with a rough grid search.
The other was a gamma tone mapping parameterized with three parameters
\cite{mosleh2020hardware}. Because a grid search over three parameters
is time-consuming, we tuned these parameters with backpropagation together
with the detector's weights, as done in \cite{onzon2021neural,wu2019visionisp}.
We did not use other ISP functions because they are known to have
less impact on image recognition than tone mapping \cite{hansen2021isp4ml}.
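As a concrete illustration, the simplest of the two tone mappings can be sketched in a few lines. This is a minimal sketch only; the three-parameter variant of \cite{mosleh2020hardware} and its backpropagation tuning are omitted, and the assumption that the RAW input is already normalized to $[0,1]$ is ours.

```python
import numpy as np

def gamma_tone_mapping(raw, gamma=5.0):
    """Simplest gamma tone mapping y = x**(1/gamma) for 0 <= x <= 1.

    gamma = 5 follows the rough grid search reported in the text.
    `raw` is assumed to be a RAW image already normalized to [0, 1].
    """
    x = np.clip(raw, 0.0, 1.0)
    return x ** (1.0 / gamma)
```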
We also prepared an elaborated black-box ISP consisting of many functions
in addition to a tone mapping function. The parameters were tuned
for human perceptual quality by experts. We only used the elaborated
black-box ISP under the conventional training pipeline, i.e., in
the ISP-augmentation-detection order, due to a hardware limitation.
When contrast augmentation was used, the hue was also changed with
a probability of 50\%: after the base contrast factor $p_{c,base}$
was randomly decided, the contrast factor per color channel $p_{c,c}$
was perturbed from $p_{c,base}$ within $(-0.2p_{c,base},\,0.2p_{c,base})$.
When blur augmentation was used, we applied a random-sized blur kernel
with a probability of 50\%.
Random shift and random scale augmentation, whose maximum transformations
were 10\% and 3\% of the input size, were also applied with a probability
of 80\% before the color jitter augmentation. The input size to the detector
was $\left(576,\,352,\,3\right)$. We evaluated the performance of
the detector with average precision (AP@0.5:0.95) \cite{lin2014microsoft}.
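The per-channel contrast sampling described above can be sketched as follows. This is illustrative only: the contrast range shown is one candidate from the tuning table, and the RNG handling and function name are our assumptions.

```python
import numpy as np

def sample_contrast_factors(rng, contrast_range=(0.01, 1.0)):
    """Sample per-channel contrast factors as described in the text:
    first draw a base factor p_c_base, then, with 50% probability (the
    hue change), perturb each channel within +/- 0.2 * p_c_base."""
    p_c_base = rng.uniform(*contrast_range)
    if rng.random() < 0.5:
        offsets = rng.uniform(-0.2 * p_c_base, 0.2 * p_c_base, size=3)
        return p_c_base + offsets
    return np.full(3, p_c_base)
```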
\subsection{Calibration of the Noise Model\label{subsec:Calibration-of-the}}
For each analog gain of 6 dB, 12 dB, and 24 dB, we captured two burst
sequences with different illumination. Each sequence consisted of 100
images. $24\times24$ Bayer pixels were sampled from each of the 24
color patches to calculate the mean and variance. In total, $2\times24\times24\times24$
pairs of mean and variance were obtained per analog gain to estimate
the noise model. Thanks to the various color filters, exposure values,
and color patches, two sequences were enough to ensure diversity. The
lines in Fig. \ref{fig:calib_result} show the estimated linear relationship
of Eq. (\ref{eq:6}). The coefficients of determination, $R^{2}$,
for these line estimates were 0.9833, 0.9884, and 0.9862 for 6, 12, and 24 dB,
respectively. The high $R^{2}$ values indicate that the noise intensity
was well modeled against illumination intensity. Also, the $R^{2}$
values for Eq. (\ref{eq:7-1}) and Eq. (\ref{eq:8}) were $1.0000$ and
$0.9984$, meaning the noise intensity was also well modeled against analog
gain. Based on the above, our noise model and calibration method
were well suited to the sensor in terms of noise intensity.
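The mean-variance line fit and its $R^2$ can be reproduced with ordinary least squares. A minimal sketch, assuming the linear heteroscedastic relation described above; the variable names are ours:

```python
import numpy as np

def fit_noise_line(means, variances):
    """Fit the linear mean-variance relation var = a * mean + b of the
    heteroscedastic Gaussian noise model and report R^2."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    a, b = np.polyfit(means, variances, deg=1)
    residuals = variances - (a * means + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((variances - variances.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```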
\begin{figure}
\centering
\def\svgwidth{1.0\columnwidth}
\scriptsize\import{figs/}{calib_result.pdf_tex}
\caption{The calibration result of the sensor noise model. The dots in the
left graph are the mean and variance pairs of each pixel, and the lines
are the estimated linear relationships per analog gain. Since we plot
a large number of dots, they appear widely spread. On the other
hand, the per-analog-gain histogram plot (right) shows a clear
difference between the gains.}
\label{fig:calib_result}
\end{figure}
\begin{figure}
\centering
\def\svgwidth{1.0\columnwidth}
\scriptsize\import{figs/}{shapiro.pdf_tex}
\caption{The result of the Shapiro-Wilk test \cite{shapiro1972approximate} per
expected pixel value. We tested whether the 100 pixel values at each
position follow a Gaussian distribution; most of the p-values
were larger than 0.05 (a). The right panel (b) shows the cases where p \textless{}
0.05. It indicates that the small p-values came from sparsity, not
skew.}
\label{fig:shapiro}
\end{figure}
Then, we checked the validity of the shape of the distribution. All
of the noise sources were assumed to follow a Gaussian distribution. In particular,
it is unclear whether the approximation $\mathcal{P}\left(\bar{u}\right)\fallingdotseq\mathcal{N}\left(\bar{u},\bar{u}\right)$
\cite{foi2008practical} holds. Therefore, the Shapiro-Wilk test
\cite{shapiro1972approximate} was performed. If the p-value of the
test is higher than 0.05, the null hypothesis that the
data are normally distributed cannot be rejected at the
95\% confidence level. Fig. \ref{fig:shapiro}(a) shows that most
p-values were higher than 0.05, but some results for dark pixels were
less than 0.05. However, the distributions of the dark pixels looked
like Fig. \ref{fig:shapiro}(b): they were not very skewed, and the sparsity
caused the small p-values. Thereby, we concluded that all the noise
sources can be regarded as Gaussian noise.
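The per-position normality check can be reproduced with SciPy's Shapiro-Wilk test. The sketch below uses synthetic Poisson photon counts in place of the real bursts (an assumption of ours), which also illustrates why dark pixels tend to fail the test:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)

def shapiro_pvalue(expected_count, n=100):
    """Shapiro-Wilk p-value for n Poisson samples at a given expected
    pixel value, mirroring the 100-image burst test in the text."""
    samples = rng.poisson(expected_count, size=n).astype(float)
    _, p = shapiro(samples)
    return p

# Bright pixels: Poisson(u) is close to N(u, u), so p is usually > 0.05.
# Dark pixels: the few discrete levels (sparsity) push p below 0.05
# even without strong skew.
```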
Based on the above, our sensor noise model and the calibration method
represent the sensor noise well in terms of both intensity and distribution.
\subsection{Statistical Validation of the Noise Alignment\label{subsec:Statistical-Validation-of}}
Before checking the effectiveness of the proposed noise-accounted
RAW augmentation on computer vision applications, its statistical
validity was evaluated one more time by utilizing the sequential images
of the color checker. The evaluation method was as follows. First,
the contrast of the sequential images was changed with or without
noise consideration. Second, mean and variance pairs along the sequential
dimension were calculated. Third, the distributions of the real and converted
pairs were examined. If the real and converted pairs matched well,
the converted images have the same noise amount as the original
real images.
We then compared contrast conversion utilizing three different
noise alignment methods: no noise consideration, the usual noise-accounted
method that disregards prior noise in the input \cite{Hong2021Crafting,cui2021multitask,wang2020practical,wei2020physics},
and our proposed noise alignment method. Fig. \ref{fig:statistical_val}
shows the comparison result. It indicates that prior noise consideration
is unnecessary if pixels are darkened considerably. However, a small
contrast factor caused a noise domain gap even if the input was bright,
as in Fig. \ref{fig:statistical_val} (right). In contrast, our noise-accounted
conversion always produced images with realistic noise. This implies
that our proposed method is suited to various strengths of augmentation.
If prior noise is not accounted for, the inputs always have to be darkened
considerably, and already dark images are difficult to use.
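Under a heteroscedastic model $\mathrm{Var}(u)=au+b$ per analog gain, a prior-noise-aware contrast conversion amounts to injecting only the variance that the scaled input is missing. A minimal numpy sketch of this idea; the slope/offset names and the use of the observed value as a proxy for the expected intensity are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def noise_accounted_contrast(x, k, a, b, rng):
    """Apply contrast factor k to a RAW image x while aligning the noise.

    Sensor model (per analog gain): var(u) = a * u + b.  The variance the
    input already carries is propagated through the scaling (k**2 * var),
    and only the shortfall relative to a real image at the new intensity
    is injected.  Dropping the carried term recovers the usual conversion
    that disregards prior input noise.
    """
    y = k * x
    var_target = a * y + b              # noise of a real image at intensity y
    var_carried = k ** 2 * (a * x + b)  # prior input noise after scaling
    var_add = np.maximum(var_target - var_carried, 0.0)
    return y + rng.normal(0.0, np.sqrt(var_add))
```

Numerically, darkening an image whose noise follows the model reproduces the variance a real image at the new intensity would have.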
\begin{figure}
\centering
\def\svgwidth{1.0\columnwidth}
\scriptsize\import{figs/}{noise_effect.pdf_tex}
\caption{The statistical validation of the noise alignment. The green dots
represent real image data and the others are converted from it. The
conversions are \texttimes 0.1 and \texttimes 0.5 contrast conversions
with several methods. Panel (a) shows a dark-to-extremely-dark conversion
and panel (b) a bright-to-dark conversion.}
\label{fig:statistical_val}
\end{figure}
\subsection{Augmentation Parameters Tuning}
The optimal augmentation parameters should differ between augmentation
before and after the ISP. To make a fair comparison, we roughly tuned
both sets of augmentation parameters. The strategy was as follows.
First, we searched for the appropriate range of the contrast factor $p_{c}$
and the brightness perturbation $p_{b}$ successively. To be robust to
any illumination changes, we re-parameterized $p_{b}$ as $p_{b}=\hat{p_{b}}\min(x)$
and randomized $\hat{p_{b}}$ instead of $p_{b}$. Lastly, we
searched for the appropriate maximum blur distance $p_{d}$. In these
experiments, we did not account for sensor noise. We used the elaborated
black-box ISP for the augmentation-after-ISP setting and the simplest
gamma function for the augmentation-before-ISP setting.
The results are shown in Table \ref{tab:tuning}. The best parameter
settings were used in the next section.
\begin{table*}
\caption{The augmentation hyperparameter tuning for a fair comparison between
before and after ISP augmentation. We tuned the range of contrast,
brightness, and blur distance one by one, and the previous best parameters
were taken over.}
\centering\scalebox{0.9}{
\begin{tabular}{c|ccccc|ccccc}
\hline
& \multicolumn{10}{c}{AP@0.5:0.95 {[}\%{]}}\tabularnewline
\hline
& \multicolumn{5}{c|}{augmentation after ISP (tuned for the black-box ISP)} & \multicolumn{5}{c}{augmentation before ISP (tuned for the simplest ISP)}\tabularnewline
\hline
\multirow{2}{*}{contrast} & 0.2-1 & 0.1-1 & 0.2-5 & 0.1-10 & 0.05-20 & 0.1-1 & 0.02-1 & 0.01-1 & 0.005-1 & 0.01-1.1\tabularnewline
& 41.9 & 41.8 & 43.2 & \textbf{45.1} & 44.5 & 36.9 & 39.8 & \textbf{40.4} & 35.8 & 38.9\tabularnewline
\hline
\multirow{2}{*}{brightness} & 0-0 & -0.1-0.1 & -0.2-0.2 & -0.5-0.5 & -0.7-0.7 & 0-0 & -0.1-0.1 & -0.2-0.2 & -0.5-0.5 & -0.7-0.7\tabularnewline
& 45.1 & 44.5 & 44.5 & \textbf{45.2} & 44.7 & 40.4 & \textbf{40.9} & 39.9 & 39.4 & 39.4\tabularnewline
\hline
\multirow{2}{*}{blur distance} & 0-0 & 0-3 & 0-5 & 0-9 & 0-13 & 0-0 & 0-3 & 0-5 & 0-9 & 0-13\tabularnewline
& 45.2 & \textbf{46.8} & 45.7 & 45.9 & 37.9 & 40.9 & 39.8 & 39.6 & 40.6 & \textbf{43.3}\tabularnewline
\hline
\end{tabular}}
\label{tab:tuning}
\end{table*}
\subsection{Evaluation of the Noise-Accounted RAW Augmentation}
In this section, the proposed noise alignment was also applied. As
Table \ref{tab:result} shows, noise alignment for both color jitter
and blur augmentation improves the accuracy on the difficult test
data. It suggests that the noise model reduced the noise domain gap well
and that the noise domain is an important factor for the DNN detector. In
the color-jitter-only setting, we improved the accuracy
over the general noise alignment method \cite{foi2008practical,makitalo2012optimal,Hong2021Crafting,cui2021multitask,liu2014practical,wei2020physics,punnappurath2022day,zamir2020cycleisp,brooks2019unprocessing}
by considering prior noise. In the color jitter and blur augmentation
setting, noise-accounted color jitter augmentation plus normal blur
augmentation did not improve much over the settings without noise accounting.
Instead, noise alignment in both color jitter and blur augmentation
improved the accuracy. This indicates that random noise is not effective
and that realistic noise is important. Compared under
the same simplest gamma tone mapping setting, our proposed noise-accounted
RAW image augmentation doubled the accuracy of conventional augmentation
after the ISP. Furthermore, when the parameterized gamma tone mapping was
used as our simple ISP, the accuracy was even superior to the elaborated
black-box ISP consisting of many functions in addition to a tone mapping
function. As the visualization results in the Appendix show, the elaborated
black-box ISP outputs more perceivable images. This suggests that minimizing
the domain gap caused by augmentation is more important than the sophistication
of the ISP. We might improve the accuracy further with an elaborate ISP
and the proposed augmentation.
\begin{table}
\caption{Evaluation of the noise-accounted RAW augmentation. The color augmentation
contains the default hue augmentation plus the tuned contrast and brightness
augmentation. The \emph{w/o prior} entry means the prior input noise was
disregarded, as in many of the previous noise-accounted image conversion
methods \cite{foi2008practical,makitalo2012optimal,Hong2021Crafting,cui2021multitask,liu2014practical,wei2020physics,punnappurath2022day,zamir2020cycleisp,brooks2019unprocessing}.
Because we adopt the well-established heteroscedastic Gaussian model of
Eq. (\ref{eq:2}), it is identical to the noise alignment of \cite{brooks2019unprocessing,punnappurath2022day,foi2008practical,zamir2020cycleisp}.
In this experiment, we also used the parameterized gamma tone mapping
as the simple ISP, although it cannot be used in the augmentation-after-ISP
setting because the gradient from the detection loss is needed
to tune it.}
\centering
\scalebox{0.75}{
\begin{tabular}{ccc|c|c|c}
\hline
\multicolumn{2}{c}{} & & \multicolumn{3}{c}{AP@0.5:0.95 {[}\%{]}}\tabularnewline
& & & black-box & \multicolumn{2}{c}{simple ISP }\tabularnewline
\multicolumn{2}{c}{augmentation} & noise & ISP & simplest & parameterized\tabularnewline
\hline
\multirow{4}{*}{Color} & after & - & 45.2 & 19.3 & -\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
& \multirow{3}{*}{%
\begin{tabular}{c}
before \tabularnewline
(ours)\tabularnewline
\end{tabular}} & - & - & 40.9 & 44.4\tabularnewline
& & w/o prior & - & 43.5 & 47.7\tabularnewline
& & ours & - & \textbf{44.6} & \textbf{48.1}\tabularnewline
\hline
\multirow{4}{*}{%
\begin{tabular}{c}
Color\tabularnewline
+\tabularnewline
Blur\tabularnewline
\end{tabular}} & after & - & 46.8 & 20.4 & -\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
& \multirow{3}{*}{%
\begin{tabular}{c}
before\tabularnewline
(ours)\tabularnewline
\end{tabular}} & - & - & 43.3 & 43.8\tabularnewline
& & ours$\dagger$ & - & 43.4 & 47.9\tabularnewline
& & ours & - & \textbf{45.3} & \textbf{48.3}\tabularnewline
\hline
\end{tabular}
}
\scalebox{0.75}{
$\dagger$: The noise alignment was only applied to the color jitter
augmentation.
}
\label{tab:result}
\end{table}
As mentioned earlier, there is prior work on handling noise in noise-related
fields like denoising. We compared ours with these methods on the
detection task. One was the K-Sigma transform \cite{wang2020practical},
a kind of noise domain generalization. It normalizes images so that the
pixel value and the standard deviation of the noise have a linear correlation.
The other was noise amount notification via concatenation of a noise
variance map \cite{brooks2019unprocessing}. To obey the previous
settings, direct RAW input without an ISP was also compared. Color
jitter and blur augmentation were also applied to these methods, unlike
in the previous papers, for a fair comparison. Table \ref{tab:result-1}
shows the comparison results. As for the K-Sigma transform, simply
applying \textquotedbl aug.\textquotedbl{} before or after the K-Sigma
transform gives better results. However, there is a theoretical problem
in both cases. If \textquotedblleft aug.\textquotedblright{} is applied
after the transform, the linear relation between pixel value and noise
amount is retained but the pixel intensity becomes inconsistent. On the
other hand, applying \textquotedblleft aug.\textquotedblright{} before
the transform makes the noise amount unrealistic. Changing the augmentation
to \textquotedblleft our aug.\textquotedblright{} makes both the intensity
and the noise realistic and improved the accuracy. From the experiment,
we found that the proposed augmentation boosts previous noise-dealing methods
if an ISP is used. However, if no ISP was used, noise-accounted augmentation
slightly deteriorated the accuracy. We argue that this is because the
intensity distribution was too difficult, so unrealistic clean images
might help training. However, the overall accuracy was lower than
with an ISP. Also, for the detection task, our proposed method alone was
sufficient. It might be an indication that, unlike the denoising task,
which should focus on noise, it is important to make the detector
focus on the pixel intensity distribution for the detection task.
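For reference, the K-Sigma transform of \cite{wang2020practical} can be sketched as follows, assuming the same heteroscedastic model $\mathrm{Var}(u)=ku+\sigma^2$; after the transform the noise variance approximately equals the transformed mean, independent of analog gain. The parameter names are ours:

```python
import numpy as np

def k_sigma_transform(x, k, sigma2):
    """K-Sigma transform: f(x) = x / k + sigma2 / k**2.

    With var(u) = k * u + sigma2, var(f(x)) = var(x) / k**2
    = u / k + sigma2 / k**2 = E[f(x)], i.e. variance equals mean in the
    transformed domain regardless of the gain-dependent (k, sigma2).
    """
    return x / k + sigma2 / k ** 2
```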
\begin{table}
\caption{The comparison results with other noise-dealing techniques. Here, \textquotedblleft aug.\textquotedblright{}
means contrast and blur augmentation without noise accounting, and \textquotedblleft our
aug.\textquotedblright{} means with noise accounting. We used the simplest
gamma function as the ISP.}
\centering
\begin{tabular}{c|cc}
\hline
& \multicolumn{2}{c}{AP@0.5:0.95 {[}\%{]}}\tabularnewline
method & w/o ISP & w/ ISP\tabularnewline
\hline
concat \cite{brooks2019unprocessing} & 16.5 & 21.5\tabularnewline
aug. + concat \cite{brooks2019unprocessing} & \textbf{35.0} & 31.6\tabularnewline
our aug. + concat \cite{brooks2019unprocessing} & 33.7 & 40.4\tabularnewline
K-Sigma \cite{wang2020practical} & 14.3 & 27.5\tabularnewline
K-Sigma \cite{wang2020practical} + aug. & 25.0 & 34.1\tabularnewline
aug. + K-Sigma \cite{wang2020practical} & 26.6 & 42.1\tabularnewline
our aug. + K-Sigma \cite{wang2020practical} & 26.3 & 44.0\tabularnewline
our aug. & 32.8 & \textbf{45.3}\tabularnewline
\hline
\end{tabular}
\label{tab:result-1}
\end{table}
\section{Conclusion}
We proposed a noise-accounted RAW augmentation method in which augmentation
is applied before the ISP to minimize the luminance domain gap, and a
sensor noise model is taken into account to minimize the noise domain
gap. Unlike previous noise-accounted methods, ours takes the prior
input noise into account. It minimizes the domain gap further and enables
the use of already noisy images as training data. Thanks to the realistic
augmentation, our method improved the detection accuracy in difficult
scenes compared to conventional methods. In the future, we would
like to investigate whether the proposed augmentation with an elaborate
ISP improves computer vision performance even further. We hope
this work sheds light on the importance of RAW images.
{\small
\bibliographystyle{ieee_fullname}
\documentclass[preprints,article,accept,moreauthors,pdftex]{Definitions/mdpi}
\firstpage{1}
\makeatletter
\setcounter{page}{\@firstpage}
\makeatother
\pubvolume{xx}
\issuenum{1}
\articlenumber{5}
\pubyear{2019}
\copyrightyear{2019}
\history{Received: date; Accepted: date; Published: date}
\Title{Wind profiling in the lower atmosphere from wind-induced perturbations to multirotor UAS}
\newcommand{\orcidauthorA}{0000-0000-000-000X}
\Author{Javier Gonz\'alez-Rocha$^{1}*$, Stephan F.~J. De Wekker $^{2}$, Shane D. Ross$^{1}$ and Craig A. Woolsey $^{1,}$}
\AuthorNames{Javier Gonz\'alez-Rocha, Stephan F. J. De Wekker, Shane D. Ross and Craig A. Woolsey}
\address{%
$^{1}$ \quad Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24060, U.S.A; [email protected] (J.G.R.); [email protected] (S.D.R); [email protected] (C.A.W.)\\
$^{2}$ \quad Department of Environmental Sciences, University of Virginia, Charlottesville, VA 22903, U.S.A.;
[email protected] (S.F.J.D)}
\corres{Correspondence: e-mail: [email protected]; Tel.:+1-540-231-2019}
\abstract{We present a model-based approach to wind velocity profiling using motion perturbations of a multirotor unmanned aircraft system (UAS) in both hovering and steady ascending flight. A state estimation framework was adapted to a set of closed-loop rigid body models identified for an off-the-shelf quadrotor. The quadrotor models used for wind estimation were characterized for hovering and steady ascending flight conditions ranging between 0 and 2 m/s. The closed-loop models were obtained using system identification algorithms to determine model structures and estimate model parameters. The wind measurement method was validated experimentally above the Virginia Tech Kentland Experimental Aircraft Systems Laboratory by comparing quadrotor measurements with independent measurements from a sonic anemometer and two SoDARs. Comparison results demonstrated quadrotor wind estimates in close agreement with the independent wind velocity measurements. Wind velocity profiles were difficult to validate using time-synchronized SoDAR measurements, however. Analysis of the noise intensity and signal-to-noise ratio of the SoDARs showed that close-proximity quadrotor operations can corrupt wind measurements from SoDARs.}
\keyword{Unmanned Aircraft Systems, System Identification, Wind Estimation, Multi-Rotor, Drone, Atmospheric Science, Wind Profile, Boundary Layer Meteorology}
\usepackage{pmat}
\usepackage{booktabs}
\usepackage[normal]{subfigure}
\usepackage{multirow}
\usepackage{amsfonts}
\usepackage{amssymb}
\newcommand{\reff}[1]{(\ref{#1})}
\newcommand{\bm}[1]{\mbox{\boldmath$#1$}}
\newcommand{\mcrot}[4]{\multicolumn{#1}{#2}{\rlap{\rotatebox{#3}{#4}~}}}
\begin{document}
\section{Introduction}
\label{s:introduction}
Measuring wind velocity near the Earth's surface is critical to understanding the surface-atmosphere interactions driving the dynamic state of the atmospheric boundary layer (ABL). How the ABL evolves with space and time influences phenomena that impact public health and safety~\cite{gonzalez2019sensing,barbieri2019intercomparison,jacob2018considerations,chilson2019moving,smith2017catalyzing}. For example, the transport of air pollutants, pollen and spores~\cite{villa2016overview,nolan2018coordinated,nolan2019method,carranza2018vista}, wind power supply to smart grid systems~\cite{chao2010surface,Fairley2018building,phuangpornpitak2013opportunities,colak2015critical,wildmann2017measuring}, forecast of local weather~\cite{barbieri2019intercomparison,jacob2018considerations,chilson2019moving,smith2017catalyzing}, air traffic control at airports ~\cite{alsalous2017evaluation,tang2010accurate,tang2011lagrangian,knutson2015lagrangian}, the spread and management of wildfires~\cite{rabinovich2018toward,da2017unmanned,al2017review,Xingetal2019}, and emissions mitigation of greenhouse gases~\cite{duren2019california,smith2017fugitive,andersen2018auav,roldan2015mini} are all affected by the dynamic state of the ABL. Therefore, mitigation of adverse conditions affected by the dynamic state of the ABL requires accurate measurements of wind velocity over micro- and mesoscale domains~\cite{barbieri2019intercomparison,greene2019environmental,varentsov2019experience}. However, observations of wind velocity at high spatial resolution are difficult due to the cost and limited mobility of conventional atmospheric sensing technology.
Advancing capabilities for wind sensing with multirotor unmanned aircraft systems (UAS) can help fill the existing gap in ABL observations~\cite{jacob2018considerations,chilson2019moving,smith2017catalyzing}. This is due to the effectiveness of multirotor UAS for probing the ABL over complex terrain or water, where reliable operation of in situ or remote sensors is prohibitively difficult. In general, multirotor UAS are mobile, portable, low cost, and easy to operate. Improving upon existing wind sensing algorithms to expand the flight envelope in which a multirotor can accurately measure the wind velocity can significantly enhance their utility for on-demand targeted observations inside the ABL. Increasing the capability of multirotor UAS for wind sensing in this way can supplement the ground-based and airborne atmospheric observations currently used to characterize the ABL.
Existing wind sensing approaches with multirotor UAS consist of direct and indirect methods. Direct methods involve the retrieval of wind velocity from a flow sensor onboard a multirotor. Examples include various types of anemometers~\cite{wolf2017wind,de2014designing,donnell2018wind,hollenbeck2018wind,Hollerbeck2019pitch} and other air data systems~\cite{prudden2016flying}. The choice of sensor depends on the sensor size and power requirements, the aircraft payload capacity, and the airframe configuration of the multirotor aircraft. Indirect methods, on the other hand, estimate wind velocity from wind-induced perturbations to the aircraft motion and do not require a separate airflow sensor. Conventional model-based approaches to wind estimation have involved kinematic~\cite{neumann2015real,brosy2017simultaneous}, point mass~\cite{palomaki2017wind,gonzalez2019sensing,donnell2018wind}, and rigid body models~\cite{gonzalez2019sensing} of control-augmented quadrotor dynamics, which characterize how a quadrotor responds to disturbances under feedback stabilization. A comparison of all three models in~\cite{gonzalez2019sensing} demonstrated that both the accuracy and the bandwidth of wind estimates increase with the fidelity of the vehicle motion model.
To date, model-based wind estimation approaches have only incorporated models that are appropriate for \textit{hovering} flight~\cite{gonzalez2019sensing}. Measuring wind velocity only while hovering limits the speed at which the aircraft can sample the lower atmosphere. The limitation of stationary sampling is largely due to the limited endurance of multirotor aircraft (typically less than 20 minutes). However, many research and operational applications require atmospheric sampling over horizontal and vertical distances. Therefore, there is a need to develop wind estimation algorithms that allow a multirotor UAS to move while accurately measuring wind velocity within the ABL.
This paper presents a method for estimating vertical profiles of the horizontal wind velocity using a dynamic rigid body model of a quadrotor in hovering and steady-ascending equilibrium flight conditions. The method presented here, referred to as the
{\it dynamic rigid body wind profiling} method or
{\it DRBWindPro} method for short, is an extension of the wind sensing algorithm presented in \cite{gonzalez2019sensing} to measure wind velocity in hovering flight. The extension of the wind sensing algorithm incorporates dynamic rigid body models characterized from system identification for equilibrium flight conditions corresponding to steady ascent rates ranging from 0 to 2~m/s. The models from system identification were used to estimate the wind velocity in the vicinity of ground-based in situ and remote atmospheric sensors. Quadrotor wind estimates and wind measurements from ground-based atmospheric sensors were then compared to determine the accuracy of the DRBWindPro method.
The organization of this paper is as follows. Section~\ref{sec:materials_and_methods} introduces materials and methods used for model-based wind estimation. This section includes the formulation of aircraft dynamics, system identification of aircraft models, and the design of a state observer for wind estimation. The ground-based wind measurement methods are described in Section~\ref{sec:experimental_validation_of_wind_estimates}. In Section~\ref{sec:results} results from system identification experiments and comparison of multirotor wind velocity measurements with ground-based measurements are presented. Section~\ref{sec:discussion} presents a thorough discussion of results from system identification and from comparing multirotor and ground-based wind measurements. Finally, a summary of findings and future work to extend the utility of multirotor UAS for wind sensing are presented in Section~\ref{sec:conclusion}.
\section{Materials and Methods}
\label{sec:materials_and_methods}
\subsection{Modeling Framework}
\label{ss:modeling_framework}
The equations of motion for a control-augmented (i.e., feedback-stabilized) quadrotor can be expressed as a system of first-order, nonlinear, time-invariant ordinary differential equations~\cite{gonzalez2017measuring}:
\begin{equation} \bm{\dot{x}} = \bm{f}(\bm{x},\bm{u},\bm{w}(t,\bm{x})),\hspace{1cm} \bm{x}(t_0) = \bm{x}_0\end{equation}
relating the rate of change $\bm{\dot{x}}$ of the vehicle's 12-dimensional state $\bm{x}$ (i.e., position, attitude, velocity, and angular velocity), to the state itself, the control inputs $\bm{u}$, and wind disturbances $\bm{w}(t,\bm{x})$ varying over time and space. Moreover, when the aircraft motion is modeled as a small perturbation from some equilibrium flight condition that corresponds to a constant vertical ascent speed denoted by $V_{z_\mathrm{eq}}$, the nonlinear dynamics describing the control-augmented motion of the quadrotor is well approximated by a linear model. As a result, one may infer wind velocity from wind-induced motion perturbations to a quadrotor employing estimation theory developed for linear systems.
Linear approximations of quadrotor dynamics for wind estimation are considered in this study for hovering and steady-ascending motions satisfying trim flight conditions. For a quadrotor, trim flight conditions are satisfied when both translational rates $\bm{v}$ and rotational rates $\bm{\omega}$ remain constant over time, i.e., $\bm{\dot{v}}\equiv \bm{0}$ and $\bm{\dot{\omega}}\equiv \bm{0}$. Linear approximations of quadrotor dynamics for hovering and ascending flight are in the form,
\begin{equation} \frac{d}{dt}\bm{{\tilde{x}}} = \bm{A}\bm{\tilde{x}}+ \bm{B}\bm{\tilde{u}}+\bm{\Gamma}\bm{w},\label{eqn:linearized_dynamics}\end{equation}
where the vectors $\bm{\tilde{x}} = \bm{x}-\bm{x}_{\rm eq}$ and $\bm{\tilde{u}} = \bm{u}-\bm{u}_{\rm eq}$ denote, respectively, small deviations in the state and input vectors from their steady-state values. Additionally, the state matrix $\bm{A}\in\mathbb{R}^{12\times12}$ models unforced dynamics, the input matrix $\bm{B}\in\mathbb{R}^{12\times4}$ characterizes applied forcing, and the disturbance matrix $\bm{\Gamma}\in \mathbb{R}^{12\times3}$ captures wind-induced perturbations. This model form is used to sense wind velocity at different steady motion conditions (i.e., different steady ascent rates $V_{z_{\rm eq}}$) using state estimation once state, input, and disturbance matrices are known.
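The perturbation model of Eq.~(\ref{eqn:linearized_dynamics}) can be propagated numerically with a simple forward-Euler step. A minimal sketch with the dimensions given above; the matrices $\bm{A}$, $\bm{B}$, and $\bm{\Gamma}$ are placeholders to be filled in by system identification, and the integration scheme is our choice:

```python
import numpy as np

def propagate(A, B, Gamma, x0, u_seq, w_seq, dt):
    """Forward-Euler integration of dx/dt = A x + B u + Gamma w about a
    trim condition (x: 12 states, u: 4 inputs, w: 3 wind components)."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for u, w in zip(u_seq, w_seq):
        x = x + dt * (A @ x + B @ u + Gamma @ w)
        trajectory.append(x.copy())
    return np.array(trajectory)
```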
\subsection{Aircraft System Identification}
\label{ss:aircraft_system_identification}
Aircraft system identification is used to characterize the state and input matrices $\bm{A}$ and $\bm{B}$ for a quadrotor flying in still-air conditions (i.e., $\bm{w}(\bm{x},t) \approx \bm{0}$ m/s). In general, this modeling approach is a multi-faceted process that relies on input-output flight test data to characterize either bare-airframe or control-augmented dynamic models for an aircraft. Figure~\ref{fig:closed-loop_open-loop} shows a schematic of the inputs $\bm{u}$ and outputs $\bm{y}$ used to identify bare-airframe and control-augmented models. A bare-airframe model, assuming actuator dynamics to be negligible, is identified using control signals from the flight controller $\bm{\mu}_{\rm ctrl}$ and the vehicle's measured dynamic response $\bm{y}$. A control-augmented model, alternatively, is identified using the reference signal $\bm{\delta}_{\rm r}$ from pilot-induced joystick commands and the vehicle's measured dynamic response $\bm{y}$. Which model is identified depends on the application: for wind estimation with an off-the-shelf quadrotor, we use the latter because it does not require knowledge of the onboard flight controller architecture.
\begin{figure}[tbh!]
\centering
\includegraphics[width = 11cm]{ControlAugmented.jpg}
\caption{A schematic of input-output signals for closed-loop and open-loop mappings. }
\label{fig:closed-loop_open-loop}
\end{figure}
The quadrotor models from system identification are for steady-state equilibrium flight conditions corresponding to hovering and steady ascending flight: $V_{z_{\rm eq}} = \{0.0,0.5,1.0,1.5,2.0\}$~m/s. The identification of each model involved separately determining four sub-models that describe the plunge, yaw, roll, and pitch dynamics of the quadrotor; see Figure~\ref{fig:quadrotor_modes}. In this process, stepwise regression was used first to determine the parameter structure of each model. The results from stepwise regression were then used to estimate model parameters with an output-error algorithm. This approach to system identification was used to minimize the set of parameters being estimated at one time and to avoid overparameterized models.
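The parameter-estimation step can be illustrated with an equation-error least-squares fit for one sub-model. The roll example $\dot{p} = L_p\,p + L_{\delta}\,\delta$ and the parameter names are hypothetical; the actual regressor sets come from stepwise regression, and output-error refinement would follow this initial fit:

```python
import numpy as np

def equation_error_fit(x_dot, regressors):
    """Least-squares (equation-error) estimate of linear model parameters
    theta from measured state derivatives: x_dot ~= regressors @ theta."""
    theta, *_ = np.linalg.lstsq(regressors, x_dot, rcond=None)
    return theta
```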
\begin{figure}[b]
\centering
\includegraphics[width = 13cm]{quadrotormode1.jpg}
\caption{Quadrotor plunge, yaw, roll, and pitch modes }
\label{fig:quadrotor_modes}
\end{figure}
\subsubsection{Multirotor UAS Platform}
\label{Multirotor_UAS_Platform}
The multirotor UAS used to measure the wind velocity is an off-the-shelf 3DR Solo quadrotor shown in Figure~\ref{fig:3DR Solo}. This aircraft is 25 cm tall with a 46 cm diagonal between motor shafts. Fully equipped with a lithium polymer battery pack and a 3-axis camera gimbal, the quadrotor weighs 1.5 kg and has a payload capacity of 0.5 kg. The propellers used with the quadrotor are a Master Airscrew $10\times 4.5$ propeller set. The quadrotor's autopilot is a Pixhawk 2.1 Green Cube manufactured by ProfiCNC. The autopilot operates using open-source Arducopter firmware and is compatible with MissionPlanner and Solex telemetry software. On board the Pixhawk 2.1 Green Cube are the sensors listed in Table \ref{table:pixhawk-integrated_sensors} that are part of the autopilot's attitude and heading reference system (AHRS).
\begin{table}[tbh!]
\centering
\begin{tabular}{ccccccccccccccc}
\toprule
\multirow{2}{*}{State Measurement} & \multirow{2}{*}{State Variables} &&
\multicolumn{4}{c}{ Sensor Type \& Sampling Rate }\\
\cline{4-7} \
&&& \multicolumn{2}{c}{ Direct } & \multicolumn{2}{c}{ Indirect } \\
\toprule \midrule
\multirow{2}{*}{Position} & \multirow{2}{*}{ $ \{x,y,z\}$} & & \multirow{2}{*}{GPS} & \multirow{2}{*}{5 Hz} & Barometer & 8 Hz \\ &&&&& Extended Kalman Filter & 8 Hz \\\midrule
\multirow{3}{*}{Attitude} & \multirow{3}{*}{$\{\phi,\theta,\psi \}$} &&\multirow{3}{*}{---}& \multirow{3}{*}{---} & Gyroscope & 18 Hz \\&&&&& Accelerometer & 18 Hz \\&&&&& Extended Kalman Filter & 8 Hz \\\midrule
Translational & \multirow{2}{*}{ $\{u,v,w \} $ } && \multirow{2}{*}{GPS} & \multirow{2}{*}{5 Hz} & Accelerometer & 18 Hz \\Velocity&&&&& Extended Kalman Filter & 8 Hz \\\midrule
Angular Velocity & $\{p,q,r\} $ && Gyroscope & 18 Hz & --- & --- \\\midrule
\bottomrule
\end{tabular}
\caption{State measurements from autopilot's AHRS.}
\label{table:pixhawk-integrated_sensors}
\end{table}
\begin{figure}
\centering
\subfigure[Quadrotor platform]{\includegraphics[width=50mm]{3DRSoloFlying.jpg}\label{subfig:sonic_anemometer}}\qquad
\subfigure[Quadrotor dimensions]{\includegraphics[width=50mm]{3DRSoloDimensionsFrontTop.jpg}\label{subfig:SoDAR}}
\caption{ The 3DR Solo quadrotor platform and its dimensions. }
\label{fig:3DR Solo}
\end{figure}
\subsubsection{System Identification Flight Testing}
\label{sss:system_identification_flight_testing}
System identification flight experiments were conducted outdoors in an open field adjacent to the Virginia Tech Kentland Experimental Aircraft Systems (KEAS) Laboratory to characterize quadrotor linear models for wind estimation. The flight experiments were designed to identify models approximating the quadrotor dynamics about the equilibrium flight conditions corresponding to $V_{z_{\rm eq}} = \{0.0, 0.5, 1.0, 1.5, 2.0\}$ m/s. The experiments required exciting the aircraft from each flight equilibrium in calm atmospheric conditions (i.e., $\bm{w}(\bm{x},t) \approx \bm{0}$ m/s) to minimize the impact of exogenous excitations on the system identification process. The input-output measurements used for system identification consisted of pilot-induced, sinusoidal joystick commands and the vehicle's measured dynamic response.
The system identification experiments were performed in two parts. A first set of experiments was conducted to identify the quadrotor's hovering flight dynamics. This required exciting the quadrotor's plunge, yaw, roll, and pitch dynamics, shown in Figure~\ref{fig:quadrotor_modes}, from equilibrium flight.
A second set of experiments was conducted to identify quadrotor models for constant ascent rates varying between 0.5 and 2 m/s. This involved exciting the quadrotor's roll and pitch dynamics from equilibrium flight conditions corresponding to $V_{z_{\rm eq}}>0$. For the latter case, the plunge and yaw dynamics of the quadrotor were assumed to be well approximated by models identified for hovering flight considering that the vehicle's response to wind perturbations in steady-ascending flight is dominated by roll and pitch motions. Measurements from both sets of system identification experiments were then used to identify the model structures and parameter estimates approximating the quadrotor's dynamics for all five operating conditions specified by $V_{{z}_{\rm eq}}$.
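The pilot-induced sinusoidal excitations can be approximated in simulation by a swept-sine (chirp) input. The sketch below is illustrative only; the amplitude, frequency band, and duration are hypothetical values, not the settings used in the flight experiments.

```python
import numpy as np

def sweep_excitation(t, amplitude=0.3, f0=0.1, f1=2.0, T=20.0):
    """Linear swept-sine (chirp) excitation from f0 to f1 Hz over T seconds."""
    k = (f1 - f0) / T  # chirp rate, Hz/s
    return amplitude * np.sin(2.0 * np.pi * (f0 * t + 0.5 * k * t**2))

# Example: a 20-second sweep sampled at 100 Hz
t = np.linspace(0.0, 20.0, 2001)
u = sweep_excitation(t)
```

A swept-sine input of this form excites the vehicle over a band of frequencies, which helps the regression distinguish stiffness and damping effects in the identified models.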
\subsubsection{Model Structure Determination}
\label{ss:model_structure_determination}
The parameter structure of each model was determined from input-output measurements using the stepwise regression algorithm described in~\cite{klein2006aircraft}. In this approach, a set of postulated regressors, $\bm{\chi} = \{\chi_1,\chi_2,\cdots,\chi_n\}$, is tested one regressor at a time to determine which ones significantly improve the fit of the model
\begin{equation} z(k) = a_{0}+\sum_{i = 1}^{m}a_i\chi_{i}(k),\hspace{1cm} k= 1,2,\cdots,N\end{equation}
where $z$ is the quadrotor's measured response, $a_0$ is the model bias, $a_1,\cdots,a_m$ are the model coefficients associated with the $m$ selected regressors, and $N$ is the number of measurement samples. How well each model structure fits the observed data as regressors are added or removed is determined using the $F_0$ statistic and the coefficient of determination $R^2$. The $F_0$ statistic quantifies how much each regressor contributes to the fit of the model; the coefficient of determination quantifies how well the model output matches the measured data. Using both metrics, a total of four parameter structures were identified to characterize the quadrotor's plunge, yaw, roll, and pitch dynamics.
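A single step of this regressor-selection test can be sketched with ordinary least squares. The function names and the synthetic data are illustrative; the partial-$F$ computation shown is the standard form, not a reproduction of the authors' implementation.

```python
import numpy as np

def fit_metrics(X, z):
    """Least-squares fit z ~ X a; return residual sum of squares and R^2."""
    a, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ a
    ss_res = float(resid @ resid)
    ss_tot = float(((z - z.mean()) ** 2).sum())
    return ss_res, 1.0 - ss_res / ss_tot

def partial_F(X_base, X_cand, z):
    """F0 statistic for adding one candidate regressor to the base model:
    F0 = (SS_res,base - SS_res,new) / (SS_res,new / (N - p))."""
    N = len(z)
    ss0, _ = fit_metrics(X_base, z)
    ss1, r2 = fit_metrics(np.column_stack([X_base, X_cand]), z)
    p = X_base.shape[1] + 1          # parameter count after adding the regressor
    F0 = (ss0 - ss1) / (ss1 / (N - p))
    return F0, r2
```

A candidate regressor is retained when its $F_0$ value exceeds a chosen significance threshold and the resulting $R^2$ improves appreciably.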
\subsubsection{Parameter Estimation}
\label{ss:mm_parameter_estimation}
The model structures determined from stepwise regression were used to initialize the estimation of model parameters using the output error algorithm described in~\cite{klein2006aircraft}. The output error algorithm estimates model parameters using the output of the linear aircraft model described by Equation~\reff{eqn:linearized_dynamics} in still air conditions and the $N$ sample points of measured flight data, which are assumed to be corrupted by sensor noise $\bm{\eta}$. The model and measurements used by the output error method are summarized below:
\begin{eqnarray}
\frac{d}{dt}\bm{\tilde x} &=& \bm{A}\bm{\tilde{x}}+\bm{B}\bm{\tilde{u}},\hspace{1cm} \tilde{x}(0) = x_0 \label{eqn:oe_state_model}\\ \bm{y} &=& \bm{C}\bm{\tilde{x}}+\bm{D}{\bm{\tilde{u}}} \label{eq:oe_output_model}\\
\bm{z}(k) &=& \bm{y}(k)+\bm{\eta}(k)\hspace{1cm} k = 1,2,\cdots,N \label{eqn:oe_measurement_model}
\end{eqnarray}
This formulation of the output error method assumes that the model being identified is free of process noise, making numerical propagation of state measurements possible. Moreover, output error parameter estimation assumes flight measurements to be corrupted with uncorrelated, zero-mean Gaussian noise $\bm{\eta} \sim \mathcal{N}(\bm{0},\bm{R}_{\rm Cov})$ such that the covariance matrix of measurement noise is diagonal,
\[\mathrm{Cov}(\bm{\eta}(k))=E[\bm{\eta}(k)\bm{\eta}^{T}(k)]=\bm{R}_{\rm Cov}\]
Using this framework, parameter estimates are tuned iteratively while minimizing the cost function,
\begin{equation} J = \tfrac{1}{2}\sum^{N}_{k=1}[\bm{y}(k)-\bm{z}(k)]^T\bm{R}_\mathrm{Cov}^{-1}[\bm{y}(k)-\bm{z}(k)]\end{equation}
which is the uncertainty-weighted residual between the model output and observation measurements.
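Evaluating this cost for a candidate parameter set can be sketched as follows; this is a minimal numerical form of the weighted sum above, not the authors' optimization code.

```python
import numpy as np

def output_error_cost(y_model, z_meas, R_cov):
    """Uncertainty-weighted output-error cost
    J = 1/2 * sum_k (y_k - z_k)^T R_Cov^{-1} (y_k - z_k).

    y_model, z_meas: (N, n_y) arrays of model outputs and measurements.
    R_cov: (n_y, n_y) diagonal measurement-noise covariance.
    """
    e = np.asarray(y_model, dtype=float) - np.asarray(z_meas, dtype=float)
    R_inv = np.linalg.inv(R_cov)
    # Sum e_k^T R_inv e_k over all N samples
    return 0.5 * float(np.sum((e @ R_inv) * e))
```

An iterative estimator (e.g., Gauss-Newton) would repeatedly evaluate this cost while adjusting the model parameters that generate `y_model`.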
Employing the output error approach, three sets of parameters were estimated and averaged to characterize quadrotor models for hovering and steady vertical ascent conditions. The quadrotor models characterized from averaged parameter estimates were validated using a separate flight test data set collected during system identification experiments.
\subsubsection{Model Validation}
\label{sss:mm_model_validation}
Linear models approximating steady-flight quadrotor dynamics were validated using input-output data collected separately during system identification flight experiments. The validation process involved comparing model outputs and state measurements corresponding to pilot-generated excitations using the root-mean-squared error (RMSE) metric: \[RMSE = \sqrt{\frac{1}{N}\sum_{k=1}^{N}(y(k)-z(k))^2}\] where $y$ is the model output, $z$ is the state measurement, and $N$ is the measurement sample size. In general, small RMSE values are indicative of accurate parameter estimates. Results from the RMSE quantification were used to assess the goodness of fit of each model prior to designing a state observer for wind estimation.
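The RMSE metric above has a direct numerical form; a minimal sketch:

```python
import numpy as np

def rmse(y_model, z_meas):
    """Root-mean-squared error between model output and measurements."""
    y = np.asarray(y_model, dtype=float)
    z = np.asarray(z_meas, dtype=float)
    return float(np.sqrt(np.mean((y - z) ** 2)))
```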
\subsection{Observer Synthesis}
\label{ss:observer_synthesis}
To synthesize observers for wind velocity estimation, the dynamic rigid body wind sensing method presented in \cite{gonzalez2019sensing} was adapted. Therefore, assuming absolute measurements from the GPS antenna and AHRS on board the quadrotor to be available, the output equation, as in \cite{gonzalez2019sensing}, is of the form
\[
\bm{y} = \mathbb{I}_{12} \tilde{\bm{x}} + \begin{pmat}({.}) \bm{0}_3 \cr \bm{0}_3 \cr\- \mathbb{I}_3 \cr \bm{0}_3 \cr \end{pmat} \bm{V}_{\rm w}
\]
where output measurements of translational velocity are the summation of both air-relative and wind velocity (with identity and zero matrices written in short notation, e.g., $\mathbb{I}_{12} \in \mathbb{R}^{12\times 12}$). The quadrotor's output measurement and identified models were then used to formulate wind-augmented models for the set of operating conditions prescribed by $V_{z_{\rm eq}}$.
Wind velocity was estimated using the quadrotor models identified from system identification in a state observer framework. State observers were developed based on wind-augmented models corresponding to each of the five equilibrium flight conditions. Each wind-augmented model is obtained by reformulating~\reff{eqn:linearized_dynamics} such that the wind disturbance is part of the wind-augmented state vector: $\bm{x}_{\rm A} = [\bm{\tilde{x}}^T,\bm{w}^T]^T$. Here, as in \cite{gonzalez2019sensing}, the wind velocity was assumed to vary slowly relative to the dynamics of the quadrotor such that $\frac{d}{dt}\bm{w} \approx \bm{0}$. Therefore, wind-augmented dynamic models corresponding to each flight equilibrium were defined as follows:
\begin{equation}
\frac{d}{dt}\bm{x}_{\rm A} =
\underbrace{\begin{pmat}({|}) \bm{A} & \bm{\Gamma} \cr\-
\bm{0}_{3 \times 12} & \bm{0}_3 \cr \end{pmat}}_{\bm{A}_{\rm A}} \bm{x}_{\rm A}
+ \underbrace{\begin{pmat}({}) \bm{B} \cr\- \bm{0}_{3 \times 4} \cr \end{pmat}}_{\bm{B}_{\rm A}} \tilde{\bm{u}} \hspace{1cm}
\bm{y} = \underbrace{\begin{pmat}({...|})
\mathbb{I}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 \cr
\bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 \cr
\bm{0}_3 & \bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 & \mathbb{I}_3 \cr
\bm{0}_3 & \bm{0}_3 & \bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 \cr
\end{pmat}}_{\bm{C}_{\rm A}} \bm{x}_{\rm A}
\label{eqn:AugmentedDynamics}
\end{equation}
where $\bm{A}_{\rm A}\in \mathbb{R}^{15\times15}$ is the wind-augmented state matrix, $\bm{B}_{\rm A}\in\mathbb{R}^{15\times4}$ is the wind-augmented input matrix, and $\bm{C}_{\rm A}\in\mathbb{R}^{12\times 15}$ is the wind-augmented output model.
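The block structure of the wind-augmented model can be assembled numerically. The sketch below uses placeholder sub-matrices; the actual $\bm{A}$, $\bm{B}$, and $\bm{\Gamma}$ come from the identified models, and the placement of the translational-velocity rows (indices 6:9) is assumed from the block structure of the output matrix above.

```python
import numpy as np

def augment_with_wind(A, B, Gamma):
    """Build wind-augmented matrices assuming d/dt w = 0.

    A: (12, 12) identified state matrix; B: (12, 4) input matrix;
    Gamma: (12, 3) wind-influence matrix.
    """
    A_aug = np.block([[A, Gamma],
                      [np.zeros((3, 12)), np.zeros((3, 3))]])   # (15, 15)
    B_aug = np.vstack([B, np.zeros((3, 4))])                    # (15, 4)
    # Output: all 12 states, with the translational-velocity block
    # (assumed here to be rows 6:9) summed with the wind states
    C_aug = np.hstack([np.eye(12), np.zeros((12, 3))])
    C_aug[6:9, 12:15] = np.eye(3)
    return A_aug, B_aug, C_aug
```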
An observability analysis was conducted to determine whether wind estimates can be constructed from the augmented dynamic model and output measurements. The system is observable if and only if the observability matrix defined below has full column rank.
\[
\bm{\mathcal{O}}(\bm{C}_{\rm A},\bm{A}_{\rm A})
= \begin{pmat}({})
\bm{C}_{\rm A} \cr
\bm{C}_{\rm A} \bm{A}_{\rm A} \cr
\vdots \cr
\end{pmat}
= \begin{pmat}({...|})
\mathbb{I}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 \cr
\bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 & \bm{0}_3 & \bm{0}_3 \cr
\bm{0}_3 & \bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 & \mathbb{I}_3 \cr
\bm{0}_3 & \bm{0}_3 & \bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 \cr\-
\bm{0}_3 & \bm{G}_w& \mathbb{I}_3 & \bm{0}_3 & \mathbb{I}_3 \cr
\bm{0}_3 & \bm{0}_3 & \bm{0}_3 & \mathbb{I}_3 & \bm{0}_3 \cr
\bm{0}_3 & \bm{G}_g & - d_w \bm{e}_3 \bm{e}_3^T & \bm{0}_3 & \bm{0}_3 \cr
\bm{0}_3 & \bm{0}_3 & \bm{D}_{m_v} & \bm{D}_{m_\omega} & \bm{0}_3 \cr\-
\vdots & \vdots & \vdots & \vdots & \vdots \cr
\end{pmat}
\]
The analysis shows that the observability matrix is full rank, i.e.,~$\mathrm{rank}\left[\bm{\mathcal{O}}(\bm{C}_{\rm A},\bm{A}_{\rm A})\right] =15$. Therefore, given a suitable observer gain matrix $\bm{G}_\mathrm{O}$, the state estimates of the following observer converge to the state of the system in Equation~\reff{eqn:AugmentedDynamics}:
\begin{equation}
\frac{d}{dt}~\widehat{\bm{x}_{\rm A}} =
\bm{A}_{\rm A} \widehat{\bm{x}_{\rm A}}
+ \bm{B}_{\rm A} \bm{u}
+ \bm{G}_\mathrm{O}\left( \bm{y} - \bm{C}_{\rm A} \widehat{\bm{x}_{\rm A}} \right)
\label{eqn:Observer} \end{equation}
Because the augmented state vector includes the wind velocity, it follows that the state estimator in Equation~\reff{eqn:Observer} provides a convergent estimate of $\bm{w}$, provided the underlying assumptions hold (e.g., small perturbations from the nominal state).
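Both the rank condition and the observer update can be checked numerically. The sketch below is generic, with placeholder matrices and a simple forward-Euler discretization; it is not the onboard implementation, and the observer gain would in practice be designed (e.g., by pole placement) for the identified matrices.

```python
import numpy as np

def observability_matrix(C, A):
    """Stack C, CA, ..., CA^(n-1); return the matrix and its numerical rank."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return O, int(np.linalg.matrix_rank(O))

def observer_step(x_hat, u, y, A_aug, B_aug, C_aug, G_obs, dt):
    """One forward-Euler step of the Luenberger observer
    d/dt x_hat = A_A x_hat + B_A u + G_O (y - C_A x_hat).

    The wind estimate is the last three entries of the augmented state.
    """
    dx = A_aug @ x_hat + B_aug @ u + G_obs @ (y - C_aug @ x_hat)
    x_hat = x_hat + dt * dx
    return x_hat, x_hat[-3:]
```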
\section{Experimental Validation of Wind Estimates}
\label{sec:experimental_validation_of_wind_estimates}
\subsection{Field Experiment Setup}
\label{ss:field_experiment_setup}
Field experiments were performed at the KEAS Laboratory on June 5th, 2018 from 9:00 to 20:30 EDT to validate wind estimates from the quadrotor both hovering (i.e., $V_{z_{\rm eq}} = 0$ m/s) and ascending vertically at constant rates, $V_{z_{\rm eq}} =\{0.5,1.0,1.5,2.0\}$~m/s. During the experiment, data were collected from the in situ and remote sensors shown in Figure~\ref{fig:atmospheric_sensors}. The in situ sensor shown in Figure~\ref{subfig:Gill_SA_Sensor} is a Gill WindSonic (SA) that was mounted on top of a telescoping tower to measure wind velocity at 10~m above ground level (AGL). The remote sensor shown in Figure~\ref{subfig:ASC_SoDAR_Sensor} is an ASC4000i SoDAR (LR SoDAR) that was used to measure wind velocity from 30 to 400 m AGL. The remote sensor shown in Figure~\ref{subfig:Remtech_SoDAR_Sensor} is a Remtech SR-SoDAR (SR SoDAR) that measured wind velocity profiles from 10 to 200~m AGL. The performance characteristics of all three sensors are listed in Table~\ref{table:independent_sensors}. The ground setup of the three sensors and the location of flight operations are shown in Figure~\ref{fig:experiment_setup}. Using this sensor setup, quadrotor wind velocity estimates were validated.
\begin{figure}[tbh!]
\centering
\subfigure[Gill WindSonic (SA)]{\includegraphics[width = 4.5 cm]{WindSonic.jpg}
\label{subfig:Gill_SA_Sensor}}\qquad
\subfigure[ASC LR-SoDAR (LR SoDAR)]{\includegraphics[width = 4.5 cm]{LR-SoDAR.jpg}\label{subfig:ASC_SoDAR_Sensor}}\qquad
\subfigure[Remtech PA-0 (SR SoDAR)]{\includegraphics[width = 4.5cm]{SR-SoDAR.jpg}\label{subfig:Remtech_SoDAR_Sensor}} \qquad
\subfigure[Experimental Setup ]{\includegraphics[width = 12 cm]{KEASFieldCampaignC.jpg}
\label{fig:experiment_setup}}\qquad
\caption{Wind sensors used for comparison of quadrotor wind estimates from 10 to 120 m AGL.}
\label{fig:atmospheric_sensors}
\end{figure}
\begin{table}[tbh!]\footnotesize
\centering
\resizebox{15.5cm}{!}{%
\begin{tabular}{ccccccccccccccc}
\toprule
\multirow{2}{*}{Make/Model} & \multirow{2}{*}{Descriptor}& \multirow{2}{*}{Range} &\multicolumn{2}{c}{Resolution}&&
\multicolumn{2}{c}{ Accuracy }\\
\cline{4-5} \cline{7-8}\
&&& {Spatial} & {Temporal} && {Wind Speed} & { Wind Direction} \\
\bottomrule \midrule
ASC &LR-SoDAR & 30--410 m & 5 m &30 s & &$< 5$ m/s above 2 m/s &$ 2^\circ$ above 2 m/s \\
Remtech PA-0 & SR-SoDAR& 10--200 m & 10 m & 300 s&& $< 0.2$ m/s above 6 m/s & $3^\circ$ above 2 m/s \\
Gill WindSonic &SA& N/A & N/A & 0.25 s && $< 1.0\%$ at 12 m/s&$ 0.5^\circ$ at 12 m/s \\
\midrule
\bottomrule
\end{tabular}}
\caption{Accuracy specifications for sonic anemometer and SoDARs}
\label{table:independent_sensors}
\end{table}
\subsection{Ground-Based Observations}
\label{ss:Ground-Based Observations}
To validate quadrotor wind estimates with ground-based sensors, sonic anemometer and SoDAR observations were first compared from mid-morning until evening on June 5th, 2018. Wind measurements from the SA and SR-SoDAR were sampled at 10 m AGL from 13:00 to 20:30 EDT. Measurements from the LR-SoDAR were not included in comparisons at 10 m AGL since it only measures above 30 m AGL. Wind observations from the SR-SoDAR and LR-SoDAR were also compared at 30, 70, and 120 m AGL from 9:30 to 20:30 EDT. In this process, sonic anemometer and LR-SoDAR measurements were averaged over 300-second intervals for uniform comparison with SR-SoDAR measurements. The agreement across sensors was quantified using the mean bias error (MBE) and RMSE of wind observations as metrics, while assuming wind spatial variations to be negligible over the lateral footprint of the sensors; see Figure~\ref{fig:atmospheric_sensors}. Results from the comparison were used to aid the assessment of quadrotor wind estimates.
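The block averaging and MBE computation used in such sensor comparisons can be sketched as follows. The function names and window lengths are illustrative; note also that wind-direction comparisons additionally require circular (vector) averaging, which this scalar sketch omits.

```python
import numpy as np

def block_average(x, samples_per_block):
    """Average a time series over fixed-length blocks (e.g., 300-s windows)."""
    n = len(x) // samples_per_block * samples_per_block
    return np.asarray(x[:n], dtype=float).reshape(-1, samples_per_block).mean(axis=1)

def mbe(test, reference):
    """Mean bias error of a test sensor relative to a reference sensor."""
    d = np.asarray(test, dtype=float) - np.asarray(reference, dtype=float)
    return float(d.mean())
```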
\subsection{Wind Data Measurements}
\label{ss:wind_data_measurements}
Following the comparison among ground-based sensors, wind observations from the sonic anemometer and SoDAR sensors were used to validate quadrotor wind estimates from hovering and steady-ascending flight operations. Wind estimates from the quadrotor hovering at 10 m AGL were validated using SA and SR-SoDAR wind observations. Quadrotor wind profiles extending from 10 to 120 m AGL were validated using wind measurements from the SR- and LR-SoDARs. Considering that the quadrotor samples the atmosphere in continuous vertical ascent, quadrotor wind velocity estimates were averaged over 10-m intervals for comparison with SoDAR measurements. Outcomes from both comparisons were used to benchmark the performance of quadrotor wind estimation relative to conventional atmospheric sensors.
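The 10-m averaging of profile estimates can be sketched as altitude binning. The bin width and data below are illustrative, not the flight data.

```python
import numpy as np

def bin_by_altitude(alt, wind, bin_width=10.0):
    """Average wind estimates over fixed altitude bins (e.g., 10 m)
    for comparison against SoDAR range gates."""
    alt = np.asarray(alt, dtype=float)
    wind = np.asarray(wind, dtype=float)
    edges = np.arange(alt.min(), alt.max() + bin_width, bin_width)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (alt >= lo) & (alt < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            means.append(wind[mask].mean())
    return np.array(centers), np.array(means)
```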
\section{Results}
\label{sec:results}
\subsection{System Identification}
\subsubsection{Model Structure Determination}
\label{ss:model_structure_determination_results}
The quadrotor flight dynamic model for hovering and steady ascending flight is decomposed into four sub-models that describe plunging, yawing, rolling, and pitching motion. Table~\ref{table:model_structure_results} shows all four model forms and associated parameters. The plunge model is a system of two first-order ordinary differential equations parameterized by propulsive and damping parameters. The yaw model is a system of two first-order ordinary differential equations with rotational damping and torque parameters. Finally, the roll and pitch models are systems of four first-order ordinary differential equations.
\begin{table}[tbh!]
\centering
\begin{tabular}{ccl}
\toprule
Model && \multicolumn{1}{c}{Parameter Structure} \\
\toprule \midrule
Plunge && $ \left(\begin{array}{c} \dot{z}\\ \dot{w}\end{array}\right) = \left(\begin{array}{cc} 0 & 1\\ 0 & Z_{w} \end{array}\right)\left(\begin{array}{c} z \\ w\end{array}\right)+\left(\begin{array}{c} 0 \\ Z_{\delta}\end{array}\right)\delta_{\rm plunge}$\\ \\
Yaw & & $\left(\begin{array}{c} \dot{\psi}\\ \dot{r}\end{array}\right) = \left(\begin{array}{cc}0 & 1\\ N_\psi & N_{r} \end{array}\right)\left(\begin{array}{c}\psi \\ r\end{array}\right)+\left(\begin{array}{c} 0 \\ N_{\delta}\end{array}\right)\delta_{\rm yaw}$\\ \\
Roll & &$\left(\begin{array}{c}\dot{y}\\\dot{\phi} \\\dot{v}\\\dot{p} \end{array}\right) = \left(\begin{array}{cccc} 0& 0&1& 0\\0 &0 &0 &1 \\ 0&Y_{\phi} &Y_{v}&0\\0& L_{\phi}& 0& L_{p}\end{array}\right) \left(\begin{array}{c}y\\\phi \\v\\p \end{array}\right)+\left(\begin{array}{c}0\\ 0\\0\\L_{\delta} \end{array}\right)\delta_{\rm roll}$ \\ \\
Pitch & & $\left(\begin{array}{c}\dot{x}\\ \dot{\theta} \\ \dot{u} \\ \dot{q} \end{array}\right) = \left(\begin{array}{cccc} 0&0 &1 & 0\\ 0 & 0 & 0 & 1\\0 &X_{\theta} &X_{u}&0\\ 0&M_{\theta}&0&M_{q}\end{array}\right) \left(\begin{array}{c}x\\ \theta \\ u \\ q \end{array}\right)+\left(\begin{array}{c}0\\ 0 \\ 0 \\ M_{\delta} \end{array}\right)\delta_{\rm pitch}$\\
\midrule \bottomrule
\end{tabular}
\caption{The plunge, yaw, roll and pitch model structures of the quadrotor determined from system identification flight experiments and step-wise regression algorithm presented in~\cite{klein2006aircraft}.}
\label{table:model_structure_results}
\end{table}
\subsubsection{Parameter Estimation}
\label{ss:parameter_estimation}
Three sets of quadrotor model parameters were estimated for each of five equilibrium flight conditions. Each set of parameters was estimated using the model structures determined from stepwise regression and the output error algorithm described in Section~\ref{ss:aircraft_system_identification}. Model parameters for the plunge, yaw, roll, and pitch model structures were first estimated for hovering flight conditions (i.e., $\bm{v} = \bm{0}$ and $\bm{\omega}=\bm{0}$). Subsequently, roll and pitch model parameters were estimated for constant vertical ascent flight conditions varying from 0.5 to 2.0~m/s. The plunge and yaw model parameters were assumed to be invariant with vertical ascent rate. Model parameter estimates for each flight equilibrium were averaged to obtain nominal models for wind estimation. Averaged parameter estimates and standard error (SE) values for plunge and yaw models are listed in Table~\ref{table:plunge_yaw_parameters}. Additionally, averaged roll and pitch model parameters and standard error values are listed for hovering and ascending flight conditions in Tables~\ref{table:oe_roll_parameters} and \ref{table:oe_pitch_parameters}, respectively.
\begin{table}[tbh!]
\centering
\begin{tabular}{cccccccccccccc}
\toprule
\multirow{2}{*}{Speed} & \multicolumn{4}{c}{Plunge Model }&&
\multicolumn{4}{c}{ Yaw Model } \\
\cline{2-5} \cline{7-10} \\[-1em]
&Parameter & Value & SE & Units & & Parameter & Value & SE & Units \\
\toprule \midrule
\multirow{3}{*}{0-2 ~m/s}&$Z_w$ & -0.55 &0.28 &1/s &&$N_{\psi}$& -1.71& 0.41 &1/s$^2$\\&$Z_\delta$ & -1.71 &0.79 & 1/kg & &$N_r$&-0.84 & 0.53& 1/s \\
&-- & -- &-- & --&& $N_\delta$ &2.41 & 1.18 &$1/(\rm kg\cdot m^2)$ \\
\midrule
\bottomrule
\end{tabular}
\caption{Nominal plunge and yaw model parameter estimates.}
\label{table:plunge_yaw_parameters}
\end{table}
\begin{table}[tbh!]
\centering
\resizebox{15.5cm}{!}{\begin{tabular}{cccccccccccccccccccccccccc}
\toprule
Roll Model &\multicolumn{2}{c}{ 0.0 m/s }&& \multicolumn{2}{c}{ 0.5 m/s }&& \multicolumn{2}{c}{ 1.0 m/s }&& \multicolumn{2}{c}{ 1.5 m/s }&& \multicolumn{2}{c}{ 2.0 m/s }&&\multirow{2}{*}{Units}\\
\cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15} \\[-1em]
Parameters&Value& SE & &Value& SE& & Value & SE && Value& SE&& Value& SE\\
\toprule\midrule
$Y_\phi$ & 3.28 &0.37 && 2.91 & 0.34&& 4.73 & 0.87 && 4.68 & 0.21&&6.62&0.63&& m/s$^2$ \\
$Y_v$ & -0.49 &0.68 && -0.31 & 0.04&& -0.70 & 2.33 && -0.62 & 0.14&&-1.06&0.25&& 1/s \\
$L_\phi$ & -4.54 &4.17 && -3.95 & 0.12&& -5.87 & 2.55 && -4.07 & 0.26&&-5.92&0.10&& 1/s$^2$\\
$L_p$ & -1.09 &2.62 && -1.15 & 0.22&& -1.62 & 1.99 && -0.82 & 0.23&&-1.80&1.17&& 1/s\\
$L_\delta$ & 4.62 &3.55 && 5.76 & 0.32&& 8.52 & 2.28 && 6.27 & 0.31&&9.68&0.65&& $1/(\rm kg\cdot m^2)$ \\
\midrule
\bottomrule
\end{tabular}}
\caption{Nominal roll model parameter estimates.}
\label{table:oe_roll_parameters}
\end{table}
\begin{table}[tbh!]
\centering
\resizebox{15.5cm}{!}{\begin{tabular}{cccccccccccccccccccccccccc}
\toprule
Pitch Model &\multicolumn{2}{c}{ 0.0 m/s }&& \multicolumn{2}{c}{ 0.5 m/s }&& \multicolumn{2}{c}{ 1.0 m/s }&& \multicolumn{2}{c}{ 1.5 m/s }&& \multicolumn{2}{c}{ 2.0 m/s }&\multirow{2}{*}{Units}\\
\cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} \cline{14-15} \\[-1em]
Parameters&Value& SE & &Value& SE& & Value & SE && Value& SE&& Value& SE\\
\toprule\midrule
$X_\theta$ & -4.03 &0.10 &&-3.94 &0.12 && -6.27 &0.78 &&-5.48 &0.14 && -8.02&0.68& m/s$^2$ \\
$X_u$ & -0.71 &0.56 &&-0.61 &0.08 && -0.80 &0.19 && -0.67 &0.08 && -1.24&0.28& 1/s \\
$M_\theta$ & -6.23 &1.67 &&-5.20 &0.11 && -8.63 &2.64 && -4.44 &0.23 && -7.78&2.69& 1/s$^2$\\
$M_q$ & -1.46 &0.87 &&-1.42 &0.35 && -2.63 &0.65 && -1.27 &0.50 && -2.09&0.84& 1/s\\
$M_\delta$ & 6.61 &0.36 &&6.32 &0.28 && 10.80 &1.98 && 6.81 &0.40 && 10.70&0.64& $1/(\rm kg\cdot m^2)$ \\
\midrule
\bottomrule
\end{tabular}}
\caption{Nominal pitch model parameter estimates.}
\label{table:oe_pitch_parameters}
\end{table}
The dependence on vertical ascent rate was also characterized for the roll and pitch model parameters. Results from this characterization are shown in Figure~\ref{fig:quadrotor_roll_pitch_parameters}, where roll and pitch model parameters are plotted as a function of ascent rate. Each parameter estimate appears with absolute error bars, colored in black, representing the range of estimates obtained from the three experimental data sets. Orange-colored bars denote minimum and maximum values across all five ascent rates. Zeroth- and first-order polynomials were fit to the parameter estimates as a function of ascent rate; the zeroth-order fit provides a constant nominal value, while the first-order fit characterizes the trend in parameter values with respect to ascent rate. Note that only a subset of parameters exhibits clear trends with respect to ascent rate. It is possible that these local, small-perturbation models exhibit high sensitivity to ascent rate, as suggested by Figure~\ref{fig:quadrotor_roll_pitch_parameters}. If so, these results may suggest flight regimes to be avoided when estimating wind velocity from platform motion; regions of high parameter sensitivity may produce less accurate wind estimates.
For the aircraft and dynamic model considered here, the parameters vary less at lower ascent rates (0.5 m/s or less). Thus, one might expect more accurate wind measurements during slower climbs. It is possible, however, that the variation in parameter estimates is an artifact of the data collection method for system identification. At higher climb rates, it is more difficult to manually generate the rich and precisely timed excitation signals needed for model identification. An automated approach to system identification may improve the repeatability of parameter estimates.
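The zeroth- and first-order fits can be reproduced with a standard least-squares routine. The parameter values below are hypothetical stand-ins, not the estimates from Tables~\ref{table:oe_roll_parameters} and \ref{table:oe_pitch_parameters}.

```python
import numpy as np

# Illustrative: fit polynomials to a parameter estimate as a function
# of ascent rate (the parameter values here are hypothetical).
ascent_rate = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # m/s
param = np.array([-4.0, -3.9, -6.3, -5.5, -8.0])    # hypothetical estimates

const_fit = np.polyfit(ascent_rate, param, 0)    # zeroth order: nominal value
linear_fit = np.polyfit(ascent_rate, param, 1)   # first order: trend with rate
```

The degree-0 least-squares fit recovers the mean of the estimates, while the slope of the degree-1 fit summarizes the sensitivity of the parameter to ascent rate.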
\begin{figure}[tbh!]
\centering
\includegraphics[width =\textwidth]{ParameterEstimates.jpg}
\caption{Quadrotor roll and pitch model parameter estimates. }
\label{fig:quadrotor_roll_pitch_parameters}
\end{figure}
\subsubsection{Model Validation}
\label{ss:model_validation}
Models characterized from stepwise regression and output error methods were validated by comparing the model output with the aircraft's measured response to an excitation input. Agreement between the model output and measured response was quantified using the RMSE metric discussed in Section~\ref{sss:mm_model_validation}. Results from this validation are shown in Figure~\ref{fig:SID_validation} for the plunge, yaw, roll, and pitch models characterized for hovering flight ($V_{z_{\rm eq}} = 0$). Results from the RMSE assessment for the plunge and yaw models are shown in Table~\ref{table:roll_pitch_SID_validation_Results}. The RMSE results for the roll and pitch models associated with constant vertical ascent rates ranging between 0 and 2 m/s are also shown in Table~\ref{table:roll_pitch_SID_validation_Results}.
\begin{table}[tbh!]
\centering
\resizebox{15.5cm}{!}{%
\begin{tabular}{cccccccccccccccccc}
\toprule
\multirow{2}{*}{Ascent} &\multicolumn{3}{c}{ Plunge Model }&& \multicolumn{3}{c}{ Yaw Model }&& \multicolumn{3}{c}{ Roll Model }&&
\multicolumn{3}{c}{ Pitch Model} &\\
\cline{2-4} \cline{6-8} \cline{10-12} \cline{14-16} \\[-1em] Rate &Par. & RMSE & Units & & Par. & RMSE & Units & & Par. & RMSE & Units & & Par. & RMSE & Units \\
\toprule\midrule
\multirow{2}{*}{0 m/s} &\multirow{2}{*}{$w$}&\multirow{2}{*}{0.44}&\multirow{2}{*}{m/s}&& \multirow{2}{*}{$r$}&\multirow{2}{*}{2.59}&\multirow{2}{*}{rad/s}& &$v$ & 0.23 & m/s& &$u$ & 0.12 & m/s \\
&&&&&&&& &$p$ &0.39 & rad/s& &$q$ & 0.19 & rad/s \\ \midrule
\multirow{2}{*}{0.5 m/s}&\multirow{2}{*}{$w$}&\multirow{2}{*}{0.44}&\multirow{2}{*}{m/s}&& \multirow{2}{*}{$r$}&\multirow{2}{*}{2.59}&\multirow{2}{*}{rad/s}& &$v$ & 0.31 & m/s& &$u$ & 0.59 & m/s \\
& &&&&&&&& $p$ & 0.21 & rad/s& &$q$ & 0.31 & rad/s \\ \midrule
\multirow{2}{*}{1.0 m/s} &\multirow{2}{*}{$w$}&\multirow{2}{*}{0.44}&\multirow{2}{*}{m/s}&& \multirow{2}{*}{$r$}&\multirow{2}{*}{2.59}&\multirow{2}{*}{rad/s}& &$v$ & 0.73 & m/s& &$u$ & 0.38 & m/s \\
&&&&&&&& &$p$ & 0.90 & rad/s& &$q$ & 0.37 & rad/s \\ \midrule
\multirow{2}{*}{1.5 m/s}&\multirow{2}{*}{$w$}&\multirow{2}{*}{0.44}&\multirow{2}{*}{m/s}&& \multirow{2}{*}{$r$}&\multirow{2}{*}{2.59}&\multirow{2}{*}{rad/s}& &$v$ & 0.38 & m/s& &$u$ & 0.46 & m/s \\
& &&&&&&&&$p$ &0.48 & rad/s& &$q$ & 0.51 & rad/s \\\midrule
\multirow{2}{*}{2.0 m/s} &\multirow{2}{*}{$w$}&\multirow{2}{*}{0.44}&\multirow{2}{*}{m/s}&& \multirow{2}{*}{$r$}&\multirow{2}{*}{2.59}&\multirow{2}{*}{rad/s}& &$v$ &0.48 & m/s& &$u$ & 0.37 & m/s \\
&&&&&&&& &$p$ & 0.71 & rad/s& &$q$ & 0.28 & rad/s \\
\midrule
\bottomrule
\end{tabular}}
\caption{Validation results for plunge, yaw, roll and pitch models.}
\label{table:roll_pitch_SID_validation_Results}
\end{table}
\begin{figure}[tbh!]
\centering
\subfigure[Plunge model validation]{\includegraphics[width = 77mm]{PlungeValidation0ms.jpg}
\label{subfig:plunge_model0ms}}
\subfigure[Yaw model validation]{\includegraphics[width = 77mm]{YawValidation0ms.jpg}\label{subfig:yaw_model0ms}}\\
\subfigure[Roll model validation]{\includegraphics[width = 77mm]{RollValidation0ms.jpg}
\label{subfig:roll_model0ms}}
\subfigure[Pitch model validation]{\includegraphics[width = 77mm]{PitchValidation0ms.jpg}\label{subfig:pitch_model0ms}}
\caption{ Validation of plunge, yaw, roll, and pitch models identified for quadrotor hovering flight.}
\label{fig:SID_validation}
\end{figure}
\subsection{Comparison of Wind Velocity Measurements}
\subsubsection{Sonic Anemometer and SoDAR Comparison}
\label{s:SoDAR_and_Sonic_Anemometer_Comparison}
Sonic anemometer and SR-SoDAR wind observations were collected from 15:00 to 20:30~EDT to assess their agreement at 10 m AGL. Observations were compared using the mean bias error and RMSE metrics described in Section~\ref{ss:Ground-Based Observations}, with sonic anemometer observations as the reference. During the sampling period, prevailing winds were observed from the northwest with wind speeds ranging from 0 to 6 m/s, as shown in Figure~\ref{subfig:wind_measurements_10m}. Results from the comparison are tallied in Table~\ref{table:wind_velocity_observations_10m}. The mean bias error of SR-SoDAR wind speed and wind direction observations was measured to be 0.7 m/s and -0.8$^\circ$, respectively (see Figure~\ref{fig:comparison_bias_10m}). The corresponding RMSE of wind speed and wind direction measurements was determined to be 1.0 m/s and 19.0$^\circ$, respectively. These results were also used to assess the accuracy of wind estimates from the quadrotor hovering at 10 m AGL.
\begin{figure}[tbh!]
\centering
\subfigure[]{\includegraphics[width =\textwidth]{REMSonComp.jpg}\label{subfig:wind_measurements_10m}}
\subfigure[]{\includegraphics[width =\textwidth]{GILREMBias.jpg}\label{subfig:wind_bias_10m}}
\caption{ Comparison of wind speed and wind direction observations collected from the
sonic anemometer and SR-SoDAR at 10 m AGL on June 5th, 2018 from 16:30 to 20:30 EDT. }
\label{fig:comparison_bias_10m}
\end{figure}
\begin{table}[tbh!]
\centering
\begin{tabular}{cccccccccccccc}
\toprule
\multirow{2}{*}{Sensor} & \multirow{2}{*}{Height}& \multicolumn{3}{c}{Wind Speed}&\multirow{2}{*}{Units}&
\multicolumn{3}{c}{ Wind Direction} &\multirow{2}{*}{Units}\\
\cline{3-5} \cline{7-9} \\[-1em] & &Mean & MBE& RMSE& & Mean & MBE & RMSE \\
\toprule\midrule
SA &\multirow{2}{*}{10 m} &2.0 &\multirow{2}{*}{0.7}&\multirow{2}{*}{1.0} & m/s& 284.1 & \multirow{2}{*}{-0.8}&\multirow{2}{*}{19.0} & $^\circ$ \\
SR-SoDAR & &2.7 & & & m/s & 320.0& & & $^\circ$ \\
\midrule
\bottomrule
\end{tabular}
\caption{Comparison of wind speed and wind direction observations collected from the
sonic anemometer and SR-SoDAR at 10 m AGL on June 5th, 2018 from 16:30 to 20:30 EDT.}
\label{table:wind_velocity_observations_10m}
\end{table}
\subsubsection{SoDAR Comparison}
\label{SoDAR Comparison}
Measurements of wind speed and wind direction collected from the LR- and SR-SoDARs were compared from 9:00 to 20:30 EDT to assess agreement across wind observations at 30, 70, and 110~m~AGL. Winds during the sampling period were predominantly from the northwest, with five-minute-average wind speeds ranging from 1.0 to 8.0~m/s and increasing with height (see Figure~\ref{fig:SoDAR_Measurements}). Results from the comparison of SoDAR wind observations are shown in Figure~\ref{fig:SoDAR_bias_measurements} and Table~\ref{table:wind_velocity_observations_30_110m}, using observations from the SR-SoDAR as the reference. The maximum mean bias errors for wind speed and wind direction were -0.7~m/s at 70~m AGL and $-7.2^\circ$ at 30~m AGL, respectively. The maximum RMSE values were 1.2~m/s and $34.9^\circ$, both at 110~m AGL. Note that the RMSE values of wind speed and direction agree to within 10\% at all three altitudes. Combined, these results were used to assess the accuracy of quadrotor wind profile estimates.
\begin{table}[tbh!]
\centering
\begin{tabular}{cccccccccccccc}
\toprule
\multirow{2}{*}{Sensor} & \multirow{2}{*}{Height}& \multicolumn{3}{c}{Wind Speed}&\multirow{2}{*}{Units}&
\multicolumn{3}{c}{ Wind Direction} &\multirow{2}{*}{Units}\\
\cline{3-5} \cline{7-9} \\[-1em] & &Mean & MBE & RMSE& & Mean & MBE & RMSE \\
\toprule\midrule
SR-SoDAR &\multirow{2}{*}{30 m} &4.1 &\multirow{2}{*}{-0.5}&\multirow{2}{*}{1.1} & m/s& 305.6 & \multirow{2}{*}{-7.2 }&\multirow{2}{*}{34.8} & $^\circ$ \\
LR-SoDAR & &3.6 && & m/s & 298.5& & & $^\circ$ \\
\midrule
SR-SoDAR &\multirow{2}{*}{70 m} &4.5 & \multirow{2}{*}{-0.7}&\multirow{2}{*}{1.1} & m/s& 310.3 & \multirow{2}{*}{-6.1}&\multirow{2}{*}{32.7} & $^\circ$ \\
LR-SoDAR & &3.8 & & & m/s & 301.3& & & $^\circ$ \\
\midrule
SR-SoDAR &\multirow{2}{*}{110 m} &4.6&\multirow{2}{*}{-0.6}&\multirow{2}{*}{1.2} & m/s& 312.5& \multirow{2}{*}{-0.9}&\multirow{2}{*}{34.9} & $^\circ$ \\
LR-SoDAR & &4.0 & & & m/s & 300.6 & & & $^\circ$ \\
\midrule
\bottomrule
\end{tabular}
\caption{Comparison of wind speed and wind direction observations from the
SR- and LR-SoDARs at 30, 70, and 110 m AGL collected on June 5th, 2018 from 9:00 to 20:30 EDT.}
\label{table:wind_velocity_observations_30_110m}
\end{table}
\begin{figure}[tbh!]
\centering
\subfigure[]{\includegraphics[width =13.5cm]{SoDAR110.jpg}\label{subfig:ASCREM110m}}
\subfigure[]{\includegraphics[width =13.5cm]{SoDAR70.jpg}\label{subfig:ASCREM70m}}
\subfigure[]{\includegraphics[width =13.5cm]{SoDAR30.jpg}\label{subfig:ASCREM30m}}
\caption{Wind speed and wind direction observations collected from SoDARs at (a) 110~m AGL, (b) 70~m AGL, and (c) 30~m AGL on June 5th, 2018 from 9:00 to 20:30 EDT.}
\label{fig:SoDAR_Measurements}
\end{figure}
\begin{figure}[tbh!]
\centering
\subfigure[]{\includegraphics[width =13.5cm]{Bias110.jpg}\label{subfig:ASCREM110m_bias}}
\subfigure[]{\includegraphics[width =13.5cm]{Bias70.jpg}\label{subfig:ASCREM70m_bias}}
\subfigure[]{\includegraphics[width = 13.5cm]{Bias30.jpg}\label{subfig:ASCREM30m_bias}}
\caption{ Comparison of wind speed and wind direction observations collected from the SR- and LR-SoDAR at (a) 110~m AGL, (b) 70~m AGL, and (c) 30~m AGL on June 5th, 2018 from 9:00 to 20:30 EDT.}
\label{fig:SoDAR_bias_measurements}
\end{figure}
\subsubsection{Validation of Quadrotor Wind Estimates}
Quadrotor wind estimates at 10 m AGL were validated using sonic anemometer and SR-SoDAR wind observations. The validation was conducted for three flights occurring between 18:05 and 20:17 EDT. As shown with rose-colored vertical bands in Figure~\ref{subfig:wind_measurements_10m}, each flight lasted approximately 10 minutes. To validate quadrotor wind estimates, the bias error of five-minute averages was determined using sonic anemometer and SR-SoDAR observations as reference. Biases of five-minute estimates were then averaged to determine the nominal accuracy of quadrotor wind estimates. Results from this analysis are shown in Figure~\ref{fig:hover_measurements} and Table~\ref{table:SA_SRSoDAR_bias}. For quadrotor wind speed estimates, the averages of the absolute values of the mean bias errors were 0.3 m/s relative to the sonic anemometer and 1.0 m/s relative to the SR-SoDAR. For wind direction estimates, the corresponding averages were $9.9^\circ$ relative to both the sonic anemometer and the SR-SoDAR. Therefore, quadrotor wind estimates from hovering flight were characterized by an accuracy comparable to that of conventional ground-based wind sensors.
\begin{figure}[tbh!]
\centering
\subfigure[]{\includegraphics[width = 120mm]{HoverB1.jpg}
\label{subfig:Hove1}}
\subfigure[]{\includegraphics[width = 120mm]{HoverB2.jpg}
\label{subfig:Hover1818}}
\subfigure[]{\includegraphics[width = 120mm]{HoverB3.jpg}
\label{subfig:HoverC}}
\caption{Wind speed and direction from the SA, SR-SoDAR, and quadrotor at 10~m~AGL. For clarity of comparison results, quadrotor and SoDAR five-minute averages were offset by 12 seconds.}
\label{fig:hover_measurements}
\end{figure}
\begin{table}[tbh!]
\centering
\begin{tabular}{cccccccccccccc}
\toprule
\multirow{2}{*}{Flight Time} & \multirow{2}{*}{Height}& \multicolumn{2}{c}{Wind Speed Bias Error}&&
\multicolumn{2}{c}{Wind Direction Bias Error } \\
\cline{3-4} \cline{6-7} \\[-1em] & &SA & SR-SoDAR& & SA& SR-SoDAR& \\
\toprule\midrule
\multirow{2}{*}{18:05-18:15 EDT} &\multirow{2}{*}{10 m} & 0.0 m/s & 1.2 m/s && $7.0^\circ$ &$-4.0^\circ$ \\ & & 0.5 m/s&0.7 m/s && $4.0^\circ$&$-6.0^\circ$
\\ \midrule
18:19-18:27 EDT & 10 m &-0.5 m/s&-0.8 m/s&&$4.7^\circ$&$12.7^\circ$ \\ \midrule
20:08-20:17 EDT&10 m& -0.1 m/s&-1.2 m/s&& $-23.6^\circ$ &$-17.0^\circ$
\\
\midrule
\multicolumn{2}{c}{\textbf{Absolute Value Average}} & 0.3 m/s & 1.0 m/s && $9.9^\circ$ & $9.9^\circ$& \\\midrule
\bottomrule
\end{tabular}
\caption{Comparison of five-minute averages of wind speed and wind direction observations from the
quadrotor, sonic anemometer, and SR-SoDAR at 10 m AGL collected on June 5th, 2018 from 18:05 to 20:17 EDT.}
\label{table:SA_SRSoDAR_bias}
\end{table}
SoDAR wind observations were used to validate quadrotor wind estimates during vertical ascents at constant rates between 0.5 and 2 m/s. Preliminary results from validation experiments revealed distinct anomalies in SoDAR wind measurements during quadrotor operations, which took place from 15:30 to 20:00~EDT as shown with green-colored vertical bands in Figure~\ref{fig:SoDAR_Measurements}. During quadrotor operations, wind observations were persistently dropped or not recorded by the SR-SoDAR. Wind observations from the LR-SoDAR, on the other hand, reported an abrupt change in wind speed and wind direction. These anomalies prompted additional investigation to determine the impact of quadrotor operations on the reliability of SoDAR wind observations.
To determine the impact of quadrotor operations on the reliability of SoDAR observations, the two-part assessment detailed in Appendix~\ref{s:a_reliability_study_of_SoDAR_Wind_Measurements} was conducted across the duration of the validation experiments. Examination of the spatial footprint of quadrotor operations showed the quadrotor entering the sampling volume of SoDAR observations at approximately 60 m AGL. The quadrotor flying through the sampling volume correlated with both an increase in noise intensity and a decrease in the signal-to-noise ratio of the corresponding wind measurements. As shown in Figures~\ref{fig:LR-SoDAR_SR} and \ref{fig:SR_Noise}, the noise intensity of wind velocity measurements increased for the $\bm{u}$ and $\bm{v}$ components during quadrotor operations. Correspondingly, the signal-to-noise ratio dropped during quadrotor operations as shown in Figures~\ref{fig:LR-SoDAR_SNR}-\ref{fig:SR-SoDAR_SNR}. Based on these observations, making time-synchronized comparisons of quadrotor and SoDAR wind measurements at the same location for validation purposes is infeasible.
To circumvent corrupted SoDAR wind observations, quadrotor wind profiles were validated using SoDAR measurements collected 15 minutes prior to and following quadrotor operations. As a result, fair comparisons of quadrotor and SoDAR wind measurements could only be made during periods of low wind variability; otherwise, quadrotor wind estimates were ruled inconclusive. Wind variability was considered low when SoDAR wind observations were in general agreement for the duration of the comparison period. We also note that the assessment of quadrotor estimates was performed qualitatively rather than quantitatively due to the non-uniformity of quadrotor and SoDAR wind observations across time and space. Consequently, only a subset of wind estimates was validated successfully applying this criterion.
In total, four sets of quadrotor wind velocity profiles corresponding to ascent rates $V_{z_{\rm eq}}>0$ were compared to SoDAR wind observations for validation. Results are shown in Figure~\ref{fig:wind_valocity_profiles} for each ascent rate, with quadrotor wind estimates averaged over 1-second and 10-meter intervals. There was no clear dependency of the performance of the quadrotor wind estimates on ascent rate. Overall, quadrotor wind speed and direction estimates tracked SoDAR observations well while profiling the lower atmosphere. In particular, during the period when the quadrotor ascended at 1.5 m/s, spatial and temporal variability in wind speed and direction were small, and quadrotor winds compared very well with the SoDAR measurements. All other wind estimates were difficult to corroborate with SoDAR measurements during periods of significant wind variability. These findings are further reinforced by quadrotor and SoDAR comparisons shown in Figures~\ref{fig:ascent_1p0ms}-\ref{fig:ascent_2p0ms} for ascent rates of 1, 1.5, and 2 m/s.
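The 10-meter interval averaging described above can be sketched as a simple altitude-binning step (an illustrative assumption about the bookkeeping, not the authors' implementation):

```python
from collections import defaultdict

def bin_average(samples, bin_size=10.0):
    """Average (altitude_m, wind_speed) samples over fixed altitude bins.

    Returns a dict mapping the lower edge of each bin to the mean wind
    speed of the samples that fall in it. Illustrative only.
    """
    bins = defaultdict(list)
    for alt, ws in samples:
        bins[bin_size * (alt // bin_size)].append(ws)
    return {edge: sum(v) / len(v) for edge, v in sorted(bins.items())}
```

The same reduction applies to any profiled quantity (e.g., wind direction components) before comparison against the SoDAR range gates.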
\begin{figure}[tbh!]
\centering
\subfigure[]{\includegraphics[width = 67mm]{Profile10p5ms.jpg}
\label{subfig:0p5ms1}}
\subfigure[]{\includegraphics[width = 67mm]{Profile11ms.jpg}
\label{subfig:1p0ms1}}\\
\subfigure[]{\includegraphics[width = 67mm]{Profile11p5ms.jpg}
\label{subfig:1p5ms1}}
\subfigure[]{\includegraphics[width = 67mm]{Profile12ms.jpg}
\label{subfig:1p5ms4}}\\
\caption{ Comparison of quadrotor and SoDAR wind speed and wind direction profile measurements ascending vertically from 10 to 120 m AGL at constant rates of (a) 0.5 m/s, (b) 1 m/s, (c) 1.5 m/s, and (d) 2 m/s.}
\label{fig:wind_valocity_profiles}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Five nominal models were identified for the control-augmented rigid body dynamics of a quadrotor in both hovering and steady vertical ascending flight. For each model, the plunge, yaw, roll, and pitch dynamics were characterized separately using stepwise regression and output-error methods. Stepwise regression was employed to determine the structure of each model using a set of candidate regressors. Model structures determined from stepwise regression were used to estimate parameter coefficients using the output-error method. Trends in parameter variations with respect to ascent rate were found to be monotonic only for a subset of parameters. Significant variations in parameter estimates can result from correlation between the regressors used for parameter estimation. Correlation among regressors is likely to occur when regressors change proportionally in system identification experiments. Nonetheless, the identified models were then used to test the observability of wind-augmented models for wind estimation. Future work will focus on improving the validation experiments to study more closely the sensitivity of model parameters.
Following the identification of quadrotor models, field experiments were conducted to validate quadrotor estimates using wind observations from a sonic anemometer and two SoDARs. Wind observations from ground-based sensors were compared at 10, 30, 70, and 110~m AGL to determine their agreement prior to validating quadrotor wind estimates. Results from these comparisons show an overall good agreement between the sonic anemometer and SR-SoDAR measurements at 10~m AGL, and between the SoDARs at 30, 70, and 110~m AGL, across the entire time lapse of observations for each sensor pair. The mean bias error and RMSE of observations from the sonic anemometer and SR-SoDAR at 10~m AGL did not exceed 0.7~m/s and 1.0~m/s, respectively, for wind speed, and -0.8$^\circ$ and 19.0$^\circ$ for wind direction. The same error metrics for SoDAR observations at 30, 70, and 110~m AGL had maximum magnitudes of -0.7~m/s and 1.2~m/s for wind speed and -7.2$^\circ$ and 34.9$^\circ$ for wind direction. However, acute wind speed and wind direction errors were observed across SoDAR measurements during quadrotor operations. This motivated additional studies to discern the impact of quadrotor operations on the error of SoDAR observations.
To determine the impact of quadrotor operations on the reliability of SoDAR observations, the noise intensity and signal-to-noise ratio (SNR) of SoDAR wind measurements at heights of 30, 70, and 110~m AGL were assessed during and in the absence of quadrotor flights. Experimental data suggest that quadrotor operations had minimal impact on the accuracy of SoDAR measurements when these devices were measuring wind velocity at 10~m AGL. Quadrotor profiles, on the other hand, were determined to impact SoDAR wind observations between 30 and 120~m AGL. This fact is attributed to the quadrotor traversing the field of view of both SoDARs when sampling at higher altitudes. Because of these observations, the comparisons of quadrotor and SoDAR data were modified to avoid the corrupted SoDAR data.
Quadrotor wind estimates from hovering flight at 10~m AGL were validated first using wind measurements from both the sonic anemometer and the SR-SoDAR. Considering that quadrotor operations at 10 m AGL did not have a significant impact on SoDAR observations, quadrotor wind estimates were compared to all ground-based assets. Results from the comparison demonstrated good agreement across five-minute averages in spite of measurements not being sampled at coincident locations. The averaged magnitudes of the mean bias errors of quadrotor wind speed and wind direction estimates were determined to be 0.3 m/s and 9.9$^\circ$ relative to sonic anemometer observations and 1.0 m/s and 9.9$^\circ$ relative to SR-SoDAR measurements. In contrast with wind estimation results from~\cite{gonzalez2019sensing}, these comparisons demonstrate an overall improvement in the accuracy of the RBWindPro algorithm in estimating wind speed and wind direction using state measurements from a hovering quadrotor.
Validating quadrotor wind profile estimates proved to be difficult for various reasons. First, wind observations from different sensors are inherently not uniform. The non-uniformity across sensors is attributed to how each sensor measures wind velocity, the air volume that is sampled, and the sampling duration. Second, quadrotor operations were found to corrupt SoDAR wind observations. Consequently, quadrotor wind estimates had to be validated using SoDAR observations measured before and after quadrotor operations. These factors limited the performance assessment of quadrotor wind estimation to periods where atmospheric conditions were fairly stationary across the SoDAR sampling domain.
Based on SoDAR observations, wind speed and wind direction were fairly stationary while quadrotor wind profiles were collected ascending vertically at 1.5 m/s between 18:35 and 19:06 EDT. The quadrotor wind estimates at this ascent rate follow wind speed and wind direction SoDAR observations closely. The same is also true for all quadrotor wind profiles conducted during this same period, as shown in Figure~\ref{fig:ascent_1p5ms}. Good performance estimating wind direction was also observed for quadrotor profiles ascending at 0.5 m/s between 15:30 and 15:46 EDT. All other quadrotor estimates were difficult to validate due to high wind variability during the sampling periods. All things considered, positive validation results for a subset of quadrotor wind estimates provide motivation for comprehensive validation experiments that avoid or mitigate quadrotor and SoDAR interactions.
Future work will involve improving validation experiments using lessons learned from this study for a more comprehensive performance assessment of quadrotor wind profiling. Improvements to field experiments will require three modifications. First, one must increase the spatial separation between SoDARs to ensure quadrotor operations do not interfere with wind field measurements. Second, one might incorporate an additional sonic anemometer, mounted on a separate quadrotor, to measure wind velocity profiles directly next to the quadrotor estimating wind velocity profiles from motion perturbations. Lastly, because of the lateral separation between the ground-based sensors and the quadrotor, one should conduct validation experiments when atmospheric conditions are relatively homogeneous and stationary, and significant uniformity of the wind field sampled by atmospheric sensors is expected.
\section{Conclusions}
\label{sec:conclusion}
An off-the-shelf quadrotor can be used to obtain model-based wind velocity estimates as long as the motion data logged on board the autopilot is accessible to the user. However, the accuracy of wind velocity estimates depends on how well the motion model characterizes the dynamics of the quadrotor for its operating condition. This paper extends a model-based framework exploiting the rigid body dynamics of a quadrotor for hovering-flight wind estimation to estimate wind velocity along a vertical path in the lower atmosphere. The extension involved characterizing rigid body models for equilibrium flight conditions corresponding to each of five steady ascent rates: $V_{z_{\rm eq}} = \{0.0, 0.5,1.0,1.5,2.0\}$~m/s. Each quadrotor model was characterized employing stepwise regression and output-error parameter estimation. An observability analysis confirmed the feasibility of estimating wind velocity using the identified model structures. Trends in parameter estimates also suggest that slower ascent rates may result in more accurate wind estimates. Significant variations in parameter estimates for higher ascent rates may be the outcome of limitations in manually generating the rich and precisely timed excitation signals needed for model identification. Further studies are required to investigate this possibility in depth.
Field experiments were conducted to validate quadrotor wind estimates using in-situ and remote-sensing atmospheric sensors. Results from validation experiments demonstrated quadrotor wind estimates in hovering flight to be within small error of sonic anemometer and SoDAR wind observations. Quadrotor wind profile estimates, on the other hand, were difficult to validate comprehensively because quadrotor operations affect the reliability of SoDAR wind measurements. However, in instances when atmospheric conditions were relatively invariant prior to and after quadrotor operations, quadrotor wind estimates demonstrated very good agreement with wind speed and wind direction from SoDAR measurements. Overall, this study demonstrates the feasibility of model-based vertical wind profiling using multirotor UAS in the lower atmosphere.
\vspace{6pt}
\authorcontributions{J.G.R. developed the model-based wind sensing methodology presented in this manuscript, characterized vehicle models using aircraft system identification, led field experiments to validate quadrotor wind estimates, curated data from field experiments, and led the writing of the manuscript. S.F.J.D. co-led field experiments, provided sonic anemometer wind data, provided guidance for the analysis of wind measurements, and assisted in writing the manuscript. S.D.R. assisted with validation experiments and provided guidance for the analysis of wind measurements. C.A.W. provided guidance for the analysis of wind measurements and assisted in writing the manuscript.}
\funding{This research was supported in part by grants from the National Science Foundation (NSF) under grant number AGS 1520825 (Hazards SEES: Advanced Lagrangian Methods for Prediction, Mitigation and Response to Environmental Flow Hazards) and DMS 1821145 (Data-Driven Computation of Lagrangian Transport Structure in Realistic Flows) as well as the NASA Earth and Space Science Fellowship under grant number 80NSSC17K0375. We declare that opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.}
\acknowledgments{We thank Jean-Michel Fahmi, Virginia Tech, for serving as a pilot in command for some UAS missions conducted for media and documentation purposes.}
\conflictsofinterest{The authors declare no conflict of interest.}
\abbreviations{The following abbreviations are used in this manuscript:\\
\noindent
\begin{tabular}{@{}ll}
ABL & Atmospheric boundary layer \\
Cov & Covariance \\
LR & Long range \\
LTI & Linear time invariant \\
MBE & Mean bias error\\
RMSE & Root mean squared error\\
SA & Sonic Anemometer \\
SNR & Signal-to-noise ratio \\
SoDAR & Sonic detection and ranging\\
SR & Short range\\
UAS & Small unmanned aircraft system \\
ctrl & Control\\
ref & Reference\\
\end{tabular}}
\section{Introduction}
\label{sec:intro}
Chessboard complexes and their relatives are well studied objects
of topological combinatorics with applications in group theory,
representation theory, commutative algebra, Lie theory,
computational geometry, and combinatorics. The reader is referred
to \cite{Jo-book} and \cite{Wa03} for surveys and to \cite{Jo07},
\cite{ShaWa04} for a guide to some of the latest developments.
Chessboard complexes originally appeared in \cite{Ga79} as coset
complexes of the symmetric group, closely related to Coxeter and
Tits coset complexes. After that they have been rediscovered
several times. Among their avatars are ``complexes of partial
injective functions'' \cite{ZV92}, ``multiple deleted joins'' of
$0$-dimensional complexes \cite{ZV92} (implicit in \cite{Sar91}),
the complex of all partial matchings in a complete bipartite
graph, the complex of all non-taking rook configurations
\cite{BLVZ} etc.
Recently a naturally defined subcomplex of the chessboard complex,
here referred to as ``cycle-free chessboard complex'', has emerged
in the context of stable homotopy theory (\cite{AuFie07} and
\cite{Fie07}). Ault and Fiedorowicz introduced this complex and
its suspension $Sym^{(p)}_*$ as a tool for evaluating the
symmetric analogue for the cyclic homology of algebras,
\cite{Lod98}. They conjectured that
$H_i\left(Sym^{(p)}_*\right)=0$ if $i<p/2$, and verified this
conjecture for some small values of $p$ and $i$.
In this paper we prove this conjecture (Theorem~\ref{thm:main}) by
showing that $Sym^{(p)}_*$ is actually $\gamma_p$-connected where
$\gamma_p = \left[\frac 23(p-1)\right]$ (see Corollary
\ref{posl}). We also show (Theorem~\ref{thm:tight1}) that this
result cannot be improved if $p=3k+2$ for some $k$ and give
evidence that the bound should be tight in the general case.
\subsection{Graph complexes}
\label{sec:graph complexes}
Chessboard complexes $\Delta_{m,n}$ and their relatives are
examples of {\em graph complexes}. A graph (digraph, multigraph)
complex is (in {\em topological combinatorics}) a family of graphs
(digraphs, multigraphs) on a given vertex set, closed under
deletion of edges. Monograph \cite{Jo-book}, based on the author's
Ph.D.\ degree thesis \cite{Jo-thesis}, serves as an excellent
source of information about applications of graph complexes in
algebraic and geometric/topological combinatorics and related
fields.
The appearance of a monograph solely devoted to the exposition and
classification of simplicial complexes of graphs is probably a
good sign of a relative maturity of the field. After decades of
development, some of the central research themes and associated
classes of examples have been isolated and explored, the technique
is codified and typical applications described.
However, the appearance of a relative of the chessboard complex in
the context of symmetric homology $HS_\ast(A)$ of algebras is
perhaps of somewhat non-standard nature and deserves a comment.
Ault and Fiedorowicz showed in \cite{AuFie07} (Theorem 6) that
there exists a spectral sequence converging strongly to
$HS_\ast(A)$ with the $E^1$-term
\begin{equation}\label{eqn:spectral}
E^1_{p,q} = \bigoplus_{\overline{u}\in X^{p+1}/S_{p+1}}
\widetilde{H}_{p+q}(EG_{\overline{u}}\ltimes_{G_{\overline{u}}}
N\mathcal{S}_p/N\mathcal{S}_p'\, ;k).
\end{equation}
They emphasized (loc.\ cit.) the importance of the problem of
determining the homotopy type of the space
$N\mathcal{S}_p/N\mathcal{S}_p'$ and introduced a much more
economical complex $Sym_\ast^{(p)}$ which computes its homology.
The complex $Sym_\ast^{(p)}$ turned out to be isomorphic to the
suspension $\Sigma(\Omega_{p+1})$ of a subcomplex $\Omega_{p+1}$
of the chessboard complex $\Delta_{p+1}=\Delta_{p+1,p+1}$, one of
the well studied graph complexes!
\medskip
It is interesting to compare this development with the appearance
of {\em $2$-connected graph complexes} \cite{Vass-99} in the
computation of the $E^1$-term of the main Vassiliev spectral
sequence converging to the cohomology
\begin{equation}\label{eqn:knot}
H^i(\mathcal{K}\setminus\Sigma)\cong
\bar{H}_{\omega-i-1}(\Sigma)\cong \bar{H}_{\omega-i-1}(\sigma)
\end{equation}
of the space $\mathcal{K}\setminus\Sigma$ of non-singular knots in
$\mathbb{R}^n$. This spectral sequence arises from a filtration
$\sigma_1\supset\sigma_2\supset\ldots$ of a simplicial resolution
$\sigma$ of the space (discriminant) $\Sigma$ of singular knots in
$\mathcal{K}$. As a tool for computing
$E^1_{i,j}=\bar{H}_{i+j}(\sigma_i\setminus\sigma_{i-1})$,
Vassiliev \cite{Vass-99} introduced an auxiliary filtration of the
space $\sigma_i\setminus\sigma_{i-1}$. Complexes of $2$-connected
graphs naturally appear in the description of the $E^1$-term of
the spectral sequence associated to the auxiliary filtration.
It appears, at least on the formal level, that cycle-free
chessboard complexes $\Omega_n$ play the role, in the Ault and
Fiedorowicz approach to symmetric homology, analogous to the role
of $2$-connected graph complexes in Vassiliev's approach to the
homology of knot spaces.
\medskip
The homotopy type of the complex of (not) $2$-connected graphs was
(independently) determined by Babson, Bj\"orner, Linusson,
Shareshian, and Welker in \cite{BBLSW-99} and Turchin in
\cite{Tur-97}. This development stimulated further study of
connectivity graph properties (complexes), see chapter~VI of
\cite{Jo-book} (\cite{Jo-thesis}).
\section{Cycle-free chessboard complexes}
Chessboard complexes $\Delta_{m,n}$ are matching (graph) complexes
associated to complete bipartite graphs \cite{Jo-book},
\cite{ShaWa04}, \cite{Wa03}. However, they most naturally arise as
complexes of {\em admissible rook configurations} on general
$m\times n$ chessboards.
An $(m\times n)$-chessboard is the set $A_{m,n}=[m]\times
[n]\subset \mathbb{Z}^2$ where (as usual in combinatorics)
$[n]=\{1,2,\ldots,n\}$. The associated {\em chessboard complex}
$\Delta(A_{m,n})=\Delta_{m,n}$ is defined as the (abstract,
simplicial) complex of all admissible or {\em non-taking rook
configurations} on the chessboard $A_{m,n}$. More generally, for
an arbitrary (finite) subset $A\subset \mathbb{Z}^2$, the
associated chessboard complex $\Delta(A)$ has $A$ for the set of
vertices and $S\in\Delta(A)$ if and only if for each pair $(i,j),
(i',j')$ of distinct vertices of $S$ both $i\neq i'$ and $j\neq
j'$. Also, we often denote by $\Delta_{X,Y}=\Delta(X\times Y)$ the
chessboard complex carried by the ``chessboard'' $X\times Y$ where
$X$ and $Y$ are not necessarily subsets of $\mathbb{Z}$.
Let $\Delta_n:=\Delta_{n,n}$ be the chessboard complex associated
to the canonical $(n\times n)$-chessboard $[n]\times [n]$,
similarly $\Delta_X:=\Delta_{X,X}$. Each top dimensional simplex
in $\Delta_n$ is essentially the graph $\Gamma_\phi:=\{(i,\phi(i))
\mid i\in [n]\}$ of a permutation $\phi : [n]\rightarrow [n]$. Any
other simplex $S\in\Delta_n$ arises as a top dimensional simplex
of the complex $\Delta(A\times B)$ where $A$ and $B$ are two
subsets of $[n]$ of equal size. Alternatively $S$ can be described
as the graph $\Gamma_\psi$ of a bijection $\psi : A\rightarrow B$,
which is sometimes referred to as a partial, injective function
(relation) defined on $[n]$.
\begin{defin}\label{def:cycle-free}
A non-taking rook configuration $S\subset [n]\times [n]$ of size
$n-1$ is {\em cycle-free} if there is a linear order $\rho :
i_1\prec i_2\prec \ldots\prec i_n$ of elements of the set $[n]$
such that
$$
S = S_\rho = \{(i_1,i_2),(i_2,i_3),\ldots, (i_{n-1},i_{n})\}.
$$
Define $\Omega_n:= \bigcup_{\rho\in LO_n}~S_\rho\subset\Delta_n$
as the union, or $\subset$-ideal closure, of the collection of all
cycle-free configurations $S_\rho$, $\rho\in LO_n$, where $LO_n$
is the set of all linear orders on $[n]$.
Alternatively the complex $\Omega_n$ can be described as the
collection of all non-taking rook placements $S\in \Delta_n$ which
do not contain cycles, that is sub-configurations of the form
$\{(x_1,x_2),(x_2,x_3),\ldots,(x_m,x_1)\}$ for some $1\leq m\leq
n$. For this reason we call $\Omega_n$ the chessboard complex
without cycles or simply the {\em cycle-free} chessboard complex.
\end{defin}
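To make the definition concrete, a rook placement, viewed as a partial injection $i\mapsto\phi(i)$ given by its set of pairs $(i,\phi(i))$, can be tested for cycles by following the induced functional digraph until the walk either terminates or revisits a vertex. The sketch below is illustrative only; the representation and function name are assumptions, not part of the paper:

```python
def is_cycle_free(placement):
    """Return True if a non-taking rook configuration, given as a set of
    (row, column) pairs encoding a partial injection, contains no cycle
    {(x1, x2), (x2, x3), ..., (xm, x1)} for any m >= 1.
    """
    step = dict(placement)  # the partial injection i -> phi(i)
    for start in step:
        seen = set()
        x = start
        while x in step and x not in seen:
            seen.add(x)
            x = step[x]
        if x in seen:  # walk returned to a visited vertex: a cycle
            return False
    return True
```

Note that a diagonal pair $(i,i)$ is a cycle with $m=1$ and is rejected, while every configuration $S_\rho$ arising from a linear order is accepted, since the induced walk strictly descends the order and must terminate.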
In order to study functorial properties of complexes $\Omega_n$ it
is convenient to extend slightly the definitions and introduce a
class of more general cycle-free chessboard complexes. In
Section~\ref{sec:general} we introduce an even larger class of
hybrid chessboard complexes which contain $\Omega_n$ and
$\Delta_n$, as well as $\Delta(A)$ for $A\subset \mathbb{Z}^2$, as
special cases.
If $X$ is a finite set then $\Omega_X\subset \Delta_X=\Delta_{X,
X}$ is defined as the union of all simplices
$S_\rho=\{(x_1,x_2),(x_2,x_3),\ldots,(x_{n-1},x_n)\}$ where $\rho
: x_1\prec x_2\prec \ldots\prec x_n$ is a linear order on $X$.
More generally, given a bijection $\alpha : Y\rightarrow X$ of two
finite sets, let $\Omega(X\times Y;\alpha)$ be the complex of all
non-taking rook configurations in $X\times Y$ without
sub-configurations of the form $\{(x_1,y_2), (x_2,y_3),\ldots,
(x_m,y_1)\}$ where $1\leq m\leq\vert X\vert=n$ and
$x_{j}=\alpha(y_j)$ for each $j$. It is clear that all these
complexes are isomorphic to $\Omega_n$ if $\vert X\vert =\vert
Y\vert =n$.
\medskip
For visualization and as a convenient bookkeeping device,
simplices in $\Delta(X\times Y)$ as well as in $\Omega(X\times
Y;\alpha)$ can be represented as matchings in the complete
bipartite graph $K_{X,Y}$.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.60]{bipartite-2.eps}
\caption{A cycle in $\Delta_6$.}\label{fig:bipartite}
\end{figure}
The partial matching
$\{(x_1,y_3),(x_2,y_1),(x_3,y_4),(x_4,y_6),(x_6,y_2)\}$, exhibited
in Figure~\ref{fig:bipartite}, clearly determines a non-taking
rook placement on the chessboard $X\times Y$ where
$X=\{x_i\}_{i=1}^6$ and $Y=\{y_i\}_{i=1}^6$. If $\alpha :
Y\rightarrow X$ is the bijection $y_j\mapsto x_j$ then this
matching does not contribute a simplex to $\Omega(X\times
Y;\alpha)$ since it contains a cycle
$$x_1\mapsto y_3\downarrow x_3\mapsto y_4\downarrow x_4\mapsto y_6
\downarrow x_6\mapsto y_2\downarrow x_2\mapsto y_1.$$
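The cycle condition is easy to test mechanically. The following Python sketch (our illustration, not part of the paper; the function names are ours) checks the non-taking and cycle-free conditions for a rook configuration, with the bijection $\alpha$ taken to be the identity $y_j\mapsto x_j$, and confirms that the matching of Figure~\ref{fig:bipartite} contains the displayed cycle.

```python
def is_nontaking(conf):
    """No two rooks share a row or a column."""
    rows = [r for r, c in conf]
    cols = [c for r, c in conf]
    return len(set(rows)) == len(rows) and len(set(cols)) == len(cols)

def has_cycle(conf):
    """Detect a sub-configuration {(x1,x2),(x2,x3),...,(xm,x1)}, i.e. a
    directed cycle of the partial map row -> column (alpha = identity)."""
    succ = dict(conf)              # row -> column; injective when non-taking
    for start in succ:
        x = succ[start]
        for _ in range(len(conf)):
            if x == start:         # the chain returned to its starting row
                return True
            if x not in succ:      # the chain left the configuration
                break
            x = succ[x]
    return False

# The matching of Figure 1: {(x1,y3),(x2,y1),(x3,y4),(x4,y6),(x6,y2)}
conf = [(1, 3), (2, 1), (3, 4), (4, 6), (6, 2)]
print(is_nontaking(conf))   # True: a legal rook placement in Delta_6
print(has_cycle(conf))      # True: 1 -> 3 -> 4 -> 6 -> 2 -> 1, so not in Omega_6
```

Note that a diagonal square $(x,x)$ is rejected by the same test, as a cycle of length one.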
The following proposition establishes a key structural property
for cycle-free chessboard complexes $\Omega_n$.
\begin{prop}\label{prop:link}
The link ${\rm Link}(v)={\rm Link}_{\Omega_n}(v)$ of each vertex
$v$ in the cycle-free chessboard complex $\Omega_n$ is isomorphic
to $\Omega_{n-1}$.
\end{prop}
\medskip\noindent
{\bf Proof:} Let us choose $\Omega(X\times Y;\alpha)$ as our model
for $\Omega_n$ where $X=\{x_i\}_{i=1}^n$ and $Y=\{y_i\}_{i=1}^n$
while the bijection $\alpha : Y\rightarrow X$ maps $y_j$ to $x_j$.
Let $v=(x_i,y_j)\in X\times Y$, where $i\neq j$.
Define $X':=X\setminus\{x_i\}$ and $Y':=Y\setminus \{y_j\}$. Let
$\alpha' : Y'\rightarrow X'$ be the bijection defined by
$\alpha'(y_i)= x_j$ and $\alpha'(y_k)=x_k$ for $k\in
[n]\setminus\{i,j\}$. Then it is not difficult to show that ${\rm
Link}_{\Omega_n}(v)\cong \Omega(X'\times Y',\alpha')\cong
\Omega_{n-1}$. \hfill$\square$
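For small $n$ the proposition can also be confirmed by brute force. The sketch below (our own check, with $\alpha$ the identity, so that diagonal squares are excluded as cycles of length one) enumerates all faces of $\Omega_4$, extracts the link of the vertex $(1,2)$, and compares its $f$-vector with that of $\Omega_3$.

```python
from itertools import combinations

def is_simplex(conf):
    """A face of Omega_n (alpha = identity): rooks are mutually
    non-taking and the partial map row -> column has no cycle."""
    rows, cols = [r for r, _ in conf], [c for _, c in conf]
    if len(set(rows)) < len(rows) or len(set(cols)) < len(cols):
        return False
    succ = dict(conf)
    for start in succ:
        x = succ[start]
        for _ in range(len(conf)):
            if x == start:
                return False
            if x not in succ:
                break
            x = succ[x]
    return True

def omega(n):
    """All nonempty faces of Omega_n, as frozensets of squares."""
    board = [(r, c) for r in range(1, n + 1)
             for c in range(1, n + 1) if r != c]
    return {frozenset(conf)
            for k in range(1, n)       # a cycle-free placement has < n rooks
            for conf in combinations(board, k)
            if is_simplex(conf)}

omega4 = omega(4)
v = (1, 2)
link = {f - {v} for f in omega4 if v in f and len(f) > 1}
fvec = lambda faces: sorted(len(f) for f in faces)
print(fvec(link) == fvec(omega(3)))   # True: the f-vectors agree
```

Matching $f$-vectors are of course only a necessary condition for an isomorphism, but they are a useful sanity check of the construction of $\alpha'$.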
\subsection{$\Omega_n$ as a digraph complex}
\label{sec:digraph}
Chessboard complexes $\Delta_n$ and cycle-free chessboard
complexes $\Omega_n$, as well as their natural generalizations,
admit another, equally useful description as directed graph
(digraph) complexes.
A chessboard $A_n=[n]\times [n]$ is naturally interpreted as a
complete digraph $DK_n$ (loops included) where each $(i,j)\in A_n$
contributes a directed edge $\overrightarrow{ij}$ in $DK_n$. A
directed subgraph $\Gamma\subset DK_n$ describes an admissible
rook configuration in $A_n$ if and only if
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.40]{omega-1.eps}
\caption{\mbox{$\Omega(G)= S^0\ast S^0\ast S^0\ast S^0 = S^3$.}}
\label{fig:omega-1}
\end{figure}
no two directed edges in $\Gamma$ are allowed to have the same
{\em tail} or the same {\em end}. In other words, configurations
depicted in Figure~\ref{fig:omega-1}~(a) are banned from the graph
$\Gamma$. It follows that $\Delta_n$ is the complex of all
subgraphs of $DK_n$ such that the associated connected components
are either directed cycles or directed paths. The complex
$\Omega_n$ arises as the cycle-free subcomplex of $\Delta_n$,
i.e.\ $\Gamma\in \Omega_n$ if only directed paths are allowed as
connected components of $\Gamma$. This definition reveals that
probably the closest relative of $\Omega_n$ that has been
systematically analyzed so far is the complex $\Delta_n^{DM}$ of
directed matchings on the node set $[n]$ introduced in
\cite{Bj-We-99}.
More generally, for each directed graph $G$ one can define the
associated complexes $\Delta(G)$ and $\Omega(G)$ as the complexes
of all directed subgraphs $\Gamma$ in $G$ which have only directed
paths and cycles (respectively paths alone) as connected
components. For example if $G$ is the directed graph depicted in
Figure~\ref{fig:omega-1}~(b) then $\Delta(G)=\Omega(G)\cong S^3$.
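In this digraph language, membership in $\Omega(G)$ is a local degree condition together with acyclicity. A minimal sketch (ours; it tests an edge set directly rather than building the complex):

```python
def components_are_paths(edges):
    """Gamma lies in Omega(G) iff no two edges share a tail or an end
    (in- and out-degrees <= 1) and the successor map has no directed
    cycle; a loop counts as a cycle of length one."""
    out_deg, in_deg, succ = {}, {}, {}
    for a, b in edges:
        out_deg[a] = out_deg.get(a, 0) + 1
        in_deg[b] = in_deg.get(b, 0) + 1
        succ[a] = b
    if any(d > 1 for d in out_deg.values()) or \
       any(d > 1 for d in in_deg.values()):
        return False               # not even an admissible rook configuration
    for start in succ:             # degrees <= 1: cycles are successor cycles
        x = succ[start]
        for _ in range(len(edges)):
            if x == start:
                return False
            if x not in succ:
                break
            x = succ[x]
    return True

print(components_are_paths([(1, 2), (2, 3), (4, 5)]))   # True: two directed paths
print(components_are_paths([(1, 2), (2, 3), (3, 1)]))   # False: a directed 3-cycle
print(components_are_paths([(1, 1)]))                   # False: a loop
```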
\section{Generalized cycle-free complexes}
\label{sec:general}
Let $\Omega(X\times Y,\alpha)$ be the cycle-free chessboard
complex associated to sets $X,Y\subset \mathbb{Z}$ and a bijection
$\alpha : Y\rightarrow X$. Assume that $A\subset \mathbb{Z}^2$ is
a finite superset of $X\times Y$. Define $\Omega =
\Omega(A,X\times Y,\alpha)$ as the subcomplex of the full
chessboard complex $\Delta(A)$ by the condition that $S\in
\Delta(A)$ is in $\Omega$ if and only if the restriction of $S$ on
$\Delta(X\times Y)$ is in $\Omega(X\times Y,\alpha)$. $\Omega$ is
referred to as the generalized cycle-free chessboard complex.
If $A = (X\cup Z)\times (Y\cup T)$, where $X\cap Z=\emptyset=Y\cap
T$, let $$\Omega^{Y,T}_{X,Z}:=\Omega(A,X\times Y,\alpha).$$ The
isomorphism type of the complex $\Omega^{Y,T}_{X,Z}$ depends only
on the cardinalities of the sets $X,Y,Z,T$, so if $\vert X\vert=\vert
Y\vert =n, \vert Z\vert =m,$ and $\vert T\vert=p$, we will
frequently denote by $\Omega^{n,p}_{n,m}$ one of its unspecified
representatives. If $p=0$ we write $\Omega_{n,m}:=
\Omega_{n,m}^{n,0}$ and if $m=0$, the complex
$\Omega_{n,0}=\Omega_n$ reduces to the standard cycle-free
chessboard complex defined on an $n\times n$ chessboard.
\begin{defin}\label{def:reduced}
Let $\Omega = \Omega(A,X\times Y,\alpha)$ be a generalized,
cycle-free chessboard complex based on a chessboard $A\subset
\mathbb{Z}^2$, where $X\times Y\subset A$ and $\alpha :
Y\rightarrow X$ is an associated bijection. Let $v = (a,b)\in A$.
The $v$-{\em reduced complex} $\Omega' = \Omega'_v =
\Omega(A',X'\times Y',\alpha')$ of $\Omega$ is defined as follows.
Let $A':=A\setminus (\{a\}\times\mathbb{Z}\cup
\mathbb{Z}\times\{b\})$.
\begin{enumerate}
\item[{\rm (a)}] If both $a\in X$ and $b\in Y$ let $X':=
X\setminus\{a\}, Y':=Y\setminus\{b\}$ and let $\alpha' :
Y'\rightarrow X'$ be the bijection defined by
$\alpha'(\alpha^{-1}(a)):=\alpha(b)$, and $\alpha'(z)=\alpha(z)$
for $z\neq \alpha^{-1}(a)$.
\item[{\rm (b)}] If $a\in X$ and $b\notin Y$ let $X':=
X\setminus\{a\}, Y':=Y\setminus\{\alpha^{-1}(a)\}$ and $\alpha' : Y'\rightarrow
X'$ is the restriction of $\alpha$ on $Y'$.
\item[{\rm (c)}] If $b\in Y$ and $a\notin X$ let
$Y':=Y\setminus\{b\}, X':=X\setminus\{\alpha(b)\}$ and $\alpha' :
Y'\rightarrow X'$ is the restriction of $\alpha$ on $Y'$.
\item[{\rm (d)}] If neither $a\in X$ nor $b\in Y$, let $X'=X,
Y'=Y$ and $\alpha'=\alpha$.
\end{enumerate}
\end{defin}
\medskip
The following proposition records for future reference the key
structural property of generalized cycle-free chessboard complexes
$\Omega = \Omega(A,X\times Y,\alpha)$. The proof is similar to the
proof of Proposition~\ref{prop:link} so we omit the details.
\begin{prop}\label{prop:structural}
If ${\rm Link}(v)={\rm Link}_\Omega(v)$ is the link of a vertex
$v=(a,b)\in A$ in $\Omega=\Omega(A,X\times Y,\alpha)$ then there
is an isomorphism
$$
{\rm Link}(v)\cong \Omega(A',X'\times Y',\alpha')
$$
where $\Omega(A',X'\times Y',\alpha')$ is the $v$-reduced complex
of the generalized cycle-free chessboard complex $
\Omega(A,X\times Y,\alpha)$ (Definition~\ref{def:reduced}).
\end{prop}
\section{Filtrations of chessboard complexes}
\label{sec:filtrations}
The chessboard complex $\Delta(A)$ functorially depends on the
chessboard $A\subset \mathbb{Z}^2$. It follows that a filtration
$$
A_0\subset A_1\subset\ldots \subset A_{m-1}\subset A_m\subset A
$$
induces a filtration of the complex $\Delta(A)$,
$$
\Delta(A_0)\subset \Delta(A_1)\subset\ldots \subset
\Delta(A_{m-1})\subset \Delta(A_m)\subset \Delta(A).
$$
This filtration in turn induces a filtration
$\{F_j(\Omega)\}_{j=0}^m$ of the associated generalized,
cycle-free chessboard complex $\Omega=\Omega(A,X\times Y,\alpha)$.
If $X\times Y\subset A_0$ then clearly
$F_j(\Omega)=\Omega(A_j,X\times Y,\alpha)$. We are particularly
interested in filtrations where $A_j\setminus A_{j-1}=\{a_j\}$ is
a singleton. Consequently a filtration is determined once we
choose a linear order of the elements (elementary squares) of the
set $A\setminus A_0$.
\medskip
A basic fact and a well-known consequence of the {\em
Gluing Lemma} \cite{Brown} is that the homotopy type of the
``double mapping cylinder'' (homotopy colimit) of the diagram
$B\stackrel{f}\longleftarrow A \stackrel{g}\longrightarrow C$ of
spaces (complexes) depends only on homotopy types of maps $f$ and
$g$. It follows that if both maps $f$ and $g$ are homotopic to
constant maps the associated double mapping cylinder has the
homotopy type of a wedge $B\vee \Sigma(A)\vee C$. From here we
immediately deduce that if a simplicial complex $X = X_1\cup X_2$
is expressed as a union of its sub-complexes such that both $X_1$
and $X_2$ have the homotopy type of a wedge of $n$-dimensional
spheres while the intersection $X_1\cap X_2$ is a wedge of
$(n-1)$-dimensional spheres, then the complex $X$ is also a wedge
of $n$-dimensional spheres. An immediate consequence is the
following lemma.
\begin{lema}\label{lem:vertex}
Let $K$ be a finite simplicial complex. Given a vertex $v\in K$,
let ${\rm Link}_K(v)$ and ${\rm Star}_K(v)$ be the link and star
subcomplex of $K$. Let $A$-${\rm Star}_K(v) = K\setminus \{v\}$ be
the ``anti-star'' of $v$ in $K$, i.e.\ the complex obtained by
deleting $v$ from all simplices, or equivalently by removing the
``open star'' of $v$ from $K$. If $A$-${\rm Star}(v)$ is homotopy
equivalent to a wedge of $n$-dimensional spheres and ${\rm
Link}_K(v)$ is homotopy equivalent to a wedge of
$(n-1)$-dimensional spheres, then the complex $K$ itself has the
homotopy type of a wedge of $n$-dimensional spheres.
\end{lema}
One way of proving that a simplicial complex is homotopically a
wedge of $n$-spheres is to iterate Lemma~\ref{lem:vertex}. In the
following section we show that among the complexes where this
strategy can be successfully carried out are some generalized
cycle-free complexes.
\section{Complexes $\Omega_{n,m}$}
\begin{prop}\label{prop:primena}
The complex $\Omega_{n,m}$ is homotopy equivalent to a wedge of
$(n-1)$-dimensional spheres provided $m\geq n$.
\end{prop}
\medskip\noindent
{\bf Proof:} Let us establish the statement for all complexes
$\Omega_{n,m}$, where $m\geq n$, by induction on $n$. Note that
$\Omega_{2,2}$ is a circle and that $\Omega_{2,m}$ for $m\geq 3$
is always a connected, $1$-dimensional complex, hence a wedge of
$1$-spheres.
Assume, as an inductive hypothesis, that $\Omega_{n,m}$ is
homotopy equivalent to a wedge of $(n-1)$-spheres for each $m\geq n$.
Our model for $\Omega_{n,n}$ will be the complex $\Omega_{X,Z}^{Y,\emptyset}$
where $X=\{x_i\}_{i=1}^n, Y=\{y_i\}_{i=1}^n, Z=\{z_i\}_{i=1}^{n}$,
and $\alpha : Y\rightarrow X$ is the canonical bijection
$y_j\mapsto x_j$.
Our model for $\Omega_{n+1,n+1}$ will be the complex
$\Omega_{X',Z'}^{Y',\emptyset}$ where $X'=X\cup\{x_0\}, Y'=Y\cup\{y_0\},
Z'=Z\cup\{z_0\}$ and the bijection $\alpha' : Y'\rightarrow X'$ is
the (unique) extension of $\alpha$ characterized by
$\alpha'(y_0)=x_0$.
Following the strategy outlined in Section~\ref{sec:filtrations},
we define a filtration of the complex $\Omega_{n+1,n+1}$ by
choosing $A_0=\{(z_0,y_0)\}\cup ((X'\cup Z)\times Y)$ as the
initial chessboard and selecting a linear order on the set
$W:=((X\cup Z)\times \{y_0\}) \cup (\{z_0\}\times Y)$ of
elementary squares (Figure~\ref{fig:chess-1}). Note that the
element $(x_0,y_0)$ is omitted since it is not allowed to be a
vertex of the cycle-free complex $\Omega(X'\times Y',\alpha')$. Let
$$
P = \{(x_i,y_0)\}_{i=1}^n, \quad
Q=\{(z_i,y_0)\}_{i=1}^{n}, \quad R=\{(z_0,y_i)\}_{i=1}^n.
$$
List elements of $W= P\cup Q\cup R$ in the order of appearance in
this $\cup$-decomposition. Within each of the blocks $P, Q, R$ the
elements can be ordered in an arbitrary way, say according to the
index $i=1,\ldots,n$.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.70]{chess-1.eps}
\caption{}\label{fig:chess-1}
\end{figure}
If $W=\{v_k\}_{k=1}^{N}$ where $N=3n$, let
\begin{equation}
\{(z_0,y_0)\}\cup ((X'\cup Z)\times Y)=A_0\subset A_1\subset\ldots
\subset A_{N}=A=(X'\cup Z')\times Y' \label{eqn:filtration}
\end{equation}
be the filtration defined by $A_j:=A_0\cup\{v_k\}_{k=1}^j$. Let
$\{\Delta(A_j)\}_{j=0}^N$ be the associated filtration of the
chessboard complex $\Delta(A)$ and let $\{F_j(\Omega)\}_{j=0}^N$
be the induced filtration on the generalized cycle-free complex
$\Omega = \Omega(A,X'\times Y',\alpha')$. Note that
$F_j(\Omega)=\Omega(A_j,X'\times Y',\alpha')$ for $j\geq n$ while
in general $F_j(\Omega)=\Omega(A,X'\times Y',\alpha')\cap
\Delta(A_j)$.
By Proposition~\ref{prop:structural}, the homotopy type of the
link ${\rm Link}_k(v_k)$ of $v_k$ in the complex $F_k(\Omega)$ can
be described as follows.
\begin{enumerate}
\item[(I)] $v_k\in P$, i.e.\ $v_k = (x_i,y_0)$ for some
$i=1,\ldots, n$.
$$
{\rm Link}_{k}(v_k) \cong \Omega_{n,n}
$$
\item[(II)] $v_k\in Q$, i.e.\ $v_k = (z_i,y_0)$ for some
$i=1,\ldots, n$.
$$
{\rm Link}_{k}(v_k) \cong \Omega_{n,n}.
$$
\item[(III)] $v_k\in R$, i.e.\ $v_k=(z_0,y_i)$ for some
$i=1,\ldots, n$.
$$
{\rm Link}_{k}(v_k) \cong \Omega_{n,n+1}.
$$
\end{enumerate}
The complex $F_0(\Omega)$ is a cone with apex $(z_0,y_0)$, hence
it is contractible. In all cases (I)--(III), by the inductive
hypothesis, the complexes $\Omega_{n,n}$ and $\Omega_{n,n+1}$ have
the homotopy type of a wedge of $(n-1)$-dimensional spheres.
Consequently, by repeated use of Lemma~\ref{lem:vertex},
$\Omega_{n+1,n+1}$ has the homotopy type of a wedge of
$n$-dimensional spheres.
\medskip
It remains to be shown that the complex $\Omega_{n+1,m}$ has the
homotopy type of a wedge of $n$-dimensional spheres if $m>n+1$.
This is achieved by expanding the filtration
(\ref{eqn:filtration}) by adding vertices from new columns, in
some order, and applying the same argument as above.
\hfill$\square$
\section{Complexes $\Omega_{n,m}$ and the nerve lemma}
A classical result of topological combinatorics is the Nerve
Lemma. It was originally proved by J.~Leray in \cite{Le45}, see
also \cite{Bjo} for a more recent overview of applications and
related results.
\begin{lema}
{\bf (Nerve Lemma, \cite{Le45})} Let $\Delta$ be a simplicial
complex and $\{ L_i\}_{i=1}^k$ a family of subcomplexes such that
$\Delta = \cup_{i=1}^k~L_i.$ Suppose that every nonempty
intersection $L_{i_1} \cap L_{i_2} \cap \ldots \cap L_{i_t}$ is
$(\mu-t+1)$-connected for $t \geq 1.$ Then $\Delta$ is
$\mu$-connected if and only if ${\cal N}(\{L_i \}_{i=1}^k),$ the
nerve of the covering $\{L_i \}_{i=1}^k,$ is $\mu$-connected.
\end{lema}
In the preceding section we showed that for $m\geq n$ the
complex $\Omega_{n,m}$ is a wedge of $(n-1)$-dimensional spheres,
consequently it is $(n-2)$-connected. Here we continue the
analysis of these complexes and establish a lower bound for the
connectivity of the complex $\Omega_{n,m}$ for any $m\geq 1$.
\begin{prop}
\label{prop:pomoc} The complex $\Omega_{n,m}$ is
$\mu_{n,m}$-connected, where $$\mu_{n,m}=\min \left\{\left[ \frac
{2n+m}3 \right]-2,n-2\right\}.$$
\end{prop}
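The floor-function arithmetic behind $\mu_{n,m}$, which is used repeatedly in the proof below, can be sanity-checked numerically. The following short script (ours; a check of the arithmetic only, not of the topology) verifies the two identities invoked in the case analysis.

```python
def mu(n, m):
    """mu_{n,m} = min{ floor((2n+m)/3) - 2, n - 2 }."""
    return min((2 * n + m) // 3 - 2, n - 2)

# For m >= n the minimum is attained at n - 2 ...
assert all(mu(n, m) == n - 2
           for n in range(2, 30) for m in range(n, 40))

# ... while for 1 <= m <= n - 1 one has mu_{n,m} <= n - 3 and the
# identity floor((2(n-2)+(m+1))/3) - 2 = mu_{n,m} - 1 used for t = 2.
for n in range(3, 30):
    for m in range(1, n):
        assert mu(n, m) <= n - 3
        assert (2 * (n - 2) + (m + 1)) // 3 - 2 == mu(n, m) - 1

print(mu(2, 1), mu(2, 2), mu(5, 5))   # -1 0 3
```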
\medskip\noindent
{\bf Proof:} We proceed by induction on $n$. For $n=2$, the
complex $\Omega_{2,1}$ is the union of two segments and so
non-empty (or $(-1)$-connected), and for $m\geq 2$ the complex
$\Omega_{2,m}$ is clearly connected (or $0$-connected).
Let us suppose that complexes $\Omega_{r,m}$ are
$\mu_{r,m}$-connected, whenever $r\leq n-1$, and consider the
complex $\Omega_{n,m}$. If $m\geq n$, then $\mu_{n,m}=n-2$, and
the complex $\Omega_{n,m}$ is $(n-2)$-connected by Proposition
\ref{prop:primena}. Suppose that $1\leq m\leq n-1$, which implies
that $\mu_{n,m}\leq n-3$.
We use $\Omega_{[n],Z}^{[n],\emptyset}$, where $\vert Z\vert =m$,
as a model for the complex $\Omega_{n,m}$. For example, in order
to keep our chessboards in $\mathbb{Z}^2$, we could take
$Z=\{-1,-2,\ldots,-m\}$. Let $\mathcal{L}_{n,m}=\{L_{z,i} \mid z\in
Z, i\in [n]\}$ be the family of subcomplexes of $\Omega_{n,m}$
where by definition $L_{z,i}:={\rm Star}((z,i))$ is the union of
all simplices with $(z,i)$ as a vertex, together with their faces.
Every maximal simplex in $\Omega_{n,m}$ must have a vertex
belonging to $Z\times [n]$. So, the collection $\mathcal{L}_{n,m}$
of contractible complexes is a covering of $\Omega_{n,m}$.
Let us apply the Nerve Lemma. It is easy to see that the
intersection of any $n-1$ of the complexes $L_{z,i}$ is nonempty. It
follows that the nerve $\mathcal{N}(\mathcal{L}_{n,m})$ of the
covering contains the full $(n-2)$-dimensional skeleton of a simplex on its vertex set, hence it
is at least $(n-3)$-connected. It remains to show that the
intersection of any subcollection of $t$ of these complexes is at
least $(\mu_{n,m}-t+1)$-connected.
For the reader's convenience, we begin with the simplest case
$t=2$. There are three possibilities for the intersection
$L_{z_1,i}\cap L_{z_2,j}$.
\begin{itemize}
\item If $z_1\neq z_2$ and $i\neq j$, this intersection is a join
of the interval spanned by vertices $(z_1,i),(z_2,j)$, and a
subcomplex of type $\Omega_{n-2,m}$. Therefore, it is
contractible.
\item If $z_1\neq z_2$ and $i=j$, this intersection is the
subcomplex of type $\Omega_{n-1,m-1}$, which is at least
$\mu_{n-1,m-1}=(\mu_{n,m}-1)$-connected by the induction
hypothesis.
\item If $z_1=z_2$ and $i\neq j$, this intersection is the
subcomplex of the type $\Omega_{n-2,m+1}$ which is
$\mu_{n-2,m+1}$-connected.
Note that $\left[ \frac{2(n-2)+(m+1)}3\right]-2=\mu_{n,m}-1$. Also,
$(n-2)-2\geq \mu_{n,m}-1$ because $\mu_{n,m}\leq n-3$. Therefore,
$\mu_{n-2,m+1}\geq \mu_{n,m}-1$.
\end{itemize}
Similar arguments apply also in the case $t\geq 3$. The
intersection $L_{z_1,i_1}\cap L_{z_2,i_2}\cap \cdots \cap
L_{z_t,i_t}$ could be either contractible (when for some $h\in
\{1,2,...,t\}$ both $z_h$ and $i_h$ are different from all other
$z_j$ and $i_j$ respectively), or it could be a subcomplex of the
type $\Omega_{r,s}$ where both $r\geq n-t$ and $r+s\geq n+m-t$.
Then $2r+s\geq 2n+m-2t\geq 2n+m-3t+3$. In fact one can easily
prove more, namely that $2r+s\geq 2n+m-\frac 32 t$, but the more
precise estimate is needed only in the case $t=2$.
The above inequality implies $\left[ \frac {2r+s}3\right] -2\geq
\mu_{n,m}-t+1$. Also, $r-2\geq n-t-2\geq \mu_{n,m}-t+1$, because
$\mu_{n,m}\leq n-3$.
These two facts together imply that $\mu_{r,s}=\min \left\{ \left[
\frac {2r+s}3\right]-2,r-2\right\}\geq \mu_{n,m}-t+1$ which is
precisely the desired inequality. \hfill $\square$
\section{Complexes $\Omega_n$}
Now we are ready to prove our main result, i.e.\ to establish the
high connectivity of the complex $\Omega_n$.
\begin{prop}
For each $n\geq 5$, $\pi_1(\Omega_n)=0$.
\end{prop}
\medskip\noindent
{\bf Proof:} We apply the Nerve Lemma to the complex $L := L_1\cup
L_2\cup L_3$ where $L_i$ is the subcomplex of $\Omega_n$ based on
the chessboard $[n]\times ([n]\setminus\{i\})$. In other words, a
simplex $\sigma\in\Omega_n$ is in $L_i$ if and only if it doesn't
have a vertex of the type $(\cdot, i)$.
It is clear that the $1$-skeleton of $\Omega_n$ is a subcomplex of
$L$, hence it suffices to show that $L$ is $1$-connected. Since
$L_1\cap L_2\cap L_3\neq\emptyset$ it is sufficient to show that
$L_i$ is $1$-connected for each $i$ and that $L_i\cap L_j$ is
connected for each pair $i\neq j$. Since $L_i\cong \Omega_{n-1,1}$
and $n\geq 5$ the first part follows from
Proposition~\ref{prop:pomoc}. Similarly, since $L_i\cap L_j\cong
\Omega_{n-2,2}$, again by Proposition~\ref{prop:pomoc} the complex
$L_i\cap L_j$ is connected if $n\geq 5$. \hfill $\square$
\begin{theo}
\label{thm:main} The complex $\Omega_n$ is $\mu_n$-connected,
where $\mu_n=\left[ \frac {2n-1}3\right] -2$.
\end{theo}
\medskip\noindent
{\bf Proof:} For $n=2$ the complex $\Omega_2$ consists of two
points and is nonempty or $(-1)$-connected. For $n=3$ the complex
$\Omega_3$ is also nonempty ($(-1)$-connected), being a union of
two disjoint circles. The complex $\Omega_4$ is $0$-connected.
Indeed, each pair $v_0, v_1$ of vertices in $\Omega_4$ belongs to
a subcomplex isomorphic to $\Omega_{2,2}$ which is connected.
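These small cases can be confirmed by direct enumeration. The sketch below (ours; $\alpha$ is the identity, so diagonal squares are forbidden as cycles of length one) verifies that $\Omega_3$ is a graph with $6$ vertices, $6$ edges, no $2$-simplices, and two connected components, i.e.\ a disjoint union of two circles.

```python
from itertools import combinations

def is_simplex(conf):
    """A face of Omega_n (alpha = identity): mutually non-taking rooks
    whose row -> column successor map has no directed cycle."""
    rows, cols = [r for r, _ in conf], [c for _, c in conf]
    if len(set(rows)) < len(rows) or len(set(cols)) < len(cols):
        return False
    succ = dict(conf)
    for start in succ:
        x = succ[start]
        for _ in range(len(conf)):
            if x == start:
                return False
            if x not in succ:
                break
            x = succ[x]
    return True

verts = [(r, c) for r in range(1, 4) for c in range(1, 4) if r != c]
edges = [e for e in combinations(verts, 2) if is_simplex(e)]
triangles = [t for t in combinations(verts, 3) if is_simplex(t)]
print(len(verts), len(edges), len(triangles))   # 6 6 0: a graph

def components(vs, es):
    """Count connected components by union-find."""
    parent = {v: v for v in vs}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in es:
        parent[find(a)] = find(b)
    return len({find(v) for v in vs})

print(components(verts, edges))   # 2: two disjoint circles
```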
Let us assume that $n\geq 5$. We already know that
$\pi_1(\Omega_n)=0$ so it remains to be shown that
$H_j(\Omega_n)\cong 0$ for $j\leq \mu_n$. We establish this fact
by induction on $n$.
Let us suppose that the statement of the theorem is true for
complexes $\Omega_{n-2}$ and $\Omega_{n-1}$. Consider the
subcomplex $\Theta_n$ of $\Omega_n$ formed by simplices having
possibly a vertex of the type $(1,i)$ or $(j,1)$ but not both.
Here is an excerpt from the long homology exact sequence of the
pair $(\Omega_n,\Theta_n)$.
\begin{equation}
\label{long} \cdots \to H_{\mu_n}(\Theta_n)\to
H_{\mu_n}(\Omega_n)\to H_{\mu_n}(\Omega_n,\Theta_n)\to \cdots
\end{equation}
We need yet another exact sequence involving complexes $\Omega_n$
and $\Theta_n$. For motivation, the reader is referred to
\cite{ShaWa04} where similar sequences are constructed in the
context of usual chessboard complexes.
Let us denote by $\Theta_n^1$ the subcomplex of $\Theta_n$
consisting of simplices having possibly a vertex of the type $(1,i)$
but no vertex of the type $(j,1)$, and by $\Theta_n^2$ the subcomplex
of $\Theta_n$ consisting of simplices having possibly a vertex of the
type $(j,1)$ but no vertex of the type $(1,i)$. We use the
Mayer-Vietoris sequence for the decomposition
$\Theta_n=\Theta_n^1\cup \Theta_n^2$. Obviously $\Theta_n^1\cap
\Theta_n^2=\Omega_{n-1}$ so we obtain the following exact sequence
\begin{equation}
\label{Mayer} \cdots \to H_{\mu_n}(\Theta_n^1)\oplus
H_{\mu_n}(\Theta_n^2) \to H_{\mu_n}(\Theta_n) \to
H_{\mu_n-1}(\Omega_{n-1}) \to \cdots
\end{equation}
\noindent Since both $\Theta_n^1$ and $\Theta_n^2$ are the
complexes of type $\Omega_{n-1,1}$, they are
$\mu_{n-1,1}$-connected by Proposition \ref{prop:pomoc}. Since
$\mu_{n-1,1}=\left[ \frac {2n-1}3\right]-2=\mu_n$ we observe that
$H_{\mu_n}(\Theta_n^1)\oplus H_{\mu_n}(\Theta_n^2)=0$.
The complex $\Omega_{n-1}$ is $\mu_{n-1}$-connected by the
induction hypothesis, and $\mu_{n-1}=\left[ \frac {2n-3}3\right]
-2\geq \left[ \frac {2n-1}3\right] -2-1=\mu_n-1$. Therefore,
$H_{\mu_n-1}(\Omega_{n-1})=0$.
These facts, together with the exactness of the sequence
(\ref{Mayer}), allow us to conclude that $H_{\mu_n}(\Theta_n)=0$.
The homology of the pair $(\Omega_n,\Theta_n)$ is isomorphic to
the homology of the quotient $\Omega_n/\Theta_n$. If we denote by
$I_{i,j}$ (for $i\neq j$) the $1$-simplex with endpoints $(1,i)$
and $(j,1)$, an argument similar to the one from Proposition
\ref{prop:link} shows that this quotient is homotopy equivalent to
the wedge
$$\bigvee_{1<i\neq j\leq n} (I_{i,j}\ast \Omega_{n-2})/(\partial
I_{i,j}\ast \Omega_{n-2}).$$
Each quotient $(I_{i,j}\ast \Omega_{n-2})/(\partial I_{i,j}\ast
\Omega_{n-2})$ is homotopy equivalent to the double suspension
$\Sigma^2(\Omega_{n-2})$ of the complex $\Omega_{n-2}$. These double
suspensions are by the induction hypothesis
$(\mu_{n-2}+2)$-connected, and $(\mu_{n-2}+2)=\left[ \frac
{2n-5}3\right] \geq \left[ \frac {2n-1}3\right] -2=\mu_n$.
Therefore $H_{\mu_n}(\Omega_n,\Theta_n)=0$.
Finally, from the exact sequence (\ref{long}) we deduce
$H_{\mu_n}(\Omega_n)=0$, which completes our inductive argument.
\hfill $\square$
\bigskip
Substituting $n=p+1$ and taking the suspension, one immediately
obtains the desired estimate for the connectivity of the complex
$\mbox{Sym}^{(p)}_*$ introduced by Ault and Fiedorowicz in
\cite{AuFie07}.
\begin{cor}\label{cor:rezultat}
\label{posl} The complex $\mbox{Sym}^{(p)}_*$ is $\left[ \frac
23(p-1)\right]$-connected.
\end{cor}
\medskip\noindent
{\bf Proof:} Since by definition $\mbox{Sym}^{(p)}_*=\Sigma
\Omega_{p+1}$ it is $\gamma_p$-connected where
$$\gamma_p=\left[ \frac {2(p+1)-1}3\right] -2+1=\left[ \frac 23
(p-1)\right].$$ \hfill $\square$
\section{Tightness of the bound}
\label{sec:tightness}
Our objective in this section is to explore how far from being
tight is the connectivity bound established in
Theorem~\ref{thm:main}. Our central result is
Theorem~\ref{thm:tight1} which says that the constant $\mu_{n}$ is
the best possible at least if $n=3k+2$ for some $k\geq 1$.
\subsection{The case $n=3k+2$}
It is well known that $H_2(\Delta_{5,5})\cong \mathbb{Z}_3$,
\cite{BLVZ}, \cite{ShaWa04}, \cite{Jo07-2}. This fact was
essentially established in \cite{BLVZ}, Proposition~2.3.
Unfortunately the proof of this proposition suffers from an easily
rectifiable error which was detected too late for a correction to
be inserted in the final version of \cite{BLVZ}. Since the proof of
Proposition~\ref{prop:pet-puta-pet} depends on this result, we
start with a proposition which isolates the needed fact, points to
the error in the original proof of Proposition~2.3, and shows how
it should be corrected.
Recall that the chessboard complex $\Delta_{3,4}$ is isomorphic to
a torus $T^2$. More precisely, \cite{BLVZ}, p.\ 30, the universal
covering space of $\Delta_{3,4}$ is the triangulated honeycomb
tessellation of the plane. An associated fundamental domain for
$\Delta_{3,4}=\mathbb{R}^2/\Gamma$ is depicted in
Figure~\ref{fig:torus-2} with the lattice $\Gamma$ generated by
vectors $x=\overrightarrow{AB}$ and $y=\overrightarrow{AC}$. As is
clear from the picture, $x=4a +2b$ and $y=2a+4b$ are generators of
the lattice $\Gamma:=H_1(\Delta_{3,4})$, where
$a:=\overrightarrow{AX}$ and $b:=\overrightarrow{AY}$. If
$\Gamma_1$ is the lattice spanned by vectors $6a$ and $6b$ then
$\Gamma_1\subset\Gamma$ and $\Gamma/\Gamma_1\cong \mathbb{Z}_3$.
As a consequence (\cite{BLVZ}, Lemma~2.2.),
$${\rm Coker}(H_1(\Delta_{3,3}) \rightarrow
H_1(\Delta_{3,4}))\cong \Gamma/\Gamma_1 \cong \mathbb{Z}_3.$$
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.5]{torus-3.eps}
\caption{The chessboard complex
$\Delta_{3,4}$.}\label{fig:torus-2}
\end{figure}
\begin{prop}\label{prop:korekcija}
There is an isomorphism
$$
H_2(\Delta_{5,5}) \cong \oplus_{i=1}^4 H_1(\Delta^i_{3,4})/N\cong
\Gamma^{\oplus 4}/N \cong \mathbb{Z}_3
$$
where $\Delta^i_{3,4}\cong \Delta_{3,4}$ for each $i$ and $N=A+B$,
where $A=\Gamma_1^{\oplus 4}$ and $B = {\rm Ker}(\Gamma^{\oplus
4}\stackrel{\theta}\rightarrow \Gamma)$,
$\theta(x,y,z,t)=x+y+z+t$.
\end{prop}
\medskip\noindent
{\bf Proof:} The proof follows in the footsteps of the proof of
Proposition~2.3.\ from \cite{BLVZ}. The only defect in the proof
of that proposition is an incorrect determination of the kernel
${\rm Ker}(\gamma)$ of the homomorphism $\gamma :
H_1(\Delta_{3,3}^i)\rightarrow H_1(\overline{\Delta}_{4,3})$ in
the commutative diagram (loc.\ cit.), leading to the omission of
the factor $B$ in the decomposition $N=A+B$. As a consequence, the
group $H_2(\Delta_{5,5})$ is isomorphic to the group
$\Gamma^{\oplus 4}/N\cong \Gamma/\Gamma_1\cong \mathbb{Z}_3$,
rather than to group $\Gamma^{\oplus 4}/A\cong
\mathbb{Z}_3^{\oplus 4}$, as erroneously stated in the formulation
of Proposition~2.3.\ in \cite{BLVZ}. \hfill $\square$
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.80]{cetri-pet.eps}
\caption{A non-zero class in
$H_2(\Omega_5)$.}\label{fig:cetri-pet}
\end{figure}
\begin{exam}\label{exam:primena}{\rm
Proposition~\ref{prop:korekcija} is in practice applied as
follows. Suppose we want to check if a subcomplex $S\subset
\Delta_{4,5}\subset \Delta_5$ contributes a non-trivial
$2$-dimensional class to $H_2(\Delta_5)$. For example let $S\cong
S^1\ast S^0$ be the $2$-sphere shown in
Figure~\ref{fig:cetri-pet}~(a) where $S^1$ is the hexagon shown in
Figure~\ref{fig:cetri-pet}~(c) and $S^0=\{(3,5),(4,5)\}$. Let
$\Delta^i_{3,4},\, 1\leq i\leq 4$, be the chessboard complex
associated to the chessboard $[4]\setminus\{i\}\times [4]$ so for
example $\Delta_{3,4}^4\cong\Delta_{3,4}\cong\Gamma$ is associated
to the board depicted in Figure~\ref{fig:cetri-pet}~(b). Recall
that $\Delta_{4,5}/\Delta_{4,4}\cong
\vee_{i=1}^4~\Sigma(\Delta_{3,4}^i)$. Let $\nu :
H_2(\Delta_{4,5})\rightarrow \oplus_{i=1}^4
H_1(\Delta_{3,4}^i)\cong \Gamma^{\oplus 4}$ be a homomorphism
associated to the natural projection $\Delta_{4,5}\rightarrow
\Delta_{4,5}/\Delta_{4,4}$. Then the fundamental class $[S]$ is a
non-trivial element in $H_2(\Delta_5)$ if and only if $\nu([S])$
is not an element of $N=A+B$.
For example in our case the image of $[S]$ in $\Gamma^{\oplus 4}/N
\cong \Gamma/\Gamma_1\cong \mathbb{Z}_3$ is equal to the image of
the class of the circle $S^1$ depicted in
Figure~\ref{fig:cetri-pet}~(c) in $\Gamma/\Gamma_1$. By inspection
of Figure~\ref{fig:torus-2} we observe that this class is a
generator of $\Gamma$, hence $[S]$ is a generator of
$H_2(\Delta_{5})$. }
\end{exam}
\begin{prop}\label{prop:pet-puta-pet}
The inclusion $\Omega_5\hookrightarrow \Delta_{5}$ induces an
epimorphism $$H_2(\Omega_5)\stackrel{\alpha}{\longrightarrow}
H_2(\Delta_5)\cong \mathbb{Z}_3.$$ Moreover, for a class $[S]$
such that $\alpha([S])$ is a generator in $H_2(\Delta_5)$, one can
choose the fundamental class of the $2$-sphere $S\cong S^1\ast
S^0\cong \Omega_{2,2}\ast \Delta_{2,1}\subset \Omega_5$, depicted
in Figure~\ref{fig:cetri-pet}, where $\Omega_{2,2}\subset
\Delta_{[2],[4]}$ and $\Delta_{2,1}\cong \Delta(\{(3,5),(4,5)\})$.
\end{prop}
\medskip\noindent
{\bf Proof:} We have already demonstrated in
Example~\ref{exam:primena} that the image of $[S]$ in
$H_2(\Delta_5)$ is non-zero so the proof follows from the
observation that $S\subset \Omega_5$. \hfill $\square$
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.80]{jedanaest.eps}
\caption{The complex $\Omega_{11}$ is not
$6$-connected.}\label{fig:jedanaest}
\end{figure}
\begin{theo}\label{thm:tight1}
The inclusion map $\Omega_{3k+2}\hookrightarrow \Delta_{3k+2}$
induces a non-trivial homomorphism
$$
H_{2k}(\Omega_{3k+2})\longrightarrow H_{2k}(\Delta_{3k+2}).
$$
It follows that $H_{2k}(\Omega_{3k+2})$ is non-trivial, hence the
cycle-free chessboard complex $\Omega_{3k+2}$ is
$(2k-1)$-connected but not $(2k)$-connected for each $k\geq 1$.
\end{theo}
\medskip\noindent
{\bf Proof}: We already know that the result is true in the case
$k=1$. The general case is not much more difficult to prove in
light of the properties of chessboard complexes of the form
$\Delta_{3k+2}$ established in \cite{ShaWa04}. For example
Theorem~5.4.\ (loc.\ cit.) implies that
$H_{2k}(\Delta_{3k+2})\cong \mathbb{Z}_3$. Moreover, a generator
of this group is determined by a sphere $S^{2k}\cong
S^0\ast\ldots\ast S^0$ obtained as a join of $(2k+1)$ copies of
$S^0$ such that $(k+1)$ of them are vertical and the remaining $k$
are horizontal ``dominoes'', i.e.\ complexes of the form
$\Delta_{2,1}$ and $\Delta_{1,2}$ respectively. It is often
convenient to represent two dominoes of different type inside a
chessboard complex of the type $\Delta_{3,3}$; two of these
$(3\times 3)$-chessboards with pairs of complementary dominoes are
indicated in Figure~\ref{fig:jedanaest}.
Let us illustrate the argument leading to the proof of the theorem
in the case of the complex $\Omega_{11}$; the proof of the general
case follows exactly the same pattern. Figure~\ref{fig:jedanaest}
exhibits a sphere $\Sigma:=S\ast S^1\ast S^1\cong S^6$, where $S$
is the $2$-sphere described in Example~\ref{exam:primena} while
the two copies of $S^1$ arise from the dominoes in two $(3\times
3)$-blocks. It is clear that $\Sigma\subset \Omega_{11}$ so it
remains to be shown that the image of $\Sigma$ in $\Delta_{11}$
defines a non-zero homology class.
The image $\nu([S])$ of the class $[S]\in H_2(\Omega_5)$ in
$H_2(\Delta_5)$ is shown in Proposition~\ref{prop:pet-puta-pet} to
be non-trivial hence, according to Theorem~5.4.\ from
\cite{ShaWa04}, it is homologous (in $\Delta_5$) to a sphere
$S_1=S^0\ast S^0\ast S^0$ where two of the ``dominoes'' $S^0$ are
vertical. Hence $[\Sigma]$ is homologous (in $\Delta_{11}$) to the
fundamental class $[\Sigma_1]$ of $\Sigma_1:=S_1\ast S^1\ast S^1$
which, again by Theorem~5.4.\ from \cite{ShaWa04}, is non-trivial.
This completes the proof of the theorem. \hfill $\square$
\subsection{The cases $n=3k$ and $n=3k+1$}
\label{sec:cases}
Unfortunately the methods used in this paper do not allow us to
decide whether the constant $\mu_n$ from Theorem~\ref{thm:main} is the
best possible if $n=3k$ or $n=3k+1$ for some $k\geq 1$.
Nevertheless we are able to show that this bound cannot be
too far from the actual connectivity bound.
\begin{prop}\label{prop:tight-1}
The group $H_{2k-1}(\Omega_{3k,1})$ is non-trivial. Moreover, a
non-trivial element of this group arises as the fundamental class
$\xi_{2k-1} = [\Sigma_{2k-1}]$ of a subcomplex
$\Sigma_{2k-1}\subset \Omega_{3k,1}$ isomorphic to the join
$S^0\ast\ldots\ast S^0\cong S^{2k-1}$ of\, $2k$ copies of\,
$0$-dimensional spheres.
\end{prop}
\medskip\noindent
{\bf Proof:} Our model for $\Omega_{n,1}=\Omega_{3k,1}$ is the
complex $\Omega_{X,Z}^{Y,\emptyset}$ where $X=\{x_1,\ldots,
x_n\}$, $Y=\{y_1,\ldots,y_n\}$, $Z=\{x_0\}$ and the bijection
$\alpha : Y\rightarrow X$ maps $y_j$ to $x_j$. The case $k=3$ is
depicted in Figure~\ref{fig:tight-1} where the shaded squares
correspond (from left to right) to squares $(x_j,y_j)$ while the
first column is filled with squares $(x_0,y_j),\, j=1,\ldots, n$.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.80]{tight-1.eps}
\caption{A cycle in $\Delta_9$.}\label{fig:tight-1}
\end{figure}
Let $T_1:=\{(x_0,y_1), (x_0,y_2)\}, T_2:=\{(x_1,y_3),(x_2,y_3)\},
T_3:=\{(x_3,y_4),(x_3,y_5)\}, \ldots,$
$T_{2k-1}:=\{(x_{3k-3},y_{3k-2}),(x_{3k-3},y_{3k-1})\},
T_{2k}:=\{(x_{3k-2},y_{3k}),(x_{3k-1},y_{3k})\}$. Define
$\Sigma_{2k-1}$ as the join $T_1\ast\ldots\ast T_{2k}$. The proof
is completed by the observation that the cycle $\xi_{2k-1}$,
determined by the sphere $\Sigma_{2k-1}$, does not bound even in
the larger chessboard complex $\Delta_{X\cup\{x_0\},Y}$, cf.\
\cite{ShaWa04}, Section~3. \hfill $\square$
\medskip
The following corollary provides evidence that the connectivity
bound established in Theorem~\ref{thm:main} is either tight or
very close to the actual connectivity bound in the two remaining
cases, $n=3k, n=3k+1$.
\begin{cor} For each $k\geq 1$,
$$ \mbox{ {\rm either} }\quad H_{2k-1}(\Omega_{3k})\neq 0 \quad\mbox{ {\rm or} }
\quad H_{2k-1}(\Omega_{3k+1})\neq 0. $$
\end{cor}
\medskip\noindent
{\bf Proof:} Let $\Omega_{3k+1}$ be the cycle-free chessboard
complex based on the chessboard $[3k+1]\times [3k+1]$. Define
$\Omega_{3k,1}$ as the subcomplex of $\Omega_{3k+1}$ such that a
simplex $S\in \Omega_{3k+1}$ is in $\Omega_{3k,1}$ if and only if
$S\cap (\{1\}\times [3k+1])=\emptyset$. The quotient complex
$\Omega_{3k+1}/\Omega_{3k,1}$ has the homotopy type of a wedge
$\bigvee_{i=1}^{3k+1}\Sigma(\Omega_{3k}^{(i)})$ where each of the
complexes $\Omega_{3k}^{(i)}$ is isomorphic to $\Omega_{3k}$.
Consider the following fragment of the long exact sequence of the
pair $(\Omega_{3k+1},\Omega_{3k,1})$,
$$
\ldots\rightarrow\oplus_{i=1}^{3k+1} H_{2k-1}(\Omega_{3k}^{(i)})
\rightarrow H_{2k-1}(\Omega_{3k,1}) \rightarrow
H_{2k-1}(\Omega_{3k+1})\rightarrow\ldots
$$
The desired conclusion follows from the fact that
$H_{2k-1}(\Omega_{3k,1})\neq 0$. \hfill$\square$
\bigskip\noindent {\bf Conjecture}: The connectivity bound given in
Theorem~\ref{thm:main} is the best possible; in other words, for
each $n\geq 2$,
$$H_{\mu_n+1}(\Omega_n)\neq 0 . $$
\section{Relatives of $\Omega_n$}
\label{sec:comparison}
The closest relative of $\Omega_n$ that has so far appeared in
the literature is the complex $\Delta_n^{DM}$ of directed
matchings introduced by Bj\"orner and Welker in \cite{Bj-We-99};
see also Section~\ref{sec:digraph}. In this section we describe a
natural ``ecological niche'' for all these complexes and briefly
compare their connectivity properties.
In the sequel we put more emphasis on the directed graph
description of $\Omega_n, \Delta_n$ and related complexes
(Section~\ref{sec:digraph}). We silently identify a directed graph
with its set of directed edges (assuming the set of vertices is
fixed and clear from the context).
Let $DK_n$ be the complete directed graph on the set $[n]$ of
vertices (directed loops included) and $K_n^{{\uparrow}}$ its
companion with all loops excluded.
Following \cite{Bj-We-99}, let $\Delta_n^{DM}$ be the directed
graph complex of all {\em directed matchings} in $K_n^{\uparrow}$.
By definition, $\Gamma\subset K_n^{\uparrow}$ is a directed
matching if the in-degree and the out-degree of each vertex are
both at most one. This is equivalent to the condition that the two
graphs depicted in Figure~\ref{fig:omega-1}~(a) are banned from $\Gamma$.
It follows that $\Gamma\subset DK_n$ is in $\Delta_n^{DM}$ if and
only if the connected components of $\Gamma$ are either directed
paths or directed cycles of length at least $2$, i.e.\ the only
difference between $\Delta_n$ and $\Delta_n^{DM}$ is that in the
former complex the cycles of length one (loops) are allowed.
Summarizing, if $\Gamma\subset DK_n$ then
\begin{enumerate}
\item[(1)] $\Gamma\in\Delta_n \Leftrightarrow$ Each connected
component of $\Gamma$ is either a directed path or a directed
cycle,
\item[(2)] $\Gamma\in\Delta_n^{DM} \Leftrightarrow$ Each connected
component of $\Gamma$ is either a directed path or a directed
cycle of length at least $2$,
\item[(3)] $\Gamma\in\Omega_n \Leftrightarrow$ Each connected
component of $\Gamma$ is a directed path.
\end{enumerate}
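The characterizations (1)--(3) are straightforward to test algorithmically. The following Python sketch (ours, purely illustrative; the edge-list representation and the function name are our own conventions, not part of the paper) decides the membership of an edge set $\Gamma\subset DK_n$ in $\Delta_n$, $\Delta_n^{DM}$, and $\Omega_n$ by checking the degree condition and then searching for loops and directed cycles:

```python
def membership(edges, n):
    """Decide whether a directed graph on vertices 0..n-1 (loops allowed)
    lies in Delta_n, Delta_n^DM, and Omega_n, following (1)-(3)."""
    indeg = [0] * n
    outdeg = [0] * n
    for a, b in edges:
        outdeg[a] += 1
        indeg[b] += 1
    # the components are paths or cycles iff all in- and out-degrees are at most one
    if any(d > 1 for d in indeg) or any(d > 1 for d in outdeg):
        return False, False, False
    succ = dict(edges)  # out-degree <= 1, so the edges define a partial successor map

    def on_cycle(v):
        u = succ.get(v)
        for _ in range(n):
            if u is None:
                return False   # the walk ran off the end of a path
            if u == v:
                return True    # the walk returned to its start
            u = succ.get(u)
        return False

    has_cycle = any(on_cycle(v) for v in succ)
    has_loop = any(a == b for a, b in edges)
    return True, not has_loop, not has_cycle  # Delta_n, Delta_n^DM, Omega_n
```

Since the out-degrees are at most one, every directed cycle can be found by simply following successors, so no general cycle-detection machinery is needed.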
The following definition introduces (some of the) natural intermediate
complexes which interpolate between $\Omega_n$ and $\Delta_n$,
respectively between $\Omega_n$ and $\Delta_n^{DM}$.
\begin{defin}\label{def:filtration}
Let $F_p^I=F_p(\Delta_n)$, respectively
$F_p^{II}=F_p(\Delta_n^{DM})$, be the subcomplex of $\Delta_n$
(respectively $\Delta_n^{DM}$) such that $\Gamma\in F_p(\Delta_n)$
(respectively $\Gamma\in F_p(\Delta_n^{DM})$) if and only if the
number of cycles among the connected components of $\Gamma$ is at
most $p$.
\end{defin}
\begin{defin}
A graph $C\subset DK_n$ is a $p$-multicycle if $C=C_1\uplus
C_2\uplus\ldots\uplus C_p$ has exactly $p$ connected components
and each $C_j$ is a cycle. If $l_j:=l(C_j)$ is the length of $C_j$
then the multiset $t(C):=(l_1,l_2,\ldots,l_p)$ is called the type
of $C$ and the number $l(C):=l(C_1)+\ldots +l(C_p)$ is called the
length of the $p$-multicycle $C$. Let $\mathcal{C}_p$ be the set
of all $p$-multicycles and $\mathcal{C}_p^{\geqslant 2}$ the set
of all $p$-multicycles $C$ of type $t(C)=(l_1,l_2,\ldots,l_p)$
such that $l_j\geq 2$ for each $j$.
\end{defin}
\begin{prop}\label{prop:filtration}
\begin{equation}\label{eqn:prva}
F_0^I=F_0^{II}=F_0(\Delta_n)=F_0(\Delta_n^{DM})=\Omega_n
\end{equation}
\begin{equation}\label{eqn:druga}
F_p^I/F_{p-1}^I\simeq \bigvee_{C\in \mathcal{C}_p} S^{l(C)-1}\ast
\Omega_{n-l(C)}\cong \bigvee_{C\in \mathcal{C}_p} \Sigma^{l(C)}(\Omega_{n-l(C)})
\end{equation}
\begin{equation}\label{eqn:treca}
F_p^{II}/F_{p-1}^{II}\simeq \bigvee_{C\in \mathcal{C}_p^{\geqslant 2}} S^{l(C)-1}\ast
\Omega_{n-l(C)}\cong \bigvee_{C\in \mathcal{C}_p^{\geqslant 2}} \Sigma^{l(C)}(\Omega_{n-l(C)})
\end{equation}
\end{prop}
The associated exact (spectral) sequences show that all these
complexes are closely related, in particular have very similar
connectivity properties. Let $\mu_n=[\frac{2n-1}{3}]-2$ and
$\nu_n=[\frac{2n+1}{3}]-2$. The complex $\Delta_n$ is
$\nu_n$-connected, as demonstrated by Bj\"orner et al.\ in
\cite{BLVZ}. The same connectivity bound was established by
Bj\"orner and Welker for $\Delta_n^{DM}$ in \cite{Bj-We-99}. Both
bounds are tight, as proved by Shareshian and Wachs in
\cite{ShaWa04}. It follows from Proposition~\ref{prop:filtration}
that the majority of the complexes $F_p^I$ and $F_p^{II}$ share this
connectivity bound. On the other hand, $\Omega_n$ is
$\mu_n$-connected by Theorem~\ref{thm:main}, hence all these
complexes are $\mu_n=\nu_n$-connected if $n=3k+2$ for some $k$.
\section{Introduction}\label{sec1}
Polarization imaging aims to measure the polarization information described by the Stokes parameters and their derivatives such as the degree of linear polarization (DoLP) and the angle of polarization (AoP). It has a wide range of applications in fields such as remote sensing~\cite{RN1742}, biomedical diagnosis~\cite{RN1741,RN1745}, and interferometry~\cite{RN93,RN1546,RN1545,RN1748,RN1747,RN1749}. The modulation techniques in polarization imaging can be classified into four categories: division-of-time (DoT)~\cite{RN1736}, division-of-amplitude~\cite{RN1737,RN1746}, division-of-aperture~\cite{RN1735}, and division-of-focal-plane (DoFP)~\cite{RN833,RN1725,RN1726}. The DoFP technique achieves the polarization modulation by integrating a polarization filter array in front of a photodetector array. The most commonly used polarization filter array arrangement is shown in Fig.~\ref{fig1}; it is composed of 2$\times$2 periodically patterned 0$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$, and 135$^{\circ}$ linear polarization filters. DoFP polarimeters have the advantages of a compact structure and a high temporal resolution, and are therefore especially suitable for real-time polarization imaging.
\begin{figure}[htb!]
\centering\includegraphics[width=7cm]{fig1.pdf}
\caption{Polarization filter array arrangement composed of 2$\times$2 periodically patterned 0$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$, and 135$^{\circ}$ linear polarization filters.}
\label{fig1}
\end{figure}
A fundamental problem in the DoFP technique is the Stokes parameters reconstruction. Since each pixel can only capture the polarization information in one orientation, the measurement of the Stokes parameters is incomplete. Mathematically, reconstructing the Stokes parameters from the DoFP polarization modulation is an ill-posed inverse problem. Many methods have been proposed to reconstruct the Stokes parameters~\cite{RN1724,RN836,RN1652,RN1593,RN1047,RN1665,RN1051}. Most of these methods are spatial domain interpolation-based methods~\cite{RN1724,RN836,RN1652,RN1593} and frequency domain filtering-based methods~\cite{RN1047,RN1665,RN1051}. In the interpolation-based methods, the DoFP image is split into 0$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$, and 135$^{\circ}$ polarization images. After interpolating the missing pixel values in these polarization images, the interpolation-based methods determine the Stokes parameters through the ordinary least-squares criterion. The interpolation algorithms are usually convolutional, including the nearest-neighbor, bilinear, bicubic, and natural bicubic spline interpolation algorithms~\cite{RN1724,RN836}. Some edge-preserving interpolation algorithms have also been proposed recently~\cite{RN1652}. Based on the characteristics of the spectrum of the DoFP image, the filtering-based methods use filter transfer functions constructed from window functions to reconstruct the Stokes parameters. The window functions used in previous studies include the Hamming~\cite{RN1047}, Gaussian~\cite{RN1665}, and Planck-taper~\cite{RN1051} windows. However, the interpolation-based and filtering-based methods are only suitable for the theoretical case in which the Stokes parameters are periodically modulated. In practice, the manufacturing imperfections of the DoFP polarimeters cause non-uniformity in the performance of the linear polarization filters and photodetectors.
The non-uniformity is characterized by the differences of the major and minor principal transmittances of the linear polarization filters, the differences of the gains and dark offsets of the photodetectors, and the deviations between the actual and designed orientations of the linear polarization filters~\cite{RN1502,RN834,RN1720}. The non-uniformity destroys the periodicity of the polarization modulation. Consequently, the interpolation-based and filtering-based methods will fail in practical applications since these methods are unable to tackle the reconstruction errors and artifacts caused by the non-uniformity.
In this paper, we study the Stokes parameters reconstruction from the DoFP modulation in the presence of the non-uniformity. We propose two reconstruction methods that can tackle the reconstruction errors and artifacts caused by the non-uniformity. The proposed methods are inspired by the classical Lucas-Kanade method~\cite{RN1727} and Horn-Schunck method~\cite{RN1728} in optical flow estimation. One is the ordinary least-squares method (OLSM), which reconstructs the Stokes parameters under the local constant assumption that the Stokes parameters are constant functions on 2$\times$2 subsets. The basic idea of this method has been reported in Refs.~\cite{david2008unpolarized,RN1643} and our previous study~\cite{RN1637}. Here, we further add a four-subset averaging strategy, present in-depth theoretical analyses, and explain the relationship between the OLSM and the interpolation-based methods. The other is the smoothing regularization method (SRM), which reconstructs the Stokes parameters under the global smoothing assumption that the Stokes parameters are spatially smooth. This method is more similar to the filtering-based methods.
This paper is organized as follows: in Section~\ref{sec2}, a linear pixel model is introduced to characterize the non-uniformity; in Section~\ref{sec3}, the OLSM and SRM are presented to reconstruct the Stokes parameters; in Section~\ref{sec4}, Fourier analysis and numerical simulations are used to evaluate the reconstruction errors of the OLSM and SRM; in Section~\ref{sec5}, the choice of the regularization parameters in the SRM is discussed; in Section~\ref{sec6}, the performance of the OLSM, SRM, and interpolation-based and filtering-based methods is evaluated and compared through two experiments; in Section~\ref{sec7}, the performance of the OLSM and SRM is summarized.
\section{Linear pixel model}\label{sec2}
After being integrated with the array composed of the linear polarization filters, the photodetector array can be regarded as sensitive to the first three Stokes parameters. Assuming the performance of the linear polarization filters and photodetectors is linear, for each pixel we have~\cite{RN834}
\begin{equation}\label{eq1}
i(x,y)=m_0(x,y)s_0(x,y)+m_1(x,y)s_1(x,y)+m_2(x,y)s_2(x,y)+d(x,y).
\end{equation}
Here, $x$ and $y$ are the horizontal and vertical pixel coordinates, respectively, $i$ represents the intensity of the DoFP image, $s_0$, $s_1$, and $s_2$ represent the first three Stokes parameters, $m_0$, $m_1$, and $m_2$ are the modulation parameters of $s_0$, $s_1$, and $s_2$, respectively, and $d$ represents the dark offset of the photodetector. Specifically, $m_0$, $m_1$, and $m_2$ are expressed as
\begin{equation}\label{eq2}
\left\{
\begin{array}{l}
m_0(x,y)=\frac{1}{2}g(x,y)[k_1(x,y)+k_2(x,y)] \\
m_1(x,y)=\frac{1}{2}g(x,y)[k_1(x,y)-k_2(x,y)]\cos2\theta(x,y) \\
m_2(x,y)=\frac{1}{2}g(x,y)[k_1(x,y)-k_2(x,y)]\sin2\theta(x,y)
\end{array}
\!.\right.
\end{equation}
Here, $g$ represents the gain of the photodetector, $k_1$ and $k_2$ represent the major and minor principal transmittances of the linear polarization filter, respectively, and $\theta$ represents the orientation of the linear polarization filter.
In the interpolation-based and filtering-based methods, the modulation parameters are regarded as periodically ideal values, expressed as~\cite{RN1047}
\begin{equation}\label{eq3}
\left\{
\begin{array}{l}
m_0(x,y)=1\\
m_1(x,y)=\frac{1}{2}[\cos(\pi x)+\cos(\pi y)]\\
m_2(x,y)=\frac{1}{2}[\cos(\pi x)-\cos(\pi y)]
\end{array}
\!,\right.
\end{equation}
corresponding to $g(x,y)=2$, $k_1(x,y)=1$, $k_2(x,y)=0$, and $\theta(x,y)$ taking $0$, $\pi/4$, $\pi/2$, or $3\pi/4$ according to the arrangement shown in Fig.~\ref{fig1}. In practice, due to the existence of the non-uniformity, the major and minor principal transmittances of the linear polarization filters and the gains and dark offsets of the photodetectors generally show individual differences, and the actual orientations of the linear polarization filters also deviate from the designed orientations. To tackle the reconstruction errors and artifacts caused by the non-uniformity, these parameters need to be calibrated. The calibration of these parameters has been well discussed in Refs.~\cite{RN834,RN1720,RN1637}, therefore we consider $m_0$, $m_1$, $m_2$, and $d$ as known. Notice that here the term calibration refers to experimental processes for measuring the modulation and offset parameters, instead of numerical methods for correcting the non-uniform intensity responses in the DoFP image to the ideal ones~\cite{RN834}. To normalize the modulation parameters and choose an orientation as the reference of 0$^{\circ}$, we apply the scaling and rotation transformation given in~\ref{app:A} to the modulation parameters. Since the influence caused by the non-uniformity of $d$ can be mitigated by simply subtracting $d$ from $i$, we will omit $d$ for brevity.
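As a quick sanity check (ours, not part of the original text), Eq.~(\ref{eq3}) can be reproduced from Eq.~(\ref{eq2}) with $g=2$, $k_1=1$, $k_2=0$; the parity pattern assumed for $\theta$ in the Python sketch below is our own choice of convention, picked to be consistent with Eq.~(\ref{eq3}):

```python
import numpy as np

n = 8
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')

# Orientation pattern by pixel parity (assumed convention consistent with Eq. (3)):
# (even, even) -> 0 deg, (even, odd) -> 45 deg, (odd, even) -> 135 deg, else 90 deg.
theta = (np.pi / 4) * np.select(
    [(X % 2 == 0) & (Y % 2 == 0),
     (X % 2 == 0) & (Y % 2 == 1),
     (X % 2 == 1) & (Y % 2 == 0)],
    [0, 1, 3], default=2)

g, k1, k2 = 2.0, 1.0, 0.0            # ideal gain and principal transmittances

# Eq. (2): general modulation parameters
m0 = 0.5 * g * (k1 + k2) * np.ones((n, n))
m1 = 0.5 * g * (k1 - k2) * np.cos(2 * theta)
m2 = 0.5 * g * (k1 - k2) * np.sin(2 * theta)

# Eq. (3): ideal closed form
m1_ideal = 0.5 * (np.cos(np.pi * X) + np.cos(np.pi * Y))
m2_ideal = 0.5 * (np.cos(np.pi * X) - np.cos(np.pi * Y))

assert np.allclose(m0, 1.0)
assert np.allclose(m1, m1_ideal)
assert np.allclose(m2, m2_ideal)
```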
To generalize our discussions, we consider two sets of modulation parameters. One is the ideal modulation parameters given in Eq.~(\ref{eq3}). The scatterplots of the ideal modulation parameters are shown in the first row of Fig.~\ref{fig2}. We use the ideal modulation parameters to give theoretical evaluations of the performance of the proposed methods, and to explain the relationship between the proposed methods and the interpolation-based and filtering-based methods. The other is the non-uniform modulation parameters obtained from the calibration of our self-developed DoFP polarimeters. The second row of Fig.~\ref{fig2} shows the scatterplots of the non-uniform modulation parameters, and Table~\ref{tab1} gives the means and standard deviations of $g\cdot(k_1+k_2)$, $g\cdot(k_1-k_2)$, and $\theta$ in the non-uniform modulation parameters according to the designed polarization orientations. It can be seen that the non-uniform modulation parameters exhibit strong non-uniformity. We use the non-uniform modulation parameters to illustrate the abilities of the OLSM and SRM to mitigate the reconstruction errors and artifacts caused by the non-uniformity.
\begin{figure}[hbt!]
\centering\includegraphics[width=\textwidth]{fig2.pdf}
\caption{Scatterplots of the ideal and
non-uniform modulation parameters. The points are painted with four different colors according to the designed polarization orientations.}
\label{fig2}
\end{figure}
\begin{table}[hbt!]
\centering
\caption{Means and standard deviations of $g\cdot(k_1+k_2)$, $g\cdot(k_1-k_2)$, and $\theta$.}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllllllll}
\hline
\multirow[b]{2}{*}{\begin{tabular}[c]{@{}l@{}}Polarization\\ orientation\end{tabular}} & \multicolumn{2}{l}{$g\cdot(k_1+k_2)$} & & \multicolumn{2}{l}{$g\cdot(k_1-k_2)$} & & \multicolumn{2}{l}{$\theta$} \\ \cline{2-3} \cline{5-6} \cline{8-9}
& Mean & \begin{tabular}[c]{@{}l@{}}Standard\\ deviation\end{tabular} & & Mean & \begin{tabular}[c]{@{}l@{}}Standard\\ deviation\end{tabular} & & Mean & \begin{tabular}[c]{@{}l@{}}Standard\\ deviation\end{tabular} \\ \hline
0$^{\circ}$ & 0.9583 & 0.1818 & & 0.8822 & 0.1766 & & -2.4422$^{\circ}$ & 1.7530$^{\circ}$ \\
45$^{\circ}$ & 1.0083 & 0.1288 & & 0.9161 & 0.1252 & & 46.9305$^{\circ}$ & 1.8916$^{\circ}$ \\
90$^{\circ}$ & 1.0458 & 0.1319 & & 0.9668 & 0.1286 & & 87.8559$^{\circ}$ & 2.4745$^{\circ}$ \\
135$^{\circ}$ & 0.9876 & 0.1314 & & 0.9141 & 0.1277 & & 137.5902$^{\circ}$ & 1.2284$^{\circ}$ \\ \hline
\end{tabular}}
\label{tab1}
\end{table}
\section{Stokes parameters reconstruction}\label{sec3}
Reconstructing the Stokes parameters requires dealing with the underdetermined system of linear equations composed by Eq.~(\ref{eq1}). This problem has infinite solutions. Generally, some prior constraints need to be introduced to find a desired solution. In this section, the OLSM and SRM are presented to solve the underdetermined system of linear equations by applying the local constant assumption and global smoothing assumption, respectively.
\subsection{Ordinary least-squares method}
For each $2\times2$ subset in the DoFP image, there are four equality constraints and twelve unknown Stokes parameters. By applying the local constant assumption that the Stokes parameters are constant functions in $2\times2$ subsets, the number of the unknowns in each $2\times2$ subset is reduced to three. Then the Stokes parameters can be determined according to the ordinary least-squares criterion, expressed as
\begin{equation}\label{eq4}
\begin{bmatrix}
{{{\hat s}_0}(x + \tfrac{1}{2},y + \tfrac{1}{2})} \\
{{{\hat s}_1}(x + \tfrac{1}{2},y + \tfrac{1}{2})} \\
{{{\hat s}_2}(x + \tfrac{1}{2},y + \tfrac{1}{2})}
\end{bmatrix}
=
\begin{bmatrix}
{{m_0}(x,y)}&{{m_1}(x,y)}&{{m_2}(x,y)} \\
{{m_0}(x,y + 1)}&{{m_1}(x,y + 1)}&{{m_2}(x,y + 1)} \\
{{m_0}(x + 1,y)}&{{m_1}(x + 1,y)}&{{m_2}(x + 1,y)} \\
{{m_0}(x + 1,y + 1)}&{{m_1}(x + 1,y + 1)}&{{m_2}(x + 1,y + 1)}
\end{bmatrix}
^\dag
{\begin{bmatrix}
{i(x,y)} \\
{i(x,y + 1)} \\
{i(x + 1,y)} \\
{i(x + 1,y + 1)}
\end{bmatrix}}.
\end{equation}
Here, ${\hat s}_0$, ${\hat s}_1$, and ${\hat s}_2$ represent the reconstructed Stokes parameters, and $[\bullet]^\dagger$ represents the pseudo-inverse of a matrix. The feasibility of using Eq.~(\ref{eq4}) to reconstruct the Stokes parameters has been demonstrated in Refs.~\cite{david2008unpolarized,RN1643,RN1637}. However, when substituting the ideal modulation parameters into Eq.~(\ref{eq4}), it can be found that the reconstruction results of Eq.~(\ref{eq4}) are equivalent to those of the nearest-neighbor interpolation-based method. This indicates that, in the presence of spatial variations of the Stokes parameters, Eq.~(\ref{eq4}) performs as poorly as the nearest-neighbor interpolation-based reconstruction.
Considering that each pixel is contained in four different 2$\times$2 subsets, we further apply a four-subset averaging strategy to obtain the final reconstruction results, expressed as
\begin{equation}\label{eq5}
{\hat s_k}(x,y) = \tfrac{1}{4} [{{\hat s}_k}(x - \tfrac{1}{2},y - \tfrac{1}{2}) + {{\hat s}_k}(x - \tfrac{1}{2},y + \tfrac{1}{2}) + {{\hat s}_k}(x + \tfrac{1}{2},y - \tfrac{1}{2}) + {{\hat s}_k}(x + \tfrac{1}{2},y + \tfrac{1}{2})].
\end{equation}
Here, the subscript $k$ takes $0$, $1$, and $2$. The four-subset averaging strategy is also an optimization in the ordinary least-squares sense. With this strategy, when substituting the ideal modulation parameters, it can be found that the reconstruction results of the OLSM (Eqs.~(\ref{eq4}) and (\ref{eq5})) are equivalent to those of the bilinear interpolation-based method.
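A minimal Python sketch of the OLSM (ours, not the authors' implementation; plain loops are used for clarity rather than speed) follows Eqs.~(\ref{eq4}) and (\ref{eq5}) directly:

```python
import numpy as np

def olsm(i, m0, m1, m2):
    """OLSM: per-2x2 least squares (Eq. (4)) followed by four-subset averaging (Eq. (5))."""
    h, w = i.shape
    sub = np.zeros((3, h - 1, w - 1))  # estimate for the subset with top-left pixel (x, y)
    for x in range(h - 1):
        for y in range(w - 1):
            M = np.stack([m0[x:x + 2, y:y + 2].ravel(),
                          m1[x:x + 2, y:y + 2].ravel(),
                          m2[x:x + 2, y:y + 2].ravel()], axis=1)  # 4x3 modulation matrix
            sub[:, x, y] = np.linalg.pinv(M) @ i[x:x + 2, y:y + 2].ravel()
    # each interior pixel lies in four 2x2 subsets; average their estimates (Eq. (5))
    s = 0.25 * (sub[:, :-1, :-1] + sub[:, :-1, 1:] + sub[:, 1:, :-1] + sub[:, 1:, 1:])
    return s[0], s[1], s[2]

# Demo: under ideal modulation, a constant Stokes image is recovered exactly,
# since the local constant assumption then holds without error.
n = 8
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
m0 = np.ones((n, n))
m1 = 0.5 * (np.cos(np.pi * X) + np.cos(np.pi * Y))
m2 = 0.5 * (np.cos(np.pi * X) - np.cos(np.pi * Y))
i = m0 * 1.0 + m1 * 0.3 + m2 * (-0.2)
s0h, s1h, s2h = olsm(i, m0, m1, m2)
```

Note that nothing in the sketch assumes ideal modulation: the per-subset matrix is rebuilt from the calibrated $m_0$, $m_1$, $m_2$ at each position, which is exactly how the method tolerates non-uniformity.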
\subsection{Smoothing regularization method}
Regularization is a commonly used optimization technique for solving underdetermined problems~\cite{RN1728,boyd2004convex}. We assume that the Stokes parameters are spatially smooth and apply the regularization technique to find the globally smoothing solution of the system of linear equations composed by Eq.~(\ref{eq1}). The Stokes parameters are determined by minimizing the objective function $L$ defined as
\begin{equation}\label{eq6}
\begin{split}
L({{\hat s}_0},{{\hat s}_1},{{\hat s}_2}) &= \sum\limits_{x,y} {{\left[ {{m_0}(x,y){{\hat s}_0}(x,y) + {m_1}(x,y){{\hat s}_1}(x,y) + {m_2}(x,y){{\hat s}_2}(x,y) - i(x,y)} \right]}^2}\\
&+ {\lambda _0}R({{\hat s}_0}) + {\lambda _1}R({\kappa _1}\tfrac{{{{\hat s}_1} + {{\hat s}_2}}}{2}) + {\lambda _2}R({\kappa _2}\tfrac{{{{\hat s}_1} - {{\hat s}_2}}}{2}).
\end{split}
\end{equation}
Here, the first term on the right-hand side of Eq.~(\ref{eq6}) is the fidelity term, which penalizes the deviation of the reconstructed Stokes parameters from the constraint of Eq.~(\ref{eq1}); $R(\hat s_0)$, $R(\kappa _1\tfrac{\hat s_1+\hat s_2}{2})$, and $R(\kappa _2\tfrac{\hat s_1-\hat s_2}{2})$ are the regularization terms; and $R$ is the discrete thin-plate energy functional used to introduce the spatial smoothness constraint, defined as
\begin{equation}\label{eq7}
\begin{split}
R(f) &= \sum\limits_{x,y} {{{\left[ {f(x - 1,y) - 2f(x,y) + f(x + 1,y)} \right]}^2}} \\
&+ \sum\limits_{x,y} {{{\left[ {f(x,y - 1) - 2f(x,y) + f(x,y + 1)} \right]}^2}} \\
&+ 2\sum\limits_{x,y} {{{\left[ {f(x,y) - f(x + 1,y) - f(x,y + 1) + f(x + 1,y + 1)} \right]}^2}},
\end{split}
\end{equation}
$\lambda_0$, $\lambda_1$, and $\lambda_2$ are the regularization parameters used to control the weights of the regularization terms, and $\kappa_1$ and $\kappa_2$ are two parameters used to compensate for the change of the relative weights between the fidelity term and the regularization terms caused by the non-uniformity, defined as
\begin{equation}\label{eq8}
\left\{
\begin{array}{l}
{\kappa_1} = \frac{{\sum\limits_{x,y} {{m_1}(x,y)\cos (\pi x)} + \sum\limits_{x,y} {{m_2}(x,y)\cos (\pi x)} }}{{\sum\limits_{x,y} {{m_0}(x,y)} }}\\
{\kappa_2} = \frac{{\sum\limits_{x,y} {{m_1}(x,y)\cos (\pi y)} - \sum\limits_{x,y} {{m_2}(x,y)\cos (\pi y)} }}{{\sum\limits_{x,y} {{m_0}(x,y)} }}
\end{array}
\!.\right.
\end{equation}
The desired Stokes parameters should satisfy the Euler-Lagrange equation of the objective function, expressed as
\begin{equation}\label{eq9}
\left\{
\begin{array}{l}
\frac{{\partial L}}{{\partial {{\hat s}_0}}} = m_0^2{{\hat s}_0} + {m_0}{m_1}{{\hat s}_1} + {m_0}{m_2}{{\hat s}_2} - {m_0}i + {\lambda _0}{\nabla ^4}{{\hat s}_0} = 0 \\
\frac{{\partial L}}{{\partial {{\hat s}_1}}} = {m_0}{m_1}{{\hat s}_0} + m_1^2{{\hat s}_1} + {m_1}{m_2}{{\hat s}_2} - {m_1}i + \frac{1}{4}( {{\lambda _1}\kappa _1^2 + {\lambda _2}\kappa _2^2} ){\nabla ^4}{{\hat s}_1} + \frac{1}{4}( {{\lambda _1}\kappa _1^2 - {\lambda _2}\kappa _2^2} ){\nabla ^4}{{\hat s}_2} = 0 \\
\frac{{\partial L}}{{\partial {{\hat s}_2}}} = {m_0}{m_2}{{\hat s}_0} + {m_1}{m_2}{{\hat s}_1} + m_2^2{{\hat s}_2} - {m_2}i + \frac{1}{4}( {{\lambda _1}\kappa _1^2 - {\lambda _2}\kappa _2^2} ){\nabla ^4}{{\hat s}_1} + \frac{1}{4}( {{\lambda _1}\kappa _1^2 + {\lambda _2}\kappa _2^2} ){\nabla ^4}{{\hat s}_2} = 0
\end{array}
\!.\right.
\end{equation}
Here, $\nabla ^4$ represents the discrete biharmonic operator, corresponding to the variation of the discrete thin-plate functional. The discrete biharmonic operator is implemented by the convolution operation
\begin{equation}\label{eq10}
{\nabla ^4}f = f *
\begin{bmatrix}
0&0&1&0&0 \\
0&2&{ - 8}&2&0 \\
1&{ - 8}&{20}&{ - 8}&1 \\
0&2&{ - 8}&2&0 \\
0&0&1&0&0
\end{bmatrix}.
\end{equation}
Here, $*$ represents the convolution operation.
Equation~(\ref{eq9}) can be solved by gradient descent algorithms. A MATLAB implementation of the SRM is given in Ref.~\cite{srm}. Since the objective function is convex, the iterations converge to the global optimal solution that minimizes the objective function.
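The following Python sketch (ours, not the MATLAB code of Ref.~\cite{srm}; the step size, iteration count, regularization parameters, and the symmetric boundary padding are ad hoc choices for illustration) performs plain gradient descent using the gradients of Eq.~(\ref{eq9}) and the biharmonic convolution of Eq.~(\ref{eq10}):

```python
import numpy as np
from scipy.signal import convolve2d

# 5x5 discrete biharmonic kernel of Eq. (10)
K = np.array([[0, 0, 1, 0, 0],
              [0, 2, -8, 2, 0],
              [1, -8, 20, -8, 1],
              [0, 2, -8, 2, 0],
              [0, 0, 1, 0, 0]], dtype=float)

def biharm(f):
    # symmetric padding (an assumption) keeps constants in the operator's null space
    return convolve2d(f, K, mode='same', boundary='symm')

def srm(i, m0, m1, m2, lam=(1e-2, 1e-2, 1e-2), kappa=(1.0, 1.0),
        step=0.2, n_iter=8000):
    """Gradient descent on the objective of Eq. (6); the gradients follow Eq. (9)."""
    l0, l1, l2 = lam
    k1, k2 = kappa
    s0 = np.zeros_like(i)
    s1 = np.zeros_like(i)
    s2 = np.zeros_like(i)
    for _ in range(n_iter):
        r = m0 * s0 + m1 * s1 + m2 * s2 - i   # fidelity residual
        bp = biharm(k1 * (s1 + s2) / 2)       # smoothing term for (s1 + s2)/2
        bq = biharm(k2 * (s1 - s2) / 2)       # smoothing term for (s1 - s2)/2
        s0 -= step * (2 * m0 * r + 2 * l0 * biharm(s0))
        s1 -= step * (2 * m1 * r + l1 * k1 * bp + l2 * k2 * bq)
        s2 -= step * (2 * m2 * r + l1 * k1 * bp - l2 * k2 * bq)
    return s0, s1, s2

# Demo: a constant Stokes image under ideal modulation is recovered, since the
# constant truth has zero residual and zero thin-plate energy.
n = 16
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
m0 = np.ones((n, n))
m1 = 0.5 * (np.cos(np.pi * X) + np.cos(np.pi * Y))
m2 = 0.5 * (np.cos(np.pi * X) - np.cos(np.pi * Y))
i = m0 * 1.0 + m1 * 0.3 + m2 * (-0.2)
s0h, s1h, s2h = srm(i, m0, m1, m2)
```

As with the OLSM, the calibrated (possibly non-uniform) $m_0$, $m_1$, $m_2$ enter the fidelity residual directly, which is how the method absorbs the non-uniformity.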
It is worth pointing out that the proposed methods and the numerical calibration methods tackle the non-uniformity based on the same pixel model but have different purposes. The proposed methods aim to directly reconstruct the Stokes parameters, while the numerical calibration methods aim to correct the non-uniform intensity responses in the DoFP image. The relationship between them is further explained in~\ref{app:D}.
\section{Reconstruction error evaluations}\label{sec4}
In the DoFP modulation, the measurement information from different pixel coordinates is combined to reconstruct the Stokes parameters. Consequently, the spatial variations of the Stokes parameters will introduce reconstruction errors, which is significantly different from the other modulation techniques.
In this section, the reconstruction errors of the OLSM and SRM for the different frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ are evaluated. Firstly, Fourier analysis is applied to evaluate the reconstruction errors in the ideal case that the Stokes parameters are modulated by the ideal modulation parameters. Secondly, numerical simulations are applied to evaluate the reconstruction errors in the non-uniform case that the Stokes parameters are modulated by the non-uniform modulation parameters.
\subsection{Fourier analysis}
When the Stokes parameters are modulated by the ideal modulation parameters, substituting Eq.~(\ref{eq3}) into Eq.~(\ref{eq1}), the discrete Fourier transform (DFT) of $i$ is expressed as
\begin{equation}\label{eq11}
I(u,v) = {S_0}(u,v) + \tfrac{1}{2}[{S_1}(u + \tfrac{1}{2},v) + {S_2}(u + \tfrac{1}{2},v)] + \tfrac{1}{2}[{S_1}(u,v + \tfrac{1}{2}) - {S_2}(u,v + \tfrac{1}{2})].
\end{equation}
Here, $u$ and $v$ represent the horizontal and vertical frequency coordinates, respectively, and the uppercase symbols represent the DFTs of the corresponding lowercase symbols. Equation~(\ref{eq11}) indicates that the frequency components of $s_0$ are located in the center region of the spectrum of $i$, while the frequency components of $(s_1+s_2)/2$ and $(s_1-s_2)/2$ are shifted into the horizontal and vertical border regions, respectively. Figure~\ref{fig3}(a) gives an example of the spectrum of $i$ where the Stokes parameters are modulated by the ideal modulation parameters (see Section~\ref{sec6}). It can be seen that there are noticeable carrier peaks at the frequency coordinates $(0,0)$, $(\pm\frac{1}{2},0)$, and $(0,\pm\frac{1}{2})$.
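Equation~(\ref{eq11}) can be checked numerically. The sketch below (ours, with arbitrary constant Stokes inputs for simplicity) modulates a constant Stokes image by Eq.~(\ref{eq3}) and verifies that the spectrum consists exactly of the predicted carrier peaks:

```python
import numpy as np

n = 32
X, Y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
m0 = np.ones((n, n))
m1 = 0.5 * (np.cos(np.pi * X) + np.cos(np.pi * Y))
m2 = 0.5 * (np.cos(np.pi * X) - np.cos(np.pi * Y))

s0, s1, s2 = 1.0, 0.4, -0.3          # constant Stokes inputs (illustrative values)
i = m0 * s0 + m1 * s1 + m2 * s2

I = np.fft.fft2(i) / n**2            # normalized DFT

# Eq. (11): s0 sits at (0, 0); (s1+s2)/2 is shifted to (1/2, 0); (s1-s2)/2 to (0, 1/2)
assert np.isclose(I[0, 0].real, s0)
assert np.isclose(I[n // 2, 0].real, 0.5 * (s1 + s2))
assert np.isclose(I[0, n // 2].real, 0.5 * (s1 - s2))
```

For spatially varying Stokes parameters the peaks broaden into the three regions described above, which is what Fig.~\ref{fig3}(a) shows.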
\begin{figure}[hbt!]
\centering\includegraphics[width=8cm]{fig3.pdf}
\caption{Examples of the log-scale spectrum of $i$. (a) The Stokes parameters are modulated by the ideal modulation parameters. (b) The Stokes parameters are modulated by the non-uniform modulation parameters.}
\label{fig3}
\end{figure}
The distribution of the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ allows us to reconstruct the Stokes parameters through frequency domain filtering, expressed as
\begin{equation}\label{eq12}
\left\{
\begin{array}{l}
{{\hat S}_0}(u,v) = I(u,v) \cdot {H_0}(u,v)\\
\frac{1}{2}[{{\hat S}_1}(u + \tfrac{1}{2},v) + {{\hat S}_2}(u + \tfrac{1}{2},v)] = I(u,v) \cdot {H_1}(u,v)\\
\frac{1}{2}[{{\hat S}_1}(u,v + \tfrac{1}{2}) - {{\hat S}_2}(u,v + \tfrac{1}{2})] = I(u,v) \cdot {H_2}(u,v)
\end{array}
\!.\right.
\end{equation}
Here, $H_0$, $H_1$, and $H_2$ represent the filter transfer functions which aim to retrieve the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$, respectively. In the filtering-based methods, the filter transfer functions are constructed by window functions such as Hamming, Gaussian, and Planck-taper window functions~\cite{RN1047,RN1665,RN1051}. In fact, ignoring the differences around the image boundaries of the reconstruction results, we find that the convolutional interpolation-based methods, OLSM, and SRM have equivalent implementations in the frequency domain. The filter transfer functions of the mainstream convolutional interpolation-based methods are given in~\ref{app:B}, including the nearest-neighbor, bilinear, bicubic, and natural bicubic spline interpolation-based methods. For the OLSM, substituting Eq.~(\ref{eq3}) into Eqs.~(\ref{eq4}) and (\ref{eq5}), and performing the DFT, after some simplifications, we have
\begin{equation}\label{eq13}
\left\{
\begin{array}{l}
{H_0}(u,v) = \frac{1}{4}[ {1 + \cos (2\pi u)} ][ {1 + \cos (2\pi v)} ] \\
{H_1}(u,v) = \frac{1}{4}[ {1 + \cos (2\pi u + \pi )} ][ {1 + \cos (2\pi v)} ] \\
{H_2}(u,v) = \frac{1}{4}[ {1 + \cos (2\pi u)} ][ {1 + \cos (2\pi v + \pi )} ]
\end{array}
\!.\right.
\end{equation}
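For reference, the filters of Eq.~(\ref{eq13}) are simple enough to evaluate directly. The sketch below (ours) checks the unit response at each filter's carrier, the zero response at the other carriers, and the half-maximum at $0.25$ cycles per pixel along the axes (i.e.\ an FWHM of $0.5$ cycles per pixel):

```python
import numpy as np

def H0(u, v):
    return 0.25 * (1 + np.cos(2 * np.pi * u)) * (1 + np.cos(2 * np.pi * v))

def H1(u, v):
    return 0.25 * (1 + np.cos(2 * np.pi * u + np.pi)) * (1 + np.cos(2 * np.pi * v))

def H2(u, v):
    return 0.25 * (1 + np.cos(2 * np.pi * u)) * (1 + np.cos(2 * np.pi * v + np.pi))

# unit response at the targeted carrier, zero at the other carriers
assert np.isclose(H0(0, 0), 1) and np.isclose(H0(0.5, 0), 0) and np.isclose(H0(0, 0.5), 0)
assert np.isclose(H1(0.5, 0), 1) and np.isclose(H1(0, 0), 0)
assert np.isclose(H2(0, 0.5), 1) and np.isclose(H2(0, 0), 0)

# half-maximum at 0.25 cycles per pixel along the axes
assert np.isclose(H0(0.25, 0), 0.5) and np.isclose(H0(0, 0.25), 0.5)
```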
The first row of Fig.~\ref{fig4} shows the filter transfer functions of the OLSM and their full-widths-at-half-maximum (FWHMs). Notice that the OLSM is equivalent to the bilinear interpolation-based method in this case. For the SRM, substituting Eq.~(\ref{eq3}) into Eq.~(\ref{eq9}), and performing the DFT, after the same simplifications, we have
\begin{equation}\label{eq14}
\left\{
\begin{array}{l}
{H_0}(u,v) = {\left[ {1 + {\lambda _0}G(u,v) + \frac{{{\lambda _0}G(u,v)}}{{{\lambda _1}G(u + \tfrac{1}{2},v)}} + \frac{{{\lambda _0}G(u,v)}}{{{\lambda _2}G(u,v + \tfrac{1}{2})}}} \right]^{ - 1}} \\
{H_1}(u,v) = {\left[ {1 + {\lambda _1}G(u + \tfrac{1}{2},v) + \frac{{{\lambda _1}G(u + \tfrac{1}{2},v)}}{{{\lambda _0}G(u,v)}} + \frac{{{\lambda _1}G(u + \tfrac{1}{2},v)}}{{{\lambda _2}G(u,v + \tfrac{1}{2})}}} \right]^{ - 1}} \\
{H_2}(u,v) = {\left[ {1 + {\lambda _2}G(u,v + \tfrac{1}{2}) + \frac{{{\lambda _2}G(u,v + \tfrac{1}{2})}}{{{\lambda _0}G(u,v)}} + \frac{{{\lambda _2}G(u,v + \tfrac{1}{2})}}{{{\lambda _1}G(u + \tfrac{1}{2},v)}}} \right]^{ - 1}}
\end{array}
\!.\right.
\end{equation}
Here, $G$ is the DFT of the discrete biharmonic operator, expressed as
\begin{equation}\label{eq15}
\begin{split}
G(u,v) &= 20 - 16\cos (2\pi u) - 16\cos (2\pi v) + 4\cos (2\pi u + 2\pi v) \\
& + 4\cos (2\pi u - 2\pi v) + 2\cos (4\pi u) + 2\cos (4\pi v).
\end{split}
\end{equation}
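As a consistency check (ours), $G$ in Eq.~(\ref{eq15}) is indeed the two-dimensional DFT of the kernel in Eq.~(\ref{eq10}):

```python
import numpy as np

# Eq. (10): discrete biharmonic kernel
K = np.array([[0, 0, 1, 0, 0],
              [0, 2, -8, 2, 0],
              [1, -8, 20, -8, 1],
              [0, 2, -8, 2, 0],
              [0, 0, 1, 0, 0]], dtype=float)

n = 16
kpad = np.zeros((n, n))
kpad[:5, :5] = K
kpad = np.roll(kpad, (-2, -2), axis=(0, 1))   # center the kernel at index (0, 0)
numeric = np.real(np.fft.fft2(kpad))          # symmetric kernel, so the DFT is real

u = np.fft.fftfreq(n)
U, V = np.meshgrid(u, u, indexing='ij')
analytic = (20 - 16 * np.cos(2 * np.pi * U) - 16 * np.cos(2 * np.pi * V)
            + 4 * np.cos(2 * np.pi * (U + V)) + 4 * np.cos(2 * np.pi * (U - V))
            + 2 * np.cos(4 * np.pi * U) + 2 * np.cos(4 * np.pi * V))

assert np.allclose(numeric, analytic)
```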
The filter transfer functions of the SRM are adjustable by changing the values of the regularization parameters. For arbitrary $\lambda_1/\lambda_0$ and $\lambda_2/\lambda_0$, when $\lambda_0$ tends to 0, these filter transfer functions satisfy
\begin{equation}\label{eq16}
\left\{
\begin{array}{l}
{H_0}( \pm \frac{1}{\pi }\arctan \left( {{{(\frac{{{\lambda _1}}}{{{\lambda _0}}})}^{\frac{1}{4}}}} \right),0) = {H_1}( \pm \frac{1}{\pi }\arctan \left( {{{(\frac{{{\lambda _1}}}{{{\lambda _0}}})}^{\frac{1}{4}}}} \right),0) = 0.5 \\
{H_0}(0, \pm \frac{1}{\pi }\arctan \left( {{{(\frac{{{\lambda _2}}}{{{\lambda _0}}})}^{\frac{1}{4}}}} \right)) = {H_2}(0, \pm \frac{1}{\pi }\arctan \left( {{{(\frac{{{\lambda _2}}}{{{\lambda _0}}})}^{\frac{1}{4}}}} \right)) = 0.5
\end{array}
\!.\right.
\end{equation}
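Eq.~(\ref{eq16}) can also be verified numerically. A sketch, in which a small but nonzero $\lambda_0$ stands in for the limit $\lambda_0\to0$ and the regularization ratios match those used later in the paper (the function names and tolerance are ours):

```python
import numpy as np

def G(u, v):
    # DFT of the discrete biharmonic operator, Eq. (15)
    return (20 - 16*np.cos(2*np.pi*u) - 16*np.cos(2*np.pi*v)
            + 4*np.cos(2*np.pi*(u + v)) + 4*np.cos(2*np.pi*(u - v))
            + 2*np.cos(4*np.pi*u) + 2*np.cos(4*np.pi*v))

def H0(u, v, l0, l1, l2):
    # First transfer function of Eq. (14)
    g = G(u, v)
    return 1.0 / (1 + l0*g + l0*g/(l1*G(u + 0.5, v)) + l0*g/(l2*G(u, v + 0.5)))

# Small l0 with the ratios l1/l0 = 40.7 and l2/l0 = 20.4.
l0, l1, l2 = 1e-6, 40.7e-6, 20.4e-6
# Predicted half-maximum frequency along v = 0, Eq. (16).
u_half = np.arctan((l1/l0)**0.25) / np.pi
assert abs(H0(u_half, 0.0, l0, l1, l2) - 0.5) < 0.02
```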
The second, third, and fourth rows of Fig.~\ref{fig4} show the filter transfer functions of the SRM and their FWHMs with the regularization parameters chosen as ``$\lambda_0=0.001$, $\lambda_1=0.001$, $\lambda_2=0.001$'', ``$\lambda_0=0.001$, $\lambda_1=0.0407$, $\lambda_2=0.0204$'', and ``$\lambda_0=0.001$, $\lambda_1=0.0407$, $\lambda_2=0.0285$'', respectively. The FWHMs along $u=0$ and $v=0$ in these examples agree well with Eq.~(\ref{eq16}).
\begin{figure}[hbt!]
\centering\includegraphics[width=\textwidth]{fig4.pdf}
\caption{Filter transfer functions and their FWHMs.}
\label{fig4}
\end{figure}
From the perspective of sampling and reconstruction, the reconstruction errors come from the pre-aliasing occurring in the sampling process and the post-aliasing occurring in the reconstruction process. Pre-aliasing refers to the overlap of the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$. Sufficient sampling is a necessary prerequisite for applying the DoFP technique; otherwise, strong pre-aliasing will always cause large reconstruction errors, rendering the measurement invalid. It has been shown in Ref.~\cite{RN1047} that if the sampling makes the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ satisfy a band-limit condition, the Stokes parameters can theoretically be reconstructed without errors. In practice, pre-aliasing is hard to avoid, but for most polarization imaging targets, the main energy of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ is concentrated in the low-frequency components, so the pre-aliasing reconstruction errors are usually small.
Post-aliasing refers to the miscategorization of the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$. The filter transfer functions of the nearest-neighbor interpolation-based method cannot adequately attenuate the undesired frequency components and therefore usually produce large post-aliasing reconstruction errors. For the OLSM and the bilinear, bicubic, and natural bicubic spline interpolation-based methods, the filter transfer functions indicate that only when the bandwidths occupied by the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ do not exceed 0.5 cycles per pixel can the Stokes parameters be reconstructed with small post-aliasing errors. For the SRM and the filtering-based methods, the filter transfer functions are adjustable through the regularization parameters and the window function parameters, respectively. Since the practical bandwidths occupied by the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ differ between polarization imaging targets, the adjustable filter transfer functions give these methods a stronger ability to reduce the post-aliasing reconstruction errors. Comparing the filter transfer functions of the SRM with those constructed from the window functions used in previous studies, a noticeable difference is that the frequency responses of the SRM for the components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ are anisotropic. As Fig.~\ref{fig3}(a) shows, the probability that pre-aliasing occurs differs along different directions, which indicates that anisotropic frequency responses can better reduce the reconstruction errors.
\subsection{Numerical simulations}
Fig.~\ref{fig3}(b) gives an example of the spectrum of $i$ where the Stokes parameters are modulated by the non-uniform modulation parameters (see Section~\ref{sec6}). There are still noticeable carrier peaks at the frequency coordinates $(0,0)$, $(\pm\frac{1}{2},0)$, and $(0,\pm\frac{1}{2})$. However, due to the non-uniformity, the spectrum is disturbed. In this case, if the ideal modulation parameters are still used in the reconstruction, then according to the Fourier analysis, the reconstruction results are inevitably influenced by the non-uniformity. However, by combining with the non-uniform modulation parameters, the OLSM and SRM can construct filters with space-varying kernels to deal with the non-uniformity.
\begin{figure}[hbt!]
\centering\includegraphics[width=\textwidth]{fig5.pdf}
\caption{RMSEs of the reconstructed cosine patterns.}
\label{fig5}
\end{figure}
Since the non-uniform modulation parameters are no longer periodic, we used numerical simulations to evaluate the reconstruction errors of the OLSM and SRM for the different frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ in the non-uniform case. Firstly, we generated the cosine patterns of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ by setting ``$\alpha=1$, $\beta=0$, $\gamma=0$'', ``$\alpha=0$, $\beta=\sqrt{2}/2$, $\gamma=0$'', and ``$\alpha=0$, $\beta=0$, $\gamma=\sqrt{2}/2$'', respectively, in the following equation
\begin{equation}\label{eq17}
\left\{
\begin{array}{l}
s_0(x,y) = 1 + \alpha \cos (2\pi u'x + 2\pi v'y) \\
\frac{1}{2}[{s_1(x,y) + s_2(x,y)}] = \beta \cos (2\pi u'x + 2\pi v'y) \\
\frac{1}{2}[{s_1(x,y) - s_2(x,y)}] = \gamma \cos (2\pi u'x + 2\pi v'y)
\end{array}
\!.\right.
\end{equation}
Here, $u'$ and $v'$ are the horizontal and vertical frequencies varying from 0 to 0.5 cycles per pixel. The choices of $\alpha$, $\beta$, and $\gamma$ make the generated cosine patterns satisfy the physical constraint $0 \leq \sqrt {{s_1}^2 + {s_2}^2}/{s_0} \leq 1$. Secondly, we substituted the generated cosine patterns and the non-uniform modulation parameters into Eq.~(\ref{eq1}) to generate the DoFP images. Lastly, we used the OLSM and SRM combined with the ideal and non-uniform modulation parameters to reconstruct the Stokes parameters from the generated DoFP images. The regularization parameters used in the numerical simulations were chosen as ``$\lambda_0=0.001$, $\lambda_1=0.0407$, $\lambda_2=0.0285$'' as an example. The first and second rows of Fig.~\ref{fig5} show the root mean square errors (RMSEs) of the cosine patterns reconstructed by the OLSM combined with the ideal and non-uniform modulation parameters, respectively; the third and fourth rows show the corresponding RMSEs for the SRM; and the last row compares these RMSEs along $v'=0$.
For both methods, the high-frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ tend to have large reconstruction errors. Therefore, it is still necessary to ensure sufficient sampling to reduce the energy in the high-frequency components. When the ideal modulation parameters are used, the low-frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ have noticeable reconstruction errors; when the non-uniform modulation parameters are used, these errors are reduced, and the RMSEs reach zero for the zero-frequency components. This indicates that when the non-uniformity is correctly characterized, the space-varying filter kernels constructed by the OLSM and SRM can reduce the reconstruction errors caused by the non-uniformity. It is worth pointing out that when the non-uniform modulation parameters are used, the RMSEs in Fig.~\ref{fig5} show exactly opposite trends compared with the frequency responses of the filter transfer functions in Fig.~\ref{fig4}. This indicates that the overall frequency responses of the space-varying filter kernels for the frequency components of $s_0$, $(s_1+s_2)/2$, and $(s_1-s_2)/2$ are consistent with those of the filter transfer functions, because the non-uniform modulation parameters remain close to the ideal modulation parameters in an average sense.
\section{Choice of the regularization parameters}\label{sec5}
In this section, we give some rules of thumb for choosing suitable regularization parameters in the SRM. We represent the regularization parameters as $\lambda_0$, $\lambda_1/\lambda_0$, and $\lambda_2/\lambda_0$ to discuss their choices.
Keeping $\lambda_1/\lambda_0$ and $\lambda_2/\lambda_0$ fixed, $\lambda_0$ determines the overall weight of the regularization terms. The major factor influencing the choice of $\lambda_0$ is the level of the additive Gaussian noise. However, in this paper, the DoFP images are considered noiseless, so $\lambda_0$ needs to be set to a small value so that the reconstruction results can meet the constraint of Eq.~(\ref{eq1}). Empirically, $\lambda_0$ is chosen as 0.001; smaller values do not cause significant changes in the reconstruction results.
\begin{figure}[hbt!]
\centering\includegraphics[width=8cm]{fig6.pdf}
\caption{Strategy for choosing $\lambda_1/\lambda_0$. (a) Spectrum of $i/m_0$. A spectrum curve can be obtained by sliding a rectangular window horizontally and averaging the spectrum in the window. (b) $\lambda_1/\lambda_0$ is chosen to make the overall frequency responses of the filters for reconstructing $s_0$ and $(s_1+s_2)/2$ approximately equal at the frequency coordinate $(\pm u_p,0)$. $H_0$ and $H_1$ represent the approximations of the overall frequency responses of the filters for reconstructing $s_0$ and $(s_1+s_2)/2$, respectively.}
\label{fig6}
\end{figure}
$\lambda_1/\lambda_0$ determines the relative smoothness between the reconstructed $s_0$ and $(s_1+s_2)/2$. According to the evaluations in Section~\ref{sec4}, we need to choose a suitable value of $\lambda_1/\lambda_0$ so that the filters constructed by the SRM can correctly categorize the frequency components of $s_0$ and $(s_1+s_2)/2$. Our strategy for choosing $\lambda_1/\lambda_0$ is illustrated in Fig.~\ref{fig6}. As shown in Fig.~\ref{fig6}(a), to reduce the non-uniformity disturbances, the spectrum of $i/m_0$ is used for choosing $\lambda_1/\lambda_0$. By sliding a rectangular window whose center is located on the horizontal line $v=0$ and averaging the spectrum in the window, a spectrum curve can be obtained. Empirically, we choose the size of the rectangular window as $0.02\times0.4$. The frequency coordinate $u_p$ of the lowest point in the spectrum curve represents our estimate of the relative bandwidth occupied by the frequency components of $s_0$ and $(s_1+s_2)/2$ in the horizontal direction. As illustrated in Fig.~\ref{fig6}(b), we determine $\lambda_1/\lambda_0$ through the formula
\begin{equation}\label{eq18}
\frac{{{\lambda _1}}}{{{\lambda _0}}} = {\left[ {\tan (\pi {u_p})} \right]^4},
\end{equation}
so that the overall frequency responses of the filters for reconstructing $s_0$ and $(s_1+s_2)/2$ are approximately equal at the frequency coordinate $(\pm u_p,0)$.
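The mapping of Eq.~(\ref{eq18}) can be sketched directly; as a plausibility check, the ratios $\lambda_1/\lambda_0=40.7$ and $\lambda_2/\lambda_0=28.5$ used elsewhere in the paper correspond to bandwidth estimates of roughly 0.38 and 0.37 cycles per pixel (the function name and tolerance are ours):

```python
import numpy as np

def ratio_from_bandwidth(u_p):
    """Eq. (18): regularization ratio lambda_1/lambda_0 from the
    estimated horizontal bandwidth u_p (cycles per pixel)."""
    return np.tan(np.pi * u_p) ** 4

# Ratios used in the paper, recovered from plausible bandwidth estimates.
assert abs(ratio_from_bandwidth(0.38) - 40.7) < 0.5
assert abs(ratio_from_bandwidth(0.37) - 28.5) < 0.5
```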
Likewise, $\lambda_2/\lambda_0$ can be chosen by a similar strategy. In practice, inappropriate choices of $\lambda_1/\lambda_0$ and $\lambda_2/\lambda_0$ usually result in significant serrated artifacts in the reconstructed results. If the above strategy fails, the regularization parameters can still be chosen suitably through visual inspection.
\section{Experiments}\label{sec6}
Two experiments were performed to evaluate and compare the performance of the OLSM, SRM, bicubic interpolation-based method, Newton's polynomial interpolation-based method~\cite{RN1652}, and Planck-Taper window filtering-based method. The polarization imaging target in the experiments is a model car. A DoT polarimeter and the self-developed DoFP polarimeter were used to capture the polarization information. Three measures were taken to ensure sufficient sampling: making the target occupy a large field of view, defocusing the target slightly, and reducing the lens aperture.
The images recorded by the polarimeters were captured 100 times and averaged to decouple the effects of noise.
\begin{figure}[hbt!]
\centering\includegraphics[width=\textwidth]{fig7.pdf}
\caption{True and reconstructed $s_0$, DoLP, and AoP. Images in the white boxes show the magnification of local regions.}
\label{fig7}
\end{figure}
The DoT polarimeter is composed of a monochrome CCD camera (FLIR GS3-U3-15S5M-C) and a linear polarization filter (THORLABS WP25M-VIS) placed on a motorized rotating platform (THORLABS PRM1/MZ8). The DoT polarimeter captured 0$^{\circ}$, 45$^{\circ}$, 90$^{\circ}$, and 135$^{\circ}$ polarization images, which were used to calculate the full-resolution Stokes parameters, DoLP, and AoP. These results are regarded as the true values. By re-sampling the four polarization images, a DoFP image was synthesized. Benefiting from the high positioning accuracy of the rotating platform and the uniform performance of the camera and linear polarization filter, the Stokes parameters in the synthesized DoFP image can be regarded as being modulated by the ideal modulation parameters. The spectrum of the synthesized DoFP image is given in Fig.~\ref{fig3}(a). It can be seen that the frequency components of $s_0$ occupy a larger bandwidth than those of $(s_1+s_2)/2$ and $(s_1-s_2)/2$; this is a common phenomenon reported in Refs.~\cite{RN1652,RN1051}. The OLSM, bicubic interpolation-based method, Newton's polynomial interpolation-based method, Planck-Taper window filtering-based method, and SRM were used to reconstruct the Stokes parameters, DoLP, and AoP from the synthesized DoFP image. The window function parameters of the Planck-Taper window functions are given in Appendix~\ref{app:C}. Since there is no established method for choosing suitable window function parameters, the choice minimizing the RMSEs of $\hat s_0$, $(\hat s_1+ \hat s_2)/2$, and $(\hat s_1-\hat s_2)/2$ was used. The regularization parameters used in the SRM were chosen as ``$\lambda_0=0.001$, $\lambda_1=0.0407$, $\lambda_2=0.0204$'' according to the discussions in Section~\ref{sec5}. Figure~\ref{fig7} shows the true and reconstructed $s_0$, DoLP, and AoP.
$s_0$ ranges from 0 to 255, while the range of the grayscale bar in Fig.~\ref{fig7} is set to $[0,160]$ to increase the image contrast. The most noticeable differences are marked by the white boxes. The $s_0$ reconstructed by the OLSM and the bicubic interpolation-based method is blurred, and the reconstructed DoLP and AoP show obvious serrated artifacts. The main reason is that the frequency components of $s_0$ beyond $\pm 0.25$ cycles per pixel are incorrectly categorized. The Newton's polynomial interpolation-based method is an edge-preserving reconstruction method; the $s_0$ it reconstructs shows more details, and the reconstructed DoLP image avoids the serrated artifacts. However, due to incorrect edge discrimination, the reconstructed AoP image has ``$\times$''-shaped artifacts. The reconstruction results of the Planck-Taper window filtering-based method and the SRM both show good visual quality, which benefits from the suitable choices of the window function parameters and regularization parameters.
Table~\ref{tab2} gives the RMSEs and normalized RMSEs of the reconstructed results. The normalized RMSE is defined as the ratio of the RMSE to the difference between the maximum and minimum of the true values. All the reconstruction methods achieve relatively small reconstruction errors, which illustrates the effectiveness of the DoFP technique. Comparing the relative reconstruction errors of the different reconstruction items, the relative errors of the AoP images are larger than those of the others, which is consistent with the results given in Ref.~\cite{RN1593}. The SRM shows the smallest reconstruction errors for most reconstruction results. Meanwhile, owing to its anisotropic frequency responses, the SRM shows smaller reconstruction errors than the Planck-Taper window filtering-based method for all the reconstruction results.
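The normalized RMSE used in Table~\ref{tab2} can be sketched as follows; the toy arrays are our own illustrative data:

```python
import numpy as np

def nrmse(estimate, truth):
    """Normalized RMSE: the RMSE divided by the dynamic range
    (max - min) of the true values."""
    rmse = np.sqrt(np.mean((estimate - truth) ** 2))
    return rmse / (truth.max() - truth.min())

# Toy example: a single error of 1 on a true range of 4.
truth = np.array([0.0, 2.0, 4.0])
estimate = np.array([0.0, 2.0, 3.0])
# RMSE = sqrt(1/3), normalized by the range 4.
assert abs(nrmse(estimate, truth) - np.sqrt(1/3)/4) < 1e-12
```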
\begin{table}[]
\centering
\caption{RMSEs (\textit{normalized RMSEs}) of reconstructed $s_0$, $s_1$, $s_2$, DoLP, and AoP.}
\resizebox{\textwidth}{!}{
\begin{tabular}{llllll}
\hline
Reconstruction method & $s_0$ & $s_1$ & $s_2$ & DoLP & AoP \\ \hline
\multirow{2}{*}{OLSM} & 0.67075 & 0.57365 & 0.56399 & 0.022871 & 0.20427 \\
& (\textit{0.425\%}) & (\textit{1.188\%}) & (\textit{1.358\%}) & (\textit{3.210\%}) & (\textit{6.504\%}) \\
\multirow{2}{*}{Bicubic} & 0.58439 & 0.53396 & 0.52278 & 0.020552 & 0.19243 \\
& (\textit{0.371\%}) & (\textit{1.106\%}) & (\textit{1.259\%}) & (\textit{2.885\%}) & (\textit{6.125\%}) \\
\multirow{2}{*}{Newton's polynomial} & 0.26618 & 0.27781 & 0.27062 & \textbf{0.015017} & 0.13984 \\
& (\textit{0.169\%}) & (\textit{0.575\%}) & (\textit{0.652\%}) & (\textit{\textbf{2.108\%}}) & (\textit{4.451\%}) \\
\multirow{2}{*}{Planck-Taper} & 0.31674 & 0.26161 & 0.25777 & 0.016486 & 0.13862 \\
& (\textit{0.201\%}) & (\textit{0.542\%}) & (\textit{0.621\%}) & (\textit{2.314\%}) & (\textit{4.413\%}) \\
\multirow{2}{*}{SRM} & \textbf{0.25188} & \textbf{0.25569} & \textbf{0.24729} & 0.015096 & \textbf{0.13747} \\
&(\textit{\textbf{0.160\%}}) & (\textit{\textbf{0.530\%}}) & (\textit{\textbf{0.595\%}}) & (\textit{2.119\%}) & (\textit{\textbf{4.375\%}})\\
\hline
\end{tabular}}
\label{tab2}
\end{table}
\begin{figure}[hbt!]
\centering\includegraphics[width=\textwidth]{fig8.pdf}
\caption{Reconstructed $s_0$, DoLP, and AoP. Images in the white boxes show the magnification of local regions. White ellipses mark the differences between the reconstruction results of the OLSM and SRM at defective pixels.
}
\label{fig8}
\end{figure}
The modulation parameters of the self-developed DoFP polarimeter are shown in Fig.~\ref{fig2}. The spectrum of the DoFP image captured by this polarimeter is shown in Fig.~\ref{fig3}(b), which is disturbed by the non-uniformity. The first three rows of Fig.~\ref{fig8} show the $s_0$, DoLP, and AoP reconstructed by the bicubic interpolation-based method, the Newton's polynomial interpolation-based method, and the Planck-Taper window filtering-based method. The window function parameters used in the reconstruction were the same as before. The reconstruction results of these three methods contain strong non-uniformity artifacts, which indicates that the interpolation-based and filtering-based methods fail in the presence of the non-uniformity. The fourth and fifth rows of Fig.~\ref{fig8} show the $s_0$, DoLP, and AoP reconstructed by the OLSM and SRM, respectively, with the non-uniform modulation parameters used in the reconstructions. The regularization parameters were chosen as ``$\lambda_0=0.001$, $\lambda_1=0.0407$, $\lambda_2=0.0285$'' according to the discussions in Section~\ref{sec5}. Since the non-uniformity has been effectively characterized by the non-uniform modulation parameters, the non-uniformity artifacts in the reconstruction results of these two methods are well mitigated. This proves the effectiveness of our methods for reconstructing the Stokes parameters in the presence of the non-uniformity. Comparing the reconstruction results of the OLSM and SRM in Fig.~\ref{fig7} and Fig.~\ref{fig8}, the performance of the two methods in the two experiments is basically consistent. In Fig.~\ref{fig8}, the $s_0$ reconstructed by the OLSM is still blurred, and the reconstructed DoLP and AoP also show obvious serrated artifacts, whereas the $s_0$ reconstructed by the SRM shows more details, and the reconstructed DoLP and AoP show better visual quality.
The reason is that when the non-uniformity is considered in the reconstruction, the frequency responses of the two methods for the different frequency components of the Stokes parameters under the non-uniform modulation are basically the same as those under the ideal modulation. Notice that although the OLSM and SRM tackle the non-uniformity based on the same pixel model, their results differ at defective pixels, as marked by the white ellipses in Fig.~\ref{fig8}. The reconstruction results of the OLSM show steeper changes in the flat background than those of the SRM. The performance of defective pixels deviates far from the design performance. Since two adjacent defective pixels can easily cause the $4\times3$ matrix in Eq.~(\ref{eq4}) to become degenerate, the OLSM is more likely to fail at defective pixels. In contrast, the SRM is more robust to defective pixels owing to its use of the global measurement information. However, we found empirically that it is still difficult for the SRM to obtain satisfactory reconstruction results when the size of the defective area exceeds that of the biharmonic operator.
\section{Conclusion}\label{sec7}
We proposed the OLSM and SRM to reconstruct the Stokes parameters. The performance of the OLSM and SRM was investigated through Fourier analysis, numerical simulations, and experiments. The proposed methods can effectively mitigate the reconstruction errors and artifacts caused by the non-uniformity. The OLSM reconstructs the Stokes parameters under the local constant assumption and can be regarded as a generalization of the bilinear interpolation-based method. Since the pseudo-inverse of the matrix in Eq.~(\ref{eq4}) can be pre-calculated, the OLSM has a lower computational cost and is therefore more suitable for hardware implementation. The performance of the OLSM can be further improved if an optical low-pass filter is applied to restrict the bandwidths occupied by the frequency components of the Stokes parameters. The SRM reconstructs the Stokes parameters under the global smoothing assumption and has more flexible filtering performance. With a suitable choice of the regularization parameters, the SRM can reconstruct the Stokes parameters with good visual quality and low reconstruction errors. Edge-preserving constraints can be applied to further improve the performance of the SRM.
\section{Introduction}
\IEEEPARstart{C}{onvolutional} neural networks (CNNs), which use a stack of convolution operations followed by non-linear activation (e.g., Rectified Linear Unit, ReLU) to extract high-level discriminative features, have achieved considerable improvements for visual tasks \cite{krizhevsky2012imagenet,he2016deep,zhao2019object}. Recent advances of the CNN architectures, such as ResNet \cite{he2016deep}, DenseNet \cite{huang2017densely}, ResNeXt \cite{xie2017aggregated}, and PyramidNet \cite{han2017deep}, ease the vanishing gradient problem and boost the performance. However, CNNs still suffer from the overfitting problem, which reduces their generalization capability.
A wide variety of regularization strategies were exploited to alleviate overfitting and decrease the generalization error. Data augmentation \cite{krizhevsky2012imagenet} is a simple yet effective manner to improve the diversity of training data. Batch normalization \cite{ioffe2015batch} standardizes the mean and variance of features of each mini-batch, which makes the optimization landscape smoother \cite{santurkar2018does}. Dropout \cite{srivastava2014dropout} aims to train an ensemble of sub-networks, weakening the effect of ``co-adaptions'' on training data. DropBlock \cite{ghiasi2018dropblock} introduces a structured dropout approach, which drops the contiguous regions of a feature map. Shake-Shake regularization \cite{gastaldi2017shake} was proposed to randomly interpolate two complementary features in the two residual branches of ResNeXt, achieving state-of-the-art classification performance. ShakeDrop \cite{yamada2018shakedrop} incorporates the idea of stochastic depth \cite{huang2016deep} with Shake-Shake regularization to stabilize the training process for ResNet-like architectures. Despite the impressive improvements brought by these regularization methods, they have two main drawbacks.
\begin{enumerate}
\item The regularization strength (or amplitude) is not flexible for different network architectures. For example, ShakeDrop was designed for deep networks but is not suitable for shallow ones; instead of improving classification performance, it even worsens the performance of shallow networks (see Table~\ref{table1}).
\item The regularization strength is fixed over the whole training process. A fixed strong regularization helps reduce overfitting, but it makes fitting the data difficult at the beginning of training. From the perspective of curriculum learning \cite{bengio2009curriculum}, the learner should begin with easy examples.
\end{enumerate}
In view of these issues, we propose a dynamic regularization method for CNNs, in which the regularization strength is adaptable to the change of the training loss. During training, the regularization strength is gradually increased with respect to the training status. Analogous to human education, the regularizer is regarded as an instructor who gradually increases the difficulty of training examples in a form of feature perturbation. The dynamic regularization can adapt to different model sizes: it provides a strong regularization for large-scale models and a weak one for light models (see Fig.~\ref{Fig5}(b)). That is, the regularization strength grows faster and reaches a higher value for a large-scale model than for a light one.
\begin{figure*} [ht]
\centering
\includegraphics[width=13cm]{FIG/Fig1.pdf}\\
\caption{The proposed dynamic regularization in the ResNet structure. Conv denotes the convolutional layer. FC denotes the fully connected layer. \(F\) denotes the residual function. \(\nabla f(loss)\) denotes a backward difference of the training loss. The dynamic regularization aims to form a self-adaptive schedule throughout training for various network sizes by adjusting the strength of the random perturbation \(\theta\). As a manner of feature augmentation, \(\theta\) introduces noise into the residual branch in the forward and backward processes.}\label{Fig1}
\end{figure*}
Fig. \ref{Fig1} shows the proposed dynamic regularization in the ResNet structure. The training loss is not only used to perform backpropagation but also exploited to update the amplitude of the regularization. The features in the residual branch are multiplied by the regularizer, which acts as a perturbation that introduces an augmentation in feature space, so CNNs are trained with diverse augmented features. Additionally, the regularization amplitude changes with respect to the change of the training loss. We conduct experiments on the image classification task to evaluate our regularization strategy. Experimental results show that the proposed dynamic regularization outperforms state-of-the-art regularization methods: PyramidNet, ResNeXt, and DenseNet equipped with our dynamic regularization improve the classification accuracy in various model settings compared with the same networks with ShakeDrop \cite{yamada2018shakedrop}, Shake-Shake \cite{gastaldi2017shake}, and DropBlock \cite{ghiasi2018dropblock}, respectively.
The rest of this paper is organized as follows. We first briefly introduce the related work on deep CNNs and regularization methods in Section \ref{Related}. Then, the proposed dynamic regularization is presented in Section \ref{Dynamic}. Experimental results and discussion are given in Section \ref{Experiment}. Finally, Section \ref{Conclusion} concludes this paper.
\section{Related Work} \label{Related}
\subsection{Deep CNNs}
CNNs have become deeper and wider with a more powerful capacity \cite{he2016deep,huang2017densely,han2017deep,simonyan2014very,szegedy2015going}. As our proposed regularization is based on ResNet and its variants, we briefly review the basic structure of ResNet, i.e., the residual block.
\textbf{Residual block.} The residual block (Res-Block, shown in Fig. \ref{Fig1}) is formulated as
\begin{equation}
\label{eqn1}
x_{l+1} = x_{l} + F(x_{l},\mathcal{W}_{l}),
\end{equation}
where \(x_{l}\) is the input feature of the \(l^{th}\) Res-Block (the identity branch), to which a residual branch \(F\) is added; \(F\) is a non-linear transformation of \(x_{l}\) with a set of parameters \(\mathcal{W}_{l}\) (\(\mathcal{W}_{l}\) will be omitted for simplicity in the following). \(F\) consists of two Conv-BN-ReLU stacks or a bottleneck architecture in the original ResNet structure \cite{he2016deep}. In recent works, \(F\) has also been designed in other forms, e.g., Wide-ResNet \cite{zagoruyko2016wide}, the Inception module \cite{szegedy2017inception}, PyramidNet \cite{han2017deep}, and ResNeXt \cite{xie2017aggregated}. PyramidNet gradually increases the number of channels in the Res-Blocks as the layers go deeper. ResNeXt has multiple aggregated residual branches, expressed as
\begin{equation}
\label{eqn2}
x_{l+1} = x_{l} + F_{1}(x_{l}) +F_{2}(x_{l}),
\end{equation}
where \(F_{1}\) and \(F_{2}\) are two residual branches. The number of branches (namely cardinality) is not limited.
\begin{figure*} [ht]
\centering
\includegraphics[width=16.2cm]{FIG/Fig2.pdf}\\
\caption{Shake-based regularization methods in the Res-Block. Some layers (e.g., batch normalization and ReLU) in the residual branch are omitted for simplicity. (a) 3-branch architecture with Shake-Shake regularization \cite{gastaldi2017shake}. (b) 2-branch architecture with ShakeDrop \cite{yamada2018shakedrop}.}\label{Fig2}
\end{figure*}
\subsection{Regularization}
In addition to the advances of network architectures, many regularization techniques, e.g., data augmentation \cite{krizhevsky2012imagenet,devries2017improved}, stochastic dropping \cite{srivastava2014dropout,huang2016deep,larsson2016fractalnet,morerio2017curriculum,ghiasi2018dropblock}, and Shake-based regularization methods \cite{gastaldi2017shake,yamada2018shakedrop}, have been successfully applied to avoid overfitting of CNNs.
Data augmentation (e.g., random cropping, flipping, and color adjusting \cite{krizhevsky2012imagenet}) is a simple yet effective strategy to increase the diversity of data. DeVries and Taylor \cite{devries2017improved} introduced an image augmentation technique, where augmented images are generated by randomly cutting out square regions from input images (called Cutout). Dropout \cite{srivastava2014dropout} is a widely-used technique which stochastically drops out the hidden nodes from the networks during the training process. Following this idea, Maxout \cite{goodfellow2013maxout}, Continuous Dropout \cite{shen2017continuous}, DropPath \cite{larsson2016fractalnet}, and stochastic depth \cite{huang2016deep} were proposed. Stochastic depth randomly drops a certain number of residual branches of ResNet so that the network is shrunk in training. By incorporating Dropout with Cutout, DropBlock \cite{ghiasi2018dropblock} drops the contiguous regions in a feature map. Adding a parameter norm penalty to the loss function, the weight decay (or Tikhonov regularization) is commonly used for neural networks and linear inverse problems \cite{de2003regularization}. DisturbLabel \cite{xie2016disturblabel} imposes noisy labels in the loss function. Shake-based regularization approaches \cite{gastaldi2017shake,yamada2018shakedrop} were recently proposed to augment features inside CNNs, which achieve appealing classification performance.
\textbf{Shake-based regularization approaches.} Gastaldi \cite{gastaldi2017shake} proposed a Shake-Shake regularization method, as shown in Fig. \ref{Fig2} (a). A random variable \(\alpha\) is used to control the interpolation of the two residual branches (i.e., \(F_{1}(x)\) and \(F_{2}(x)\) in 3-branch ResNeXt). It is given by:
\begin{equation}
\label{eqn3}
x_{l+1} = x_{l} + \alpha F_{1}(x_{l}) +(1-\alpha)F_{2}(x_{l}),
\end{equation}
where \(\alpha \in [0,1]\) follows the uniform distribution in the forward pass. For the backward pass, \(\alpha\) is replaced by another uniform random variable \(\beta \in [0,1]\) to disturb the learning process. The regularization amplitude of each branch is fixed to \(1\).
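A minimal NumPy sketch of the Shake-Shake combination of Eq. (\ref{eqn3}), with the independent forward coefficient \(\alpha\) and backward coefficient \(\beta\) applied by hand rather than through an autograd framework (function names and the fixed seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def shake_shake_forward(x, f1, f2):
    """Forward pass of Eq. (3): interpolate the two residual branches
    with a uniform random alpha in [0, 1]."""
    alpha = rng.uniform(0.0, 1.0)
    return x + alpha * f1 + (1.0 - alpha) * f2, alpha

def shake_shake_backward(grad_out):
    """Backward pass: the branch gradients use an independent uniform
    beta in [0, 1] instead of the forward alpha."""
    beta = rng.uniform(0.0, 1.0)
    return beta * grad_out, (1.0 - beta) * grad_out  # dL/dF1, dL/dF2

x = np.ones(4); f1 = 2 * np.ones(4); f2 = -np.ones(4)
y, alpha = shake_shake_forward(x, f1, f2)
assert np.allclose(y, x + alpha * f1 + (1 - alpha) * f2)
g1, g2 = shake_shake_backward(np.ones(4))
assert np.allclose(g1 + g2, np.ones(4))  # coefficients still sum to one
```

In an autograd framework this mismatch between forward and backward coefficients would be implemented with a custom gradient function.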
To extend the use of Shake-Shake regularization, Yamada \textit{et al.} \cite{yamada2018shakedrop} introduced a single Shake in 2-branch architectures (e.g., ResNet or PyramidNet) as shown in Fig. \ref{Fig2} (b). Stochastic depth \cite{huang2016deep} was adopted to stabilize the learning:
\begin{equation}
\label{eqn4}
x_{l+1} = x_{l} + (b_{l}+\alpha-b_{l}\alpha)F(x_{l}),
\end{equation}
where \(\alpha \in [-1,1]\) is a uniform random variable and \(b_{l} \in \left \{ 0,1 \right \}\) is a Bernoulli random variable determining whether to perform the original network (i.e., \(x_{l+1} = x_{l} + F(x_{l})\), if \(b_{l}=1\)) or the perturbed one (i.e., \(x_{l+1} = x_{l} + \alpha F(x_{l})\), if \(b_{l}=0\)). In the backward pass, \(\alpha\) is replaced by \(\beta \in [0,1]\). The probability of \(b_{l}\) is denoted \(p_{l}=P(b_{l}=1)\) and follows a linear decay rule, i.e., \(p_{l}=1-\frac{l}{L}(1-p_{L})\), where \(L\) is the total number of Res-Blocks and \(p_{L}=0.5\). The regularization amplitude of the branch is also fixed to \(1\). We argue that this heavy regularization overemphasizes overfitting, and that the fixed regularization amplitude cannot fit the dynamics of the training process and different model sizes well.
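The gate in Eq. (\ref{eqn4}) and the linear decay rule can be checked in a few lines (a sketch of ours; the helper name is not from the paper):

```python
def shakedrop_gate(b, alpha):
    # Eq. (4): b + alpha - b*alpha equals 1 when b = 1 (original network)
    # and alpha when b = 0 (perturbed network).
    return b + alpha - b * alpha

# Linear decay rule: p_l = 1 - (l/L)*(1 - p_L), with p_L = 0.5.
L, pL = 18, 0.5
p = [1 - (l / L) * (1 - pL) for l in range(1, L + 1)]
# Bottom blocks are kept almost surely (p close to 1); the top block has p = 0.5.
```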
\section{The Proposed Method}\label{Dynamic}
As aforementioned, the fixed regularization strength in existing regularization methods, such as DropPath \cite{larsson2016fractalnet}, stochastic depth \cite{huang2016deep}, Shake-Shake \cite{gastaldi2017shake}, and ShakeDrop \cite{yamada2018shakedrop}, departs from the human learning paradigm (e.g., curriculum learning \cite{bengio2009curriculum,morerio2017curriculum} or self-paced learning \cite{kumar2010self}). A naive remedy is to predefine a schedule for updating the regularization strength, such as the linear increment scheme in \cite{ghiasi2018dropblock,zoph2018learning}, which linearly increases the regularization strength from low to high. We argue that a predefined schedule is not flexible enough to reflect the learning process. Based on the fact that the training loss fully reflects the learning status of the system, we propose a dynamic regularization, which is capable of adjusting the regularization strength adaptively.
Our dynamic regularization for CNNs leverages the dynamics of the training loss. At the beginning of training, both the training and testing losses keep decreasing. After a certain number of iterations, the network overfits the training data, so that the training loss decreases more rapidly than the testing loss. We design a regularization strategy to follow these dynamics. If the training loss drops in an iteration, the regularization strength should increase against overfitting in the next iteration; otherwise, the regularization strength should decrease against underfitting. In what follows, we first introduce dynamic regularization in residual architectures and then detail the update of the regularization strength in each iteration of the training process. We finally extend our dynamic regularization to densely-connected networks.
\begin{figure} [t]
\centering
\includegraphics[width=7.5cm]{FIG/Fig3.pdf}\\
\caption{The 2-branch Res-Block with dynamic regularization.}\label{Fig3}
\end{figure}
\subsection{Residual Architectures with Dynamic Regularization}
We apply the dynamic regularization method in two residual network architectures: the 2-branch architecture (e.g., PyramidNet \cite{han2017deep}) and the 3-branch architecture (e.g., ResNeXt \cite{xie2017aggregated}).
\subsubsection{The 2-branch architecture with dynamic regularization}
\textbf{Training phase.} The dynamic regularization adopted in a Res-Block is shown in Figs. \ref{Fig3} (a) and (b). Specifically, a dynamic regularization unit (called random perturbation) is embedded into the residual branch of a Res-Block. The random perturbation \(\theta\) is given by
\begin{equation} \label{eqn5}
\theta = A + s_{i}\cdot r,
\end{equation}
where \(A\) is the basic constant amplitude, \(s_{i}\) is the dynamic factor at the \(i^{th}\) iteration, and \(r \in [-R,R]\) is the uniform random noise with the expected value \(E(r)=0\). The value of $s_i$ is updated via the backward difference of the training loss (See Section \ref{Dynamic}.B). The regularization amplitude is proportional to \(A+s_{i}\cdot R\). In the forward pass, the output of the \((l+1)^{th}\) Res-Block can be expressed as:
\begin{equation}
\label{eqn6}
x_{l+1} = x_{l} + (A+s_{i}\cdot r)F(x_{l}).
\end{equation}
In the backward pass, \(\theta\) has a different value (represented by \(\mu\) in Fig. \ref{Fig3} (b)) due to the random noise \(r\).
\textbf{Random noise.} The range of \(r\), i.e., $R$, is a hyper-parameter in the training phase. A straightforward way is to set \(R\) to be uniform across all Res-Blocks. According to \cite{huang2016deep}, however, the features of the bottom Res-Blocks should be preserved more than those of the top Res-Blocks. Hence, we propose a linear enhancement rule to configure this range across Res-Blocks. For the \(l^{th}\) Res-Block, the range, denoted as $R_l$, is given by
\begin{equation}
\label{eqn7}
R_{l} = l/L,
\end{equation}
where \(L\) is the total number of Res-Blocks. With the linearly increased \(R\), the regularization strength is gradually raised from the bottom layers to the top layers. We conducted comparative experiments on different settings of \(R\) in Section \ref{Experiment}.C.3.
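A minimal NumPy sketch (function and variable names are our own) of Eqs. (\ref{eqn5}) and (\ref{eqn7}): the noise range grows linearly with block depth, while the expected value of \(\theta\) stays at \(A\) in every block, consistent with Eq. (\ref{eqn8}):

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbation(A, s_i, R_l):
    # Eq. (5): theta = A + s_i * r, with r ~ U[-R_l, R_l] so that E(r) = 0.
    r = rng.uniform(-R_l, R_l)
    return A + s_i * r

L, A, s_i = 10, 0.5, 1.0
ranges = [l / L for l in range(1, L + 1)]  # Eq. (7): R_l = l / L
means = [np.mean([perturbation(A, s_i, R) for _ in range(20000)])
         for R in ranges]
# Each sample mean approaches A = 0.5 regardless of the block depth.
```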
\textbf{Inference phase.} As shown in Fig. \ref{Fig3} (c), we calculate the expected value of \(\theta\) as
\begin{equation}
\label{eqn8}
E(\theta) = E(A+s_{i}\cdot r)=A,
\end{equation}
and obtain a forward pass for inference:
\begin{equation}
\label{eqn9}
x_{l+1} = x_{l} +A\cdot F(x_{l}).
\end{equation}
Since \(A\) is a constant, Eq. (\ref{eqn9}) is equivalent to the standard Res-Block.
\subsubsection{The 3-branch architecture with dynamic regularization}
As shown in Fig. \ref{Fig2} (a), we apply the dynamic regularization in the 3-branch architecture. Formally, we use the proposed random perturbation \(\theta\) of Eq. (\ref{eqn5}) to replace \(\alpha\) of Eq. (\ref{eqn3}) in Shake-Shake regularization. Hence, the Res-Block with dynamic regularization can be defined as
\begin{equation}
\label{eqn10}
x_{l+1} = x_{l} + (A+s_{i}\cdot r)F_{1}(x_{l}) +(1-A-s_{i}\cdot r)F_{2}(x_{l}).
\end{equation}
If we set \(A=0.5\), \(r \in [-0.5, 0.5]\), and \(s_{i}=1\), then \(\theta\) ranges from \(0\) to \(1\), which is equivalent to \(\alpha\) of Eq. (\ref{eqn3}). The Shake-Shake regularization can thus be viewed as a special case of our dynamic regularization with a fixed strength.
\subsection{Update of the Regularization Strength}
The proposed update of the dynamic regularization strength is driven by the dynamics of the training loss. In particular, the dynamic characteristic of the training loss can be modeled as the backward difference between the training losses at successive iterations:
\begin{equation}
\label{eqn11}
\nabla loss_{i}= loss_{i} - loss_{i-1},
\end{equation}
where \(loss_{i}\) denotes the training loss at the \(i^{th}\) iteration. Although the training loss shows a downtrend in training, large fluctuations appear when sequential mini-batches are fed. To eliminate the fluctuations and obtain the overall trend of the loss, we apply a Gaussian filter to smooth it. The filtered backward difference can be rewritten as
\begin{equation}
\label{eqn12}
\nabla f(loss_{i})= f(loss_{i}) - f(loss_{i-1}),
\end{equation}
where \(f(\cdot )\) is the filtering operation defined as
\begin{equation}
\label{eqn13}
f(loss_{i})=\sum_{n=0}^{N}w[n]\cdot loss_{i-n}.
\end{equation}
The filter length is \(N+1\). Here we use the normalized Gaussian window and formulate \(w[n]\) as
\begin{equation}
\label{eqn14}
w[n]=\frac{1}{\sqrt{2\pi}(\sigma N/2)}e^{-\frac{1}{2}\left ( \frac{n-N/2}{\sigma N/2} \right )^{2}},
\end{equation}
where \(\sigma=0.4\), and \(0\leq n \leq N\). The standard deviation is determined by \(\sigma \cdot N/2\). We will discuss the effectiveness of the Gaussian filter in Section \ref{Experiment}.C.4. The dynamic factor in Eqs. (\ref{eqn6}) and (\ref{eqn10}) is updated with respect to \(\nabla f(loss_{i})\), i.e.,
\begin{equation}
\label{eqn15}
s_{i+1} = \left\{\begin{matrix}
s_{i} + \Delta s, & \nabla f(loss_{i})\leq 0\\
s_{i} - \Delta s, & \nabla f(loss_{i})> 0
\end{matrix}\right.
\end{equation}
where \(\Delta s\) is a small constant step for changing the regularization amplitude. From Eq. (\ref{eqn15}), it can be observed that if the training loss decreases (\(\nabla f(loss_{i})\leq 0\)), the regularization amplitude increases to avoid overfitting; otherwise, it decreases to prevent underfitting. The dynamic factor keeps updating to reflect the dynamics of the training loss.
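The update loop of Eqs. (\ref{eqn12})-(\ref{eqn15}) can be sketched as follows (our own illustration; function names are ours, and the window of Eq. (\ref{eqn14}) is used exactly as written):

```python
import numpy as np

def gaussian_window(N, sigma=0.4):
    # Eq. (14): w[n] for n = 0, ..., N, with standard deviation sigma*N/2.
    n = np.arange(N + 1)
    std = sigma * N / 2.0
    return np.exp(-0.5 * ((n - N / 2.0) / std) ** 2) / (np.sqrt(2.0 * np.pi) * std)

def filtered(losses, w, i):
    # Eq. (13): f(loss_i) = sum_{n=0}^{N} w[n] * loss_{i-n}
    N = len(w) - 1
    return float(np.dot(w, losses[i - N:i + 1][::-1]))

def update_factor(s, losses, w, delta_s):
    # Eqs. (12) and (15): raise s when the smoothed loss drops, else lower it.
    i = len(losses) - 1
    grad = filtered(losses, w, i) - filtered(losses, w, i - 1)
    return s + delta_s if grad <= 0 else s - delta_s

w = gaussian_window(N=50)
falling = np.linspace(2.0, 1.0, 200)          # loss keeps decreasing
rising = np.linspace(1.0, 2.0, 200)           # loss keeps increasing
s_up = update_factor(0.0, falling, w, 3e-4)   # strength grows by delta_s
s_down = update_factor(0.0, rising, w, 3e-4)  # strength shrinks by delta_s
```

The smoothing matters in practice: with raw mini-batch losses, the sign of the backward difference would flip on almost every iteration.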
\textbf{Remark.} Some methods have been proposed to change the regularization strength. For instance, Zoph \textit{et al.} \cite{zoph2018learning} introduced a linear increment scheme, ScheduledDropPath, to regularize NASNets, where the probability of dropping a path is increased linearly throughout training. Following this, DropBlock \cite{ghiasi2018dropblock} employs a linearly-increased dropping rate. However, a constant or linear scheme is still a predefined rule, which cannot adapt to the training procedure and different model sizes. In contrast, our dynamic scheduling exploits the dynamics of the training loss and is applicable to different network architectures. In Section \ref{Experiment}.C.2, we compare these schemes.
\begin{figure} [t]
\centering
\includegraphics[width=7.5cm]{FIG/Fig4.pdf}\\
\caption{Dense block with dynamic regularization. \(F\) denotes a convolution operation. \(\theta\) is a random perturbation. `C' means a concatenation operation. \(\nabla f(loss)\) denotes a backward difference of the training loss.}\label{Fig4}
\end{figure}
\subsection{Extension to Densely-Connected Networks}
To illustrate the flexibility of our method, we further incorporate it into DenseNet \cite{huang2017densely} by assigning the random perturbations inside the dense block. Fig. \ref{Fig4} shows a two-layer dense block with dynamic regularization, where the perturbations are inserted after the output features of the convolutional layers. In this manner, the noise from all preceding layers accumulates at the current layer, so a small perturbation could lead to serious noise in the subsequent layers. In experiments, we found that ShakeDrop and DropBlock with default hyper-parameters yield worse results, which is caused by the overly strong regularization. To decrease the regularization strength, we increase the probability of the Bernoulli random variable (i.e., \(p_{L}\), the rate of keeping original features rather than shaking them) in Eq. (\ref{eqn4}) for ShakeDrop and increase the \textit{keep\_prob} for DropBlock. Note that our dynamic regularization requires no adjustment of the hyper-parameters used in the residual structure and obtains consistently better results. More details can be seen in Section \ref{Experiment}.
\begin{table}[t]
\centering \caption{Comparison of regularization methods on CIFAR100 in the 2-branch architecture (i.e., PyramidNet) and DenseNet architecture. Top-1 error rates (\%) are shown. Dynamic denotes the proposed regularization method. HP: hyper-parameters. The best result under each case is bold.}
\label{table1}
\begin{tabular}{l|c|c|c}
\hline
Network Architecture & Params & Regularization & \begin{tabular}[c]{@{}c@{}}Top-1 \\ Error \end{tabular} \\ \hline
\multirow{4}{*}{PyramidNet-110-a48} & \multirow{4}{*}{1.8M} & Baseline \cite{han2017deep} & 23.40 \\ \cline{3-4}
& & ShakeDrop \cite{yamada2018shakedrop} & 21.60 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 21.50 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{21.32} \\ \hline
\multirow{4}{*}{PyramidNet-26-a84} & \multirow{4}{*}{0.9M} & Baseline \cite{han2017deep} & 26.30 \\ \cline{3-4}
& & ShakeDrop \cite{yamada2018shakedrop} & 31.83 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 23.88 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{23.83} \\ \hline
\multirow{4}{*}{PyramidNet-26-a200} & \multirow{4}{*}{3.8M} & Baseline \cite{han2017deep} & 22.53 \\ \cline{3-4}
& & ShakeDrop \cite{yamada2018shakedrop} & 26.11 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 21.22 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{20.34} \\ \hline \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}DenseNet-BC-100-k12 \\ (default HP) \end{tabular}} & \multirow{4}{*}{0.8M} & Baseline \cite{huang2017densely} & 22.26 \\ \cline{3-4}
 & & ShakeDrop \cite{yamada2018shakedrop} (pL=0.5) & 25.29 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} (kp=0.9) & 23.57 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{20.59} \\ \hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}DenseNet-BC-100-k12 \\ (optimized HP)\end{tabular}} & \multirow{2}{*}{0.8M} & ShakeDrop \cite{yamada2018shakedrop} (pL=0.9) & 21.41 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} (kp=0.95) & 21.20 \\ \hline
\end{tabular}
\end{table}
\section{Experimental Results}\label{Experiment}
In this section, we evaluate the proposed dynamic regularization on the classification benchmarks CIFAR100 \cite{krizhevsky2009learning} and ImageNet \cite{deng2009imagenet}, in comparison with three state-of-the-art approaches: Shake-Shake \cite{gastaldi2017shake}, ShakeDrop \cite{yamada2018shakedrop}, and DropBlock \cite{ghiasi2018dropblock}. Then we conduct ablation studies to compare the fixed and linear-increment schemes of the regularization strength, and discuss the effectiveness of the Gaussian filter and the random noise.
\subsection{Implementation Details}
\subsubsection{CIFAR100}
The following settings are used throughout the experiments. We set the number of training epochs to \(300\) and the batch size to \(128\). The learning rate was initialized to \(0.1\) for the 2-branch architecture \cite{yamada2018shakedrop}, and \(0.2\) for the 3-branch architecture \cite{gastaldi2017shake} and DenseNet \cite{huang2017densely}. We used the cosine learning schedule to gradually reduce the learning rate to \(0\). The weight decay and momentum were set to 0.0001 and 0.9, respectively. PyramidNet \cite{han2017deep}, ResNeXt \cite{xie2017aggregated}, and DenseNet \cite{huang2017densely} were used as baselines. We employed the standard translation, flipping \cite{krizhevsky2012imagenet}, and Cutout \cite{devries2017improved} as the data augmentation scheme, so the regularizer is the only factor that differs across experiments. All experimental results are reported as the average of 3 runs at the 300th epoch.
\subsubsection{ImageNet}
ResNet-18 was trained on ImageNet-1k \cite{krizhevsky2012imagenet} with 120 epochs. The learning rate was initialized to \(0.1\). Other settings were the same as those in CIFAR100. We reported the single-crop testing results.
\subsubsection{Regularizer}
For the dynamic regularization, we set the initial dynamic factor \(s_{0}=0\) and \(A=0.5\). We used \(\Delta s=0.0003\) for the 2-branch architecture and DenseNet, and \(\Delta s=0.00025\) for the 3-branch architecture. The length of the Gaussian filter was \(501\). In ShakeDrop \cite{yamada2018shakedrop} and Shake-Shake \cite{gastaldi2017shake}, the default hyper-parameters were employed. In DropBlock \cite{ghiasi2018dropblock}, we used the default hyper-parameters in their paper, i.e., the \textit{keep\_prob} and \textit{block\_size} were set to 0.9 and 7, respectively. To prevent high regularization strength in DenseNet, we increased the value of \(p_{L}\) of ShakeDrop from 0.5 to 0.9 and increased the value of \textit{keep\_prob} of DropBlock to 0.95. To prevent underfitting of ImageNet, we applied a small \(\Delta s=5\times 10^{-7}\) in the dynamic regularization and 0.99 \textit{keep\_prob} in DropBlock.
\begin{table}[t]
\centering \caption{Comparison of regularization methods on CIFAR100 in the 3-branch architecture (i.e., ResNeXt). Top-1 error rates (\%) are shown. The best result under each case is bold.}
\label{table2}
\begin{tabular}{l|c|c|c}
\hline
Network Architecture & Params & Regularization & Top-1 Error (\%) \\ \hline \hline
\multirow{4}{*}{ResNeXt-26-2x32d} & \multirow{4}{*}{2.9M} & Baseline \cite{xie2017aggregated} & 22.95 \\ \cline{3-4}
& & Shake-Shake \cite{gastaldi2017shake} & 21.45 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 21.20 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{20.91} \\ \hline
\multirow{4}{*}{ResNeXt-26-2x64d} & \multirow{4}{*}{11.7M} & Baseline \cite{xie2017aggregated} & 20.59 \\ \cline{3-4}
& & Shake-Shake \cite{gastaldi2017shake} & 19.19 \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 19.26 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{18.76} \\ \hline
\end{tabular}
\end{table}
\subsection{Comparison with State-of-the-Art Regularization Methods}
\subsubsection{2-branch architecture}
We start by comparing the proposed dynamic regularization with ShakeDrop \cite{yamada2018shakedrop} and DropBlock \cite{ghiasi2018dropblock} in the 2-branch architecture on CIFAR100. Following ShakeDrop, we used PyramidNet \cite{han2017deep} as our baseline (denoted as Baseline in Table \ref{table1}) and chose different architectures, including: 1) PyramidNet-110-a48 (i.e., a network with a depth of 110 layers and a widening factor of 48), which is a deep and narrow network, 2) PyramidNet-26-a84, which is a shallow network, and 3) PyramidNet-26-a200, which is a shallow and wide network.
The first three entries of Table \ref{table1} are the results of PyramidNet. From Table \ref{table1}, it can be observed that our dynamic regularization outperforms the ShakeDrop and DropBlock counterparts in various architectures. The error rates of ShakeDrop are even worse than those of Baseline in the shallow architectures, i.e., PyramidNet-26-a84 and PyramidNet-26-a200, which means ShakeDrop with a fixed regularization strength fails in this case. This issue stems from stochastic depth \cite{huang2016deep}, which is designed for deep networks. With a linearly-increased dropping rate, DropBlock attains lower error rates than Baseline. However, its predefined schedule of the dropping rate is inferior to our dynamic schedule. Regardless of the depth of networks, the dynamic regularization method obtains a consistent improvement.
\subsubsection{3-branch architecture}
For the 3-branch architecture, we compare the dynamic regularization with Shake-Shake \cite{gastaldi2017shake} and DropBlock \cite{ghiasi2018dropblock} in ResNeXt-26-2x32d (i.e., the network has a depth of 26 layers and 2 residual branches, and the first residual block has a width of 32 channels) and ResNeXt-26-2x64d. The results are shown in Table \ref{table2}. We can see that the error rates of dynamic regularization are lower than those of Shake-Shake and DropBlock. The results from Tables \ref{table1} and \ref{table2} show that our dynamic regularization can adapt to various network architectures. Our method can decrease the errors by more than 2\% on average in comparison with Baseline.
\subsubsection{Densely-connected architecture}
Moreover, we evaluate the regularization methods in DenseNet-BC-100-k12 (i.e., the network uses bottleneck layers and compression with a depth of 100 layers and a growth rate of 12 \cite{huang2017densely}). The results are shown at the bottom of Table \ref{table1}. Default HP means that all regularizers employed the same hyper-parameters as in PyramidNet. Optimized HP means we adjusted the hyper-parameters controlling the regularization strength for DenseNet. With the default HP, ShakeDrop and DropBlock degraded the performance of Baseline. We found that the training errors of the two methods were much higher than the testing errors, which means the models underfitted the data due to the high regularization strength. With the optimized HP, we decreased the regularization strengths: we set a larger \textit{keep\_prob} for DropBlock and a larger \(p_{L}\) for ShakeDrop, and the performance improved accordingly. In contrast, without adjusting the hyper-parameters, our dynamic regularization was stable and reduced the Top-1 error by 1.67\% (from 22.26\% to 20.59\%).
\begin{figure} [t]
\centering
\includegraphics[width=8.8cm]{FIG/Fig5.pdf}\\
\caption{Illustration of the training loss, dynamic factor, and Top-1 error with respect to epoch for PyramidNet. Gap stands for the difference between training and testing errors. \textit{Zoom in the figure for better viewing}.}\label{Fig5}
\end{figure}
\begin{table}[t]
\centering \caption{Comparison of regularization methods on ImageNet, with single-crop testing. Top-1 error rates (\%) are shown. The best result is bold.}
\label{table3}
\begin{tabular}{l|c|c|c}
\hline
Network Architecture & Params & Regularization & Top-1 Error (\%) \\ \hline \hline
\multirow{4}{*}{ResNet-18} & \multirow{4}{*}{11.7M} & Baseline \cite{he2016deep} & 29.05 \\ \cline{3-4}
& & ShakeDrop \cite{yamada2018shakedrop} & - \\ \cline{3-4}
& & DropBlock \cite{ghiasi2018dropblock} & 29.06 \\ \cline{3-4}
& & Dynamic (ours) & \textbf{28.82} \\ \hline
\end{tabular}
\end{table}
\subsubsection{Results on ImageNet}
We evaluate ResNet-18 with dynamic regularization on ImageNet. The classification results are shown in Table \ref{table3}. Because the light ResNet-18 model is trained on a large amount of data, the Baseline underfitted the training data, so strong regularizers led to worse performance. ShakeDrop could not converge on such a shallow network, so we did not report it. DropBlock also did not work well, even though we reduced the regularization strength (i.e., the value of \textit{keep\_prob} was set to 0.99). Our dynamic regularization performed well in this situation and produced the best result (28.82\%).
\subsection{Ablation Study and Discussion}
\subsubsection{Effectiveness of dynamic regularization}
Fig. \ref{Fig5} shows the training loss, dynamic factor, and Top-1 error with respect to the epoch for the two networks, i.e., PyramidNet-26-a84 and PyramidNet-110-a48. As shown in Fig. \ref{Fig5} (a), one property of dynamic regularization is that it prevents the training loss from descending rapidly. In other words, the networks cannot easily fit the training data by rote. Fig. \ref{Fig5} (b) illustrates that the dynamic factors of the two networks gradually increase throughout training. Instead of using a predefined scheduling function as in \cite{zoph2018learning}, our dynamic scheduling is self-adaptive according to the backward difference of the training loss. Another important property of dynamic regularization is that a low regularization strength is generated for a light model (e.g., PyramidNet-26-a84) and a high strength for a heavy model (e.g., PyramidNet-110-a48). Figs. \ref{Fig5} (c) and (d) show that the networks with dynamic regularization narrow the gap between the training and testing errors (from Gap-1 to Gap-2 for PyramidNet-26-a84 and from Gap-3 to Gap-4 for PyramidNet-110-a48, respectively) and achieve lower testing errors than the Baselines.
\subsubsection{Schedules of the regularization strength}
In \cite{zoph2018learning,ghiasi2018dropblock}, the regularization strength is adjusted by a linear-increment schedule, where ScheduledDropPath is used to linearly increase the probability of dropped path (that can also be considered as the regularization strength) in training. Besides, the fixed regularization schedule is commonly used in previous methods \cite{larsson2016fractalnet,huang2016deep,gastaldi2017shake,yamada2018shakedrop}. We used PyramidNet-26-a84 as a backbone to compare different regularization schedules.
Table \ref{table4} illustrates six different configurations of the regularization strength. `Fix-\(x\)' means the dynamic factor is fixed to \(x\) and `Linear-\(x\)' means the dynamic factor is linearly scheduled from \(0\) to \(x\) over the course of training steps. `Fix-2' and `Linear-3' achieve the best results in fixed and linear schedules, respectively. Compared with them, the dynamic setting with 23.83\% error rate achieved the best performance, which shows the effectiveness of our dynamic regularization schedule.
\begin{table}[t]
\centering \caption{Comparison of regularization schedules.}
\label{table4}
\begin{tabular}{l|c|l|c}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{PyramidNet-26-a84}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Top-1 \\ Error(\%)\end{tabular}} & \multicolumn{1}{c|}{\multirow{2}{*}{PyramidNet-26-a84}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Top-1\\ Error(\%)\end{tabular}} \\
\multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & \\ \hline \hline
Fix-1 & 25.45 & Linear-1 & 25.76 \\ \hline
Fix-2 & 24.75 & Linear-2 & 25.09 \\ \hline
Fix-3 & 25.52 & Linear-3 & 24.28 \\ \hline
Fix-4 & 30.52 & Linear-4 & 25.80 \\ \hline \hline
Baseline & 26.30 & Dynamic & \textbf{23.83} \\ \hline
\end{tabular}
\end{table}
\subsubsection{Random noise}
As mentioned in Section \ref{Dynamic}, the range of the random noise in our dynamic regularization, i.e., $R$, is designed to grow linearly from the bottom Res-Blocks to the top Res-Blocks. To evaluate this setting, we ran dynamic regularization with a uniform \(R\) and with a linearly growing \(R\) in PyramidNet-26-a84. From the 2nd and 3rd entries of Table \ref{table5}, we can see that the model with a uniform \(R\) is inferior to the model with a linearly growing \(R\) (25.28\% vs. 23.83\%).
\subsubsection{Gaussian Filtering}
In the process of updating the dynamic factor, we employed a Gaussian filter to remove instant changes of the training loss in the mini-batch mode. That is, we refer to Eq. (\ref{eqn12}) rather than Eq. (\ref{eqn11}) to update the dynamic factor. To study the effectiveness of the Gaussian filter, we conducted comparative experiments between dynamic regularization with and without the Gaussian filter. The last two entries of Table \ref{table5} show that if we remove the Gaussian filter, the error rate increases by 1.38\%, which validates that the Gaussian filter plays an important role in dynamic regularization.
\begin{table}[t]
\centering \caption{Effectiveness of linearly growing \(R\) and Gaussian filtering.}
\label{table5}
\begin{tabular}{l|c}
\hline
PyramidNet-26-a84 & \begin{tabular}[c]{@{}l@{}}Top-1 Error (\%)\end{tabular} \\ \hline \hline
Baseline & 26.30 \\ \hline
Dynamic-Uniform \(R\) & 25.28 \\ \hline
Dynamic-Linear growth \(R\) & \textbf{23.83} \\ \hline \hline
Dynamic-No filter & 25.21 \\ \hline
Dynamic-Gaussian filter & \textbf{23.83} \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}\label{Conclusion}
In this paper, we have presented a dynamic schedule that adjusts the regularization strength to fit various network architectures and the training process. Our dynamic regularization is self-adaptive according to the change of the training loss. It produces a low regularization strength for light network architectures and a high strength for heavy ones. Furthermore, the strength grows in a self-paced manner to avoid overfitting. Experimental results demonstrate that the proposed dynamic regularization outperforms the state-of-the-art ShakeDrop, Shake-Shake, and DropBlock regularization methods. In the future, we will investigate the potential of dynamic regularization in data augmentation and Dropout-based methods.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The general framework of perturbation theory consists in the construction of
the chronological products such that Bogoliubov axioms are verified \cite{BS}, \cite{EG}, \cite{DF}, \cite{ano}; for every set of Wick monomials
$
W_{1}(x_{1}),\dots,W_{n}(x_{n})
$
acting in some Fock space generated by the free fields of the model
$
{\cal H}
$
one associates the operator
$
T_{W_{1},\dots,W_{n}}(x_{1},\dots,x_{n});
$
all these expressions are in fact distribution-valued operators called chronological products. Sometimes it is convenient to use another notation:
$
T(W_{1}(x_{1}),\dots,W_{n}(x_{n})).
$
The construction of the chronological products can be done recursively according to the Epstein-Glaser prescription \cite{EG}, \cite{Gl} (which reduces the induction procedure to a distribution splitting of some distributions with causal support) or according to the Stora prescription \cite{PS} (which reduces the renormalization procedure to the process of extension of distributions). These products are not uniquely defined, but there are some natural limitations on the arbitrariness. If the arbitrariness does not grow with $n$, we have a renormalizable theory. An equivalent point of view uses retarded products \cite{St1}.
Gauge theories describe particles of higher spin. Usually such theories are not renormalizable. However, one can save renormalizability using ghost fields. Such theories are defined in a Fock space
$
{\cal H}
$
with indefinite metric, generated by physical and un-physical fields (called {\it ghost fields}). One selects the physical states assuming the existence of an operator $Q$ called {\it gauge charge} which verifies
$
Q^{2} = 0
$
and such that the {\it physical Hilbert space} is by definition
$
{\cal H}_{\rm phys} \equiv Ker(Q)/Im(Q).
$
The space
$
{\cal H}
$
is endowed with a grading (usually called {\it ghost number}) and by construction the gauge charge is raising the ghost number of a state. Moreover, the space of Wick monomials in
$
{\cal H}
$
is also endowed with a grading which follows by assigning a ghost number to every one of the free fields generating
$
{\cal H}.
$
The graded commutator
$
d_{Q}
$
of the gauge charge with any operator $A$ of fixed ghost number
\begin{equation}
d_{Q}A = [Q,A]
\end{equation}
is raising the ghost number by a unit. It means that
$
d_{Q}
$
is a co-chain operator in the space of Wick polynomials. From now on
$
[\cdot,\cdot]
$
denotes the graded commutator.
A gauge theory assumes also that there exists a Wick polynomial of null ghost number
$
T(x)
$
called {\it the interaction Lagrangian} such that
\begin{equation}
~[Q, T] = i \partial_{\mu}T^{\mu}
\end{equation}
for some other Wick polynomials
$
T^{\mu}.
$
This relation means that the expression $T$ leaves the physical states invariant, at least in the adiabatic limit. In all known models one finds that there exists a chain of Wick polynomials
$
T^{\mu},~T^{\mu\nu},~T^{\mu\nu\rho},\dots
$
such that:
\begin{equation}
~[Q, T] = i \partial_{\mu}T^{\mu}, \quad
[Q, T^{\mu}] = i \partial_{\nu}T^{\mu\nu}, \quad
[Q, T^{\mu\nu}] = i \partial_{\rho}T^{\mu\nu\rho},\dots
\label{descent}
\end{equation}
In all cases
$
T^{\mu\nu},~T^{\mu\nu\rho},\dots
$
are completely antisymmetric in all indices; it follows that the chain of relations stops at step $4$ (if we work in four dimensions). We can also use a compact notation
$
T^{I}
$
where $I$ is a collection of indices
$
I = [\nu_{1},\dots,\nu_{p}]~(p = 0,1,\dots)
$
and the brackets emphasize the complete antisymmetry in these indices. All these polynomials have the same canonical dimension
\begin{equation}
\omega(T^{I}) = \omega_{0},~\forall I
\end{equation}
and because the ghost number of
$
T \equiv T^{\emptyset}
$
is supposed null, then we also have:
\begin{equation}
gh(T^{I}) = |I|.
\end{equation}
One can write compactly the relations (\ref{descent}) as follows:
\begin{equation}
d_{Q}T^{I} = i~\partial_{\mu}T^{I\mu}.
\label{descent1}
\end{equation}
For concrete models the equations (\ref{descent}) can stop earlier: for instance in the case of gravity
$
T^{\mu\nu\rho\sigma} = 0.
$
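As a side check (our own illustration, not part of the original argument), the complete antisymmetry fixes the length of the chain: in $d = 4$ dimensions a completely antisymmetric $T^{I}$ with $|I| = p$ has $\binom{4}{p}$ independent components, which vanishes for $p > 4$:

```python
import math

d = 4  # spacetime dimension
# Independent components of a completely antisymmetric tensor with p indices
# in d dimensions: C(d, p); zero for p > d, so the descent chain terminates.
components = [math.comb(d, p) for p in range(7)]
# components == [1, 4, 6, 4, 1, 0, 0]
```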
Now we can construct the chronological products
$$
T^{I_{1},\dots,I_{n}}(x_{1},\dots,x_{n}) \equiv T(T^{I_{1}}(x_{1}),\dots,T^{I_{n}}(x_{n}))
$$
according to the recursive procedure. We say that the theory is gauge invariant in all orders of perturbation theory if the following identities, generalizing (\ref{descent1}),
\begin{equation}
d_{Q}T^{I_{1},\dots,I_{n}} =
i \sum_{l=1}^{n} (-1)^{s_{l}} {\partial\over \partial x^{\mu}_{l}}
T^{I_{1},\dots,I_{l}\mu,\dots,I_{n}}
\label{gauge}
\end{equation}
are true for all
$n \in \mathbb{N}$
and all
$
I_{1}, \dots, I_{n}.
$
Here we have defined
\begin{equation}
s_{l} \equiv \sum_{j=1}^{l-1} |I_{j}|
\end{equation}
(see also \cite{DB}). In particular, the case
$
I_{1} = \dots = I_{n} = \emptyset
$
is sufficient for the gauge invariance of the scattering matrix, at least
in the adiabatic limit.
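For instance, for
$
n = 2
$
we have
$
s_{1} = 0,~s_{2} = |I_{1}|,
$
so the identities (\ref{gauge}) read:
\begin{equation}
d_{Q}T^{I_{1},I_{2}} = i~{\partial\over \partial x^{\mu}_{1}}T^{I_{1}\mu,I_{2}}
+ i~(-1)^{|I_{1}|}~{\partial\over \partial x^{\mu}_{2}}T^{I_{1},I_{2}\mu}.
\end{equation}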
Such identities can usually be broken by {\it anomalies}, i.e. expressions of the type
$
A^{I_{1},\dots,I_{n}}
$
which are quasi-local and might appear in the right-hand side of the relation (\ref{gauge}). These expressions verify some consistency conditions, the so-called Wess-Zumino equations. One can use these equations in an attempt to eliminate the anomalies by redefining the chronological products. All these operations can be proved to be of cohomological nature and naturally lead to descent equations of the same type as (\ref{descent1}), but for different ghost number and canonical dimension.
If one can choose the chronological products such that gauge invariance holds, there is still some freedom left for redefining them. To decide whether the theory is renormalizable one needs the general form of this arbitrariness. Again, one can reduce the study of the arbitrariness to descent equations of the same type as (\ref{descent1}).
Cohomology problems of this type have been extensively studied in the more popular approach to quantum gauge theory based on functional methods (derived from path-integral quantization). In that setting the co-chain operator is non-linear and makes sense only for classical field theories. On the contrary, in the causal approach the co-chain operator is linear, so the cohomology problem makes sense directly in the Hilbert space of the model. For technical reasons
one needs however a classical field theory machinery to analyze the descent equations more easily.
In this paper we want to give a general description of these methods and we will apply them to gravitation models. We consider the case of massless and massive gravity.
In the next Section we remind the axioms verified by the chronological products (for simplicity we give them only for the second order of perturbation theory) and consider the particular case of gauge models. In Section \ref{WZ} we give some general results about the structure of the anomalies and reduce the proof of (\ref{gauge}) to descent equations. We will use a convenient geometric setting for our problem presented in \cite{cohomology}. In Section \ref{q} we determine the cohomology of the operator
$
d_{Q}
$
for gravity models. Using this cohomology and the algebraic Poincar\'e lemma we can solve the descent equations in various ghost numbers in Section \ref{relative}. In Section \ref{int} we use these methods to prove the gauge invariance and renormalization properties of the pure gravity model in the second order of perturbation theory. In Section \ref{int1} we determine the interaction between massless gravity, matter and Yang-Mills fields; we consider here only massless Yang-Mills fields and will treat the massive case elsewhere. An interesting feature appears in this case because, as is well known, one cannot enforce the property
\begin{equation}
\partial_{\mu}v_{a}^{\mu} = 0
\end{equation}
for quantum massless fields. This means that the usual expression for the energy-momentum tensor
$
T^{\mu\nu}
$
will not be divergenceless in the quantum context and this spoils the gauge invariance property of the interaction. Fortunately one can correct this by adding a new ghost term to the interaction Lagrangian.
In \cite{Sc2} one can find similar results for massless gravity and its interaction with a scalar field but the cohomological methods are not used for the proofs.
We note that the renormalizability of quantum gravity has attracted a lot of attention; we mention \cite{GW} and \cite{Kr}. Here we concentrate only on some technical procedures of cohomological nature which can be used to simplify the understanding of the lower orders of perturbation theory.
\section{General Gauge Theories\label{ggt}}
We give here the essential ingredients of perturbation theory. For simplicity we emphasize the second order of the perturbation theory.
\subsection{Bogoliubov Axioms}{\label{bogoliubov}}
Suppose that we have a Fock space
$
{\cal H}
$
generated by some set of free fields and consider a set of Wick monomials
$
W_{j},~j = 1,\dots,n
$
acting in this Hilbert space. The chronological products
$
T(W_{1}(x_{1}),\dots,W_{n}(x_{n})) \equiv T_{W_{1},\dots,W_{n}}(x_{1},\dots,x_{n})
\quad n = 1,2,\dots
$
are operator-valued distributions acting in the Fock space and verifying a set of axioms (named {\it Bogoliubov axioms}) explained in detail in \cite{cohomology}. For the case of gravity we will concentrate in this paper on the second order of perturbation theory, so we give the axioms only for the cases
$
n= 1, 2.
$
We postulate the ``initial condition"
\begin{equation}
T_{W}(x) = W(x)
\end{equation}
and we give the axioms for the chronological products
$
T_{W_{1},W_{2}}(x_{1},x_{2});
$
for the particular case
$
W_{1} = W_{2} = T
$
one denotes
$
T_{W_{1},W_{2}}(x_{1},x_{2}) = T_{2}(x_{1},x_{2})
$
and these axioms guarantee that the scattering matrix
\begin{equation}
S(g) = {\bf I} + \int_{\mathbb{R}^{4}} dx~g(x)~T(x) +
\int_{\mathbb{R}^{8}} dx_{1}~dx_{2}~g(x_{1})~g(x_{2})~T_{2}(x_{1},x_{2}) + \cdots
\end{equation}
is Poincar\'e covariant, unitary and causal. In general we require:
\begin{itemize}
\item
Skew-symmetry:
\begin{equation}
T_{W_{1},W_{2}}(x_{1},x_{2}) = (-1)^{f_{1} f_{2}}~T_{W_{2},W_{1}}(x_{2},x_{1})
\end{equation}
where
$f_{i}$
is the number of Fermi fields appearing in the Wick monomial
$W_{i}$.
\item
Poincar\'e invariance: we have a natural action of the Poincar\'e group in the space of Wick monomials and we impose that for all
$(a,A) \in inSL(2,\mathbb{C})$
we have:
\begin{equation}
U_{a, A} T_{W_{1},W_{2}}(x_{1},x_{2}) U^{-1}_{a, A} =
T_{A\cdot W_{1},A\cdot W_{2}}(A\cdot x_{1}+a,A\cdot x_{2}+a);
\label{invariance}
\end{equation}
Sometimes it is possible to supplement this axiom by other invariance properties: space and/or time inversion, charge conjugation invariance, global symmetry invariance with respect to some internal symmetry group, supersymmetry, etc.
\item
Causality: if
$
x_{1}
$
succeeds causally
$
x_{2}
$
(which one denotes
$
x_{1} \geq x_{2})
$
then we have:
\begin{equation}
T_{W_{1},W_{2}}(x_{1},x_{2}) = T_{W_{1}}(x_{1})~~T_{W_{2}}(x_{2}) = W_{1}(x_{1})~W_{2}(x_{2});
\label{causality}
\end{equation}
\item
Unitarity:
\begin{equation}
T_{W^{\dagger}_{1},W^{\dagger}_{2}}(x_{1},x_{2})^{\dagger} =
- T_{W_{1},W_{2}}(x_{1},x_{2}) + W_{1}(x_{1})~W_{2}(x_{2}) + W_{2}(x_{2})~W_{1}(x_{1}).
\label{unitarity}
\end{equation}
\end{itemize}
It can be proved that this system of axioms can be supplemented with
\begin{equation}
T_{W_{1},W_{2}}(x_{1},x_{2}) = \sum \epsilon \quad
<\Omega, T_{W^{\prime}_{1},W^{\prime}_{2}}(x_{1},x_{2})\Omega>
~:W_{1}^{\prime\prime}(x_{1})~W_{2}^{\prime\prime}(x_{2}):
\label{wick-chrono2}
\end{equation}
where
$W^{\prime}_{i}$
and
$W^{\prime\prime}_{i}$
are Wick submonomials of
$W_{i}$
such that
$W_{i} = :W^{\prime}_{i} W^{\prime\prime}_{i}:$
and the sign
$\epsilon$
takes care of the permutation of the Fermi fields. This is called the {\it Wick expansion property}.
We can also include in the induction hypothesis a limitation on the order of singularity of the vacuum averages of the chronological products associated to arbitrary Wick monomials
$W_{1},W_{2}$;
explicitly:
\begin{equation}
\omega(<\Omega, T_{W_{1},W_{2}}(x_{1},x_{2})\Omega>) \leq \omega(W_{1}) + \omega(W_{2}) - 4
\label{power}
\end{equation}
where by
$\omega(d)$
we mean the order of singularity of the (numerical) distribution $d$ and by
$\omega(W)$
we mean the canonical dimension of the Wick monomial $W$; in particular this means
that we have
\begin{equation}
T_{W_{1},W_{2}}(x_{1},x_{2}) = \sum_{g} t_{g}(x_{1} - x_{2})~W_{g}(x_{1},x_{2})
\label{generic}
\end{equation}
where
$W_{g}$
are Wick polynomials of fixed canonical dimension,
$t_{g}$
are distributions in one variable with the order of singularity bounded by the power counting
theorem \cite{EG}:
\begin{equation}
\omega(t_{g}) + \omega(W_{g}) \leq \omega(W_{1}) + \omega(W_{2}) - 4
\label{power1}
\end{equation}
and the sum over $g$ is essentially a sum over Feynman graphs. We indicate briefly the simplest way to obtain the chronological products \cite{EG}. We compute the commutator
$
[W_{1}(x_{1}), W_{2}(x_{2})]
$
and we first consider only the contributions coming from tree graphs; we end up with an expression of the form
\begin{equation}
[W_{1}(x_{1}), W_{2}(x_{2})]_{\rm tree} = \sum {\partial \over \partial x^{\mu_{1}}}\cdots
{\partial \over \partial x^{\mu_{k}}}D_{m_{j}}(x_{1} - x_{2})~
W^{\mu_{1}\dots\mu_{k}}_{j}(x_{1},x_{2}) + \cdots
\label{commutator}
\end{equation}
where
$
D_{m}(x_{1} - x_{2})
$
is the Pauli-Villars causal distribution of mass $m$,
$
W^{\mu_{1}\dots\mu_{k}}_{j}(x_{1},x_{2})
$
are Wick polynomials and by
$
\cdots
$
we mean similar terms for the Fermi sector where instead of
$
D_{m}(x_{1} - x_{2})
$
we have the corresponding causal function
$
S_{m}(x_{1} - x_{2})
$
(see \cite{Sc2} for the explicit expressions). Now one defines
\begin{equation}
T_{W_{1},W_{2}}(x_{1}, x_{2})_{\rm tree} = \sum {\partial \over \partial x^{\mu_{1}}} \cdots
{\partial \over \partial x^{\mu_{k}}}D^{F}_{m_{j}}(x_{1} - x_{2})~
W^{\mu_{1}\dots\mu_{k}}_{j}(x_{1},x_{2}) + \cdots
\label{feynman}
\end{equation}
obtained from the previous one by replacing the causal distributions by the corresponding Feynman propagators. A similar procedure also works for loop graphs; only the step of obtaining the Feynman propagators from the corresponding causal distributions is more complicated (however, as for the tree contributions, it is based on a standard procedure of distribution splitting). The resulting chronological products do verify all the axioms.
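We recall for orientation the structure of the splitting in the tree sector (a sketch; the precise conventions are those of \cite{Sc2} and may differ by signs): the causal distribution decomposes into retarded and advanced parts with the appropriate support properties, and the Feynman propagator is obtained from the retarded part:
\begin{equation}
D_{m} = D^{\rm ret}_{m} - D^{\rm adv}_{m}, \qquad
D^{F}_{m} = D^{\rm ret}_{m} - D^{(-)}_{m}
\end{equation}
where
$
D^{(\pm)}_{m}
$
are the positive (negative) frequency parts of
$
D_{m}.
$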
Up to now, we have defined the chronological products only for Wick monomials
$
W_{1}, W_{2}
$
but we can extend the definition for Wick polynomials by linearity.
One can modify the chronological products without destroying the basic property of causality {\it iff} one makes the redefinition
\begin{equation}
T_{W_{1},W_{2}}(x_{1},x_{2}) \rightarrow T_{W_{1},W_{2}}(x_{1},x_{2})
+ R_{W_{1},W_{2}}(x_{1},x_{2})
\label{renorm}
\end{equation}
where $R$ are quasi-local expressions; by a {\it quasi-local expression} we mean in this case an expression of the form
\begin{equation}
R_{W_{1},W_{2}}(x_{1},x_{2}) = \sum_{g} \left[ P_{g}(\partial)\delta(x_{1} - x_{2})\right]
W_{g}(x_{1},x_{2})
\label{renorm1}
\end{equation}
with
$P_{g}$
monomials in the partial derivatives and
$W_{g}$
are Wick polynomials. Because of the delta function we can consider that
$P_{g}$
is a monomial only in the derivatives with respect to, say
$
x_{2}.
$
If we want to preserve (\ref{power}) we impose the restriction
\begin{equation}
deg(P_{g}) + \omega(W_{g}) \leq \omega(W_{1}) + \omega(W_{2}) - 4
\label{power2}
\end{equation}
and some other restrictions follow from the preservation of Lorentz covariance and unitarity.
The redefinitions of the type (\ref{renorm}) are the so-called {\it finite renormalizations}. Let us note that in higher orders of perturbation theory this arbitrariness, described by the number of independent coefficients of the polynomials
$
P_{g}
$
can grow with $n$ and in this case the theory is called {\it non-renormalizable}. This can happen if some of the Wick monomials
$
W_{j}, j = 1,\dots,n
$
have canonical dimension greater than $4$. This seems to be the case for quantum gravity.
It is not hard to prove that any finite renormalization can be rewritten in the form
\begin{equation}
R(x_{1},x_{2}) =
\delta(x_{1} - x_{2})~W(x_{1})
+ {\partial \over \partial x^{\mu}_{2}}\delta(x_{1} - x_{2})~W^{\mu}(x_{1})
\label{renorm2}
\end{equation}
where the expressions
$
W, W^{\mu}
$
are Wick polynomials. But it is clear that the second term in the above expression is null in the adiabatic limit, so we can postulate that such finite renormalizations are {\it trivial}. This means that we can admit that the finite renormalizations have a much simpler form, namely
\begin{equation}
R(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W(x_{1})
\label{renorm3}
\end{equation}
where the Wick polynomial $W$ is constrained by
\begin{equation}
\omega(W) \leq \omega(W_{1}) + \omega(W_{2}) - 4.
\label{power3}
\end{equation}
\subsection{Gauge Theories and Anomalies\label{anomalies}}
From now on we consider that we work in the four-dimensional Minkowski space and we have the Wick polynomials
$
T^{I}
$
such that the descent equations (\ref{descent1}) are true and we also have
\begin{equation}
T^{I}(x_{1})~T^{J}(x_{2}) = (-1)^{|I||J|}~T^{J}(x_{2})~T^{I}(x_{1}),~~
\forall~x_{1} \sim x_{2}
\label{graded-comm}
\end{equation}
i.e. for
$
x_{1} - x_{2}
$
space-like these expressions causally commute in the graded sense.
The equations (\ref{descent1}) are called a {\it relative cohomology} problem. The co-boundaries for this problem are of the type
\begin{equation}
T^{I} = d_{Q}B^{I} + i~\partial_{\mu}B^{I\mu}.
\label{coboundary}
\end{equation}
In the second order of perturbation theory we construct the associated chronological products
$$
T^{I_{1},I_{2}}(x_{1},x_{2}) = T_{T^{I_{1}},T^{I_{2}}}(x_{1},x_{2}).
$$
We will impose the graded symmetry property:
\begin{equation}
T^{I_{1},I_{2}}(x_{1},x_{2}) = (-1)^{|I_{1}| |I_{2}|}~T^{I_{2},I_{1}}(x_{2},x_{1}).
\label{symmetryT}
\end{equation}
We also have
\begin{equation}
gh(T^{I_{1},I_{2}}) = |I_{1}| + |I_{2}|.
\label{ghT}
\end{equation}
In the case of a gauge theory the set of {\it trivial} finite renormalizations is larger; we can also include co-boundaries because they induce the null operator on the physical space:
\begin{equation}
R^{I_{1},I_{2}}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~d_{Q}B^{I_{1},I_{2}}(x_{1})
+ i~{\partial \over \partial x^{\mu}_{2}}\delta(x_{1} - x_{2})~B^{I_{1},I_{2};\mu}(x_{1}).
\label{renorm4}
\end{equation}
One can write the gauge invariance condition (\ref{gauge}) in a more compact form \cite{cohomology} but for
$
n = 2
$
this will not be necessary.
We now determine the obstructions for the gauge invariance relations (\ref{gauge}). These relations are true for $n = 1$ according to (\ref{descent1}). Then one can prove that in order $n = 2$ we must have:
\begin{equation}
d_{Q}T^{I_{1},I_{2}} = i~{\partial\over \partial x^{\mu}_{1}}T^{I_{1}\mu,I_{2}}
+ i~(-1)^{|I_{1}|} {\partial\over \partial x^{\mu}_{2}}T^{I_{1},I_{2}\mu}
+ A^{I_{1},I_{2}}(x_{1},x_{2})
\label{gauge2}
\end{equation}
where the expressions
$
A^{I_{1},I_{2}}(x_{1},x_{2})
$
are quasi-local operators:
\begin{eqnarray}
A^{I_{1},I_{2}}(x_{1},x_{2})
= \sum_{k}~\left[ {\partial \over \partial x_{2}^{\rho_{1}}} \dots
{\partial\over \partial x_{2}^{\rho_{k}}} \delta(x_{2} - x_{1}) \right]~W^{I_{1},I_{2};\{\rho_{1},\dots,\rho_{k}\}}(x_{1})
\label{genericA}
\end{eqnarray}
and are called {\it anomalies}. In this expression the Wick polynomials
$
W^{I_{1},I_{2};\{\rho_{1},\dots,\rho_{k}\}}
$
are uniquely defined. From (\ref{power1}) we have
\begin{equation}
\omega(W^{I_{1},I_{2};\{\rho_{1},\dots,\rho_{k}\}}) \leq 2~\omega_{0} - 3 - k
\label{power4}
\end{equation}
where we recall that
$
\omega_{0} \equiv \omega(T);
$
this gives a bound on $k$ in the previous sum. From (\ref{symmetryT}) we obtain a similar symmetry property for the anomalies, namely:
\begin{equation}
A^{I_{1},I_{2}}(x_{1},x_{2}) = (-1)^{|I_{1}| |I_{2}|}~A^{I_{2},I_{1}}(x_{2},x_{1})
\label{symmetryA}
\end{equation}
and we also have
\begin{equation}
gh(A^{I_{1},I_{2}}) = |I_{1}| + |I_{2}| + 1
\label{ghA}
\end{equation}
and
\begin{equation}
A^{I_{1},I_{2}} = 0
\quad {\rm for} \quad |I_{1}| + |I_{2}| > 2~\omega_{0} - 4.
\label{limit}
\end{equation}
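The relation (\ref{limit}) follows from a simple counting (a heuristic argument, assuming that every ghost field has canonical dimension at least one, so the ghost number of a non-trivial Wick polynomial cannot exceed its canonical dimension): from (\ref{ghA}) and the bound (\ref{power4}) with $k = 0$ we get
\begin{equation}
|I_{1}| + |I_{2}| + 1 = gh(A^{I_{1},I_{2}}) \leq \omega(W^{I_{1},I_{2};\emptyset})
\leq 2~\omega_{0} - 3
\end{equation}
i.e. the anomaly must vanish for
$
|I_{1}| + |I_{2}| > 2~\omega_{0} - 4;
$
the terms with $k > 0$ are constrained even more strongly.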
We also have some consistency conditions verified by the anomalies. If one applies the operator
$d_{Q}$
to (\ref{gauge2}) one obtains the so-called {\it Wess-Zumino consistency conditions} for the case
$
n = 2
$:
\begin{equation}
d_{Q}A^{I_{1},I_{2}} = - i~{\partial\over \partial x^{\mu}_{1}}A^{I_{1}\mu,I_{2}}
- i~(-1)^{|I_{1}|} {\partial\over \partial x^{\mu}_{2}}A^{I_{1},I_{2}\mu}.
\label{wz}
\end{equation}
Let us note that we can suppose, as for the finite renormalizations (see (\ref{renorm3})), that all anomalies which are total divergences are trivial, because they spoil gauge invariance only by terms which can be made as small as one wishes (in the adiabatic limit); i.e. we can take the form:
\begin{equation}
A^{I_{1},I_{2}}(x_{1},x_{2}) =\delta(x_{1} - x_{2})~W^{I_{1},I_{2}}(x_{1}).
\label{ano}
\end{equation}
In the case of quantum gravity it is not necessary to postulate this relation: one can prove it if one makes convenient finite renormalizations! For Yang-Mills models one can prove even more: relations of this type can be implemented in an arbitrary order of perturbation theory.
Suppose now that we have fixed the gauge invariance (\ref{gauge}) (for $n = 2$) and we investigate the renormalizability issue, i.e. we make the redefinitions
\begin{equation}
T^{I_{1},I_{2}} \rightarrow T^{I_{1},I_{2}} + R^{I_{1},I_{2}}
\label{renorm5}
\end{equation}
where $R$ are quasi-local expressions. As before we have
\begin{equation}
R^{I_{1},I_{2}}(x_{1},x_{2}) = (-1)^{|I_{1}| |I_{2}|}~R^{I_{2},I_{1}}(x_{2},x_{1}).
\label{symmetryR}
\end{equation}
We also have
\begin{equation}
gh(R^{I_{1},I_{2}}) = |I_{1}| + |I_{2}|
\label{ghR}
\end{equation}
and
\begin{equation}
R^{I_{1},I_{2}} = 0~~{\rm for}~~
|I_{1}| + |I_{2}| > 2~\omega_{0} - 4.
\label{limit1}
\end{equation}
If we want to preserve (\ref{gauge}) it is clear that the quasi-local operators
$
R^{I_{1},I_{2}}
$
should also verify
\begin{equation}
d_{Q}R^{I_{1},I_{2}} = i~{\partial\over \partial x^{\mu}_{1}}R^{I_{1}\mu,I_{2}}
+ i~(-1)^{|I_{1}|} {\partial\over \partial x^{\mu}_{2}}R^{I_{1},I_{2}\mu}
\label{wz3}
\end{equation}
i.e. equations of the type (\ref{wz}). In this case we note that we have more structure;
according to the previous discussion we can impose the structure (\ref{renorm3}):
\begin{equation}
R^{I_{1},I_{2}}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W^{I_{1},I_{2}}(x_{1})
\end{equation}
and we obviously have:
\begin{equation}
gh(W^{I_{1},I_{2}}) = |I_{1}| + |I_{2}|
\label{ghR1}
\end{equation}
and
\begin{equation}
W^{I_{1},I_{2}} = 0~~{\rm for}~~
|I_{1}| + |I_{2}| > 2~\omega_{0} -4.
\label{limit2}
\end{equation}
From (\ref{wz3}) we obtain after some computations that there are Wick polynomials
$
R^{I}
$
such that
\begin{equation}
W^{I_{1},I_{2}} = (-1)^{|I_{1}| |I_{2}|}~R^{I_{1} \cup I_{2}}.
\label{gauge6}
\end{equation}
Moreover, we have
\begin{equation}
gh(R^{I}) = |I|
\label{ghR2}
\end{equation}
and
\begin{equation}
R^{I} = 0~~{\rm for}~~
|I| > 2~\omega_{0} -4.
\label{limit3}
\end{equation}
Finally, the following descent equations are true:
\begin{equation}
d_{Q}R^{I} = i~\partial_{\mu}R^{I\mu}
\label{gauge5}
\end{equation}
and we have obtained another relative cohomology problem similar to the one from the Introduction.
\section{Wess-Zumino Consistency Conditions \label{WZ}}
In this Section we consider a particular form of (\ref{gauge2}) and (\ref{wz}), namely the case when all polynomials
$
T^{I}
$
have canonical dimension
$
\omega_{0} = 5
$
and
$
T^{\mu\nu\rho\sigma} = 0.
$
In this case (\ref{limit}) becomes:
\begin{equation}
A^{I_{1},I_{2}}(x_{1},x_{2}) = 0
\quad {\rm for} \quad |I_{1}| + |I_{2}| > 6.
\label{limit4}
\end{equation}
It is convenient to define
\begin{eqnarray}
A_{1} \equiv A^{\emptyset,\emptyset},~
A_{2}^{\mu} \equiv A^{[\mu],\emptyset},~
A_{3}^{[\mu\nu]} \equiv A^{[\mu\nu],\emptyset},
A_{4}^{\mu;\nu} \equiv A^{[\mu],[\nu]},
\nonumber \\
A_{5}^{[\mu\nu];\rho} \equiv A^{[\mu\nu],[\rho]},~
A_{6}^{[\mu\nu];[\rho\sigma]} \equiv A^{[\mu\nu],[\rho\sigma]},~
A_{7}^{[\mu\nu\rho]} \equiv A^{[\mu\nu\rho],\emptyset},
\nonumber \\
A_{8}^{[\mu\nu\rho];\sigma} \equiv A^{[\mu\nu\rho],[\sigma]},~
A_{9}^{[\mu\nu\rho];[\sigma\lambda]} \equiv A^{[\mu\nu\rho],[\sigma\lambda]},~
A_{10}^{[\mu\nu\rho];[\sigma\lambda\omega]} \equiv A^{[\mu\nu\rho],[\sigma\lambda\omega]}
\end{eqnarray}
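This is indeed a complete list: because
$
T^{\mu\nu\rho\sigma} = 0
$
we have
$
|I_{j}| \leq 3,
$
and there are exactly ten pairs
$
(|I_{1}|, |I_{2}|)
$
modulo the symmetry (\ref{symmetryA}):
\begin{equation}
(0,0),~(1,0),~(2,0),~(1,1),~(2,1),~(2,2),~(3,0),~(3,1),~(3,2),~(3,3)
\end{equation}
corresponding to
$
A_{1},\dots,A_{10}
$
in this order.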
where we have emphasized the antisymmetry properties with brackets. We have from (\ref{gauge2}) the following anomalous gauge equations:
\begin{eqnarray}
d_{Q} T(T(x_{1}),T(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\mu}_{1}}T(T^{\mu}(x_{1}),T(x_{2}))
+ i {\partial\over \partial x^{\mu}_{2}}T(T(x_{1}),T^{\mu}(x_{2}))
+ A_{1}(x_{1},x_{2})
\label{g1}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu}(x_{1}),T(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\nu}_{1}}T(T^{\mu\nu}(x_{1}),T(x_{2}))
-i {\partial\over \partial x^{\nu}_{2}} T(T^{\mu}(x_{1}),T^{\nu}(x_{2}))
+ A^{\mu}_{2}(x_{1},x_{2})
\label{g2}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu}(x_{1}),T(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\rho}_{1}}T(T^{\mu\nu\rho}(x_{1}),T(x_{2}))
+ i {\partial\over \partial x^{\rho}_{2}}
T(T^{\mu\nu}(x_{1}),T^{\rho}(x_{2}))
+ A^{[\mu\nu]}_{3}(x_{1},x_{2})
\label{g3}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu}(x_{1}),T^{\nu}(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\rho}_{1}}T(T^{\mu\rho}(x_{1}),T^{\nu}(x_{2}))
- i {\partial\over \partial x^{\rho}_{2}}T(T^{\mu}(x_{1}),T^{\nu\rho}(x_{2}))
+ A^{\mu;\nu}_{4}(x_{1},x_{2})
\label{g4}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu}(x_{1}),T^{\rho}(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\sigma}_{1}}T(T^{\mu\nu\sigma}(x_{1}),T^{\rho}(x_{2}))
+ i {\partial\over \partial x^{\sigma}_{2}}T(T^{\mu\nu}(x_{1}),T^{\rho\sigma}(x_{2}))
+ A^{[\mu\nu];\rho}_{5}(x_{1},x_{2})
\label{g5}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu}(x_{1}),T^{\rho\sigma}(x_{2})) =
\nonumber \\
i {\partial\over \partial x^{\lambda}_{1}}T(T^{\mu\nu\lambda}(x_{1}),T^{\rho\sigma}(x_{2}))
+ i {\partial\over \partial x^{\lambda}_{2}}T(T^{\mu\nu}(x_{1}),T^{\rho\sigma\lambda}(x_{2}))
+ A^{[\mu\nu];[\rho\sigma]}_{6}(x_{1},x_{2})
\label{g6}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu\rho}(x_{1}),T(x_{2})) =
- i {\partial\over \partial x^{\sigma}_{2}}T(T^{\mu\nu\rho}(x_{1}),T^{\sigma}(x_{2}))
+ A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
\label{g7}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu\rho}(x_{1}),T^{\sigma}(x_{2})) =
- i {\partial\over \partial x^{\lambda}_{2}}T(T^{\mu\nu\rho}(x_{1}),T^{\sigma\lambda}(x_{2}))
+ A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2})
\label{g8}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu\rho}(x_{1}),T^{\sigma\lambda}(x_{2})) =
- i {\partial\over \partial x^{\omega}_{2}}
T(T^{\mu\nu\rho}(x_{1}),T^{\sigma\lambda\omega}(x_{2}))
+ A^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1},x_{2})
\label{g9}
\end{eqnarray}
\begin{eqnarray}
d_{Q} T(T^{\mu\nu\rho}(x_{1}),T^{\sigma\lambda\omega}(x_{2})) =
A^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1},x_{2}).
\label{g10}
\end{eqnarray}
From (\ref{symmetryA}) we get the following symmetry properties:
\begin{equation}
A_{1}(x_{1},x_{2}) = A_{1}(x_{2},x_{1})
\label{sA1}
\end{equation}
and we also have:
\begin{equation}
A_{4}^{\mu;\nu}(x_{1},x_{2}) = - A_{4}^{\nu;\mu}(x_{2},x_{1}),
\label{s4'}
\end{equation}
\begin{equation}
A_{6}^{[\mu\nu];[\rho\sigma]}(x_{1},x_{2}) =
A_{6}^{[\rho\sigma];[\mu\nu]}(x_{2},x_{1}),
\label{s6'}
\end{equation}
and
\begin{equation}
A_{10}^{[\mu\nu\rho];[\sigma\lambda\omega]}(x_{1},x_{2})
= - A_{10}^{[\sigma\lambda\omega];[\mu\nu\rho]}(x_{2},x_{1}).
\label{s7'}
\end{equation}
The Wess-Zumino consistency conditions are in this case:
\begin{equation}
d_{Q} A_{1}(x_{1},x_{2})
= - i {\partial\over \partial x^{\mu}_{1}}A^{\mu}_{2}(x_{1},x_{2})
- i {\partial\over \partial x^{\mu}_{2}}A^{\mu}_{2}(x_{2},x_{1})
\label{WZ1}
\end{equation}
\begin{equation}
d_{Q} A^{\mu}_{2}(x_{1},x_{2})
= - i {\partial\over \partial x^{\nu}_{1}}A^{[\mu\nu]}_{3}(x_{1},x_{2})
+ i {\partial\over \partial x^{\nu}_{2}}A^{\mu;\nu}_{4}(x_{1},x_{2})
\label{WZ2}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu]}_{3}(x_{1},x_{2})
= - i {\partial\over \partial x^{\rho}_{1}}A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
- i {\partial\over \partial x^{\rho}_{2}}A^{[\mu\nu];\rho}_{5}(x_{1},x_{2})
\label{WZ3}
\end{equation}
\begin{equation}
d_{Q} A^{\mu;\nu}_{4}(x_{1},x_{2})
= - i {\partial\over \partial x^{\rho}_{1}}A^{[\mu\rho];\nu}_{5}(x_{1},x_{2})
+ i {\partial\over \partial x^{\rho}_{2}}A^{[\nu\rho];\mu}_{5}(x_{2},x_{1})
\label{WZ4}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu];\rho}_{5}(x_{1},x_{2})
= - i {\partial\over \partial x^{\sigma}_{1}}A^{[\mu\nu\sigma];\rho}_{8}(x_{1},x_{2})
- i {\partial\over \partial x^{\sigma}_{2}}A^{[\mu\nu];[\rho\sigma]}_{6}(x_{1},x_{2})
\label{WZ5}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu];[\rho\sigma]}_{6}(x_{1},x_{2}) =
- i {\partial\over \partial x^{\lambda}_{1}}A^{[\mu\nu\lambda];[\rho\sigma]}_{9}(x_{1},x_{2})
- i
{\partial\over \partial x^{\lambda}_{2}}A^{[\rho\sigma\lambda];[\mu\nu]}_{9}(x_{2},x_{1});
\label{WZ6}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
= i {\partial\over \partial x^{\sigma}_{2}}A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2});
\label{WZ7}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2})
= i
{\partial\over \partial x^{\lambda}_{2}}A^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1},x_{2});
\label{WZ8}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1},x_{2})
= i {\partial\over \partial x^{\omega}_{2}}A^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1},x_{2});
\label{WZ9}
\end{equation}
\begin{equation}
d_{Q} A^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1},x_{2}) = 0.
\label{WZ10}
\end{equation}
We suppose from now on that we work in a $4$-dimensional Minkowski space-time and we have the following result:
\begin{thm}
One can redefine the chronological products such that
\begin{eqnarray}
A_{1}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W(x_{1}), \qquad
A^{\mu}_{2}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W^{\mu}(x_{1})
\nonumber \\
A^{[\mu\nu]}_{3}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W^{[\mu\nu]}(x_{1}), \qquad
A^{\mu;\nu}_{4}(x_{1},x_{2}) = - \delta(x_{1} - x_{2})~W^{[\mu\nu]}(x_{1}),
\nonumber \\
A^{[\mu\nu];\rho}_{5}(x_{1},x_{2}) = \delta(x_{1} - x_{2})~W^{[\mu\nu\rho]}(x_{1}), \qquad
A^{[\mu\nu\rho]}_{7}(x_{1},x_{2}) = - \delta(x_{1} - x_{2})~W^{[\mu\nu\rho]}(x_{1})
\end{eqnarray}
and
$
A_{j} = 0,~j = 6, 8, 9, 10.
$
Moreover one has the following descent equations:
\begin{equation}
d_{Q}W = - i~\partial_{\mu}W^{\mu},\qquad
d_{Q}W^{\mu} = i~\partial_{\nu}W^{[\mu\nu]},\qquad
d_{Q}W^{[\mu\nu]} = - i\partial_{\rho}W^{[\mu\nu\rho]},\qquad
d_{Q}W^{[\mu\nu\rho]} = 0.
\label{W}
\end{equation}
The expressions
$
W,~W^{\mu}
$
and
$
W^{[\mu\nu]}
$
are relative co-cycles and are determined up to relative co-boundaries. The expression
$
W^{[\mu\nu\rho]}
$
is a co-cycle and it is determined up to a co-boundary.
\label{ano-2}
\end{thm}
{\bf Proof:} The symmetry properties and the Wess-Zumino consistency conditions will be enough to obtain the result from the statement. We will rely on some computations done in \cite{cohomology}. We will use (\ref{genericA}) together with the restriction (\ref{power4}). Because we also have
\begin{equation}
gh(W^{I_{1},I_{2};\{\rho_{1},\dots,\rho_{k}\}}) = |I_{1}| + |I_{2}| + 1
\label{gh2}
\end{equation}
the sum goes in fact up to
$
k = 6.
$
If we get rid of the top terms (i.e. corresponding to
$
k = 5, 6
$)
from the preceding sum then we are, at least for
$
|I_{1}|, |I_{2}| \leq 2
$,
in the case studied in \cite{cohomology}.
We divide the proof in a number of steps.
(i) From (\ref{genericA}) we have:
\begin{equation}
A_{1}(x_{1},x_{2})
= \sum_{k \leq 6}
\partial_{\mu_{1}} \dots \partial_{\mu_{k}} \delta(x_{2} - x_{1})
W^{\{\mu_{1},\dots,\mu_{k}\}}_{1}(x_{1})
\label{A1-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{\{\mu_{1},\dots,\mu_{k}\}}_{1}) \leq 7 - k, \qquad
gh(W^{\{\mu_{1},\dots,\mu_{k}\}}_{1}) = 1
\end{equation}
for all
$
k = 0, \dots, 6.
$
We perform the finite renormalization:
\begin{equation}
T(T^{\mu_{1}}(x_{1}),T(x_{2})) \rightarrow T(T^{\mu_{1}}(x_{1}),T(x_{2}))
+ \partial_{\mu_{2}}\cdots\partial_{\mu_{6}}~\delta(x_{2} - x_{1})
U^{\mu_{1};\{\mu_{2},\dots,\mu_{6}\}}_{2}(x_{1})
\label{R2}
\end{equation}
and it is easy to see that if we choose
$
U^{\mu_{1};\{\mu_{2},\dots,\mu_{6}\}}_{2} =
- {i\over 2}~W^{\{\mu_{1},\dots,\mu_{6}\}}_{1}
$
then we obtain a new expression (\ref{A1-2}) for the anomaly
$
A_{1}
$
where the sum goes only up to
$
k = 5.
$
(Although the monomials
$
W^{\{\mu_{1},\dots,\mu_{k}\}}_{1}
$
will be changed after this finite renormalization we keep the same notation.) Now we impose the symmetry property (\ref{sA1}) and consider only the terms with five derivatives on
$\delta$;
it easily follows that
$
W^{\{\mu_{1},\dots,\mu_{5}\}}_{1} = 0
$
i.e. in the expression (\ref{A1-2}) for the anomaly
$
A_{1}
$
the sum goes only up to
$
k = 4.
$
Now we have the expression (3.53) from \cite{cohomology} and we can perform the succession of finite renormalizations from there. In the end the expression (\ref{A1-2}) will have the form from the statement.
(ii) From (\ref{genericA}) we have:
\begin{equation}
A^{\mu}_{2}(x_{1},x_{2})
= \sum_{k \leq 5}
\partial_{\rho_{1}} \dots \partial_{\rho_{k}} \delta(x_{2} - x_{1})
W^{\mu;\{\rho_{1},\dots,\rho_{k}\}}_{2}(x_{1})
\label{A2-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{\mu;\{\rho_{1},\dots,\rho_{k}\}}_{2}) \leq 7 - k, \quad
gh(W^{\mu;\{\rho_{1},\dots,\rho_{k}\}}_{2}) = 2
\end{equation}
for all
$
k = 0, \dots, 5.
$
We use the Wess-Zumino consistency condition (\ref{WZ1}); if we consider only the terms with six derivatives on
$\delta$
we obtain that the completely symmetric part of
$
W^{\mu_{1};\mu_{2},\dots,\mu_{6}}_{2}
$
is null:
$
W^{\{\mu_{1};\mu_{2},\dots,\mu_{6}\}}_{2} = 0.
$
In this case it is easy to prove that one can write
$
W^{\mu_{1};\mu_{2},\dots,\mu_{6}}_{2}
$
in the following form:
\begin{equation}
W^{\mu_{1};\mu_{2},\dots,\mu_{6}}_{2} =
{1 \over 5}~\sum_{j=2}^{6}~
\tilde{W}^{[\mu_{1}\mu_{j}];\{\mu_{2},\dots\hat{\mu_{j}},\dots,\mu_{6}\}}_{2}
\end{equation}
with
\begin{equation}
\tilde{W}^{[\mu_{1}\mu_{2}];\{\mu_{3},\dots,\mu_{6}\}}_{2} \equiv
{5 \over 6}~W^{\mu_{1};\mu_{2},\dots,\mu_{6}}_{2} - (\mu_{1} \leftrightarrow \mu_{2}).
\end{equation}
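In checking such decompositions one uses the identity which follows from the vanishing of the completely symmetric part
$
W^{\{\mu_{1};\mu_{2},\dots,\mu_{6}\}}_{2} = 0
$
(a sketch; it is valid after contraction with tensors symmetric in
$
\mu_{2},\dots,\mu_{6},
$
e.g. the derivatives of the delta distribution):
\begin{equation}
\sum_{j=2}^{6} W^{\mu_{j};\mu_{1},\dots\hat{\mu_{j}},\dots,\mu_{6}}_{2}
= - W^{\mu_{1};\mu_{2},\dots,\mu_{6}}_{2}.
\end{equation}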
We perform the finite renormalization
\begin{equation}
T(T^{[\mu_{1}\mu_{2}]}(x_{1}),T(x_{2})) \rightarrow T(T^{[\mu_{1}\mu_{2}]}(x_{1}),T(x_{2}))
+ \partial_{\mu_{3}}\cdots\partial_{\mu_{6}}\delta(x_{2} - x_{1})
~U_{3}^{[\mu_{1}\mu_{2}];\{\mu_{3},\dots,\mu_{6}\}}(x_{1})
\label{R3a}
\end{equation}
with
$
U_{3}^{[\mu_{1}\mu_{2}];\{\mu_{3},\dots,\mu_{6}\}} = - i~\tilde{W}^{[\mu_{1}\mu_{2}];\{\mu_{3},\dots,\mu_{6}\}}_{2}
$
and we eliminate the contributions corresponding to
$k = 5$
from (\ref{A2-2}). We use again the Wess-Zumino consistency condition (\ref{WZ1}); if we consider only the terms with five derivatives on
$\delta$
we obtain that the completely symmetric part of
$
W^{\mu_{1};\mu_{2},\dots,\mu_{5}}_{2}
$
is null
$
W^{\{\mu_{1};\mu_{2},\dots,\mu_{5}\}}_{2} = 0
$
so we can write:
\begin{equation}
W_{2}^{\mu_{1};\mu_{2},\dots,\mu_{5}} =
{1\over 4}~\sum_{j=2}^{5}~
\tilde{W}^{[\mu_{1}\mu_{j}];\{\mu_{2},\dots\hat{\mu_{j}},\dots,\mu_{5}\}}_{2}
\end{equation}
with
\begin{equation}
\tilde{W}_{2}^{[\mu_{1}\mu_{2}];\{\mu_{3},\mu_{4},\mu_{5}\}} =
{4\over 5}~W_{2}^{\mu_{1};\mu_{2},\dots,\mu_{5}}
- (\mu_{1} \leftrightarrow \mu_{2}).
\end{equation}
Now we consider the finite renormalization
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T(x_{2}))
+ \partial_{\rho_{1}}\partial_{\rho_{2}}\partial_{\rho_{3}}
\delta(x_{2} - x_{1})~U_{3}^{[\mu\nu];\{\rho_{1}\rho_{2}\rho_{3}\}}(x_{1})
\label{R3b-2}
\end{equation}
with
$
U_{3}^{[\mu\nu];\rho_{1}\rho_{2}\rho_{3}} = i~\tilde{W}_{2}^{[\mu\nu];\rho_{1}\rho_{2}\rho_{3}}
$
and we get a new expression (\ref{A2-2}) for which
$
W_{2}^{\mu_{1};\{\mu_{2},\dots,\mu_{5}\}} = 0,
$
i.e. the summation in (\ref{A2-2}) goes only up to
$
k = 4.
$
As a result we have the expression (3.57) from \cite{cohomology} and we can perform the succession of finite renormalizations from there. In the end the expression (\ref{A2-2}) will have the form from the statement.
It is easy to prove that the Wess-Zumino equation (\ref{WZ1}) is now equivalent to:
\begin{equation}
d_{Q}W_{1} = - i~\partial_{\mu}W^{\mu}_{2}.
\label{WZ1'}
\end{equation}
(iii) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu]}_{3}(x_{1},x_{2})
= \sum_{k \leq 4}
\partial_{\rho_{1}} \dots \partial_{\rho_{k}} \delta(x_{2} - x_{1})
W^{[\mu\nu];\{\rho_{1},\dots,\rho_{k}\}}_{3}(x_{1})
\label{A3-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu];\{\rho_{1},\dots,\rho_{k}\}}_{3}) \leq 7 - k,\quad
gh(W^{[\mu\nu];\{\rho_{1},\dots,\rho_{k}\}}_{3}) = 3
\end{equation}
for all
$
k = 0,\dots,4.
$
We perform the finite renormalization
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2}))
+ \partial_{\sigma_{1}}\partial_{\sigma_{2}}\partial_{\sigma_{3}}
\delta(x_{2} - x_{1})~U_{5}^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\sigma_{3}\}}(x_{1})
\label{R5a}
\end{equation}
with
$
U_{5}^{[\mu\nu];\rho_{1};\{\rho_{2}\rho_{3}\rho_{4}\}} =
i~W^{[\mu\nu];\{\rho_{1},\dots,\rho_{4}\}}_{3}
$
and we eliminate the contributions corresponding to
$k = 4$
from (\ref{A3-2}). Now we consider the finite renormalization
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2}))
+ \partial_{\sigma_{1}}\partial_{\sigma_{2}}
\delta(x_{2} - x_{1})~U_{5}^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\}}(x_{1})
\label{R5b}
\end{equation}
with
$
U_{5}^{[\mu\nu];\rho_{1};\{\rho_{2}\rho_{3}\}} =
i~W_{3}^{[\mu\nu];\{\rho_{1}\rho_{2}\rho_{3}\}}
$
and we get a new expression (\ref{A3-2}) with
$
k \leq 2
$.
As a result we have the expression (3.67) from \cite{cohomology} and we can perform the succession of finite renormalizations from there. In the end the expression (\ref{A3-2}) will have the form from the statement.
(iv) From (\ref{genericA}) we have:
\begin{equation}
A^{\mu;\nu}_{4}(x_{1},x_{2})
= \sum_{k \leq 4}
\partial_{\rho_{1}} \dots \partial_{\rho_{k}} \delta(x_{2} - x_{1})
W^{\mu;\nu;\{\rho_{1},\dots,\rho_{k}\}}_{4}(x_{1})
\label{A4-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{\mu;\nu;\{\rho_{1},\dots,\rho_{k}\}}_{4}) \leq 7 - k, \quad
gh(W^{\mu;\nu;\{\rho_{1},\dots,\rho_{k}\}}_{4}) = 3
\end{equation}
for all
$
k = 0,\dots,4.
$
We will have to consider the (anti)symmetry (\ref{s4'}). From the terms with four derivatives on $\delta$ we obtain that
$
W^{\mu;\nu;\{\rho_{1},\dots,\rho_{4}\}}_{4}
$
is antisymmetric in the first two indices, i.e. we can write
$
W^{\mu;\nu;\{\rho_{1},\dots,\rho_{4}\}}_{4} = W^{[\mu\nu];\{\rho_{1},\dots,\rho_{4}\}}_{4}.
$
Next we consider the Wess-Zumino consistency condition (\ref{WZ2}). From the terms with five derivatives on $\delta$ we obtain
\begin{equation}
{\cal S}_{\nu,\rho_{1},\dots,\rho_{4}}~W^{[\mu\nu];\{\rho_{1},\dots,\rho_{4}\}}_{4} = 0
\end{equation}
where
$
{\cal S}
$
denotes symmetrization in the corresponding indices. We note now that in the finite renormalization (\ref{R5a}) we have used only the expression
$
U_{5}^{[\mu\nu];\{\rho_{1};\rho_{2}\rho_{3}\rho_{4}\}}
$
i.e. we still can use
$
U_{5}^{[\mu\nu];\rho_{1};\rho_{2}\rho_{3}\rho_{4}}
$
with
$
U_{5}^{[\mu\nu];\{\rho_{1};\rho_{2}\rho_{3}\rho_{4}\}} = 0.
$
It is not hard to prove (using the preceding relation) that the choice:
$
U_{5}^{[\mu\nu];\rho_{1};\rho_{2}\rho_{3}\rho_{4}} = c~
(W^{\mu;\rho_{1};\{\nu\rho_{2}\rho_{3}\rho_{4}\}}_{4}
+ {1\over 4}~ W^{\mu;\nu;\{\rho_{1},\dots,\rho_{4}\}}_{4})
$
is possible, i.e. it verifies the preceding relation; moreover, if we take
$
c = {8 i \over 15}
$
we get a new expression (\ref{A4-2}) for which
$
k \leq 3
$.
We use again the (anti)symmetry property (\ref{s4'}); from the terms with three derivatives on $\delta$ we obtain:
\begin{eqnarray}
W^{\mu;\nu;\{\rho_{1}\rho_{2}\rho_{3}\}}_{4} = W^{\nu;\mu;\{\rho_{1}\rho_{2}\rho_{3}\}}_{4}
\end{eqnarray}
i.e. we have the writing
$
W^{\mu;\nu;\{\rho_{1}\rho_{2}\rho_{3}\}}_{4} =
W^{\{\mu\nu\};\{\rho_{1}\rho_{2}\rho_{3}\}}_{4}.
$
We consider again the Wess-Zumino consistency condition (\ref{WZ2}); from the terms with
four derivatives on $\delta$ we obtain:
\begin{equation}
{\cal S}_{\nu,\rho_{1}\rho_{2}\rho_{3}}~W^{\{\mu\nu\};\{\rho_{1}\rho_{2}\rho_{3}\}}_{4} = 0.
\end{equation}
As before we note now that in the finite renormalization (\ref{R5b}) we have used only the expression
$
U_{5}^{[\mu\nu];\{\rho_{1};\rho_{2}\rho_{3}\}}
$
i.e. we still can use
$
U_{5}^{[\mu\nu];\rho_{1};\rho_{2}\rho_{3}}
$
with
$
U_{5}^{[\mu\nu];\{\rho_{1};\rho_{2}\rho_{3}\}} = 0.
$
A possible choice is:
$
U_{5}^{[\mu\nu];\rho_{1};\rho_{2}\rho_{3}} = c~
(W^{\mu;\rho_{1};\{\nu\rho_{2}\rho_{3}\}}_{4}
+ {1\over 3}~ W^{\mu;\nu;\{\rho_{1}\rho_{2}\rho_{3}\}}_{4})
$;
moreover if we take
$
c = {9 i \over 16}
$
we get a new expression (\ref{A4-2}) for which
$
k \leq 2
$.
As a result we have the expression (3.71) from \cite{cohomology} and we can perform the succession of finite renormalizations from there. In the end the expression (\ref{A4-2}) will have the form from the statement. The Wess-Zumino equation (\ref{WZ2}) is equivalent to:
\begin{eqnarray}
d_{Q}W^{\mu}_{2} = i~\partial_{\nu}W_{3}^{[\mu\nu]}
\nonumber \\
W^{\mu;\nu}_{4} = - W^{[\mu\nu]}_{3}.
\end{eqnarray}
(v) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
= \sum_{k \leq 3}
\partial_{\sigma_{1}} \dots \partial_{\sigma_{k}} \delta(x_{2} - x_{1})
W^{[\mu\nu\rho];\{\sigma_{1},\dots,\sigma_{k}\}}_{7}(x_{1})
\label{A7-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu\rho];\{\sigma_{1},\dots,\sigma_{k}\}}_{7}) \leq 7 - k, \quad
gh(W^{[\mu\nu\rho];\{\sigma_{1},\dots,\sigma_{k}\}}_{7}) = 4
\end{equation}
for all
$
k = 0,\dots,3.
$
We perform the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2}))
+ \partial_{\lambda_{1}}\partial_{\lambda_{2}}\delta(x_{2} - x_{1})~
U_{8}^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}(x_{1})
\label{R8a}
\end{equation}
with
$
U_{8}^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}} =
- i~W^{[\mu\nu\rho];\{\sigma\lambda_{1}\lambda_{2}\}}_{7}
$
and we eliminate the contributions corresponding to
$k = 3$
from (\ref{A7-2}). Now we consider the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2}))
+ \partial_{\lambda}\delta(x_{2} - x_{1})~U_{8}^{[\mu\nu\rho];\sigma;\lambda}(x_{1})
\label{R8b}
\end{equation}
with
$
U_{8}^{[\mu\nu\rho];\sigma;\lambda} = - i~W_{7}^{[\mu\nu\rho];\{\sigma\lambda\}}
$
and we get a new expression (\ref{A7-2}) with
$
k \leq 1
$.
Finally we consider the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2}))
+ \delta(x_{2} - x_{1})~U_{8}^{[\mu\nu\rho];\sigma}(x_{1})
\label{R8c}
\end{equation}
with
$
U_{8}^{[\mu\nu\rho];\sigma} = - i~W_{7}^{[\mu\nu\rho];\sigma}
$
and we get the expression for
$
A_{7}
$
from the statement.
(vi) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu];\rho}_{5}(x_{1},x_{2})
= \sum_{k \leq 3}
\partial_{\sigma_{1}} \dots \partial_{\sigma_{k}} \delta(x_{2} - x_{1})
W^{[\mu\nu];\rho;\{\sigma_{1},\dots,\sigma_{k}\}}_{5}(x_{1})
\label{A5-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu];\rho;\{\sigma_{1},\dots,\sigma_{k}\}}_{5}) \leq 7 - k, \quad
gh(W^{[\mu\nu];\rho;\{\sigma_{1},\dots,\sigma_{k}\}}_{5}) = 4
\end{equation}
for all
$
k = 0,\dots,3.
$
We consider the Wess-Zumino consistency condition (\ref{WZ3}). From the terms with four derivatives on $\delta$ we obtain:
\begin{equation}
{\cal S}_{\rho,\sigma_{1}\sigma_{2}\sigma_{3}}~
W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\sigma_{3}\}}_{5} = 0.
\end{equation}
This equation can be solved explicitly: if we denote:
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5} =
{3\over 4}~W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\sigma_{3}\}}_{5}
- (\rho \leftrightarrow \sigma_{1})
\end{equation}
we have:
\begin{equation}
W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\sigma_{3}\}}_{5}
= {\cal S}_{\sigma_{1}\sigma_{2}\sigma_{3}}~
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5}
\end{equation}
and we can make in (\ref{A5-2})
$
W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\sigma_{3}\}}_{5}
\rightarrow
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5}.
$
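The same kind of numerical check applies here. In the sketch below the spectator $[\mu\nu]$ block is dropped, ${\cal S}$ is taken to be the normalized (idempotent) symmetrization, and the random array is only a stand-in for $W_{5}$.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
# Model only the indices rho, sigma_1, sigma_2, sigma_3 entering the
# constraint; the antisymmetric [mu nu] block is a spectator.
W = rng.normal(size=(4,) * 4)

# Symmetrize in the last three indices.
W = sum(np.transpose(W, (0,) + p)
        for p in itertools.permutations(range(1, 4))) / 6

# Impose S_{rho,sigma_1 sigma_2 sigma_3} W = 0 by removing the
# totally symmetric part.
S = sum(np.transpose(W, p) for p in itertools.permutations(range(4))) / 24
W = W - S

def tilde(r, s1, s2, s3):
    """tilde{W}^{[r s1];{s2 s3}} = (3/4) W^{r;{s1 s2 s3}} - (r <-> s1)."""
    return 0.75 * (W[r, s1, s2, s3] - W[s1, r, s2, s3])

# W^{rho;{s1 s2 s3}} = S_{s1 s2 s3} tilde{W}^{[rho s1];{s2 s3}}, where S
# averages over which sigma is paired with rho.
recon = np.empty_like(W)
for r, s1, s2, s3 in itertools.product(range(4), repeat=4):
    recon[r, s1, s2, s3] = (tilde(r, s1, s2, s3) + tilde(r, s2, s1, s3)
                            + tilde(r, s3, s1, s2)) / 3

assert np.allclose(recon, W)
```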
From the Wess-Zumino consistency condition (\ref{WZ4}) we consider again the terms with four derivatives on $\delta$ and we obtain after some computations:
\begin{equation}
{\cal S}_{\rho,\sigma_{1}\sigma_{2}\sigma_{3}}~
(\tilde{W}^{[\mu\rho];[\nu\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5}
- \tilde{W}^{[\nu\rho];[\mu\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5}) = 0.
\end{equation}
It is convenient to split
$
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5}
$
as follows
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5}
= \tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5,+}
+ \tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5,-}
\end{equation}
where
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5,\epsilon}
= {1\over 2}~(\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5}
+ \epsilon~\tilde{W}^{[\rho\sigma];[\mu\nu];\{\lambda_{1}\lambda_{2}\}}_{5}).
\end{equation}
We now make the finite renormalization
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T^{[\rho\sigma]}(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T^{[\rho\sigma]}(x_{2}))
+ \partial_{\lambda_{1}}\partial_{\lambda_{2}}\delta(x_{1} - x_{2})~
U_{6}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}(x_{1})
\label{R6a}
\end{equation}
with
$
U_{6}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}} =
i~\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5,+}
$
such that all symmetry properties of the chronological products are preserved. As a result
we get a new expression (\ref{A5-2}) with:
$
\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5}
\rightarrow
\tilde{W}^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{5,-}
\equiv W^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}.
$
The Wess-Zumino consistency condition (\ref{WZ4}) with four derivatives on $\delta$ from above reduces to:
\begin{equation}
{\cal S}_{\rho\sigma_{1}\sigma_{2}\sigma_{3}}~
W^{[\mu\rho];[\nu\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}_{5} = 0.
\end{equation}
We note that we still can use the finite renormalization (\ref{R8a}) if we require:
\begin{equation}
{\cal S}_{\sigma\lambda_{1}\lambda_{2}}~
U_{8}^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}} = 0
\end{equation}
i.e. such that we do not spoil the form of
$
A_{7}
$
from the statement. One can write a generic form for such an expression
$
U_{8}^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}
$
in terms of
$
W^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}.
$
There are five possible combinations meeting the symmetry properties:
\begin{eqnarray}
U_{8}^{[\mu\nu\rho];\sigma_{1};\{\sigma_{2}\sigma_{3}\}} =
{\cal A}_{\mu\nu\rho}~{\cal S}_{\sigma_{2}\sigma_{3}}~
(c_{1}~W^{[\mu\rho];[\nu\sigma_{1}];\{\sigma_{2}\sigma_{3}\}}
+ c_{2}~W^{[\mu\rho];[\nu\sigma_{2}];\{\sigma_{1}\sigma_{3}\}}
\nonumber \\
+ c_{3}~W^{[\mu\sigma_{1}];[\nu\sigma_{2}];\{\rho\sigma_{3}\}}
+ c_{4}~W^{[\mu\sigma_{2}];[\nu\sigma_{3}];\{\rho\sigma_{1}\}}
+ c_{5}~W^{[\mu\rho];[\sigma_{1}\sigma_{2}];\{\nu\sigma_{3}\}})
\end{eqnarray}
where we apply the corresponding (anti)symmetrization operators. After some work one finds that the coefficients can be fixed such that we get a new expression (\ref{A5-2}) for which
$
k \leq 2
$;
we have found the possible values
$
c_{1} = 3 i,~c_{2} = i,~c_{3} = 0,~c_{4} = 2 i,~c_{5} = 0.
$
We consider again the Wess-Zumino consistency condition (\ref{WZ3}); from the terms with three derivatives on $\delta$ we obtain:
\begin{equation}
{\cal S}_{\rho\sigma_{1}\sigma_{2}}~W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\}}_{5} = 0.
\end{equation}
This equation can also be solved explicitly: if we denote:
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\sigma_{2}}_{5} =
{2\over 3}~W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\}}_{5}
- (\rho \leftrightarrow \sigma_{1})
\end{equation}
we have:
\begin{equation}
W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\}}_{5}
= {\cal S}_{\sigma_{1}\sigma_{2}}~\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\sigma_{2}}_{5}
\end{equation}
and we can make in (\ref{A5-2})
$
W^{[\mu\nu];\rho;\{\sigma_{1}\sigma_{2}\}}_{5}
\rightarrow
\tilde{W}^{[\mu\nu];[\rho\sigma_{1}];\sigma_{2}}_{5}.
$
From the Wess-Zumino consistency condition (\ref{WZ4}) we consider the terms with three derivatives on $\delta$ and we obtain:
\begin{equation}
{\cal S}_{\rho\sigma_{1}\sigma_{2}}~
(\tilde{W}^{[\mu\rho];[\nu\sigma_{1}];\sigma_{2}}_{5}
- \tilde{W}^{[\nu\rho];[\mu\sigma_{1}];\sigma_{2}}_{5}) = 0.
\end{equation}
It is convenient to split
$
\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5}
$
as before
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5}
= \tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5,+}
+ \tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5,-}
\end{equation}
where
\begin{equation}
\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5,\epsilon}
= {1\over 2}~(\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5}
+ \epsilon~\tilde{W}^{[\rho\sigma];[\mu\nu];\lambda}_{5}).
\end{equation}
We now make the finite renormalization
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T^{[\rho\sigma]}(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T^{[\rho\sigma]}(x_{2}))
+ \partial_{\lambda}\delta(x_{1} - x_{2})~U_{6}^{[\mu\nu];[\rho\sigma];\lambda}(x_{1})
\label{R6b}
\end{equation}
with
$
U_{6}^{[\mu\nu];[\rho\sigma];\lambda} =
i~\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5,+}
$
such that all symmetry properties of the chronological products are preserved. As a result
we get a new expression (\ref{A5-2}) with:
$
\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5}
\rightarrow
\tilde{W}^{[\mu\nu];[\rho\sigma];\lambda}_{5,-}
\equiv W^{[\mu\nu];[\rho\sigma];\lambda}.
$
The Wess-Zumino consistency condition (\ref{WZ4}) with three derivatives on $\delta$ from above reduces to:
\begin{equation}
{\cal S}_{\rho\sigma_{1}\sigma_{2}}~W^{[\mu\rho];[\nu\sigma_{1}];\sigma_{2}}_{5} = 0.
\end{equation}
We note that we still can use the finite renormalization (\ref{R8b}) if we require:
\begin{equation}
U_{8}^{[\mu\nu\rho];\sigma;\lambda} = - (\sigma \leftrightarrow \lambda)
\end{equation}
such that we do not spoil the form of
$
A_{7}
$
from the statement. One can write a generic form for such an expression
$
U_{8}^{[\mu\nu\rho];[\sigma\lambda]}
$
and the possible combinations meeting the symmetry properties are:
\begin{equation}
U_{8}^{[\mu\nu\rho];[\sigma\lambda]} =
{\cal A}_{\mu\nu\rho}~{\cal A}_{\sigma\lambda}~
(c_{1}~W^{[\mu\rho];[\nu\sigma];\lambda}
+ c_{2}~W^{[\mu\nu];[\sigma\lambda];\rho})
\end{equation}
where we apply the corresponding antisymmetrization operators. If we take
$
c_{1} = {2 i \over 3},~c_{2} = {8 i \over 3}
$
we get a new expression (\ref{A5-2}) for which
$
k \leq 1
$.
As a result we have the expression (3.77) from \cite{cohomology} and we can perform the succession of finite renormalizations from there. In the end the expression (\ref{A5-2}) will have the form from the statement.
The Wess-Zumino equation (\ref{WZ3}) becomes equivalent to
\begin{eqnarray}
d_{Q}W_{3}^{[\mu\nu]} = - i~\partial_{\rho}W_{7}^{[\mu\nu\rho]}
\nonumber \\
W_{5}^{[\mu\nu];\rho} = W_{7}^{[\mu\nu\rho]}.
\end{eqnarray}
The Wess-Zumino consistency condition (\ref{WZ4}) is equivalent to
\begin{equation}
d_{Q}W_{4}^{\mu;\nu} = i~\partial_{\rho}W_{7}^{[\mu\nu\rho]}
\end{equation}
which follows from the preceding relation if we remember the connection between
$
W_{3}^{[\mu\nu]}
$
and
$
W_{4}^{\mu;\nu}
$
obtained at (iv).
(vii) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2})
= \sum_{k \leq 2}
\partial_{\lambda_{1}} \dots \partial_{\lambda_{k}}\delta(x_{2} - x_{1})
W^{[\mu\nu\rho];\sigma;\{\lambda_{1},\dots,\lambda_{k}\}}_{8}(x_{1})
\label{A8-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu\rho];\sigma;\{\lambda_{1},\dots,\lambda_{k}\}}_{8}) \leq 7 - k, \quad
gh(W^{[\mu\nu\rho];\sigma;\{\lambda_{1},\dots,\lambda_{k}\}}_{8}) = 5
\end{equation}
for all
$
k = 0,1,2.
$
We consider the Wess-Zumino consistency condition (\ref{WZ7}). From the terms with three derivatives on $\delta$ we obtain
\begin{equation}
{\cal S}_{\sigma\lambda_{1}\lambda_{2}}~
W^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}_{8} = 0.
\end{equation}
This equation can be solved explicitly: if we denote:
\begin{equation}
\tilde{W}^{[\mu\nu\rho];[\sigma\lambda_{1}];\lambda_{2}}_{8} =
{2\over 3}~W^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}_{8}
- (\sigma \leftrightarrow \lambda_{1})
\end{equation}
we have:
\begin{equation}
W^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}_{8}
= {\cal S}_{\lambda_{1}\lambda_{2}}~
\tilde{W}^{[\mu\nu\rho];[\sigma\lambda_{1}];\lambda_{2}}_{8}
\end{equation}
and we can make in (\ref{A8-2})
$
W^{[\mu\nu\rho];\sigma;\{\lambda_{1}\lambda_{2}\}}_{8}
\rightarrow
\tilde{W}^{[\mu\nu\rho];[\sigma\lambda_{1}];\lambda_{2}}_{8}.
$
We make the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda]}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda]}(x_{2}))
+ \partial_{\alpha}\delta(x_{1} - x_{2})~
U_{9}^{[\mu\nu\rho];[\sigma\lambda];\alpha}(x_{1})
\label{R9a}
\end{equation}
with
$
U_{9}^{[\mu\nu\rho];[\sigma\lambda];\alpha} =
- i~\tilde{W}^{[\mu\nu\rho];[\sigma\lambda];\alpha}_{8}
$
and we get a new expression (\ref{A8-2}) with
$
k \leq 1
$.
We consider again the Wess-Zumino consistency condition (\ref{WZ7}); from the terms with
two derivatives on $\delta$ we obtain:
\begin{equation}
W^{[\mu\nu\rho];\sigma;\lambda}_{8} = - (\sigma \leftrightarrow \lambda)
\end{equation}
i.e. we have the writing
$
W^{[\mu\nu\rho];\sigma;\lambda}_{8} = W^{[\mu\nu\rho];[\sigma\lambda]}_{8}
$.
We now make the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda]}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda]}(x_{2}))
+ \delta(x_{1} - x_{2})~U_{9}^{[\mu\nu\rho];[\sigma\lambda]}(x_{1})
\label{R9b}
\end{equation}
with
$
U_{9}^{[\mu\nu\rho];[\sigma\lambda]} =
- i~W^{[\mu\nu\rho];[\sigma\lambda]}_{8}
$
we get a new expression
\begin{equation}
A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2})
= \delta(x_{1} - x_{2}) W^{[\mu\nu\rho];\sigma}_{8}(x_{1}).
\label{A8a}
\end{equation}
But the Wess-Zumino consistency condition (\ref{WZ7}) is in this case equivalent to
\begin{eqnarray}
d_{Q}W_{7}^{[\mu\nu\rho]} = 0
\nonumber \\
W^{[\mu\nu\rho];\sigma}_{8} = 0
\end{eqnarray}
so we have in fact:
\begin{equation}
A^{[\mu\nu\rho];\sigma}_{8}(x_{1},x_{2}) = 0.
\label{A8-2'}
\end{equation}
(viii) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu];[\rho\sigma]}_{6}(x_{1},x_{2})
= \sum_{k \leq 2}
\partial_{\lambda_{1}} \dots \partial_{\lambda_{k}}\delta(x_{2} - x_{1})
W^{[\mu\nu];[\rho\sigma];\{\lambda_{1},\dots,\lambda_{k}\}}_{6}(x_{1})
\label{A6-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu];[\rho\sigma];\{\lambda_{1},\dots,\lambda_{k}\}}_{6}) \leq 7 - k
\qquad
gh(W^{[\mu\nu];[\rho\sigma];\{\lambda_{1},\dots,\lambda_{k}\}}_{6}) = 5.
\end{equation}
From the symmetry property (\ref{s6'}) we consider the terms with two derivatives on the $\delta$ function and obtain:
\begin{equation}
W^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{6}
= W^{[\rho\sigma];[\mu\nu];\{\lambda_{1}\lambda_{2}\}}_{6}.
\end{equation}
Now the Wess-Zumino consistency condition (\ref{WZ5}) gives:
\begin{equation}
{\cal S}_{\sigma\lambda_{1}\lambda_{2}}~
W^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}_{6} = 0.
\end{equation}
We observe that in the renormalization (\ref{R9a}) we have used only the piece
$
{\cal S}_{\lambda\alpha} U_{9}^{[\mu\nu\rho];[\sigma\lambda];\alpha}
$
so we still can use
$
{\cal A}_{\lambda\alpha} U_{9}^{[\mu\nu\rho];[\sigma\lambda];\alpha}.
$
We make the following ansatz for
$
U_{9}^{[\mu\nu\rho];[\sigma\lambda];\alpha}
$
\begin{eqnarray}
U_{9}^{[\mu\nu\lambda_{1}];[\rho\sigma];\lambda_{2}} =
{\cal A}_{\mu\nu\lambda_{1}}~{\cal A}_{\rho\sigma}~
(c_{1}~W^{[\mu\nu];[\rho\sigma];\{\lambda_{1}\lambda_{2}\}}
+ c_{2}~W^{[\mu\nu];[\sigma\lambda_{2}];\{\lambda_{1}\rho\}}
\nonumber \\
+ c_{3}~W^{[\lambda_{1}\lambda_{2}];[\mu\rho];\{\nu\sigma\}}
+ c_{4}~W^{[\mu\rho];[\nu\sigma];\{\lambda_{1}\lambda_{2}\}})
\end{eqnarray}
which is compatible with the symmetry properties. A long computation shows that one can fix these coefficients such that the renormalization (\ref{R9a}) leaves the expression
$
A_{8}
$
unchanged but the expression (\ref{A6-2}) gets modified: we have the restriction
$
k \leq 1.
$
Now from the symmetry property (\ref{s6'}) with one derivative on the $\delta$ function we obtain:
\begin{equation}
W^{[\mu\nu];[\rho\sigma];\lambda}_{6} = - W^{[\rho\sigma];[\mu\nu];\lambda}_{6}
\end{equation}
and from the Wess-Zumino equation (\ref{WZ5}):
\begin{equation}
W^{[\mu\nu];[\rho\sigma];\lambda}_{6} = - (\sigma \leftrightarrow \lambda).
\end{equation}
If we combine these two equations we arrive at the conclusion that
$
W^{[\mu\nu];[\rho\sigma];\lambda}_{6}
$
is completely antisymmetric in all indices so it must be null (because we are in $4$ dimensions). As a consequence
\begin{equation}
A^{[\mu\nu];[\rho\sigma]}_{6}(x_{1},x_{2}) =
\delta(x_{1} - x_{2})~W^{[\mu\nu];[\rho\sigma]}_{6}(x_{1}).
\end{equation}
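The pigeonhole argument used above (five antisymmetrized indices cannot all take distinct values in four dimensions) can be illustrated numerically; the array below is an arbitrary stand-in.

```python
import itertools

import numpy as np

def sign(p):
    """Parity of a permutation given as a tuple of indices."""
    s, q = 1, list(p)
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            if q[i] > q[j]:
                s = -s
    return s

rng = np.random.default_rng(2)
T = rng.normal(size=(4,) * 5)  # five indices, each running over 4 values

# Total antisymmetrization: with five indices taking only four values,
# two index values always coincide, so the projection must vanish.
A = sum(sign(p) * np.transpose(T, p)
        for p in itertools.permutations(range(5))) / 120

assert np.allclose(A, 0)
```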
Now the Wess-Zumino equation (\ref{WZ5}) is equivalent to
\begin{equation}
W^{[\mu\nu];[\rho\sigma]}_{6} = 0
\end{equation}
so in fact:
\begin{equation}
A^{[\mu\nu];[\rho\sigma]}_{6} = 0.
\end{equation}
(ix) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1},x_{2})
= \delta(x_{2} - x_{1})~W^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1})
+ \partial_{\omega}\delta(x_{2} - x_{1})
W^{[\mu\nu\rho];[\sigma\lambda];\omega}_{9}(x_{1})
\label{A9-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu\rho];[\sigma\lambda]}_{9}) \leq 7
\qquad
\omega(W^{[\mu\nu\rho];[\sigma\lambda];\omega}_{9}) \leq 6
\qquad
gh(W^{[\mu\nu\rho];[\sigma\lambda]}_{9}) = gh(W^{[\mu\nu\rho];[\sigma\lambda];\omega}_{9}) = 6.
\end{equation}
Now the Wess-Zumino consistency condition (\ref{WZ8}) gives:
\begin{equation}
W^{[\mu\nu\rho];[\sigma\lambda];\omega}_{9} = - (\lambda \leftrightarrow \omega)
\end{equation}
so we can write
$
W^{[\mu\nu\rho];[\sigma\lambda];\omega}_{9} = W^{[\mu\nu\rho];[\sigma\lambda\omega]}_{9}.
$
We perform the finite renormalization
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda\omega]}(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T^{[\sigma\lambda\omega]}(x_{2}))
+ \delta(x_{2} - x_{1})~U_{10}^{[\mu\nu\rho];[\sigma\lambda\omega]}(x_{1})
\label{R10}
\end{equation}
with
$
U_{10}^{[\mu\nu\rho];[\sigma\lambda\omega]} = i~W_{9}^{[\mu\nu\rho];[\sigma\lambda\omega]}.
$
As a consequence the formula (\ref{A9-2}) becomes
\begin{equation}
A^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1},x_{2}) =
\delta(x_{1} - x_{2})~W^{[\mu\nu\rho];[\sigma\lambda]}_{9}(x_{1}).
\end{equation}
Now the Wess-Zumino equation (\ref{WZ8}) is equivalent to
\begin{equation}
W^{[\mu\nu\rho];[\sigma\lambda]}_{9} = 0
\end{equation}
so in fact:
\begin{equation}
A^{[\mu\nu\rho];[\sigma\lambda]}_{9} = 0.
\end{equation}
(x) From (\ref{genericA}) we have:
\begin{equation}
A^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1},x_{2})
= \delta(x_{2} - x_{1})~W^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1})
\label{A10-2}
\end{equation}
and we have the restrictions
\begin{equation}
\omega(W^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}) \leq 7
\qquad
gh(W^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}) = 7.
\end{equation}
The Wess-Zumino equation (\ref{WZ9}) is equivalent to
$
W^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10} = 0
$
so in fact
\begin{equation}
A^{[\mu\nu\rho];[\sigma\lambda\omega]}_{10}(x_{1},x_{2}) = 0.
\end{equation}
(xi) Finally we observe that we can make some redefinitions of the chronological products without changing the structure of the anomalies. Indeed we have
\begin{equation}
T(T(x_{1}),T(x_{2})) \rightarrow T(T(x_{1}),T(x_{2})) + \delta(x_{1} - x_{2})~B(x_{1})
\end{equation}
which makes
\begin{equation}
W \rightarrow W + d_{Q}B
\end{equation}
and
\begin{equation}
T(T^{\mu}(x_{1}),T(x_{2})) \rightarrow T(T^{\mu}(x_{1}),T(x_{2}))
+ \delta(x_{1} - x_{2})~B^{\mu}(x_{1})
\end{equation}
which makes
\begin{equation}
W \rightarrow W + i~\partial_{\mu}B^{\mu}, \qquad
W^{\mu} \rightarrow W^{\mu} + d_{Q}B^{\mu}.
\end{equation}
We also observe that we can consider the finite renormalizations
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T(x_{2}))
+ \delta(x_{2} - x_{1})~U_{3}^{[\mu\nu]}(x_{1})
\label{R3c}
\end{equation}
and
\begin{equation}
T(T^{\mu}(x_{1}),T^{\nu}(x_{2})) \rightarrow T(T^{\mu}(x_{1}),T^{\nu}(x_{2}))
+ \delta(x_{2} - x_{1})~U_{4}^{[\mu\nu]}(x_{1})
\label{R4b}
\end{equation}
with
\begin{equation}
U_{3}^{[\mu\nu]} = B^{[\mu\nu]}, \qquad U_{4}^{[\mu\nu]} = - B^{[\mu\nu]}
\end{equation}
and they produce the redefinitions
\begin{equation}
W^{\mu} \rightarrow W^{\mu} + i~\partial_{\nu}B^{[\mu\nu]}, \qquad
W^{[\mu\nu]} \rightarrow W^{[\mu\nu]} + d_{Q}B^{[\mu\nu]}.
\end{equation}
Finally we have the finite renormalizations
\begin{equation}
T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2})) \rightarrow T(T^{[\mu\nu]}(x_{1}),T^{\rho}(x_{2}))
+ \delta(x_{2} - x_{1})~U_{5}^{[\mu\nu];\rho}(x_{1})
\label{R5}
\end{equation}
and
\begin{equation}
T(T^{[\mu\nu\rho]}(x_{1}),T(x_{2})) \rightarrow T(T^{[\mu\nu\rho]}(x_{1}),T(x_{2}))
+ \delta(x_{2} - x_{1})~U_{7}^{[\mu\nu\rho]}(x_{1})
\label{R7}
\end{equation}
with
\begin{equation}
U_{5}^{[\mu\nu];\rho} = U_{7}^{[\mu\nu\rho]} = B^{[\mu\nu\rho]}
\end{equation}
and they produce the redefinitions
\begin{equation}
W^{[\mu\nu]} \rightarrow W^{[\mu\nu]} + i~\partial_{\rho}B^{[\mu\nu\rho]}, \qquad
W^{[\mu\nu\rho]} \rightarrow W^{[\mu\nu\rho]} + d_{Q}B^{[\mu\nu\rho]}.
\end{equation}
All these redefinitions do not modify the form of the anomalies from the statement and we have obtained the last assertion of the theorem.
$\blacksquare$
As we can see, one can simplify considerably the form of the anomalies at second order of perturbation theory if one makes convenient redefinitions of the chronological products. Moreover, the result is of purely cohomological nature, i.e. we did not use the explicit form of the expressions
$
T, T^{\mu}, T^{[\mu\nu]}, T^{[\mu\nu\rho]}.
$
The main difficulty of the proof is to find a convenient way of combining the Wess-Zumino equations, the (anti)symmetry properties and a succession of finite renormalizations. It would be remarkable to extend the preceding result to arbitrary orders of perturbation theory.
We have proved that renormalization of gauge theories leads to some descent equations.
We have the expressions
$
T^{I}
$
and
$
R^{I}
$
(with ghost numbers
$
gh(T^{I}) = gh(R^{I}) = |I|
$
and canonical dimension $\leq 5$ and $\leq 6$ respectively) for the interaction Lagrangian and the finite renormalizations compatible with gauge invariance; we also have the expressions
$
W^{I}
$
(with ghost numbers
$
gh(W^{I}) = |I| + 1
$
and canonical dimension $\leq 7$) for the anomalies. In the next Sections we give the simplest way to solve this type of problem in general.
\section{The Cohomology of the Gauge Charge Operator\label{q}}
We consider the vector space
$
{\cal H}
$
of Fock type generated (in the sense of Borchers theorem) by the symmetric tensor field
$
h_{\mu\nu}
$
(with Bose statistics) and the vector fields
$
u^{\rho}, \tilde{u}^{\sigma}
$
(with Fermi statistics). The Fermi fields are usually called {\it ghost fields}. We suppose that all these (quantum) fields are of null mass. Let $\Omega$ be the vacuum state in
$
{\cal H}.
$
In this vector space we can define a sesquilinear form
$<\cdot,\cdot>$
in the following way: the (non-zero) $2$-point functions are by definition:
\begin{eqnarray}
<\Omega, h_{\mu\nu}(x_{1}) h_{\rho\sigma}(x_{2})\Omega> = - {i\over 2}~
(\eta_{\mu\rho}~\eta_{\nu\sigma} + \eta_{\nu\rho}~\eta_{\mu\sigma}
- \eta_{\mu\nu}~\eta_{\rho\sigma})~D_{0}^{(+)}(x_{1} - x_{2}),
\nonumber \\
<\Omega, u_{\mu}(x_{1}) \tilde{u}_{\nu}(x_{2})\Omega> = i~\eta_{\mu\nu}~
D_{0}^{(+)}(x_{1} - x_{2}),
\nonumber \\
<\Omega, \tilde{u}_{\mu}(x_{1}) u_{\nu}(x_{2})\Omega> = - i~\eta_{\mu\nu}~
D_{0}^{(+)}(x_{1} - x_{2})
\end{eqnarray}
and the $n$-point functions are generated according to Wick theorem. Here
$
\eta_{\mu\nu}
$
is the Minkowski metric (with diagonal $1, -1, -1, -1$) and
$
D_{0}^{(+)}
$
is the positive frequency part of the Pauli-Villars distribution
$
D_{0}
$
of null mass. To extend the sesquilinear form to
$
{\cal H}
$
we define the conjugation by
\begin{equation}
h_{\mu\nu}^{\dagger} = h_{\mu\nu}, \qquad
u_{\rho}^{\dagger} = u_{\rho}, \qquad
\tilde{u}_{\sigma}^{\dagger} = - \tilde{u}_{\sigma}.
\end{equation}
Now we can define in
$
{\cal H}
$
the operator $Q$ according to the following formulas:
\begin{eqnarray}
~[Q, h_{\mu\nu}] = - {i\over 2}~(\partial_{\mu}u_{\nu} + \partial_{\nu}u_{\mu}
- \eta_{\mu\nu} \partial_{\rho}u^{\rho}),\qquad
[Q, u_{\mu}] = 0,\qquad
[Q, \tilde{u}_{\mu}] = i~\partial^{\nu}h_{\mu\nu}
\nonumber \\
Q\Omega = 0
\label{Q-0}
\end{eqnarray}
where by
$
[\cdot,\cdot]
$
we mean the graded commutator. One can prove that $Q$ is well defined. Indeed, we have the causal commutation relations
\begin{eqnarray}
~[h_{\mu\nu}(x_{1}), h_{\rho\sigma}(x_{2}) ] = - {i\over 2}~
(\eta_{\mu\rho}~\eta_{\nu\sigma} + \eta_{\nu\rho}~\eta_{\mu\sigma}
- \eta_{\mu\nu}~\eta_{\rho\sigma})~D_{0}(x_{1} - x_{2})~\cdot I,
\nonumber \\
~[u_{\mu}(x_{1}), \tilde{u}_{\nu}(x_{2})] = i~\eta_{\mu\nu}~D_{0}(x_{1} - x_{2})~\cdot I
\end{eqnarray}
and the other commutators are null. The operator $Q$ should leave invariant these relations, in particular
\begin{equation}
[Q, [ h_{\mu\nu}(x_{1}),\tilde{u}_{\sigma}(x_{2})]] + {\rm cyclic~permutations} = 0
\end{equation}
which is true according to (\ref{Q-0}). It is useful to introduce a grading in
$
{\cal H}
$
as follows: every state which is generated by an even (odd) number of ghost fields and an arbitrary number of tensor fields is even (resp. odd). We denote by
$
|f|
$
the ghost number of the state $f$. We notice that the operator $Q$ raises the ghost number of a state (of fixed ghost number) by one unit. The usefulness of this construction follows from:
\begin{thm}
The operator $Q$ verifies
$
Q^{2} = 0.
$
The factor space
$
Ker(Q)/Ran(Q)
$
is isomorphic to the Fock space of particles of zero mass and helicity $2$ (gravitons).
\label{fock-0}
\end{thm}
{\bf Proof:} (i) The fact that $Q$ squares to zero follows easily from (\ref{Q-0}): the operator
$
Q^{2}
$
commutes with all field operators and gives zero when acting on the vacuum.
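For instance, on the antighost one finds, using (\ref{Q-0}), the graded Jacobi identity
$
[Q,[Q,\tilde{u}_{\mu}]] = [Q^{2},\tilde{u}_{\mu}]
$
and the wave equation
$
\Box u_{\mu} = 0
$:
\begin{equation}
[Q, [Q, \tilde{u}_{\mu}]] = [Q, i~\partial^{\nu}h_{\mu\nu}]
= {1\over 2}~\left(\partial_{\mu}\partial^{\nu}u_{\nu} + \Box u_{\mu}
- \partial_{\mu}\partial_{\rho}u^{\rho}\right)
= {1\over 2}~\Box u_{\mu} = 0;
\end{equation}
the corresponding checks on $h_{\mu\nu}$ and $u_{\mu}$ are immediate because $[Q, u_{\mu}] = 0$.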
(ii) The generic form of a state
$
\Psi \in {\cal H}^{(1)} \subset {\cal H}
$
from the one-particle Hilbert subspace is
\begin{equation}
\Psi = \left[ \int f_{\mu\nu}(x) h^{\mu\nu}(x) + \int g^{(1)}_{\mu}(x) u^{\mu}(x)
+ \int g^{(2)}_{\mu}(x) \tilde{u}^{\mu}(x) \right] \Omega
\end{equation}
with test functions
$
f_{\mu\nu}, g^{(1)}_{\mu}, g^{(2)}_{\mu}
$
verifying the wave equation; we can also suppose that
$
f_{\mu\nu}
$
is symmetric. The condition
$
\Psi \in Ker(Q) \quad \Longleftrightarrow \quad Q\Psi = 0
$
leads to
$
\partial^{\nu}f_{\mu\nu} = {1\over 2}~\partial_{\mu}f
$
(where
$
f = \eta^{\mu\nu}f_{\mu\nu}
$
is the trace of
$
f_{\mu\nu}
$
) and
$
g^{(2)}_{\mu} = 0
$
i.e. the generic element
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is
\begin{equation}
\Psi = \left[ \int f_{\mu\nu}(x) h^{\mu\nu}(x) + \int g_{\mu}(x) u^{\mu}(x) \right] \Omega
\label{kerQ-0}
\end{equation}
with $g_{\mu}$ arbitrary and
$
f_{\mu\nu}
$
constrained by the transversality condition
$
\partial^{\nu}f_{\mu\nu} = {1\over 2}~\partial_{\mu}f;
$
so the elements of
$
{\cal H}^{(1)} \cap Ker(Q)
$
are in one-one correspondence with couples of test functions
$
[f_{\mu\nu}, g_{\rho}]
$
with the transversality condition on the first entry. Now, a generic element
$
\Psi^{\prime} \in {\cal H}^{(1)} \cap Ran(Q)
$
has the form
\begin{equation}
\Psi^{\prime} = Q\Phi = \left[
- {1\over 2} \int (\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu})(x) h^{\mu\nu}(x)
+ \int \left(\partial^{\nu}g^{\prime}_{\mu\nu}
- {1\over 2}~\partial_{\mu}g^{\prime}\right)(x) u^{\mu}(x) \right] \Omega
\label{ranQ-0}
\end{equation}
with
$
g^{\prime} = \eta^{\mu\nu}g^{\prime}_{\mu\nu}
$
so if
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is indexed by the couple
$
[f_{\mu\nu}, g_{\rho}]
$
then
$
\Psi + \Psi^{\prime}
$
is indexed by the couple
$
\left[
f_{\mu\nu} - {1\over 2}~(\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu}),
g_{\mu} + \left( \partial^{\nu}g^{\prime}_{\mu\nu}
- {1\over 2}~\partial_{\mu}g^{\prime}\right)\right].
$
If we take
$
g^{\prime}_{\mu\nu}
$
conveniently we can make
$
g_{\mu} = 0
$
and if we take
$
g^{\prime}_{\mu}
$
convenient we can make
$
f = 0;
$
in this case the transversality condition becomes
$
\partial^{\nu}f_{\mu\nu} = 0.
$
It follows that the equivalence classes from
$
({\cal H}^{(1)} \cap Ker(Q))/({\cal H}^{(1)} \cap Ran(Q))
$
are indexed by wave functions
$
f_{\mu\nu}
$
verifying the conditions of transversality and tracelessness
$
\partial^{\nu}f_{\mu\nu} = 0,~f = 0.
$
We still have the freedom to change
$
f_{\mu\nu} \rightarrow f_{\mu\nu}- {1\over 2}~(\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu})
$
with
$
\partial^{\mu}g^{\prime}_{\mu} = 0
$
without affecting the properties
$
\partial^{\nu}f_{\mu\nu} = 0,~f = 0.
$
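This is consistent with the usual counting of graviton polarizations: on the mass shell the symmetric wave function $f_{\mu\nu}$ has
\begin{equation}
10 - 1~(f = 0) - 4~(\partial^{\nu}f_{\mu\nu} = 0)
- 3~(g^{\prime}_{\mu}~{\rm with}~\partial^{\mu}g^{\prime}_{\mu} = 0) = 2
\end{equation}
independent components, i.e. exactly the two helicity $\pm 2$ states survive.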
It remains to prove that the sesquilinear form
$<\cdot,\cdot>$
induces a positive definite form on
$
({\cal H}^{(1)} \cap Ker(Q))/({\cal H}^{(1)} \cap Ran(Q))
$
and we have obtained the usual one-particle Hilbert space for the graviton (i.e. a particle of zero mass and helicity $2$).
(iii) The extension of this argument to the $n$-particle space is done as in \cite{cohomology} using the K\"unneth formula \cite{Dr}.
$\blacksquare$
We see that the condition
$
[Q, T] = i~\partial_{\mu}T^{\mu}
$
means that the expression $T$ leaves invariant the physical Hilbert space (at least in the adiabatic limit).
Now we have the physical justification for solving another cohomology problem namely to determine the cohomology of the operator
$
d_{Q} = [Q,\cdot]
$
induced by $Q$ in the space of Wick polynomials. To solve this problem it is convenient to use the same geometric formalism \cite{jet} used in \cite{cohomology}. We consider that the (classical) fields are
$
h_{\mu\nu}, u_{\rho}, \tilde{u}_{\sigma}
$
of null mass and we consider the set
$
{\cal P}
$
of polynomials in these fields and their formal derivatives (in the sense of jet bundle theory). The formal derivative operators
$
d_{\mu}
$
are given by
\begin{equation}
d_{\mu}y^{\alpha}_{\nu_{1}\cdots\nu_{n}} \equiv y^{\alpha}_{\mu\nu_{1}\cdots\nu_{n}}
\end{equation}
where
$
y^{\alpha}
$
are the basic variables
$
y^{\alpha} = (h_{\mu\nu}, u_{\rho}, \tilde{u}_{\sigma})
$
and
$
y^{\alpha}_{\nu_{1}\cdots\nu_{n}}
$
are the jet bundle coordinates (see \cite{cohomology} for details). We note that on
$
{\cal P}
$
we have a natural grading. For convenience we introduce the notation:
\begin{equation}
B_{\mu} \equiv d^{\nu}h_{\mu\nu}
\end{equation}
and define the graded derivation
$
d_{Q}
$
on
$
{\cal P}
$
according to
\begin{eqnarray}
d_{Q}h_{\mu\nu} = - {i\over 2}~(d_{\mu}u_{\nu} + d_{\nu}u_{\mu}
- \eta_{\mu\nu}~d_{\rho}u^{\rho}),
\qquad
d_{Q}u_{\mu} = 0,
\qquad d_{Q}\tilde{u}_{\mu} = i~B_{\mu}
\nonumber \\
~[d_{Q}, d_{\mu} ] = 0.
\end{eqnarray}
Then one can easily prove that
$
d_{Q}^{2} = 0
$
and the cohomology of this operator is isomorphic to the cohomology of the preceding operator (denoted also by $d_{Q}$) and acting in the space of Wick polynomials. The operator
$
d_{Q}
$
raises the grading and the canonical dimension by one unit. To determine the cohomology of
$
d_{Q}
$
it is convenient to introduce some notations: first
\begin{equation}
h \equiv \eta^{\mu\nu}h_{\mu\nu} \qquad
\hat{h}_{\mu\nu} \equiv h_{\mu\nu} - {1\over 2}~\eta_{\mu\nu}~h
\end{equation}
and then we define the {\it Christoffel symbols} according to:
\begin{equation}
\Gamma_{\mu;\nu\rho} \equiv d_{\rho}\hat{h}_{\mu\nu} + d_{\nu}\hat{h}_{\mu\rho} - d_{\mu}\hat{h}_{\nu\rho}.
\end{equation}
We observe that
\begin{equation}
d_{Q}\Gamma_{\mu;\nu\rho} = - i~d_{\nu}d_{\rho}u_{\mu}
\end{equation}
and we can express the first order derivatives through the Christoffel symbols
\begin{equation}
d_{\rho}\hat{h}_{\mu\nu} = {1\over 2}~(\Gamma_{\mu;\nu\rho} + \Gamma_{\nu;\mu\rho}).
\end{equation}
The expression
\begin{equation}
R_{\mu\nu;\rho\sigma} \equiv d_{\rho}\Gamma_{\mu;\nu\sigma} - (\rho \leftrightarrow \sigma)
\end{equation}
is called the {\it Riemann tensor}; we can easily prove
\begin{eqnarray}
R_{\mu\nu;\rho\sigma} = - R_{\nu\mu;\rho\sigma} = - R_{\mu\nu;\sigma\rho} = R_{\rho\sigma;\mu\nu},
\nonumber \\
d_{Q}R_{\mu\nu;\rho\sigma} = 0,
\nonumber \\
R_{\mu\nu;\rho\sigma} + R_{\mu\rho;\nu\sigma} + R_{\mu\sigma;\nu\rho} = 0;
\nonumber \\
d_{\lambda}R_{\mu\nu;\rho\sigma} + d_{\rho}R_{\mu\nu;\sigma\lambda} + d_{\sigma}R_{\mu\nu;\lambda\rho} = 0
\end{eqnarray}
the last two relations are called {\it Bianchi identities}.
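The relation
$
d_{Q}R_{\mu\nu;\rho\sigma} = 0
$
is immediate from the commutativity of the formal derivatives:
\begin{equation}
d_{Q}R_{\mu\nu;\rho\sigma} = - i~(d_{\rho}d_{\nu}d_{\sigma} - d_{\sigma}d_{\nu}d_{\rho})~u_{\mu} = 0.
\end{equation}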
Next we consider, as in the case of the Yang-Mills fields, more convenient variables: (i) first one can express the derivatives of the Christoffel symbols in terms of the completely symmetric derivatives
\begin{equation}
\Gamma_{\mu;\rho_{1},\dots,\rho_{n}} \equiv {\cal S}_{\rho_{1},\dots,\rho_{n}}~ (d_{\rho_{3}}\dots d_{\rho_{n}}~\Gamma_{\mu;\rho_{1}\rho_{2}})
\end{equation}
and derivatives of the Riemann tensor;
(ii) next, one expresses the variables
$
\Gamma_{\mu;\rho_{1},\dots,\rho_{n}}
$
in terms of the expressions
$
\Gamma^{(0)}_{\mu;\rho_{1},\dots,\rho_{n}}
$
(which is, by definition, the traceless part in
$
\rho_{1},\dots,\rho_{n}
$)
and
$
B_{\mu;\rho_{1},\dots,\rho_{n-2}};
$
(iii) finally one expresses the derivatives of the Riemann tensor
$
d_{\lambda_{1}}\dots d_{\lambda_{n}}~R_{\mu\nu;\rho\sigma}
$
in terms of the traceless part in all indices
$
R^{(0)}_{\mu\nu;\rho\sigma;\lambda_{1},\dots,\lambda_{n}}
$
and
$
B_{\mu;\rho_{1},\dots,\rho_{n+1}}.
$
We will use the K\"unneth theorem:
\begin{thm}
Let
$
{\cal P}
$
be a graded space of polynomials and $d$ an operator verifying
$
d^{2} = 0
$
and raising the grading by one unit. Let us suppose that
$
{\cal P}
$
is generated by two subspaces
$
{\cal P}_{1}, {\cal P}_{2}
$
such that
$
{\cal P}_{1} \cap {\cal P}_{2} = \{0\}
$
and
$
d{\cal P}_{j} \subset {\cal P}_{j}, j = 1,2.
$
We denote by
$
d_{j}
$
the restriction of $d$ to
$
{\cal P}_{j}.
$
Then there exists the canonical isomorphism
$
H(d) \cong H(d_{1}) \times H(d_{2})
$
of the associated cohomology spaces.
\label{kunneth}
\end{thm}
(see \cite{Dr}). Now we can give a generic description for the co-cycles of
$
d_{Q};
$
we denote by
$
Z_{Q}
$
and
$
B_{Q}
$
the co-cycles and the co-boundaries of this operator. First we define
\begin{eqnarray}
u_{\mu\nu} = u_{[\mu\nu]} \equiv {1\over 2}~(d_{\mu}u_{\nu} - d_{\nu}u_{\mu})
\nonumber \\
u_{\{\mu\nu\}} \equiv {1\over 2}~(d_{\mu}u_{\nu} + d_{\nu}u_{\mu})
\end{eqnarray}
such that we have:
\begin{equation}
d_{\mu}u_{\nu} = u_{\mu\nu} + u_{\{\mu\nu\}}.
\end{equation}
Now we have:
\begin{thm}
Let
$
p \in Z_{Q}.
$
Then $p$ is cohomologous to a polynomial in
$u_{\mu}, u_{\mu\nu}$
and
$
R^{(0)}_{\mu\nu;\rho\sigma;\lambda_{1},\dots,\lambda_{n}}.
$
\label{m=0}
\end{thm}
{\bf Proof:} (i) The idea is to define conveniently two subspaces
$
{\cal P}_{1}, {\cal P}_{2}
$
and apply K\"unneth theorem. We will take
$
{\cal P}_{1} = {\cal P}_{0}
$
from the statement and
$
{\cal P}_{2}
$
the subspace generated by the variables
$
B_{\mu;\nu_{1},\dots,\nu_{n}}~(n \geq 0),~
\Gamma^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}}~(n \geq 2),~
\tilde{u}_{\mu;\nu_{1},\dots,\nu_{n}}~(n \geq 0),~
u_{\mu;\nu_{1},\dots,\nu_{n}}(n \geq 2),
u_{\{\mu\nu\}}
$
and
$
\hat{h}_{\mu\nu}.
$
We have
$
d_{Q}{\cal P}_{1} = \{0\}
$
and
\begin{eqnarray}
d_{Q}u_{\{\mu\nu\}} = 0,\qquad
d_{Q}u_{\mu;\nu_{1},\dots,\nu_{n}} = 0~(n \geq 2)
\nonumber \\
d_{Q}\Gamma^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}} = - i~u_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 2)
\nonumber \\
d_{Q}\tilde{u}_{\mu;\nu_{1},\dots,\nu_{n}} = i~B_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 0)
\nonumber \\
d_{Q}B_{\mu;\nu_{1},\dots,\nu_{n}} = 0~(n \geq 0)
\nonumber \\
d_{Q}\hat{h}_{\mu\nu} = - i~u_{\{\mu\nu\}}
\end{eqnarray}
so we meet the conditions of the K\"unneth theorem. Let us define in
$
{\cal P}_{2}
$
the graded derivation ${\mathfrak h}$ by:
\begin{eqnarray}
{\mathfrak h}u_{\{\mu\nu\}} = i~\hat{h}_{\mu\nu}
\nonumber \\
{\mathfrak h}u_{\mu;\nu_{1},\dots,\nu_{n}} = i~\Gamma^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 2)
\nonumber \\
{\mathfrak h}B_{\mu;\nu_{1},\dots,\nu_{n}} = - i~\tilde{u}_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 0)
\end{eqnarray}
and zero on the other variables from
$
{\cal P}_{2}.
$
It is easy to prove that ${\mathfrak h}$ is well defined: the condition of tracelessness is essential to avoid conflict with the equations of motion. Then one can prove that
\begin{equation}
[d_{Q},{\mathfrak h}] = Id
\end{equation}
on polynomials of degree one in the fields and because the left hand side is a derivation operator we have
\begin{equation}
[d_{Q},{\mathfrak h}] = n \cdot Id
\end{equation}
on polynomials of degree $n$ in the fields. It means that ${\mathfrak h}$ is a homotopy for
$
d_{Q}
$
restricted to
$
{\cal P}_{2}
$
so the corresponding cohomology is trivial: indeed, if
$
p \in {\cal P}_{2}
$
is a co-cycle of degree $n$ in the fields then it is a co-boundary
$
p = {1\over n} d_{Q}{\mathfrak h}p.
$
According to K\"unneth formula if $p$ is an arbitrary co cycle from
$
{\cal P}
$
it can be replaced by a cohomologous polynomial from
$
{\cal P}_{0}
$
and this proves the theorem.
$\blacksquare$
\begin{rem}
There is an important difference with respect to the Yang-Mills case, namely the space
$
{\cal P}_{0}
$
is not isomorphic to the cohomology group
$
H_{Q}
$
and this follows from the fact that
$
{\cal P}_{0}~\cap B_{Q} \not= \{0\}.
$
We provide an example of an expression belonging to this intersection. We start with the expression
\begin{equation}
B^{\mu\nu\rho\sigma\lambda} \equiv u^{\mu}~u^{\nu}~u^{\rho}~u^{\sigma}~u^{\lambda};
\end{equation}
because the ghost fields anticommute, this expression is completely antisymmetric in five indices taking only four values, so we have in fact
\begin{equation}
B^{\mu\nu\rho\sigma\lambda} = 0.
\end{equation}
On the other hand we have
\begin{equation}
d_{\lambda}B^{\mu\nu\rho\sigma\lambda} = p^{\mu\nu\rho\sigma} + d_{Q}(\cdots)
\end{equation}
where
\begin{equation}
p^{\mu\nu\rho\sigma} \equiv u_{\lambda}~
(u^{\mu}~u^{\nu}~u^{\rho}~u^{\sigma\lambda}
+ u^{\nu}~u^{\rho}~u^{\sigma}~u^{\mu\lambda}
+ u^{\rho}~u^{\sigma}~u^{\mu}~u^{\nu\lambda}
+ u^{\sigma}~u^{\mu}~u^{\nu}~u^{\rho\lambda})
\end{equation}
so we have
$
p^{\mu\nu\rho\sigma} \in {\cal P}_{0}~\cap B_{Q}.
$
\end{rem}
We repeat the whole argument for the case of the massive graviton i.e. particles of spin $2$ and positive mass.
We consider a vector space
$
{\cal H}
$
of Fock type generated (in the sense of Borchers theorem) by the tensor field
$
h_{\mu\nu},
$
the vector field
$
v_{\mu}
$
(with Bose statistics) and the vector fields
$
u_{\mu}, \tilde{u}_{\mu}
$
(with Fermi statistics). We suppose that all these (quantum) fields are of mass
$
m > 0.
$
In this vector space we can define a sesquilinear form
$<\cdot,\cdot>$
in the following way: the (non-zero) $2$-point functions are by definition:
\begin{eqnarray}
<\Omega, h_{\mu\nu}(x_{1}) h_{\rho\sigma}(x_{2})\Omega> = - {i\over 2}~
(\eta_{\mu\rho}~\eta_{\nu\sigma} + \eta_{\nu\rho}~\eta_{\mu\sigma}
- \eta_{\mu\nu}~\eta_{\rho\sigma})~D_{m}^{(+)}(x_{1} - x_{2}),
\nonumber \\
<\Omega, u_{\mu}(x_{1}) \tilde{u}_{\nu}(x_{2})\Omega> = i~\eta_{\mu\nu}~
D_{m}^{(+)}(x_{1} - x_{2}),
\nonumber \\
<\Omega, \tilde{u}_{\mu}(x_{1}) u_{\nu}(x_{2})\Omega> = - i~\eta_{\mu\nu}~
D_{m}^{(+)}(x_{1} - x_{2}),
\nonumber \\
<\Omega, v_{\mu}(x_{1}) v_{\nu}(x_{2})\Omega> =i~\eta_{\mu\nu}~D_{m}^{(+)}(x_{1} - x_{2})
\end{eqnarray}
and the $n$-point functions are generated according to Wick theorem. Here
$
D_{m}^{(+)}
$
is the positive frequency part of the Pauli-Villars distribution
$
D_{m}
$
of mass $m$. To extend the sesquilinear form to
$
{\cal H}
$
we define the conjugation by
\begin{eqnarray}
h_{\mu\nu}^{\dagger} = h_{\mu\nu}, \qquad
u_{\rho}^{\dagger} = u_{\rho}, \qquad
\tilde{u}_{\sigma}^{\dagger} = - \tilde{u}_{\sigma}, \qquad
v_{\mu}^{\dagger} = v_{\mu}.
\end{eqnarray}
Now we can define in
$
{\cal H}
$
the operator $Q$ according to the following formulas:
\begin{eqnarray}
~[Q, h_{\mu\nu}] = - {i\over 2}~(\partial_{\mu}u_{\nu} + \partial_{\nu}u_{\mu}
- \eta_{\mu\nu} \partial_{\rho}u^{\rho}),
\nonumber \\
~[Q, u_{\mu}] = 0,\qquad
[Q, \tilde{u}_{\mu}] = i~(\partial^{\nu}h_{\mu\nu} + m v_{\mu}),
\nonumber \\
~[Q, v_{\mu}] = - {i~m\over 2}~u_{\mu}
\nonumber \\
Q\Omega = 0.
\label{Q-m}
\end{eqnarray}
One can prove that $Q$ is well defined. Indeed, we have the causal commutation relations
\begin{eqnarray}
~[h_{\mu\nu}(x_{1}), h_{\rho\sigma}(x_{2}) ] = - {i\over 2}~
(\eta_{\mu\rho}~\eta_{\nu\sigma} + \eta_{\nu\rho}~\eta_{\mu\sigma}
- \eta_{\mu\nu}~\eta_{\rho\sigma})~D_{m}(x_{1} - x_{2})~\cdot I,
\nonumber \\
~[u_{\mu}(x_{1}), \tilde{u}_{\nu}(x_{2})] = i~\eta_{\mu\nu}~D_{m}(x_{1} - x_{2})~\cdot I
\nonumber \\
~[v_{\mu}(x_{1}), v_{\nu}(x_{2})] = i~\eta_{\mu\nu}~D_{m}(x_{1} - x_{2})~\cdot I
\end{eqnarray}
and the other commutators are null. The operator $Q$ should leave invariant these relations, in particular
\begin{eqnarray}
[Q, [ h_{\mu\nu}(x_{1}),\tilde{u}_{\sigma}(x_{2})]] + {\rm cyclic~permutations} = 0,
\nonumber \\
~[Q, [ v_{\mu}(x_{1}),\tilde{u}_{\sigma}(x_{2})]] + {\rm cyclic~permutations} = 0.
\end{eqnarray}
We have a result similar to the first theorem of this Section:
\begin{thm}
The operator $Q$ verifies
$
Q^{2} = 0.
$
The factor space
$
Ker(Q)/Ran(Q)
$
is isomorphic to the Fock space of particles of mass $m$ and spin $2$ (massive gravitons).
\end{thm}
{\bf Proof:} (i) The fact that $Q$ squares to zero follows easily from (\ref{Q-m}).
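For instance, on
$
\tilde{u}_{\mu}
$
the mass term compensates the d'Alembertian on the mass shell (with the convention
$
(\square + m^{2})~u_{\mu} = 0):
$
\begin{equation}
[Q^{2}, \tilde{u}_{\mu}] = i~(\partial^{\nu}[Q, h_{\mu\nu}] + m~[Q, v_{\mu}])
= {1\over 2}~\square u_{\mu} + {m^{2}\over 2}~u_{\mu}
= {1\over 2}~(\square + m^{2})~u_{\mu} = 0.
\end{equation}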
(ii) The generic form of a state
$
\Psi \in {\cal H}^{(1)} \subset {\cal H}
$
from the one-particle Hilbert subspace is
\begin{equation}
\Psi = \left[ \int f_{\mu\nu}(x) h^{\mu\nu}(x) + \int g^{(1)}_{\mu}(x) u^{\mu}(x)
+ \int g^{(2)}_{\mu}(x) \tilde{u}^{\mu}(x) + \int h_{\mu}(x) v^{\mu}(x) \right] \Omega
\end{equation}
with test functions
$
f_{\mu\nu}, g^{(1)}_{\mu}, g^{(2)}_{\mu}, h_{\mu}
$
verifying the Klein-Gordon equation of mass $m$; we can also suppose that
$
f_{\mu\nu}
$
is symmetric. The condition
$
\Psi \in Ker(Q)~\Longleftrightarrow~Q\Psi = 0
$
leads to
$
h_{\mu} = {2\over m}~
\left(\partial^{\nu}f_{\mu\nu} - {1\over 2}~\partial_{\mu}f\right)
$
(where
$
f = \eta^{\mu\nu}f_{\mu\nu}
$
is the trace of
$
f_{\mu\nu})
$
and
$
g^{(2)}_{\mu} = 0
$
i.e. the generic element
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is
\begin{equation}
\Psi = \left[ \int f_{\mu\nu}(x) h^{\mu\nu}(x) + \int g_{\mu}(x) u^{\mu}(x)
+ {2\over m}~\int
\left(\partial^{\nu}f_{\mu\nu} - {1\over 2}~\partial_{\mu}f\right)(x) v^{\mu}(x)\right] \Omega
\label{kerQ-m}
\end{equation}
with
$g_{\mu}$
and
$
f_{\mu\nu}
$
arbitrary so
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is indexed by couples of test functions
$
[f_{\mu\nu},g_{\mu}].
$
Now, a generic element
$
\Psi^{\prime} \in {\cal H}^{(1)} \cap Ran(Q)
$
has the form
\begin{equation}
\Psi^{\prime} = Q\Phi = \left[
- {1\over 2} \int (\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu})(x) h^{\mu\nu}(x)
+ \int \left(\partial^{\nu}g^{\prime}_{\mu\nu}
- {1\over 2}~\partial_{\mu}g^{\prime} - {m\over 2} h^{\prime}_{\mu}\right)(x) u^{\mu}(x) \right] \Omega
\label{ranQ-m}
\end{equation}
with
$
g^{\prime} = \eta^{\mu\nu}g^{\prime}_{\mu\nu}
$
so if
$
\Psi \in {\cal H}^{(1)} \cap Ker(Q)
$
is indexed by the couple
$
[f_{\mu\nu}, g_{\rho}]
$
then
$
\Psi + \Psi^{\prime}
$
is indexed by the couple
$
\left[
f_{\mu\nu} - {1\over 2}~(\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu}),
g_{\mu} + \left( \partial^{\nu}g^{\prime}_{\mu\nu}
- {1\over 2}~\partial_{\mu}g^{\prime} - {m\over 2} h^{\prime}_{\mu}\right)\right].
$
If we take
$
h^{\prime}_{\mu}
$
conveniently we can make
$
g_{\mu} = 0
$
and if we take
$
g^{\prime}_{\mu\nu}
$
convenient we can make
$
\partial^{\nu}f_{\mu\nu} = 0.
$
We still have the freedom to change
$
f_{\mu\nu} \rightarrow f_{\mu\nu}- {1\over 2}~(\partial_{\mu}g^{\prime}_{\nu} + \partial_{\nu}g^{\prime}_{\mu})
$
with transverse functions
$
\partial^{\mu}g^{\prime}_{\mu} = 0
$
without affecting the property
$
\partial^{\nu}f_{\mu\nu} = 0.
$
It remains to prove that the sesquilinear form
$<\cdot,\cdot>$
induces a positive definite form on
$
({\cal H}^{(1)} \cap Ker(Q))/({\cal H}^{(1)} \cap Ran(Q))
$
and we have obtained a direct sum of the one-particle Hilbert space for the graviton of mass $m$ (i.e. a particle of mass $m$ and spin $2$) and a scalar particle of the same mass $m$.
(iii) The extension of this argument to the $n$-particle space is done as in \cite{cohomology} using the K\"unneth formula \cite{Dr}.
$\blacksquare$
Now we determine the cohomology of the operator
$
d_{Q} = [Q,\cdot]
$
induced by $Q$ in the space of Wick polynomials. As before, it is convenient to use the formalism from the preceding Section. We consider that the (classical) fields
$
y^{\alpha}
$
are
$
h_{\mu\nu}, u_{\mu}, \tilde{u}_{\mu}, v_{\mu}
$
of mass $m$ and we consider the set
$
{\cal P}
$
of polynomials in these fields and their derivatives. For convenience we introduce the notation:
\begin{equation}
C_{\mu} \equiv d^{\nu}h_{\mu\nu} + m v_{\mu}
\end{equation}
and define the graded derivation
$
d_{Q}
$
on
$
{\cal P}
$
according to
\begin{eqnarray}
d_{Q}h_{\mu\nu} = - {i\over 2}~(d_{\mu}u_{\nu} + d_{\nu}u_{\mu}
- \eta_{\mu\nu}~d_{\rho}u^{\rho}),
\nonumber \\
d_{Q}u_{\mu} = 0,\qquad
d_{Q}\tilde{u}_{\mu} = i~C_{\mu}, \qquad
d_{Q}v_{\mu} = - {i~m\over 2}~u_{\mu}
\nonumber \\
~[d_{Q}, d_{\mu} ] = 0.
\end{eqnarray}
Then one can prove that
$
d_{Q}^{2} = 0
$
and the cohomology of this operator is isomorphic to the cohomology of the preceding operator (denoted also by $d_{Q}$) and acting in the space of Wick polynomials. To determine the cohomology of
$
d_{Q}
$
it is convenient to introduce the Riemann tensor
$
R_{\mu\nu;\rho\sigma}
$
as before and also
\begin{eqnarray}
\phi_{\mu\nu} \equiv
d_{\mu}v_{\nu} + d_{\nu}v_{\mu} - \eta_{\mu\nu} d_{\rho}v^{\rho} - m~h_{\mu\nu}
\nonumber \\
\phi \equiv \eta^{\mu\nu}~\phi_{\mu\nu}
\end{eqnarray}
and observe that we also have
\begin{equation}
d_{Q}\phi_{\mu\nu} = 0.
\end{equation}
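Indeed, using
$
d_{Q}v_{\mu} = - {i~m\over 2}~u_{\mu}
$
the two contributions cancel:
\begin{equation}
d_{Q}\phi_{\mu\nu} = - {i~m\over 2}~(d_{\mu}u_{\nu} + d_{\nu}u_{\mu}
- \eta_{\mu\nu}~d_{\rho}u^{\rho}) - m~d_{Q}h_{\mu\nu} = 0.
\end{equation}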
Then we construct new variables as in the massless case: (i) we express the variables
$
v_{\{\mu\nu\}} \equiv {1\over 2} (d_{\mu}v_{\nu} +d_{\nu}v_{\mu}),
$
and
$
d_{\nu_{1}}\dots d_{\nu_{n}}v_{\mu}~(n \geq 2)
$
through
$
\phi_{\mu\nu}, h_{\mu\nu}
$
and their derivatives; (ii) next, we express the derivatives
$
\phi_{\mu\nu;\rho_{1}\dots\rho_{n}}
$
through the traceless parts
$
\phi^{(0)}_{\mu\nu;\rho_{1}\dots\rho_{n}}, \phi^{(0)}_{;\rho_{1}\dots\rho_{n}}
$
and
$
C_{\mu;\rho_{1},\dots,\rho_{n}}.
$
(iii) Finally we express the variables:
$
\Gamma_{\mu;\nu_{1},\dots,\nu_{n}}
$
and
$
d_{\lambda_{1}}\dots d_{\lambda_{n}}R_{\mu\nu;\rho\sigma}
$
in terms of the traceless parts
$
\Gamma^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}},
R^{(0)}_{\mu\nu;\rho\sigma;\lambda_{1},\dots,\lambda_{n}},
\phi^{(0)}_{\mu\nu;\rho_{1}\dots\rho_{n}},
\phi^{(0)}_{;\rho_{1}\dots\rho_{n}},
C_{\mu;\nu_{1},\dots,\nu_{n}}
$
and
$
h_{\mu\nu},~v_{[\mu\nu]}.
$
Now we can describe the cohomology of the operator
$
d_{Q}
$
in the massive case.
\begin{thm}
Let
$
p \in Z_{Q}.
$
Then $p$ is cohomologous to a polynomial in the traceless variables:
$
R^{(0)}_{\mu\nu;\rho\sigma;\lambda_{1},\dots,\lambda_{n}}
$
and
$
\phi^{(0)}_{\mu\nu;\rho_{1}\dots\rho_{n}},
\phi^{(0)}_{;\rho_{1}\dots\rho_{n}}.
$
\label{m>0}
\end{thm}
{\bf Proof:} (i) The proof is similar to that of theorem \ref{m=0}. We take
$
{\cal P}_{1} = {\cal P}_{0}
$
as in the statement of the theorem and
$
{\cal P}_{2}
$
generated by the other variables. The graded derivation ${\mathfrak h}$ is defined in this case by:
\begin{eqnarray}
{\mathfrak h}u_{\mu} = {2 i\over m} ~v_{\mu},\qquad
{\mathfrak h}u_{\{\mu\nu\}} = i~\hat{h}_{\mu\nu},\qquad
{\mathfrak h}u_{[\mu\nu]} = {2 i\over m} ~v_{[\mu\nu]},
\nonumber \\
{\mathfrak h}u^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}} = i~\Gamma^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 2)
\nonumber \\
{\mathfrak h}C^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}} =
- i~\tilde{u}^{(0)}_{\mu;\nu_{1},\dots,\nu_{n}}~
(n \geq 0)
\end{eqnarray}
and it follows that ${\mathfrak h}$ is a homotopy for
$
d_{Q}
$
restricted to
$
{\cal P}_{2}
$
so the corresponding cohomology is trivial.
According to K\"unneth formula if $p$ is an arbitrary co-cycle from
$
{\cal P}
$
it can be replaced by a cohomologous polynomial from
$
{\cal P}_{0}
$
and this proves the theorem.
$\blacksquare$
We note that in the case of null mass the operator
$
d_{Q}
$
raises the canonical dimension by one unit and this fact is not true anymore in the massive case. We are led to another cohomology group. Let us take as the space of co-chains the space
$
{\cal P}^{(n)}
$
of polynomials of canonical dimension
$
\omega \leq n;
$
then
$
Z_{Q}^{(n)} \subset {\cal P}^{(n)}
$
and
$
B_{Q}^{(n)} \equiv d_{Q}{\cal P}^{(n-1)}
$
are the co-cycles and the co-boundaries respectively. It is possible that a polynomial is a co-boundary as an element of
$
{\cal P}
$
but not as an element of
$
{\cal P}^{(n)}.
$
The situation is described by the following generalization of the preceding theorem.
\begin{thm}
Let
$
p \in Z^{(n)}_{Q}.
$
Then $p$ is cohomologous to a polynomial of the form
$
p_{1} + d_{Q}p_{2}
$
where
$
p_{1} \in {\cal P}_{0}
$
and
$
p_{2} \in {\cal P}^{(n)}.
$
\label{Q-cohomology}
\end{thm}
We will call the co-cycles of the type
$
p_{1}
$
(resp.
$
d_{Q}p_{2})
$
{\it primary} (resp. {\it secondary}).
\section{The Relative Cohomology of the Operator $d_{Q}$\label{relative}}
A polynomial
$
p \in {\cal P}
$
verifying the relation
\begin{equation}
d_{Q}p = i~d_{\mu}p^{\mu}
\label{rel-co}
\end{equation}
for some polynomials
$
p^{\mu}
$
is called a {\it relative co-cycle} for
$
d_{Q}.
$
The expressions of the type
\begin{equation}
p = d_{Q}b + i~d_{\mu}b^{\mu}, \qquad (b, b^{\mu} \in {\cal P})
\end{equation}
are relative co-cycles and are called {\it relative co-boundaries}. We denote by
$
Z_{Q}^{\rm rel}, B_{Q}^{\rm rel}
$
and
$
H_{Q}^{\rm rel}
$
the corresponding cohomological spaces. In (\ref{rel-co}) the expressions
$
p_{\mu}
$
are not unique. It is possible to choose them Lorentz covariant.
Now we consider the framework and notations of the preceding Section in the case
$
m = 0
$. Then we have the following result which describes the most general form of the self-interaction of the gravitons. Summation over the dummy indices is used everywhere.
\begin{thm}
Let $T$ be a relative co-cycle for
$
d_{Q}
$
which is at least tri-linear in the fields and is of canonical dimension
$
\omega(T) \leq 5
$
and ghost number
$
gh(T) = 0.
$
Then:
(i) $T$ is (relatively) cohomologous to a non-trivial co-cycle of the form:
\begin{eqnarray}
t = \kappa ( 2~h_{\mu\rho}~d^{\mu}h^{\nu\lambda}~d^{\rho}h_{\nu\lambda}
+ 4~h_{\nu\rho}~d^{\lambda}h^{\mu\nu}~d_{\mu}{h^{\rho}}_{\lambda}
- 4~h_{\rho\lambda}~d^{\mu}h^{\nu\rho}~d_{\mu}{h_{\nu}}^{\lambda}
\nonumber \\
+ 2~h^{\rho\lambda}~d_{\mu}h_{\rho\lambda}~d^{\mu}h
- h_{\mu\rho}~d^{\mu}h~d^{\rho}h
- 4~u^{\rho}~d^{\nu}\tilde{u}^{\lambda}~d_{\rho}h_{\nu\lambda}
\nonumber \\
+ 4~d^{\rho}u^{\nu}~d_{\nu}\tilde{u}^{\lambda}~h_{\rho\lambda}
+ 4~d^{\rho}u^{\nu}~d^{\lambda}\tilde{u}_{\nu}~h_{\rho\lambda}
- 4~d^{\nu}u_{\nu}~d^{\rho}\tilde{u}^{\lambda}~h_{\rho\lambda})
\end{eqnarray}
where
$
\kappa \in \mathbb{R}.
$
(ii) The relation
$
d_{Q}t = i~d_{\mu}t^{\mu}
$
is verified by:
\begin{eqnarray}
t^{\mu} = \kappa ( - 2 u^{\mu}~d_{\nu}h_{\rho\lambda}~d^{\rho}h^{\nu\lambda}
+ u^{\mu}~d_{\rho}h_{\nu\lambda}~d^{\rho}h^{\nu\lambda}
- {1\over 2} u^{\mu}~d_{\rho}h~d^{\rho}h
\nonumber \\
+ 4~u^{\rho}~d^{\nu}h^{\mu\lambda}~d_{\rho}h_{\nu\lambda}
- 2~u^{\rho}~d^{\mu}h^{\nu\lambda}~d_{\rho}h_{\nu\lambda}
+ u^{\rho}~d^{\mu}h~d_{\rho}h
\nonumber \\
- 4~d^{\rho}u^{\nu}~d_{\nu}h^{\mu\lambda}~h_{\rho\lambda}
- 4~d^{\rho}u_{\nu}~d^{\lambda}h^{\mu\nu}~h_{\rho\lambda}
+ 4~d^{\lambda}u_{\rho}~d^{\mu}h^{\nu\rho}~h_{\nu\lambda}
\nonumber \\
+ 4~d_{\nu}u^{\nu}~d^{\rho}h^{\mu\lambda}~h_{\rho\lambda}
- 2~d_{\nu}u^{\nu}~d^{\mu}h^{\rho\lambda}~h_{\rho\lambda}
- 2~d^{\rho}u^{\lambda}~h_{\rho\lambda}~d^{\mu}h
+ d^{\nu}u_{\nu}~h~d^{\mu}h
\nonumber \\
- 2~u^{\mu}~d_{\nu}d_{\rho}u^{\rho}~\tilde{u}^{\nu}
+ 2~u_{\rho}~d^{\rho}d^{\sigma}u_{\sigma}~\tilde{u}^{\mu}
- 2~u^{\mu}~d_{\lambda}u_{\rho}~d^{\rho}\tilde{u}^{\lambda}
\nonumber \\
+ 2~u_{\rho}~d_{\lambda}u^{\mu}~d^{\rho}\tilde{u}^{\lambda}
+ 2~d^{\rho}u_{\rho}~d_{\lambda}u^{\mu}~\tilde{u}^{\lambda}
- 2~u_{\rho}~d^{\rho}u_{\lambda}~d^{\mu}\tilde{u}^{\lambda})
\label{Tmu}
\end{eqnarray}
(iii) The relation
$
d_{Q}t^{\mu} = i~d_{\nu}t^{\mu\nu}
$
is verified by:
\begin{eqnarray}
t^{\mu\nu} \equiv \kappa [ 2 ( - u^{\mu}~d_{\lambda}u_{\rho}~d^{\rho}h^{\nu\lambda}
+ u_{\rho}~d_{\lambda}u^{\mu}~d^{\rho}h^{\nu\lambda}
+ u_{\rho}~d^{\rho}u_{\lambda}~d^{\nu}h^{\mu\lambda}
+ d_{\rho}u^{\rho}~d_{\lambda}u^{\mu}~h^{\nu\lambda})
\nonumber \\
- (\mu \leftrightarrow \nu)
+ 4~d^{\lambda}u^{\mu}~d^{\rho}u^{\nu}~h_{\rho\lambda} ].
\label{Tmunu}
\end{eqnarray}
(iv) The relation
$
d_{Q}t^{\mu\nu} = i~d_{\rho}t^{\mu\nu\rho}
$
is verified by:
\begin{eqnarray}
t^{\mu\nu\rho} \equiv \kappa [ 2 u_{\lambda}~d^{\lambda}u^{\rho}~u^{\mu\nu}
- u^{\rho}~(d^{\mu}u^{\lambda}~d_{\lambda}u^{\nu}
- d^{\nu}u^{\lambda}~d_{\lambda}u^{\mu})
+ {\rm circular~perm.}]
\label{Tmunurho}
\end{eqnarray}
and we have
$
d_{Q}t^{\mu\nu\rho} = 0.
$
(v) The co-cycles
$
t, t^{\mu}, t^{\mu\nu}
$
and
$
t^{\mu\nu\rho}
$
are non-trivial and invariant with respect to parity.
\label{T1}
\end{thm}
{\bf Proof:}
(i) By hypothesis we have
\begin{equation}
d_{Q}T = i~d_{\mu}T^{\mu}.
\label{descent-t0}
\end{equation}
If we apply
$
d_{Q}
$
we obtain
$
d_{\mu}d_{Q}~T^{\mu} = 0
$
so with the Poincar\'e lemma there must exist the polynomials
$
T^{\mu\nu}
$
antisymmetric in $\mu, \nu$ such that
\begin{equation}
d_{Q}T^{\mu} = i~d_{\nu}T^{\mu\nu}.
\label{descent-t}
\end{equation}
Continuing in the same way we find
$
T^{\mu\nu\rho},~T^{\mu\nu\rho\sigma}
$
which are completely antisymmetric and we also have
\begin{eqnarray}
d_{Q}T^{\mu\nu} = i~d_{\rho}T^{\mu\nu\rho}
\nonumber \\
d_{Q}T^{\mu\nu\rho} = i~d_{\sigma}T^{\mu\nu\rho\sigma}
\nonumber \\
d_{Q}T^{\mu\nu\rho\sigma} = 0.
\label{descent-T}
\end{eqnarray}
According to a theorem proved in \cite{cohomology} one can choose the expressions
$
T^{I}
$
to be Lorentz covariant; we also have
\begin{equation}
gh(T^{I}) = |I|.
\end{equation}
From the last relation we find, using Theorem \ref{m=0} that
\begin{equation}
T^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma} + T_{0}^{\mu\nu\rho\sigma}
\end{equation}
with
$
T_{0}^{\mu\nu\rho\sigma} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu\rho\sigma}
$
and
$
T_{0}^{\mu\nu\rho\sigma}
$
completely antisymmetric. The generic form of
$
T_{0}^{\mu\nu\rho\sigma}
$
is:
\begin{equation}
T_{0}^{\mu\nu\rho\sigma} = a~u^{\mu}~u^{\nu}~u^{\rho}~u^{\sigma}
\end{equation}
with $a$ a constant. If we substitute the expression obtained for
$
T^{\mu\nu\rho\sigma}
$
in the second relation (\ref{descent-T}) we find out
\begin{equation}
d_{Q}(T^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = i~d_{\sigma}T_{0}^{\mu\nu\rho\sigma}
\end{equation}
so the expression in the right hand side must be a co-boundary: we use systematically
\begin{equation}
d_{\sigma}u_{\mu} = u_{\sigma\mu} + u_{\{\sigma\mu\}}
= u_{\sigma\mu} + i~d_{Q}\hat{h}_{\sigma\mu}
\end{equation}
and find out
\begin{equation}
d_{\sigma}T_{0}^{\mu\nu\rho\sigma} = a (u^{\sigma\mu}~u^{\nu}~u^{\rho}
+ u^{\sigma\nu}~u^{\rho}~u^{\mu} + u^{\sigma\rho}~u^{\mu}~u^{\nu})~u_{\sigma} + d_{Q}(\cdots)
\end{equation}
and we obtain
$
a = 0.
$
It follows that
\begin{equation}
T^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma}
\end{equation}
and
\begin{equation}
d_{Q}(T^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = 0.
\end{equation}
We apply again Theorem \ref{m=0} and obtain
\begin{equation}
T^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
+ T^{\mu\nu\rho}_{0}
\end{equation}
where
$
T_{0}^{\mu\nu\rho} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu\rho}
$
and
$
T_{0}^{\mu\nu\rho}
$
completely antisymmetric. The generic form of
$
T_{0}^{\mu\nu\rho}
$
is:
\begin{eqnarray}
T_{0}^{\mu\nu\rho} = a_{0}~u^{\mu}~u^{\nu}~u^{\rho}
+ a_{1}~( u^{\rho}~u^{\mu\lambda}~{u^{\nu}}_{\lambda}
+ u^{\mu}~u^{\nu\lambda}~{u^{\rho}}_{\lambda}
+ u^{\nu}~u^{\rho\lambda}~{u^{\mu}}_{\lambda})
\nonumber \\
+ a_{2}~u_{\lambda} ( u^{\lambda\rho}~u^{\mu\nu} + u^{\lambda\mu}~u^{\nu\rho}
+ u^{\lambda\nu}~u^{\rho\mu})
+ a^{\prime}~\epsilon^{\mu\nu\rho\sigma}~u_{\sigma\alpha}~u^{\alpha\beta}~u_{\beta}.
\label{t0mnr}
\end{eqnarray}
We substitute the expression
$
T^{\mu\nu\rho}
$
into the first relation (\ref{descent-T}) and obtain
\begin{equation}
d_{Q}(T^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = i~d_{\rho}T_{0}^{\mu\nu\rho}.
\end{equation}
The right hand side must be a co-boundary. If we compute the divergence
$
d_{\rho}T_{0}^{\mu\nu\rho}
$
and impose that it is a co-boundary we obtain
$
a_{0} = a^{\prime} = 0
$
and no constraints on
$
a_{j}~(j = 1,2)
$
so apparently we have two possible solutions, namely the corresponding polynomials
$
T_{0j}^{\mu\nu\rho}~(j = 1,2)
$
from the expression of
$
T^{\mu\nu\rho}.
$
However, let us define
\begin{eqnarray}
b^{\mu\nu\rho\sigma} \equiv {\cal A}~u^{\mu\nu}~u^{\rho}~u^{\sigma}
\nonumber
\end{eqnarray}
where
$
{\cal A}
$
performs antisymmetrization in all indices. Then it is not hard to obtain that
\begin{eqnarray}
d_{\sigma}b^{\mu\nu\rho\sigma} = - {1\over 3}~T_{01}^{\mu\nu\rho}
- {1\over 6}~T_{02}^{\mu\nu\rho} + d_{Q}b^{\mu\nu\rho}
\nonumber
\end{eqnarray}
where we can choose the expression
$
b^{\mu\nu\rho}
$
completely antisymmetric. It follows that if we modify conveniently the expressions
$
B^{\mu\nu\rho\sigma}
$
and
$
B^{\mu\nu\rho}
$
we make
$
a_{1} \rightarrow a_{1} + 2~c,\quad
a_{2} \rightarrow a_{2} + c
$
with $c$ arbitrary. In particular we can arrange such that
$
a_{1} = a_{2} \equiv 2~\kappa
$
(this is the choice made in \cite{descent}). In this case one can prove rather easily that
$
T_{0}^{\mu\nu\rho} = t^{\mu\nu\rho} + d_{Q}(\cdots)
$
where
$
t^{\mu\nu\rho}
$
is the expression from the statement. It follows that one can exhibit
$
T^{\mu\nu\rho}
$
in the following form:
\begin{equation}
T^{\mu\nu\rho} = t^{\mu\nu\rho}+ d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
\end{equation}
Now one proves by direct computation that
\begin{equation}
d_{\rho}t^{\mu\nu\rho} = - i~d_{Q}~t^{\mu\nu}
\end{equation}
where
$
t^{\mu\nu}
$
is the expression from the statement so we obtain
\begin{equation}
d_{Q}(T^{\mu\nu} - t^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = 0.
\end{equation}
(ii) We use again Theorem \ref{m=0} and obtain
\begin{equation}
T^{\mu\nu} = t^{\mu\nu} + d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}
+ T^{\mu\nu}_{0}
\end{equation}
where
$
T_{0}^{\mu\nu} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu}
$
and
$
T_{0}^{\mu\nu}
$
antisymmetric. The generic form of the expression
$
T_{0}^{\mu\nu}
$
is:
\begin{equation}
T_{0}^{\mu\nu} = b~u_{\rho}~u_{\sigma}~R^{(0)\mu\nu;\rho\sigma}
+ b^{\prime}~\epsilon^{\mu\nu\rho\sigma}~
u^{\alpha}~u^{\beta}~R^{(0)}_{\rho\sigma;\alpha\beta};
\end{equation}
the monomials
$
u_{\rho}~u_{\sigma}~R^{(0)\mu\rho;\nu\sigma}
$
and
$
\epsilon^{\mu\nu\rho\sigma}~u^{\alpha}~u^{\beta}~R^{(0)}_{\rho\alpha;\sigma\beta}
$
can be eliminated if we use the following consequence of the Bianchi identity:
\begin{equation}
R^{(0)}_{\mu\nu;\rho\sigma} + R^{(0)}_{\mu\rho;\nu\sigma}
+ R^{(0)}_{\mu\sigma;\nu\rho} = d_{Q}(\cdots)
\end{equation}
and redefine the expression
$
B^{\mu\nu}.
$
We substitute the expression of
$
T^{\mu\nu}
$
in (\ref{descent-t}) and get:
\begin{equation}
d_{Q}(T^{\mu} - i~d_{\nu}B^{\mu\nu}) = i~d_{\nu}(T_{0}^{\mu\nu} + t^{\mu\nu}).
\end{equation}
But one proves by direct computation that we have
\begin{equation}
d_{\nu}t^{\mu\nu} = - i~d_{Q}~t^{\mu}
\end{equation}
where
$
t^{\mu}
$
is the expression from the statement so the preceding relation becomes
\begin{equation}
d_{Q}(T^{\mu} - t^{\mu}- i~d_{\nu}B^{\mu\nu}) = i~d_{\nu}T_{0}^{\mu\nu}.
\label{descent-t'}
\end{equation}
The right hand side must be a co-boundary and one easily obtains that
$
b = b^{\prime} = 0
$
so we have:
\begin{equation}
d_{Q}(T^{\mu} - t^{\mu}- i~d_{\nu}B^{\mu\nu}) = 0.
\end{equation}
(iii) Now it is again time to use Theorem \ref{m=0} and obtain
\begin{equation}
T^{\mu} = t^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu} + T^{\mu}_{0}
\end{equation}
where
$
T_{0}^{\mu} \in {\cal P}_{0}^{(5)}.
$
But there is no such expression, i.e.
$
T^{\mu}_{0} = 0
$
and we have
\begin{equation}
T^{\mu} = t^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}.
\end{equation}
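The relation which follows is obtained from this representation by an elementary manipulation, which we record here; it uses only the antisymmetry of $B^{\mu\nu}$ and the fact (implicit in all the manipulations above) that $d_{Q}$ commutes with the derivatives $d_{\mu}$:
\begin{eqnarray}
d_{Q}T = i~d_{\mu}T^{\mu}
= i~d_{\mu}t^{\mu} + i~d_{\mu}d_{Q}B^{\mu} + i^{2}~d_{\mu}d_{\nu}B^{\mu\nu}
= i~d_{\mu}t^{\mu} + d_{Q}(i~d_{\mu}B^{\mu})
\nonumber
\end{eqnarray}
because $d_{\mu}d_{\nu}B^{\mu\nu} = 0$.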
Now we get from (\ref{descent-t0})
\begin{equation}
d_{Q}(T - i~d_{\mu}B^{\mu}) = i~d_{\mu}t^{\mu}.
\label{descent-0t'}
\end{equation}
But we obtain by direct computation that we have
\begin{equation}
d_{\mu}t^{\mu} = - i~d_{Q}~t
\end{equation}
where $t$ is the expression from the statement so the preceding relation becomes
\begin{equation}
d_{Q}(T - t - i~d_{\mu}B^{\mu}) = 0
\end{equation}
so a last use of Theorem \ref{m=0} gives
\begin{equation}
T = t + d_{Q}B + i~d_{\mu}B^{\mu} + T_{0}
\end{equation}
where
$
T_{0} \in {\cal P}_{0}^{(5)}.
$
But there is no such expression, i.e.
$
T_{0} = 0
$
and we have
\begin{equation}
T = t + d_{Q}B + i~d_{\mu}B^{\mu}
\end{equation}
i.e. we have obtained the first four assertions from the statement.
(v) We prove now that $t$ from the statement is not a trivial (relative) co-cycle. Indeed, if this were true, i.e.
$
t = d_{Q}B + i~d_{\mu}B^{\mu}
$
then we get
$
d_{\mu}(t^{\mu} - d_{Q}B^{\mu}) = 0
$
so with Poincar\'e lemma we have
$
t^{\mu} = d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}.
$
In the same way we obtain from here:
$
t^{\mu\nu} = d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}
$
and
$
t^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}.
$
But it is easy to see that there is no such expression
$
B^{\mu\nu\rho\sigma}
$
with the desired antisymmetry property in ghost number $4$, so we have in fact
$
t^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho}.
$
This relation contradicts the fact that
$
t^{\mu\nu\rho}
$
is a non-trivial co-cycle for
$
d_{Q}
$
as it follows from Theorem \ref{m=0}. Parity invariance is obvious.
$\blacksquare$
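For later use we collect the descent relations verified by the representatives constructed above; the first three are equivalent to the divergence formulas obtained by direct computation in the proof, and the last one is the $m = 0$ case of the relation stated explicitly in the massive generalization below:
\begin{eqnarray}
d_{Q}t = i~d_{\mu}t^{\mu},\qquad
d_{Q}t^{\mu} = i~d_{\nu}t^{\mu\nu},\qquad
d_{Q}t^{\mu\nu} = i~d_{\rho}t^{\mu\nu\rho},\qquad
d_{Q}t^{\mu\nu\rho} = 0.
\nonumber
\end{eqnarray}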
If $T$ is bi-linear in the fields we cannot use the Poincar\'e lemma but we can make a direct analysis. The result is the following.
\begin{thm}
Let $T$ be a relative co-cycle for
$
d_{Q}
$
which is bi-linear in the fields, of canonical dimension
$
\omega(T) \leq 5
$
and ghost number
$
gh(T) = 0.
$
Then:
(i) $T$ is (relatively) cohomologous to an expression of the form:
\begin{equation}
t = \kappa^{\prime}~
( - 2~h_{\mu\nu}~h^{\mu\nu} + h^{2} - 4 u_{\mu}~\tilde{u}^{\mu})
\end{equation}
(ii) The relation
$
d_{Q}t = i~d_{\mu}t^{\mu}
$
is verified with
\begin{equation}
t^{\mu} = 4~\kappa^{\prime}~u_{\nu} h^{\mu\nu}
\end{equation}
and we also have
\begin{equation}
d_{Q}t^{\mu} = 2 i \kappa^{\prime}~[u_{\nu}~d^{\mu}u^{\nu}
+ d_{\nu}~(u^{\mu}u^{\nu})].
\end{equation}
\end{thm}
{\bf Proof:} The cases
$
\omega(T) = 3, 5
$
are not possible on grounds of Lorentz covariance. The cases
$
\omega(T) = 2, 4
$
must be investigated starting from a general ansatz and the solution from the statement emerges.
$\blacksquare$
All linear solutions of this problem are trivial. Now we extend this result to the case
$
m > 0.
$
\begin{thm}
Let
$
T_{m}
$
be a relative co-cycle for
$
d_{Q}
$
which is at least tri-linear in the fields and is of canonical dimension
$
\omega(T_{m}) \leq 5
$
and ghost number
$
gh(T_{m}) = 0.
$
Then:
(i)
$
T_{m}
$
is (relatively) cohomologous to a non-trivial co-cycle of the form:
\begin{eqnarray}
t_{m} = t + \kappa \Bigl[ m^{2}~\left( {4\over 3}~h^{\mu\nu}~h_{\nu\rho}~{h_{\mu}}^{\rho}
- h^{\mu\nu}~h_{\mu\nu}~h + {1\over 6} h^{3}\right)
\nonumber \\
+ 4~m~u_{\rho}~d^{\rho}v^{\lambda}~\tilde{u}_{\lambda}
- 4~d^{\rho}v^{\sigma}~d^{\lambda}v_{\sigma}~h_{\rho\lambda}\Bigr]
\end{eqnarray}
where $t$ is the expression from the preceding theorem.
(ii) The relation
$
d_{Q}t_{m} = i~d_{\mu}t^{\mu}_{m}
$
is verified by:
\begin{eqnarray}
t_{m}^{\mu} = t_{\mu} + \kappa \Bigl[ 4 u_{\rho}~d^{\rho}v_{\lambda}~d^{\mu}v^{\lambda}
- 2 u^{\mu}~d^{\rho}v^{\lambda}~d_{\rho}v_{\lambda}
+ 4 m (u^{\mu}~d_{\rho}v_{\lambda}~h^{\rho\lambda}
- u^{\rho}~d_{\rho}v_{\lambda}~h^{\mu\lambda})
\nonumber \\
- m^{2}~u^{\mu}~\left(h^{\rho\sigma}~h_{\rho\sigma} - {1\over 2}h^{2}\right)\Bigr]
\label{Tmu-m}
\end{eqnarray}
where
$
t_{\mu}
$
is the expression from the preceding theorem.
(iii) The relation
$
d_{Q}t_{m}^{\mu} = i~d_{\nu}t_{m}^{\mu\nu}
$
is verified by:
\begin{equation}
t_{m}^{\mu\nu} \equiv t^{\mu\nu} + 2~\kappa m (v^{\mu}~u^{\nu} - v^{\nu}~u^{\mu})
~d^{\rho}u_{\rho}
\label{Tmunu-m}
\end{equation}
where
$
t^{\mu\nu}
$
is the expression from the preceding theorem.
(iv) The relation
$
d_{Q}t_{m}^{\mu\nu} = i~d_{\rho}t_{m}^{\mu\nu\rho}
$
is verified by:
\begin{equation}
t_{m}^{\mu\nu\rho} \equiv t^{\mu\nu\rho} - 2~\kappa m^{2} u^{\mu}~u^{\nu}~u^{\rho}
\label{Tmunurho-m}
\end{equation}
where
$
t^{\mu\nu\rho}
$
is the expression from the preceding theorem. We also have
$
d_{Q}t_{m}^{\mu\nu\rho} = 0.
$
(v) The co-cycles
$
t_{m}, t_{m}^{\mu}, t_{m}^{\mu\nu}
$
and
$
t_{m}^{\mu\nu\rho}
$
are non-trivial, parity invariant and have a smooth limit for
$
m \searrow 0.
$
\label{T1-m}
\end{thm}
{\bf Proof:}
(i) As in the preceding theorem we can prove that we must have
\begin{equation}
d_{Q}T_{m} = i~d_{\mu}T_{m}^{\mu}.
\label{descent-t-m}
\end{equation}
and
\begin{eqnarray}
d_{Q}T_{m}^{\mu} = i~d_{\nu}T_{m}^{\mu\nu},
\nonumber \\
d_{Q}T_{m}^{\mu\nu} = i~d_{\rho}T_{m}^{\mu\nu\rho}
\nonumber \\
d_{Q}T_{m}^{\mu\nu\rho} = i~d_{\sigma}T_{m}^{\mu\nu\rho\sigma}
\nonumber \\
d_{Q}T_{m}^{\mu\nu\rho\sigma} = 0.
\label{descent-T-m}
\end{eqnarray}
According to a theorem proved in \cite{cohomology} we can choose the expressions
$
T^{I}_{m}
$
to be Lorentz covariant; we also have
\begin{equation}
gh(T_{m}^{I}) = |I|.
\end{equation}
From the last relation we find, using Theorem \ref{Q-cohomology} that
\begin{equation}
T_{m}^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma} + T_{0,m}^{\mu\nu\rho\sigma}
\end{equation}
with
$
T_{0,m}^{\mu\nu\rho\sigma} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu\rho\sigma}
$
and
$
T_{0,m}^{\mu\nu\rho\sigma}
$
completely antisymmetric. The generic form of
$
T_{0,m}^{\mu\nu\rho\sigma}
$
is the same as in the preceding theorem and if we substitute the expression obtained for
$
T^{\mu\nu\rho\sigma}
$
in the third relation (\ref{descent-T-m}) we find out as there that
$
a = 0.
$
It follows that
\begin{equation}
T_{m}^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma}
\end{equation}
and
\begin{equation}
d_{Q}(T_{m}^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = 0
\end{equation}
We apply again Theorem \ref{Q-cohomology} and obtain
\begin{equation}
T_{m}^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma} + T^{\mu\nu\rho}_{0,m}
\end{equation}
where
$
T_{0,m}^{\mu\nu\rho} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu\rho}
$
and
$
T_{0,m}^{\mu\nu\rho}
$
completely antisymmetric. The generic form of
$
T_{0,m}^{\mu\nu\rho}
$
is:
\begin{eqnarray}
T_{0,m}^{\mu\nu\rho} = T_{0}^{\mu\nu\rho}
+ c_{1}~u^{\mu}~u^{\nu}~u^{\rho}~\phi
+ c_{2}~( u^{\mu}~u^{\nu}~\phi^{\rho\lambda}
+ u^{\nu}~u^{\rho}~\phi^{\mu\lambda}
+ u^{\rho}~u^{\mu}~\phi^{\nu\lambda})~u_{\lambda}
\end{eqnarray}
where
$
T_{0}^{\mu\nu\rho}
$
is the expression (\ref{t0mnr}) from the massless case with
$
a = 0
$
(the corresponding term is a secondary co-cycle). We substitute the expression
$
T_{m}^{\mu\nu\rho}
$
into the second relation (\ref{descent-T-m}) and obtain
\begin{equation}
d_{Q}(T_{m}^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = i~d_{\rho}T_{0,m}^{\mu\nu\rho}.
\end{equation}
The right hand side must be a co-boundary. If we compute the divergence
$
d_{\rho}T_{0,m}^{\mu\nu\rho}
$
and impose that it is a co-boundary we obtain immediately
$
c_{j} = 0~(j = 1,2)
$
and
$
a^{\prime} = 0
$
so
$
T_{0}^{\mu\nu\rho}
$
is given by the same expression as in the massless case and we can also take
$
a_{j} = 2~\kappa~(j = 1,2)
$
as we have argued there.
In this case one can prove rather easily that
$
T_{0,m}^{\mu\nu\rho} = t_{m}^{\mu\nu\rho} + d_{Q}(\cdots)
$
where
$
t_{m}^{\mu\nu\rho}
$
is the expression from the statement. It follows that one can exhibit
$
T_{m}^{\mu\nu\rho}
$
in the following form:
\begin{equation}
T_{m}^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
+ t_{m}^{\mu\nu\rho}
\end{equation}
Now one proves by direct computation that
\begin{equation}
d_{\rho}t_{m}^{\mu\nu\rho} = - i~d_{Q}~t_{m}^{\mu\nu}
\end{equation}
where
$
t_{m}^{\mu\nu}
$
is the expression from the statement. We substitute this expression in the second relation (\ref{descent-T-m}) and obtain
\begin{equation}
d_{Q}(T_{m}^{\mu\nu} - t_{m}^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = 0.
\end{equation}
(ii) We use again Theorem \ref{m>0} and obtain
\begin{equation}
T_{m}^{\mu\nu} = t_{m}^{\mu\nu} + d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}
+ T^{\mu\nu}_{0,m}
\end{equation}
where
$
T_{0,m}^{\mu\nu} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu}
$
and
$
T_{0,m}^{\mu\nu}
$
antisymmetric. The generic form of the expression
$
T_{0,m}^{\mu\nu}
$
is the same as in the massless case
$
T_{0,m}^{\mu\nu} = T_{0}^{\mu\nu}
$
so we obtain from the first relation in (\ref{descent-T-m}):
\begin{equation}
d_{Q}(T_{m}^{\mu} - i~d_{\nu}B^{\mu\nu}) = i~d_{\nu}(T_{0}^{\mu\nu} + t_{m}^{\mu\nu}).
\end{equation}
But one proves by direct computation that we have
\begin{equation}
d_{\nu}t_{m}^{\mu\nu} = - i~d_{Q}~t_{m}^{\mu}
\end{equation}
where
$
t_{m}^{\mu}
$
is the expression from the statement so the preceding relation becomes
\begin{equation}
d_{Q}(T_{m}^{\mu} - t_{m}^{\mu}- i~d_{\nu}B^{\mu\nu}) = i~d_{\nu}T_{0}^{\mu\nu}.
\end{equation}
The right hand side must be a co-boundary and one obtains as in the massless case
$
T_{0}^{\mu\nu} = 0
$
so we have:
\begin{equation}
T_{m}^{\mu\nu} = t_{m}^{\mu\nu} + d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}
\end{equation}
and
\begin{equation}
d_{Q}(T_{m}^{\mu} - t_{m}^{\mu}- i~d_{\nu}B^{\mu\nu}) = 0.
\end{equation}
(iii) Now it is again time to use Theorem \ref{m>0} and obtain
\begin{equation}
T_{m}^{\mu} = t_{m}^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu} + T^{\mu}_{0,m}
\end{equation}
where
$
T_{0,m}^{\mu} \in {\cal P}_{0}^{(5)}.
$
The generic form of such expression is
\begin{equation}
T^{\mu}_{0,m} = d_{1}~u^{\mu}~\phi^{2}
+ d_{2}~u^{\mu}~\phi^{\rho\sigma}~\phi_{\rho\sigma}
+ d_{3}~u_{\nu}~\phi^{\mu\nu}~\phi
+ d_{4}~u^{\sigma}~\phi^{\mu\rho}~\phi_{\rho\sigma}
\end{equation}
and we have from the relation (\ref{descent-t-m})
\begin{equation}
d_{Q}(T_{m} - i~d_{\mu}B^{\mu}) = i~(d_{\mu}t_{m}^{\mu} + d_{\mu}T^{\mu}_{0,m})
\end{equation}
so the right hand side must be a co-boundary. But one proves by direct computation that
\begin{equation}
d_{\mu}t_{m}^{\mu} = - i~d_{Q}~t_{m}
\end{equation}
where
$
t_{m}
$
is the expression from the statement so the preceding relation becomes
\begin{equation}
d_{Q}(T_{m} - t_{m} - i~d_{\mu}B^{\mu}) = i~d_{\mu}T^{\mu}_{0,m}
\end{equation}
so the expression
$
d_{\mu}T^{\mu}_{0,m}
$
must be a co-boundary. By direct computation we obtain from this condition
$
d_{j} = 0~(j = 1,\dots,4)
$
i.e.
$
T^{\mu}_{0,m} = 0.
$
It follows that
\begin{equation}
T_{m}^{\mu} = t_{m}^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}
\end{equation}
and
\begin{equation}
d_{Q}(T_{m} - t_{m} - i~d_{\mu}B^{\mu}) = 0
\end{equation}
so a last use of Theorem \ref{m>0} gives
\begin{equation}
T_{m} = t_{m} + d_{Q}B + i~d_{\mu}B^{\mu} + T_{0,m}
\end{equation}
where
$
T_{0,m} \in {\cal P}_{0}^{(5)}.
$
But there is no such expression, i.e.
$
T_{0,m} = 0
$
and we have
\begin{equation}
T_{m} = t_{m} + d_{Q}B + i~d_{\mu}B^{\mu}
\end{equation}
i.e. we have obtained the first four assertions from the statement.
(v) We prove now that
$
t_{m}
$ from the statement is not a trivial (relative) co-cycle; the argument is the same as in the massless case.
Parity invariance and the existence of a smooth limit
$
m \searrow 0
$
are obvious.
$\blacksquare$
If
$
T_{m}
$
is bi-linear in the fields we cannot use the Poincar\'e lemma but we can make a direct analysis as in the massless case. The result is the following.
\begin{thm}
Let
$
T_{m}
$
be a relative co-cycle for
$
d_{Q}
$
which is bi-linear in the fields, of canonical dimension
$
\omega(T_{m}) \leq 5
$
and ghost number
$
gh(T_{m}) = 0.
$
Then:
(i)
$
T_{m}
$
is (relatively) cohomologous to an expression of the form:
\begin{equation}
t_{m} = \kappa^{\prime}~
( - 2~h_{\mu\nu}~h^{\mu\nu} + h^{2} - 4 u_{\mu}~\tilde{u}^{\mu}
+ 4 v_{\mu}~v^{\mu})
\end{equation}
(ii) The expression
$
t_{m}^{\mu}
$
coincides with the expression
$
t^{\mu}
$
from the massless case.
\end{thm}
\section{Gauge Invariance and Renormalization in the Second Order of Perturbation Theory\label{int}}
In the same way one can analyze the descent equations (\ref{W}) and study the form of the anomalies in the second order of perturbation theory.
\begin{thm}
In the massless case the second order chronological products can be chosen such that
the expressions
$
W^{I}
$
from theorem \ref{ano-2} are
\begin{equation}
W = 0,\qquad W^{\mu} = 0,\qquad W^{\mu\nu} = 0
\end{equation}
and
\begin{equation}
W^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho}
\end{equation}
with the expression
$
B^{\mu\nu\rho}
$
completely antisymmetric and constrained by the conditions
$
\omega(B^{\mu\nu\rho}) \leq 6
$
and
$
gh(B^{\mu\nu\rho}) = 4.
$
\end{thm}
{\bf Proof:} We will need the relations (\ref{W}) in which we prefer to change some signs
$
W^{\mu} \rightarrow - W^{\mu},~~
W^{\mu\nu} \rightarrow - W^{\mu\nu}
$
i.e.
\begin{equation}
d_{Q}W = i~\partial_{\mu}W^{\mu},\qquad
d_{Q}W^{\mu} = i~\partial_{\nu}W^{\mu\nu},\qquad
d_{Q}W^{\mu\nu} = i\partial_{\rho}W^{\mu\nu\rho},\qquad
d_{Q}W^{\mu\nu\rho} = 0
\label{W1}
\end{equation}
and we also have from (\ref{power4}) the bound
$
\omega(W^{I}) \leq 7.
$
Moreover, the parity invariance obtained in theorem \ref{m=0} can be used to prove that the polynomials
$
W^{I}
$
are also parity invariant.
(i) From the last relation (\ref{W1}) and theorem \ref{m=0} we obtain:
\begin{equation}
W^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + W_{0}^{\mu\nu\rho}
\label{w3}
\end{equation}
with
$
W_{0}^{\mu\nu\rho} \in {\cal P}_{0}^{(7)}
$
and we can choose the expressions
$
B^{\mu\nu\rho}
$
and
$
W_{0}^{\mu\nu\rho}
$
completely antisymmetric. The generic form of
$
W_{0}^{\mu\nu\rho}
$
is:
\begin{eqnarray}
W_{0}^{\mu\nu\rho} =
a_{1}~( u^{\mu\lambda}~u^{\nu}~u^{\rho} + u^{\nu\lambda}~u^{\rho}~u^{\mu}
+ u^{\rho\lambda}~u^{\mu}~u^{\nu})~u_{\lambda}
\nonumber \\
+ a_{2}~( u^{\mu\nu}~u^{\rho\lambda} + u^{\nu\rho}~u^{\mu\lambda}
+ u^{\rho\mu}~u^{\nu\lambda})~u_{\lambda\sigma}~u^{\sigma}
\nonumber \\
+ a_{3}~( u^{\mu\lambda}~{u^{\nu}}_{\lambda}~u^{\rho\sigma}
+ u^{\nu\lambda}~{u^{\rho}}_{\lambda}~u^{\mu\sigma}
+ u^{\rho\lambda}~{u^{\mu}}_{\lambda}~u^{\nu\sigma})~u_{\sigma}
\nonumber \\
+ a_{4}~( u^{\mu\lambda}~u^{\nu\sigma}~u^{\rho}
+ u^{\nu\lambda}~u^{\rho\sigma}~u^{\mu}
+ u^{\rho\lambda}~u^{\mu\sigma}~u^{\nu})~u_{\lambda\sigma}
\end{eqnarray}
with
$
a_{j} \in \mathbb{R}~(j = 1,\dots,4).
$
We denote by
$
T_{j}~(j = 1,\dots,4)
$
the polynomials multiplied by
$
a_{j}~(j = 1,\dots,4)
$
respectively. If we define the completely antisymmetric expressions
\begin{eqnarray}
b_{1}^{\mu\nu\rho\sigma} \equiv u^{\mu}~u^{\nu}~u^{\rho}~u^{\sigma}
\nonumber \\
b_{2}^{\mu\nu\rho\sigma} \equiv (u^{\mu\lambda}~{u^{\nu}}_{\lambda}~u^{\rho}
+ u^{\nu\lambda}~{u^{\rho}}_{\lambda}~u^{\mu}
+ u^{\rho\lambda}~{u^{\mu}}_{\lambda}~u^{\nu})~u^{\sigma}
\nonumber \\
+ u^{\sigma\lambda}~({u^{\mu}}_{\lambda}~u^{\rho}~u^{\nu}
+ {u^{\nu}}_{\lambda}~u^{\mu}~u^{\rho} + {u^{\rho}}_{\lambda}~u^{\nu}~u^{\mu})
\nonumber \\
b_{3}^{\mu\nu\rho\sigma} \equiv (u^{\mu\nu}~u^{\rho\lambda}
+ u^{\nu\rho}~u^{\mu\lambda} + u^{\rho\mu}~u^{\nu\lambda})~u^{\sigma}~u_{\lambda}
\nonumber \\
+ (u^{\mu\sigma}~u^{\nu\lambda}~u^{\rho}
+ u^{\nu\sigma}~u^{\rho\lambda}~u^{\mu} + u^{\rho\sigma}~u^{\mu\lambda}~u^{\nu})~
u_{\lambda}
\end{eqnarray}
then it is not hard to obtain that
\begin{eqnarray}
d_{\sigma}b_{1}^{\mu\nu\rho\sigma} = - T_{1}^{\mu\nu\rho} + d_{Q}b_{1}^{\mu\nu\rho}
\nonumber \\
d_{\sigma}b_{2}^{\mu\nu\rho\sigma} = - T_{3}^{\mu\nu\rho} + d_{Q}b_{2}^{\mu\nu\rho}
\nonumber \\
d_{\sigma}b_{3}^{\mu\nu\rho\sigma} = T_{2}^{\mu\nu\rho} - T_{3}^{\mu\nu\rho}
+ T_{4}^{\mu\nu\rho} + d_{Q}b_{3}^{\mu\nu\rho}
\end{eqnarray}
where we can choose the expressions
$
b_{j}^{\mu\nu\rho}~(j = 1,\dots,3)
$
completely antisymmetric. It follows that we can rewrite (\ref{w3}) in the form
\begin{equation}
W^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
+ W_{0}^{\mu\nu\rho}
\end{equation}
where in the expression of
$
W_{0}^{\mu\nu\rho}
$
we can make
$
a_{1} = a_{3} = a_{4} = 0.
$
We substitute the preceding expression in the third relation (\ref{W1}) and get
\begin{equation}
d_{Q}(W^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = i~d_{\rho}W_{0}^{\mu\nu\rho}
\end{equation}
so the right hand side must be a co-boundary. From this condition we easily find
$
a_{2} = 0
$
so in fact we can take
$
W_{0}^{\mu\nu\rho} = 0.
$
It follows that one can exhibit
$
W^{\mu\nu\rho}
$
in the following form:
\begin{equation}
W^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
\end{equation}
and we also have
\begin{equation}
d_{Q}(W^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = 0.
\end{equation}
(ii) We use again Theorem \ref{m=0} and obtain
\begin{equation}
W^{\mu\nu} = d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho} + W^{\mu\nu}_{0}
\label{w2}
\end{equation}
where
$
W_{0}^{\mu\nu} \in {\cal P}_{0}^{(7)}
$
and we can choose the expressions
$
B^{\mu\nu}
$
and
$
W_{0}^{\mu\nu}
$
antisymmetric. The generic form of the expression
$
W_{0}^{\mu\nu}
$
is:
\begin{eqnarray}
W_{0}^{\mu\nu} = b_{1}~(u^{\mu\rho}~u^{\nu} - u^{\nu\rho}~u^{\mu})~u_{\rho}
+ b_{2}~u^{\mu\rho}~u^{\nu\lambda}~u_{\rho\lambda}
+ b_{3}~R^{(0)\mu\nu;\alpha\beta}~u_{\alpha\rho}~u_{\beta}~u^{\rho}
\nonumber \\
+ b_{4}~(R^{(0)\mu\rho;\alpha\beta}~u^{\nu} - R^{(0)\nu\rho;\alpha\beta}~u^{\mu})
~u_{\alpha\beta}~u_{\rho}
+ b_{5}~(R^{(0)\mu\rho;\alpha\beta}~{u^{\nu}}_{\rho}
- R^{(0)\nu\rho;\alpha\beta}~{u^{\mu}}_{\rho})~u_{\alpha}~u_{\beta};
\end{eqnarray}
many other possible expressions can be eliminated, or reduced to the ones above, if we use the Bianchi identity. We denote by
$
T_{j}~(j = 1,\dots,5)
$
the polynomials multiplied by
$
b_{j}~(j = 1,\dots,5)
$
respectively. If we define the completely antisymmetric expressions
\begin{eqnarray}
b_{1}^{\mu\nu\rho} \equiv u^{\mu}~u^{\nu}~u^{\rho}
\nonumber \\
b_{2}^{\mu\nu\rho} \equiv (R^{(0)\mu\nu;\alpha\beta}~u^{\rho}
+ R^{(0)\nu\rho;\alpha\beta}~u^{\mu} + R^{(0)\rho\mu;\alpha\beta}~u^{\nu})
~u_{\alpha}~u_{\beta}
\end{eqnarray}
we easily obtain that
\begin{eqnarray}
d_{\rho}b_{1}^{\mu\nu\rho} = - T_{1}^{\mu\nu} + d_{Q}b_{1}^{\mu\nu}
\nonumber \\
d_{\rho}b_{2}^{\mu\nu\rho} = T_{5}^{\mu\nu} - 2 T_{3}^{\mu\nu} - T_{4}^{\mu\nu}
+ d_{Q}b_{2}^{\mu\nu}
\end{eqnarray}
where we can choose the expressions
$
b_{j}^{\mu\nu}~(j = 1,2)
$
antisymmetric. It follows that if we redefine the expressions
$
B^{\mu\nu\rho}
$
and
$
B^{\mu\nu}
$
from (\ref{w2}) we can make
$
b_{1} = b_{4} = 0
$
in
$
W_{0}^{\mu\nu}.
$
We substitute the expression of
$
W^{\mu\nu}
$
in the second relation (\ref{W1}) and get:
\begin{equation}
d_{Q}(W^{\mu} - i~d_{\nu}B^{\mu\nu}) = i~d_{\nu}W_{0}^{\mu\nu}
\end{equation}
so the right hand side must be a co-boundary. One easily obtains that
$
b_{3} = b_{5} = 0
$
so we are left with one nontrivial co-cycle corresponding to
$
b \equiv b_{2}:
$
\begin{equation}
W_{0}^{\mu\nu} = b~u^{\mu\rho}~u^{\nu\lambda}~u_{\rho\lambda}.
\end{equation}
Now one proves immediately that
\begin{equation}
d_{\nu}W_{0}^{\mu\nu} = - i~d_{Q}U^{\mu}
\end{equation}
where
\begin{equation}
U^{\mu} \equiv b \left(d_{\rho}\hat{h}^{\mu\nu}~u_{\nu\lambda}~u^{\rho\lambda}
+ {1\over 2}~d^{\lambda}h~u^{\mu\rho}~u_{\rho\lambda}
+ d_{\lambda}\hat{h}_{\nu\rho}~u^{\mu\rho}~u^{\nu\lambda}\right)
\end{equation}
and we have
\begin{equation}
d_{Q}(W^{\mu} - U^{\mu}- i~d_{\nu}B^{\mu\nu}) = 0.
\end{equation}
(iii) Now it is again time to use Theorem \ref{m=0} and obtain
\begin{equation}
W^{\mu} = U^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu} + W^{\mu}_{0}
\end{equation}
where
$
W_{0}^{\mu} \in {\cal P}_{0}^{(7)}.
$
The generic form of such an expression is:
\begin{equation}
W^{\mu}_{0} = c_{1}~u^{\mu\nu}~u_{\nu}
+ c_{2}~R^{(0)\mu\nu;\alpha\beta}~u_{\alpha\beta}~u_{\nu};
\label{w1}
\end{equation}
we denote by
$
T_{j}~(j = 1,2)
$
the polynomials multiplied by
$
c_{j}~(j = 1,2).
$
However let us consider the antisymmetric expressions
\begin{eqnarray}
b_{1}^{\mu\nu} \equiv u^{\mu}~u^{\nu}
\nonumber \\
b_{2}^{\mu\nu} \equiv R^{(0)\mu\nu;\alpha\beta}~~u_{\alpha}~u_{\beta}
\end{eqnarray}
and we have
\begin{eqnarray}
d_{\nu}b_{1}^{\mu\nu} = - T_{1}^{\mu} + d_{Q}b_{1}^{\mu}
\nonumber \\
d_{\nu}b_{2}^{\mu\nu} = T_{2}^{\mu} + d_{Q}b_{2}^{\mu}
\end{eqnarray}
so if we redefine the expressions
$
B^{\mu\nu}
$
and
$
B^{\mu}
$
from (\ref{w1}) we can make
$
c_{1} = c_{2} = 0
$
i.e.
$
W_{0}^{\mu} = 0.
$
As a consequence we have:
\begin{equation}
W^{\mu} = U^{\mu} + d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}.
\end{equation}
Now we get from the first equation (\ref{W1})
\begin{equation}
d_{Q}(W - i~d_{\mu}B^{\mu}) = i~d_{\mu}U^{\mu}
\end{equation}
so the right hand side must be a co-boundary. One can prove that this is not possible so we must have
$
b = 0
$
i.e.
$
U^{\mu} = 0.
$
As a consequence we have
\begin{equation}
W^{\mu} = d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}
\end{equation}
and
\begin{equation}
d_{Q}(W - i~d_{\mu}B^{\mu}) = 0
\end{equation}
so a last use of Theorem \ref{m=0} gives
\begin{equation}
W = d_{Q}B + i~d_{\mu}B^{\mu} + W_{0}
\end{equation}
where
$
W_{0} \in {\cal P}_{0}^{(7)}.
$
But there is no such expression, i.e.
$
W_{0} = 0
$
and we have
\begin{equation}
W = d_{Q}B + i~d_{\mu}B^{\mu}.
\end{equation}
Now we use finite renormalizations to eliminate the expressions
$
B^{I}~(|I| \leq 3)
$
as at the end of the proof of theorem \ref{ano-2} and end up with the expression from the statement.
$\blacksquare$
\begin{rem}
(i) One can extend the preceding result to the massive case also. The complications are only of a technical nature: more terms can appear in the generic form of the expressions
$
W^{I}_{0}
$
but they eventually are eliminated.
(ii) The preceding proof stays true if we do not use parity invariance: as in the preceding remark, more terms can appear in the expressions
$
W^{I}_{0}.
$
(iii) We cannot eliminate
$
B^{\mu\nu\rho\sigma}
$
by finite renormalizations.
(iv) In higher orders of perturbation theory some expressions of the type
$
W^{I}_{0}
$
will survive the algebraic machinery we have used and new ideas are needed to eliminate them.
\end{rem}
We have reduced the gauge invariance problem in the second order to a much simpler computation, namely that of the anomaly
$
A_{7}
$
where the expression
$
W^{\mu\nu\rho}
$
appears. But we have
\begin{thm}
The anomaly
$
A_{7}
$
can be eliminated by finite renormalizations.
\label{a7}
\end{thm}
{\bf Proof:} We consider the massless case. The standard procedure is to show by direct computation that
\begin{eqnarray}
[t^{\mu\nu\rho}(x_{1}), t^{\sigma}(x_{2}) ] =
A^{\mu\nu\rho}(x_{1},x_{2})~\partial^{\sigma}D_{0}(x_{1} - x_{2})
+ A^{\mu\nu\rho;\alpha}(x_{1},x_{2})~
\partial^{\sigma}\partial_{\alpha}D_{0}(x_{1} - x_{2}) + \cdots
\end{eqnarray}
where by
$
\cdots
$
we mean terms for which the index $\sigma$ does not act on the Pauli-Jordan distribution. If we transform this distribution into the corresponding Feynman propagator
$
D_{0}^{F}
$
we obtain in the left hand side of the relation (\ref{g7}) an anomaly of the form:
\begin{equation}
A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
= \delta(x_{2} - x_{1})~A^{\mu\nu\rho}(x_{1},x_{2})
+ [\partial_{\alpha}\delta(x_{2} - x_{1})]~A^{\mu\nu\rho;\alpha}(x_{1},x_{2}).
\end{equation}
Now a simple computation puts the preceding expression in the standard form (\ref{A7-2}):
\begin{equation}
A^{[\mu\nu\rho]}_{7}(x_{1},x_{2})
= \delta(x_{2} - x_{1})~W^{\mu\nu\rho}(x_{1})
+ \partial_{\alpha}\delta(x_{2} - x_{1})~W^{\mu\nu\rho;\alpha}(x_{1}).
\end{equation}
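This computation rests on two standard distributional identities, which we record for convenience (the derivatives are with respect to $x_{2}$):
\begin{eqnarray}
\delta(x_{2} - x_{1})~f(x_{1}, x_{2}) = \delta(x_{2} - x_{1})~f(x_{1}, x_{1})
\nonumber \\
(x_{2} - x_{1})^{\beta}~\partial_{\alpha}\delta(x_{2} - x_{1})
= - \delta_{\alpha}^{\beta}~\delta(x_{2} - x_{1});
\nonumber
\end{eqnarray}
expanding the coefficients $A^{\mu\nu\rho}(x_{1}, x_{2})$ and $A^{\mu\nu\rho;\alpha}(x_{1}, x_{2})$ around $x_{2} = x_{1}$ and using these identities one arrives at Wick polynomials $W^{\mu\nu\rho}$ and $W^{\mu\nu\rho;\alpha}$ depending on the fields at the point $x_{1}$ only.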
The second term can be eliminated by a finite renormalization of the chronological products
$
T(T^{[\mu\nu\rho]}(x_{1}),T^{\sigma}(x_{2}))
$
of the type (\ref{R8c}). Now the only thing to prove is that the first term is a co-boundary:
\begin{equation}
W^{\mu\nu\rho}_{7} = d_{Q}B^{\mu\nu\rho}
\end{equation}
and we can eliminate it by a redefinition of the chronological products
$
T(T^{[\mu\nu\rho]}(x_{1}),T(x_{2})).
$
Finally one can prove that in the massive case no new contributions to the anomaly
$
A_{7}
$
appear.
$\blacksquare$
We now turn to the renormalization problem for the second order of the perturbation theory. We have the following result:
\begin{thm}
In the massless case the finite renormalizations for the second order chronological products are of the form
\begin{equation}
R^{I} = t^{\prime} + d_{Q}B^{I} + i~d_{\mu}B^{I\mu}
\end{equation}
where
$
t^{\prime}
$
has the same form as the interaction Lagrangian $t$ from theorem \ref{T1} (but with a different overall constant) and can be eliminated by a redefinition of the gravitational coupling $\kappa$. The rest of the terms can be eliminated by finite renormalizations of the chronological products. In the massive case the contribution
\begin{equation}
R_{0} = r_{1}~\phi^{3} + r_{2}~\phi~\phi^{(0)}_{\alpha\beta}~\phi^{(0)\alpha\beta}
+ r_{3}~{\phi^{(0)\alpha}}_{\beta}~{\phi^{(0)\beta}}_{\gamma}~{\phi^{(0)\gamma}}_{\alpha}
\end{equation}
of
$
R^{\emptyset}
$
cannot be eliminated.
\end{thm}
{\bf Proof:} According to (\ref{gauge5}) we have the following descent procedure:
\begin{eqnarray}
d_{Q}R = i~d_{\mu}R^{\mu}
\nonumber \\
d_{Q}R^{\mu} = i~d_{\nu}R^{\mu\nu}
\nonumber \\
d_{Q}R^{\mu\nu} = i~d_{\rho}R^{\mu\nu\rho}
\nonumber \\
d_{Q}R^{\mu\nu\rho} = i~d_{\sigma}R^{\mu\nu\rho\sigma}
\nonumber \\
d_{Q}R^{\mu\nu\rho\sigma} = 0
\label{R1}
\end{eqnarray}
and we have the limitations
$
\omega(R^{I}) \leq 6,~gh(R^{I}) = |I|
$
and also the expressions
$
R^{I}
$
are Lorentz covariant. We consider only the case
$
\omega(R^{I}) = 6
$
because the case
$
\omega(R^{I}) \leq 5
$
has been covered by theorem \ref{T1}.
From the last relation we find, using Theorem \ref{m=0} that
\begin{equation}
R^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma} + R_{0}^{\mu\nu\rho\sigma}
\end{equation}
with
$
R_{0}^{\mu\nu\rho\sigma} \in {\cal P}_{0}^{(6)}
$
and we can choose the expressions
$
B^{\mu\nu\rho\sigma}
$
and
$
R_{0}^{\mu\nu\rho\sigma}
$
completely antisymmetric. The generic form of
$
R_{0}^{\mu\nu\rho\sigma}
$
is:
\begin{equation}
R_{0}^{\mu\nu\rho\sigma} =
a_{1}~(u^{\mu\nu}~u^{\rho\lambda}~u^{\sigma} + \cdots)
+ a_{2}~(u^{\mu\lambda}~{u^{\nu}}_{\lambda}~u^{\rho}~u^{\sigma} + \cdots)
+ a^{\prime}~\epsilon^{\mu\nu\rho\sigma}~
u^{\alpha\beta}~u_{\alpha\gamma}~u_{\beta}~u^{\gamma}
\end{equation}
where by
$
\cdots
$
we mean the rest of the terms needed to make the expression completely antisymmetric. We denote by
$
R_{j}~(j = 1,2)
$
the polynomials multiplied by
$
a_{j}~(j = 1,2)
$
respectively. We define the completely antisymmetric expression
\begin{equation}
b^{\mu\nu\rho\sigma\lambda} \equiv u^{\mu\nu}~u^{\rho}~u^{\sigma}~u^{\lambda}
+ \cdots
\end{equation}
which is identically zero, being completely antisymmetric in five indices in four dimensions. On the other hand we have
\begin{eqnarray}
d_{\lambda}b^{\mu\nu\rho\sigma\lambda} = - R_{1}^{\mu\nu\rho\sigma}
+ 2 R_{2}^{\mu\nu\rho\sigma} + d_{Q}b^{\mu\nu\rho\sigma}
\nonumber
\end{eqnarray}
where we can take
$
b^{\mu\nu\rho\sigma}
$
completely antisymmetric. In other words we have proved that
\begin{eqnarray}
R_{1}^{\mu\nu\rho\sigma} - 2 R_{2}^{\mu\nu\rho\sigma} = d_{Q}b^{\mu\nu\rho\sigma}
\label{intersection}
\end{eqnarray}
and this relation can be used to make
$
a_{1} = 0
$
in the expression of
$
R_{0}^{\mu\nu\rho\sigma}.
$
We substitute the expression of
$
R^{\mu\nu\rho\sigma}
$
in the fourth relation (\ref{R1}) and get
\begin{equation}
d_{Q}(R^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = i~d_{\sigma}R_{0}^{\mu\nu\rho\sigma}
\end{equation}
so the right hand side must be a co-boundary. From this condition we easily find
$
a_{2} = a^{\prime} = 0
$
so in fact we can take
$
R_{0}^{\mu\nu\rho\sigma} = 0.
$
It follows that one can exhibit
$
R^{\mu\nu\rho\sigma}
$
in the following form:
\begin{equation}
R^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma}
\end{equation}
and we also have
\begin{equation}
d_{Q}(R^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = 0.
\end{equation}
(ii) We use again Theorem \ref{m=0} and obtain
\begin{equation}
R^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
+ R^{\mu\nu\rho}_{0}
\end{equation}
where
$
R_{0}^{\mu\nu\rho} \in {\cal P}_{0}^{(6)}
$
and we can choose the expressions
$
B^{\mu\nu\rho}
$
and
$
R_{0}^{\mu\nu\rho}
$
completely antisymmetric. The generic form of the expression
$
R_{0}^{\mu\nu\rho}
$
is:
\begin{eqnarray}
R_{0}^{\mu\nu\rho} = b~(u^{\mu}~R^{(0)\nu\rho;\alpha\beta}
+ u^{\nu}~R^{(0)\rho\mu;\alpha\beta} + u^{\rho}~R^{(0)\mu\nu;\alpha\beta})
~u_{\alpha}~u_{\beta};
\end{eqnarray}
other possible expressions can be eliminated, or reduced to the one above, if we use the Bianchi identity. We substitute the expression of
$
R^{\mu\nu\rho}
$
in the third relation (\ref{R1}) and get:
\begin{equation}
d_{Q}(R^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = i~d_{\rho}R_{0}^{\mu\nu\rho}
\end{equation}
so the right hand side must be a co-boundary. One easily obtains that
$
b = 0
$
so we have
$
R_{0}^{\mu\nu\rho} = 0.
$
This means that
\begin{equation}
R^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
\end{equation}
and we have
\begin{equation}
d_{Q}(R^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = 0.
\end{equation}
(iii) Now it is again time to use Theorem \ref{m=0} and obtain
\begin{equation}
R^{\mu\nu} = d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho} + R^{\mu\nu}_{0}
\end{equation}
where
$
R_{0}^{\mu\nu} \in {\cal P}_{0}^{(6)}
$
and we can take the expressions
$
B^{\mu\nu}
$
and
$
R^{\mu\nu}_{0}
$
antisymmetric. But there is no such expression
$
R^{\mu\nu}_{0}
$
i.e. we have
$
R^{\mu\nu}_{0} = 0
$
and it follows that
\begin{equation}
R^{\mu\nu} = d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}.
\end{equation}
We substitute this in the second relation (\ref{R1}) and we obtain
\begin{equation}
d_{Q}(R^{\mu} - i~d_{\nu}B^{\mu\nu}) = 0.
\end{equation}
(iv) Once more we use theorem \ref{m=0} and get
\begin{equation}
R^{\mu} = d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu} + R^{\mu}_{0}.
\end{equation}
with
$
R^{\mu}_{0} \in {\cal P}_{0}^{(6)};
$
but there is no such expression, i.e. we have
$
R^{\mu}_{0} = 0
$
so in fact:
\begin{equation}
R^{\mu} = d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu}.
\end{equation}
We substitute this in the first relation (\ref{R1}) and we obtain
\begin{equation}
d_{Q}(R - i~d_{\mu}B^{\mu}) = 0.
\end{equation}
(v) A last use of Theorem \ref{m=0} gives
\begin{equation}
R = d_{Q}B + i~d_{\mu}B^{\mu} + R_{0}
\end{equation}
where
$
R_{0} \in {\cal P}_{0}^{(6)}.
$
But there is no such expression, i.e.
$
R_{0} = 0
$
and we have
\begin{equation}
R = d_{Q}B + i~d_{\mu}B^{\mu}
\end{equation}
and this finishes the massless case. The massive case brings some new terms in
$
R_{0}
$
which survive and are given in the statement.
$\blacksquare$
\begin{rem}
(i) The expression (\ref{intersection}) is another example of a non-trivial element from
$
{\cal P}_{0} \cap B_{Q}.
$
(ii) In higher orders we can have, even in the massless case, expressions
$
R_{0}^{I}
$
which cannot be eliminated by the descent procedure.
\end{rem}
\section{The Interaction of Gravity with other Quantum Fields\label{int1}}
In \cite{cohomology} we have given the generic structure of the interaction of a system of Yang-Mills fields (particles of spin $1$ and mass
$m \geq 0$)
with ``matter" fields, i.e. scalar fields of spin $0$ and Dirac fields of spin $1/2$.
In this Section we add the interaction with massless gravitons.
First we remind the results from \cite{cohomology}. We take
$
I = I_{1} \cup I_{2} \cup I_{3}
$
a set of indices and for any index we take a quadruple
$
(v^{\mu}_{a}, u_{a}, \tilde{u}_{a},\Phi_{a}), a \in I
$
of fields with the following conventions:
(a) the first entries are vector fields and the last three are scalar fields;
(b) the fields
$
v^{\mu}_{a},~\Phi_{a}
$
obey Bose statistics and the fields
$
u_{a},~\tilde{u}_{a}
$
obey Fermi statistics;
(c) For
$
a \in I_{1}
$
we impose
$
\Phi_{a} = 0
$
and we take the masses to be null
$
m_{a} = 0;
$
(d) For
$
a \in I_{2}
$
we take all the masses strictly positive:
$
m_{a} > 0;
$
(e) For
$
a \in I_{3}
$
we take
$
v_{a}^{\mu}, u_{a}, \tilde{u}_{a}
$
to be null and the fields
$
\Phi_{a} \equiv \phi^{H}_{a}
$
of mass
$
m^{H}_{a} \geq 0;
$
The fields
$
u_{a},~\tilde{u}_{a},~~a \in I_{1} \cup I_{2}
$
and
$
\Phi_{a}~~a \in I_{2}
$
are called {\it ghost fields} and the fields
$
\phi^{H}_{a},~~a \in I_{3}
$
are called {\it Higgs fields};
(f) we also include matter fields, i.e. a set of Dirac fields with Fermi statistics:
$
\Psi_{A}, A \in I_{4};
$
(g) we consider that the Hilbert space is generated by all these fields applied on the vacuum and define in
$
{\cal H}
$
the operator $Q$ according to the following formulas for all indices
$
a \in I:
$
\begin{eqnarray}
~[Q, v^{\mu}_{a}] = i~\partial^{\mu}u_{a},\qquad
[Q, u_{a}] = 0,
\nonumber \\
~[Q, \tilde{u}_{a}] = - i~(\partial_{\mu}v^{\mu}_{a} + m_{a}~\Phi_{a})
\qquad
[Q,\Phi_{a}] = i~m_{a}~u_{a},
\label{Q-general}
\end{eqnarray}
\begin{equation}
[Q,\Psi_{A}] = 0,
\end{equation}
and
\begin{equation}
Q\Omega = 0.
\end{equation}
Here
$
[\cdot,\cdot]
$
is the graded commutator. In \cite{descent} we have determined the most general interaction between these fields in theorem 6.1:
\begin{thm}
Let $T$ be a relative co-cycle for
$
d_{Q}
$
which is at least tri-linear in the fields and is of canonical dimension
$
\omega(T) \leq 4
$
and ghost number
$
gh(T) = 0.
$
Then:
(i) $T$ is (relatively) cohomologous to a non-trivial co-cycle of the form:
\begin{eqnarray}
t_{YM} = f_{abc} \left( {1\over 2}~v_{a\mu}~v_{b\nu}~F_{c}^{\nu\mu}
+ u_{a}~v_{b}^{\mu}~d_{\mu}\tilde{u}_{c}\right)
\nonumber \\
+ f^{\prime}_{abc} (\Phi_{a}~\phi_{b}^{\mu}~v_{c\mu} + m_{b}~\Phi_{a}~\tilde{u}_{b}~u_{c})
\nonumber \\
+ {1\over 3!}~f^{\prime\prime}_{abc}~\Phi_{a}~\Phi_{b}~\Phi_{c}
+ {1\over 4!}~\sum_{a,b,c,d \in I_{3}}~g_{abcd}~\Phi_{a}~\Phi_{b}~\Phi_{c}~\Phi_{d}
+ j^{\mu}_{a}~v_{a\mu} + j_{a}~\Phi_{a}
\end{eqnarray}
where we make the following conventions:
$
f_{abc} = 0
$
if one of the indices is in
$
I_{3};
$
$
f^{\prime}_{abc} = 0
$
if
$
c \in I_{3}
$
or one of the indices $a$ and $b$ is from
$
I_{1};
$
$
j^{\mu}_{a} = 0
$
if
$
a \in I_{3};
$
$
j_{a} = 0
$
if
$
a \in I_{1}.
$
Moreover we have:
(a) The constants
$
f_{abc}
$
are completely antisymmetric
\begin{equation}
f_{abc} = f_{[abc]}.
\label{anti-f}
\end{equation}
(b) The expressions
$
f^{\prime}_{abc}
$
are antisymmetric in the indices $a$ and $b$:
\begin{equation}
f^{\prime}_{abc} = - f^{\prime}_{bac}
\label{anti-f'}
\end{equation}
and are connected to
$f_{abc}$
by:
\begin{equation}
f_{abc}~m_{c} = f^{\prime}_{cab} m_{a} - f^{\prime}_{cba} m_{b}.
\label{f-f'}
\end{equation}
(c) The (completely symmetric) expressions
$f^{\prime\prime}_{abc} = f^{\prime\prime}_{\{abc\}}$
verify
\begin{equation}
f^{\prime\prime}_{abc}~m_{c} = \left\{\begin{array}{rcl}
{1 \over m_{c}}~f'_{abc}~(m_{a}^{2} - m_{b}^{2}) & \mbox{for} & a, b \in I_{3}, c \in I_{2} \\
- {1 \over m_{c}}~f'_{abc}~m_{b}^{2} & \mbox{for} & a, c \in I_{2}, b \in I_{3}.
\end{array}\right.
\label{f"}
\end{equation}
(d) the expressions
$
j^{\mu}_{a}
$
and
$
j_{a}
$
are bi-linear in the Fermi matter fields; in tensor notation:
\begin{eqnarray}
j_{a}^{\mu} = \overline{\psi} t^{\epsilon}_{a} \otimes \gamma^{\mu}\gamma_{\epsilon} \psi \qquad
j_{a} = \overline{\psi} s^{\epsilon}_{a} \otimes \gamma_{\epsilon} \psi
\label{current}
\end{eqnarray}
where for every
$
\epsilon = \pm
$
we have defined the chiral projectors of the algebra of Dirac matrices
$
\gamma_{\epsilon} \equiv {1\over 2}~(I + \epsilon~\gamma_{5})
$
and
$
t^{\epsilon}_{a},~s^{\epsilon}_{a}
$
are
$
|I_{4}| \times |I_{4}|
$
matrices; the summation over
$
\epsilon = \pm
$
is assumed. If $M$ is the mass matrix
$
M_{AB} = \delta_{AB}~M_{A}
$
then we must have
\begin{equation}
d_{\mu}j^{\mu}_{a} = m_{a}~j_{a}
\qquad \Leftrightarrow \qquad
m_{a}~s_{a}^{\epsilon} = i(M~t^{\epsilon}_{a} - t^{-\epsilon}_{a}~M)~~(\forall~a \in I_{1} \cup I_{2}).
\label{conserved-current}
\end{equation}
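For the reader's convenience we record the elementary algebra of the chiral projectors used in (\ref{current}) and (\ref{conserved-current}); this short verification is our own addition and uses only
$
\gamma_{5}^{2} = I
$
and the anticommutation
$
\{\gamma^{\mu},\gamma_{5}\} = 0:
$
\begin{gather*}
\gamma_{\epsilon}~\gamma_{\epsilon'} = {1\over 4}~(I + \epsilon~\gamma_{5})(I + \epsilon'~\gamma_{5})
= {1\over 4}~\big((1 + \epsilon\epsilon')I + (\epsilon + \epsilon')\gamma_{5}\big)
= \delta_{\epsilon\epsilon'}~\gamma_{\epsilon},
\\
\gamma_{+} + \gamma_{-} = I, \qquad
\gamma^{\mu}~\gamma_{\epsilon} = \gamma_{-\epsilon}~\gamma^{\mu}.
\end{gather*}
The last identity shows that the mass matrix $M$ connects opposite chiralities, which is why
$
t^{-\epsilon}_{a}
$
appears in (\ref{conserved-current}).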
(ii) The relation
$
d_{Q}t_{YM} = i~d_{\mu}t_{YM}^{\mu}
$
is verified by:
\begin{equation}
t_{YM}^{\mu} = f_{abc} \left( u_{a}~v_{b\nu}~F^{\nu\mu}_{c} -
{1\over 2} u_{a}~u_{b}~d^{\mu}\tilde{u}_{c} \right)
+ f^{\prime}_{abc}~\Phi_{a}~\phi_{b}^{\mu}~u_{c}
+ j^{\mu}_{a}~u_{a}
\end{equation}
(iii) The relation
$
d_{Q}t_{YM}^{\mu} = i~d_{\nu}t_{YM}^{\mu\nu}
$
is verified by:
\begin{equation}
t_{YM}^{\mu\nu} \equiv {1\over 2} f_{abc}~u_{a}~u_{b}~F_{c}^{\mu\nu}.
\end{equation}
(iv) The constants
$
f_{abc},~f^{\prime}_{abc},~f^{\prime\prime}_{abc}
$
and
$
g_{abcd}
$
are real and the matrices
$
t_{a}^{\epsilon},~s_{a}^{\epsilon}
$
are Hermitean.
\end{thm}
Now we extend the argument to include massless gravitons. We include in the set of fields generating the Hilbert space
$
{\cal H}
$
the fields
$
h_{\mu\nu},~u^{\rho},~\tilde{u}^{\sigma}
$
the first being a tensor field with Bose statistics and the last two being vector fields with Fermi statistics. We also extend the definition of the gauge charge
$Q$ given by (\ref{Q-general}) with
\begin{eqnarray}
~[Q, h_{\mu\nu}] = - {i\over 2}~(\partial_{\mu}u_{\nu} + \partial_{\nu}u_{\mu}
- \eta_{\mu\nu} \partial_{\rho}u^{\rho}),\qquad
[Q, u_{\mu}] = 0,\qquad
[Q, \tilde{u}_{\mu}] = i~\partial^{\nu}h_{\mu\nu}
\end{eqnarray}
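As a consistency check (this verification is our own addition), the extended gauge charge is again nilpotent in the graded sense. The only nontrivial case is the ghost
$
\tilde{u}_{\mu},
$
where the wave equation
$
\square u_{\mu} = 0
$
satisfied by the massless ghost field must be used:
\begin{equation*}
[Q, \{Q, \tilde{u}_{\mu}\}] = i~\partial^{\nu}~[Q, h_{\mu\nu}]
= {1\over 2}~\partial^{\nu}(\partial_{\mu}u_{\nu} + \partial_{\nu}u_{\mu}
- \eta_{\mu\nu}~\partial_{\rho}u^{\rho})
= {1\over 2}~\square u_{\mu} = 0.
\end{equation*}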
and we can easily generalize theorem \ref{fock-0} and the corresponding result for the Yang-Mills system from \cite{cohomology}: the Fock space describes in this case massless gravitons, spin $0$ and spin $1$ particles. By definition the ghost number is the sum of the ghost numbers of the YM and gravity sectors. We want to extend the preceding theorem to this more general case. For simplicity we treat here only the case of massless Yang-Mills fields i.e. we take in the general scheme from above
$
I_{2} = 0.
$
Beside the expression
$
t_{YM}
$
given above and
$
t_{gh}
$
determined in theorem \ref{m=0} we need the interaction between the two sets of fields (Yang-Mills and gravitational). The result is described in the following:
\begin{thm}
Suppose that the interaction Lagrangian
$
T_{\rm int}
$
is restricted by Lorentz covariance, is at least tri-linear in the fields and
$
\omega(T_{\rm int}) \leq 5,~~gh(T_{\rm int}) = 0.
$
We also suppose that
$
I_{2} = 0.
$
Then: (i)
$
T_{\rm int}
$
is cohomologous to the expression
\begin{eqnarray}
t_{\rm int} \equiv \sum_{a,b \in I_{1}}~f_{ab}~
(4 h_{\mu\nu}~F_{a}^{\mu\rho}~{F_{b}^{\nu}}_{\rho}
- h~F_{a\rho\sigma}~F_{b}^{\rho\sigma}
+ 4~u_{\mu}~d_{\nu}\tilde{u}_{a}~F_{b}^{\mu\nu})
\nonumber \\
+ \sum_{c,d \in I_{3}}~f^{\prime}_{cd}~
\left(h_{\mu\nu}~d^{\mu}\Phi_{c}~d^{\nu}\Phi_{d}
- {m^{2}_{c} + m^{2}_{d} \over 4}~h~\Phi_{c}~\Phi_{d}\right)
\nonumber \\
+ h_{\mu\nu}~
(d^{\mu}\bar{\psi}~c^{\epsilon} \otimes \gamma^{\nu}\gamma_{\epsilon}\psi
- \bar{\psi}~c^{\epsilon} \otimes \gamma^{\nu}\gamma_{\epsilon}d^{\mu}\psi).
\label{t-int}
\end{eqnarray}
Moreover we have
(a) the constants
$
f_{ab}
$
are symmetric
$
f_{ab} = f_{ba};
$
(b)
the constants
$
f^{\prime}_{cd}
$
are symmetric
$
f^{\prime}_{cd}= f^{\prime}_{dc};
$
also if we denote by $m$ the mass matrix of the scalar fields
$
m_{cd} \equiv m_{c}~\delta_{cd},~\forall c,d \in I_{3}
$
it commutes with the matrix
$
f^{\prime}_{cd}:
$
\begin{equation}
[ m, f^{\prime} ] = 0
\end{equation}
(c) the matrices
$
c^{\epsilon}
$
verify
\begin{equation}
c^{\epsilon}~M = M~c^{- \epsilon}
\end{equation}
where $M$ is the mass matrix of the Dirac fields:
$
M_{AB} \equiv M_{A}~\delta_{AB},~\forall A,B \in I_{4}.
$
(ii) The relation
$
d_{Q}t_{\rm int} = i~d_{\mu}t_{\rm int}^{\mu}
$
is verified by:
\begin{eqnarray}
t_{\rm int}^{\mu} \equiv \sum_{a,b \in I_{1}}~f_{ab}~
( u^{\mu}~F_{a}^{\rho\sigma}~F_{b\rho\sigma}
+ 4~u^{\rho}~F_{a}^{\mu\nu}~F_{b\nu\rho} )
\nonumber \\
+ \sum_{c,d \in I_{3}}~f^{\prime}_{cd}~
\left({1\over 2}~u^{\mu}~d_{\nu}\Phi_{c}~d^{\nu}\Phi_{d}
- u_{\nu}~d^{\mu}\Phi_{c}~d^{\nu}\Phi_{d}
- {m^{2}_{c} + m^{2}_{d} \over 4}~u^{\mu}~\Phi_{c}~\Phi_{d}\right)
\nonumber \\
- {1\over 2}~u_{\nu}~
[ (d^{\mu}\bar{\psi}~c^{\epsilon} \otimes \gamma^{\nu}\gamma_{\epsilon}\psi
- \bar{\psi}~c^{\epsilon} \otimes \gamma^{\nu}\gamma_{\epsilon}d^{\mu}\psi)
+ (\mu \leftrightarrow \nu) ]
\end{eqnarray}
and we also have
\begin{equation}
d_{Q}t_{\rm int}^{\mu} = 0.
\end{equation}
(iii) the constants
$
f_{ab}
$
and
$
f^{\prime}_{cd}
$
are real and we also have the Hermiticity property
$
(c^{\epsilon})^{\dagger} = c^{- \epsilon}.
$
\label{T-int}
\end{thm}
{\bf Proof:} (i) By hypothesis we have
\begin{equation}
d_{Q}T_{\rm int} = i~d_{\mu}T_{\rm int}^{\mu}
\label{descent-tint}
\end{equation}
and the descent procedure leads to
\begin{eqnarray}
d_{Q}T_{\rm int}^{\mu} = i~d_{\nu}T_{\rm int}^{\mu\nu}
\nonumber\\
d_{Q}T_{\rm int}^{\mu\nu} = i~d_{\rho}T_{\rm int}^{\mu\nu\rho}
\nonumber \\
d_{Q}T_{\rm int}^{\mu\nu\rho} = i~d_{\sigma}T_{\rm int}^{\mu\nu\rho\sigma}
\nonumber \\
d_{Q}T_{\rm int}^{\mu\nu\rho\sigma} = 0
\label{descent-T-int}
\end{eqnarray}
and we can choose the expressions
$
T_{\rm int}^{I}
$
to be Lorentz covariant; we also have
\begin{equation}
gh(T_{\rm int}^{I}) = |I|, \omega(T_{\rm int}^{I}) \leq 5.
\end{equation}
From the last relation we find, using Theorem \ref{m=0} and the corresponding result from \cite{cohomology}, that
\begin{equation}
T_{\rm int}^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma}
+ T_{{\rm int},0}^{\mu\nu\rho\sigma}
\end{equation}
with
$
T_{{\rm int},0}^{\mu\nu\rho\sigma} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu\rho\sigma}
$
and
$
T_{{\rm int},0}^{\mu\nu\rho\sigma}
$
completely antisymmetric. The generic form of
$
T_{{\rm int},0}^{\mu\nu\rho\sigma}
$
is:
\begin{equation}
T_{{\rm int},0}^{\mu\nu\rho\sigma} = \sum_{a \in I_{3}}~ f_{a}~u^{\mu}~u^{\nu}~u^{\rho}~u^{\sigma}~\Phi_{a}.
\end{equation}
If we substitute the expression obtained for
$
T_{\rm int}^{\mu\nu\rho\sigma}
$
in the third relation (\ref{descent-T-int}) we find
\begin{equation}
d_{Q}(T_{\rm int}^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = i~d_{\sigma}T_{{\rm int},0}^{\mu\nu\rho\sigma}
\end{equation}
so the expression on the right-hand side must be a co-boundary and we immediately obtain
$
f_{a} = 0.
$
It follows that
\begin{equation}
T_{\rm int}^{\mu\nu\rho\sigma} = d_{Q}B^{\mu\nu\rho\sigma}
\end{equation}
and
\begin{equation}
d_{Q}(T_{\rm int}^{\mu\nu\rho} - i~d_{\sigma}B^{\mu\nu\rho\sigma}) = 0
\end{equation}
so we obtain
\begin{equation}
T_{\rm int}^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
+ T^{\mu\nu\rho}_{{\rm int},0}
\end{equation}
where
$
T_{{\rm int},0}^{\mu\nu\rho} \in {\cal P}_{0}^{(5)};
$
we can choose the expressions
$
B^{\mu\nu\rho}
$
and
$
T_{{\rm int},0}^{\mu\nu\rho}
$
completely antisymmetric. The generic form of
$
T_{{\rm int},0}^{\mu\nu\rho}
$
has two contributions: one even and one odd with respect to parity. We do not give the generic form here but only the result: the second relation (\ref{descent-T-int}) gives
\begin{equation}
d_{Q}(T_{\rm int}^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho})
= i~d_{\rho}T_{{\rm int},0}^{\mu\nu\rho}
\end{equation}
so the right-hand side must be a co-boundary and a direct computation gives that in fact
$
T_{{\rm int},0}^{\mu\nu\rho} = 0.
$
It follows that
\begin{equation}
T_{\rm int}^{\mu\nu\rho} = d_{Q}B^{\mu\nu\rho} + i~d_{\sigma}B^{\mu\nu\rho\sigma}
\end{equation}
and
\begin{equation}
d_{Q}(T_{\rm int}^{\mu\nu} - i~d_{\rho}B^{\mu\nu\rho}) = 0.
\end{equation}
(ii) We obtain from the preceding relation that
\begin{equation}
T_{\rm int}^{\mu\nu} = d_{Q}B^{\mu\nu} + i~d_{\rho}B^{\mu\nu\rho}
+ T^{\mu\nu}_{{\rm int},0}
\end{equation}
where
$
T_{{\rm int},0}^{\mu\nu} \in {\cal P}_{0}^{(5)}
$
and we can choose the expressions
$
B^{\mu\nu}
$
and
$
T_{{\rm int},0}^{\mu\nu}
$
antisymmetric. Again we do not give the generic form of the expression
$
T_{{\rm int},0}^{\mu\nu}
$
but we give the final result of this standard computation: by conveniently modifying the expressions
$
B^{I}
$
we can arrange that
$
T_{{\rm int},0}^{\mu\nu} = 0.
$
We substitute the expression of
$
T_{\rm int}^{\mu\nu}
$
in the first relation (\ref{descent-T-int}) and get:
\begin{equation}
d_{Q}(T_{\rm int}^{\mu} - i~d_{\nu}B^{\mu\nu}) = 0
\end{equation}
(iii) Now we again use known results and obtain
\begin{equation}
T_{\rm int}^{\mu} = d_{Q}B^{\mu} + i~d_{\nu}B^{\mu\nu} + T^{\mu}_{{\rm int},0}
\end{equation}
where
$
T_{{\rm int},0}^{\mu} \in {\cal P}_{0}^{(5)}.
$
Now we get from the first relation (\ref{descent-T-int})
\begin{equation}
d_{Q}(T_{\rm int} - i~d_{\mu}B^{\mu}) = i~d_{\mu}T^{\mu}_{{\rm int},0}
\end{equation}
so the right-hand side must be a co-boundary. If one writes the generic form of
$
T_{{\rm int},0}^{\mu}
$
one gets after tedious computations that by modifying the expressions
$
B^{I}
$
one can take
\begin{equation}
T_{{\rm int},0}^{\mu} = t^{\mu}_{\rm int}
\end{equation}
with
$
t^{\mu}_{\rm int}
$
the expression from the statement of the theorem. Because we have
$
d_{Q}t_{\rm int} = i~d_{\mu}t_{\rm int}^{\mu}
$
we get
\begin{equation}
d_{Q}(T_{\rm int} - t_{\rm int} - i~d_{\mu}B^{\mu}) = 0
\end{equation}
so known results lead to
\begin{equation}
T_{\rm int} = t_{\rm int} + d_{Q}B + i~d_{\mu}B^{\mu} + T_{{\rm int},0}
\end{equation}
where
$
T_{{\rm int},0} \in {\cal P}_{0}^{(5)}.
$
But there is no such expression, i.e.
$
T_{{\rm int},0} = 0
$
and we have
\begin{equation}
T_{\rm int} = t_{\rm int} + d_{Q}B + i~d_{\mu}B^{\mu}
\end{equation}
which is the final result.
$\blacksquare$
\begin{rem}
(i) We note that we have obtained in a natural way the known expression of the energy-momentum tensor
$
T_{\mu\nu}
$
which is, up to a factor, the coefficient of
$
h^{\mu\nu}
$
from the expression
$
t_{\rm int}.
$
However, there is a supplementary ghost term in the first line of the formula (\ref{t-int}). This is due to the fact already explained in the Introduction: because we cannot impose in the quantum framework the Maxwell equation
\begin{equation}
\partial_{\mu}v_{a}^{\mu} = 0
\end{equation}
the energy-momentum tensor
\begin{equation}
T^{\mu\nu} \equiv \sum_{a,b \in I_{1}}~f_{ab}~
\left(F_{a}^{\mu\rho}~{F_{b}^{\nu}}_{\rho} -
{1\over 4}\eta^{\mu\nu}~F_{a\rho\sigma}~F_{b}^{\rho\sigma} \right)
\end{equation}
no longer satisfies the divergenceless condition
\begin{equation}
\partial_{\mu}T^{\mu\nu} = 0
\end{equation}
and without the extra ghost term in the first line of formula (\ref{t-int}) we do not have gauge invariance.
We note however that the ghost term in the first line of formula (\ref{t-int}) gives a null contribution between physical states (described as in theorem \ref{fock-0}) and this result propagates to all orders of perturbation theory. So it can be neglected in practical computations.
(ii) There are other approaches to the quantization of the massless vector fields in which one can impose the condition
$
\partial_{\mu}v_{a}^{\mu} = 0
$
namely the so-called Coulomb gauge, but the price to pay is the loss of manifest Lorentz covariance and the appearance of a non-local interaction term, so the Epstein-Glaser method cannot be implemented in this approach.
(iii) One can prove that there are no bi-linear solutions for the interaction.
\end{rem}
\section{Conclusions}
The cohomological methods presented in a previous paper \cite{cohomology} lead to a simple understanding of quantum gravity in lower orders of perturbation theory. If we use the Wess-Zumino consistency equations we can give simple proofs of gauge invariance and renormalization in the second order of perturbation theory for massless and massive pure gravity.
The descent technique can be used to give the most general interaction including Yang-Mills fields (massless and massive), matter and massless gravity. In this paper we have considered only massless Yang-Mills fields and the general case will be treated in a forthcoming paper. Further restrictions follow from the cancellation of the anomalies in the second order of perturbation theory. The analysis can be extended to the third order of perturbation theory and it will also be done elsewhere. One should expect the appearance of the known gravitational anomaly (see for instance \cite{We}).
\vskip 1cm
{\bf Acknowledgment:} The author wishes to thank Professor G. Scharf for the critical reading of the typescript and many valuable suggestions.
\section{Introduction}
Spectral factorization plays a prominent role in a wide
range of fields in system theory and control engineering.
In the scalar case, which arises in systems with single
input and single output, the factorization problem is relatively
easy and several classical methods exist to perform
this task (see a survey paper \cite{SayKai}). The matrix spectral
factorization, which arises in multi-dimensional systems,
is significantly more difficult. Following Wiener's original
efforts \cite{Wie58}, dozens of papers addressed the development
of appropriate algorithms. None of the above methods can be applied directly to solve the $J$-spectral factorization problem.
The Janashia–Lagvilava method is a relatively new algorithm for matrix spectral factorization \cite{JL99}, \cite{IEEE} which proved to be rather effective \cite{IEEE2018}. To describe this method of $r\times r$ matrix spectral factorization in a few words, one can say that it first performs a lower-upper triangular factorization with causal entries on the diagonal and then carries out an approximate spectral factorization of the leading principal $m\times m$ submatrices step-by-step, $m=2,3,\ldots,r$. The decisive role in the latter process is played by unitary matrix functions
of a certain structure, which eliminate many technical difficulties connected with the computation.
In the present paper, we extend the Janashia-Lagvilava method to the $J$-spectral factorization case by using appropriately chosen $J$-unitary matrix functions instead of the aforementioned unitary matrices. So far, the method can be used for matrices whose leading principal submatrices all have constant signatures; however, we hope to remove this restriction in future work. Furthermore, the method has the potential of identifying a simple necessary and sufficient condition for the existence of a $J$-spectral factorization and of being further extended towards the factorization of a wider class of Hermitian matrices.
The performed numerical simulations confirm that the proposed algorithm, whenever applicable, is as effective as the existing matrix spectral factorization algorithm. On several occasions, the algorithm can also deal with so-called singular cases, where the zeros of the determinant occur on the boundary. Like the Janashia–Lagvilava method, the algorithm can be used to $J$-factorize non-rational matrices as well.
\section{Formulation of the problem}
Let
\begin{equation}\label{1}
S(z)=\begin{pmatrix} s_{11}(z)& s_{12}(z)& \cdots&s_{1r}(z)\\
s_{21}(z)& s_{22}(z)& \cdots&s_{2r}(z)\\
\vdots&\vdots&\vdots&\vdots\\s_{r1}(z)& s_{r2}(z)&
\cdots&s_{rr}(z)\end{pmatrix},
\end{equation}
$z\in\mathbb{T}:=\{z\in\mathbb{C}:|z|=1\}$, be a Hermitian $r\times r$ matrix function of constant signature, i.e. $S(z)=S^*(z)$ and the numbers of positive and negative eigenvalues of $S(z)$ are constants $p$ and $q$, with $p+q=r$, for a.a. $z\in\mathbb{T}$.
$J$-spectral factorization of $S$ is by definition the representation
\begin{equation}\label{2}
S(z)=S_+(z)\,J\, S_+^*(z),
\end{equation}
where $S_+$ can be extended to a stable analytic function inside $\mathbb{T}$, the matrix function $S_+^*$ is the Hermitian conjugate of $S_+$, and $J=(I_p\,,\;-I_q)$ is the diagonal matrix with $p$ ones and $q$ negative ones on the diagonal. We do not specify the classes to which $S$ and $S_+$ belong. For simplicity, one can assume that they are (Laurent) matrix polynomials.
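For orientation, we mention a simple constant example (our own illustration): the matrix
\begin{equation*}
S = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\end{equation*}
has signature $J = {\rm diag}(1,-1)$ and admits the $J$-spectral factorization
\begin{equation*}
S = S_+\,J\,S_+^{*}, \qquad
S_+ = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\end{equation*}
where the constant $S_+$ is trivially analytic and stable since $\det S_+ = -1 \neq 0$.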
The necessity of the factorization \eqref{2} arises in $\mathcal{H}_\infty$ control \cite{Fr87}, \cite{Ki97} and its solution is much more involved than the (standard) spectral factorization of positive definite matrix functions (when $p=r$ and $q=0$). Various algorithms for $J$-spectral factorization appear in the literature \cite{Se94}, \cite{St04}, mostly for rational matrices.
Below, we present a new algorithm of $J$-spectral factorization which is an extension of Janashia-Lagvilava matrix spectral factorization method. Similarly to this method, we first perform a lower-upper triangular $J$-factorization of \eqref{1} with analytic entries on the diagonal.
This can be achieved only in the case where all the leading principal minors of $S$ have constant signs almost everywhere on $\mathbb{T}$, therefore, we impose this restriction on \eqref{1}.
Then we recursively $J$-factorize the leading principal $m\times m$ submatrices of $S$, $m=2,3,\ldots, r$.
\section{Notation}
For any set $\mathcal{S}$, we denote by $\mathcal{S}^{m\times n}$ the set of $m\times n$ matrices with entries from $\mathcal{S}$.
For a matrix $M\in\mathbb{C}^{r\times r}$ we use the standard notation $M^T$ and $M^*:=\overline{M}^T$ for the transpose and the Hermitian conjugate of $M$. The leading principal $m\times m$ submatrix of $M$, $m\leq r$, is denoted by $[M]_{m\times m}$. The same notation is used for matrix functions as well.
The letter $J$ always denotes a signature, i.e. a square diagonal matrix with entries $\pm1$ on the diagonal. The sizes and entries of $J$ may vary on different occasions. We say that a Hermitian matrix
$A=A^*\in\mathbb{C}^{m\times m}$ has the signature
$J=(I_p\,,\;-I_q)$ if $A$ has $p$ positive and $q$ negative eigenvalues.
For a fixed signature matrix $J$, the set of $J$-unitary matrices, $\mathcal{U}_J$, is a group. Furthermore, $U\in\mathcal{U}_J \Longrightarrow U^T\in \mathcal{U}_J$, since $AJB=J \Longrightarrow BJA=J$.
The set of polynomials is denoted by $\mathcal{P}^+$, and the set of Laurent polynomials,
\begin{equation}\label{14.04}
P(z)=\sum\nolimits_{k=-n}^m p_kz^k,
\end{equation}
is denoted by $\mathcal{P}$. The set of Laurent polynomials of degree at most $N$ (i.e. $0\leq n,m\leq N$ in \eqref{14.04}) is denoted by $\mathcal{P}_N$, and
$$\mathcal{P}_N^+=\mathcal{P}_N \cap\mathcal{P}^+.$$
For Laurent polynomial \eqref{14.04}, let
$$
\widetilde{P}(z)=\sum\nolimits_{k=-n}^m \overline{p_k}z^{-k}.
$$
We also set $\mathcal{P}_N^-:=\{P:\widetilde{P}\in\mathcal{P}_N^+\}$. Obviously, $\mathcal{P}_N^-\cap \mathcal{P}_N^+$ consists of constant functions only.
A matrix polynomial $\mathbf{U}\in\mathcal{P}^{m\times m}$ is called $J$-unitary if $\mathbf{U}(z)$ is $J$-unitary for every $z\in\mathbb{T}$.
The $k$th Fourier coefficient of an integrable function $f\in
L_1(\mathbb{T})$ is denoted by $c_k\{f\}$.
If a function $f$ is square integrable, $f\in L_2=L_2(\mathbb{T})$, then
$$
f(z)=\sum\nolimits_{k=-\infty}^\infty c_k\{f\}z^k \;\;\text{ for a.a. }z\in\mathbb{T},
$$
and $\|f\|_2^2=2\pi \sum _{k=-\infty}^\infty |c_k\{f\}|^2$.
An integrable function $f$ is called analytic or causal if its Fourier expansion has the form
$$
f\sim \sum\nolimits_{k=0}^\infty c_k\{f\}z^k.
$$
It is called stable if $f(z)\not=0$ for each $z$ with $|z|<1$, and it is called optimal if (see, e.g., \cite[Th. 17.17]{Rud})
$$
\log |f(0)|=\frac{1}{2\pi} \int_0^{2\pi} \log |f(e^{it})|\,dt.
$$
For a positive integrable function $f$ defined on $\mathbb{T}$, which satisfies the Paley-Wiener condition
$$
\log f\in L_1,
$$
there exists a unique (up to a constant multiple with absolute value 1) causal, stable, and optimal function $f^+$ such that
$$
f(z)= f^+(z)\overline{ f^+(z)}=|f^+(z)|^2 \;\text{ for a.a. }z\in\mathbb{T}.
$$
Such a function $f^+$ is called the (canonical) scalar spectral factor of $f$ and it can be given explicitly by the formula
$$
f^+(z)= \sqrt{f(z)}\exp\left(\frac12 i \mathcal{C}\big(\log f\big)(z)\right),
$$
where $\mathcal{C}(f)$ stands for the harmonic conjugate of $f$:
$$
\mathcal{C} (f)(z)=\frac{1}{2\pi}\,(P) \int_0^{2\pi}f(e^{i\tau}) \cot\frac{t-\tau}{2}\,d\tau,\;\;\;z=e^{it}.
$$
This formula is the core of the existing Exp-Log algorithm for scalar spectral factorization. It is the claim of the well-known Fej\'er-Riesz lemma that if, in addition, $f\in\mathcal{P}_N$, then $f^+\in\mathcal{P}_N^+$. In Section V, we use the special notation
\begin{equation}\label{ssf}
f^+=\sqrt[+]{f}
\end{equation}
for the scalar spectral factor.
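The Exp-Log formula above can be sketched numerically as follows (this snippet is our own illustration, and the function name is ours). It computes samples of $\sqrt[+]{f}$ on a uniform grid of $\mathbb{T}$: take $\log f$, keep the causal half of its Fourier series with the $0$th coefficient halved, and exponentiate:

```python
import numpy as np

def scalar_spectral_factor(f_vals):
    """Samples of the canonical spectral factor f^+ of a positive
    function f given by its samples on z_j = exp(2*pi*1j*j/n).
    Implements f^+ = exp(analytic part of log f), i.e. the
    Exp-Log (cepstral) method."""
    n = len(f_vals)
    c = np.fft.fft(np.log(f_vals)) / n   # Fourier coefficients of log f
    h = np.zeros(n, dtype=complex)
    h[0] = c[0] / 2                      # half of the 0th coefficient
    h[1:n // 2] = c[1:n // 2]            # analytic (causal) half
    return np.exp(n * np.fft.ifft(h))    # f^+ on the grid
```

For $f(z)=|1+z/2|^{2}$ on $\mathbb{T}$ this recovers the factor $1+z/2$ up to machine precision, in agreement with the Fej\'er-Riesz lemma.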
Finally, $\delta_{ij}$ stands for the Kronecker delta, i.e. $\delta_{ij}=1$ if $i=j$ and $\delta_{ij}=0$ otherwise.
\section{The main observation}
In this section we generalize the main theorem of the Janashia-Lagvilava method to $J$-unitary matrices.
\begin{theorem} {\rm (cf. \cite[Th. 1]{IEEE})} Let $F$ be an $m\times m$ matrix function of the form
\begin{equation}\label{IE10}
F=\begin{pmatrix}1&0&\cdots&0&0\\
0&1&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
0&0&\cdots&1&0\\
\zeta^-_{1}&\zeta^-_{2}&\cdots&\zeta^-_{m-1}&f^+
\end{pmatrix},
\end{equation}
where
\begin{equation}\label{IE11}
\zeta^-_j\in \mathcal{P}_N^-,\;j=1,2,\ldots, m-1;\;
f^{+}\in\mathcal{P}_N^+,\;f^+(0)\not=0,
\end{equation}
for some positive integer $N$, and let $J$ be an arbitrary signature. Then (almost surely) there exists a $J$-unitary matrix function $U$ of
the form
\begin{equation}\label{IE12}
U=\begin{pmatrix}u_{11}&u_{12}&\cdots&u_{1m}\\
u_{21}&u_{22}&\cdots&u_{2m}\\
\vdots&\vdots&\vdots&\vdots\\
u_{m-1,1}&u_{m-1,2}&\cdots&u_{m-1,m}\\[3mm]
\widetilde{u_{m1}}&\widetilde{u_{m2}}&\cdots&\widetilde{u_{mm}}\\
\end{pmatrix},
\end{equation}
where
\begin{equation}
u_{ij}\in \mathcal{P}_N^+,\;\;i,j=1,2,\ldots,m,
\end{equation}
with constant determinant, such that
\begin{equation}\label{IE14}
F U\in (\mathcal{P}^+_N)^{m\times m}.
\end{equation}
\end{theorem}
\begin{remark}
A sketch of the proof below indicates the isolated cases where the theorem fails to hold. This is the sense in which we use the term ``almost surely". Whenever the solution exists, it is constructed explicitly.
\end{remark}
\smallskip
The proof follows literally the proof of Theorem 1 in \cite{IEEE}. We only need to change the signs of some expressions accordingly. In this way, we naturally arrive at $J$-unitary matrix functions instead of unitary ones.
Indeed, for given functions $\zeta^-_j$, $j=1,2,\ldots,m-1$,
$f^+$ satisfying \eqref{IE11}, and the signature $J={\rm diag}(J_1,J_2,\ldots,J_{m-1},1)$, we consider the following system of $m$
conditions (cf. (15) in \cite{IEEE})
\begin{equation}\label{IE15}
\begin{cases} \zeta^-_1x_m-J_1\cdot f^+\widetilde{x_1}\in \mathcal{P}^+,\\
\zeta^-_2x_m-J_2\cdot f^+\widetilde{x_2}\in \mathcal{P}^+,\\
\cdot\hskip+1cm \cdot\hskip+1cm \cdot\\
\zeta^-_{m-1}x_m-J_{m-1}\cdot f^+\widetilde{x_{m-1}}\in \mathcal{P}^+,\\
\zeta^-_1x_1+\zeta^-_2x_2+\ldots+\zeta^-_{m-1}x_{m-1}
+f^+\widetilde{x_m}\in \mathcal{P}^+,
\end{cases}
\end{equation}
where $\big(x_1,x_2,\ldots,x_m\big)^T\in (\mathcal{P}^+_N)^{m\times 1} $ is the unknown vector function. We say that a vector function
\begin{equation}
\mathbf{u}=\big(u_1,u_2,\ldots,u_m\big)^T\in
(\mathcal{P}^+_N)^{m\times 1}
\end{equation}
is a solution of \eqref{IE15} if and only if all the conditions in \eqref{IE15} are satisfied whenever $x_i=u_i$, $i=1,2,\ldots,m$.
We make essential use of the following
\begin{lemma}
Let \eqref{IE11} hold and let
\begin{gather*}
\mathbf{u}=\big(u_1,u_2,\ldots,u_m\big)^T\in
(\mathcal{P}_N^+)^{m\times 1} \\
\mathbf{v}=\big(v_1,v_2,\ldots,v_m\big)^T\in
(\mathcal{P}_N^+)^{m\times 1}
\end{gather*}
be two $($possibly identical$)$ solutions of the system \eqref{IE15}.
Then
\begin{equation}\label{IE19}
\sum_{k=1}^{m-1}J_ku_k\widetilde{v_k}+\widetilde{u_m}v_m=\operatorname{const}.
\end{equation}
\end{lemma}
{\em Proof:}
Substituting the functions $v_i$ into the first $m-1$ conditions and
the functions $u_i$ into the last condition of \eqref{IE15}, and then
multiplying the $k$th of the first $m-1$ conditions by $u_k$,
$k=1,2,\ldots,m-1$, and the last condition by $v_m$, we get
$$
\begin{cases} \zeta^-_1v_mu_1-J_1\cdot f^+\widetilde{v_1}u_1\in \mathcal{P}^+,\\
\zeta^-_2v_mu_2-J_2\cdot f^+\widetilde{v_2}u_2\in \mathcal{P}^+,\\
\cdot\hskip+1cm \cdot\hskip+1cm \cdot\\
\zeta^-_{m-1}v_mu_{m-1}-J_{m-1}\cdot f^+\widetilde{v_{m-1}}u_{m-1}\in \mathcal{P}^+,\\
\zeta^-_1u_1v_m+\zeta^-_2u_2v_m+\ldots+\zeta^-_{m-1}u_{m-1}v_m
+f^+\widetilde{u_m}v_m\in \mathcal{P}^+.
\end{cases}
$$
Subtracting the first $m-1$ conditions from the last condition in
the latter system, we get
\begin{equation}\label{IE20}
f^+\left(\sum_{k=1}^{m-1} J_k
u_k\widetilde{v_k}+\widetilde{u_m}v_m\right)\in
\mathcal{P}^+.
\end{equation}
Since the second factor in \eqref{IE20} belongs to $\mathcal{P}_N$,
taking into account the last condition in \eqref{IE11}, we get
\begin{equation*}
\sum_{k=1}^{m-1}J_ku_k\widetilde{v_k}+\widetilde{u_m}v_m\in
\mathcal{P}_N^+.
\end{equation*}
We can interchange the roles of $u$ and $v$ in the above
discussion to get in a similar manner that
\begin{equation*}
\sum_{k=1}^{m-1}J_k v_k\widetilde{u_k}+\widetilde{v_m}u_m\in
\mathcal{P}_N^+.
\end{equation*}
Consequently, the function in
\eqref{IE19} belongs to $\mathcal{P}_N^+\cap \mathcal{P}_N^-$, which
implies \eqref{IE19}. \hfill $\blacksquare$
The proof of Theorem 1 proceeds as follows. We search for a
nontrivial polynomial solution
\begin{equation}
\mathbf{x}=\big(x_1,x_2,\ldots,x_m\big)^T\in
(\mathcal{P}_N^+)^{m\times 1}
\end{equation}
of the system \eqref{IE15}, where
\begin{equation}\label{IE24}
x_i(z)=\sum_{n=0}^N a_{in} z^n,\;\;\;i=1,2,\ldots, m,
\end{equation}
and explicitly determine the coefficients $a_{in}$. We will find
$m$ linearly independent solutions of \eqref{IE15}, which appear to be the $m$ different columns of \eqref{IE12}.
Equating all the Fourier coefficients with non-positive indices
of the functions in the left-hand side of \eqref{IE15} to zero, except the
$0$th coefficient of the $j$th function which we set equal to $1$, we
get the following system of algebraic equations in the block
matrix form which we denote by $\mathbb{S}_j$:
\begin{equation}\label{IE25}
\mathbb{S}_j:=
\begin{cases}\Gamma_1 X_m-J_1D\overline{X_1}={\bf 0}, \\
\Gamma_2 X_m-J_2D\overline{X_2}={\bf 0}, \\
\;\;\;\;\; \;\;\;\;\; \\
\Gamma_j X_m-J_{j}D\overline{X_j}={\bf 1}, \\
\;\;\;\;\; \;\;\;\;\; \\
\Gamma_{m-1} X_m-J_{m-1}D\overline{X_{m-1}}={\bf 0}, \\
\Gamma_1 X_1+\Gamma_2 X_2+\ldots+\Gamma_{m-1}
X_{m-1}+D\overline{X_m}={\bf 0}\;. \end{cases}
\end{equation}
Here the following matrix notation is used:
\begin{gather*}
D=\begin{pmatrix}d_0&d_1&d_2&\cdots&d_{N-1}&d_N\\
0&d_0&d_1&\cdots&d_{N-2}&d_{N-1}\\
0&0&d_0&\cdots&d_{N-3}&d_{N-2}\\
\cdot&\cdot&\cdot&\cdots&\cdot&\cdot\\
0&0&0&\cdots&0&d_0\end{pmatrix},\;\;\\
\Gamma_i=
\begin{pmatrix}\gamma_{i0}&\gamma_{i1}&\gamma_{i2}
&\cdots&\gamma_{i,N-1}&\gamma_{iN}\\
\gamma_{i1}&\gamma_{i2}&\gamma_{i3}&\cdots&\gamma_{iN}&0\\
\gamma_{i2}&\gamma_{i3}&\gamma_{i4}&\cdots&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
\gamma_{iN}&0&0&\cdots&0&0\end{pmatrix},
\end{gather*}
$i=1,2,\ldots,m-1$, where
$$
f^+(z)=\sum_{n=0}^N d_n z^n \;\text{ and }\; \zeta^-_i(z)=
\sum_{n=0}^N\gamma_{in}z^{-n};
$$
\begin{equation*}
{\bf 0}=(0,0,\ldots,0)^T \text{ and } {\bf
1}=(1,0,0,\ldots,0)^T\in \mathbb{C}^{N+1}.
\end{equation*}
The column vectors
\begin{equation*}
X_i=(a_{i0},a_{i1},\ldots,a_{iN})^T,\;\;i=1,2,\ldots,m,
\end{equation*}
(see \eqref{IE24}) are the unknowns.
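To make the block notation concrete, the matrix $D$ (upper-triangular Toeplitz, built from the Taylor coefficients of $f^+$) and the matrices $\Gamma_i$ (triangular Hankel, built from the coefficients of $\zeta^-_i$) can be assembled as in the following NumPy sketch; the function names are ours, for illustration only.

```python
import numpy as np

def build_D(d):
    """Upper-triangular Toeplitz matrix from d = (d_0, ..., d_N),
    the Taylor coefficients of f^+."""
    N = len(d) - 1
    D = np.zeros((N + 1, N + 1), dtype=complex)
    for i in range(N + 1):
        D[i, i:] = d[:N + 1 - i]        # row i: d_0, ..., d_{N-i}, right-aligned
    return D

def build_Gamma(gamma):
    """Triangular Hankel matrix from gamma = (gamma_{i0}, ..., gamma_{iN}),
    the coefficients of zeta_i^-."""
    N = len(gamma) - 1
    G = np.zeros((N + 1, N + 1), dtype=complex)
    for r in range(N + 1):
        G[r, :N + 1 - r] = gamma[r:]    # row r: gamma_r, ..., gamma_N, then zeros
    return G
```

Note that each $\Gamma_i$ built this way satisfies $\Gamma_i^T=\Gamma_i$, a fact used below.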
Since $d_0=f^+(0)\not=0$ (see \eqref{IE11}), the matrix $D$ is invertible. Hence, determining $X_i$, $i=1,2,\ldots,m-1$, from the first $m-1$
equations of \eqref{IE25},
\begin{equation}\label{IE30}
X_i=J_i\left(\overline{D^{-1}}\;\overline{\Gamma_i}\;\overline{X_m}-\delta_{ij}\overline{D^{-1}}\;{\bf
1}\right),
\end{equation}
$i=1,2,\ldots,m-1$, and then substituting them in the last equation of \eqref{IE25}, we get
\begin{gather*}
J_1\Gamma_1\,\overline{D^{-1}}\;\overline{\Gamma_1}\;\overline{X_m}+J_2\Gamma_2\,\overline{D^{-1}}\;\overline{\Gamma_2}\;
\overline{X_m}+\cdots\\
+J_{m-1}\Gamma_{m-1}\,\overline{D^{-1}}\;\overline{\Gamma_{m-1}}\;\overline{X_m}+D\;\overline{X_m}=J_j\Gamma_j\,\overline{D^{-1}}\,{\bf 1}
\end{gather*}
(it is assumed that the right-hand
side is equal to ${\bf 1}$ when $j=m$) or, equivalently,
\begin{gather}
(J_1\Theta_1\,{\Theta_1^*}+J_2\Theta_2\,{\Theta_2^*}
+\!\ldots\!+J_{m-1}\Theta_{m-1}\,{\Theta_{m-1}^*}+I_{N+1})\,\overline{X_m}\notag
\\
=J_jD^{-1}\,\Gamma_j\,\overline{D^{-1}}\,{\bf 1}, \label{IE31}
\end{gather}
where
\begin{equation*}
\Theta_i=D^{-1}\,\Gamma_i\,,\;\;i=1,2,\ldots,m-1
\end{equation*}
(we wrote $\Theta^*$ instead of $\overline{\Theta}$ because $\Theta^T=\Theta$).
For each $j=1,2,\ldots,m$, \eqref{IE31} is a linear algebraic system of
$N+1$ equations in $N+1$ unknowns. The system \eqref{IE31}, and consequently \eqref{IE25}, has a unique solution for each $j=1,2,\ldots,m$ if and only if
\begin{equation}\label{dtn0}
\det(\Delta)\not=0,\;\text{ where }\; \Delta=\sum\nolimits_{k=1}^{m-1}J_k\Theta_k\Theta_k^*+I_{N+1}\,.
\end{equation}
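Assuming \eqref{dtn0} holds, the whole solution of $\mathbb{S}_j$ can be sketched in a few lines of NumPy: assemble $\Delta$, solve \eqref{IE31} for $\overline{X_m}$, then back-substitute into \eqref{IE30}. This is our illustration, not the authors' implementation, and it uses dense inverses rather than a fast displacement-based factorization.

```python
import numpy as np

def solve_system(D, Gammas, Jdiag, j):
    """Sketch of solving system S_j: D is built from f^+,
    Gammas = [Gamma_1, ..., Gamma_{m-1}], Jdiag = (J_1, ..., J_m) with
    entries +-1, and j is the 1-based index of the unit right-hand side."""
    N1, m = D.shape[0], len(Gammas) + 1
    Dinv = np.linalg.inv(D)
    Thetas = [Dinv @ G for G in Gammas]            # Theta_i = D^{-1} Gamma_i
    Delta = np.eye(N1, dtype=complex)
    for Jk, Th in zip(Jdiag, Thetas):              # zip stops at m-1 terms
        Delta += Jk * (Th @ Th.conj().T)           # sum J_k Theta_k Theta_k^* + I
    e1 = np.zeros(N1, dtype=complex)
    e1[0] = 1.0
    # right-hand side of (IE31); it reduces to e1 when j = m
    rhs = e1 if j == m else Jdiag[j - 1] * (Dinv @ Gammas[j - 1] @ Dinv.conj() @ e1)
    Xm = np.linalg.solve(Delta, rhs).conj()        # bar(X_m) solved, then conjugated
    X = []
    for i in range(m - 1):                         # back-substitution, eq. (IE30)
        Xi = Dinv.conj() @ Gammas[i].conj() @ Xm.conj()
        if i + 1 == j:                             # delta_{ij} term
            Xi = Xi - Dinv.conj() @ e1
        X.append(Jdiag[i] * Xi)
    X.append(Xm)
    return X
```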
\begin{remark}
Unlike in the spectral factorization case, where $\Delta$ is always positive definite and \eqref{dtn0} holds, in the indefinite case there are isolated instances where \eqref{dtn0} fails. However, we can assume that \eqref{dtn0} holds (see Remark 1) and proceed with the solution of \eqref{IE15}.
\end{remark}
\begin{remark}
As in the spectral factorization case (see \cite[Appendix]{IEEE}) the matrix $\Delta$ has a displacement structure of rank $m$ with respect to $Z$, where $Z$ is the upper triangular $(N+1)\times(N+1)$ matrix with 1's on the first superdiagonal and 0's elsewhere (i.e., a Jordan block with eigenvalue 0). Namely,
$$
R_Z\Delta:=\Delta-Z\Delta Z^*=AJA^*,
$$
where $A$ is the $(N+1)\times m$ matrix which has $i$-th column equal to the first column of $\Theta_i$, $i=1,2,\ldots,m-1$, and the last column is equal to $(0,0,\ldots,0,1)\in\mathbb{C}^{N+1}$. Consequently, the triangular factorization of $\Delta$ can be performed in $O(mN^2)$ operations instead of the traditional $O(N^3)$ ones, as it is described in \cite[Appendix F.1]{Kai99}. This substantially reduces the amount of operations if $N\gg m$.
\end{remark}
Finding the matrix vector $\overline{X_m}$ from \eqref{IE31} and then
determining $X_1,X_2,\ldots,X_{m-1}$ from \eqref{IE30}, we get the unique
solution of $\mathbb{S}_j$. To indicate its dependence on $j$, we
denote the solution of $\mathbb{S}_j$ by
$(X_1^j,X_2^j,\ldots,X_{m-1}^j,X_m^j)$,
\begin{equation}\label{IE35}
X_i^j:=(a_{i0}^j,a_{i1}^j,\ldots, a_{iN}^j)^T,
\;\;\;i=1,2,\ldots,m,
\end{equation}
so that if we construct a matrix function $V$,
\begin{equation}\label{IE36}
V=\begin{pmatrix}v_{11}&v_{12}&\cdots&v_{1m}\\
v_{21}&v_{22}&\cdots&v_{2m}\\
\vdots&\vdots&\vdots&\vdots\\
v_{m-1,1}&v_{m-1,2}&\cdots&v_{m-1,m}\\[3mm]
\widetilde{v_{m1}}&\widetilde{v_{m2}}&\cdots&\widetilde{v_{mm}}\\
\end{pmatrix},
\end{equation}
by letting (see \eqref{IE35})
\begin{equation}
v_{ij}(z)=\sum_{n=0}^N a_{in}^j z^n, \;\;\;1\leq i,j\leq m,
\end{equation}
then the columns of \eqref{IE36} are solutions of the system \eqref{IE15}. Hence, because
of the last equation in \eqref{IE15},
\begin{equation*}
FV\in (\mathcal{P}^{+}_N)^{m\times m}
\end{equation*}
and, by virtue of Lemma 1,
\begin{equation}\label{Const2}
V(z)\,J\,V^*(z)=C,
\end{equation}
where $C$ is a constant Hermitian matrix with signature $J$. It can also be proved (see \cite[p. 2322, II]{IEEE}) that
$$
\det V(z)={\rm const}.
$$
Decomposing the matrix $C$ as
\begin{equation}\label{CJC}
C=C_0J\,C_0^*,\;\text{ where }C_0=V(1),
\end{equation}
equations \eqref{Const2} and \eqref{CJC} imply
$$
C_0^{-1}V(z)\,J\,(C_0^{-1}V(z))^*=J.
$$
Hence,
$$U=C_0^{-1}V$$
is the required $J$-unitary matrix, and it can be computed numerically from the above equations. \hfill $\blacksquare$
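The final normalisation step can be illustrated numerically. Below we fabricate a $2\times2$ matrix $V(z)$ satisfying $V\,J\,V^*=C$ (a hyperbolic-rotation example of our own choosing, with $J=\mathrm{diag}(1,-1)$), form $U=V(1)^{-1}V$, and check that $U\,J\,U^*=J$ on the circle; this demonstrates only the normalisation, not the full algorithm.

```python
import numpy as np

J = np.diag([1.0, -1.0])
t = 0.7                                    # hyperbolic angle, chosen arbitrarily
C0 = np.array([[2.0, 1.0], [0.5, 3.0]])    # arbitrary invertible constant matrix

def V(z):
    # U0 is J-unitary on |z| = 1, so V = C0 U0 satisfies V J V* = C0 J C0* = C
    U0 = np.array([[np.cosh(t), np.sinh(t) * z],
                   [np.sinh(t) / z, np.cosh(t)]])
    return C0 @ U0

def U(z):
    return np.linalg.inv(V(1.0)) @ V(z)    # U = C0^{-1} V with C0 = V(1)

for z in [1.0, 1j, np.exp(0.3j)]:
    Uz = U(z)
    assert np.allclose(Uz @ J @ Uz.conj().T, J)   # U is J-unitary on the circle
```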
\section{Description of the algorithm}
In this section we provide computational procedures for the $J$-factorization of \eqref{1}, which are similar to the corresponding procedures presented in \cite{IEEE}.
{\bf Procedure 1.} First we perform the lower-upper triangular
$J$-factorization of $S$:
\begin{equation} \label{IE54}
S(z)=M(z)\,J\,M^*(z).
\end{equation}
Here
$$
M=\begin{pmatrix}f^+_1&0&\cdots&0&0\\
\xi_{21}&f^+_2&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
\xi_{r-1,1}&\xi_{r-1,2}&\cdots&f^+_{r-1}&0\\
\xi_{r1}&\xi_{r2}&\cdots&\xi_{r,r-1}&f^+_r
\end{pmatrix},
$$
where $f^+_m$, $m=1,2,\ldots,r$, are stable analytic functions (we also assume that all entries are square integrable). Such factorization can always be achieved under the restriction that
$\det [S]_{m\times m}$ has constant sign almost everywhere on $\mathbb{T}$ for each $m=1,2,\ldots,r$.
This happens, for example, if all principal minors are non-singular everywhere on $\mathbb{T}$; however, this condition is not necessary. We can apply recursive formulas similar to those of the usual Cholesky factorization: $f_1^+=
\sqrt[+]{J_1s_{11}} $, $\xi_{i1}=J_1s_{i1}/\overline{f_1^+}$, $i=2,3,\ldots,r$;
\begin{gather*}
{f_j^+}=\sqrt[+]{J_j\left(s_{jj}-\sum\nolimits_{k=1}^{j-1}J_k\xi_{jk}
\overline{\xi_{jk}}\right)},\; j=2,3,\ldots,r;
\\
\xi_{ij}=J_j\left(s_{ij}-\sum\nolimits_{k=1}^{j-1}J_k\xi_{ik}
\overline{\xi_{jk}}\right)/\overline{f_j^+},
\end{gather*}
$j=2,3,\ldots,r-1$, $i=j+1,j+2,\ldots,r$, assuming that $\sqrt[+]{\cdot}$ performs the scalar spectral factorization (see \eqref{ssf}).
In actual computations, one can perform the factorization \eqref{IE54} pointwise in the frequency domain for selected values of $z\in\mathbb{T}$.
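For the scalar spectral factorization $\sqrt[+]{\cdot}$ appearing in these recursions, a standard cepstral (FFT-based) computation can be used; the sketch below is our illustration (the paper does not prescribe a particular implementation), assuming the argument is positive on $\mathbb{T}$. We test it on $f_1^+=\sqrt[+]{8z^{-1}+19+8z}$ from the numerical example of Section VII.

```python
import numpy as np

def scalar_spectral_factor(g):
    """Given samples g_j = g(e^{2 pi i j/n}) of a function positive on the
    circle, return samples of f^+ with |f^+|^2 = g and f^+ analytic and
    non-vanishing in the unit disk (cepstral-method sketch)."""
    n = len(g)
    c = np.fft.fft(np.log(g)) / n          # Fourier coefficients of log g
    c_plus = np.zeros_like(c)
    c_plus[0] = c[0] / 2                   # half of the zeroth coefficient
    c_plus[1:n // 2] = c[1:n // 2]         # keep the analytic part only
    return np.exp(np.fft.ifft(c_plus) * n) # f^+ = exp(P_+ log g)

theta = 2 * np.pi * np.arange(256) / 256
f1 = scalar_spectral_factor(19 + 16 * np.cos(theta))   # samples of f_1^+
```

The recovered Taylor coefficients are $\approx3.824$ and $\approx2.092$, matching the factor $f_1^+$ quoted in the numerical example.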
\smallskip
{\bf Procedure 2.} We approximate $M$ in $L_2$ keeping only a
finite number of coefficients with negative indices in the Fourier
expansions of the entries of $M$. For the convenience of
computations, we take a different number of these coefficients for
different entries below the main diagonal. Namely, for a large
positive integer $N$, let
\begin{equation}
M_N=\begin{pmatrix}f^+_1&0&\cdots&0&0\\[1mm]
\xi^{[N]}_{21}&f^+_2&\cdots&0&0\\[1mm]
\vdots&\vdots&\vdots&\vdots&\vdots\\[1mm]
\xi^{[(r-2)N]}_{r-1,1}&\xi^{[(r-3)N]}_{r-1,2}&\cdots&f^+_{r-1}&0\\[1mm]
\xi^{[(r-1)N]}_{r1}&\xi^{[(r-2)N]}_{r2}&\cdots&\xi^{[N]}_{r,r-1}&f^+_r
\end{pmatrix}
\end{equation}
where $\xi^{[{N}]}_{ij}(z)=\sum_{n=-{N}}^\infty
c_n\{\xi_{ij}\}z^n$, $2\leq i\leq r$, $1\leq j<i$. Let
$$
S_N(z)=M_N(z)\,J\,M_N^*(z).
$$
\smallskip
{\bf Procedure 3.} We compute explicitly $S_N^+$, a $J$-spectral
factor of $S_N$.
This is done recursively with respect to $m$. Namely, we represent $S_N^+$ as
$$
S_N^+=M_N\mathbf{U}_1\mathbf{U}_2\mathbf{U}_3\ldots
\mathbf{U}_r,
$$
where each $\mathbf{U}_m$ is $J$-unitary and has the block matrix form
\begin{equation}\label{IE63}
\mathbf{U}_m(t)=\begin{pmatrix}U_{m}(t)&0\\0&I_{r-m}\end{pmatrix},
\end{equation}
$m=2,3,\ldots,r$. Furthermore, each $[Q_m]_{m\times m}$ is a $J$-spectral factor of $[S_N]_{m\times m}$:
\begin{equation}\label{SJQ}
[S_N]_{m\times m}=[Q_m]_{m\times m}\,[J]_{m\times m} \,[Q_m]_{m\times m}^*,
\end{equation}
where
$$
Q_m=M_N\mathbf{U}_1\mathbf{U}_2\mathbf{U}_3\ldots
\mathbf{U}_m.
$$
We take $\mathbf{U}_1=I_r$ and then \eqref{SJQ} is valid for $m=1$. Assume that $\mathbf{U}_2(t),\mathbf{U}_3(t),\ldots,\mathbf{U}_{m-1}(t)$
have already been constructed so that \eqref{SJQ} holds when $m$ is replaced by $m-1$ and suppose the last row of
$[Q_{m-1}]_{m\times m}$ is
$[\zeta^{m-1}_{1},\zeta^{m-1}_{2},\ldots,\zeta^{m-1}_{m-1},f_m^+]$.
Then we construct the next $J$-unitary matrix \eqref{IE63} by performing the following operations:
\begin{figure}[htbp]
\centerline{\includegraphics[width=95mm,scale=1.2]{fig1.png}}
\caption{Error in $J$-spectral factorization of matrix (29)}
\label{fig}
\end{figure}
{\sc Step 1.} Construct a matrix function $F(t)$ of the form \eqref{IE10}, where
$$\zeta^-_j(z)=\sum_{n=-(m-1)N}^0
c_n\big\{\zeta^{m-1}_j\big\}\,z^n, \;\;\;j=1,2,\ldots,m-1,
$$
and
$$f^+(z)=\sum\nolimits^{(m-1)N}_{n=0}c_n\{f_m^+\}\,z^n\,.$$
{\sc Step 2.} Using Theorem 1, construct $U$ of the form \eqref{IE12}, where
$u_{ij}\in\mathcal{P}^+_{(m-1)N}$, $1\leq i,j\leq m$, so that
\eqref{IE14} would hold.
{\sc Step 3.} Define $\mathbf{U}_m$ by the equation \eqref{IE63} where
$U_m=U$ is found in Step 2.
\section{Numerical Example}
To illustrate our approach, we present an approximate $J$-factorization of the following polynomial matrix function $S=$
\begin{equation}\label{14.01}
\begin{pmatrix}
-8z^{-1}-19-8z & -39z^{-1}-73-28z\\
-28z^{-1}-73-39z & -137z^{-1}-286-137z
\end{pmatrix}.
\end{equation}
This matrix satisfies the conditions imposed on $S$ in order for the algorithm to be applicable, namely $s_{11}(z)$ and
$$
\det S(z)=4(z^{-2}-2+z^2)=4(z^{-2}-1)(1-z^2)
$$
are both negative for $z\in\mathbb{T}$.
However, the matrix $S(z)$ is singular for $z=-1$ and $1$, which usually complicates the factorization process. The $J$-factorization of \eqref{14.01} is known in advance due to the corresponding example of the singular matrix in \cite{IEEE2018}:
$$
S(z)=S_+(z)\begin{pmatrix}
-1 & 0\\ 0& 1
\end{pmatrix} S_+^*(z),
$$
where
\begin{equation}\label{14.02}
S_+(z)=
\begin{pmatrix}
4 +2z & 1\\ 14 + 10z & 3 + z
\end{pmatrix}.
\end{equation}
However, we follow the steps of the proposed algorithm to produce an approximate result.
The triangular $J$-factorization of $S$ has the form
$$
S(z)=M(z)
\begin{pmatrix}
-1 & 0\\ 0& 1
\end{pmatrix}
M^*(z)
$$
where $M(z)=$
$$
\begin{pmatrix}
3.824\ldots+z\cdot2.092\ldots & 0\\[2mm]\dfrac{28z^{-1}+73+39z}{z^{-1}\cdot2.092\ldots+ 3.824\ldots} & \dfrac{1-z^2}{3.824\ldots+z\cdot2.092\ldots}
\end{pmatrix}
$$
with
$$
f_1^+(z):= 3.824\ldots+z\cdot2.092\ldots=\sqrt[+]{8z^{-1}+19+8z}
$$
and
$$
f_2^+(z):=(1-z^2)=\sqrt[+]{-z^{-2}+2-z^2}.
$$
We expand $\xi_{21}=-s_{21}/\widetilde{f^+_1}$ into a Fourier series by polynomial division and, for a positive integer $N$, approximate it by ``cutting the tail'':
$$
\xi_{21}(z)\approx\xi_{21}^{[N]}(z)=\sum\nolimits_{k=-N}^{\infty}c_k\{\xi_{21}\}z^k.
$$
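The coefficients $c_k\{\xi_{21}\}$ and the tail-cutting step can be reproduced numerically; sampling on the circle and applying the FFT is our own shortcut for the polynomial division, with $f_1^+=a+bz$ recomputed from $a^2+b^2=19$, $ab=8$.

```python
import numpy as np

# f_1^+ = a + b z with a^2 + b^2 = 19 and a b = 8, so |f_1^+|^2 = 19 + 16 cos(theta)
a = np.sqrt((19 + np.sqrt(105)) / 2)          # ~ 3.824
b = 8 / a                                     # ~ 2.092

n = 512
z = np.exp(2j * np.pi * np.arange(n) / n)     # samples on the unit circle
xi21 = (28 / z + 73 + 39 * z) / (a + b / z)   # xi_21 = -s_21 / tilde(f_1^+)
c = np.fft.fft(xi21) / n                      # c[k] ~ c_k{xi_21}, indices mod n

N = 53
xi21_N = sum(c[k % n] * z ** k for k in range(-N, 2))   # keep c_k for -N <= k <= 1
```

The discarded coefficients decay like $(b/a)^{|k|}\approx0.547^{|k|}$, so the tail beyond $N=53$ is of order $10^{-14}$, which is consistent with the double-precision agreement at $N=53$ reported at the end of this section.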
Thus we get the approximation of $S$ by
$$
S_N=\begin{pmatrix}
f_1^+ & 0\\ \xi_{21}^{[N]}& f_2^+
\end{pmatrix}
\begin{pmatrix}
-1 & 0\\ 0& 1
\end{pmatrix}
\begin{pmatrix}
f_1^+ & 0\\ \xi_{21}^{[N]}& f_2^+
\end{pmatrix}^*
$$
and we obtain its $J$-spectral factor $S_N^+$ by finding explicitly a $J$-unitary matrix $U=U_N$, as described in Section V:
$$
S_N^+=\begin{pmatrix}
f_1^+ & 0\\ \xi_{21}^{[N]}& f_2^+
\end{pmatrix}\cdot U.
$$
The computed results coincide with the exact answer \eqref{14.02} to 16 digits (Matlab double precision) for $N=53$.
\vfill\break
The total computational time to achieve this accuracy is less than 0.02 s (on a laptop with an Intel(R) Core(TM) i7-8650U CPU at 1.90 GHz and 16 GB of RAM). Fig. 1 shows how the accuracy increases with increasing $N$.
\section*{Acknowledgment}
The authors thank Professor Michael \v Sebek for bringing to their attention the importance of $J$-spectral factorization in Control Theory.
\def$'${$'$}
\section{1.}{Introduction}
If we adopt the materialist vision that the physical world is an objective
reality then, necessarily, our geometrical conception of the universe is
limited by our psychological perception of it. There is in fact a
self-consistency in that physical laws generate the very mathematics
necessary to make those laws understandable. In other words, we can conceive
what nature allows us to conceive. At the scale of distances of our daily
life, i.e., distances much greater than the Planck length, the universe
behaves quite smoothly and one hopes that this behaviour might be
extrapolated to very large, cosmological, and also to very small, even
subnuclear, distances. This smooth behaviour would allow the universe to be
mathematically modeled by a differentiable manifold. Of course, the very
concept of a differentiable manifold is possible only because our perception
of space allows us to conceive it, and one can wonder how our mathematical
conceptions are restricted by this kind of anthropic principle.
It seems that the problem of determining the geometry realised in nature was
first addressed by Riemann$^1$ in his famous, but little read, thesis in
1854. He pointed out that this geometry has to be determined by purely
empirical, experimental and observational, means and cannot be decided upon
{\it a priori}. The first indirect statements about the metrical properties
of our universe can be found in the Pythagoras theorem which, in a modern
language, is equivalent to Riemannian geometry
$${ds}^2\,=\,g_{\mu\nu}(x)\,dx^\mu\,dx^\nu\,.\eqno{(1.1)}$$
\noindent The only thing we can try to understand now is the Riemannian, or
Pythagorean, nature of the geometry. Here we take recourse to the classical
argumentation by Riemann.$^1$ The infinitesimal element of distance $ds$
should be a function of the coordinates $x$'s and their differentials $dx$'s
$$ds\,=\,f(x,\,dx)\,.\eqno{(1.2)}$$
\noindent This function must satisfy the single requirement
$$f(x,\,\lambda\,dx)\,=\,|\lambda|\,f(x,\,dx)\,.\eqno{(1.3)}$$
\noindent Of course, the possibilities are infinitely many. Let us restrict our
considerations to monomial functions
$$ds\,=\,{(G_{\mu_1\cdots\mu_r}\,dx^{\mu_1}\,\cdots\,dx^{\mu_r})}^{1/r}\,.
\eqno{(1.4)}$$
\noindent In order for this quantity to satisfy (1.3) $r$ must be an even number.
The simplest choice is $r=2$, which corresponds to Riemannian geometry.
As pointed out by Riemann, the next possibility is $r=4$. In this case the
line element is given by
$${ds}^4\,=\,G_{\mu\nu\lambda\rho}\,dx^\mu\,dx^\nu\,dx^\lambda\,dx^\rho\,.
\eqno{(1.5)}$$
\noindent Riemann went no farther in exploring the above geometry, and gave no
justification for that omission. Of course, at first sight, a space with a
line element of the form (1.5) may seem bizarre. However, such geometry
cannot be excluded {\it a priori} and its exclusion must be done in a
mathematically educated way.
This was partially done by Helmholtz.$^2$ He showed that the existence of
rigid bodies, which do not change their shapes and therefore the metric
relations under translations and rotations, leaves us with Riemannian
geometry as the only possibility. The Helmholtz result seemed quite
satisfactory and therefore no more concern for higher-rank geometries
appeared. It seems that the arrival of General Relativity, with its
underlying Riemannian geometry, caused this important problem to be
forgotten. However, the problem merits further attention, not only from a
mathematical point of view, but also for the applications it found in
theoretical and mathematical physics. It is here that the introductory
considerations come into play. In fact, the difficulty of conceiving
geometries other than the Riemannian limited their developments.
We are therefore going to develop that chapter of differential geometry in
which Riemann and Helmholtz stopped their scientific enquiries.
To close these historical comments, let us mention that an indirect verification of the
Riemannian structure of the universe at our daily life scales was performed
by Gauss$^3$ in 1826. The experiment was intended to verify departures from
flatness, but as a side result he also verified no departures from
Riemannianicity.
At the scale of distances of our daily life fourth-rank geometry is not
realised in nature and the only place where it can play some role is in
high-energy, or short distances, physics. In fact, at high energies, a regime
to which we do not have direct experimental access, the very concept of rigid
body may be no longer valid and the Helmholtz argumentation no longer
applicable.
The natural question now is: why should we work with fourth-rank
geometry, rather than any other of the infinitely many possible generalisations of
Riemannian geometry, to describe physics at high energies? The answer is
provided by experiments, such as deep inelastic scattering, which show that,
at very high energies, physical processes are scale, or conformally,
invariant. Therefore, high-energy physics is associated to a geometry
exhibiting, in a model independent way, conformal invariance in 4 dimensions.
In another work$^4$ we show that the critical dimension, for which field
theories are integrable, is equal to the rank of the metric. Therefore, if we
want to construct an integrable field theory in 4 dimensions showing
agreement with the observed conformal invariance at high energies we must
take recourse to fourth-rank geometry. This result also explains why, if one
relies only on Riemannian geometry, integrable conformal models can be
constructed only in 2 dimensions (strings).
We therefore arrive at the following scheme: at short distances,
high-energies, the geometry is of fourth-rank while at large distances, low
energies, the geometry is of second-rank, Riemannian. It is clear furthermore
that the Riemannian behaviour of the geometry must be recovered as the
low-energy limit of the high-energy theory. This would be possible if at
low-energies the fourth-rank metric tensor $G_{\mu\nu\lambda\rho}$ becomes of
the form
$$G_{\mu\nu\lambda\rho}\,=\,g_{(\mu\nu}\,g_{\lambda\rho )}\,.\eqno{(1.6a)}$$
\noindent In this case the line element factors and one is back to the Riemannian
case
$${ds}^4\,=\,{({ds}^2)}^2\,.\eqno{(1.6b)}$$
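The factorisation (1.6) can be checked directly: contracting the symmetrised product $g_{(\mu\nu}g_{\lambda\rho)}$ with $dx^\mu dx^\nu dx^\lambda dx^\rho$ reproduces $({ds}^2)^2$ identically, for any symmetric $g$. A NumPy sketch of our own, for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
g = rng.normal(size=(d, d))
g = (g + g.T) / 2                          # arbitrary symmetric second-rank metric
dx = rng.normal(size=d)

# separable fourth-rank metric G_{mu nu la rho} = g_(mu nu) g_(la rho)
G = (np.einsum('ab,cd->abcd', g, g)
     + np.einsum('ac,bd->abcd', g, g)
     + np.einsum('ad,bc->abcd', g, g)) / 3

ds4 = np.einsum('abcd,a,b,c,d->', G, dx, dx, dx, dx)
ds2 = dx @ g @ dx
assert np.isclose(ds4, ds2 ** 2)           # ds^4 = (ds^2)^2
```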
Our next task is to construct a geometric invariant to be used as the
Lagrangian describing the dynamics of the geometry, i.e., of the
gravitational field. From the metric alone it is impossible to construct any
invariant, apart from the trivial solution: a constant. Therefore, we must
take recourse to a further geometrical object: the Ricci tensor for an
arbitrary connection ${\Gamma^\lambda}_{\mu\nu}$
$$R_{\mu\nu}\,=\,\partial_\lambda{\Gamma^\lambda}_{\mu\nu}\,-\,\partial_\nu
{\Gamma^\lambda}_{\lambda\mu}\,+\,{\Gamma^\lambda}_{\lambda\sigma}\,{\Gamma^
\sigma}_{\mu\nu}\,-\,{\Gamma^\lambda}_{\mu\sigma}\,{\Gamma^\sigma}_{\lambda
\nu}\,.\eqno{(1.7)}$$
\noindent The simplest invariants which can be constructed with the metric
$G_{\mu\nu\lambda\rho}$ and the Ricci tensor $R_{\mu\nu}$ are
$$\langle R^2\rangle\,=\,G^{\mu\nu\lambda\rho}\,R_{\mu\nu}\,R_{\lambda\rho}
\,,$$
$$\langle R^4\rangle\,=\,G^{\mu\nu\lambda\rho}\,G^{\alpha\beta\gamma\delta}\,
R_{\mu\alpha}\,R_{\nu\beta}\,R_{\lambda\gamma}\,R_{\rho\delta}\,,\quad etc.
\eqno{(1.8)}$$
\noindent The Lagrangian therefore will be of the form
$${\cal L}\,=\,L(\langle R^2\rangle,\,\langle R^4\rangle,\,\cdots)\,G^{1/4}
\,,\eqno{(1.9)}$$
\noindent where $G$ is the determinant of $G_{\mu\nu\lambda\rho}$. The scalar
function $L$ to be put in (1.9) should make the Lagrangian a conformally
invariant function. Under rescalings of the metric
$$G_{\mu\nu\lambda\rho}\,\rightarrow\,\lambda\,G_{\mu\nu\lambda\rho}\,,
\eqno{(1.10)}$$
\noindent the inverse metric $G^{\mu\nu\lambda\rho}$ and $G^{1/4}$ transform as
$$G^{\mu\nu\lambda\rho}\,\rightarrow\,{\lambda}^{-1}\,G^{\mu\nu\lambda\rho}
\,,\eqno{(1.11a)}$$
$$G^{1/4}\,\rightarrow\,\lambda\,G^{1/4}\,.\eqno{(1.11b)}$$
\noindent Therefore the Lagrangian should be of the form
$${\cal L}\,=\,[\alpha\,\langle R^2\rangle\,+\,\beta\,{{\langle R^4\rangle}
\over{\langle R^2\rangle}}+\cdots]\,G^{1/4}\,.\eqno{(1.12)}$$
\noindent However, all the terms after the first one are highly non-local.
Therefore, the only sensible solution is
$${\cal L}\,=\,\kappa_{CG}\,\langle R^2\rangle\,G^{1/4}\,,\eqno{(1.13)}$$
\noindent where
$$\kappa_{CG}\,\approx\,\kappa_E\,{L_{Planck}}^2\,=\,{{\hbar c}\over{8\pi}}\,
,\eqno{(1.14)}$$
\noindent is the Einstein gravitational constant $\kappa_E={c^4\over{8\pi G}}$,
times a constant of the order of ${L_{Planck}}^2$.
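The identity behind (1.14) can be verified directly: with $\kappa_E=c^4/8\pi G$ and ${L_{Planck}}^2=\hbar G/c^3$, Newton's constant cancels and the product is exactly $\hbar c/8\pi$. A quick numerical check with standard SI values (our illustration):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G_newton = 6.67430e-11   # m^3 / (kg s^2)

kappa_E = c ** 4 / (8 * np.pi * G_newton)   # Einstein gravitational constant
L_planck_sq = hbar * G_newton / c ** 3      # squared Planck length
assert np.isclose(kappa_E * L_planck_sq, hbar * c / (8 * np.pi))
```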
The total Lagrangian must also consider the contributions of matter. Now we
must apply a Palatini-like variational principle in which the connection and
the metric are varied independently. However, in all known cases of physical
interest the matter Lagrangian does not depend on the affine connection.$^5$
In this case the variation of the gravitational Lagrangian with respect to
the connection leads to a metricity condition for which the solution is
$${\Gamma^\lambda}_{\mu\nu}\,=\,\lbrace^\lambda_{\mu\nu}\rbrace(\gamma)\,.
\eqno{(1.15)}$$
\noindent I.e., the connection is the Christoffel symbol of the second kind of the
tensor $\gamma^{\mu\nu}$ given by
$$\gamma^{\mu\nu}=G^{\mu\nu\lambda\rho}\,R_{\lambda\rho}\,,\eqno{(1.16)}$$
\noindent which we have assumed to be regular. Equations (1.15) and (1.16) are a
metricity condition since they give the relation between
${\Gamma^\lambda}_{\mu\nu}$ and $G_{\mu\nu\lambda\rho}$. Therefore
$$R_{\mu\nu}(\Gamma)\,=\,R_{\mu\nu}(\gamma)\,.\eqno{(1.17)}$$
\noindent One easily verifies then that
$$\langle R^2\rangle\,=\,G^{\mu\nu\lambda\rho}\,R_{\mu\nu}\,R_{\lambda\rho}\,
=\,\gamma^{\mu\nu}\,R_{\mu\nu}(\gamma)\,=\,R(\gamma)\,.\eqno{(1.18)}$$
Variation of the Lagrangian with respect to the metric
$G_{\mu\nu\lambda\rho}$ gives
$$\kappa_{CG}\,[ R_{(\mu\nu}\,R_{\lambda\rho)}\,-\,{1\over4}\,\langle R^2
\rangle\,G_{\mu\nu\lambda\rho}]\,=\,T_{\mu\nu\lambda\rho}\,,\eqno{(1.19)}$$
\noindent where $T_{\mu\nu\lambda\rho}$ is the energy-momentum tensor of matter,
to be defined below.
The field equations (1.19) exhibit three energy regimes: low, medium, and
high. In the low-energy regime there is no matter and therefore the
fourth-rank metric is separable, $G_{\mu\nu\lambda\rho}=g_{(\mu\nu}
g_{\lambda\rho)}$, as can be read from (1.19). Then the line element would
factor, as in (1.6b), and one would be back to the Riemannian case. In the
medium-energy regime the geometry is still Riemannian, $G_{\mu\nu\lambda\rho
}=g_{(\mu\nu}g_{\lambda\rho)}$, but there is matter involved in the game.
This possibility is not excluded as a closer analysis of eqs. (1.19) reveals.
In this case the gravitational field couples in a different way, as compared
to General Relativity, to matter. Lastly, we have the true high-energy regime
in which there is matter and the geometry is truly fourth-rank.
Let us further analyse these energy regimes. In vacuum, the field equations
(1.19) are equivalent to
$$R_{\mu\nu}(\gamma)\,-\,{1\over4}\,R(\gamma)\,\gamma_{\mu\nu}\,=0\,.
\eqno{(1.20)}$$
\noindent For a spherically symmetric field the solution is the Kottler metric$^6$
which contains the Schwarzschild solution as a special case. Therefore the
predictions based on the Schwarzschild metric, which agree with
observation to within 1 per cent or better, will be contained in this theory.
The large scale geometry of the universe seems to be Riemannian and, since
there is matter present in it, this corresponds to the medium-energy regime
mentioned above. In this context we develop a cosmological model based on the
Friedmann-Robertson-Walker metric coupled to cosmic matter described by a
perfect fluid.
The theory predicts an increasing total entropy such that the expansion of
the universe is an adiabatic non-isoentropic process. Therefore, the
evolution of the universe, in the framework of fourth-rank cosmology is, as
expected on physical grounds, an irreversible process.
For $k_{obs}=0$, as imposed by the observed flatness of the universe, the
field equations give, for the present Universe,
$$\Omega\,=\,{4y\over{1-4y-y^2}}\,,\eqno{(1.21)}$$
\noindent where $y={p\over\rho}$. For $\Omega_{small}=0.01$,$^7$ we obtain $y_{pred}
\approx2.5\times10^{-3}$ which corresponds to an almost pressureless perfect
fluid. This must be compared with the observed value of $p\over\rho$, which
can be determined from the mean random velocity of typical galaxies and is
given by $y_{random}=1\times10^{-5}$. Therefore, our prediction differs by
two orders of magnitude with respect to the observed value. We hope to
improve this situation since the estimation of $y$ from the random motion of
galaxies is a quite rough one. Furthermore, eq. (1.21) was obtained under the
strong asumption that $y$ behaves like a constant. Therefore, there are hopes
that this theory shows a better agreement with the observed values of the
cosmological parameters.
For the early universe we find that causality is not violated for $t>t_{class
}\approx10^{19}t_{Planck}$ $\approx10^{-24}s$. At earlier times quantum
mechanical effects dominate the scene. In fact, the radius of the universe is
exactly the Compton wavelength associated to its mass. Our classical approach
breaks down so that the very concept of causality is meaningless. Therefore,
there is no violation of causality, or horizon problem.
Some final introductory comments. It is a popular view that the gravitational
field is correctly described by General Relativity. This is true of the pure
gravitational field, i.e., when no coupling to matter, or other fields, is
present. In fact, Einstein field equations are in excellent agreement, 1 per
cent or better, with observation when applied, for example, to the solar
system. However, when matter is coupled to gravity the observational
agreement is not so good. This is the case when General Relativity is applied
to cosmology where the gravitational field get coupled to cosmic matter
described by a perfect fluid. One obtains qualitatively good predictions, as
the evolution of the universe from an initial singularity and some good
quantitative predictions as the temperature of the microwave background and
the relative abundance of elements. However, the quantitative agreement is
weaker in other aspects. In fact, flatness, $k_{obs}=0$, implies
$\Omega_{GR}=1$, which is hardly observed. Furthermore, the Standard Model of
Cosmology predicts a constant entropy, something which is difficult to accept
on physical grounds. These are some of the reasons to look for an improved
theory for the gravitational field.
In previous works$^{10,11,12}$ we developed a similar model based on the
Lagrangian
$${\cal L}\,=\,\kappa_E\,{\langle R^2\rangle}^{1\slash2}\,G^{1\slash4}\,.
\eqno{(1.22)}$$
\noindent This Lagrangian was chosen in order to have only the Einstein
gravitational constant for dimensional purposes. Later on we became convinced
that the appearance of $\hbar$ in the Lagrangian (1.13) creates no conflict between
the classical character of the Lagrangian and the quantum origin of $\hbar$.
The paper is organised as follows: In Section 2 we start by giving some
mathematical considerations. In Section 3 we develop the fundamentals of
fourth-rank gravity. In Section 4 we consider the low energy regime and the
Schwarzschild solution. In Section 5 we apply fourth-rank gravity to
cosmology. Section 6 studies the high-energy regime and the coupling to
conformal matter. Section 7 is dedicated to the conclusions. The Appendices
A, B and C, collect some standard results on Cosmography, General Relativity
and the Standard Model of Cosmology, respectively.
To our regret, due to the nature of this approach, in the Appendices we must
bore the reader by exhibiting some standard and well known results, but this
is necessary in order to illustrate where the new approach departs from the
standard one.
\section{2.}{Mathematical Preliminaries. Differentiable Manifolds}
Here we consider some elementary results for differentiable manifolds. Let us
start by considering the metric properties, which are related to the way in
which distances are measured. In what follows we take recourse to the
classical argumentation by Riemann.$^1$
Let $M$ be a {\it d}-dimensional differentiable manifold, and let $x^\mu$,
$\mu=0,\cdots,d-1$, be local coordinates. The infinitesimal element of
distance $ds$ should be a function of the coordinates $x$ and their
differentials $dx$'s
$$ds\,=\,f(x,\,dx)\,,\eqno{(2.1)}$$
\noindent which is homogeneous of the first-order in $dx$'s
$$f(x,\,\lambda\,dx)\,=\,\lambda\,f(x,\,dx)\,,\eqno{(2.2a)}$$
\noindent for $\lambda >0$, and is positive definite
$$f\,\geq\,0\,.\eqno{(2.2b1)}$$
Condition (2.2b1) was written in a time in which distances were, so to say,
positive. However, with the arrival of General Relativity one got used to
line elements with undefined signature. Condition (2.2b1) was there to
guarantee the invariance under the change $dx\rightarrow-dx$, {\it i.e.}, to
assure that distances measured when going in one direction are the same as
measured when going in the opposite direction. Therefore, we can replace
(2.2b1) by the weaker condition
$$f(x,\,-\,dx)\,=\,f(x,\,dx)\,.\eqno{(2.2b2)}$$
\noindent Conditions (2.2a) and (2.2b2) can now be combined into the single
condition
$$f(x,\,\lambda\,dx)\,=\,|\lambda|\,f(x,\,dx)\,,\eqno{(2.2)}$$
\noindent with no restriction over the sign of $\lambda$.
Of course the possible solutions to (2.2) are infinitely many. Let us
restrict our considerations to monomial functions. Then we will have
$$ds\,=\,{(G_{\mu_1\cdots\mu_r}(x)\,dx^{\mu_1}\,\cdots\,dx^{\mu_r})}^{1\slash
r}\,.\eqno{(2.3)}$$
\noindent In order for this quantity to be positive definite $r$ must be an even
number.
The simplest choice is $r=2$
$$ds^2\,=\,g_{\mu\nu}\,dx^\mu\,dx^\nu\,,\eqno{(2.4)}$$
\noindent which corresponds to Riemannian geometry. The coefficients $g_{\mu\nu}$
are the components of the covariant metric tensor. The determinant of the
metric is defined by
$$g\,=\,{1\over d!}\,\epsilon^{\mu_1\cdots\mu_d}\,\epsilon^{\nu_1\cdots\nu_d}
\,g_{\mu_1\nu_1}\,\cdots\,g_{\mu_d\nu_d}\,.\eqno{(2.5)}$$
\noindent If $g\not= 0$ we can define the inverse metric by
$$g^{\mu\nu}\,=\,{1\over(d-1)!}\,{1\over g}\,\epsilon^{\mu\mu_1\cdots\mu_{d-1
}}\,\epsilon^{\nu\nu_1\cdots\nu_{d-1}}\,g_{\mu_1\nu_1}\,\cdots\,g_{\mu_{d-1}
\nu_{d-1}}\,,\eqno{(2.6)}$$
\noindent and satisfies
$$g^{\mu\lambda}\,g_{\lambda\nu}\,=\,\delta^\mu_\nu\,.\eqno{(2.7)}$$
\noindent Densities of weight one can be constructed in terms of the quantity
$g^{1\slash2}$.
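Formulas (2.5)--(2.7) can be verified numerically in low dimensions; the Levi-Civita construction below is a straightforward sketch of ours, written out for $d=3$.

```python
import math
from itertools import permutations

import numpy as np

def levi_civita(d):
    """Totally antisymmetric epsilon symbol as a d-index array."""
    eps = np.zeros((d,) * d)
    for p in permutations(range(d)):
        eps[p] = np.linalg.det(np.eye(d)[list(p)])   # sign of the permutation
    return eps

d = 3
rng = np.random.default_rng(1)
g = rng.normal(size=(d, d))
g = (g + g.T) / 2                  # arbitrary symmetric metric
eps = levi_civita(d)

# equation (2.5): the determinant of the metric
det_g = np.einsum('abc,ijk,ai,bj,ck->', eps, eps, g, g, g) / math.factorial(d)
# equation (2.6): the inverse metric
g_inv = np.einsum('abc,ijk,bj,ck->ai', eps, eps, g, g) / (math.factorial(d - 1) * det_g)
assert np.isclose(det_g, np.linalg.det(g))
assert np.allclose(g_inv @ g, np.eye(d))             # equation (2.7)
```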
As pointed out by Riemann,$^1$ the next possibility is $r=4$. In this case
the line element is given by
$${ds}^4\,=\,G_{\mu\nu\lambda\rho}\,dx^\mu\,dx^\nu\,dx^\lambda\,dx^\rho\,.
\eqno{(2.8)}$$
\noindent The coefficients $G_{\mu\nu\lambda\rho}$ are the components of a
covariant fourth-rank tensor. Since it is related to the metric properties of
the given manifold it is not an error to call it a ``metric''. The determinant
of the metric $G_{\mu\nu\lambda\rho}$ is defined as
$$G\,=\,{1\over d!}\,\epsilon^{\mu_1\cdots\mu_d}\,\cdots\,\epsilon^{\rho_1
\cdots\rho_d}\,G_{\mu_1\nu_1\lambda_1\rho_1}\,\cdots\,G_{\mu_d\nu_d\lambda_d
\rho_d}\,,\eqno{(2.9)}$$
\noindent where the $\epsilon$'s can be chosen as the usual completely
antisymmetric Levi-Civita symbols. If $G\not= 0$ we can define the inverse
metric by
$$G^{\mu\nu\lambda\rho}\,=\,{1\over{(d-1)!}}\,{1\over G}\,\epsilon^{\mu\mu_1
\cdots\mu_{d-1}}\,\cdots\,\epsilon^{\rho\rho_1\cdots\rho_{d-1}}\,G_{\mu_1\nu_
1\lambda_1\rho_1}\,\cdots\,G_{\mu_{d-1}\nu_{d-1}\lambda_{d-1}\rho_{d-1}}\,.
\eqno{(2.10)}$$
\noindent This inverse metric satisfies the relations
$$G^{\mu\alpha\beta\gamma}\,G_{\nu\alpha\beta\gamma}\,=\,\delta^\mu_\nu\,.
\eqno{(2.11)}$$
\noindent That eq. (2.11) holds true for $G^{\mu\nu\lambda\rho}$ as defined in
(2.10) can be verified by hand in the two-dimensional case and with computer
algebraic manipulation for three and four dimensions.$^8$ Now, densities of
weight one can be constructed in terms of the quantity $G^{1\slash 4}$.
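In the same spirit as the computer-algebra verification just mentioned, eq. (2.11) can be spot-checked numerically. The script below (an illustration, not taken from the original text) builds a generic totally symmetric tensor in $d=2$ and $d=3$ as a separable piece plus random noise, evaluates (2.9) and (2.10) by explicit epsilon contractions, and verifies (2.11).

```python
import itertools
from math import factorial

import numpy as np

def levi_civita(d):
    eps = np.zeros((d,) * d)
    for perm in itertools.permutations(range(d)):
        eps[perm] = np.linalg.det(np.eye(d)[list(perm)])  # permutation sign
    return eps

def sym4(T):
    """Total symmetrization over the four indices."""
    return sum(np.transpose(T, p) for p in itertools.permutations(range(4))) / 24.0

def det_and_inverse(G):
    """Eqs. (2.9) and (2.10), written out for d = 2 and d = 3."""
    d = G.shape[0]
    e = levi_civita(d)
    if d == 2:
        det = np.einsum('aA,bB,cC,dD,abcd,ABCD', e, e, e, e, G, G) / factorial(d)
        inv = np.einsum('aA,bB,cC,dD,ABCD->abcd', e, e, e, e, G) / (factorial(d - 1) * det)
    else:
        det = np.einsum('aAB,bCD,cEF,dGH,abcd,ACEG,BDFH',
                        e, e, e, e, G, G, G, optimize=True) / factorial(d)
        inv = np.einsum('aAB,bCD,cEF,dGH,ACEG,BDFH->abcd',
                        e, e, e, e, G, G, optimize=True) / (factorial(d - 1) * det)
    return det, inv

rng = np.random.default_rng(1)
for d in (2, 3):
    # a non-separable totally symmetric metric: a separable piece plus noise
    G = sym4(np.einsum('ab,cd->abcd', np.eye(d), np.eye(d))) \
        + 0.3 * sym4(rng.normal(size=(d,) * 4))
    det, G_inv = det_and_inverse(G)
    # eq. (2.11): G^{mu alpha beta gamma} G_{nu alpha beta gamma} = delta^mu_nu
    assert np.allclose(np.einsum('mabc,nabc->mn', G_inv, G), np.eye(d))
```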
It is clear that fourth-rank geometry is observationally excluded at the
scale of distances of our daily life. However, a Riemannian behaviour can be
obtained for separable spaces. A space is said to be separable if
$G_{\mu\nu\lambda\rho}$ is of the form
$$G_{\mu\nu\lambda\rho}\,=\,g_{(\mu\nu}\,g_{\lambda\rho )}\,=\,{1\over3}\,(g_
{\mu\nu}\,g_{\lambda\rho}\,+\,g_{\mu\lambda}\,g_{\nu\rho}\,+\,g_{\mu\rho}\,g_
{\nu\lambda})\,.\eqno{(2.12)}$$
\noindent In this case formula (2.8) reduces to (2.4). Separable metrics can also
be used as a quality control of later formal developments. In fact, all the
results and developments obtained for a generic metric
$G_{\mu\nu\lambda\rho}$ must reduce to those for Riemannian geometry when
applied to separable metrics.
In the case of a separable metric the determinant and the inverse metric are
given by
$$G\,=\,g^2\,,\eqno{(2.13a)}$$
$$G^{\mu\nu\lambda\rho}\,=\,{3\over{d+2}}\,g^{(\mu\nu}\,g^{\lambda\rho )}\,.
\eqno{(2.13b)}$$
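As a quality-control check of the kind just described, eq. (2.13b) can be verified numerically for a separable metric in $d=2$. (The overall normalization of the determinant depends on how the $\epsilon$-symbols are scaled, so the script below, an illustration not taken from the original text, targets the inverse-metric relation (2.13b), which is insensitive to that choice, together with (2.11).)

```python
import itertools
from math import factorial

import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])    # 2-d Levi-Civita symbol

def sym4(T):
    return sum(np.transpose(T, p) for p in itertools.permutations(range(4))) / 24.0

d = 2
rng = np.random.default_rng(2)
m = rng.normal(size=(d, d))
g = m @ m.T + d * np.eye(d)                  # symmetric positive-definite g
G = sym4(np.einsum('ab,cd->abcd', g, g))     # separable metric, eq. (2.12)

# eqs. (2.9)/(2.10) written out for d = 2
det = np.einsum('aA,bB,cC,dD,abcd,ABCD', eps, eps, eps, eps, G, G) / factorial(d)
G_inv = np.einsum('aA,bB,cC,dD,ABCD->abcd', eps, eps, eps, eps, G) / (factorial(d - 1) * det)

# eq. (2.13b): the inverse of a separable metric is separable, with factor 3/(d+2)
g_inv = np.linalg.inv(g)
assert np.allclose(G_inv, 3.0 / (d + 2) * sym4(np.einsum('ab,cd->abcd', g_inv, g_inv)))
# and the defining relation (2.11) holds
assert np.allclose(np.einsum('mabc,nabc->mn', G_inv, G), np.eye(d))
```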
Let us finish this Section with some considerations on the curvature
properties of manifolds. Curvature properties are described by the curvature
tensor
$$R^\lambda_{\rho\mu\nu}\,=\,\partial_\mu{\Gamma^\lambda}_{\nu\rho}\,-\,
\partial_\nu{\Gamma^\lambda}_{\mu\rho}\,+\,{\Gamma^\lambda}_{\mu\sigma}\,
{\Gamma^\sigma}_{\nu\rho}\,-\,{\Gamma^\lambda}_{\nu\sigma}\,{\Gamma^\sigma}_{
\mu\rho}\,,\eqno{(2.14)}$$
\noindent constructed in terms of a connection ${\Gamma^\lambda}_{\mu\nu}$.
The metric and the connection are, in general, independent objects. They can
be related through a metricity condition. In Riemannian geometry the
metricity condition reads
$$\nabla_\lambda g_{\mu\nu}\,=\,\partial_\lambda g_{\mu\nu}\,-\,{\Gamma^\rho}
_{\lambda\mu}\,g_{\rho\nu}\,-\,{\Gamma^\rho}_{\lambda\nu}\,g_{\mu\rho}\,=\,0
\,.\eqno{(2.15)}$$
\noindent The number of unknowns for a symmetric connection ${\Gamma^\lambda}_{\mu
\nu}$ and the number of equations (2.15) are the same, ${1\over2}d^2(d+1)$.
Therefore, since this is an algebraic linear system, the solution is unique
and is given by the familiar Christoffel symbols of the second kind
$${\Gamma^\lambda}_{\mu\nu}\,=\,\lbrace^\lambda_{\mu\nu}\rbrace(g)\,=\,{1
\over2}\,g^{\lambda\rho}\,(\partial_\mu g_{\nu\rho}\,+\,\partial_\nu g_{\mu
\rho}\,-\,\partial_\rho g_{\mu\nu})\,.\eqno{(2.16)}$$
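That the Christoffel symbols (2.16) indeed solve the metricity condition (2.15) can be confirmed symbolically. The sketch below uses the round two-sphere metric purely as an illustrative example (the choice of metric is ours, not the paper's):

```python
import sympy as sp

# a sample 2-d metric (the round sphere), chosen only for illustration
th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th) ** 2]])
g_inv = g.inv()
d = 2

# Christoffel symbols of the second kind, eq. (2.16)
def Gamma(l, m, n):
    return sp.Rational(1, 2) * sum(
        g_inv[l, r] * (sp.diff(g[n, r], x[m]) + sp.diff(g[m, r], x[n])
                       - sp.diff(g[m, n], x[r]))
        for r in range(d))

# metricity condition, eq. (2.15): nabla_lambda g_{mu nu} = 0
for l in range(d):
    for m in range(d):
        for n in range(d):
            cov = sp.diff(g[m, n], x[l]) \
                - sum(Gamma(r, l, m) * g[r, n] for r in range(d)) \
                - sum(Gamma(r, l, n) * g[m, r] for r in range(d))
            assert sp.simplify(cov) == 0
```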
\noindent Therefore, in Riemannian geometry one can talk of the curvature
properties of a metric $g_{\mu\nu}$. This can be done because there exists a
natural connection, the Christoffel symbol of the second kind, in terms of
which we can construct a curvature tensor.
In the case of a fourth-rank metric a condition analogous to (2.15) would
read
$$\nabla_\mu G_{\alpha\beta\gamma\delta}\,=\,\partial_\mu G_{\alpha\beta
\gamma\delta}\,-\,{\Gamma^\nu}_{\mu\alpha}\,G_{\nu\beta\gamma\delta}\,-\,{
\Gamma^\nu}_{\mu\beta}\,G_{\alpha\nu\gamma\delta}\,-\,{\Gamma^\nu}_{\mu
\gamma} \,G_{\alpha\beta\nu\delta}\,-\,{\Gamma^\nu}_{\mu\delta}\,G_{\alpha
\beta\gamma\nu}\,=0\,.\eqno{(2.17)}$$
\noindent However, in this case, the number of unknowns ${\Gamma^\lambda}_{\mu\nu}$
is, as before, ${1\over2}d^2(d+1)$, while the number of equations is
$${1\over24}\,d^2\,(d\,+\,1)\,(d\,+\,2)\,(d\,+\,3)\,>\,{1\over2}\,d^2\,(d\,+
\,1)\,.\eqno{(2.18)}$$
\noindent Therefore the system is overdetermined and some differential-algebraic
conditions must be satisfied by the metric. Since, in general, such
restrictions will not be satisfied by a generic metric, one must deal with
${\Gamma^\lambda}_{\mu\nu}$ and $G_{\mu\nu\lambda\rho}$ as independent
objects. Therefore, for physical applications, the connection and the metric
must be considered as independent fields.
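The counting behind (2.18) is elementary and can be checked directly. The snippet below (illustrative only) compares, for $d=2,3,4$, the number of unknowns in a symmetric connection with the number of equations contained in the metricity condition (2.17):

```python
from math import comb

# unknowns: a symmetric connection Gamma^lambda_{mu nu};
# equations: eq. (2.17), one per value of mu and per independent
# component of the totally symmetric G_{alpha beta gamma delta}
for d in (2, 3, 4):
    unknowns = d * d * (d + 1) // 2          # = (1/2) d^2 (d+1)
    equations = d * comb(d + 3, 4)           # = (1/24) d^2 (d+1)(d+2)(d+3)
    assert equations == d * d * (d + 1) * (d + 2) * (d + 3) // 24
    assert equations > unknowns              # the system is overdetermined
```

For $d=4$ this gives 140 equations for 40 unknowns, so the overdetermination is substantial.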
A metricity condition can be imposed consistently only if the number of
independent components of the metric is less than that naively implied by
(2.18). The maximum acceptable number of independent components is ${1\over2}
d(d+1)$. This can be achieved, for instance, if the metric is a separable
one. Furthermore, one can verify that in this case the metricity condition
(2.17) reduces to the usual metricity condition (2.15) for the metric
$g_{\mu\nu}$ and therefore ${\Gamma^\lambda}_{\mu\nu}$ is precisely that for
Riemannian geometry, i.e., the Christoffel symbol of the second kind.
\section{3.}{Conformal Fourth-Rank Gravity}
In this Section we develop a theory for the gravitational field based on
fourth-rank geometry. The use of fourth-rank geometry is motivated by the
following considerations. At very high energies the masses of particles
involved in physical processes become negligible as compared to the energies,
in fact they can be set equal to zero. Therefore, there is no fundamental
mass setting the scale of energies, and all physical processes must be scale
invariant. This speculation is confirmed by experiments, such as deep
inelastic scattering, which show that, in fact, at very high energies,
physical processes are scale invariant. It can furthermore be shown, from a
mathematical point of view, that scale invariance is equivalent to conformal
invariance. Therefore, high-energy physics is associated with a geometry
exhibiting, in a model-independent way, conformal invariance in 4 dimensions.
In another work$^4$ we show that the critical dimension, for which field
theories are integrable, is equal to the rank of the metric. Therefore, if we
want to construct a field theory in 4 dimensions showing agreement with the
observed conformal invariance at high energies we must take recourse to
fourth-rank geometry.
We arrive therefore at the following scheme: at short distances,
high-energies, the geometry is of fourth-rank while at large distances, low
energies, the geometry is of second-rank, Riemannian. It is clear furthermore
that the Riemannian behaviour of the geometry must be recovered as the
low-energy limit of the high-energy theory. This would be possible if at
low-energies the fourth-rank metric tensor $G_{\mu\nu\lambda\rho}$ becomes
separable. In this case the line element factors and one is back to the
Riemannian case. This would explain why the universe, even when described by
a fourth-rank metric, looks Riemannian at large, low energy, scales. The
problem is now to obtain this Riemannian behaviour as the low-energy regime
of some field theory.
The conformal invariance requirement determines, almost uniquely, the
geometrical invariant to be used as Lagrangian. The field equations exhibit
three energy regimes: low, medium, and high. In the low-energy regime there
is no matter and the fourth-rank metric is separable, $G_{\mu\nu\lambda\rho}=
g_{(\mu\nu}g_{\lambda\rho )}$. Then the line element factors and one is back
to the Riemannian case. In the medium-energy regime the geometry is still
Riemannian, $G_{\mu\nu\lambda\rho}=g_{(\mu\nu}g_{\lambda\rho )}$, but there
is matter involved in the game. In this case the gravitational field couples
in a different way, as compared to General Relativity, to matter. Lastly, we
have the true high-energy regime in which there is matter and the geometry is
truly fourth-rank. These energy regimes, and their observational
consequences, are further analysed in Sections 4, 5 and 6.
\subsection{3.1}{Fourth-Rank Gravitational Equations}
As in General Relativity, in order to describe the dynamics of the
gravitational field we need to construct a geometrical invariant. From the
metric alone it is impossible to construct any invariant, apart from the
trivial solution: a constant. Therefore we must take recourse to another
geometrical object. The necessary object is the Ricci tensor for an arbitrary
connection ${\Gamma^\lambda}_{\mu\nu}$, which is obtained as a contraction of
the Riemann tensor, defined in (2.14),
$$R_{\mu\nu}\,=\,R^\lambda_{\mu\lambda\nu}\,=\,\partial_\lambda{\Gamma^
\lambda}_{\mu\nu}\,-\,\partial_\nu{\Gamma^\lambda}_{\lambda\mu}\,+\,{\Gamma^
\lambda}_{\lambda\sigma}\,{\Gamma^\sigma}_{\mu\nu}\,-\,{\Gamma^\lambda}_{\mu
\sigma}\,{\Gamma^\sigma}_{\lambda\nu}\,.\eqno{(3.1)}$$
\noindent The simplest invariants which can be constructed with the metric
$G_{\mu\nu\lambda\rho}$ and the Ricci tensor $R_{\mu\nu}$ are
$$\langle R^2\rangle\,=\,G^{\mu\nu\lambda\rho}\,R_{\mu\nu}\,R_{\lambda\rho}
\,,$$
$$\langle R^4\rangle\,=\,G^{\mu\nu\lambda\rho}\,G^{\alpha\beta\gamma\delta}\,
R_{\mu\alpha}\,R_{\nu\beta}\,R_{\lambda\gamma}\,R_{\rho\delta}\,,\quad etc.
\eqno{(3.2)}$$
\noindent The Lagrangian therefore will be of the form
$${\cal L}\,=\,L(\langle R^2\rangle,\,\langle R^4\rangle,\,\cdots)\,G^{1
\slash4}\,,\eqno{(3.3)}$$
\noindent where $G$ is the determinant of $G_{\mu\nu\lambda\rho}$. The scalar
function $L$ to be put in (3.3) should make the Lagrangian a conformally
invariant function. Under rescalings of the metric
$$G_{\mu\nu\lambda\rho}\,\rightarrow\,\lambda\,G_{\mu\nu\lambda\rho}\,,
\eqno{(3.4)}$$
\noindent the inverse metric $G^{\mu\nu\lambda\rho}$ and $G^{1\slash4}$ transform
as
$$G^{\mu\nu\lambda\rho}\,\rightarrow\,\lambda^{-1}\,G^{\mu\nu\lambda\rho}\,,
\eqno{(3.5a)}$$
$$G^{1\slash4}\,\rightarrow\,\lambda\,G^{1\slash 4}\,.\eqno{(3.5b)}$$
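The conformal weights (3.5a) and (3.5b) can be confirmed numerically in $d=4$ by evaluating the epsilon contractions (2.9) and (2.10) directly. The script below is only a sketch (the test tensor is a hypothetical separable-plus-noise metric of our own choosing):

```python
import itertools
from math import factorial

import numpy as np

def levi_civita(d):
    eps = np.zeros((d,) * d)
    for perm in itertools.permutations(range(d)):
        eps[perm] = np.linalg.det(np.eye(d)[list(perm)])
    return eps

def sym4(T):
    return sum(np.transpose(T, p) for p in itertools.permutations(range(4))) / 24.0

d = 4
e = levi_civita(d)
rng = np.random.default_rng(3)
G = sym4(np.einsum('ab,cd->abcd', np.eye(d), np.eye(d))) \
    + 0.2 * sym4(rng.normal(size=(d,) * 4))

def det4(G):
    # eq. (2.9) for d = 4: four epsilons, four metric factors
    return np.einsum('aABC,bDEF,cGHI,dJKL,abcd,ADGJ,BEHK,CFIL',
                     e, e, e, e, G, G, G, G, optimize=True) / factorial(d)

def inv4(G):
    # eq. (2.10) for d = 4: four epsilons, three metric factors
    return np.einsum('aABC,bDEF,cGHI,dJKL,ADGJ,BEHK,CFIL->abcd',
                     e, e, e, e, G, G, G, optimize=True) / (factorial(d - 1) * det4(G))

lam = 2.7
# eq. (3.5a): the inverse metric carries conformal weight -1
assert np.allclose(inv4(lam * G), inv4(G) / lam)
# eq. (3.5b): in d = 4 the determinant scales as lambda^4, so G^{1/4} has weight +1
assert np.isclose(det4(lam * G), lam ** 4 * det4(G))
```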
\noindent Therefore the Lagrangian should be of the form
$${\cal L}\,=\,[\alpha\,\langle R^2\rangle\,+\,\beta\,{{\langle R^4\rangle}
\over{\langle R^2\rangle}}\,+\,\cdots]\,G^{1\slash4}\,.\eqno{(3.6)}$$
\noindent However, all the terms after the first one are highly non-local.
Therefore, the only sensible solution is
$${\cal L}_{CG}\,=\,\kappa_{CG}\,\langle R^2\rangle\,G^{1\slash4}\,,
\eqno{(3.7)}$$
\noindent where the coupling constant
$$\kappa_{CG}\,\approx\,\kappa_E\,{L_{Planck}}^2\,=\,{{\hbar c}\over{8\pi}}
\,,\eqno{(3.8)}$$
\noindent is the Einstein gravitational constant $\kappa_E={c^4\over{8\pi G}}$,
times a constant of the order of ${L_{Planck}}^2$.
The above is the analogue of the Palatini Lagrangian for General Relativity.
But now, since there is no metricity condition, a Lagrangian analogous to the
Einstein-Hilbert one simply does not exist.
The total Lagrangian must also include the contributions of matter and is
given by
$${\cal L}\,=\,{\cal L}_{CG}\,+\,{\cal L}_{matter}\,.\eqno{(3.9)}$$
\noindent Variation with respect to the connection gives
$${{\delta{\cal L}}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,=\,{{\delta{\cal
L}_{CG}}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,+\,{{\delta{\cal L}_{matter}
}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,=\,0\,,\eqno{(3.10)}$$
\noindent where
$${{\delta{\cal L}_{CG}}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,=\,{{
\partial{\cal L}_{CG}}\over{\partial{\Gamma^\lambda}_{\mu\nu}}}\,-\,d_\rho\left
({{\partial{\cal L}_{CG}}\over{\partial(\partial_\rho{\Gamma^\lambda}_{\mu\nu
})}}\right)$$
$$=\,\gamma^{\alpha\beta}\,[{1\over2}\,(\delta^\nu_\lambda\,{\Gamma^\mu}_{
\alpha\beta}\,+\,\delta^\mu_\lambda\,{\Gamma^\nu}_{\alpha\beta})\,+\,\delta^
\mu_\alpha\,\delta^\nu_\beta\,{\Gamma^\sigma}_{\lambda\sigma}\,-\,\delta^\nu_
\beta\,{\Gamma^\mu}_{\lambda\alpha}\,-\,\delta^\mu_\beta\,{\Gamma^\nu}_{
\lambda\alpha}]\,G^{1\slash4}$$
$$-\,d_\rho[\gamma^{\alpha\beta}\,\left(\delta^\rho_\lambda\,\delta^\mu_\beta
\,\delta^\nu_\alpha\,-\,{1\over2}\,\delta^\rho_\beta\,(\delta^\mu_\lambda\,
\delta^\nu_\alpha\,+\,\delta^\nu_\lambda\,\delta^\mu_\alpha)\right)\,G^{1
\slash4}]\,,\eqno{(3.11)}$$
\noindent with
$$\gamma^{\alpha\beta}\,=\,G^{\alpha\beta\gamma\delta}\,R_{\gamma\delta}\,,
\eqno{(3.12)}$$
\noindent (for simplicity, we have omitted $\kappa_{CG}$). In all known cases of
physical interest the matter Lagrangian does not depend on the
connection.$^5$ Therefore the second term in (3.10) vanishes and one remains
with a metricity condition which has the solution
$${\Gamma^\lambda}_{\mu\nu}\,=\,\lbrace^\lambda_{\mu\nu}\rbrace(\gamma)\,=\,
{1\over2}\,\gamma^{\lambda\rho}\,(\partial_\mu\gamma_{\nu\rho}\,+\,\partial_
\nu\gamma_{\nu\rho}\,-\,\partial_\rho\gamma_{\mu\nu})\,,\eqno{(3.13)}$$
\noindent i.e., the connection is the Christoffel symbol of the second kind for the
tensor $\gamma_{\mu\nu}$, which we have assumed to be regular. We can
therefore write
$$R_{\mu\nu}(\Gamma)\,=\,R_{\mu\nu}(\gamma)\,.\eqno{(3.14)}$$
\noindent Furthermore
$$\langle R^2\rangle\,=\,G^{\mu\nu\lambda\rho}\,R_{\mu\nu}\,R_{\lambda\rho}\,
=\,\gamma^{\mu\nu}\,R_{\mu\nu}(\gamma)\,=\,R(\gamma)\,.\eqno{(3.15)}$$
Variation with respect to $G^{\mu\nu\lambda\rho}$,
$${{\delta{\cal L}}\over{\delta G^{\mu\nu\lambda\rho}}}\,=\,{{\partial{\cal L
}}\over{\partial G^{\mu\nu\lambda\rho}}}\,-\,d_\sigma\left({{\partial{\cal L}
}\over{\partial (\partial_\sigma G^{\mu\nu\lambda\rho})}}\right)\,=\,0\,,
\eqno{(3.16)}$$
\noindent gives
$$\kappa_{CG}\,[ R_{(\mu\nu}\,R_{\lambda\rho )}\,-\,{1\over4}\,\langle R^2
\rangle\,G_{\mu\nu\lambda\rho}]\,=\,T_{\mu\nu\lambda\rho}\,,\eqno{(3.17)}$$
\noindent where
$$T_{\mu\nu\lambda\rho}\,=\,-\,G^{-1\slash4}\,{{\delta{\cal L}_{matter}}\over
{\delta G^{\mu\nu\lambda\rho}}}\,,\eqno{(3.18)}$$
\noindent is the fourth-rank energy-momentum tensor.
More information can be obtained from eq. (3.17) by observing that the
energy-momentum tensor must decompose into one part proportional to the
metric and another part which is a separable tensor. In order to accommodate
all the symmetries it is necessary to have
$$T_{\mu\nu\lambda\rho}\,=\,{{{L_{Planck}}^4}\over{\kappa_{CG}}}\,[S_{4,(\mu
\nu}\,S_{4,\lambda\rho)}\,-\,{1\over4}\,\langle{S_4}^2\rangle\,G_{\mu\nu
\lambda\rho}]\,,\eqno{(3.19)}$$
\noindent where
$$\langle{S_4}^2\rangle\,=\,G^{\mu\nu\lambda\rho}\,S_{4,\mu\nu}\,S_{4,\lambda
\rho}\,.\eqno{(3.20)}$$
\noindent In this case the field equations reduce to the simple form
$$\kappa_E\,R_{\mu\nu}(\gamma)\,=\,\pm\,S_{4,\mu\nu}\,;\eqno{(3.21)}$$
\noindent and, as a further consequence we have
$$\kappa_E\,R(\gamma)\,=\,\kappa_E\,\langle R^2\rangle\,=\,\langle{S_4}^2
\rangle\,=\,{S_4}^2(\gamma)\,,\eqno{(3.22)}$$
\noindent where $S_4(\gamma )=\gamma^{\mu\nu}S_{4,\mu\nu}(\gamma )$. One would be
tempted to replace $S_{4,\mu\nu}$ by the reduced energy-momentum tensor
appearing in (B.14). However, that tensor is derived from a Lagrangian
containing a metric $g_{\mu\nu}$, an object which is, in principle, absent in
fourth-rank geometry. Concerning the $\pm$ sign in (3.21), this must be
determined by taking recourse to some application, as will be done in Section
4.
\subsection{3.2}{The Different Energy Regimes}
The field equations (3.17) exhibit three energy regimes: low, medium and
high. In the low-energy regime there is no matter and therefore the geometry
is Riemannian, $G_{\mu\nu\lambda\rho}=g_{(\mu\nu}g_{\lambda\rho)}$, as can
be read from (3.17). In this case the field equations do not reduce to the
Einstein field equations in vacuum. In the medium-energy regime the geometry
is still Riemannian, $G_{\mu\nu\lambda\rho}=g_{(\mu\nu}g_{\lambda\rho)}$, but
now there is matter in the game. This possibility is not excluded as a closer
analysis of eqs. (3.17) reveals. Finally, we have the true high-energy regime
in which there is matter and the geometry is truly fourth-rank.
\subsubsection{3.2.1.}{The Low-Energy Regime}
In the low-energy regime ${\cal L}_{matter}=0$ and then the field equations
reduce to
$$R_{(\mu\nu}\,R_{\lambda\rho )}\,-\,{1\over4}\,\langle R^2\rangle\,G_{\mu\nu
\lambda\rho}\,=\,0\,.\eqno{(3.23)}$$
\noindent The only sensible solution is
$$G_{\mu\nu\lambda\rho}\,=\,g_{(\mu\nu}\,g_{\lambda\rho)}\,,\eqno{(3.24a)}$$
$$R_{\mu\nu}\,=\,{1\over2}\,{\langle R^2\rangle}^{1\slash2}\,g_{\mu\nu}\,,
\eqno{(3.24b)}$$
\noindent and therefore the geometry is Riemannian.
The tensor $\gamma^{\mu\nu}$ is given by
$$\gamma^{\mu\nu}\,=\,{1\over2}\,{\langle R^2\rangle}^{1\slash2}\,g^{\mu\nu}
\,,\eqno{(3.25)}$$
$$\gamma_{\mu\nu}\,=\,2\,{\langle R^2\rangle}^{-1\slash2}\,g_{\mu\nu}\,=\,2\,
R^{-1\slash2}(\gamma)\,g_{\mu\nu}\,.\eqno{(3.26)}$$
\noindent Then, eq. (3.24b) is rewritten as
$$R_{\mu\nu}(\gamma)\,-\,{1\over4}\,R(\gamma)\,g_{\mu\nu}=0\,.\eqno{(3.27)}$$
\noindent One must therefore compute equations (3.27) for a tensor $\gamma_{\mu
\nu}$, and then the physical metric $g_{\mu\nu}$ is obtained from (3.26).
Let us furthermore observe that the dimensions of $\gamma^{\mu\nu}$,
$\gamma_{\mu\nu}$, $R_{\mu\nu}(\gamma)$ and $R(\gamma )$ are given by
$$dim(\gamma^{\mu\nu})\,=\,L^{-2}\,,$$
$$dim(\gamma_{\mu\nu})\,=\,L^2\,,$$
$$dim(R_{\mu\nu}(\gamma))\,=\,L^{-2}\, ,$$
$$dim(R(\gamma))\,=\,L^{-4}\,.\eqno{(3.28)}$$
Let us now rewrite the field equations (3.27) in terms of the metric
$g_{\mu\nu}$. Let us start by rewriting eq. (3.26) as
$$\gamma_{\mu\nu}\,=\,\lambda^2\,e^\psi\,g_{\mu\nu}\,,\eqno{(3.29)}$$
\noindent where
$$\lambda^2\,e^\psi\,=\,2\,R^{-1\slash2}(\gamma)\,,\eqno{(3.30)}$$
\noindent and $\lambda$ has dimensions of length. Therefore
$$R(\gamma)\,=\,{4\over{\lambda^4}}\,e^{-2\psi}\,.\eqno{(3.31)}$$
The Ricci tensors are related by$^9$
$$R_{\mu\nu}(\gamma)\,=\,R_{\mu\nu}(g)\,+\,\nabla_\mu\psi_\nu\,-\,{1\over2}\,
\psi_\mu\,\psi_\nu\,+\,{1\over2}\,g_{\mu\nu}\,({\nabla_g}^2\psi\,+\,g^{
\alpha\beta}\,\psi_\alpha\,\psi_\beta)\,,\eqno{(3.32)}$$
\noindent while the scalar curvatures are related by
$$R(\gamma)\,=\,{1\over{\lambda^2}}\,e^{-\psi}\,[ R(g)\,+\,3\,{\nabla_g}^2
\psi\,+\,{3\over2}\,g^{\alpha\beta}\,\psi_\alpha\,\psi_\beta]\,.
\eqno{(3.33)}$$
\noindent The field equations are rewritten as
$$R_{\mu\nu}(g)\,+\,\nabla_\mu\psi_\nu\,-\,{1\over2}\,\psi_\mu\,\psi_\nu\,-\,
{1\over4}\, g_{\mu\nu}\,[ R(g)\,+\,{\nabla_g}^2\psi\,-\,{1\over2}\,g^{\alpha
\beta}\,\psi_\alpha\,\psi_\beta]\,=\,0\,.\eqno{(3.34)}$$
Combining (3.31) and (3.33) we obtain the differential equation for the
conformal factor $\psi$
$$e^{-\psi}\,[ R(g)\,+\,3\,{\nabla_g}^2\psi\,+\,{3\over2}\,g^{\alpha\beta}\,
\psi_\alpha\,\psi_\beta]\,=\,{4\over{\lambda^2}}\,e^{-2\psi}\,.
\eqno{(3.35)}$$
As mentioned in the introduction, in vacuum, General Relativity is in
excellent agreement with observation. Therefore, in this regime, our theory
must coincide with General Relativity. This is not evident from eqs. (3.27)
and in fact they are not equivalent. Therefore, the equivalence must be
established at the level of the solutions rather than of the field equations.
This regime is further explored in Section 4.
\subsubsection{3.2.2.}{The Medium-Energy Regime}
In the medium-energy regime the metrics $G_{\mu\nu\lambda\rho}$ and
$g_{\mu\nu}$ are still related by (3.24a). In this case it is therefore
reasonable to replace $S_{4,\mu\nu}$ with that appearing in (B.14)
$$\kappa_E\,R_{\mu\nu}(\gamma)\, =\,\pm\,S_{2,\mu\nu}(g)\,.\eqno{(3.36)}$$
\noindent However, the field equations (3.36) are not equivalent to Einstein field
equations since the Ricci tensor appearing here is for the tensor
$\gamma_{\mu\nu}$ and not for the metric $g_{\mu\nu}$. The above choice is a
delicate point since other mechanisms of coupling the fourth-rank geometry
with "second-rank" matter can be conceived. For example one can consider
$$\kappa_E\,G_{\mu\nu}(\gamma)\,=\,\pm\,T_{2,\mu\nu}(g)\,,\eqno{(3.37)}$$
\noindent which is not equivalent to (3.36). We have tested this and other
possibilities and we have concluded that (3.36) is the correct choice. Which
sign is to be chosen in (3.36) must be decided by considering some
application.
The large scale geometric structure of our universe seems to be well
described by Riemannian geometry, and since matter is involved, its
description belongs to the medium-energy regime. We explore this possibility in
Section 5.
\subsubsection{3.2.3.}{The High-Energy Regime}
In this case $G_{\mu\nu\lambda\rho}$ is not a separable metric. Furthermore,
the energy-momentum tensor of matter is traceless, a property which is
equivalent to scale invariance. Therefore, scale invariance is present at
very high-energies, and one can confidently consider this as the high-energy
regime of the theory. This regime is further explored in Section 6.
\section{4.}{The Low-Energy Regime}
Now we explore the consequences of our field equations in the absence of
matter. Our field equations do not reduce to the vacuum Einstein field
equations. Therefore, the observational equivalence must be established at
the level of the solutions rather than at the level of the field equations.
\subsection{4.1.}{Static Spherically Symmetric Fields}
In order to check the validity of the field equations (3.27) we consider the
standard test of a spherically symmetric field. The line element is given by
$${ds}^2\,=\,A(r)\,{dt}^2\,-\,B(r)\,{dr}^2\,-\,r^2\,{d\Omega}^2\,,
\eqno{(4.1)}$$
\noindent where
$${d\Omega}^2\,=\,{d\theta}^2\,+\,{\sin}^2\theta\,{d\varphi}^2\,.
\eqno{(4.2)}$$
\noindent One can certainly assume that $R(\gamma)$ will be a function of $r$ only.
The physical metric $g_{\mu\nu}$ and the tensor $\gamma_{\mu\nu}$ are related
by
$$\gamma_{\mu\nu}\,=\,f(R(\gamma))\,g_{\mu\nu}\,=\,f(r)\,g_{\mu\nu}\,.
\eqno{(4.3)}$$
\noindent Therefore, the line element associated to the tensor $\gamma_{\mu\nu}$ is
$${ds_\gamma}^2\,=\,f(r)\,[ A(r)\,{dt}^2\,-\,B(r)\,{dr}^2\,-\,r^2\,{d\Omega}
^2]\,,\eqno{(4.4)}$$
\noindent and, by a redefinition of $r$, it can be rewritten as in (4.1)
$${ds_\gamma}^2\,=\,\lambda^2\,[{\bar A}(r)\,{dt}^2\,-\,{\bar B}(r)\,{dr}^2\,
-\, r^2\,{d\Omega}^2]\,,\eqno{(4.5)}$$
\noindent where $\lambda$ has dimensions of length.
The solution is the Kottler metric$^6$
$$\gamma_{00}\,=\,\lambda^2\,(a\,+\,{b\over r}\,+\,c\,r^2)\,,$$
$$\gamma_{11}\,=\,-\,\lambda^2\,{(a+{b\over r}+c\,r^2)}^{-1}\,.\eqno{(4.6)}$$
\noindent The associated scalar curvature $R(\gamma)$ is constant
$$R(\gamma)\,=\,12\,{c\over a}\,\lambda^{-2}\,.\eqno{(4.7)}$$
We can now write the field equations (3.27) in terms of the metric
$g_{\mu\nu}$
$$R_{\mu\nu}(g)\,-\,{1\over4}\,R(g)\,g_{\mu\nu}\,=\,0\,.\eqno{(4.8)}$$
\noindent This is possible due to the fact that the Ricci tensor is homogeneous of
degree zero in $g_{\mu\nu}$. Therefore, the constant conformal factor,
essentially (4.7), will cancel from the field equations. Then, the metric
$g_{\mu\nu}$ will be the Kottler metric as given in (4.6)
$$g_{00}\,=\,\left(a\,+\,{b\over r}\,+\,c\,r^2\right)\,,$$
$$g_{11}\,=\,-\,{\left(a\,+\,{b\over r}\,+\,c\,r^2\right)}^{-1}\,.
\eqno{(4.9)}$$
\noindent Now we can put $c=0$ and obtain the Schwarzschild metric.
This long detour was necessary in order to check that the limit
$c\rightarrow0$ was a consistent procedure.
\subsection{4.2.}{Comments}
This is the weakest energy regime and coincides with General Relativity.
Therefore, the Schwarzschild solution, the Newtonian limit and the properties
of gravitational radiation will be the same as for General Relativity. One
must therefore check not that the proper limit exists but that the
observable departures agree with observation. For this we must turn our
attention to the two following regimes.
\section{5.}{The Medium-Energy Regime. Fourth-Rank Cosmology}
Now we explore the consequences of our field equations in the medium-energy
regime. The ideal laboratory is the universe. The large scale geometry of the
universe seems to be Riemannian. Furthermore there is matter present.
Therefore, in the context of fourth-rank gravity, the description of the
universe belongs to the medium-energy regime. The metric
$G_{\mu\nu\lambda\rho}$ will be separable in terms of a metric $g_{\mu\nu}$
which we assume to be the FRW metric. Matter is described by a perfect fluid;
therefore we use the energy-momentum tensor appearing in (A.7).
When fourth-rank gravity is applied to cosmology one should deal with
equations analogous to the Einstein-Friedman equations of the Standard Model
of Cosmology. In fourth-rank gravity however, matter enters the field
equations in a non-linear way. An essential difference with respect to
General Relativity is the fact that the equations determining the evolution
of the universe involve not only the energy density and the pressure but also
their time derivatives. Therefore, in order to correctly deal with these
equations, one should provide a time dependent state equation. As a first
approach we restrict our considerations to the case of a time independent
state equation. Of course, this is quite a strong assumption.
The theory predicts an increasing total entropy such that the expansion of
the universe is an adiabatic non-isentropic process. Therefore, the
evolution of the universe, in the framework of fourth-rank cosmology is, as
expected on physical grounds, an irreversible process.
The following conclusions are obtained after incorporating $k_{obs}=0$.
For the early universe matter is described by a state equation in which $y
\approx{1\over3}$. In this case $q>0$ and from this fact one deduces the
existence of a very dense state of matter at some time in the past. Causality
is not violated for $t>t_{class}\approx{10}^{19}t_{Planck}\approx{10}^{-24}
s$. At earlier times quantum mechanical effects dominate the scene. In fact,
the radius of the universe is exactly the Compton wavelength associated with
its mass. Our classical approach breaks down so that the very concept of
causality is meaningless. Therefore, there is no violation of causality, or
horizon problem.
For the present Universe it is necessary to assume $q<0$; this does not
contradict the observed expansion of the universe from an initial hot ball.
In General Relativity $q>0$ and $a(t)$ is a convex function of $t$. Due to this
fact one is used to thinking of the evolution of the universe with $q>0$.
However, an evolution from an initial singularity may also be conceived with
a concave function, $q<0$. The field equations predict $\Omega\approx4y$,
where $y={p\over\rho}$. For the present universe we use $\Omega_{small}=0.01$
and we obtain $y_{pred}=2.5\times{10}^{-3}$. $y$ can be estimated from
the mean random velocity of typical galaxies to be
$y_{random}=1\times{10}^{-5}$.
\vfill\eject
\subsection{5.1.}{The Field Equations}
The field equations are
$$\kappa_E\,R_{\mu\nu}(\gamma)\,=\,\pm\,[(\rho\,+\,p)\,u_\mu\,u_\nu\,-\,{1
\over2}\,(\rho\,-\,p)\,g_{\mu\nu}]\,.\eqno{(5.1)}$$
\noindent On the other hand we have
$$\gamma^{\mu\nu}\,=\,G^{\mu\nu\lambda\rho}\,R_{\lambda\rho}\,=\,{1\over{3
\kappa_E}}\,[(\rho\,+\,p)\,u^\mu\,u^\nu\,-\,(\rho\,-\,2\,p)\,g^{\mu\nu}]\,.
\eqno{(5.2)}$$
The inverse of (5.2) is given by
$$\gamma_{\mu\nu}\,=\,3\,\kappa_E\,[{(\rho+p)\over{3p(\rho-2p)}}\,u_\mu\,u_
\nu\,-\,{1\over{(\rho-2p)}}\,g_{\mu\nu}]\,.\eqno{(5.3)}$$
\noindent There is a global $\pm$ sign in (5.2) and (5.3) which we have fixed at
will since the final result is independent of this choice.
The first step is to calculate the Ricci tensor for the metric
$\gamma_{\mu\nu}$. Let us start by writing the associated line element
$${ds_\gamma}^2\,=\,{1\over p}\,{dt}^2\,+\,{3\over{(\rho-2p)}}\,a^2\,{d\ell}
^2\,.\eqno{(5.4)}$$
\noindent We have omitted the constant in front of (5.3) since the final result is
independent of this factor. In what follows we assume $p>0$ and $\rho
-2p>0$. We assume furthermore that $\rho$ and $p$ are functions of $t$
only. Then we can introduce the new time coordinate
$$d\tau\,=\,{1\over{p^{1\slash2}}}\,dt\,.\eqno{(5.5)}$$
\noindent Then the line element (5.4) is rewritten as
$${ds}^2\,=\,{d\tau}^2\,+\,A^2\,{d\ell}^2\,,\eqno{(5.6)}$$
\noindent with
$$A\,=\,[{({3\over{(\rho-2p)}})}^{1\slash2}\,a](\tau)\,.\eqno{(5.7)}$$
\noindent The above is nothing more than a FRW line element with Euclidean
signature. We can therefore use eqs. (A.3) with $a^2\rightarrow-A^2$. The
Ricci tensor is then given by
$$R_{\mu\nu}(\gamma)\,=\,-\,{2\over{A^2}}\,[A\,A''\,-\,(-\,k\,+\,{A'}^2)]\,u_
\mu\,u_\nu\,-\,{1\over{A^2}}\,[A\,A''\,+\,2\,(-\,k\,+\,{A'}^2)]\,\gamma_{\mu
\nu}\,,\eqno{(5.8)}$$
\noindent where primes denote derivatives with respect to $\tau$. In the system of
coordinates involving $t$ the above expression is given by
$$R_{\mu\nu}(\gamma)\,=\,-\,{2\over{pA^2}}\,[A\,A''\,-\,(-\,k\,+\,{A'}^2)]\,
\delta^0_\mu\,\delta^0_\nu\,-\,{1\over{A^2}}\,[A\,A''\,+\,2\,(-\,k\,+\,{A'}^2
)]\,\gamma_{\mu\nu}\,.\eqno{(5.9)}$$
Comparison with the Ricci tensor obtained from the field equations, eq.
(5.1), gives
$$-\,3\,\kappa_E\,{1\over p}\,{A''\over A}\,=\,\pm\,{1\over2}\,(1\,+\,3\,y)\,
\rho\,,\eqno{(5.10a)}$$
$$-\,3\,\kappa_E\,{1\over{(\rho-2p)}}\,{1\over{A^2}}\,[ A\,A''\,+\,2\,(-\,k\,
+\,{A'}^2)]\,=\,\pm\,{1\over2}\,(1\,-\,y)\,\rho\,.\eqno{(5.10b)}$$
\noindent For the applications the field equations are better rewritten as
$$6\,\kappa_E\,{1\over{(\rho -2p)}}\,{1\over{A^2}}\,(-\,k\,+\,{A'}^2)\,=\,
\mp\,{1\over2}\,{{(1-4y-y^2)}\over{(1-2y)}}\,\rho\,,\eqno{(5.11a)}$$
$$-\,{{(1-4y-y^2)}\over{(1-2y)}}\,{1\over p}\,A\,A''\,+\,2\,(1\,+\,3\,y)\,
{1\over{(\rho-2p)}}\,(-\,k\,+\,{A'}^2)\,=\,0\,.\eqno{(5.11b)}$$
\noindent The field equations written in this form are of practical use since the
first one allows us to determine the value of $k$ when evaluated at the
present time. The second one allows us to determine the evolution of the
early universe.
\subsection{5.2.}{The Entropy of the Universe}
The entropy variation is governed by
$$T\,dS\,=\,dE\,+\,p\,dV\,=\,d(\rho\,a^3)\,+\,p\,d(a^3)$$
$$=\,{{(1+3y)(1-y^2)}\over{(1-4y-y^2)}}\,\rho\,a^2\,da\,+\,\rho\,a^3\,{{2(1+8
y^2+16y^3-5y^4)}\over{(1-2y)(1-2y+5y^2)(1-4y-y^2)}}\,dy\,.\eqno{(5.12)}$$
\noindent Since the radius of the universe grows at a rate much larger than that by
which $y$ decreases, the above quantity is positive. Hence the theory predicts,
in a natural way, an increasing total entropy of the universe. Thus,
fourth-rank cosmology predicts an adiabatic non-isentropic, and therefore
irreversible, expansion of the universe.
The next step is to go back to the time coordinate $t$. This requires
knowledge of the time dependence of $p$, i.e., a state equation. This is done
now.
\vfill\eject
\subsection{5.3.}{Constant y}
We assume that the almost pressureless regime has lasted for such a long time
that we can confidently work under the assumption that $p$ and $\rho$ are
constant, i.e., a time independent state equation. In this case the relevant
equations are obtained with the simple replacements
$$A\,\rightarrow{\left({3\over{(\rho-2p)}}\right)}^{1\slash2}\,a\,,
\eqno{(5.13a)}$$
$$(\,)'\,\rightarrow\,p^{1\slash2}\,\dot{(\,)}\,.\eqno{(5.13b)}$$
\noindent In this case eq. (5.10a) is rewritten as
$$-\,3\,\kappa_E\,{{\ddot a}\over a}\,=\,\pm\,{1\over2}\,(1\,+\,3\,y)\,\rho\,
.\eqno{(5.14)}$$
\noindent Equations (5.11) reduce to
$$6\,\kappa_E\,{1\over{a^2}}\,(-\,k\,+\,{3y\over{(1-2y)}}\,{\dot a}^2)\,=\,
\mp\,{3\over2}\,{{(1-4y-y^2)}\over{(1-2y)}}\,\rho\,,\eqno{(5.15a)}$$
$$-\,3\,{{(1-4y-y^2)}\over{(1-2y)}}a\,{\ddot a}\,+\,2\,(1\,+\,3\,y)\,
(-\,k\,+\,{3y\over{(1-2y)}}\,{\dot a}^2)\,=\,0\,.\eqno{(5.15b)}$$
\subsection{5.4.}{Incorporating Flatness}
Let us now incorporate the observed fact $k_{obs}=0$. Then, eqs. (5.15)
reduce to
$$18\,\kappa_E\,{y\over{(1-2y)}}\,{{{\dot a}^2}\over{a^2}}\,=\,\mp\,{3\over2}\,
{{(1-4y-y^2)}\over{(1-2y)}}\,\rho\,,\eqno{(5.16a)}$$
$$-\,3\,{{(1-4y-y^2)}\over{(1-2y)}}a\,{\ddot a}\,+\,6\,{y(1+3y)\over{(1-2y)}}
\,{\dot a}^2\,=\,0\,.\eqno{(5.16b)}$$
For the physically interesting cases, $0<y<{1\over3}$. Therefore, the previous
equations can be simplified to
$$12\,\kappa_E\,y\,{{{\dot a}^2}\over{a^2}}\,=\,\mp\,(1\,-\,4\,y\,-\,y^2)\,\rho
\,,\eqno{(5.17a)}$$
$$-\,(1\,-\,4\,y\,-\,y^2)\,a\,{\ddot a}\,+\,2\,y\,(1\,+\,3\,y)\,{\dot a}^2\,=\,
0\,.\eqno{(5.17b)}$$
In terms of the cosmological parameters these equations are rewritten as
$$\Omega\,=\,\mp{4y\over{1-4y-y^2}}\,.\eqno{(5.18a)}$$
$$q\,=\,\pm\,{1\over2}(1\,+3\,y)\,\Omega\,.\eqno{(5.18b)}$$
The only positive root of $(1-4y-y^2)$ is $\sqrt5-2\approx 0.236$. Therefore
we can distinguish two regimes:
I. $0.236<y<{1\over3}$. In this case we must choose the upper sign. The
resulting equations can be applied to the description of the early universe.
II. $0<y<0.236$. In this case we must choose the lower sign. The resulting
equations can be applied to the description of the present universe.
Let us observe that eq. (5.18a) becomes singular for $y=\sqrt5-2\approx0.236$.
There is no contradiction here since $\Omega={\rho\over{\rho_c}}$, with
$\rho_c=3\kappa_E H^2=3\kappa_E\,{{{\dot a}^2}\over{a^2}}$, and for
$y=\sqrt5-2$ we have ${\dot a}=0$, as can be read from eq. (5.17b).
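The two regimes and the sign choices in eqs. (5.18) can be spot-checked numerically. The Python sketch below is ours; in particular the encoding sign$=-1$ for the upper sign is an assumption of the sketch, not the text's notation:

```python
import math

def omega(y, sign):
    # Eq. (5.18a): Omega = -+ 4y/(1 - 4y - y^2); here sign = -1 encodes
    # the upper sign and sign = +1 the lower one.
    return sign * 4.0 * y / (1.0 - 4.0 * y - y * y)

def q(y, sign):
    # Eq. (5.18b): q = +- (1/2)(1 + 3y) Omega, with the opposite sign pairing.
    return -sign * 0.5 * (1.0 + 3.0 * y) * omega(y, sign)

y_star = math.sqrt(5.0) - 2.0          # positive root of 1 - 4y - y^2
assert abs(y_star - 0.236) < 1e-3

# Regime I (early universe, 0.236 < y < 1/3): upper sign, Omega > 0, q > 0.
assert omega(0.3, -1) > 0 and q(0.3, -1) > 0
# Regime II (present universe, 0 < y < 0.236): lower sign, Omega > 0, q < 0.
assert omega(0.1, +1) > 0 and q(0.1, +1) < 0
```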
\subsection{5.5.}{The Early Universe}
In this case $0.236<y<{1\over3}$ and eqs. (5.18) reduce to
$$\Omega\,=\,-\,{4y\over{1-4y-y^2}}\,.\eqno{(5.19a)}$$
$$q\,=\,{1\over2}\,(1\,+\,3\,y)\,\Omega\,.\eqno{(5.19b)}$$
\noindent Let us observe that eq. (5.19b) is the same as the one obtained in
General Relativity, {\it viz.} (C.2a). As in General Relativity one concludes
the existence of a singularity in the past. As explained in Appendix A,
since matter cannot be compressed beyond the Planck density, it is more
reasonable to consider an initial ball with finite radius. We call this the
"inflationary" stage of our model.
For the early universe matter is described by the state equation
$y={1\over3}$. Then, eqs. (5.17), with the upper sign, reduce to
$$9\,\kappa_E\,{{\dot a}^2\over a^2}\,=\,\rho\,,\eqno{(5.20a)}$$
$$a\,{\ddot a}\,+\,3\,{\dot a}^2\,=\,0\,.\eqno{(5.20b)}$$
\noindent The solution to eq. (5.20b) is
$$a\,=\,a_0\,{(1\,+\,4\,{{\dot a}_0\over{a_0}}\,t)}^{1\slash4}\,\approx\,a_0\,+
\,{\dot a}_0\,t\,.\eqno{(5.21)}$$
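That this expression solves eq. (5.20b) can be confirmed with a finite-difference check; $a_0$ and ${\dot a}_0$ below are arbitrary illustrative values of ours:

```python
a0, adot0 = 1.0, 0.5   # illustrative initial radius and expansion rate

def a(t):
    # Eq. (5.21): a(t) = a0 (1 + 4 (adot0/a0) t)^(1/4)
    return a0 * (1.0 + 4.0 * (adot0 / a0) * t) ** 0.25

h = 1e-4
for t in (0.0, 1.0, 10.0):
    adot = (a(t + h) - a(t - h)) / (2.0 * h)            # central difference
    addot = (a(t + h) - 2.0 * a(t) + a(t - h)) / h ** 2
    # Eq. (5.20b): a*addot + 3*adot^2 = 0
    assert abs(a(t) * addot + 3.0 * adot ** 2) < 1e-4
```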
\noindent In this approximation the horizon radius is
$$r_H(t)\,=\,{a_0\over{{\dot a}_0}}\,(1\,+\,{{\dot a}_0\over{a_0}}\,t)\,\ln
(1\,+\,{{\dot a}_0\over{a_0}}\,t)\,.\eqno{(5.22)}$$
Causality is not violated when $r_H(t)>a(t)/c$. This condition is satisfied
for
$$t\,>\,t_{class}\,\approx\,{a_0\over c}\,\approx\,10^{19}\,t_{Planck}\,
\approx\,10^{-24}\,s\,.\eqno{(5.23)}$$
\noindent At earlier times quantum mechanical effects dominate the scene. In
fact, the radius of the universe is exactly the Compton wavelength associated
with its mass. Our classical approach breaks down, so that the very concept of
causality is meaningless. Therefore, there is no violation of causality, and
no horizon problem.
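The time scale in eq. (5.23) follows from standard Planck-unit values, which we insert here for illustration (they are not quoted in the text):

```python
L_Planck = 1.616e-33   # cm (standard value)
t_Planck = 5.391e-44   # s  (standard value; L_Planck = c * t_Planck)
c = 2.998e10           # cm/s

a0 = 1.0e19 * L_Planck      # initial radius, eq. (A.25)
t_class = a0 / c            # eq. (5.23)

assert abs(t_class / t_Planck - 1.0e19) < 1e16
assert 1e-25 < t_class < 1e-23   # of order 10^-24 s
```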
\subsection{5.6.}{The Present Universe}
In this case $0<y<0.236$ and eqs. (5.18) reduce to
$$\Omega\,=\,{4y\over{1-4y-y^2}}\,.\eqno{(5.24a)}$$
$$q\,=\,-\,{1\over2}\,(1\,+\,3\,y)\,\Omega\,.\eqno{(5.24b)}$$
For the present universe matter is described by the state equation $y\approx
0$, i.e., almost pressureless matter. In this case eq. (5.17b) reduces to
$${\ddot a}\,\approx\,0\,.\eqno{(5.25)}$$
Therefore
$$a\,=\,\alpha\,t+\beta\,,\eqno{(5.26)}$$
\noindent where $\alpha$ and $\beta$ are integration constants. Therefore, at
the present epoch the radius of the universe grows linearly with time.
In this case eq. (5.24a) can be inverted to
$$y\,=\,{1\over\Omega}\,[\sqrt{4{(1\,+\,\Omega)}^2\,+\,\Omega^2}\,-\,2\,(1\,+
\,\Omega)]\,.\eqno{(5.27)}$$
Since matter today is almost pressureless, small values of $\Omega$ are
favoured by our equation. For the smallest reported value$^7$
$\Omega_{small}=0.01$ we obtain
$$y_{small}\,=\,2.48\,\times\,{10}^{-3}\,.\eqno{(5.28)}$$
\noindent This should be compared with the observed value of $p\over\rho$.
This can be determined from the mean random velocity of typical galaxies,
$\langle v\rangle=1\times{10}^3\,km\slash s$, and gives
$y_{random}=1\times{10}^{-5}$. Therefore, our prediction differs by two orders
of magnitude from the observed value. We hope to improve this situation since
the estimation of $y$ from the random motion of galaxies is quite a rough one.
Furthermore, eq. (5.27) was obtained under the assumption of a time
independent state equation.
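Eq. (5.27) and the value (5.28) are easy to verify numerically; the sketch below also round-trips the result through eq. (5.24a):

```python
import math

def y_of_omega(om):
    # Eq. (5.27): invert Omega = 4y/(1 - 4y - y^2) for y
    return (math.sqrt(4.0 * (1.0 + om) ** 2 + om ** 2)
            - 2.0 * (1.0 + om)) / om

y_small = y_of_omega(0.01)
assert abs(y_small - 2.48e-3) < 0.01e-3      # eq. (5.28)

# Round trip through eq. (5.24a):
om_back = 4.0 * y_small / (1.0 - 4.0 * y_small - y_small ** 2)
assert abs(om_back - 0.01) < 1e-12
```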
\vfill\eject
\subsection{5.7.}{Comments}
The cosmological model we have developed here shows a reasonable agreement
with observational results. In fact, we obtain field equations which, among
other things, predict an increasing entropy of the universe; are almost
consistent with the observed flatness of the universe; and do not violate
causality (the horizon problem). The model is still incomplete in that we have
not yet considered in detail the effects of a time dependent state equation
for matter and how this modifies the relations (5.18).
The calculation for $p<0$ is almost identical to that for $p>0$. Since one
is used to thinking of the evolution of the universe in terms of $q>0$, this
was the case we favoured in our previous works.$^{10,11,12}$ The entropy is
again an increasing function of time. In this case one also obtains a linear
growth of the radius of the universe.
\section{6.}{The High-Energy Regime}
Here we explore the high-energy regime of our theory. The form of the
Lagrangian, and of the field equations, for conformal fourth-rank gravity
puts several strong restrictions on the kind of matter which can be coupled
consistently to it. The first condition is that matter fields must be
described by conformally invariant Lagrangians in four dimensions. In fact,
the trace of eq. (3.27) gives
$$T_4\,=\,G^{\mu\nu\lambda\rho}\,T_{4,\mu\nu\lambda\rho}\,=\,0\,,
\eqno{(6.1)}$$
\noindent which is always satisfied by conformal fields.
The situation is similar to that for Einstein gravity in 2 dimensions. The
field equations are
$$\kappa_E\,[R_{\mu\nu}\,-\,{1\over2}\,R\,g_{\mu\nu}]\,=\,T_{\mu\nu}\,.
\eqno{(6.2)}$$
\noindent The trace of this equation gives
$$T_2\,=\,g^{\mu\nu}\,T_{2,\mu\nu}\,=\,0\,.\eqno{(6.3)}$$
\noindent However, the previous equation collapses to a useless identity since
$$R_{\mu\nu}\,-\,{1\over2}\,R\,g_{\mu\nu}\,\equiv\,0\,.\eqno{(6.4)}$$
\noindent In our case, however, this collapse does not occur.
As a second property of our field equations let us observe that the coupling
to conformal fields automatically excludes the existence of a cosmological
constant. In fact, from the Lagrangian
$${\cal L}_4\,=\,\kappa_{CG}\,(\langle R^2\rangle\,+\,\Lambda)\,G^{1\slash4}
\,,\eqno{(6.5)}$$
\noindent we obtain
$$\kappa_{CG}\,[ R_{(\mu\nu}\,R_{\lambda\rho)}\,-\,{1\over4}\,(\langle R^2
\rangle\,+\,\Lambda)\,G_{\mu\nu\lambda\rho}]\,=\,T_{\mu\nu\lambda\rho}\,.
\eqno{(6.6)}$$
\noindent The trace of this equation is
$$-\,\kappa_{CG}\,\Lambda\,=\,T_4\,,\eqno{(6.7)}$$
\noindent but conformal invariance, eq. (6.1), imposes
$$\Lambda\,=\,0\,.\eqno{(6.8)}$$
This situation is again similar to that for Einstein gravity in 2 dimensions.
We have
$${\cal L}_{GR}\,=\,\kappa_E\,(R\,+\,\Lambda)\,g^{1\slash2}\,.\eqno{(6.9)}$$
\noindent The field equations are
$$\kappa_E\,[R_{\mu\nu}\,-\,{1\over2}\,(R\,+\,\Lambda)\,g_{\mu\nu}]\,=\,
T_{\mu\nu}\,.\eqno{(6.10)}$$
\noindent The trace of this equation gives
$$-\,\kappa_E\,\Lambda\,=\,T_2\,,\eqno{(6.11)}$$
\noindent but conformal invariance, eq. (6.3), gives
$$\Lambda\,=\,0\,.\eqno{(6.12)}$$
Therefore the high-energy regime of our theory exhibits all the properties
relevant to a conformal model. In fact, it can be consistently coupled to
conformal fields; cf. Ref. 4 for further details. Furthermore, it predicts a
cosmological constant which is exactly zero.
\section{7.}{Conclusions}
The results reported here are the product of more than one year of effort. We
elaborated many previous versions which were corrected again and again, and
our work was often plagued by false starts.
The conception of new geometries has taught us the importance of observation
in physics and the close relation existing between physics and geometry.
The fourth-rank geometry combined with the observed scale invariance of
physical processes at high-energies leaves us with an almost unique choice
for a gravitational Lagrangian. What we have done here was just to explore
the consequences of this theory.
We would like to emphasize that we did not construct this theory in order to
solve specific problems. We just started from simple principles and explored
their consequences.
In fact, it was unexpected for us to find that the field equations in vacuum,
even though they differ from the Einstein field equations, give the same
solution for a static spherically symmetric field, namely, the Schwarzschild
metric. More surprising was the fact that our field equations started to
differ from those of General Relativity exactly where the latter are in
disagreement with observation. It was also unexpected that our field
equations provided solutions to long-unsolved problems. In fact, entropy is
created in the universe, causality is not violated, etc.
Of course, from the few results we have presented here one cannot establish
the validity of this theory. It is our purpose to explore further
consequences of our field equations.
Since we began this work with a quotation, it seems convenient to close it in
the same way by including three more quotations.$^{13,14,15}$ Even though
they refer to other historical moments, they can be reread today with the
obvious changes. We think they speak for themselves, so that no further
comments are necessary.
\bigskip
{\it "The danger of asserting dogmatically that an axiom based on the
experience of a limited region holds universally will now be to some extent
apparent to the reader. It may lead us to entirely overlook, or when
suggested at once reject, a possible explanation of phenomena. The hypothesis
that space is not homaloidal, and again, that its geometrical character may
change with the time, may or may not be destined to play a great part in the
physics of the future; yet, we cannot refuse to consider them as possible
explanations of physical phenomena, because they may be opposed to the
popular dogmatic belief in the universality of certain geometrical axioms- a
belief which has arisen from centuries of indiscriminating worship of the
genius of Euclid."}
\medskip
\rightline{\it W.K. Clifford, 1885}
\bigskip
{\it "[Saccheri's] brilliant failure is one of the most remarkable instances
in the history of mathematical thought of the mental inertia induced by an
education in obedience and orthodoxy, confirmed in mature life by an
excessive reverence for the perishable works of the immortal dead [Euclid].
With two geometries, each as valid as Euclid's in his hand, Saccheri threw
both away because he was willfully determined to continue in the obstinate
worship of his idol, despite the insistent promptings of his own sane
reason."}
\medskip
\rightline{\it E.T. Bell, 1947}
\bigskip
{\it "People have often tried to figure out ways of getting these new
concepts. Some people work on the idea of the axiomatic formulation of the
present quantum mechanics. I don't think that will help at all. If you
imagine people having worked on the axiomatic formulation of the Bohr orbit
theory, they would never have been led to Heisenberg's quantum mechanics.
They would never have thought of non-commutative multiplication as one of
their axioms which could be challenged. In the same way, any future
development must involve changing something which people have never
challenged up to the present, and which will not be shown up by an
axiomatic formulation."}
\medskip
\rightline{\it P.A.M. Dirac, 1973}
\bigskip
\vfill\eject
\entry{Acknowledgements}
This work has been possible thanks to the hospitality of the Laboratory of
Theoretical Physics, Joint Institute for Nuclear Research, Dubna, the
Istituto di Fisica Matematica "J.-Louis Lagrange", Universit\`a di Torino,
and the International Centre for Theoretical Physics, Trieste. One of the
authors (A. M.) would like to express his gratitude to Prof. H. Boutaleb-J.
for having introduced him to the field of gravitation and cosmology. He would
also like to acknowledge the Arab grant available for the ICTP Associateship
scheme. The work has been much enriched, at different stages, by talks with
M. Ferraris, M. Francaviglia, P. Aichelburg and P. Minning.
\entry{Appendix A. Cosmography}
In this Appendix we collect the observational results concerning the structure
of the universe and its evolution. Further details can be found in refs. 16
and 17.
The observed isotropy and homogeneity of the universe single out, as the only
possible Riemannian geometry for the universe, a Friedmann-Robertson-Walker
(FRW) geometry. FRW spaces are characterised by the cosmic radius $a(t)$ and
by the constant $k=1,0,-1$, corresponding to a closed, spatially flat, and
open universe, respectively. The curvature properties of a FRW geometry can
be rewritten in terms of the Hubble constant, $H$, and the deceleration
parameter, $q$. These cosmological parameters can, in principle, be
determined from the observed distance versus velocity Hubble diagram.
At large scales cosmic matter can be described as a perfect fluid which is
characterised by the energy density, $\rho$, and the pressure, $p$, and they
are related by the state equation of matter, ${p\over\rho}=y$. For
$y={1\over3} $ one has a radiation dominated, or ultrarelativistic, perfect
fluid; for $y\approx 0$ one has instead a non-relativistic, or almost
pressureless, perfect fluid.
Associated to the FRW geometry, with the use of the Einstein gravitational
constant, there is a critical density parameter, $\rho_c =3\kappa_E H^2$,
which sets the scale of energy densities. One can then introduce the cosmic
density parameter $\Omega={\rho\over{\rho_c}}$.
The cosmological parameters $H$, $q$ and $\Omega$ are observable; however,
they are quite difficult to determine with accuracy. For the Hubble constant
$H$ there are two preferred values, close to 50 and 100 km/sec/Mpc. However,
the observed data do not allow one to determine the value of the deceleration
parameter $q$.$^{17}$ According to ref. 17, $\Omega_{obs}\approx 0.1-0.3$,
with an upper safety bound $\Omega_{safe}\approx0.18$. However, early reports
give smaller values such as $\Omega_{small}=0.01$.$^7$
The observed redshift of galaxies shows that the universe is expanding.
Therefore, it was very dense in the past and is highly diluted today.
Since matter cannot be compressed beyond the Planck density one must consider
the universe as evolving from an initial hot ball. From the conservation of
mass one can furthermore estimate the radius of the initial hot ball to be
$a_0 ={10}^{19}L_{Planck}$.
Further observations, such as the galaxy count-volume test, show that to a
very good approximation our universe is spatially flat. Therefore, the
parameter $k$ characterising the FRW geometries must be zero, $k_{obs}=0$,
i.e., its geometry is Euclidean.
The last important observation is the fact that the universe is quite
isotropic and homogeneous at large scales. This is an indication that matter
was in causal contact in the very remote past. This condition roughly
translates into $r_H>a$, where $r_H$ is the horizon radius. However, for
very small times, $t<t_{class}\approx{10}^{19}t_{Planck}$, quantum mechanical
effects dominate the scene. In fact, the radius of the universe is exactly
the Compton wavelength associated with its mass and the very concept of
causality is meaningless. Therefore, causality should be required only for
$t>t_{class}$.
Any proposed cosmological model must agree with the previously described
observational results. The next task is to develop a gravitational theory
fitting the above observations. The first candidate is General Relativity
(Appendix B).
\entry{A.1. Isotropy, the Cosmological Principle and FRW Spaces}
Observation shows that the universe is isotropic and homogeneous. These
properties give as the only possible Riemannian geometry a FRW metric. In
this case the line element is
$${ds}^2\,=\,{dt}^2\,-\,a^2(t)\,{d\ell}^2\,,\eqno{(A.1)}$$
\noindent where
$${d\ell}^2\,=\,{(1\,-\,k\,r^2)}^{-1}\,{dr}^2\,+\,r^2\,{d\Omega}^2\,,
\eqno{(A.2a)}$$
$${d\Omega}^2\,=\,{d\theta}^2\,+\,{\sin}^2\theta\,{d\varphi}^2\,.
\eqno{(A.2b)}$$
\noindent In the above $a(t)$ is the cosmic scale factor and is interpreted as
the radius of the universe; ${d\ell}^2$ is the line element of a maximally
symmetric three-dimensional space-like section. The radial coordinate $r$ is
written in units such that the constant $k$ takes the values $1$, $0$ or
$-1$. The parameter $k$ characterises the geometry of the space-like sections
of the universe. For $k=1$ the universe is closed; for $k=0$ it is flat; for
$k=-1$ it is open.
The Ricci tensor is given by
$$R_{\mu\nu}\,=\,-\,{2\over{a^2}}\,[a\,{\ddot a}\,-\,(k\,+\,{\dot a}^2)]\,
\delta^0_\mu\,\delta^0_\nu\,-\,{1\over{a^2}}\,[a\,{\ddot a}\,+\,2\,(k\,+\,{
\dot a}^2)]\,g_{\mu\nu}\,.\eqno{(A.3)}$$
\noindent Hence, the scalar curvature is
$$R\,=\,-\,{6\over{a^2}}\,[a\,{\ddot a}\,+\,(k\,+\,{\dot a}^2)]\,.
\eqno{(A.4)}$$
\noindent These quantities can be parametrised in terms of the cosmological
parameters
$$H\,=\,{{\dot a}\over a}\,,\eqno{(A.5a)}$$
\noindent which is the Hubble "constant", and it is a true constant only for a de
Sitter space; and
$$q\,=\,-\,{{a{\ddot a}}\over{{\dot a}^2}}\,=\,-\,1\,-\,{{\dot H}\over{H^2}}\,,
\eqno{(A.5b)}$$
\noindent which is the deceleration parameter.
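The equality of the two expressions in eq. (A.5b) can be spot-checked for a power-law scale factor $a(t)=t^n$ (an illustrative family of our choosing):

```python
# Consistency of the two expressions in (A.5b):
# q = -a*addot/adot^2  and  q = -1 - Hdot/H^2, for a(t) = t^n.
for n in (0.25, 0.5, 2.0 / 3.0, 1.0):
    t = 3.7                              # arbitrary time > 0
    a = t ** n
    adot = n * t ** (n - 1.0)
    addot = n * (n - 1.0) * t ** (n - 2.0)
    H = adot / a
    Hdot = addot / a - H ** 2            # since H = adot/a
    q1 = -a * addot / adot ** 2
    q2 = -1.0 - Hdot / H ** 2
    assert abs(q1 - q2) < 1e-12
```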
In a FRW universe the luminosity distance $d_L$ and the redshift $z$ of a
galaxy are related by
$$d_L\,\approx\,{1\over H}\,(z\,+\,{1\over2}\,(1\,-\,q)\,z^2)\,.
\eqno{(A.6)}$$
\noindent The distance to a galaxy can be determined by different means. The
redshift is determined by simple spectral techniques. This constitutes the
distance versus redshift Hubble diagram. If $z>0$ one talks of redshifts and
galaxies are receding, while if $z<0$ one talks of blueshifts and galaxies
are approaching. Therefore $H$ can be determined from the slope of the Hubble
diagram while $q$ is related to its convexity.
\entry{A.2. The Matter Content of the Universe. The Perfect Fluid}
In order to be compatible with the observed homogeneity and isotropy of the
universe, cosmic matter must be described as a perfect fluid.
A perfect fluid is characterised by the energy-momentum tensor
$$T_{\mu\nu}\,=\,(\rho\,+\,p)\,u_\mu\,u_\nu\,-\,p\,g_{\mu\nu}\,,
\eqno{(A.7)}$$
\noindent where $\rho$ and $p$ are the energy density and pressure of cosmic
matter, and
$$u_\mu\,=\,{(g^{00})}^{-1\slash2}\,\delta^0_\mu\,,\eqno{(A.8a)}$$
\noindent such that
$$g^{\mu\nu}\,u_\mu\,u_\nu\,=\,1\,.\eqno{(A.8b)}$$
\noindent The reduced energy-momentum tensor is
$$S_{\mu\nu}\,=\,T_{\mu\nu}\,-\,{1\over2}\,T\,g_{\mu\nu}\,=\,(\rho\,+\,p)\,
u_\mu\,u_\nu\,-\,{1\over2}\,(\rho\,-\,p)\,g_{\mu\nu}\,.\eqno{(A.9)}$$
In order to relate the energy density $\rho$ and the pressure $p$ one needs a
state equation. Two well understood regimes are the radiation dominated
regime, in which $y={p\over\rho}={1\over3}$, and the matter dominated regime,
in which $y$ approaches zero for incoherent matter.
The coupling of gravity to matter needs the Einstein gravitational constant
$\kappa_E$. Combining this constant with the functions characterising the FRW
geometry we obtain the critical density
$$\rho_c\,=\,3\,\kappa_E\,H^2\,=\,1.88\,\times\,{10}^{-29}\,h^2\,g\slash cm^3
\,,\eqno{(A.10)}$$
\noindent where
$$h\,=\,{H\over{100\,km\slash sec\slash Mpc}}\,.\eqno{(A.11)}$$
\noindent This leads to the introduction of the cosmic energy density parameter
$$\Omega\,=\,{\rho\over{\rho_c}}\,,\eqno{(A.12)}$$
\noindent in such a way that $\rho_c$ sets the scale of energy densities.
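In units where $c=1$, $\rho_c=3\kappa_E H^2$ is the familiar $3H^2\slash 8\pi G_N$; evaluating it in cgs units with standard values of $G_N$ and the megaparsec (inserted here, not quoted in the text) reproduces the figure in eq. (A.10) for $h=1$:

```python
import math

G = 6.674e-8      # Newton constant, cm^3 g^-1 s^-2 (standard value)
Mpc = 3.0857e24   # cm
H = 1.0e7 / Mpc   # 100 km/s/Mpc expressed in s^-1, i.e. h = 1

rho_c = 3.0 * H ** 2 / (8.0 * math.pi * G)   # g/cm^3
assert 1.8e-29 < rho_c < 2.0e-29             # ~ 1.88e-29 g/cm^3
```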
\entry{A.3. Observed Values of the Cosmological Parameters}
Due to several practical difficulties the observed values of the cosmological
parameters are quite inaccurate. In some cases this inaccuracy does not even
allow a reliable value for some of the parameters. There exists a wide range
of reported values for the cosmological parameters depending on both the
nature of the performed observation and the interpretation of the observed
data. We use the values reported in ref. 17.
\entry{A.3.1. The Hubble Diagram}
In principle, the Hubble diagram should provide, at the same time, the Hubble
constant, $H$, and the deceleration parameter, $q$. $H$ is related to the
slope of the diagram while $q$ is related to its convexity. However, the
Hubble diagram shows a large dispersion for large values of $H$, so that no
reliable value for $q$ exists today. The reported values are
$$h_{obs}\,=\,0.5\,-\,1.0\,,\eqno{(A.13)}$$
\noindent with preferred values closer to 0.5 and to 1.0. The determination of
$q$ from deviations from the linear Hubble law is almost impossible with the
present-day accuracy of the existing observations.
Since $H$ is positive one can conclude that the universe is expanding. Let
us observe that eq. (A.5a) can be rewritten as
$$a\,=\,{{\dot a}\over H}\,\approx\,{\dot a}\,T\,.\eqno{(A.14)}$$
\noindent This equation tells us that the radius of the universe is approximately
its velocity of expansion times the period of expansion. Therefore, $H^{-1}$
can be interpreted as the age of the universe.
\entry{A.3.2. The Energy Density}
The energy density is determined from the cosmic virial theorem and the
infall to the Virgo cluster. According to ref. 17 the observed value for the
energy density ratio is in the range
$$\Omega_{obs}\,\approx\,0.1\,-\,0.3\,.\eqno{(A.15)}$$
\noindent There is furthermore a safety upper bound
$$\Omega_{safe}\,\approx\,0.18\,.\eqno{(A.16)}$$
\noindent However, early reports$^7$ give smaller values
$$\Omega_{small}\,=\,0.01\,.\eqno{(A.17)}$$
\entry{A.3.3. The Pressure of the Universe}
To our knowledge there is no direct determination of the pressure of the
universe. However, an upper bound can be put on it based on the mean random
velocity, $\langle v\rangle$, of typical galaxies
$${p\over\rho}\,\approx\,{{\langle v\rangle}^2\over{c^2}}\,.\eqno{(A.18)}$$
\noindent The proper velocity can be determined from direct measurements and is
given approximately by $\langle v\rangle\approx 1\times{10}^3$ km/s.
Therefore we obtain
$$y_{random}\,\approx\,1\,\times\,{10}^{-5}\,.\eqno{(A.19)}$$
\noindent The previous figure puts an upper bound on the $p\over\rho$ ratio.
It must be
$${p\over\rho}\,<\,1\,\times\,{10}^{-5}\,.\eqno{(A.20)}$$
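Eq. (A.18) with the quoted mean random velocity reproduces the figure in eq. (A.19); the cgs values below are ours:

```python
v = 1.0e8        # <v> = 10^3 km/s, in cm/s
c = 2.998e10     # speed of light, cm/s

y_random = (v / c) ** 2             # eq. (A.18): p/rho ~ <v>^2/c^2
assert 0.9e-5 < y_random < 1.3e-5   # ~ 1e-5, eq. (A.19)
```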
\entry{A.3.4. The Radius of the Initial Universe}
The expansion of the universe shows that it has evolved from a very dense
regime in the past and is highly diluted today. Since matter cannot be
compressed beyond the Planck density it is more reasonable to consider the
universe as evolving from an initial ball.
The radius of the initial universe can be estimated as follows. Let us assume
that the mass of the universe is a conserved quantity. At $t=0$ we assume
that mass was compressed at Planck density. This allows determining $a_0$ to
be
$$a_0\,=\,{[{{M_{Univ}}\over{M_{Planck}}}]}^{1\slash3}\,L_{Planck}\,.
\eqno{(A.21)}$$
\noindent The mass of the universe is given by
$$M_{Univ}\,\approx\,{{4\pi}\over3}\,\rho\,{R_{Univ}}^3\,,\eqno{(A.22)}$$
\noindent where $R_{Univ}$ is the radius of the universe. This is bounded by the
maximum velocity by which the universe can expand, the velocity of light $c$,
and the time of expansion $H^{-1}$. Therefore
$$R_{Univ}\,\approx\,{c\over H}\,.\eqno{(A.23)}$$
\noindent Finally
$$M_{Univ}\,\approx\,{{4\pi}\over3}\,{{\rho c^3}\over H^3}\,\approx\,{10}^
{57}\,M_{Planck}\,.\eqno{(A.24)}$$
\noindent Therefore
$$a_0\,\approx\,{10}^{19}\,L_{Planck}\,.\eqno{(A.25)}$$
\noindent Our estimate is uncertain by a few orders of magnitude, which is
not relevant to our analysis.
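The chain (A.21)-(A.25) can be redone numerically. With the standard cgs constants inserted below (our values), the result lands within the few-orders-of-magnitude window the text allows:

```python
import math

rho = 1.9e-29          # ~ critical density for h = 1, g/cm^3
c = 3.0e10             # cm/s
Mpc = 3.0857e24        # cm
H = 1.0e7 / Mpc        # 100 km/s/Mpc in s^-1
M_Planck = 2.18e-5     # g

M_univ = (4.0 * math.pi / 3.0) * rho * (c / H) ** 3   # eqs. (A.22)-(A.24)
ratio = M_univ / M_Planck
a0 = ratio ** (1.0 / 3.0)                             # eq. (A.21), in L_Planck

# Within the few-orders-of-magnitude uncertainty quoted in the text:
assert 1e56 < ratio < 1e62
assert 1e18 < a0 < 1e21
```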
\entry{A.3.5. Entropy, Flatness and Causality}
{}From the microwave background radiation one observes that the present value
of the total entropy in the universe is so large as to be of order ${10}^{87
}$, in some convenient units.$^{18,19,20}$ On the other hand one would expect
entropy to be governed by the second thermodynamical principle, the statement
that the entropy is always a non-decreasing function of time. One is
therefore faced with the problem of determining whether the entropy of the
universe has always been as large as it is today, $dS=0$, or if it has
evolved from a smaller value. From an intuitive point of view it is quite
improbable that $dS=0$.
Further observations show that our present day universe is almost flat, i.e.,
its geometry is almost Euclidean; this means that
$$k_{obs}\,=\,0\,.\eqno{(A.26)}$$
Another observation concerns the observed isotropy of the universe over large
regions of space; this means that all regions were causally connected in the
past. For this to be the case one should have
$$r_H\,>\,a\,,\eqno{(A.27)}$$
\noindent where $r_H$ is the horizon radius which sets the size of the region in
which causal contact can be achieved.
For a FRW space the horizon radius is
$$r_H(t)\,=\,a(t)\,\int^t_0\,{{du}\over{a(u)}}\,,\eqno{(A.28)}$$
\noindent which is the maximum distance that light signals can travel during the
age $t$ of the universe.
Let us introduce a time scale
$$t_{class}\,=\,{a_0\over c}\,\approx\,{10}^{19}\,t_{Planck}\,\approx\,
{10}^{-24}\,s\,.\eqno{(A.29)}$$
\noindent For times smaller than $t_{class}$ quantum mechanical effects
dominate the scene. In fact, the radius of the universe is exactly the
Compton wavelength associated with its mass, so that the very concept of
causality is meaningless. Causality should therefore be required only for
times greater than $t_{class}$.
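For the linear scale factor of section 5.6, the integral (A.28) has the closed form used in eq. (5.22); a numerical quadrature confirms this (the constants are illustrative values of ours):

```python
import math

a0, adot0 = 2.0, 0.7   # illustrative constants

def a(t):
    # linear scale factor: a(t) = a0 + adot0 * t
    return a0 + adot0 * t

def r_H_numeric(t, n=20000):
    # trapezoidal rule for r_H(t) = a(t) * integral_0^t du/a(u), eq. (A.28)
    h = t / n
    s = 0.5 * (1.0 / a(0.0) + 1.0 / a(t))
    s += sum(1.0 / a(i * h) for i in range(1, n))
    return a(t) * s * h

def r_H_closed(t):
    # eq. (5.22): (a0/adot0)(1 + x) ln(1 + x), with x = (adot0/a0) t
    x = (adot0 / a0) * t
    return (a0 / adot0) * (1.0 + x) * math.log(1.0 + x)

for t in (0.5, 3.0, 10.0):
    assert abs(r_H_numeric(t) - r_H_closed(t)) < 1e-6 * r_H_closed(t)
```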
\entry{Appendix B. General Relativity}
In General Relativity space-time is conceived as a Riemannian manifold and
the metric $g_{\mu\nu}$ is identified with the gravitational field.
In order to describe the dynamics of the gravitational field we need to
construct an invariant which might be used as Lagrangian. In Riemannian
geometry the simplest invariant which can be constructed is
$$R(g,\,\Gamma)\,=\,g^{\mu\nu}\,R_{\mu\nu}(\Gamma)\,,\eqno{(B.1)}$$
\noindent which in the case of a metric space is rewritten as
$$R(g)\,=\,g^{\mu\nu}\,R_{\mu\nu}(g)\,.\eqno{(B.2)}$$
The analytical formulation of General Relativity takes as its starting point
the \break Einstein-Hilbert Lagrangian
$${\cal L}_{EH}(g)\,=\,\kappa_E\,R(g)\,g^{1\slash2}\,,\eqno{(B.3)}$$
\noindent where $\kappa_E={c^4\over{8\pi G_N}}$ is the Einstein gravitational
constant; $G_N$ being the Newton constant. The full Lagrangian must consider
also the contributions of matter
$${\cal L}_1\,=\,{\cal L}_{EH}\,+\,{\cal L}_{matter}\,.\eqno{(B.4)}$$
\noindent Variation of the Lagrangian with respect to the metric
$${{\delta{\cal L}_1}\over{\delta g^{\mu\nu}}}\,=\,0\,,\eqno{(B.5)}$$
\noindent gives the Einstein field equations
$$\kappa_E\,[R_{\mu\nu}(g)\,-\,{1\over2}\,R(g)\,g_{\mu\nu}]\,=\,T_{2,\mu\nu}
\,,\eqno{(B.6)}$$
\noindent where
$$T_{2,\mu\nu}\,=\,{1\over g^{1\slash2}}\,{{\delta{\cal L}_{matter}}\over{
\delta g^{\mu\nu}}}\,,\eqno{(B.7)}$$
\noindent is the energy-momentum tensor of matter; the 2 stands for the fact that
the energy-momentum tensor is related to Riemannian, second-rank, geometry.
As a starting point for General Relativity one can also consider the
"Palatini" Lagrangian
$${\cal L}_P(g,\,\Gamma)\,=\,\kappa_E\,g^{\mu\nu}\,R_{\mu\nu}(\Gamma)\,g^{1
\slash2}\,.\eqno{(B.8)}$$
\noindent In this case one must also consider the contributions of matter
$${\cal L}_2\,=\,{\cal L}_P\,+\,{\cal L}_{matter}\,.\eqno{(B.9)}$$
\noindent Now the connection and the metric are varied independently in a procedure
known as the Palatini variational principle. Variation of the Lagrangian with
respect to $\Gamma$ gives
$${{\delta{\cal L}_2}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\, =\,{{\delta{
\cal L}_P}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,+\,{{\delta{\cal L}_{matte
r}}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,=\,0\,.\eqno{(B.10)}$$
\noindent In all known cases of physical interest one has$^5$
$${{\delta{\cal L}_{matter}}\over{\delta{\Gamma^\lambda}_{\mu\nu}}}\,=0\,.
\eqno{(B.11)}$$
\noindent In this case eq. (B.10) reduces to a metricity condition equivalent to
(2.15). Therefore the connection is given by the Christoffel symbol of the
second kind for the metric $g_{\mu\nu}$. Variation with respect to the metric
$${{\delta{\cal L}_2}\over{\delta g^{\mu\nu}}}\,=\,0\,,\eqno{(B.12)}$$
\noindent gives
$$\kappa_E\,[R_{\mu\nu}(\Gamma)\,-\,{1\over2}\,g^{\lambda\rho}\,
R_{\lambda\rho}(\Gamma)\,g_{\mu\nu}]\,=\,T_{2,\mu\nu}\,.\eqno{(B.13)}$$
\noindent If we now use the previously obtained metricity condition these equations
reduce to the original Einstein field equations (B.6). Therefore, the
procedures of imposing the metricity condition and of applying the
variational principle commute.
Einstein field equations can be rewritten in the Landau form
$$\kappa_E\,R_{\mu\nu}\,=\,S_{2,\mu\nu}\,,\eqno{(B.14)}$$
\noindent where
$$S_{2,\mu\nu}\,=\,T_{2,\mu\nu}\,-\,{1\over2}\,T_2\,g_{\mu\nu}\,,
\eqno{(B.15)}$$
\noindent with
$$T_2\,=\,g^{\mu\nu}\,T_{2,\mu\nu}\,,\eqno{(B.16)}$$
\noindent is the reduced energy-momentum tensor.
Einstein field equations have been applied to many physical situations. The
first classical test of any theory of gravitation is in the solar system. In
this case one needs to solve Einstein field equations in vacuum for a
spherically symmetric field. The solution is the exterior Schwarzschild
metric. Using this metric one can account for the anomalous shift of the
perihelion of inner planets and for the bending of light rays near the solar
surface to an accuracy of 1 per cent or better. In this case one is
describing the effects of the gravitational field alone.
The next test concerns the coupling of gravity to matter. This is achieved,
for instance, when considering the large scale structure of the universe,
where gravity becomes coupled to a perfect fluid. As shown in the next
Appendix, the agreement with observation is qualitatively good. One obtains
qualitatively good predictions, such as the evolution of the universe from an
initial singularity, and some good quantitative predictions, such as the
temperature of the microwave background and the relative abundance of the
elements. However, the quantitative agreement is weaker in other respects. In
fact, flatness, $k_{obs}=0$, implies $\Omega_{GR}=1$, which is not what is
observed. Furthermore,
the Standard Model of Cosmology predicts a constant entropy, something which
is difficult to accept on physical grounds. These are some of the reasons to
look for an improved theory for the gravitational field.
Therefore one must consider the possibility that General Relativity is an
incomplete theory. However, it is in good agreement with observation in the
vacuum case: the Schwarzschild solution. Therefore General Relativity
describes well the dynamics of the gravitational field alone, but it fails
when coupled to matter.
Some hints, on how this problem can be approached, come from high-energy
physics. When one tries to quantise General Relativity one discovers that
there are irremovable ultraviolet divergences. This is taken as indicative
that at small distances the geometry of space-time may be different from the
Riemannian one. The current view is that General Relativity, with its
Riemannian structure, is only the low-energy, large distance, manifestation
of a more general theory at small distances. One must therefore construct a
field theory for a more general geometry. The field theory one constructs for
this new geometry must produce, in the absence of matter, a Riemannian
geometry. Furthermore, gravitation, in the form of a theory equivalent to
General Relativity, must be recovered. Many possibilities have been explored
mainly in the direction of modifying the affine structure of space-time. To
our knowledge, modifications of the metric structure of space-time have
not yet been attempted. As stated in the Introduction, the purpose of this
work is to explore this possibility.
\entry{Appendix C. The Standard Model of Cosmology}
The Standard Model of Cosmology is based on the application of the Einstein
field equations to the universe. They provide the coupling of gravity, or
geometry, given by a FRW metric, to cosmic matter, described by a perfect
fluid.
The Einstein-Friedman equations are equivalent to
$$\rho\,=\,3\,\kappa_E\,{1\over{a^2}}\,(k\,+\,{\dot a}^2)\,>\,0\,,
\eqno{(C.1a)}$$
$$p\,=\,-\,\kappa_E\,{1\over{a^2}}\,[2\,a\,{\ddot a}\,+\,(k\,+\,{\dot a}^2)]
\,.\eqno{(C.1b)}$$
In terms of the cosmological parameters eqs. (C.1) can be rewritten as
$$q\,=\,{1\over2}\,(1\,+\,3\,y)\,\Omega\,,\eqno{(C.2a)}$$
$$\Omega\,=\,1\,+\,{k\over{{\dot a}^2}}\,.\eqno{(C.2b)}$$
The first equation shows that $q>0$, and from this one is led to conceive of
the universe as expanding from an initial singularity. This is in agreement
with observation.
However, the Standard Model of Cosmology is in disagreement with some
observations as we will show in detail now.
\entry{C.1. The Entropy Problem}
One question the Standard Model of Cosmology is unable to answer concerns the
problem of the large total entropy in the universe.$^{18,19,20}$ One of the
predictions of the Standard Model of Cosmology is that the expansion of the
universe is an adiabatic isoentropic process. In fact, from the field
equations (C.1) one can easily deduce that
$$T\,dS\,=\,dE\,+\,p\,dV\,=\,d(\rho\,a^3)\,+\,p\,d(a^3)\,=\,0\,.\eqno{(C.3)}$$
\noindent There is no entropy production: the entropy of the universe has
always been as large as it is today, something which is hard to accept on
physical grounds.
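Eq.~(C.3) is an identity in the scale factor once $\rho$ and $p$ are taken from eqs.~(C.1). The sketch below checks this numerically; the scale factor $a(t)=t^{2/3}$ and the constants $\kappa_E=1$, $k=0$ are arbitrary choices made only for this illustration.

```python
# Consistency check of eq. (C.3): with rho and p given by the Friedmann
# equations (C.1), the rate d(rho a^3)/dt + p d(a^3)/dt vanishes.
# The scale factor a(t) = t^(2/3) and the constants kappa_E = 1, k = 0
# are arbitrary choices made only for this illustration.

def scale_factor(t):
    """Return a(t) = t^(2/3) together with its first two derivatives."""
    a = t ** (2.0 / 3.0)
    adot = (2.0 / 3.0) * t ** (-1.0 / 3.0)
    addot = -(2.0 / 9.0) * t ** (-4.0 / 3.0)
    return a, adot, addot

def tds_rate(t, kappa=1.0, k=0.0):
    """T dS/dt = d(rho a^3)/dt + p d(a^3)/dt for the chosen a(t)."""
    a, adot, addot = scale_factor(t)
    p = -kappa * (2.0 * a * addot + (k + adot ** 2)) / a ** 2  # eq. (C.1b)
    # From eq. (C.1a): rho a^3 = 3 kappa a (k + adot^2); differentiate analytically.
    d_rho_a3 = 3.0 * kappa * (adot * (k + adot ** 2) + 2.0 * a * adot * addot)
    d_a3 = 3.0 * a ** 2 * adot
    return d_rho_a3 + p * d_a3

for t in (0.5, 1.0, 2.0, 10.0):
    assert abs(tds_rate(t)) < 1e-12
```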
\entry{C.2. The Flatness Problem}
The observed flatness of the universe implies $k_{obs}=0$. If we put this
value in eq. (C.2b) we obtain $\Omega_{pred}=1$. However, this is
incompatible with the reported values for $\Omega_{obs}$, (A.14), (A.15) and
(A.16). There exist two possibilities in the face of this impasse. The first
one consists in assuming $k_{obs}=0$, $\Omega_{pred}=1$. This is preferred by
some authors for ``aesthetic or philosophical reasons''.$^{17}$ This takes us
to the ``missing mass'' problem. The second possibility is more difficult to
implement. If we accept $\Omega_{obs}<1$, then one should have $k_{pred}=-1$,
as deduced from (C.2b), which corresponds to an open universe. This
possibility is more or less excluded by the cosmological data$^{17}$
indicating that for the large scale structure of the universe $k$ is rather
close to zero. This ambiguous situation is known as the flatness problem.
The above inconsistencies can be removed if the true value of the energy
density parameter is greater than the observed $\Omega_{obs}$. The difference
would have to be supplied by dark matter, which is, however, hardly observed.
\entry{C.3. The Early Universe}
At early times matter is described by the equation of state $y={1\over3}$. In
this case the Einstein field equations reduce to
$$a\,{\ddot a}\,+\,(k\,+\,{\dot a}^2)\,=\,0\,.\eqno{(C.4)}$$
\noindent The solution is
$$a\,=\,{(\alpha\,t\,+\,{a_0}^2)}^{1\slash2}\,,\eqno{(C.5a)}$$
$$a\,=\,{(c^2\,t^2\,+\,{a_0}^2)}^{1\slash2}\,,\eqno{(C.5b)}$$
\noindent for $k=0,-1$, respectively. The horizon radii are given by
$$r_H\,=\,2\,{c\over\alpha}\,{(\alpha\,t\,+\,{a_0}^2)}^{1\slash2}\,[{(\alpha
\,t\,+\,{a_0}^2)}^{1\slash2}\,-\,a_0]\,,\eqno{(C.6a)}$$
$$r_H\,=\,a_0\,{(1\,+\,x^2)}^{1\slash2}\,\ln[x\,+\,{(1\,+\,x^2)}^{1\slash2}]\,
,\eqno{(C.6b)}$$
\noindent where $x={{ct}\over{a_0}}$. In both cases causality is not violated for
$t>t_{class}$.
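The solutions (C.5) can be verified directly. The sketch below checks numerically that (C.5a) and (C.5b) satisfy eq.~(C.4); the values $\alpha=2$, $a_{0}=1$ and units with $c=1$ are illustrative assumptions, not taken from the text.

```python
# Check that the early-universe solutions (C.5) satisfy the field
# equation (C.4): a*addot + (k + adot^2) = 0.  Units with c = 1 and the
# sample values alpha = 2.0, a0 = 1.0 are arbitrary choices for this test.

import math

def residual_flat(t, alpha=2.0, a0=1.0):
    """Residual of (C.4) with k = 0 for a = (alpha*t + a0^2)^(1/2), eq. (C.5a)."""
    a = math.sqrt(alpha * t + a0 ** 2)
    adot = alpha / (2.0 * a)
    addot = -alpha ** 2 / (4.0 * a ** 3)
    return a * addot + (0.0 + adot ** 2)

def residual_open(t, a0=1.0):
    """Residual of (C.4) with k = -1 for a = (t^2 + a0^2)^(1/2), eq. (C.5b), c = 1."""
    a = math.sqrt(t ** 2 + a0 ** 2)
    adot = t / a
    addot = 1.0 / a - t ** 2 / a ** 3
    return a * addot + (-1.0 + adot ** 2)

for t in (0.0, 0.5, 1.0, 5.0):
    assert abs(residual_flat(t)) < 1e-12
    assert abs(residual_open(t)) < 1e-12
```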
In the Standard Model of Cosmology one usually assumes that $a_0=0$. In this
case causality would be violated. This is called the horizon problem. Our
result differs from the standard one since we have considered a universe
evolving from an initial hot ball with a finite radius rather than from an
initial singularity.
\entry{C.4. Comments}
Hence, it is clear that the observed cosmological data do not fit into the
field equations of the Standard Model of Cosmology, and that even under
strong assumptions on the observed values of the cosmological parameters the
situation cannot be much improved.
We must therefore conclude that the Standard Model of Cosmology is in
disagreement with some cosmological observations. In order to solve the above
problems one can consider inflation.$^{18,19,20}$ Inflation is intended to
solve the entropy, the flatness and the horizon problems. However,
inflationary cosmology will be sound only if later observations show
that $\Omega_{obs}=1$.
Therefore, our conclusion is that General Relativity is an incomplete theory.
In fact, it describes well, to a very high accuracy, the effects of the
gravitational field alone: the shift of the perihelion of inner planets, the
bending of light in strong gravitational fields, etc., however it fails to
describe the coupling of the gravitational field to matter, for example, the
Standard Model of Cosmology.
We must look therefore for an improved theory for the gravitational field
coinciding with General Relativity in the vacuum case and with a different
way of coupling the gravitational field to matter. The theory of fourth-rank
gravity we have developed satisfies these requirements.
\vfill\eject
\entry{References}
\item{ a.} We thank J. Russo for clarifying this point for us.
\item{ b.} The fact of calling $G_{\mu\nu\lambda\rho}$ a metric is a purely
linguistic issue completely unrelated to the mathematical properties of this
object.
\item{ 1.} B. Riemann, {\it \"Uber die Hypothesen, welche der Geometrie zu
Grunde liegen} (1854). This thesis was presented on June 10th, 1854, in
G\"ottingen and it was first published in {\it Abh. K\"onigl. Gesellsch.
Wiss. G\"ottingen} {\bf 13}, 1 (1868). It was translated into English by
W. K. Clifford and published in {\it Nature} {\bf 8}, 14 (1873).
\item{ 2.} H. von Helmholtz, {\it \"Uber die Tatsachen, die der Geometrie zu
Grunde liegen}, {\it Nachr. Ges. Wiss. G\"ottingen} (1868). Reprinted in {\it
Wissenschaftliche Abhandlungen Leipzig} {\bf 2}, 618 (1883).
\item{ 3.} K. F. Gauss, {\it Suppl\'ement \`a la Th\'eorie de la Combinaison
des Observations}, G\"ottingen (1826).
\item{ 4.} V. Tapia, accompanying paper (1992).
\item{ 5.} C. W. Misner, K. S. Thorne and J. A. Wheeler, {\it Gravitation}
(Freeman, San Francisco, 1973), p. 504.
\item{ 6.} F. Kottler, {\it \"Uber die physikalischen Grundlagen der
Einsteinschen Gravitationstheorie}, {\it Annalen Physik} {\bf 56}, 410
(1918).
\item{ 7.} S. L. Shapiro, {\it Astron. J.} {\bf 76}, 291 (1971).
\item{ 8.} M. Ferraris, private communication (1991).
\item{ 9.} J. L. Synge, {\it Relativity: The General Theory} (Interscience
Publishers, New York, 1960).
\item{10.} V. Tapia, preprint IC/92/65, Trieste (1992).
\item{11.} A. L. Marrakchi and V. Tapia, preprint IC/92/86, Trieste (1992).
\item{12.} A. L. Marrakchi and V. Tapia, preprint IC/92/124, Trieste (1992).
\item{13.} W. K. Clifford, {\it The Common Sense of the Exact Sciences}, ed.
by K. Pearson (1885). Reprinted (Dover, New York, 1955).
\item{14.} P. A. M. Dirac, {\it Development of the Physicist's Conception of
Nature}, in {\it The Physicist's Conception of Nature}, ed. by J. Mehra
(Reidel, Dordrecht, 1973).
\item{15.} E. T. Bell, {\it Developments of Mathematics}, (McGraw-Hill, New
York, 1947).
\item{16.} S. Weinberg, {\it Gravitation and Cosmology} (Wiley, New York,
1972).
\item{17.} G. B\"orner, {\it The Early Universe Facts and Fiction} (Springer,
Berlin, 1988).
\item{18.} A. Linde, {\it Rep. Prog. Phys.} {\bf 47}, 925 (1984).
\item{19.} R. H. Brandenberger, {\it Rev. Mod. Phys.} {\bf 57}, 1 (1985).
\item{20.} A. Linde, {\it Particle Physics and Cosmology} (Harwood Academic
Publishers, Chur, 1990).
\bye
\section{Introduction}
Exploring the geometry of data has been shown to provide significant insight into high-dimensional complex datasets. Applications include dimensionality reduction \cite{mcinnes2020umap}, computer vision \cite{10.5555/1145132}, chemistry \cite{Martin2010} and medicine \cite{AhmadiMoughari2020ADRMLAD}. Geometric methods in machine learning rely predominantly on \textit{the manifold hypothesis} \cite{Fefferman2016} which asserts that sample data $\Omega\subset \mathbb{R}^{n}$ in fact live in a smooth submanifold $\Omega \subset M\subset \mathbb{R}^{n}$ whose dimension is often much smaller than $n$. Methods assuming this hypothesis are therefore often called \textit{manifold learning} algorithms. Examples include PCA and nonlinear PCA \cite{Leeuw2013HistoryON}, Isomap \cite{Tenenbaum2000AGG}, and UMAP \cite{mcinnes2020umap}. However, the manifold hypothesis does not always hold, especially when the space underlying the data contains singularities. In fact, singularities are ubiquitous in mathematics and appear often in physical models \cite{doi:10.1021/jp962746i,reiman1996singular}, hence there is a need to go beyond the manifold hypothesis.
To transition from the smooth to the singular setting we replace the language of differential geometry and smooth manifolds with algebraic geometry and algebraic varieties. At a basic level, algebraic geometry studies the geometry of the zeros of systems of polynomials. Those zero sets are called \textit{algebraic varieties}. At the heart of algebraic geometry lies the duality between geometry and algebra which allows us to jump back and forth between geometric spaces and computable algebraic procedures. Furthermore, algebraic geometry offers a natural setting for studying and working with singularities. As we shall see, this \textit{variety hypothesis} provides a great amount of flexibility and it lends itself well to computations.
To examine data in the singular setting we introduce the \textit{algebraic machine learning pipeline} depicted in Figure \ref{fig:pipeline}. This pipeline combines ideas from algebraic geometry and machine learning and it is the main subject of this paper.
\bigskip
\begin{figure}[H]
\centering
\begin{tikzcd}
& & \fbox{\Centerstack{Algebraic\\Computations}} \arrow[rd] & & \\
\fbox{\Centerstack{Data}} \arrow[r] & \fbox{\Centerstack{Learning the\\Underlying Variety}} \arrow[ru] \arrow[rd] & & \fbox{\Centerstack{Geometric\\Insights}} \\
& & \fbox{\Centerstack{Numerical\\Computations}} \arrow[ru] & &
\end{tikzcd}
\caption{The algebraic machine learning pipeline}\label{fig:pipeline}
\end{figure}
\bigskip
\noindent
In Section \ref{sec:real-algebraic-geometry} we give a brief overview of real algebraic geometry, building the language we will be using throughout the paper. In Section \ref{sec:learning-variety}, we discuss the first stage of learning the underlying variety. We introduce the following learning problem:
\begin{itemize}
\item[(LP)]
\textit{Given a dataset $\Omega=(a_{1},\ldots ,a_{m})$ of points sampled from
a variety $V \subset \mathbb{R}^n$, find that variety.}
\end{itemize}
Building upon the work of \cite{Breiding2018} on learning algebraic varieties, we interpret this learning problem
as a Maximum A Posteriori (MAP) problem which can be solved in terms of an eigenvalue computation.
In Section \ref{sec:algebraic-computations} we introduce the algebraic computations stage.
Using the learned variety $V$ from the previous stage, we explore the use of Gr\"obner basis computations to obtain information about $V$. This includes invariants like the dimension and the
irreducible decomposition.
In Section \ref{sec:singular-heuristics} we consider the numerical computations stage which involves
working directly with the points in $\Omega$ and additional samples taken from $V\cap[0,1]^{n}$. In particular, we introduce a method called the \textit{singularity heuristic}
for detecting points in $[0,1]^{n}$ lying near the singular locus of $V$.
In Section \ref{sec:results} we test the algebraic machine learning pipeline on synthetic data and on chemical data sampled from the conformation space of cyclooctane \cite{Martin2010}. Finally, in Section \ref{sec:related-work}, we discuss other work related to the algebraic machine learning pipeline.
\textbf{Acknowledgement:}
The authors gratefully acknowledge support by the NSF under award OAC 1934725 for the project
\emph{DELTA: Descriptors of Energy Landscape by Topological Analysis}. M.J.P.~thanks David Jonas for
explanations on the conformations of cyclooctane and Martin Schottenloher for constructive feedback.
The authors also thank Henry Adams, Aurora Clark, Howie Jordan and Y.Z.\ for helpful discussions.
\section{Background in Real Algebraic Geometry}
\label{sec:real-algebraic-geometry}
For the convenience of the reader, we collect in this section several fundamental concepts from real and complex
algebraic geometry which are used throughout this paper; see \cite{BocCosRoyRAG} and \cite{HarAG} for further details.
In this paper, we will be working mostly over the field $\mathbb{R}$ and its subfields $\mathbb{Q}$ and
$\mathbb{R}_{\textup{alg}} := \overline{\mathbb{Q}}\cap \mathbb{R}$ of rational and real algebraic numbers, respectively. The symbol $\mathbb{K}$ denotes a field of characteristic $0$.
\subsection{Ordered Fields}
A total order $\leq$ on a field $\mathbb{K}$ is called a
\textit{field order} if it is compatible with the field operations in the
following sense:
\begin{enumerate}[(M1)]
\item\label{ite:monotonyaddition} If $a\leq b$, then for all $c\in\mathbb{K}$ the
relation $a+c\leq b+c$ holds true.
\item If $0\leq a$ and $0\leq b$, then $0\leq a\cdot b$.
\end{enumerate}
A field $\mathbb{K}$ endowed with a field order $\leq$ is an \textit{ordered field}.
We always assume $\mathbb{R}$ and $\mathbb{Q}$ to be equipped with the standard field order
$\leq$. Note that the field of complex numbers $\mathbb{C}$ and the fields of
finite characteristic cannot be endowed with the structure of an ordered field.
The order on an ordered field $(\mathbb{K},\leq)$ is completely characterized by the
\emph{positive cone} it generates, which is the set
$C = \{ a \in \mathbb{K} \mid 0 \leq a \} $. Let us make this more precise.
In real algebraic geometry one understands by a \emph{proper cone} of $\mathbb{K}$
\cite[Sec.~1.1]{BocCosRoyRAG} a subset $C \subset \mathbb{K}$ which fulfills the relations
\begin{equation}
\label{eq:cone-axioms}
C + C \subset C , \quad C \cdot C \subset C , \quad \mathbb{K}^2 \subset C, \text{ and } -1 \notin C \ .
\end{equation}
A proper cone $C$ of $\mathbb{K}$ is called a \emph{positive cone} if in addition
\begin{equation}
\label{eq:positivity-axiom}
\mathbb{K} = C \cup (-C) \ .
\end{equation}
A positive cone $C$ of $\mathbb{K}$ defines a unique field order $\leq_C$ such that
$C = \{ a \in \mathbb{K} \mid 0 \leq_C a \} $. By property (M\ref{ite:monotonyaddition}),
the order $\leq_C$ is determined by: $a \leq_C b$ if and only if $b-a \in C$.
For the fields $\mathbb{K} =\mathbb{R}$ or $\mathbb{K} = \mathbb{Q}$ the standard field order coincides with the order
$\leq_P$ associated to the positive cone
\[ P = \big\{ a \in \mathbb{K} \mid \exists\, a_1,\ldots, a_n \in \mathbb{K} : \: a = \sum_{i=1}^n a_i^2 \, \big\} \ . \]
An ordered field $(\mathbb{K},\leq)$ is called \textit{real closed} if every polynomial $f\in\mathbb{K}[x]$ of odd degree has a root in $\mathbb{K}$ and for every
element $a\in\mathbb{K}$ there exists $b\in\mathbb{K}$ such that $a=b^{2}$ or $a=-b^{2}$. For example, $(\mathbb{R},\leq)$ is real closed, but
$(\mathbb{Q},\leq)$ is not since $x^{2}-2$ does not have any rational roots.
For a real closed field $(\mathbb{K},\leq)$ the $\mathbb{K}$-vector space $\mathbb{K}^n$ can be endowed
with a $\mathbb{K}$-valued metric $\|\cdot\|_{2}$
\begin{equation}\label{eq:metric-real-closed-field}
\|a-b\|_{2}=\sqrt{(a_{1}-b_{1})^{2}+ \ldots +(a_{n}-b_{n})^{2}} \ .
\end{equation}
For any real closed subfield $\mathbb{K}\subset \mathbb{R}$ this results in the standard Euclidean metric restricted to $\mathbb{K}^n$.
The \textit{real closure} of an ordered field $(\mathbb{K},\leq_\mathbb{K})$ is an ordered field $(\mathbb{F},\leq_{\mathbb{F}})$ which is real closed and
extends $(\mathbb{K},\leq_\mathbb{K})$, that is, $\mathbb{K}\subset \mathbb{F}$ and $C_\mathbb{K}\subset C_\mathbb{F}$, where $C_\mathbb{K}$ and $C_\mathbb{F}$ are the positive cones
in $\mathbb{K}$ and $\mathbb{F}$, respectively. Real closures exist and are essentially unique. For example, the real closure of $(\mathbb{Q},\leq)$ is
$(\mathbb{R}_{\textup{alg}},\leq_P)$, where $P$ is understood as above to be the set of sums of squares in $\mathbb{R}_{\textup{alg}}$.
\subsection{The Nullstellensatz}
The set of polynomials in $n$ variables over an arbitrary field $\mathbb{K}$ forms a ring
$\mathbb{K}[x_{1},\ldots ,x_{n}]$. Given a subset $S$ of the polynomial ring $\mathbb{K}[x_{1},\ldots ,x_{n}]$,
we denote by $Z(S)$ the \emph{zero-set} of $S$, that is $Z(S)= \{a\in \mathbb{K}^{n}| f(a)=0 \text{ for all } f \in S\}$.
By $\langle S \rangle_\mathbb{K}$ or briefly $\langle S \rangle$ when the underlying ground
field is clear one denotes the ideal generated by $S$ which is the intersection of all
ideals in
$\mathbb{K}[x_{1},\ldots ,x_{n}]$ containing $S$. Both the set $S$ and the ideal $\langle S \rangle$ have the same
zero
set $Z(S)=Z(\langle S\rangle)$ since if $f_{1}(a)= \ldots =f_{k}(a)=0$ for polynomials $f_{1},\ldots ,f_{k}$, then
\[
g_{1}(a)\cdot f_{1}(a)+ \ldots +g_{k}(a)\cdot f_{k}(a)=0
\quad\text{for all } g_{1},\ldots ,g_{k} \in \mathbb{K}[x_{1},\ldots ,x_{n}] \ .
\]
\noindent
Since $\mathbb{K}[x_{1},\ldots ,x_{n}]$ is noetherian, any ideal $I\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$ can be written as
$\langle f_{1},\ldots ,f_{k}\rangle$ for some $k$ and $f_{1},\ldots ,f_{k}\in I$. Therefore, starting with a possibly
infinite set $S$, $Z(S)$ can always be rewritten as $Z(S')$ for some finite set of polynomials $S'$.
Over a subfield of $\mathbb{R}$, this result can be sharpened further. For notational convenience we will denote $Z(\{f\})$ by $Z(f)$. Observe that the following proposition does not hold over the field $\mathbb{C}$.
\begin{proposition}
\label{prop:real-variety-zero-set-single-polynomial}
Let $\mathbb{K}$ be a subfield of $\mathbb{R}$. If $S$ is a non-empty subset of $\mathbb{K}[x_{1},\ldots ,x_{n}]$, then
$Z(S)$ can be written as $Z(f)$ for a single polynomial $f\in \mathbb{K}[x_{1},\ldots ,x_{n}]$.
\end{proposition}
\begin{proof}
First rewrite $Z(S)$ as $Z(S')$ for some finite set $S'\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$.
Let $S'=\{f_{1},\ldots ,f_{k}\}$. If $f_{1}(a)= \ldots =f_{k}(a)=0$, then $f_{1}^{2}(a)+ \ldots +f_{k}^{2}(a)=0$ and
$Z(S')\subset Z(f^{2}_{1}+ \ldots +f^{2}_{k})$. Conversely,
$f^{2}_{1}(a)+ \ldots +f^{2}_{k}(a)=0$ implies $f^{2}_{1}(a)= \ldots =f^{2}_{k}(a)=0$ which then entails
$f_{1}(a)= \ldots =f_{k}(a)=0$. Hence $ Z(\{f_{1},\ldots , f_{k}\})=Z(f_{1}^{2}+ \ldots +f_{k}^{2})$.
\end{proof}
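The sum-of-squares construction in the proof is easy to check numerically. In the sketch below, the polynomials $f_{1}=x^{2}+y^{2}-1$ and $f_{2}=x-y$ are an arbitrary example of our own choosing, not taken from the text.

```python
# Over a subfield of R, the common zero set of f1 and f2 is cut out by
# the single polynomial f = f1^2 + f2^2.  Here f1 is the unit circle and
# f2 a line through the origin; both are illustrative choices.

import math

def f1(x, y):
    return x ** 2 + y ** 2 - 1.0   # unit circle

def f2(x, y):
    return x - y                   # diagonal line

def f(x, y):
    return f1(x, y) ** 2 + f2(x, y) ** 2

# The two intersection points of the circle and the line are zeros of f:
s = 1.0 / math.sqrt(2.0)
for (x, y) in [(s, s), (-s, -s)]:
    assert abs(f(x, y)) < 1e-12

# A point on the circle but off the line is NOT a zero of f:
assert f(1.0, 0.0) > 0.0
```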
A set $V\subset \mathbb{K}^{n}$ is called an \emph{algebraic variety} or a \emph{variety} if $V=Z(S)$ for some set
$S\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$. Depending on whether $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$ the variety is called \emph{real}
or \emph{complex}.
Similar to the zero-set map $Z$ one also has the \textit{vanishing ideal} map $J$ which maps a subset $A\subset \mathbb{K}^{n}$ to
the set of all polynomials that vanish on $A$ i.e.\ to
\[ J(A)=\{f\in\mathbb{K}[x_{1},\ldots ,x_{n}]|f(a)=0 \text{ for all } a\in A\} \ . \]
To see why $J(A)$ is an ideal, note that if $f_{1}(a)=f_{2}(a)=0$ for some
$a\in A$ then $f_{1}(a)\pm f_{2}(a)=0$ and $q(a)\cdot f_{1}(a)=0$ for any
polynomial $q$.
Starting with an ideal $I\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$, it is not necessarily the case that $I=J(Z(I))$. This is because there
might be other polynomials vanishing on $Z(I)$ which were not included in $I$, so one can only guarantee that $I\subset J(Z(I))$.
\begin{example} Consider the ideal $\langle x^{2} \rangle \subset\mathbb{C}[x]$.
Then $Z(\langle x^{2} \rangle )=\{0\}$. However,
\[ J(Z(\langle x^{2} \rangle ))=J(\{0\})=\langle x\rangle \ , \]
so $x$ is in $J(Z(\langle x^{2} \rangle ))$ but not in $\langle x^{2} \rangle $.
\end{example}
\noindent
Over an algebraically closed field, such as $\mathbb{C}$, the missing polynomials are characterized by the \emph{radical} of $I$, defined as $\sqrt{I} = \{ f |\exists r>0 \text{ such that } f^{r}\in I\}$. Note that $\sqrt{I}$ is itself an ideal.
\begin{theorem}[Hilbert's Nullstellensatz] Over an algebraically closed field $\mathbb{K}$ the equality $J(Z(I))= \sqrt{I}$ holds for every ideal
$I\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$.
\end{theorem}
An ideal $I$ that satisfies $I=\sqrt{I}$ is called a \emph{radical} ideal. Over algebraically closed fields, Hilbert's Nullstellensatz establishes a one-to-one correspondence between varieties and radical ideals in the polynomial ring.
For ordered fields there is a similar but more subtle result. To formulate it we need some further notation.
Let $(\mathbb{K},\leq)$ be an ordered field.
The \emph{real radical} of an ideal $I \subset \mathbb{K}[x_{1},\ldots ,x_{n}]$ then is defined as
\[
\realrad{\!\mathbb{K}\,}{I} =
\big\{f \in \mathbb{K}[x_{1},\ldots ,x_{n}]| \exists k \geq 0, r>0, f_{1},\ldots ,f_{k}\in \mathbb{K}[x_{1},\ldots ,x_{n}]
\text{ such that } f^{2r}+\sum_{i=1}^{k}f_{i}^{2}\in I\big\} \ .
\]
An ideal $I\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$ satisfying $I=\realrad{\!\mathbb{K}\,}{I}$ is called \textit{real}.
If $(\mathbb{L},\leq)$ denotes the real closure of $(\mathbb{K},\leq)$ and $A$ is a subset of $\mathbb{L}^n$,
then the vanishing ideal of $A$ in $\mathbb{K}[x_{1},\ldots ,x_{n}]$ is the ideal
$J_{\mathbb{L}} (A) \cap \mathbb{K}[x_{1},\ldots ,x_{n}]$. We denote it by $J_{\mathbb{K}}(A)$.
\begin{theorem}[Real Nullstellensatz {\cite[Thm.~2.8]{Neuhaus1998}}] Let $(\mathbb{K},\leq)$ be an ordered field and $(\mathbb{L},\leq)$ be its
real closure. If $I\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$ is an ideal, then
$J_{\mathbb{K}}(Z_{\mathbb{L}}(I))=\realrad{\!\mathbb{K}\,}{I}$.
In particular, if $S\subset \mathbb{Q}[x_{1},\ldots ,x_{n}]$, then
$J_\mathbb{Q}(Z_{\mathbb{R}_{\textup{alg}}}(S))=\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}}$.
\end{theorem}
Note that $Z_{\mathbb{L}}(I)$ is the variety in $\mathbb{L}^{n}$ obtained by viewing $I$ as a set of polynomials
in $\mathbb{L}[x_{1},\ldots ,x_{n}]$, and that $J_{\mathbb{K}}(Z_{\mathbb{L}}(I))$ is the ideal of $\mathbb{K}$-polynomials vanishing on
$Z_{\mathbb{L}}(I)$.
The following density result is concerned with the metric given in Eq.~\eqref{eq:metric-real-closed-field}.
It is an immediate consequence of \cite[Prop.\ 5.3.5]{BocCosRoyRAG}.
\begin{theorem}
If $\mathbb{K}\subset \mathbb{L}$ are both real closed subfields of $\mathbb{R}$ and
$S\subset \mathbb{K}[x_{1},\ldots ,x_{n}],$ then
$Z_{\mathbb{K}}(S)$, viewed as a subset of $\mathbb{L}^{n}$, is dense in $Z_{\mathbb{L}}(S)$ with respect to the metric
$\|\cdot \|_{2}$.
In particular, if $S\subset \mathbb{Q}[x_{1},\ldots ,x_{n}]$ then $Z_{\mathbb{R}_{\textup{alg}}}(S)$ is dense in
$Z_{\mathbb{R}}(S)$.
\end{theorem}
Intuitively, this means that we can approximate points in $Z_{\mathbb{R}}(S)$ arbitrarily closely by points in $Z_{\mathbb{R}_{\textup{alg}}}(S)$. If $f$ vanishes on $Z_{\mathbb{R}_{\textup{alg}}}(S)$, then since $f$ is continuous on $\mathbb{R}^{n}$ and $Z_{\mathbb{R}_{\textup{alg}}}(S)$ is dense in $Z_{\mathbb{R}}(S)$, $f$ must also vanish on $Z_{\mathbb{R}}(S)$. Therefore, by the real Nullstellensatz
$\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}}=J_{\mathbb{Q}}(Z_{\mathbb{R}_{\textup{alg}}}(S))=J_{\mathbb{Q}}(Z_{\mathbb{R}}(S))$.
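The density statement can be made very concrete in the special case of the circle $x^{2}+y^{2}=1$, where even the rational points are dense: the parametrization $\big(\frac{1-t^{2}}{1+t^{2}},\frac{2t}{1+t^{2}}\big)$ with $t\in\mathbb{Q}$ produces points with rational (hence real algebraic) coordinates lying exactly on the circle. The sketch below is our own illustration of this special case; the theorem itself covers arbitrary real varieties.

```python
# Rational points on the unit circle approximate any real point of it.
# Exact arithmetic via fractions.Fraction certifies that the approximant
# lies exactly on the variety x^2 + y^2 = 1.

import math
from fractions import Fraction

def rational_circle_point(t):
    """Exact rational point on the unit circle from a rational parameter t."""
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    return x, y

theta = 1.0                          # target point (cos 1, sin 1)
t = Fraction(math.tan(theta / 2.0)).limit_denominator(10 ** 6)
x, y = rational_circle_point(t)

assert x * x + y * y == 1            # exactly on the variety (exact arithmetic)
assert abs(float(x) - math.cos(theta)) < 1e-4
assert abs(float(y) - math.sin(theta)) < 1e-4
```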
\subsection{Properties of Algebraic Varieties}
Real varieties inherit topological and differential structures based on their embedding in
$\mathbb{R}^{n}$ and form a very large class of spaces. For example, a famous result due to John Nash \cite{nash1952} says that
every closed manifold is diffeomorphic to a real algebraic variety. Moreover, by the fundamental work \cite{Whitney65}
of Hassler Whitney, every real algebraic variety carries a canonical minimal
Whitney stratification, which means that the variety can be decomposed into pairwise disjoint smooth manifolds so that Whitney's regularity condition (b)
is satisfied; see \cite{PflAGSSS} for details on stratified spaces and their regularity conditions.
Since manifolds of different dimensions may appear in the decomposition, varieties can also exhibit singularities
and non-smooth behavior.
This makes the variety hypothesis, i.e.\ the claim that the underlying space of the data is an algebraic variety, a very
reasonable assumption especially for scientific and computational purposes.
\begin{example}
\label{ex:union-sphere-plane}
Consider the sphere of radius $\frac{1}{2}$ centered at $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ and the plane $x-y=0$. Their union is
a variety which we can represent as the zero set of the polynomial
\[
f (x,y,z)= \left( (x-\frac{1}{2})^{2}+(y-\frac{1}{2})^{2}+(z-\frac{1}{2})^{2}-\frac{1}{4}\right)\cdot(x-y) \ .
\]
The variety $V= Z(f)$ is illustrated in Figure \ref{fig:zerolocus}.
Its singular points are the points of intersection between the plane and the variety. These points form the circle $Z(\langle 2x^{2}-2x+z^{2}-z+\frac{1}{2}\rangle)\cap Z(\langle x-y\rangle)$ illustrated in Figure \ref{fig:singularlocus}.
\begin{figure}[H]
\centering
\begin{subfigure}{.45\textwidth}\centering
\includegraphics[scale=.3]{mesh.png}
\caption{Zero locus of $f$}\label{fig:zerolocus}
\end{subfigure}
\begin{subfigure}{.45\textwidth}\centering
\includegraphics[scale=.3]{mesh_curve.png}
\caption{Singular locus}\label{fig:singularlocus}
\end{subfigure} \caption{The union of a sphere and a plane as a variety}\label{fig:varietyunionsphereplane}
\end{figure}
\end{example}
In what follows we review the concepts of dimension, the Jacobians, and the singular locus of a variety.
Let $V$ be a variety and let $J(V)\subset \mathbb{K}[x_{1},\ldots ,x_{n}]$ be its vanishing ideal.
The \emph{coordinate ring} of $V$, denoted $\mathbb{K}(V)$, is the quotient ring $\mathbb{K}[x_{1},\ldots ,x_{n}]/J(V)$. It
contains all the information about the variety $V$. In fact, $V$ can be reconstructed from $\mathbb{K}(V)$
using its spectrum; see \cite{HarAG}.
The \emph{Krull dimension} of a commutative ring $R$, denoted $\dim (R)$, is the length $n$ of the longest
chain $P_{0}\lneq P_{1}\lneq \ldots \lneq P_{n}$ of prime ideals in $R$. The following result gives a geometric interpretation of the Krull dimension of a coordinate ring. It is an immediate consequence of
\cite[Prop.~2.8.5]{BocCosRoyRAG}.
\begin{proposition}
For every real algebraic variety $V$,
$\dim (\mathbb{R}(V))=\dim(V)$, where
$\dim (V)$ is the maximal dimension of the manifolds in the canonical Whitney
stratification of $V$.
\end{proposition}
Even though a variety $V$ defined by a set $S$ of rational polynomials is
completely characterized by the ideal $J_{\mathbb{R}}(Z_{\mathbb{R}}(S))$,
the subset $\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}}$
still carries useful information.
\begin{proposition}\label{prop:dimension-geometric-algebraic}
If $S\subset \mathbb{Q}[x_{1},\ldots ,x_{n}]$, then
\[
\dim (Z_{\mathbb{R}}(S))=\dim(\mathbb{R}_{\textup{alg}}(Z_{\mathbb{R}_{\textup{alg}}}(S)))=
\dim (\mathbb{Q}[x_{1},\ldots ,x_{n}]/\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}}) \ .
\]
\end{proposition}
\begin{proof}
By \cite[Prop.~7]{Becker1993},
$\dim (\mathbb{R}_{\textup{alg}}(Z_{\mathbb{R}_{\textup{alg}}}(S))) = \dim(\mathbb{Q}[x_{1},\ldots ,x_{n}]/\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}})$.
Furthermore, the Krull dimension is preserved under field extension, see e.g.\ \cite[II.~Ex.~3.20]{HarAG}.
Hence $\dim(\mathbb{R}_{\textup{alg}}(Z_{\mathbb{R}_{\textup{alg}}}(S)))=\dim(\mathbb{R}(Z_{\mathbb{R}}(S)))$,
which coincides with $\dim(Z_{\mathbb{R}}(S))$.
\end{proof}
Related to the dimension are the singularities of a variety $V\subset\mathbb{K}^{n}$.
Express $J(V)=\langle f_{1},\ldots ,f_{k}\rangle$ with appropriate polynomials
$f_{1},\ldots, f_{k}\in \mathbb{K} [x_1,\ldots ,x_n]$.
The \emph{Jacobian} of $V$ at the point $a\in V$ then is given by the matrix
\[
\operatorname{Jac}_a (f_{1},\ldots ,f_{k})=\begin{pmatrix}
\partial_{x_{1}} f_{1}(a) &\dots&\partial_{x_{n}} f_{1}(a)\\
\vdots & \ddots & \vdots\\
\partial_{x_{1}} f_{k}(a) &\dots&\partial_{x_{n}} f_{k}(a)
\end{pmatrix} \ .
\]
\noindent
A \emph{singular point} or \emph{singularity} of $V$ is a point $a\in V$ such that
\begin{equation*}
\operatorname{rk} \big( \operatorname{Jac}_a (f_{1},\ldots ,f_{k}) \big) < n- \dim(V) \ .
\end{equation*}
Note that this condition does not depend on the choice of generators $f_{1},\ldots ,f_{k}$
for $J(V)$.
Conversely, a \emph{non-singular} point is a point $a\in V$ where
\[ \operatorname{rk} \big( \operatorname{Jac}_a(f_{1},\ldots ,f_{k})\big) = n-\dim(V) \ . \]
The set of all singular points of $V$ is called the \textit{singular locus} of $V$ and is denoted
$\operatorname{Sing} (V)$. One immediately checks that $\operatorname{Sing} (V)$ is
again a variety by observing that $\operatorname{rk}\big(\operatorname{Jac}_a(f_{1},\ldots ,f_{k})\big)<n-r$
if and only if all $(n-r)$-minors of $\operatorname{Jac}_a(f_{1},\ldots ,f_{k})$ vanish.
For our purposes, it will be more convenient to use the following characterization of
singularities which follows from \cite[Prop.~3.3.10]{BocCosRoyRAG}.
\begin{theorem}\label{thm:singular-locus-hypersurface}
If $V\subset\mathbb{R}^{n}$ is a variety such that
\begin{enumerate}[(i)]
\item $\dim(V)=n-1$, and
\item $V=Z(f)$ for an irreducible polynomial $f$,
\end{enumerate}
then $a\in V$ is a singular point if and only if
$\big(\frac{\partial f}{\partial x_{1}}(a),\ldots ,\frac{\partial f}{\partial x_{n}}(a)\big)=
(0,\ldots ,0)$.
\end{theorem}
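The gradient criterion can be illustrated numerically on the sphere-and-plane variety of Example \ref{ex:union-sphere-plane}, where $\dim V=2$ and $n=3$, so that, assuming $f$ generates the vanishing ideal, a point is singular precisely when the gradient of $f$ vanishes. The sample points in the sketch below are our own choices.

```python
# Gradient test for singular points of the sphere-and-plane variety:
# f = g * h with g the sphere polynomial and h = x - y the plane.  On the
# intersection circle g = h = 0, so grad f = h*grad g + g*grad h vanishes,
# flagging those points as singular; a generic point of the sphere alone
# has a nonzero gradient.

import math

def f(x, y, z):
    g = (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2 - 0.25
    h = x - y
    return g * h

def grad(fun, p, eps=1e-6):
    """Central finite-difference gradient (eps chosen for ~1e-9 accuracy)."""
    out = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += eps
        q2[i] -= eps
        out.append((fun(*q1) - fun(*q2)) / (2.0 * eps))
    return out

# A point on both the sphere and the plane (singular): take x = y, z = 1/2.
t = 0.5 + 1.0 / (2.0 * math.sqrt(2.0))
singular = (t, t, 0.5)
assert all(abs(comp) < 1e-6 for comp in grad(f, singular))

# A point on the sphere only, (1, 1/2, 1/2), is non-singular:
assert any(abs(comp) > 1e-3 for comp in grad(f, (1.0, 0.5, 0.5)))
```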
A variety $V$ is called \emph{reducible} if it is the union of two or more proper non-empty subvarieties, otherwise it is called \emph{irreducible}. This suggests decomposing varieties into their irreducible components i.e.~expressing $V$ as a union $V_{1}\cup \ldots \cup V_{s}$ where each $V_{i}$ is irreducible. The irreducible decomposition of a variety $V$ is finite and unique and it can be obtained by finding the minimal primes $P_{1},\ldots ,P_{s}$ over $J(V)$. In this case, the irreducible decomposition of $V$ is given by $V =Z(P_{1}) \cup \ldots \cup Z(P_{s})$.
\begin{proposition}[cf.~{\cite[Thm 2.8.3.~(ii)]{BocCosRoyRAG}}]
\label{prop:irreducible-decomposition}
Given $S\subset \mathbb{Q}[x_{1},\ldots ,x_{n}]$ let $V=Z_{\mathbb{R}}(S)$.
If $P_{1},\ldots ,P_{s}$ are the minimal primes over $\realrad{\mathbb{Q}}{\langle S\rangle_\mathbb{Q}}$, then each prime $P_{i}$ is equal to $J_\mathbb{Q}(V_{i})$ for some variety $V_{i}\subset\mathbb{R}^{n}$. Furthermore, $V_{1}\cup \ldots \cup V_{s}$ is the irreducible decomposition of $V$.
\end{proposition}
\section{Learning the Underlying Variety}
\label{sec:learning-variety}
\noindent
In this section, we consider the following learning problem:
\begin{itemize}
\item[(LP)]
\textit{Given a dataset $\Omega=(a_{1},\ldots ,a_{m})$ of points sampled from
a variety $V \subset \mathbb{R}^n$, find that variety.}
\end{itemize}
\noindent
We interpret this as the optimization problem:
\begin{itemize}
\item[(OP)]
\textit{For a fixed class of varieties $\mathcal{V}$ and an objective function
$A:\mathcal{V}\times([0,1]^{n})^{m}\to\mathbb{R}^{+}$, given a dataset $\Omega=(a_{1},\ldots ,a_{m})$ of points in $[0,1]^{n}$ find $\argmax\limits_{V\in \mathcal{V}} A(V,\Omega)$.}
\end{itemize}
One way to find a suitable objective function $A$ is by following the Bayesian machine learning paradigm \cite{barber}. Instead of working with varieties
directly, we use the observation from Proposition \ref{prop:real-variety-zero-set-single-polynomial} that every real variety $V$ can be expressed as $Z(f)$ for a single polynomial $f\in\mathbb{R}[x_{1},...,x_{n}]$.
Therefore, one can single out a class of polynomials $\Theta\subset\mathbb{R}[x_{1},\ldots ,x_{n}]$ such that each variety $V\in\mathcal{V}$ is defined by
some polynomial $f\in \Theta$. Given a posterior probability distribution $p(f|\Omega)$ over $\Theta$, we can define an objective
function \[ A(V,\Omega):=\max\limits_{f\in\Theta,Z(f)=V}p(f|\Omega) \ . \]
The value $A(V,\Omega)$
is to be interpreted as being proportional to the probability that $V$ is the variety from which $\Omega$ was sampled. We need to take a maximum in this definition of $A$ since different polynomials in $\Theta$ may define the same variety $V$. With this choice of $A$, the learning problem $\argmax\limits_{V\in\mathcal{V}}A(V,\Omega)$ reduces to the Maximum A Posteriori (MAP) problem \[ \argmax\limits_{f\in\Theta}p(f|\Omega) \ .\]
Assume that we are given a likelihood distribution $p(x|f)$ over $[0,1]^{n}$ and a prior distribution $p(f)$ over $\Theta$ so that $p(f|x)= \frac{1}{\kappa}\, p(x|f)\,p(f)$ for a normalization constant $\frac{1}{\kappa}$. If furthermore the samples $\Omega=(a_{1},\ldots ,a_{m})$ are independent and
identically distributed (IID \cite{barber}), then the MAP problem is explicitly given by
\[ \argmax\limits_{f\in\Theta}\frac{1}{\kappa}\prod\limits_{i=1}^{m}p(a_{i}|f)p(f)^{m} \ . \]
The likelihood $p(x|f)$ should be roughly thought of as the probability of sampling $x\in[0,1]^{n}$ if the true underlying variety is $Z(f)$. To be robust towards noise and outliers, we allow points sampled from $Z(f)$ to be near $Z(f)$ even if they are not exactly on it. So instead of being supported on $Z(f)$, $p(x|f)$ should ideally depend on the distance of $x$ to $Z(f)$. The most obvious notion of distance here is the \textit{geometric distance} $d_{G}(x,f)=\inf\limits_{y\in Z(f)} \|x-y\|_{2}$. However, working with $d_{G}$ can be intractable, especially for optimization purposes. For a discussion on the complexity of this problem see \cite{Gfvert2020}.
Instead, we use a relaxation called the \textit{algebraic distance} \cite{Gander1994} given by
$d_{A}(x,f)=|f(x)|$. By Lojasiewicz's inequality \cite[Sec.~18, Thm.~2]{LojESA}
(see also \cite{Bierstone1988,ColdingMinicozzi}),
the algebraic distance is an upper bound on the geometric distance.
More precisely, Lojasiewicz's inequality entails the following result.
\begin{proposition}
Given a polynomial $f\in \mathbb{R}[x_1,\ldots,x_n]$, there exist $C>0$ and $a$ with $0 < a < \frac{1}{2}$ such that
\[ d_{G}(x,f)\leq C \, d_{A}(x,f)^a \quad \text{for all } x\in [0,1]^{n} \ . \]
\end{proposition}
\noindent
Before proceeding further, we introduce some additional notation. We denote monomials using multi-indices:
\[
x^\alpha := x_{1}^{\alpha^{1}} \cdot \ldots \cdot x_{n}^{\alpha^{n}} \quad \text{for }
\alpha = (\alpha^{1},\ldots,\alpha^{n}) \in \mathbb{N}^n \ .
\]
The degree $D $ of the monomial $ x^\alpha$ then is $D = |\alpha| :=\alpha^{1}+\ldots +\alpha^{n}$.
Correspondingly, a polynomial $f$ of degree $\leq D$ can be written as
\[
f = \sum_{{k=1}}^{{N}} c_{{k}}x^{\alpha_{{k}}}
= c_{{1}}x^{\alpha_{{1}}}+\ldots +c_{{N-1}}x^{\alpha_{{N-1}}}+c_{{N}} \ ,
\]
where $N = \binom{n+D}{D}$, the coefficients $c_{1},\ldots ,c_{N}$ lie in $\mathbb{R}$,
and the multi-indices $\alpha_{k} = (\alpha_{k}^{1},\ldots, \alpha_{k}^{n})$
for $k=1,\ldots, N$
are elements of $\mathbb{N}^n$ with $\alpha_{N} =0$.
For a fixed monomial ordering, $f$ can be uniquely represented as a vector in terms of its coefficients
$(c_{1},\ldots ,c_{N})$. Thus we have an isomorphism
$F:(c_{1},\ldots ,c_{N})\mapsto c_{1}x^{\alpha_{1}}+\ldots +c_{N-1}x^{\alpha_{N-1}}+c_{N}$ from the space of coefficients
$\mathbb{R}^{N}$ to
the space $ \{f\in\mathbb{R}[x_{1},\ldots ,x_{n}]|\operatorname{Deg}(f) \leq D\}$ of polynomials of degree $\leq D$.
Evaluating $f$ at each point of the dataset $\Omega$ results in the vector $(f(a_{1}),\ldots ,f(a_{m}))$.
This operation can be expressed in matrix notation as
\[
\begin{pmatrix} f(a_{1}) \\ \vdots \\ f(a_{m}) \end{pmatrix}
= U \begin{pmatrix} c_{1} \\ \vdots \\ c_{N} \end{pmatrix} \ ,
\]
where $U$ is the \textit{multivariate Vandermonde matrix} \cite{Breiding2018} defined by
$U_{ij}=x^{\alpha_{j}}(a_{i})$.
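For concreteness, the construction of $U$ can be sketched in code. The following Python snippet is an illustrative sketch, not part of the original text; the function names are our own. It enumerates all multi-indices with $|\alpha|\leq D$ and evaluates the corresponding monomials at the sample points:

```python
from itertools import product

import numpy as np

def multi_indices(n, D):
    """All exponent vectors alpha in N^n with |alpha| <= D."""
    return [a for a in product(range(D + 1), repeat=n) if sum(a) <= D]

def vandermonde(points, D):
    """Multivariate Vandermonde matrix: U[i, j] = a_i ** alpha_j."""
    points = np.asarray(points, dtype=float)
    alphas = multi_indices(points.shape[1], D)
    # each row evaluates every monomial x^alpha at one data point
    U = np.array([[np.prod(a ** np.array(al)) for al in alphas]
                  for a in points])
    return U, alphas
```

For $n$ variables and degree bound $D$, the number of columns is $N=\binom{n+D}{D}$, matching the dimension of the coefficient space.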
One way to capture the inverse relationship between probability and distance is by taking the likelihood to be $p(x|f)=\frac{1}{r(f)}e^{-{d}_{A}(x,f)^{2}}=\frac{1}{r(f)}e^{-f(x)^{2}}$ where $r(f)$ is the normalization factor $\int\limits_{[0,1]^{n}}e^{-f(x)^{2}} dx$. Since the algebraic distance depends on the magnitude of the coefficients, we restrict to the class
\[ \Theta_{D}:=\{f\in\mathbb{R}[x_{1},\ldots ,x_{n}]|\operatorname{Deg}(f)\leq D, \: \|F^{-1}(f)\|=1\} \ , \]
where $\|F^{-1}(f)\|$ coincides with the norm on the coefficients
$\sqrt{(c_{1},\ldots ,c_{N})(c_{1},\ldots ,c_{N})^{\operatorname{t}}}$. The space $\Theta_{D}$ has an obvious
parametrization $F|_{\mathbb{S}^{N-1}}:\mathbb{S}^{N-1}\to \Theta_{D}$ which takes a point $(c_{1},\ldots ,c_{N})$ in the sphere
$\mathbb{S}^{N-1}$ to the polynomial $c_{1}x^{\alpha_{1}}+\ldots +c_{N-1}x^{\alpha_{N-1}}+c_{N}\in\Theta_{D}$.
The degree bound $D$ should be treated as a hyperparameter, and choosing the hypothesis class $\Theta_{D}$ restricts us to the class $\mathcal{V}_{D}$ of varieties defined by a single polynomial of degree $\leq D$ whose coefficients have norm $1$. As with the standard machine learning set-up, there is a trade-off where setting $D$ higher leads to better approximation but higher risk of over-fitting. This issue is explored in Section \ref{sec:results}.
We define a prior distribution on $\Theta_{D}$ by $p(f)= \frac{1}{\beta}r(f)$ where $\beta$ is the normalization
constant
\[ \beta = \int_{\mathbb{S}^{N-1}} r(F(y))dy=\int_{\mathbb{S}^{N-1}}\int_{[0,1]^{n}} e^{{-(F(y)(x))}^{2}}dxdy \ . \]
This prior disfavors polynomials with low values of $r(f)$. Intuitively, such polynomials have high values of $\int_{[0,1]^{n}}|f(x)|dx$, which means that, with respect to the algebraic distance, these polynomials have a large total distance from the domain $[0,1]^{n}$. With this prior, the joint distribution simplifies to $p(x|f)p(f)=\frac{1}{\beta}e^{-f(x)^{2}}$.
With this choice of likelihood and prior distributions, the MAP problem is given by:
\noindent
\begin{align*}
\argmax\limits_{f\in\Theta_{D}}\frac{1}{\kappa}\prod\limits_{i=1}^{m}\frac{1}{\beta}e^{-f(a_{i})^{2}}
=&\argmax\limits_{f\in\Theta_{D}}\frac{1}{\kappa}(\frac{1}{\beta^{m}}e^{-\sum_{i=1}^{m}f(a_{i})^{2}}) \\
=&\argmax\limits_{f\in\Theta_{D}}\ \log( \frac{1}{\kappa}(\frac{1}{\beta^{m}}e^{-\sum_{i=1}^{m}f(a_{i})^{2}}))\\
=&\argmax\limits_{f\in\Theta_{D}}\ -\log(\kappa)-\log(\beta^{m})-\sum_{i=1}^{m}f(a_{i})^{2}
= \argmin\limits_{f\in\Theta_{D}} \ \sum_{i=1}^{m}f(a_{i})^{2}\ .
\end{align*}
\noindent
This is equivalent to the problem
\[ \argmin\limits_{c \in \mathbb{S}^{N-1}} \ (c_{1},\ldots ,c_{N})\ U^{\operatorname{t}}U\ (c_{1},\ldots ,c_{N})^{\operatorname{t}} \]
\noindent
which is a convex Quadratically Constrained Quadratic Program (QCQP) \cite{Convex}. The solutions are the normalized elements of the eigenspace $E_{\lambda}$ where $\lambda$ is the smallest eigenvalue of $U^{\operatorname{t}}U$.
In the case where $\lambda>0$, the matrix $U^{\operatorname{t}}U$ is positive definite, in which case the QCQP is strictly convex \cite{Convex}, so the MAP solution $\hat{f}$ is essentially unique (up to sign). In the case where $\lambda=0$, the MAP solutions are the normalized elements of $E_{\lambda}= \operatorname{ker} (U^{\operatorname{t}}U)$. Here, every MAP solution $\hat{f}$ vanishes exactly on $\Omega$ and $p(\hat{f}|\Omega)=\frac{1}{\kappa\beta^{m}}$.
Therefore, under the assumptions that the hypothesis class is $\mathcal{V}_{D}$, the objective function is $A(V,\Omega)=\max\limits_{f\in\Theta,Z(f)=V}p(f|\Omega)$, and the posterior distribution is $p(f|\Omega)=\frac{1}{\kappa}(\frac{1}{\beta^{m}}e^{-\sum_{i=1}^{m}f(a_{i})^{2}})$ on $\Theta_{D}$, the solutions to the learning problem are the varieties $Z(\hat{f})$ for every normalized element $\hat{f}\in E_{\lambda}$. We call this the \emph{MAP model} and we summarize it in the algorithm below.
\begin{algorithm}[H]
\SetAlgoLined
\textbf{Input:} a dataset $\Omega\in ([0,1]^{n})^{m}$ and a degree bound $D$. \\
\textbf{Output:} a polynomial $\hat{f}$ in $\Theta_{D}$, such that $Z(\hat{f})$ solves the learning problem under the above assumptions.\\[2mm]
Fix an ordering and list all monomials $x^{\alpha_{1}}, \ldots ,x^{\alpha_{N}}$ of degree $\leq D$.\\
Compute the multivariate Vandermonde matrix $U_{ij}=x^{\alpha_{j}}(a_{i})$.\\
Find the smallest eigenvalue $\lambda$ of $U^{\operatorname{t}}U$ and its corresponding eigenspace $E_{\lambda}$.\\
\textbf{return:} any normalized element $\hat{f}$ of $E_{\lambda}$
\caption{MAP Model}
\end{algorithm}
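The MAP model admits a direct numerical sketch: build $U$, diagonalize $U^{\operatorname{t}}U$ with a symmetric eigensolver (which returns eigenvalues in ascending order), and take a unit eigenvector for the smallest eigenvalue. This is an illustrative sketch, not the authors' implementation; the helper names are our own:

```python
from itertools import product

import numpy as np

def vandermonde(points, D):
    """Multivariate Vandermonde matrix and the monomial multi-indices."""
    points = np.asarray(points, dtype=float)
    alphas = [a for a in product(range(D + 1), repeat=points.shape[1])
              if sum(a) <= D]
    U = np.array([[np.prod(a ** np.array(al)) for al in alphas]
                  for a in points])
    return U, alphas

def map_model(points, D):
    """Unit coefficient vector of a MAP polynomial of degree <= D."""
    U, alphas = vandermonde(points, D)
    w, V = np.linalg.eigh(U.T @ U)   # eigenvalues in ascending order
    c_hat = V[:, 0]                  # eigenvector for the smallest eigenvalue
    return c_hat / np.linalg.norm(c_hat), alphas
```

On noise-free data the smallest eigenvalue is (numerically) zero and the returned polynomial vanishes on the samples.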
\subsection{Expanding on the MAP Model}
One way to refine the result for the case $\lambda = 0$ is to enlarge our hypothesis class. If $\lambda = 0$ and $f_{1},\ldots ,f_{k}$ is a normalized basis for $\operatorname{ker} (U^{\operatorname{t}}U)$, then $\hat{f}:=f_{1}^{2}+\ldots +f_{k}^{2}$ is a degree $2D$ polynomial whose zero-set satisfies
\[ Z(\hat{f}) = Z(\{f_{1},\ldots ,f_{k}\})=\bigcap\limits_{f\in\operatorname{ker}(U^{\operatorname{t}}U)} Z(f) \ . \]
That is, $\hat{f}$ defines the smallest variety given by a set of polynomials of degree $\leq D$. This method of taking intersections changes the hypothesis class and may no longer yield an MAP solution over $\Theta_{2D}$. However, this method yields a less redundant variety than the MAP solutions over $\Theta_{D}$ without the need to perform any further optimization. We call this the \emph{intersected MAP model} and we summarize it in the algorithm below.
\begin{algorithm}[H]
\SetAlgoLined
\textbf{Input:} a dataset $\Omega\in ([0,1]^{n})^{m}$ and a degree bound $D$. \\
\textbf{Output:} a polynomial $\hat{f}$ in $\Theta_{2D}$, such that $Z(\hat{f})$ is the intersection of all solutions in $\mathcal{V}_{D}$ to the learning problem under the previous assumptions.\\[2mm]
Fix an ordering and list all monomials $x^{\alpha_{1}},\ldots ,x^{\alpha_{N}}$ of degree $\leq D$. \\
Compute the multivariate Vandermonde matrix $U_{ij}=x^{\alpha_{j}}(a_{i})$.\\
Find the smallest eigenvalue $\lambda$ of $U^{\operatorname{t}}U$ and its corresponding eigenspace $E_{\lambda}$.\\
\eIf{$\lambda>0$}{
\textbf{return:} the (essentially) unique normalized element $\hat{f}$ of $E_{\lambda}$
}{
Find an orthonormal basis $f_{1},\ldots ,f_{k}$ for $\operatorname{ker} (U^{\operatorname{t}}U)$.\\
\textbf{return:} $\hat{f}:= f_{1}^{2}+\ldots +f_{k}^{2}$.
}
\caption{Intersected MAP Model}
\end{algorithm}
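A sketch of the intersected variant under the same assumptions: when several eigenvalues are numerically zero, an orthonormal kernel basis is read off from the eigenvectors, and the model $\hat{f}=f_{1}^{2}+\ldots+f_{k}^{2}$ is evaluated as a sum of squares. The tolerance `tol` and the helper names are our own choices:

```python
from itertools import product

import numpy as np

def vandermonde(points, D):
    """Multivariate Vandermonde matrix and the monomial multi-indices."""
    points = np.asarray(points, dtype=float)
    alphas = [a for a in product(range(D + 1), repeat=points.shape[1])
              if sum(a) <= D]
    U = np.array([[np.prod(a ** np.array(al)) for al in alphas]
                  for a in points])
    return U, alphas

def intersected_map(points, D, tol=1e-10):
    """Evaluator for f_hat = f_1^2 + ... + f_k^2 over a kernel basis."""
    U, alphas = vandermonde(points, D)
    w, V = np.linalg.eigh(U.T @ U)
    ker = V[:, w < tol]              # orthonormal basis of ker(U^t U)
    if ker.shape[1] == 0:            # lambda > 0: essentially unique solution
        ker = V[:, :1]
    def f_hat(x):
        vals = np.array([np.prod(np.asarray(x, float) ** np.array(al))
                         for al in alphas])
        return float(np.sum((ker.T @ vals) ** 2))
    return f_hat, ker
```

The zero set of `f_hat` is the intersection of the zero sets of the kernel basis polynomials, as in the algorithm above.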
\section{Algebraic Computations}
\label{sec:algebraic-computations}
Assume that $Z(f)$ is the true underlying variety for the dataset $\Omega$. We can use tools from
commutative algebra to reveal information about the geometry of $Z(f)$, and this analysis can be automated
with the use of Gr\"obner basis methods. For a general overview on this topic, we suggest
\cite{10.5555/1557288}. To use Gr\"obner basis methods in a computer algebra system like SINGULAR \cite{SINGULAR}, we have to change the base field from $\mathbb{R}$ to $\mathbb{Q}$. If $f$ was obtained through a numerical procedure such as the MAP learning model, then the floating point coefficients of $f$ can be interpreted as rational numbers.
With Gr\"obner basis methods one can compute a generating set $f_{1},\ldots ,f_{k}$ for the ideal $\realrad{\mathbb{Q}}{\langle f\rangle_\mathbb{Q}}\subset \mathbb{Q}[x_{1},\ldots ,x_{n}]$.
Using this generating set we can construct the ring $\mathbb{Q}[x_{1},\ldots ,x_{n}]/\realrad{\mathbb{Q}}{\langle f\rangle_\mathbb{Q}}$ which is a subring of the coordinate ring $\mathbb{R}(Z_{\mathbb{R}}(f))$.
By Proposition \ref{prop:dimension-geometric-algebraic},
\[ \operatorname{dim}(Z_{\mathbb{R}}(f))=\operatorname{dim}(\mathbb{Q}[x_{1},\ldots ,x_{n}]/\realrad{\mathbb{Q}}{\langle f\rangle_\mathbb{Q}})\]
which can be computed using Gr\"obner bases. Similarly, we can compute the minimal primes over $\realrad{\mathbb{Q}}{\langle f\rangle_\mathbb{Q}}$. By Proposition \ref{prop:irreducible-decomposition} these are the ideals $J_{\mathbb{Q}}(V_{1}),\ldots ,J_{\mathbb{Q}}(V_{s})$,
where $V_{1},\ldots ,V_{s}$ are the irreducible components of $V$.
It should be noted however that even over $\mathbb{Q}$, Gr\"obner basis computations are in general very costly and may only be feasible for small $n$ and $D$, hence the need for numerical computations. For more details on the complexity of finding Gr\"obner bases see \cite{HUYNH1986196}.
\begin{example}
We can apply these concepts to the variety $V$ from Example \ref{ex:union-sphere-plane} using the following SINGULAR code.
\begin{verbatim}
// Define R = QQ[x,y,z] with lexicographic ordering.
ring R = 0,(x,y,z),lp;
poly f = ((x-1/2)^2 + (y-1/2)^2 + (z-1/2)^2 - 1/4)*(x-y);
ideal I = f;
LIB "realrad.lib";
ideal I2 = realrad(I);
size(reduce(I,I2));
//->0
size(reduce(I2,I));
//->0
LIB "primdec.lib";
minAssGTZ(I2);
//->[1]:
//-> _[1]=2x2-2x+2y2-2y+2z2-2z+1
//->[2]:
//-> _[1]=x-y
dim(I2);
//->2
\end{verbatim}
First we define the ideal $I=\langle((x-\frac{1}{2})^{2} + (y-\frac{1}{2})^{2} + (z-\frac{1}{2})^{2} - \frac{1}{4})\cdot(x-y)\rangle$ over $\mathbb{Q}$. We then compute the real radical using the library \texttt{realrad.lib} \cite{realrad} and
observe that in fact $I=\realrad{\mathbb{Q}}{I}$. As expected, we find that $\dim (\mathbb{Q}[x_{1},\ldots ,x_{n}]/\realrad{\mathbb{Q}}{I})=2$ which coincides with $\dim (V)$. Using the library \texttt{primdec.lib} we also determine the minimal primes above $\realrad{\mathbb{Q}}{I}$ to be the vanishing ideal of the sphere of radius $\frac{1}{2}$ centered around $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ and the
vanishing ideal of the plane $x-y$. This agrees with the decomposition of $V$.
\end{example}
\section{Numerical Computations}
\label{sec:singular-heuristics}
Again, assume that $Z(f)$ is the true underlying variety for $\Omega$. We can also use numerical methods to study the geometry of $Z(f)$ by sampling new points from $Z(f)$ and working directly with those samples. This approach has the advantage of being computationally more tractable than the algebraic computations relying on Gr\"obner bases.
One reason to obtain a new sample set $\Omega'$ from $Z(f)\cap[0,1]^{n}$ is that if $Z(f)$ is only an approximate fit, then the sample $\Omega'$ will reflect the geometry of $Z(f)$ more closely than $\Omega$. Hence, if we are specifically studying the model $Z(f)$, generating a sample set $\Omega'$ can lead to more accurate results.
First notice that the likelihood distribution $p(x|f)$ on $[0,1]^{n}$ can be used to generate data. This can be achieved using \textit{rejection sampling}, which is outlined in Algorithm \ref{alg:rejection-sampling}. This sampling process reveals the underlying assumptions that the model makes about how the original data $\Omega$ was generated and how noise was introduced.
However, capturing the model's assumptions also captures the noisy process through which $\Omega$ was supposedly generated. If the likelihood distribution depends on the algebraic distance, such as the distribution used in Section \ref{sec:learning-variety}, then this noise can be avoided by fixing a small $\eta > 0$ and accepting a point $a\in[0,1]^{n}$ if and only if $d_{A}(a,Z(f))=|f(a)|<\eta$. We call this \textit{direct sampling} and we summarize it in Algorithm \ref{alg:direct-sampling}.
Setting a smaller $\eta$ threshold reduces the sampling noise; however, this comes at the cost of decreasing the efficiency of the sampling, since more candidate points are rejected. This still works reasonably well for low values of $n$, but it does not scale well to higher dimensions. For higher dimensions, a more effective sampling method is given in \cite{8999343}. Alternatively, the variety could be sampled using Homotopy Continuation \cite{10.1007/978-3-319-96418-8_54}, which is a numerical method for computing the zero-set of a system of polynomials. In Homotopy Continuation, one begins with a simple system of polynomials whose roots are known, and then defines a homotopy, i.e.\ a continuous deformation, from the simple system to the system of polynomials that one is trying to solve. This method relies on tracking the paths that the roots take as the homotopy is being applied.
\begin{algorithm}[h]
\SetAlgoLined
\textbf{Input:} a polynomial $f$, a likelihood distribution $p(x|f)$, and a target number of samples $m$.\\
\textbf{Output:} a set $\Omega'$ of $m$ points sampled from $Z(f)\cap[0,1]^{n}$ according to $p(x|f)$.\\
Initialize $\Omega'$ to $\emptyset$.\\
\While{$|\Omega'|<m$}{
Draw a random point $a$ from the uniform distribution on $[0,1]^{n}$.\\
Draw a random number $\alpha$ from $[0,1]$.\\
\eIf{$\alpha < p(a|f)$}{
\textbf{Accept:} $\Omega'\xleftarrow{}\Omega'\cup \{a\}$.
}{
\textbf{Reject}.
}
}
\textbf{return:} $\Omega'$.
\caption{Rejection Sampling}\label{alg:rejection-sampling}
\end{algorithm}
\begin{algorithm}[h]
\SetAlgoLined
\textbf{Input:} a polynomial $f$, a sampling threshold $\eta$,
and a target number of samples $m$.\\
\textbf{Output:} a set $\Omega'$ of $m$ points sampled from $Z(f)\cap[0,1]^{n}$.\\
Initialize $\Omega'$ to $\emptyset$.\\
\While{$|\Omega'|<m$}{
Draw a random point $a$ from the uniform distribution on $[0,1]^{n}$.\\
\eIf{$|f(a)|<\eta$}{
\textbf{Accept:} $\Omega'\xleftarrow{}\Omega'\cup \{a\}$.
}{
\textbf{Reject}.
}
}
\textbf{return:} $\Omega'$.
\caption{Direct Sampling}\label{alg:direct-sampling}
\end{algorithm}
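Algorithm \ref{alg:direct-sampling} translates almost verbatim into code; the following is a sketch, with function and parameter names of our own choosing:

```python
import numpy as np

def direct_sample(f, n, m, eta, rng=None):
    """Direct sampling: accept uniform points a in [0,1]^n with |f(a)| < eta
    until m samples have been collected."""
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < m:
        a = rng.uniform(0.0, 1.0, size=n)
        if abs(f(a)) < eta:          # algebraic distance below the threshold
            out.append(a)
    return np.array(out)
```

As noted above, the expected number of rejections grows as $\eta$ shrinks and as $n$ grows, which is why this scheme is practical only in low dimensions.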
Let $\Omega'$ be a set of samples from $Z(f)\cap [0,1]^{n}$ obtained using direct sampling with an accuracy threshold of $\eta$. Under the hypothesis that the distribution $p(x|f)$ depends on $d_{A}(x,f)$, we propose a method for finding the points in $\Omega'$ near $\operatorname{Sing}(Z(f))$. First, by Theorem \ref{thm:singular-locus-hypersurface}, if $V$ is $n-1$ dimensional and $f$ is irreducible, then every point $b\in (0,1)^{n}$ which lies in
the singular locus $\operatorname{Sing}(Z(f))$ satisfies
\[ f(b)=\partial_{x_{1}}f(b)=\ldots =\partial_{x_{n}}f(b)=0 \ .\]
By continuity of the map $\| \nabla f(x)\|_{2}=\sqrt{(\partial_{x_{1}}f(x))^{2}+\ldots +(\partial_{x_{n}}f(x))^{2}}$
there exists for every $\epsilon >\eta$ an open neighborhood $U$ of $b$ such that all points
$a\in U\cap \Omega'$ satisfy $\| \nabla f(a)\|_{2}<\epsilon$.
Note, however, that the converse is not necessarily true: a point $a\in\Omega'$ satisfying $\| \nabla f(a)\|_{2}<\epsilon$ need not be near a point $b\in\operatorname{Sing}(Z(f))$. Nevertheless, assuming the converse appears justified for computational purposes, and it provides a powerful heuristic for detecting singularities. More specifically, we heuristically assume that, regardless of the dimension of $V$ and the reducibility of $f$, if $\epsilon>0$ is small enough, then $\{a\in \Omega'\,|\,\|\nabla f(a)\|_{2}<\epsilon\}$ is a set of points at or near $\operatorname{Sing}(Z(f))\cap(0,1)^{n}$. We will denote this set by
$\operatorname{Sing} (\Omega')$ and call the described method the \textit{singularity heuristic}. Clearly, the
accuracy of this method depends on the magnitudes of $\eta$ and $\epsilon$ and the density of the sample
set $\Omega'$. We explore the efficacy of the singularity heuristic in Section \ref{sec:results}.
\begin{algorithm}[h]
\SetAlgoLined
\textbf{Input:} a polynomial $f$, a set of samples $\Omega'$ from $Z(f)\cap[0,1]^{n}$, and a singularity threshold $\epsilon$. \\
\textbf{Output:} $\operatorname{Sing}(\Omega')$ a subset of $\Omega'$, heuristically assumed to be near $\operatorname{Sing}(Z(f))$ \\
Initialize $\operatorname{Sing}(\Omega')$ to $\emptyset$.\\
Find the partial derivatives $\partial_{x_{1}}f(x),\ldots ,\partial_{x_{n}}f(x)$.\\
\For{$a\in \Omega'$}{
\eIf{$\|\nabla f(a)\|_{2}<\epsilon$}{
\textbf{Accept:} $\operatorname{Sing}(\Omega')\xleftarrow{}\operatorname{Sing}(\Omega')\cup \{a\}$.
}{
\textbf{Reject}.
}
}
\textbf{return:} $\operatorname{Sing}(\Omega')$.
\caption{Singularity Heuristic}
\end{algorithm}
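A sketch of the singularity heuristic follows. The paper assumes access to the exact partial derivatives of $f$; this illustrative version substitutes central finite differences so that it works for any black-box $f$ (an implementation choice of ours, not prescribed by the text):

```python
import numpy as np

def numerical_grad(f, a, h=1e-6):
    """Central finite-difference approximation of grad f at a."""
    a = np.asarray(a, dtype=float)
    g = np.zeros_like(a)
    for i in range(a.size):
        e = np.zeros_like(a)
        e[i] = h
        g[i] = (f(a + e) - f(a - e)) / (2 * h)
    return g

def singularity_heuristic(f, samples, eps):
    """Keep the samples where the gradient norm falls below eps."""
    return np.array([a for a in samples
                     if np.linalg.norm(numerical_grad(f, a)) < eps])
```

For $f(x,y)=xy$, whose zero set has a nodal singularity at the origin, the heuristic retains exactly the sample points near the node.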
Another crucial numerical task is to test whether two varieties $Z(f)$ and $Z(g)$ are equal. Even if $Z(f)=Z(g)$, the polynomials $f$ and $g$ may have different coefficients. Furthermore, testing this equality using the polynomials $f$ and $g$ is not possible if
one of the polynomials, say $g$, is unknown and we only have access to samples $\Lambda$ from
$Z(g)\cap [0,1]^{n}$. One solution is to work entirely in terms of samples, namely,
take a set of samples $\Omega'$ from $Z(f)\cap [0,1]^{n}$ and directly compare $\Lambda$
with $\Omega'$. To compare $\Lambda$ with $\Omega'$ we use the Wasserstein distance
\cite{ruschendorf1985wasserstein} $W(\Lambda,\Omega')$ which measures the cost of the
optimal transport taking the point cloud $\Lambda$ to the point cloud $\Omega'$. We can similarly apply
this in order to compare a set of samples from $\operatorname{Sing}(Z(g))\cap [0,1]^{n}$ with the set $\operatorname{Sing}(\Omega')$.
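For two equal-size point clouds with uniform weights, the Wasserstein distance reduces to finding an optimal bijective matching, which can be computed with the Hungarian algorithm. Below is a sketch using SciPy's `linear_sum_assignment`; the choice of tooling is ours, as the text does not prescribe an implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_1(X, Y):
    """W_1 between two equal-size point clouds with uniform weights:
    the average cost of an optimal bijective matching."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    # pairwise Euclidean cost matrix C[i, j] = ||X_i - Y_j||
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(C)   # Hungarian algorithm
    return C[rows, cols].mean()
```

The distance is zero exactly when the two clouds coincide as multisets, and translating one cloud shifts the distance by the length of the translation.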
\section{Results}
\label{sec:results}
In this section, we test the MAP model from Section \ref{sec:learning-variety}, and the singularity heuristic from Section \ref{sec:singular-heuristics}.
Starting with the variety $V$ from Example \ref{ex:union-sphere-plane}, we used the parametric form of $V$ to produce a sample set $\Omega$ consisting of $1600$ points sampled from $V\cap [0,1]^{3}$ (Figure \ref{fig:7.1a}). We also used the parametric form of $\operatorname{Sing}(V)$ to
produce a sample set, call it $\operatorname{Sing}(\Omega)$, of $400$ points sampled from $\operatorname{Sing} (V)$ (Figure \ref{fig:7.1d}). Samples from the learned variety are plotted in Figure \ref{fig:7.1b}.
\begin{figure}[H]
\centering
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=.3]{standard_sample.png}
\caption{}\label{fig:7.1a}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=.3]{learned_sample.png}
\caption{}\label{fig:7.1b}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.3]{noisy_sample.png}
\caption{}\label{fig:7.1c}
\end{subfigure}
\caption{Samples from $V$ and from the learned variety}\label{fig:7sampling-variety}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=.3]{sing_sample.png}
\caption{}\label{fig:7.1d}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=.3]{learned_sing_sample.png}
\caption{}\label{fig:7.1e}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\end{subfigure}\caption{Samples from the singular locus of $V$ and results from the singularity heuristic}\label{fig:7singular-locus}
\end{figure}
We applied the MAP model with $D=3$. After rounding and scaling, we obtained exactly the target polynomial
\[ f=x^{3}-x^{2}y-x^{2}+xy^{2}+xz^{2}-xz+\frac{1}{2}x-y^3+y^2-yz^{2}+yz-\frac{1}{2}y \ . \]
By direct sampling with $\eta=0.001$ we produced a dataset $\Omega'$ consisting of $1600$ points sampled from $Z(f)\cap[0,1]^{3}$
(Figure \ref{fig:7.1b}).
Using these samples, we applied the singularity heuristic with value $\epsilon = 0.02$ and obtained a set $\operatorname{Sing} (\Omega')$
consisting of $232$ points (Figure \ref{fig:7.1e}).
To test the MAP model under more general conditions, we used the dataset $\Omega$ as above and tested the MAP model for different values of $D$. For a more quantitative picture, we computed the Wasserstein distance between $\Omega$ and $1600$ points obtained through direct sampling from $Z(f)\cap[0,1]^{3}$ for each learned polynomial $f$ and for different values of $\eta$. We repeated each test $3$ times. The average values are given in Figure \ref{fig:7.2a}.
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}\centering
\includegraphics[scale=0.45]{noise_0_000_result.png}
\caption{}\label{fig:7.2a}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.45]{noise_0_000_sing_result.png}
\caption{}\label{fig:7.2b}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.39]{noise_0_000_sing_count.png}
\caption{}\label{fig:7.2c}
\end{subfigure}\caption{MAP model and singularity heuristic performance (noise-free)}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}\centering
\includegraphics[scale=0.45]{noise_0_025_result.png}
\caption{}\label{fig:7.2d}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.45]{noise_0_025_sing_result.png}
\caption{}\label{fig:7.2e}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.39]{noise_0_025_sing_count.png}
\caption{}\label{fig:7.2f}
\end{subfigure}\caption{MAP model and singularity heuristic performance (noise added)}
\end{figure}
As one can see, increasing $\eta$ largely results in an increase in the Wasserstein distance. However, very low values of $\eta$ seem to perform worse than higher values. One explanation for this is that direct sampling does not produce a uniform set of samples over $Z(f)\cap[0,1]^{3}$. This can be seen for example in Figure \ref{fig:7.1b}. Nonetheless, we see that the overall minimal distance is attained with $D=3$ which is consistent with the fact that $3$ is the lowest degree of any single polynomial defining $V$.
To test the singularity heuristic, we used the above samples and learned polynomials. For each value of $D$, we selected the best
performing value of $\eta$ and applied the singularity heuristic to the $3$ sample sets corresponding to that value of $D$ and $\eta$.
Then we measured the Wasserstein distance between $\operatorname{Sing} (\Omega)$ and the set of points $\operatorname{Sing}(\Omega')$ which passed the singularity heuristic. The average
values are given in Figure \ref{fig:7.2b}. The missing results are the cases where $\operatorname{Sing}(\Omega')$ is empty. In Figure \ref{fig:7.2c} we give the numbers of points that do pass the singularity heuristic. Once again, we see that the overall minimal distance is attained with $D=3$.
To examine the effect of noise on the MAP model, we repeated the same process but added Gaussian noise with a standard deviation of $0.025$ to the original data $\Omega$. The result of this is illustrated in Figure \ref{fig:7.1c}. Samples from the learned polynomials were compared to the original noise-free data in order to test the robustness towards noise. The results are given in Figures \ref{fig:7.2d}, \ref{fig:7.2e}, and \ref{fig:7.2f}.
The values $D=1,2,3$ show similar trends with the overall minimal distances still attained at $D=3$. However, the values $D=4,5$ show significantly worse performance due to over-fitting to the added noise. This is to be expected since the models for $D=4,5$
contain many additional parameters which lead to higher model complexity.
\begin{figure}[h]
\centering
\includegraphics[scale=.25]{octane_symmetric.PNG}
\caption{Cyclooctane}\label{fig:7.3}
\end{figure}
\begin{example}
Cyclooctane is a cyclic molecule with chemical formula $(\text{CH}_{2})_{8}$ (Figure \ref{fig:7.3}). The
\textit{conformation space of cyclooctane} is the space of allowable positions for the eight carbon atoms.
The conformation space is the variety cut out by a set of $16$ equations in
$\mathbb{R}[x_{1},y_{1},z_{1},\ldots ,x_{8},y_{8},z_{8}]$:
\begin{align*}
&(x_{i}-x_{i+1})^{2}+(y_{i}-y_{i+1})^{2}+(z_{i}-z_{i+1})^{2}-2.21 = 0 \quad \text{and } \\
&(x_{i}-x_{i+2})^{2}+(y_{i}-y_{i+2})^{2}+(z_{i}-z_{i+2})^{2}-\frac{8}{3}2.21 = 0 \quad \text{for all }i\in\mathbb{Z}_{8} \ .
\end{align*}
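The $16$ defining equations can be evaluated directly. The sketch below (illustrative only; the function name is ours) computes the constraint residuals for a candidate configuration of the eight carbon atoms. For instance, a planar regular octagon satisfies the nearest-neighbour bond constraints but violates the second-neighbour ones, consistent with the ring being non-planar:

```python
import numpy as np

def cyclooctane_residuals(P):
    """Residuals of the 16 constraint equations for carbon positions P
    (an 8x3 array of coordinates); indices wrap modulo 8."""
    P = np.asarray(P, dtype=float)
    res = []
    for i in range(8):
        d1 = np.sum((P[i] - P[(i + 1) % 8]) ** 2) - 2.21
        d2 = np.sum((P[i] - P[(i + 2) % 8]) ** 2) - (8.0 / 3.0) * 2.21
        res.extend([d1, d2])
    return np.array(res)
```

A configuration lies on the conformation variety exactly when all $16$ residuals vanish.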
A dataset of $6040$ points sampled from this variety was produced by \cite{Martin2010}. Additionally, a $5$-dimensional reduction of this variety
was constructed in \cite{Martin2010} which we will refer to as the \textit{reduced conformation space of cyclooctane}. We refer the reader to \cite{Martin2010} for details on the specific dimensionality reduction map that was used. The reduction results in a compact $2$-dimensional singular surface in $\mathbb{R}^{5}$. We are not aware of any proof that the reduced conformation space is also a variety; however, it is clear from the construction that the reduced conformation space is at least a compact subanalytic set \cite{Bierstone1988}. Applying the reduction to the original $6040$ points, we obtain a set $\Omega$ of points living on the reduced conformation space of cyclooctane.
We are particularly interested in the singular locus of the reduced conformation space. Based upon geometric considerations and the
sample tests discussed in \cite{Martin2010}, the reduced conformation space of cyclooctane is believed to be topologically equivalent to two M\"obius bands and a sphere glued together along two disjoint circles. Those two circles form the boundaries of the two M\"obius bands
and constitute the singular locus of the reduced conformation space. Chemically, the conformations represented by the two circles
are called \emph{peak (P)} and \emph{saddle (S)} and correspond to maximal and minimal energy conformations, respectively, \cite{Martin2010}.
Using this information, we applied an ad hoc method to extract the set of singular points from $\Omega$, resulting in a set $\operatorname{Sing}(\Omega)$ of $233$ points shown in
Figures \ref{fig:7.4a}, \ref{fig:7.4b} and \ref{fig:7.4c} for different coordinate axes.
We applied the MAP model to the reduced conformation space for various values of $D$; the value $D=4$ seemed to perform best. From the learned polynomial, we sampled a set $\Omega'$ of $40000$ points at threshold value $\eta = 10^{-7}$. We applied the singularity heuristic at a threshold value of $\epsilon = 0.0003$ and obtained a set $\operatorname{Sing}(\Omega')$ of $235$ points. The set $\operatorname{Sing}(\Omega')$ is shown in
Figures \ref{fig:7.4d}, \ref{fig:7.4e} and \ref{fig:7.4f}.
The Wasserstein distance between $\operatorname{Sing}(\Omega)$ and $\operatorname{Sing}(\Omega')$ was found to be $0.022$. We can see that despite the presence of noise coming from the sampling, the singularity heuristic succeeds in capturing the overall geometry of the singular locus of the reduced conformation space of cyclooctane. This is further evidence that the reduced conformation space of cyclooctane is likely an algebraic variety and is likely defined by a polynomial of degree $4$.
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_4.png}
\caption{}\label{fig:7.4a}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_5.png}
\caption{}\label{fig:7.4b}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_6.png}
\caption{}\label{fig:7.4c}
\end{subfigure}
\caption{Sampled singular locus of the conformational space of $(\text{CH}_2)_8$}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{0.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_1.png}
\caption{}\label{fig:7.4d}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_2.png}
\caption{}\label{fig:7.4e}
\end{subfigure}
\begin{subfigure}{.3\textwidth}\centering
\includegraphics[scale=0.3]{sing_3.png}
\caption{}\label{fig:7.4f}
\end{subfigure}
\caption{Learned singular locus of the conformational space of $(\text{CH}_2)_8$}
\end{figure}
\end{example}
\section{Related Work}
\label{sec:related-work}
The MAP and intersected MAP model developed in Section \ref{sec:learning-variety} are inspired
by the learning method given in \cite{Breiding2018}.
The method presented in \cite{Breiding2018} aims to find the kernel of the
Vandermonde matrix $U$ in a numerically stable manner. However, this is presented as a purely numerical method without a given statistical basis.
As such, the authors focus on the case where $\Omega$ is sampled from a variety without any anomalies present.
In the case when there is no sample noise, the method of \cite{Breiding2018} coincides with the MAP model from
Section \ref{sec:learning-variety} since in this case $\lambda=0$ and the MAP solutions would be given by $E_{\lambda}=\operatorname{ker}(U^{T}U)=\operatorname{ker}(U)$.
In the case when there is noise, the authors use a noise tolerance parameter $\tau>0$.
In one instance, they compute $\operatorname{ker}(U)$ by finding the singular vectors of $U$ whose singular values are
$\leq \tau$. This is similar to the QCQP from Section \ref{sec:learning-variety} since the eigenvectors of
$U^{T}U$ are the right-singular vectors of $U$ and the eigenvalues of $U^{T}U$ are the squares of the
singular values of $U$. The difference is that we only accept the smallest eigenvalue (or singular value).
This can have a significant effect if the noise levels are high or the underlying space is only approximately a variety as shown in Example \ref{comparison} below.
Intuitively, this phenomenon appears when $V_{1}$ and $V_{2}$ do not
contain the set $\Omega$ but both are close to it, yet $V_{1}\cap V_{2}$ is not close to $\Omega$.
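The correspondence between the eigendecomposition of $U^{\operatorname{t}}U$ and the SVD of $U$ is easy to verify numerically; below is a small sketch using a random stand-in for the Vandermonde matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(size=(20, 6))        # stand-in for a Vandermonde matrix

w, V = np.linalg.eigh(U.T @ U)       # eigenvalues in ascending order
_, s, Vt = np.linalg.svd(U)          # singular values in descending order

# eigenvalues of U^t U are the squared singular values of U
assert np.allclose(np.sort(w), np.sort(s ** 2))

# the smallest-eigenvalue eigenvector agrees with the last right-singular
# vector up to sign
v_eig, v_svd = V[:, 0], Vt[-1]
assert min(np.linalg.norm(v_eig - v_svd),
           np.linalg.norm(v_eig + v_svd)) < 1e-8
```

This is the sense in which keeping only the eigenvectors of the smallest eigenvalue corresponds to keeping only the right-singular vectors of the smallest singular value.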
\begin{example}
\label{comparison}
Consider the noisy data $\Omega$ (Figure \ref{fig:8.1a}) sampled from the line
\[ L_{1}=\{(0,0,1)+t(1,1,-1)|t\in\mathbb{R}\}\ . \]
Setting $D=1$, one finds that the planes defined by the polynomials
$\ell_{1}=0.25x+0.38y+0.63z -0.63 $ and $\ell_{2}=0.23x+0.39y+0.63z-0.63$ are both in $\bigcup\limits_{\lambda < 0.01}E_{\lambda}$. Moreover, both planes are close to the line $L_{1}$. However, the intersection $Z(\ell_{1})\cap Z(\ell_{2})$ is given by
the line
\[ L_{2}=\{(0,0,1)+t(1,2.02,-1.61)|t\in\mathbb{R}\}\]
which is not close to the original line $L_{1}$; see Figure \ref{fig:8.1b}.
\begin{figure}[H]
\centering
\begin{subfigure}{0.45\textwidth}\centering
\includegraphics[scale=0.35]{noisy_L_1.png}
\caption{}\label{fig:8.1a}
\end{subfigure}
\begin{subfigure}{.45\textwidth}\centering
\includegraphics[scale=0.35]{L_1.png}
\caption{}\label{fig:8.1b}
\end{subfigure}
\caption{}
\end{figure}
\end{example}
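The direction of $L_2$ can be checked numerically: the coefficient vectors of $\ell_1$ and $\ell_2$ are the normals of the two planes, so the intersection line points along their cross product (a small sketch; the printed direction deviates slightly from $(1,2.02,-1.61)$ because the coefficients quoted above are rounded to two decimals).

```python
import numpy as np

# Numerical check of the example: the intersection of the two fitted planes
# has direction given by the cross product of their normal vectors.
n1 = np.array([0.25, 0.38, 0.63])   # coefficients of ell_1
n2 = np.array([0.23, 0.39, 0.63])   # coefficients of ell_2

direction = np.cross(n1, n2)
direction = direction / direction[0]   # scale so the first entry is 1
print(direction)   # close to (1, 2.0, -1.6): the direction of L_2, not of L_1
```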
A related approach to singularity detection is taken in \cite{Stolz19664}, which uses local cohomology to test how well different regions of the dataset can be approximated by Euclidean space. In contrast
to the algebraic-variety setting, the authors of \cite{Stolz19664} assume that the underlying geometry is a stratified space. However, their method does not rely on explicitly learning the underlying stratified space; instead, persistent cohomology \cite{Carlsson2009} is used to detect the singular regions. This yields a computational advantage, but it also captures less information about the space itself. Furthermore, the method only applies to singularities where the space fails to be a topological manifold, not to those where smoothness fails.
\section{Introduction}
\label{sec:level1}
The question of interacting particles in a random potential has
recently received a great deal of attention (see for example \cite{Kirk}).
Indeed this problem is important for the understanding of
the conduction of electrons in metals and disordered systems.
It is also very interesting from the theoretical viewpoint
since it allows one to understand the effects of interaction on
Anderson localization.
It is a common belief that in one-dimensional (1d) systems
near the ground state,
a repulsive interaction between particles leads to stronger localization
compared with the noninteracting case \cite{Tieri}.
Even if more complicated, the 2--dimensional (2d) problem
is also believed to be localized, while in
the 3--dimensional (3d) case
delocalization can take place in the presence
of interaction \cite{Kirk}. However, this problem is rather
difficult for analytical, experimental and numerical investigations
and is therefore quite far from its final resolution.
The complicated nature of the above problem can be illustrated
by the example of only two interacting particles (TIP) in a
random potential. Indeed in this case, contrary to the common
lore, even repulsive particles can create an effective
pair which is able to propagate over a distance $l_c$
much larger than its own size, which is of the order of
the one-particle localization length $l_1$ \cite{TIP}.
On the one hand, interference effects for non--interacting
particles force the particles to stay together
at a distance $\sim l_1$ even
in the repulsive case. On the other hand, the relative motion between
the two particles destroys such interference
and allows their coherent propagation over a distance $l_c \gg l_1$.
More explicitly, according to \cite{TIP},
in the quasi 1d case with $M$ transverse
channels one has
\begin{equation}
{l_{c} \over {l_1}} \sim {l_1 M} { U^2\over {32 V^2}}
\label{est}
\end{equation}
where $U$ is the strength of on site interaction and $V$ is the
one particle hopping matrix element. Here the inter-site distance
is $a=1$ and the wave vector $k_F \sim 1$.
Since $l_1 \propto M$, it follows that $l_c \propto M^3$.
In 2d
the effective number of channels due to
2d localization is $M \sim l_1$
so that $l_c \sim l_1^3$ and localization is preserved
(here and everywhere $l_1$ represents the one--particle
localization length in any dimension).
The sharp increase of $l_c$ with the number of transverse channels $M$ leads
to a straightforward possibility
of delocalization for a pair of particles in 3d
while one particle
remains localized \cite{Dreply,Imry,nonli}.
In this sense the 3d case
is much more interesting
due to the possibility of delocalization
and we qualitatively discuss it below.
One of the interesting features of pair delocalization in 3d is
that it is not due to a simple shift of the mobility edge produced
by the interaction (such possibility is indeed
not so interesting). In fact, it is possible to consider a system in which
{\it all} one-particle eigenstates are localized for {\it all}
energies. This can be
for example the 3d Lloyd model with diagonal disorder
$E_{n_1,n_2,n_3} = \tan \phi_{n_1,n_2,n_3}$ and hopping $V$ on a cubic lattice,
where $\phi_{n_1,n_2,n_3}$ are random phases homogeneously distributed in the
interval $[0,\pi]$. In this model {\it all} one-particle eigenstates are
localized for $V < V_{c} \sim 0.2$. However, two repulsive
particles with on-site
interaction $U$ can create a coupled state which is delocalized
and propagates through the lattice.
Indeed, a pair ``feels'' only a smoothed potential \cite{TIP},
which corresponds to an effective renormalization of
the hopping matrix element to
$V_{eff}$, strongly enhanced by the interaction so that it becomes
larger than $V_c$. Of course, the enhancement takes place only
for a sufficiently large one-particle localization length $l_1 \gg 1$.
Therefore, the hopping $V<V_c$ should not be far from $V_c$,
although there is, a priori, no requirement for $V$ to be
(parametrically) very close to $V_c$.
The two-particle delocalization due to interaction
takes place only for states in which the particles are at a distance $R<l_1$
from each other, while for $R \gg l_1$ the eigenstates are localized.
Such kind of situation is quite unusual since it means that the
absolutely continuous spectrum of the Schr\"{o}dinger operator,
corresponding to the delocalized pair, is {\it embedded} into
the pure point
spectrum of localized almost noninteracting particles states.
For $R \gg l_1$ the interaction between the two particles
is exponentially small and this implies a very
small coupling between the
states corresponding to these two kinds of spectra.
However, due to the quasi degeneracy of levels, even
a small coupling can lead to important modifications
of the above picture as it was discussed in \cite{nonli}.
Therefore, a direct numerical investigation of
interaction-assisted delocalization in 3d is highly desirable.
While the recent theoretical arguments and
numerical simulations in the quasi--1d case
\cite{TIP} - \cite{oppen}
definitely demonstrate the existence of enhancement
for $l_c$, no numerical simulations have been done
in the 3d case. Indeed, in 3d the basis grows as $N^6$,
where $N$ is the number of 1d unperturbed one-particle states,
and that leads to heavy numerical problems.
A similar type of interaction-assisted
delocalization can
be also realized in the kicked rotator model (KRM) \cite{KRM}
in 3d. In this case the unitary evolution
operator takes the place of Schr\"{o}dinger operator
and eigenenergies are replaced by quasi-energies.
The advantage of such models
is due to the independence of localization length on quasi-energy so
that all one-particle states in 3d are localized for
$V<V_c$ and delocalized for $V>V_c$.
Even if very efficient, numerical simulations for the
KRM in 3d become very difficult;
for two particles the situation becomes even worse
due to the $N^6$ basis growth.
One of the ways to overcome
these numerical difficulties is the following.
For 1d KRM the number of dimensions can be effectively
modelled by introducing a frequency modulation of the perturbation
parameter \cite{Dima,1and}.
The case with $\nu$ incommensurate frequencies in the kick modulation
corresponds to an effective solid state model with
dimension $d=\nu+1$. For $\nu=2$, the effective dimension
$d=3$ and Anderson transition can be efficiently investigated
\cite{1and}.
A similar approach can be used for two interacting particles, and
it allows one to gain a factor $N^4$ in numerical simulations.
In this paper we investigate the model of two interacting kicked rotators
(KR) studied in \cite{TIP}, \cite{nonli}
with frequency modulation and $\nu=2,3$.
The quantum dynamics is described by the evolution operator~:
\begin{equation}
\begin{array}{c}
{\hat S_2} = \exp \{ -i [
H_{0}({\hat n})+H_{0}({\hat n'})+U\delta_{n,n'}] \} \\
\times \exp \{-i [ V(\theta,t) + V(\theta',t) ]\}
\end{array}
\label{qmap}
\end{equation}
with ${\hat n}^{(')}=-i {\partial}/{\partial {\theta^{(')}}}$.
Here $H_{0}({ n})$ is a random function of $n$ in the interval
$[0,2\pi]$ and it describes the unperturbed spectrum of rotational phases.
The perturbation $V$ gives the coupling between the unperturbed levels
and has the form
$V(\theta,t)= k(1+\epsilon \cos\theta_1 \cos\theta_2 \cos\theta_3) \cos\theta$
with $\theta_{1,2,3}=\omega_{1,2,3} \;t$.
In the case of two modulational frequencies
($\nu=2, \omega_3=0$), as in \cite{1and},
we choose frequencies $\omega_{1,2}$ to be
incommensurate with each other and with the frequency $2\pi$
of the kicks. Following \cite{1and} we take
$\omega_1=2\pi\lambda^{-1}$, $\omega_2=2\pi\lambda^{-2}$ with
$\lambda=1.3247...$ the real root of the cubic equation
$x^3-x-1=0$. For $\nu=3$ we used the same $\omega_{1,2}$ and
$\omega_{3} = 2\pi/{\sqrt{2}}$. We also studied another
case of functional dependence of $V(\theta)$
analogous to \cite{1and} and corresponding to the Lloyd model.
All computations have been done for symmetric configurations.
According to the theoretical arguments \cite{TIP} and numerical
simulations \cite{oppen} the antisymmetric configurations corresponding to
fermions with nearby site interaction should show a similar type of behaviour.
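A minimal one-particle sketch of this dynamics (our own illustrative code, not the simulations of this paper; the lattice size, initial state and parameter values are assumptions) alternates the kick, which is diagonal in $\theta$, with the free rotation, which is diagonal in $n$, switching representations via FFTs:

```python
import numpy as np

# One-particle modulated kicked rotator, one particle only:
# S_1 = exp(-i H_0(n)) exp(-i V(theta, t)), evolved with split steps.
N = 256
rng = np.random.default_rng(1)
H0 = 2 * np.pi * rng.random(N)             # random rotational phases H_0(n)
theta = 2 * np.pi * np.arange(N) / N

k, eps = 1.0, 0.75                         # kick strength and modulation depth
lam = 1.3247                               # real root of x^3 - x - 1 = 0
w1, w2 = 2 * np.pi / lam, 2 * np.pi / lam**2

psi = np.zeros(N, complex)
psi[0] = 1.0                               # start at the level n = 0

for t in range(100):
    V = k * (1 + eps * np.cos(w1 * t) * np.cos(w2 * t)) * np.cos(theta)
    phi = np.fft.fft(psi)                  # momentum -> angle representation
    phi *= np.exp(-1j * V)                 # kick, diagonal in theta
    psi = np.fft.ifft(phi)                 # back to momentum representation
    psi *= np.exp(-1j * H0)                # free rotation between kicks

n = np.fft.fftfreq(N, 1 / N)               # momentum lattice indices
sigma = np.sum(n**2 * np.abs(psi)**2)      # second moment <n^2>
print(sigma)
```

Both factors of the map are unitary and diagonal in their own representation, so the norm of $\psi$ is preserved while $\langle n^2\rangle$ tracks localization or diffusion.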
The paper is organized as follows. In section II we
discuss the model and present the main results for $\nu=2$.
The case of $\nu=3$ is discussed in section III.
The kicked rotator model corresponding to the 3d
Lloyd model is studied in section IV.
Conclusions and discussions of results are presented in section V.
\section{The KRM model with two frequencies}
\label{sec:level2}
Before discussing the effects of interaction let us first consider the
noninteracting case $U=0$. Here the evolution operator can be
presented as a product of two operators $S_1$ describing the
independent propagation of each particle:
\begin{equation}
\begin{array}{c}
{\hat S_1} = \exp ( -i H_{0}({\hat n})) \exp (-i V(\theta,t))
\end{array}
\label{qmap1}
\end{equation}
Since $V$ depends on time in a quasiperiodic way with
$V(\theta,t)=V(\theta, \theta_1, \theta_2)$ and
$\theta_{1,2} = \omega_{1,2} t$, one can go to
the extended phase space \cite{Dima}, \cite{1and} with effective
dimension $d=3$. In this space the operator is independent on time and
has the form
\begin{equation}
\begin{array}{c}
{\hat {S_1}^{~}} = \exp ( -i H_{1}({\hat n, \hat n_1, \hat n_2}) )
\exp (-i V(\theta,\theta_1,\theta_2))
\end{array}
\label{exsp1}
\end{equation}
with $H_{1}(n, n_1, n_2) = H_0 (n)+\omega_1 n_1+\omega_2 n_2$.
Due to linearity in $n_{1,2}$ the transformation from (\ref{qmap1}) to
(\ref{exsp1}) is exact. However, the numerical simulations of
(\ref{qmap1}) are $N^2$ times more effective than for (\ref{exsp1}).
The system (\ref{exsp1}) corresponds to
an effective 3d model. Numerical simulations in \cite{1and}
showed that the variation of the coupling amplitude $V$ gives
the transition from the localized to the diffusive regime as in the usual
Anderson transition in $3d$. In \cite{1and} the form
of the kick $V$ had been chosen as
\begin{equation}
\begin{array}{c}
V(\theta,\theta_1,\theta_2)=
-2\tan^{-1}[2k(\cos\theta+\cos\theta_1+\cos\theta_2)-E]
\end{array}
\label{LL}
\end{equation}
In this case, after a mapping
similar to the one used in \cite{FISH}, the equation for
an eigenfunction with quasi-energy $\mu$ can be presented in the usual
solid-state form:
\begin{equation}
\begin{array}{c}
T_{\bf n} u_{\bf n} +k
{\sum_{\bf r}} u_{\bf n - r} =
E u_{\bf n}
\end{array}
\label{TT}
\end{equation}
where the sum is taken only over nearby sites and
$T_{\bf n}=\tan((H_1(n,n_1,n_2)-\mu)/2)$,
${\bf n} = (n, n_1, n_2) $. For random phases under the tangent
the diagonal disorder has a Lorentzian distribution
and the model becomes equivalent to the 3d Lloyd model.
While we also investigated the kick form (\ref{LL})
(see section IV), our main results have been obtained for
\begin{equation}
\begin{array}{c}
V(\theta,\theta_1,\theta_2)=k\cos \theta (1+\epsilon \cos\theta_1 \cos\theta_2)
\end{array}
\label{VKR}
\end{equation}
in the case of two frequencies $\nu=2$. According to \cite{D87},
in this case the equation for the eigenfunctions can also be reduced
to an effective solid-state Hamiltonian which, however, has a slightly more
complicated form than (\ref{TT}). We chose (\ref{VKR})
since it was numerically more efficient than (\ref{LL}).
To decrease the number of parameters we always kept $\epsilon=0.75$.
The one-particle transition as a function of coupling (hopping)
parameter $k$ in (\ref{VKR}) is presented in Fig.1.
Similar to \cite{1and} the localization length $l_1$ is
determined from the stationary probability distribution over
unperturbed levels ${\vert \psi_n \vert}^2 \sim
\exp(-2 \vert n \vert/l_1)$ while the diffusion rate is extracted
from the gaussian form of the probability distribution
$\ln W_n \sim -n^2/2Dt$ with
$D=<n^2>/t$. According to Fig.1 the transition takes place
at the critical hopping value $k_{cr} \approx 1.8$.
Below $k_{cr}$ all quasi-energy states are localized.
The independence of the transition point from the quasi-energy
is one of the useful properties of KR models.
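The two fits used above to extract $l_1$ and $D$ can be sketched as follows (a hedged illustration with synthetic data in place of the simulated distributions; the values of $l_1$ and $D$ are assumptions):

```python
import numpy as np

# Extract the localization length l_1 from |psi_n|^2 ~ exp(-2|n|/l_1),
# and the diffusion rate D from <n^2> = D t, via linear least squares.
n = np.arange(1, 200)
l1_true = 25.0
W = np.exp(-2 * n / l1_true)               # idealized localized profile

# Linear fit of log W against |n|: the slope equals -2 / l_1.
slope, _ = np.polyfit(n, np.log(W), 1)
l1_est = -2.0 / slope
print(l1_est)                              # recovers l1_true

t = np.arange(1, 1000)
D_true = 0.5
n2 = D_true * t                            # idealized diffusive growth <n^2>
D_est = np.polyfit(t, n2, 1)[0]
print(D_est)                               # recovers D_true
```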
Our main aim was the investigation of TIP effects well below the
transition point $k_{cr}$. As in \cite{nonli}
we characterized the dynamics (\ref{qmap}) by the second moments along
the diagonal line $n=n'$~:
$
\sigma_{+} (t) = \langle ( \vert n\vert +
\vert n' \vert )^2 \rangle_t /4
$
and across it
$
\sigma_{-} (t) = \langle ( \vert n \vert - \vert n' \vert )^2 \rangle_t
$.
We also computed the total probability distribution along and
across this diagonal \cite{nonli}: $P_{\pm}(n_{\pm})$
with $n_{\pm} = \vert n \pm n' \vert /2^{1/2}$. The typical case
is presented in Figs. 2,3. These pictures definitely show the
appearance of pair propagation even if the interaction is neither
attractive nor repulsive. Indeed, the pair size
is much less than the distance on which two particles are
propagating together. If we fit the probability distribution in Fig.3
as $P_{\pm} \sim \exp(-2n_{\pm}/l^{\pm})$ then we can see
that the ratio $l^+/l^- \sim 25$ ($l_c \approx l^+ \approx 95$) is quite large.
It is interesting to note that $P_+(n_+)$ at different moments of time
is closer to an exponential ($\ln P_+ \sim n_+$) than to a gaussian
($\ln P_+ \sim {n_+}^2$). The spreading along the lattice
leads only to a growth of $l^+$ with time, but the shape of the distribution does
not correspond to a diffusive process.
Another interesting feature of Fig.2 is the slow decrease of the growth rate
of $\sigma_+$ and the slow growth of $\sigma_-$. To check whether
the growth of $\sigma_+$ is completely suppressed with time,
we analysed its dependence on $t$ for different values of the interaction $U$
(Fig.4; the probability distribution is shown
in Fig.5). For $U \leq 0.5$ the growth of $\sigma_+$ is
completely suppressed while for $U=1$ the complete suppression
is a bit less evident. To understand the case
$U=1$ better, we can look at the dependence of the number of effectively
excited levels $\Delta N$ on time. To estimate $\Delta N$ we should
rewrite (\ref{qmap}) in the extended basis where the evolution operator
has the form:
\begin{equation}
\begin{array}{c}
{\hat {S_2}^{~}} = \exp ( -i [ H_{0}({\hat n})+H_{0}({\hat n'})
+\omega_1 {\hat n_1} +\omega_2 {\hat n_2}+U\delta_{n,n'}] )
\exp (-i [V(\theta,\theta_1,\theta_2)+V(\theta',\theta_1,\theta_2)])
\end{array}
\label{exsp2}
\end{equation}
Since $\Delta n_{1,2} \approx \Delta n^{(')} \approx \Delta n_+$
the number of excited levels
can be estimated as $\Delta N \approx
\Delta n_+ \Delta n_- \Delta n_1 \Delta n_2
\approx {\sigma_+}^{3/2} {\sigma_-}^{1/2}$.
Following the standard estimate based on the
uncertainty relation \cite{D87}
a delocalization can take place only if $\Delta N$ grows
faster than the first power of $t$. As our numerical data show
(see Fig. 6) the ratio $W=\Delta N /t$ remains approximately constant
or is even slightly decreasing in time. This indicates that our
case is similar to localization in $2d$ where this ratio also
remains constant for very long time.
The reason why below $k_{cr}$ the situation is similar to
$2d$ can be understood in the following way.
According to (\ref{exsp2}) the total dimension
is 4 and we have there 2 particles. Therefore, we can argue
that the dimension per particle is 2 and that below
the 3d delocalization border $k_{cr}$ our system effectively
represents two particles in an effective
dimension $d_{eff}=2$. However, two particles in 2d
are always localized but the localization length can be
exponentially large. The dependence of $\sigma_+$ on $k$ at a fixed moment of
time is presented in Fig.7 and indeed, it demonstrates
a sharp increase of $\sigma_+$ with $l_1$ as $k$ approaches
$k_{cr}$. Therefore, we conclude that for $k<k_{cr}$ our model
effectively represents TIP in 2d. Below $k_{cr}$ the pair created
by interaction remains localized but the localization length $l_c$
grows exponentially with $l_1$. To see effects of interaction
in $d_{eff} > 2$ we should study the system with three modulational
frequencies $\nu=3$. But before analysing the case
$\nu=3$ we would like to discuss the behaviour of $\sigma_-$.
Indeed, Figs. 2,4 clearly demonstrate a slow growth of $\sigma_-$
with time which means the increase of the size of the pair $\kappa$.
The results presented in Fig.8 show that $\kappa \approx {\sigma_-}^{1/2}$
grows logarithmically with time as $\kappa \approx C_L \ln t$ where
$C_L$ is a time-independent factor, with $C_L \approx 0.8$ ($U=2$)
and $C_L \approx 0.6$ ($U=1$). Of course, this logarithmic growth should
terminate after the complete localization in $\sigma_+$
but this time scale $t_c$ is very large and for $t < t_c$
we have clear logarithmic growth of $\kappa$. As discussed in \cite{nonli}
we attribute this growth to the fact that propagating in a random potential
the pair is affected by some effective noise which leads to a slow separation
of two particles. Indeed, the matrix elements of interaction $U_-$
decay exponentially fast with the growth of the pair size $n_- = \kappa$
according to a rough estimate
$U_- \sim U_s\exp(- \vert n_- \vert /l_1)$ with $U_s \sim U/{l_1}^{3/2}$.
These small but finite matrix elements lead to the growth of the
pair size $\kappa$ with slow diffusion rate $D_- \propto {U_s}^2
\exp(- 2\vert n_- \vert /l_1)$. According to the relation
$\kappa^2/t \approx D_-$ the pair size grows as $\kappa \sim l_1\ln t/2 $
\cite{nonli}, which is in agreement with the data of Fig.8.
More detailed numerical
simulations are required to verify the dependence $C_L \sim l_1$.
\section{The KRM model with three frequencies}
\label{sec:level3}
According to the above discussion the suppression of diffusive
growth of $\sigma_+$ can be explained by two factors. The first one
is that the effective dimension is $d_{eff}=2$ and localization always
takes place in 2d. Another reason is the slow logarithmic growth of
pair size.
To separate these two effects we investigated the dynamics of
TIP in the KRM with three modulational frequencies $\nu=3$. In the extended
phase space the evolution operator has the form
\begin{equation}
\begin{array}{c}
{\hat {S_2}^{~}} = \exp ( -i [ H_{0}({\hat n})+H_{0}({\hat n'})
+\omega_1 {\hat n_1} +\omega_2 {\hat n_2}+\omega_3 {\hat n_3}
+ U\delta_{n,n'}] ) \\
\exp (-i [V(\theta,\theta_1,\theta_2,\theta_3)+
V(\theta',\theta_1,\theta_2,\theta_3)])
\end{array}
\label{exsp3}
\end{equation}
with
\begin{equation}
\begin{array}{c}
V(\theta,\theta_1,\theta_2)=k\cos \theta
(1+\epsilon \cos\theta_1 \cos\theta_2 \cos\theta_3)
\end{array}
\label{VKR3}
\end{equation}
For $U=0$ we have one particle in 4d and transition to delocalization
takes place above a critical value of perturbation parameter $k_{cr}$.
According to our numerical data $k_{cr} \approx 1.15$
for $\epsilon = 0.9$ (Fig.9). Below $k_{cr}$ all eigenstates are exponentially
localized. For $U \neq 0$ the total dimension in (\ref{exsp3}) is 5 and
since we have 2 particles the effective dimension per particle
is $d_{eff} =5/2$. Since $d_{eff} > 2$ the first argument given above
becomes irrelevant and we expect TIP delocalization below $k_{cr}$.
Let us note that above $k_{cr}$ the TIP problem is no longer
interesting, since even without interaction the particles
spread along the lattice and the interaction between them does not
significantly affect their dynamics.
The numerical simulations of TIP for (\ref{exsp3}) - (\ref{VKR3})
in one-particle localized phase $k < k_{cr}$ demonstrate strong enhancement
of two particles propagation. A typical case is presented in Figs. 10,11.
According to these data the growth of $\sigma_+$ is unlimited
and TIP delocalization takes place below $k_{cr}$. The analysis
of $\sigma_-(t)$ shows that pair size
$\kappa \approx {\sigma_-}^{1/2}$ grows logarithmically with
time, similar to the case with $\nu=2$. We think that this slow growth
of $\kappa$ is responsible for the slow decrease of the pair diffusion rate
$D_+=\sigma_+/t$ with time. Due to the increase of $\kappa$
the probability to have a distance between particles
of the order of $l_1$ decreases as
$1/\kappa(t) \sim 2/(l_1\ln t)$ and therefore, we expect that the diffusion
rate
of the pair will decrease with time as $D_+ \sim D_{ef}/{\ln}^\mu t$.
Here $\mu=1$ and $D_{ef}$ is some effective ``subdiffusion'' rate.
While the above probability argument gives $\mu=1$ it is quite possible
that sticking in the region with $\kappa \gg l_1$ will give a
faster decrease of $D_+$ with a higher value of $\mu$.
As it was discussed in \cite{nonli} the growth of
pair size should also give logarithmic corrections to the
coherent localization length in the quasi-1d case
(\ref{est}) ($l_c \sim {l_1}^2/\ln^{\mu} l_1$).
Another confirmation for the delocalization transition below
one--particle threshold is given by the analysis of the
number of effectively excited states. Indeed for $\nu=3$
one has $\Delta N \approx \sigma_+^2 \sigma_-^{1/2}$.
According to our data, for sufficiently strong interaction $U$
the quantity $W=\Delta N/t$ grows approximately linearly with time
(see Fig. 12) while for small $U$ values $W$ decreases with time.
Contrary to the case $\nu=2$, this indicates that TIP delocalization
takes place for $U$ values bigger than a critical
$U_{cr} \approx 0.7$. Above this critical value the number of
excited states at a given time grows when $k$ approaches
the one--particle delocalization border $k_{cr}$ (Fig. 13).
\section{The effective Lloyd model}
\label{sec:level4}
We also studied the model (\ref{qmap}) with the kick perturbation
given by (\ref{LL}). In this case, the non interacting problem
can be reduced to the Lloyd model with pseudo--random
sites energies \cite{1and}. For $\nu=2$ and $E=0$
the one particle delocalization
border is $k_{cr} \approx 0.46$ \cite{1and}. For TIP problem
the behaviour of this model is similar to that of section II.
The strong enhancement of propagation is demonstrated in Fig. 14.
Even if the investigation of this model is more difficult for
numerical simulations, our data indicate, as in section II,
that for $\nu=2$ the suppression of $\sigma_+$ is always present.
In the same way we attribute this behaviour
to the effective two dimensionality of the model $d_{eff} =2$.
We also analyzed the tangent model (\ref{qmap}), (\ref{LL})
for the case of three frequencies $\nu=3$
($ V(\theta,\theta_1,\theta_2,\theta_3) =
-2 \tan^{-1} (2 k (\cos\theta +\cos\theta_1 +\cos\theta_2
+\cos\theta_3) ) $).
This case is similar to that discussed in section III.
The behaviour of $\sigma_{\pm}$ is presented in Fig. 15
and indicates the existence of delocalization transition
for TIP below one particle delocalization border with
$k_{cr} \approx 0.22$.
\section{Conclusions and discussions}
\label{sec:level5}
Our numerical investigations definitely demonstrate the effect
of enhancement of the localization length for TIP in a random
potential.
These results were obtained for kicked rotator models with
frequency modulation. Such an approach allows efficient modelling of the
TIP problem in an effective dimension $d_{eff} \geq 2$.
Numerical data for these models confirm the theoretical
expectations \cite{Dreply,Imry,nonli}
that TIP delocalization in $d > 2$ is possible below
one--particle delocalization border.
In agreement with \cite{nonli} we found TIP pair delocalization
and, at the same time, a logarithmic growth of the pair size.
We attribute this growth to the noise produced by the random
potential. Indeed, a pair propagating in a random potential
sees different realizations of disorder, which act like
an effective noise. Such noise induces transitions which
increase the distance between the two particles.
Even if the amplitude of these transitions is exponentially
decreasing with the two--particle distance, it gives rise
to logarithmic growth of pair size with time.
This in turn produces a subdiffusive pair propagation
$( \Delta n_+ )^2 \sim D_{ef} t/\ln^\mu t$.
We give arguments for $\mu=1$, but it is possible that due to
sticking in the region with large distance between particles
one can have $\mu > 1$.
Further investigations should be done in order to determine
the exact value of $\mu$.
Another qualitative argument for the subdiffusive propagation
is the tunneling between states in which the two particles are far
from each other (at a distance $R \gg l_1$) and states
in which particles stay within $l_1$ ($R \leq l_1$).
These states are quasi-degenerate since the spectrum
of delocalized states is embedded in the spectrum of localized ones.
In this situation even an exponentially small overlapping between
these two kinds of states becomes important and it can lead to
a subdiffusive pair propagation.
Further work should be done for a better understanding of the
final spectrum structure and the eigenfunctions properties for TIP
in 3d.
\section{Acknowledgments}
\label{sec:level6}
One of us (D.L.S) would like to thank the University of Como
for hospitality during the final stage of this work.
\section{Introduction}
With the development of artificial intelligence, text to speech (TTS) has made remarkable progress. Conditional waveform generation has migrated from traditional digital signal processing (DSP) techniques, such as Griffin-Lim \cite{griffin1984signal} and WORLD \cite{morise2016world}, to neural networks. There are mainly two types of state-of-the-art neural vocoders: AR and non-AR vocoders. Since WaveNet \cite{oord2016wavenet}, AR vocoders have made significant breakthroughs, as the AR structure improves the reconstruction of continuity in the generated waveform. On this account, the main weakness of AR models is inference speed. WaveRNN \cite{kalchbrenner2018efficient} is similar to WaveNet, but a smaller model size and a weight sparsification method are applied to improve inference speed. LPCNet \cite{2019LPCNET} achieves efficient and fast neural speech synthesis via linear prediction.
Non-AR models such as GANs have been demonstrated to be computationally efficient for conditional waveform generation. They learn the probability distribution in a parallel way, which means less training and inference time, as in MelGAN \cite{kumar2019melgan} and HiFi-GAN \cite{kong2020hifi}. A well-trained non-AR vocoder outperforms AR vocoders in terms of both speech quality and inference speed \cite{ping2020waveflow, prenger2019waveglow}. DiffWave \cite{kong2020diffwave} is also a non-AR model, which converts a noise signal into a waveform through a Markov chain. It matches WaveRNN in terms of speech quality, but the inference speed of a diffusion model is still not fast enough for real-time synthesis tasks.
Despite the advances made in neural vocoders, we find that there is a quality gap between speech generated by neural networks and the real human voice. In real TTS tasks, especially when applying GAN vocoders, we can hear vocal tremors at specific places. As demonstrated in Fig. 1(b), we can observe a fracture in the HiFi-GAN generated spectrogram, which results in a tremor in the generated voice. It has been shown \cite{morrison2021chunked} that some artifacts of generated speech, such as pitch errors and periodicity artifacts, are mainly due to the pitch and period mismatch caused by the mechanism of non-AR models.
\begin{figure}
\centering
\includegraphics[width= 5cm]{f3.pdf}
\caption{A spectrogram comparison of ground truth (a), HiFi-GAN (b) and our model (c) generated waveform}
\label{fig:my_label1}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth]{f5.pdf}
\caption{Full version of the model; the blue trapezoids are the generator and discriminator from the original HiFi-GAN V2. The green dashed boxes represent the inference and AR loops respectively, where the AR loop is applied only in training. To get the next frame of signal (the blue shadowed rectangle), we use one previous frame of output speech, real speech and one current frame of generated speech (the red shadowed rectangle), i.e. ``two frames in, one frame out''. The output audio of the AR loop is sent to the discriminators and influences the update of the generator parameters.}
\label{model}
\end{figure*}
In order to solve this problem, some hybrid models have been brought to the forefront, which combine the advantages of AR and non-AR models. One productive model is WaveFlow \cite{ping2020waveflow}, which uses an AR module to learn short-range dependencies and a non-autoregressive 2-D convolutional architecture to capture long-range dependencies. CARGAN \cite{morrison2021chunked} reduces the pitch and periodicity errors with an AR loop. However, its inference still takes 18 times longer on GPU and 3 times longer on CPU compared with HiFi-GAN.
Inspired by prior works, in this paper we aim to find a feasible solution to the lack of sequential modeling in GAN vocoders without any reduction of inference speed. We systematically present a series of experiments, then discuss how GANs can be equipped with self-attention and an AR loop to capture long-range dependencies within frames. Finally, we verify that the proposed model leads to better generation performance than the baseline with no extra inference cost.
We summarize our contributions as follows:
\begin{itemize}
\item We propose a self-attention and AR loop based post-net, which could capture long-term dependencies within waveform frames in training and will not be used in inference.
\item We use a new objective loss function called Teager Energy Operator loss to enhance the interaction of frames.
\item We indicate that our proposed post-net is robust and can be integrated into other GAN vocoders for a performance gain.
\end{itemize}
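For reference, the discrete Teager energy operator is $\Psi[x](n) = x(n)^2 - x(n-1)\,x(n+1)$; a hedged sketch of a TEO-based comparison between real and generated waveforms follows (using an $L1$ distance between TEO profiles is our illustrative choice, not necessarily the exact loss formulation used here):

```python
import numpy as np

# Discrete Teager Energy Operator: psi[x](n) = x(n)^2 - x(n-1)*x(n+1).
def teager(x):
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Illustrative TEO loss: mean absolute difference of the TEO profiles.
def teo_loss(real, fake):
    return np.mean(np.abs(teager(real) - teager(fake)))

t = np.linspace(0, 1, 16000, endpoint=False)
real = np.sin(2 * np.pi * 220 * t)                                # clean tone
fake = real + 0.01 * np.random.default_rng(0).normal(size=t.shape)  # distorted

print(teo_loss(real, real))   # 0 for identical signals
print(teo_loss(real, fake))   # positive for mismatched signals
```

For a pure sinusoid the TEO profile is nearly constant, so deviations of the generated waveform's instantaneous energy show up directly in this loss.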
\section{Related Works}
\subsection{HiFi-GAN} In HiFi-GAN, several transposed convolutional layers with diverse upsampling rates, and residual stacks, which are made up of dilated convolutional layers and normal convolutional layers, are used to transform the Mel-spectrogram into a waveform. The multi-receptive field fusion in the residual stacks can observe patterns of various lengths in parallel and improves the quality of speech. Besides, HiFi-GAN upgrades the discriminators of MelGAN to the multi-period (MPD) and multi-scale (MSD) perspective, handling portions of periodic signals and evaluating the audio signal at different levels.
\subsection{Chunked autoregressive GAN (CARGAN)} In CARGAN, an extra AR conditioning stack is proposed to constrain the generation of the waveform. The previous samples and the generated speech are summarized into a fixed-length representation, fed to an encoder and finally concatenated with the raw Mel-spectrogram feature to generate a new chunk of speech.
\subsection{Multi-resolution STFT Loss}
The multi-resolution short-time Fourier transform (STFT) loss was proposed in Parallel WaveGAN \cite{yamamoto2020parallel}. It is the sum of STFT losses computed with several sets of analysis parameters, where each term consists of a spectral convergence loss ($L_{SC}$) and a log-magnitude loss ($L_{Mag}$):
\begin{equation}
L_{SC}(\mathbf{x},\mathbf{\tilde{x}}) = \frac{\left \| \left |STFT(\mathbf{x}) \right | -\left |STFT(\mathbf{\tilde{x}}) \right | \right \| _F }{\left \| \left | STFT(\mathbf{x}) \right | \right \| _F},
\end{equation}
\begin{equation}
L_{Mag}(\mathbf{x},\mathbf{\tilde{x}}) = \frac{1}{N}\left \| \log\left | STFT(\mathbf{x}) \right | -\log\left | STFT(\mathbf{\tilde{x}}) \right | \right \|_1,
\end{equation}
where $\mathbf{x}$ and $\mathbf{\tilde{x}}$ denote the target and generated speech, and $\left \|\cdot\right \|_F$ and $\left \|\cdot\right \|_1$ denote the Frobenius and $L_1$ norms, respectively. $|STFT(\cdot)|$ and $N$ denote the STFT magnitudes and the number of elements in the magnitude, respectively. The multi-resolution STFT loss is then given by:
\begin{equation}
L_{mr\_stft}(\mathcal{G}) = \mathbb{E} _{\mathbf{x},\mathbf{\tilde{x}}}\left[\frac{1}{M}\sum_{m=1}^M(L^m_{SC}(\mathbf{x},\mathbf{\tilde{x}})+L^m_{Mag}(\mathbf{x},\mathbf{\tilde{x}}))\right],
\end{equation}
where $M$ is the number of STFT parameter groups.
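For illustration, these losses can be sketched in NumPy as follows; the helper names and the simplified Hann-window framing (no padding or centering) are our own assumptions, not the authors' implementation.

```python
import numpy as np

def stft_mag(x, fft_size, hop, win_len):
    """Magnitude STFT with a Hann window (simplified: no padding, no centering)."""
    window = np.hanning(win_len)
    frames = [np.abs(np.fft.rfft(x[s:s + win_len] * window, n=fft_size))
              for s in range(0, len(x) - win_len + 1, hop)]
    return np.array(frames)  # shape: (num_frames, fft_size // 2 + 1)

def stft_losses(x, x_tilde, cfg, eps=1e-8):
    """Spectral convergence L_SC and log-magnitude L_Mag for one STFT setting."""
    S = stft_mag(x, *cfg)
    S_t = stft_mag(x_tilde, *cfg)
    l_sc = np.linalg.norm(S - S_t) / (np.linalg.norm(S) + eps)    # Frobenius norms
    l_mag = np.mean(np.abs(np.log(S + eps) - np.log(S_t + eps)))  # L1, averaged over N elements
    return l_sc, l_mag

def multi_resolution_stft_loss(x, x_tilde, configs):
    """Average of L_SC + L_Mag over the M = len(configs) STFT parameter groups."""
    return sum(sum(stft_losses(x, x_tilde, cfg)) for cfg in configs) / len(configs)
```

With the settings reported later in the paper, `configs` would be `[(512, 50, 240), (1024, 120, 600), (2048, 240, 1200)]`, each tuple giving (frame size, frame shift, window size).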
\section{Our Model}
\subsection{GAN Architecture}
Conforming to the GAN principle \cite{goodfellow2014generative}, the generator learns the inverse mapping from acoustic features such as the Mel-spectrogram to the audio waveform, while the discriminator acts as a binary classifier that distinguishes real audio samples from the dataset as true and fake samples produced by the generator as false. At the same time, the discriminator guides the parameter updates of the generator.
\subsection{Auto-regressive Loop}
As mentioned in the introduction, most GAN vocoders suffer from the spectral fracture problem due to their non-AR mechanism: the continuity between speech signal frames is ignored, which leads to phase and periodicity mismatches in the audio samples. Although AR GAN vocoders such as CARGAN \cite{morrison2021chunked} address this problem, inference speed becomes their shortcoming. Therefore, we consider using a posterior AR loop to assist the training of the generator; it does not participate in inference.
Consider a waveform ${\mathbf{x}=\{x_1, ... , x_T\}}$, where $x_i$ is a single sample, $i$ is the time index and $T$ is the length of the waveform. The joint probability $p(\mathbf{x})$ can be factorized as a product of conditional distributions as follows:
\begin{equation}
p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t\mid x_1,...,x_{t-1}),
\label{eq7}
\end{equation}
i.e., we regard each audio sample as conditioned on all previous samples. Differing from CARGAN, we utilize the AR structure only in the time domain (as shown in Fig. \ref{model}). One frame of the previous AR-loop output $\mathbf{\tilde{x}}$, the previous frame of real speech $\mathbf{x}$ and the current frame of generated speech $\mathbf{\hat{x}}$ are concatenated and directly fed into the post-net to generate a new frame of waveform, so Eq. \ref{eq7} can be reformulated as:
\begin{equation}
p(\mathbf{\tilde{x}}) = \prod_{i=1}^{N} p(\mathbf{\tilde{x}}_{i\cdot T}\mid \mathbf{\tilde{x}}_{(i-1)\cdot T}, \mathbf{x}_{(i-1)\cdot T}, \mathbf{\hat{x}}_{i\cdot T}),
\label{eq8}
\end{equation}
where $N$ is the number of frames of the whole audio, $i$ is the index of the current waveform frame, and $T$ is the length of one frame. We process the waveform frame by frame for two reasons: first, we believe handling every individual sample is inefficient; second, the speech signal is stationary within a short time frame. The size of the input frames is a hyper-parameter which we discuss in the ablation study. This AR loop is used only in training; its output is sent to the discriminator to be classified as real or fake, and thus affects the update of the generator parameters through back-propagation.
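The frame-wise AR pass of Eq. \ref{eq8} can be sketched as follows; here `post_net` stands for any learned mapping from the concatenated context to one frame, and the zero-initialised context for the first frame is our own assumption.

```python
import numpy as np

def ar_postnet_pass(x_real, x_gen, post_net, frame_len):
    """Frame-wise AR refinement used only at training time (sketch).

    For frame i, the post-net sees the previous refined frame, the previous
    real frame (teacher forcing) and the current generated frame, and emits
    the refined frame i.
    """
    num_frames = len(x_gen) // frame_len
    x_tilde = np.zeros_like(x_gen)
    prev_tilde = np.zeros(frame_len)
    prev_real = np.zeros(frame_len)
    for i in range(num_frames):
        cur = x_gen[i * frame_len:(i + 1) * frame_len]
        context = np.concatenate([prev_tilde, prev_real, cur])
        x_tilde[i * frame_len:(i + 1) * frame_len] = post_net(context)
        prev_tilde = x_tilde[i * frame_len:(i + 1) * frame_len]
        prev_real = x_real[i * frame_len:(i + 1) * frame_len]
    return x_tilde
```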
\subsection{Post Self-attention Augmented Network}
Considering that a self-attention layer can better capture the context information in a time sequence, we design ours following the self-attention GAN \cite{zhang2019self} and the non-local net \cite{wang2018non}. As shown in Fig. 3(b), given a feature map $\mathbf{F} \in \mathbb{R}^{L\times C}$ as input of the self-attention layer, where $L$ is the time dimension and $C$ is the number of channels, the query matrix $\mathbf{Q}$, the key matrix $\mathbf{K}$, and the value matrix $\mathbf{V}$ are obtained via matrix transformations:
\begin{equation}
\mathbf{Q} = \mathbf{FW}^Q, \mathbf{K} = \mathbf{FW}^K, \mathbf{V}=\mathbf{FW}^V,
\end{equation}
where $\mathbf{W}^Q, \mathbf{W}^K, \mathbf{W}^V \in \mathbb{R}^{C \times \frac{C}{k}}$ denote the learnable weight matrices of the $1\times1$ convolutional layers. Therefore, the matrices $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ lie in $\mathbb{R}^{L \times \frac{C}{k}}$ and the attention map $\mathbf{A}$ is then computed as:
\begin{equation}
\mathbf{A} = \mathrm{softmax}(\mathbf{QK}^T), \mathbf{A} \in \mathbb{R}^{L\times L},
\end{equation}
\begin{equation}
a_{j,i} = \frac{\exp(s_{i,j})}{\sum^L_{i=1}\exp(s_{i,j})}, \quad s_{i,j}=\mathbf{Q}(x_i)\mathbf{K}(x_j)^T,
\end{equation}
where $a_{j,i}$ denotes the extent to which the model attends to the $i$-th location when synthesizing the $j$-th position $v_j$ of $\mathbf{V}$. The output of the attention layer $\mathbf{O}$ is computed as:
\begin{equation}
\mathbf{O} = (\mathbf{AV})\mathbf{W}^O, \mathbf{W}^O \in \mathbb{R}^{\frac{C}{k} \times C},
\end{equation}
with the weight matrix $\mathbf{W}^O$ realized by a $1\times1$ convolution layer with $C$ filters, the shape of $\mathbf{O}$ is restored to the original shape $L \times C$. Finally, the attention output is scaled by a learnable scalar weight $\gamma$ and added to the input, so the final output of the layer is:
\begin{equation}
\tilde{\mathbf{F}} = \gamma\mathbf{O} +\mathbf{F}.
\end{equation}
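A minimal NumPy sketch of this attention computation (our illustration, not the authors' code), with the $1\times1$ convolutions written as plain matrix products and a row-wise softmax convention:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(F, WQ, WK, WV, WO, gamma):
    """Self-attention with residual connection: F_tilde = gamma * (A V) W^O + F.

    F: (L, C) feature map; WQ, WK, WV: (C, C // k); WO: (C // k, C).
    """
    Q, K, V = F @ WQ, F @ WK, F @ WV   # each (L, C // k)
    A = softmax(Q @ K.T, axis=-1)      # (L, L) attention map, rows sum to 1
    O = (A @ V) @ WO                   # restored to shape (L, C)
    return gamma * O + F
```

With $\gamma$ initialised to zero, the layer starts as the identity mapping, which matches the residual formulation above.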
\begin{figure}[t]
\centering
\includegraphics[width=0.57\textwidth]{f4.pdf}
\caption{Illustration of the proposed post-net and the self-attention layer, with $L = 6$, $C = 6$ and $k = 2$.}
\label{fig:my_label2}
\end{figure}
We integrate the self-attention layer into an autoencoder network \cite{hinton1993autoencoders}, as illustrated in Fig. 3(a). We also tried realizing the AR loop directly with an LSTM layer of 256 hidden units, but the performance was not satisfactory. Here we adopt a traditional autoencoder with shortcut connections. The encoder consists of two one-dimensional strided convolutional layers, each with a kernel size of 31 and a stride of 2, with filter numbers increasing from 8 to 16. The self-attention layer is applied in the higher-dimensional latent space to enhance continuity across adjacent frames. The decoder, on the other hand, reverses the encoding process with deconvolution layers and restores the representation to a waveform.
\begin{table*}[t]
\caption{Comparison of HiFi-GAN V2 (base), MB MelGAN, CARGAN, our model, and MB MelGAN with post-net}
\label{tab:word_styles1}
\centering
\begin{tabular}{lccccc}
\toprule
\textbf{Model} &\textbf{Params(M)}$\downarrow$ &\textbf{MOS}$\uparrow$ &\textbf{RTF}$\downarrow$ &\textbf{BCR}$\downarrow$ &\textbf{MOS-TTS}$\uparrow$\\
\midrule
Ground Truth &$-$ &4.43$\pm{0.03}$ &$-$ &$-$ &4.48$\pm{0.03}$\\
\midrule
\midrule
\textbf{Our model} &0.92 &4.17$\pm{0.02}$ &0.385 &0.13 &3.87$\pm{0.03}$ \\
\midrule
CARGAN &25.5 &4.20$\pm{0.03}$ &1.035 &0.03 &3.90$\pm{0.03}$ \\
\midrule
HiFi-GAN (V2) &0.92 &4.12$\pm{0.05}$ &0.385 &0.3 &3.84$\pm{0.04}$ \\
\midrule
MB MelGAN &1.62 &4.00$\pm{0.04}$ &0.069 &0.27 &3.73$\pm{0.06}$ \\
\midrule
MB MelGAN with post-net &1.62 &4.07$\pm{0.05}$ &0.069 &0.15 &3.73$\pm{0.03}$ \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Training Objectives}
We replace the Mel-spectrogram loss of the original HiFi-GAN with the multi-resolution STFT loss, which enables the generator to learn speech features in the time-frequency domain \cite{wang2019neural} and prevents over-fitting to a fixed STFT representation. Moreover, we propose a new time-domain loss to enhance the temporal dependencies within frames.
\\
\noindent\textbf{Teager Energy Operator (TEO) Loss}
The TEO was proposed by H. M. Teager \cite{teager1990evidence} in work on non-linear audio signals and is widely used to detect voice events:
\begin{equation}
\Psi(x(t)) = \dot{x}(t)^2 - x(t)\,\ddot{x}(t),
\end{equation}
For a discrete speech signal, it can be rewritten as:
\begin{equation}
\Psi\left(x[n]\right) = x[n]^2 - x[n-1]\,x[n+1],
\end{equation}
where $x[n]$, $x[n-1]$ and $x[n+1]$ represent the current, previous and next audio samples, and $n$ is the discrete time index. Normally, an audio signal cannot change abruptly within a short time, and such non-continuity is captured by the difference $\Psi$. Therefore, the TEO loss can constrain signal generation in the time domain.
The TEO loss is defined as:
\begin{equation}
L_{TEO}(\mathcal{G}) = \mathbb{E}_{\mathbf{x},\mathbf{\tilde{x}}}\left[\frac{1}{N}\sum _{i=1}^N \left \| \Psi(x_i) - \Psi(\tilde{x}_i)\right \|_1 \right],
\end{equation}
where $N$ is the length of the original speech. We calculate the TEO of the original speech $\mathbf{x}$ and the generated speech $\mathbf{\tilde{x}}$ respectively, then compare them in the $L_1$ norm.
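The discrete TEO and its loss are straightforward to sketch (the helper names are ours); a useful sanity check is that the TEO of a pure sinusoid $A\sin(\Omega n)$ is the constant $A^2\sin^2(\Omega)$.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: Psi(x[n]) = x[n]^2 - x[n-1] * x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def teo_loss(x, x_tilde):
    """L1 distance between the TEO of target and generated speech."""
    return np.mean(np.abs(teager_energy(x) - teager_energy(x_tilde)))
```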
\\
\noindent\textbf{Final Loss}
Combining all objective loss functions, we obtain the joint loss function:
\begin{equation}
\begin{aligned}
\min_{\mathcal{G}}\quad & \mathbb{E}_{\mathbf{x},\mathbf{\tilde{x}}}\left [ \sum_{k=1}^{K}(\mathcal{D}_k(\mathcal{G}) -1)^2 \right]\\
+ \quad & \mathbb{E}_{\mathbf{x},\mathbf{\tilde{x}}}\left[ L_{mr\_stft}(\mathcal{G}) + \lambda L_{TEO}(\mathcal{G})\right],
\end{aligned}
\end{equation}
where $\mathcal{D}_k$ denotes the $k$-th sub-discriminator in MPD and MSD, and $L_{mr\_stft}$ stands for the multi-resolution STFT loss.
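For concreteness, the joint generator objective can be sketched as follows (our own helper, not the authors' code); `disc_outputs` stands for the $\mathcal{D}_k(\mathcal{G})$ scores of the sub-discriminators on generated audio.

```python
import numpy as np

def generator_objective(disc_outputs, l_mr_stft, l_teo, lam=50.0):
    """Least-squares adversarial term summed over the K sub-discriminators,
    plus the multi-resolution STFT and weighted TEO auxiliary losses."""
    adv = sum(float(np.mean((d - 1.0) ** 2)) for d in disc_outputs)
    return adv + l_mr_stft + lam * l_teo
```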
\section{Experiments}
\subsection{Dataset}
In our experiments, we choose the Chinese Mandarin Speech Corpus (CSMSC) for training and testing, which consists of 10,000 audio clips totalling 12 hours of recording; all audio was downsampled to 16 kHz with 16-bit PCM format. We preprocess all raw audio into 80-dimensional Mel-spectrograms with a hop size of 256.
\subsection{Experimental Setup}
During training, the Adam optimizer was adopted for both generator and discriminator, with a learning rate varying with the number of iterations. For the multi-resolution STFT loss, we applied three STFT groups ($M = 3$) with frame sizes of 512, 1024 and 2048, window sizes of 240, 600 and 1200, and frame shifts of 50, 120 and 240. We set the TEO loss weight $\lambda$ to 50.
The synthesis quality was measured by the mean opinion score (MOS), obtained using the crowdsourcing methodology described in P.808 \cite{r26}, with 95$\%$ confidence intervals. We recruited 20 native Chinese speakers and randomly selected 200 sentences for scoring. In order to clarify that the proposed model performs better on the spectral fracture problem, we propose a new subjective metric named the bad-case rate (BCR), which stands for the rate of generated speech exhibiting the spectral fracture problem. It is computed as $G_{audio}/Total_{audio}$, where ${G_{audio}}$ is the number of generated audios in which artifacts can be heard, and ${Total_{audio}}$ is the total number of evaluated audios. In our experiment, we use 200 audio examples in total for the BCR calculation.
The synthesis speed is measured on GPU and CPU following research on the efficiency of neural networks \cite{kumar2019melgan}. We choose 50 sentences to test the RTF and average over three runs. The devices are a single NVIDIA TESLA P40 GPU and an Intel Xeon CPU E5-2630 v4 @ 2.20 GHz.
\subsection{Experimental Results}
We benchmark HiFi-GAN V2, our model, CARGAN, Multi-band (MB) MelGAN, and MB MelGAN integrated with the proposed post-net in terms of MOS, number of parameters, RTF, and BCR. As shown in Tab.1, our model gains an MOS improvement over the baseline. Meanwhile, our model is much smaller than CARGAN yet has the same RTF as the baseline. With the post-net, the BCR drops to less than half that of the baseline \footnote{https://cookingbear.github.io/research/}.
For the TTS task, we also test the vocoders with FastSpeech2 \cite{ren2019fastspeech} as the acoustic model. FastSpeech2 predicts Mel-spectrograms, which are fed into the vocoder to generate waveforms. The results indicate that our model also outperforms the base models in TTS tasks.
\subsection{Ablation Study}
We design a series of ablation studies to confirm the effect of each component; the corresponding results are shown in Tab.2. Each model was trained for 600k steps in order to obtain stable results. Note that the absence of the AR loop means that we use a feed-forward structure to replace it while preserving the same post-net. As shown, every component contributes positively to synthesis quality. When we remove the AR structure, the MOS falls and the BCR rises noticeably. The self-attention layer likewise benefits the continuity between frames.
It is worth mentioning that the input size of the AR post-net is a vital factor. To evaluate this, we conducted a series of tests, where $n$-frame denotes the number of input frames of the post-net; we test 1, 2, and 4, as demonstrated in Tab.2. We find that the 2-frame input (our model) has the lowest BCR.
\begin{table}[h]
\caption{MOS and BCR results of the ablation study}
\label{tab:word_styles2}
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Model} &\textbf{MOS} &\textbf{BCR}\\
\midrule
\textbf{Our model} &4.17$\pm{0.02}$ &0.13\\
\midrule
1-frame &4.17$\pm{0.02}$ &0.16\\
\midrule
4-frame &4.12$\pm{0.02}$ &0.15\\
\midrule
w/o AR loop &4.10$\pm{0.02}$ &0.25\\
\midrule
w/o self-attention layer &4.18$\pm{0.02}$ &0.16\\
\midrule
w/o TEO loss &4.16$\pm{0.02}$ &0.15\\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusions}
In this paper, we designed a neural vocoder based on an AR loop and self-attention. With the proposed post-net integrated in the AR loop, the temporal dependency of the generated audio is improved. Our experiments show that the model outperforms the base models in both subjective and objective evaluations. Furthermore, our post-net is generic and can easily be applied to existing non-AR vocoders for potential improvements.
\bibliographystyle{IEEEtran}
\section{Introduction}
At least since the 1980s, beam equations have been a focus of attention in the mathematical modelling, analysis and numerics of complex multi-component systems, in particular mechanical systems arising in the modelling of aeroplanes and bridges, and nowadays increasingly electromechanical systems and robotics.
Several types of partial differential equations serve as, and compete as, models for such vibrating beams or strings: from the wave equation, probably one of the most commonly and most thoroughly discussed models in mathematics, to the Rayleigh beam and the Euler-Bernoulli beam, to the Timoshenko beam and even more sophisticated models. In many cases, these equations are non-linear in principle, but for the analysis and numerics of complex systems it is often useful to consider the linear or linearised versions of these equations.
In this article, we treat the linear Euler-Bernoulli beam model
\[
\rho(\zeta) \omega_{tt}(t,\zeta)
+ (EI(\zeta) \omega_{\zeta\z})_{\zeta\z}(t,\zeta)
= 0,
\quad
t \geq 0, \, \zeta \in (0,l)
\]
where $\rho(\zeta)$ denotes the \emph{mass density times cross section area} of a beam of length $l > 0$, and $E(\zeta)$ and $I(\zeta)$ its \emph{modulus of elasticity} and \emph{area moment of the cross section}, respectively.
G.~Chen and several coauthors \cite{ChenEtAl_1987}, \cite{ChenEtAl_1987a}, \cite{ChenEtAl_1989} considered three particular important situations for the Euler-Bernoulli beam:
\begin{enumerate}
\item
A single beam is stabilised by dissipative boundary feedback at one end of the beam and conservative boundary conditions at the other end \cite{ChenEtAl_1987a}.
\item
A pair of \emph{identical} beams is damped via dissipative point feedback at the joint \cite{ChenEtAl_1989}.
\item
An arbitrary long, but finite chain of serially connected beams is damped at one end of the chain \cite{ChenEtAl_1987}.
\end{enumerate}
In all these cases, the authors assume that the beam parameters $\rho$, $E$ and $I$ are constant along each of the beams.
Since then for all three cases the corresponding articles inspired further mathematical research for more general models.
E.g.\ in \cite{GuoHuang_2004} and \cite{Morgul_2001}, situations have been considered where for a single Euler-Bernoulli beam the collocated feedback at the dissipative end is perturbed, i.e.\ the feedback input cannot be expressed solely by (the traces of) the energy variables $\omega_{t}$ and $EI \omega_{\zeta\z}$ and their spatial derivatives.
\newline
Other works, e.g.\ \cite{AmmariTucsnak_2000}, \cite{GuoChan_2001}, \cite{AmmariLiuTucsnak_2002}, \cite{GuoXie_2004}, \cite{AbriolaEtAl_2017} further dealt with the problem of dissipative point feedback at the joint between two Euler-Bernoulli beams.
These works highlighted that such a feedback law is not a good choice for exponential stabilisation (or, it is not a good model for such systems), because typically they showed that asymptotic and uniform exponential stability depend on whether, for the actuation position $\xi \in (0,l)$ at which the damper acts, the fraction $\frac{\xi}{l} \in (0,1)$ lies in some subset $\tilde Q$ of $\mathbb{Q} \cap [0,1]$ which is still dense in $[0,1]$.
This is a very unsatisfactory result from an engineering perspective.
\newline
At the same time, more general networks defining the interconnection structure of Euler-Bernoulli beams and their stability properties have been considered, e.g.\ in \cite{DekoninckNicaise_2000}, \cite{MercierRegnier_2008} and \cite{MercierRegnier_2008a}.
\newline
The methods used for proving stability essentially fall into three more or less heavily used categories:
\begin{enumerate}
\item
Construction of a suitable \emph{Lyapunov function}:
This method has been applied in \cite{ChenEtAl_1987}, \cite{AmmariTucsnak_2000}.
\item
Analysis of the asymptotic behaviour of the (discrete) eigenvalues $\lambda_n$ for $n \rightarrow \infty$, see e.g.\ \cite{ChenEtAl_1987a}, \cite{DekoninckNicaise_2000}, \cite{GuoChan_2001}, \cite{GuoXie_2004}, \cite{MercierRegnier_2008}, \cite{MercierRegnier_2008a}.
\item
\emph{Frequency domain method}: Resolvent estimates on the imaginary axis based on the Gearhart-Pr\"uss-Huang Theorem, i.e.\
\[
\sup_{\beta \in \mathbb{R}} \norm{(\, \mathrm{i} \beta - \mathcal{A})^{-1}} < \infty,
\]
e.g.\ in \cite{Morgul_2001}, \cite{AmmariLiuTucsnak_2002}, \cite{GuoHuang_2004}.
\end{enumerate}
Each of these methods has its own advantages and disadvantages.
E.g.\ the first method is suitable for allowing non-linear perturbations in the dissipative boundary feedback, but the method seems to be restricted to Euler-Bernoulli beams with almost homogeneous parameters $\rho$ and $EI$, cf.\ \cite{Augner_2018+}, and it is not clear at all whether all cases for which uniform exponential stabilisation is already known can be covered by this method as well.
\newline
Even more restrictive seems the second method, which can mainly be used for homogeneous beam models, whereas the frequency domain method in general is suitable for non-homogeneous beams as well (and will be applied in this article).
At the same time, both the second and the third method are restricted to the case of linear boundary feedback, and leave stability questions concerning nonlinear feedback wide open.
\newline
Note that the papers listed above almost exclusively cover \emph{homogeneous} beam equations, i.e.\ $\rho$ and $EI$ are constant, at least on each beam.
This brings up the question:
Is homogeneity of the beams only a technical restriction for the proofs? Can the general inhomogeneous case be reduced to the special homogeneous case? Does a (sufficiently regular) inhomogeneity influence well-posedness or stability at all?
As it turns out, the last question actually consists of two separate questions (well-posedness and stability), one of which has an easy answer, while the other does not.
In fact, for dissipative systems well-posedness (in the sense of semigroup generation, i.e.\ existence, uniqueness and continuous dependence on the initial datum for the corresponding abstract Cauchy problem) is invariant under perturbation by a coercive and continuous operator, see e.g.\ \cite[Lemma 7.2.3]{JacobZwart_2012} or the much more general results in \cite{CalvertGustafson_1972}.
(For a background on strongly continuous semigroups ($C_0$-semigroups), we refer to the monograph \cite{EngelNagel_2000}.)
Does the same result hold if the term \emph{strongly continuous contraction semigroup} is replaced by \emph{uniformly exponentially stable, strongly continuous contraction semigroup}?
Unfortunately not! In fact, there are already counterexamples on finite-dimensional Hilbert spaces, and in the class of \emph{infinite-dimensional port-Hamiltonian systems} \cite{VanDerSchaftMaschke_2002}, \cite{LeGorrecZwartMaschke_2005}, \cite{JacobZwart_2012}, i.e.\ hyperbolic vector-valued PDEs on an interval (a form into which the Euler-Bernoulli beam can be rewritten), a striking counterexample is known \cite{Engel_2013}.
Although this particular counterexample does not belong to the class of Euler-Bernoulli beams, it motivates the standpoint we take in this paper: Stability of inhomogeneous beams should be addressed additionally to the question of stability for their homogeneous counterparts. Therefore, we generalise the results of \cite{ChenEtAl_1987} in this direction, which -- to our knowledge -- has not yet been achieved up to now.
G.~Chen et al.\ \cite{ChenEtAl_1987} investigated a system of Euler-Bernoulli beams which are serially interconnected (in a conservative or dissipative way), and which is damped at one of the two ends of the chain, e.g.\
{\allowdisplaybreaks[1]
\begin{align*}
\rho^j \omega_{tt}(t,\zeta)
+ (E^j I^j \omega_{\zeta\z})_{\zeta\z}(t,\zeta)
&= 0,
&&t \geq 0, \, \zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m
\\
\omega(t,0)
&= 0,
&&t \geq 0
\\
\omega_\zeta(t,0)
&= 0,
&&t \geq 0
\\
\omega(t, l^j -)
&= \omega(t, l^j +),
&&t \geq 0, \,j = 1, \ldots, m-1
\\
\omega_\zeta(t,l^j -)
&= \omega_\zeta(t, l^j +),
&&t \geq 0, \, j = 1, \ldots, m-1
\\
- (E^j I^j \omega_{\zeta\z}) (t, l^j -)
&= - (E^{j+1} I^{j+1} \omega_{\zeta\z}) (t, l^j +),
&&t \geq 0, \, j = 1, \ldots, m-1
\\
(E^j I^j \omega_{\zeta\z})_\zeta (t,l^j -)
&= (E^{j+1} I^{j+1} \omega_{\zeta\z})_\zeta (t, l^j +),
&&t \geq 0, \, j = 1, \ldots, m-1
\\
- (E^m I^m \omega_{\zeta\z})(t,L)
&=0,
&&t \geq 0
\\
(E^m I^m \omega_{\zeta\z})_\zeta(t,L)
&= \kappa \omega_t(t,L),
&&t \geq 0
\\
\omega(0,\zeta)
&= \omega_0(\zeta),
&&\zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m
\\
\omega_t(0,\zeta)
&= \omega_1(\zeta),
&&\zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m
\end{align*}
}where $0 = l^0 < l^1 < \ldots < l^m = L$ is a division of the interval $(0, L)$ for some $L > 0$ and $m \in \mathbb{N}$, and $\kappa > 0$ is some damping parameter.
Here and in the following, we write
\[
f(\zeta \pm)
:= \lim_{\omega \rightarrow \zeta \pm} f(\omega)
\]
for the one-sided limits of a function at position $\zeta$.
For the special case of a pair of beams ($m = 2$) of unit total length ($L = 1$) and a joint at position $l^1 = l \in (0,1)$, this system reads as
{\allowdisplaybreaks[1]
\begin{align*}
\rho^1 \omega_{tt}(t,\zeta) + (E^1 I^1 \omega_{\zeta\z})_{\zeta\z}(t,\zeta)
&= 0,
&&t \geq 0, \, \zeta \in (0,l)
\\
\rho^2 \omega_{tt}(t,\zeta) + (E^2 I^2 \omega_{\zeta\z})_{\zeta\z}(t,\zeta)
&= 0,
&&t \geq 0, \, \zeta \in (l,1)
\\
\omega(t,0)
&= 0,
&&t \geq 0
\\
\omega_\zeta(t,0)
&= 0,
&&t \geq 0
\\
\omega(t,l-)
&= \omega(t,l+),
&&t \geq 0
\\
\omega_\zeta(t,l-)
&= \omega_\zeta(t,l+),
&&t \geq 0
\\
- (E^1 I^1 \omega_{\zeta\z})(t,l-)
&= - (E^2 I^2 \omega_{\zeta\z})(t,l+),
&&t \geq 0
\\
(E^1 I^1 \omega_{\zeta\z})_\zeta(t,l-)
&= (E^2 I^2 \omega_{\zeta\z})_\zeta(t,l+),
&&t \geq 0
\\
(E^2 I^2 \omega_{\zeta\z})(t,1)
&= 0,
&&t \geq 0
\\
(E^2 I^2 \omega_{\zeta\z})_\zeta (t,1)
&= \kappa \omega_t(t,1),
&&t \geq 0,
\\
\omega(0,\zeta)
&= \omega_0(\zeta),
&&\zeta \in (0,1) \setminus \{l\}
\\
\omega_t(0,\zeta)
&= \omega_1(\zeta),
&&\zeta \in (0,1) \setminus \{l\}.
\end{align*}
}On first reading, the reader may keep this special case in mind, since it already includes most of the relevant features of a chain of Euler-Bernoulli beams.
The demonstration of the results in \cite{ChenEtAl_1987} is based on an energy multiplier method which provides a Lyapunov function for the Euler-Bernoulli beam system.
For example, in this case uniform exponential decay of the energy of the coupled system
\[
H(t)
:= \frac{1}{2} \sum_{j = 1}^m \int_{l^{j-1}}^{l^j} \rho^j \abs{\omega_t(t,\zeta)}^2 + E^j I^j \abs{\omega_{\zeta\z}(t,\zeta)}^2 \, \mathrm{d} \zeta
\leq M \mathrm{e}^{\eta t} H(0),
\quad
t \geq 0
\]
for some constants $M \geq 1$ and $\eta < 0$ which are independent of the initial data,
has been shown in \cite{ChenEtAl_1987} for a strictly positive damping parameter $\kappa > 0$ under the following additional structural constraints:
\begin{enumerate}
\item
On each interval $(l^{j-1}, l^j)$, the mass density times cross sectional area $\rho(\zeta) = \rho^j$, the modulus of elasticity $E(\zeta) = E^j$ and the area moment of the cross section $I(\zeta) = I^j$ are constant.
\item
The parameters $\rho^j > 0$, $E^j > 0$ and $I^j > 0$ satisfy the monotonicity constraints
\[
\rho^j \leq \rho^{j+1},
\quad
E^j I^j \geq E^{j+1} I^{j+1},
\quad
j = 1, \ldots, m.
\]
\end{enumerate}
In this paper, we are going to remove the first of these constraints, i.e.\ we show the same uniform exponential stability result for arbitrary piecewise Lipschitz-continuous and strictly positive $\rho \in \operatorname{Lip}((l^{j-1}, l^j);\mathbb{R})$ and $EI \in \operatorname{Lip}((l^{j-1}, l^j);\mathbb{R})$ (note that this implies that for each junction point $l^j$ the one-sided limits $\rho(l^j-)$, $\rho(l^j+)$ etc.\ exist) replacing the constant parameters $\rho^j$ and $E^j I^j$, but still satisfying jump conditions for possible discontinuities of $\rho$ and $EI$ at the junction points:
\begin{assumption}[Regularity of physical parameters and jump conditions]
For the stability results we will assume the following:
\begin{align}
\rho, EI &\in \operatorname{Lip}(l^{j-1},l^j)
\quad \text{and uniformly positive},
&&j = 1, \ldots, m
\tag{{\bf R}}
\label{R},
\\
\rho(l^j-) &\leq \rho(l^j+),
\quad
(EI)(l^j-) \geq (EI)(l^j+),
&&j = 1, \ldots, m.
\tag{{\bf M}}
\label{M}
\end{align}
\end{assumption}
We prove our results within the framework of $C_0$-semigroups, applying a special case (for compact resolvents) of the Arendt-Batty-Lyubich-V\~u Theorem for asymptotic stability and the Gearhart-Pr\"uss-Huang Theorem for uniform exponential stability of $C_0$-semigroups.
Moreover, we consider a slight generalisation of the situation described above by allowing dynamic boundary feedback via impedance passive finite-dimensional control systems which are all internally stable.
For well-posedness (here, in the sense that dissipativity of the interconnected system implies well-posedness with non-increasing energy for the system), we use abstract well-posedness results for so-called \emph{infinite-dimensional linear port-Hamiltonian systems} \cite{LeGorrecZwartMaschke_2004}, \cite{LeGorrecZwartMaschke_2005}, \cite{Villegas_2007}, \cite{AugnerJacob_2014} (which rely on the Lumer-Phillips Theorem), and also employ the techniques used in \cite{AugnerJacob_2014} for uniform exponential stabilisation of a (single) Euler-Bernoulli beam within the port-Hamiltonian framework to show the uniform exponential energy decay.
Note that in \cite{Augner_2020} quite general interconnection structures of infinite-dimensional port-Hamiltonian type have been considered, especially well-posedness and stability properties. The general setup considered there, however, is not enough to cover exponential stability of serially connected Euler-Bernoulli beams except for some very restrictive conditions on the dissipative structure of the interconnection.
This paper is organised as follows:
In Section \ref{well-posedness} we formally consider possible interconnection and boundary conditions leading to a dissipative system of joint Euler-Bernoulli beams.
More precisely, we give classes of boundary control and observation maps leading to an open loop impedance passive system, thus leading to a dissipative system for dissipative (linear) closure relations.
Using the abstract theory of infinite-dimensional linear port-Hamiltonian systems, this immediately implies well-posedness results in the sense that for any sufficiently regular initial data there is a unique solution with non-increasing energy and the solution depends continuously on the initial data.
In other words: The operator $\mathcal{A}$ governing the dynamics of the beam-observer-feedback-actuator system generates a strongly continuous contraction semigroup on a suitable energy state space $\mathcal{X}$.
Then, Section \ref{stability} is devoted to the discussion of stability properties.
We give sufficient conditions on the interconnection structure by means of dissipative static feedback or feedback via interconnection with an internally stable impedance passive finite-dimensional linear controller.
Here, the main results of that section and this manuscript are Theorem \ref{thm:asymptotic_stability} on asymptotic, i.e.\ strong, stability and Theorem \ref{thm:exp_stability} on uniform exponential stability, i.e.\ uniform exponential energy decay.
In Theorem \ref{exa:EB} we reformulate the previous well-posedness and stability results in the language of Euler-Bernoulli beam equations and show that our results cover the inhomogeneous beam versions of the uniform exponential stability results already presented in \cite{ChenEtAl_1987}, especially including a discussion of several relevant conservative boundary conditions on the non-dissipative end of the serial chain of Euler-Bernoulli beams, which already had been mentioned in \cite{ChenEtAl_1987}, but under the standing assumption of piecewise constant parameters $\rho$ and $EI$.
We conclude the paper with some final remarks in Section \ref{conclusion}.
\section{Well-posedness}
\label{well-posedness}
We start by discussing the well-posedness for systems of PDE modelling serially connected Euler-Bernoulli beams.
Slightly generalising the setup in \cite{ChenEtAl_1987}, we consider a decomposition $0 = l^0 < l^1 < \ldots < l^m = L$ of an interval $(0,L)$ ($L > 0$) and on each subinterval $(l^{j-1}, l^j)$ the Euler-Bernoulli beam equation
\[
\rho(\zeta) \omega_{tt}(t,\zeta) + (EI(\zeta) \omega_{\zeta\z}(t,\zeta))_{\zeta\z} = 0,
\quad
\zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m.
\tag{EB}
\label{EB}
\]
where, in contrast to the situation in \cite{ChenEtAl_1987}, we allow for spatial dependence of $\rho$ and $EI$ on $\zeta \in (l^{j-1}, l^j)$. For the moment, it is enough to let $\rho, EI \in L_\infty(l^{j-1}, l^j)$ be uniformly positive, i.e.\ there is $\varepsilon > 0$ such that
\[
\rho(\zeta), EI(\zeta) \geq \varepsilon,
\quad
\text{a.e.\ } \zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m.
\]
The total energy of this linear system is defined as the sum of kinetic energy and strain energy of all beams
\begin{align*}
H(t)
&:= \frac{1}{2} \int_0^L \big( \rho(\zeta) \abs{\omega_t(t,\zeta)}^2 + EI(\zeta) \abs{\omega_{\zeta\z}(t,\zeta)}^2 \big) \, \mathrm{d} \zeta
\\
&= \sum_{j=1}^m \frac{1}{2} \int_{l^{j-1}}^{l^j} \big( \rho(\zeta) \abs{\omega_t(t,\zeta)}^2 + EI(\zeta) \abs{\omega_{\zeta\z}(t,\zeta)}^2 \big) \, \mathrm{d} \zeta
=: \sum_{j=1}^m H^j(t).
\end{align*}
For sufficiently regular solutions of the Euler-Bernoulli beam equations on each subinterval $(l^{j-1}, l^j)$, we formally obtain the power balance for the corresponding beam as
\begin{align*}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} H^j(t)
&= \Re \int_{l^{j-1}}^{l^j} \rho(\zeta) \omega_{tt}(t,\zeta) \overline{\omega_t(t,\zeta)} + EI(\zeta) \omega_{\zeta\z}(t,\zeta) \overline{\omega_{\zeta\z t}(t,\zeta)} \, \mathrm{d} \zeta
\\
&= \Re \int_{l^{j-1}}^{l^j} - (EI \omega_{\zeta\z})_{\zeta\z}(t,\zeta) \overline{\omega_t(t,\zeta)} + EI(\zeta) \omega_{\zeta\z}(t,\zeta) \overline{\omega_{\zeta\z t}(t,\zeta)} \, \mathrm{d} \zeta
\\
&= \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,l^j-) \overline{\omega_t(t,l^j-)} + (EI \omega_{\zeta\z})(t,l^j-) \overline{\omega_{t\zeta}(t,l^j-)} \right]
\\
&\quad
- \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,l^{j-1}+) \overline{\omega_t(t,l^{j-1}+)} + (EI \omega_{\zeta\z})(t,l^{j-1}+) \overline{\omega_{t\zeta}(t,l^{j-1}+)} \right].
\end{align*}
Putting these equations together, we obtain the change of total energy of sufficiently regular solutions as
\begin{align}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} H(t)
= \sum_{j=1}^m \frac{\, \mathrm{d}}{\, \mathrm{d} t} H^j(t)
&= \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,L-) \overline{\omega_t(t,L-)} + (EI \omega_{\zeta\z})(t,L-) \overline{\omega_{t\zeta}(t,L-)} \right]
\nonumber \\
&\quad
- \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,0+) \overline{\omega_t(t,0+)} + (EI \omega_{\zeta\z})(t,0+) \overline{\omega_{t\zeta}(t,0+)} \right]
\nonumber \\
&\quad
+ \sum_{j=1}^{m-1} \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,l^j-) \overline{\omega_t(t,l^j-)} + (EI \omega_{\zeta\z})(t,l^j-) \overline{\omega_{t\zeta}(t,l^j-)} \right]
\nonumber \\
&\qquad
- \Re \left[ - (EI \omega_{\zeta\z})_\zeta(t,l^j+) \overline{\omega_t(t,l^j+)} + (EI \omega_{\zeta\z})(t,l^j+) \overline{\omega_{t\zeta}(t,l^j+)} \right].
\label{eqn:energy-balance}
\end{align}
We see that, when aiming to interconnect the beams in a dissipative way, it is most natural to impose dissipative boundary conditions (which in this context may include conservative boundary conditions as well) at the left end ($\zeta = l^0 = 0$) and the right end ($\zeta = l^m = L$), and dissipative interconnections at the junction points $\zeta = l^j$ ($j = 1, \ldots, m-1$).
To achieve the latter, at every junction point $l^j$ ($j = 1, \ldots, m-1$) we demand continuity conditions of the type
\[
\omega_t(t,l^j-) = \omega_t(t,l^j +)
\quad \text{ or } \quad
- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)
\]
and
\[
\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)
\quad \text{ or } \quad
(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+).
\]
(In \cite{Pilkey_1969} and \cite{ChenEtAl_1987} it is argued that, for dissipativity of the system, at least one of each pair of state variables which are \emph{dual} (or \emph{complementary}) to each other has to be continuous. This justifies these conditions.)
We may thus distinguish between four different cases of static interconnections:
\begin{enumerate}
\item
For $\omega_t(t,l^j-) = \omega_t(t,l^j +)$ and $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)$:
\[
\left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} \omega_t(t,l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right).
\]
\item
For $\omega_t(t,l^j-) = \omega_t(t,l^j +)$ and $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$:
\[
\left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t, l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} \omega_t(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right).
\]
\item
For $- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$ and $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)$:
\[
\left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right).
\]
\item
For $- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$ and $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$:
\[
\left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t, l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right).
\]
\end{enumerate}
In each case $K^j \in \mathbb{K}^{2 \times 2}$ denotes a matrix with positive semidefinite symmetric part $\He K^j = \frac{K^j + (K^j)^{\ast}}{2}$.
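As an elementary numerical illustration (outside the analysis), the semidefiniteness requirement on the interconnection matrices can be checked directly. The following sketch, with a hypothetical matrix $K$ standing in for some $K^j$, verifies that its symmetric part is positive semidefinite and that, as a consequence, $\Re \sp{K v}{v} \geq 0$ for every $v \in \mathbb{C}^2$, i.e.\ the interconnection only extracts energy:

```python
import numpy as np

# Hypothetical interconnection matrix K^j (not taken from the paper):
# a damping part plus a skew-symmetric (conservative) coupling part.
K = np.array([[2.0, 1.0],
              [-1.0, 0.5]])

# Symmetric (Hermitian) part He K = (K + K^*)/2.
HeK = (K + K.conj().T) / 2

# Positive semidefiniteness of He K <=> all eigenvalues >= 0.
eigs = np.linalg.eigvalsh(HeK)
assert np.all(eigs >= -1e-12)

# Consequently Re <K v, v> = <He K v, v> >= 0 for every v in C^2.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    assert np.real(np.vdot(v, K @ v)) >= -1e-12
```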
(At first reading the reader might consider the special case $m = 2$ and the particular interconnection condition
\begin{align*}
\omega_t(t,l^1+)
&= \omega_t(t,l^1-)
\\
\omega_{t\zeta}(t,l^1+)
&= \omega_{t\zeta}(t,l^1-)
\\
\left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^1-) + (EI \omega_{\zeta\z})_\zeta(t,l^1+) \\ (EI \omega_{\zeta\z})(t,l^1-) - (EI \omega_{\zeta\z})(t,l^1+) \end{array} \right)
&= - K^1 \left( \begin{array}{c} \omega_t(t,l^1) \\ \omega_{t\zeta}(t,l^1) \end{array} \right)
\end{align*}
to make the following statements easier to digest.)
Reformulating this problem more abstractly makes the theory of infinite-dimensional linear port-Hamiltonian systems, cf.\ e.g.\ \cite{LeGorrecZwartMaschke_2005}, \cite{AugnerJacob_2014}, applicable; we may therefore deduce well-posedness from the abstract theory simply by checking dissipativity.
First, we exploit that $\rho$ and $EI$ are bounded and uniformly positive and write
\begin{align*}
\tilde \H(\zeta)
&:= \left[ \begin{array}{cc} \tilde \H_1(\zeta) & \\ & \tilde \H_2(\zeta) \end{array} \right]
:= \left[ \begin{array}{cc} \rho(\zeta)^{-1} & \\ & EI(\zeta) \end{array} \right]
\in \mathbb{K}^{2 \times 2},
\\
\tilde x(t,\zeta)
&:= \left( \begin{array}{c} \tilde x_1(t,\zeta) \\ \tilde x_2(t,\zeta) \end{array} \right)
:= \left( \begin{array}{c} \rho(\zeta) \omega_t(t,\zeta) \\ \omega_{\zeta\z}(t,\zeta) \end{array} \right)
\in \mathbb{K}^2,
\end{align*}
so that the Euler-Bernoulli beam equations may be equivalently expressed as
\[
\frac{\partial}{\partial t} \tilde x(t,\zeta)
= \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] \frac{\partial^2}{\partial \zeta^2} (\tilde \H(\zeta) \tilde x(t,\zeta)),
\quad
t \geq 0, \, \zeta \in (l^{j-1}, l^j), \, j = 1, \ldots, m.
\]
With $\tilde P_2 = \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right]$, this almost looks like an infinite-dimensional linear port-Hamiltonian system of order 2 as considered in \cite{LeGorrecZwartMaschke_2005} or \cite{AugnerJacob_2014}, but due to the possible discontinuities at the junction points it is not quite one yet.
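The equivalence between this first-order reformulation and the original beam equation \eqref{EB} can be checked symbolically; the following sketch (a sanity check outside the analysis, with generic smooth functions standing in for $\omega$, $\rho$ and $EI$) verifies that the first row of $\tilde x_t = \tilde P_2 \partial_\zeta^2 (\tilde \H \tilde x)$ encodes \eqref{EB} and the second row holds identically:

```python
import sympy as sp

# Generic smooth deflection w and coefficients rho, EI (hypothetical).
t, z = sp.symbols('t zeta')
w = sp.Function('omega')(t, z)
rho = sp.Function('rho')(z)
EI = sp.Function('EI')(z)

# State and Hamiltonian density as in the text:
# x1 = rho*w_t, x2 = w_zz, H = diag(1/rho, EI).
x1 = rho * sp.diff(w, t)
x2 = sp.diff(w, z, 2)
H1x1 = x1 / rho          # = w_t
H2x2 = EI * x2           # = EI * w_zz

# First row of x_t = [[0,-1],[1,0]] d^2/dz^2 (H x):
# x1_t + (H2 x2)_zz = 0  <=>  rho*w_tt + (EI*w_zz)_zz = 0, i.e. (EB).
row1 = sp.diff(x1, t) + sp.diff(H2x2, z, 2)
eb = rho * sp.diff(w, t, 2) + sp.diff(EI * sp.diff(w, z, 2), z, 2)
assert sp.simplify(row1 - eb) == 0

# Second row: x2_t - (H1 x1)_zz = 0 holds identically (w_zzt = w_tzz).
row2 = sp.diff(x2, t) - sp.diff(H1x1, z, 2)
assert sp.simplify(row2) == 0
```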
However, performing an affine parameter transformation and writing the resulting PDE as a system of PDEs on the unit interval $(0,1)$,
\begin{align*}
\H^j(\zeta) &:= \tilde \H((1-\zeta) l^{j-1} + \zeta l^j),
\quad
x^j(t,\zeta) := \tilde x(t,(1-\zeta) l^{j-1} + \zeta l^j),
\quad
\zeta \in (0,1), \, j = 1, \ldots, m
\\
\H(\zeta) &:= \left[ \begin{array}{ccc} \H^1(\zeta) && \\ & \ddots & \\ && \H^m(\zeta) \end{array} \right]
\in \mathbb{K}^{2m \times 2m}
\\
x(t,\zeta) &:= \left( \begin{array}{c} x^1(t,\zeta) \\ \vdots \\ x^m(t,\zeta) \end{array} \right)
\in \mathbb{K}^{2m}
\\
P_2 &= \left[ \begin{array}{ccc} \frac{1}{(l^1 - l^0)^2} \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] && \\ & \ddots & \\ && \frac{1}{(l^m - l^{m-1})^2} \left[ \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right] \end{array} \right]
\in \mathbb{K}^{2m \times 2m},
\\
P_1
&= P_0 = 0 \in \mathbb{K}^{2m \times 2m}
\end{align*}
we find that the Euler-Bernoulli equations take the port-Hamiltonian form
\[
\frac{\partial}{\partial t} x(t,\zeta)
= \left( P_2 \frac{\partial^2}{\partial \zeta^2} \right) (\H(\zeta) x(t,\zeta))
=: \left( \mathfrak{A} x(t) \right)(\zeta),
\quad
t \geq 0, \, \zeta \in (0,1).
\]
Let the \emph{energy state space} $X= L_2(0,1;\mathbb{K}^{2m})$ be equipped with the energy inner product
\[
\sp{x}{y}_X
:= \int_0^1 \sp{x(\zeta)}{\H(\zeta) y(\zeta)}_{\mathbb{K}^{2m}} \, \mathrm{d} \zeta,
\quad
x, y \in X.
\]
By \cite[Theorem 4.1]{LeGorrecZwartMaschke_2005}, the operator $A = \mathfrak{A}|_{D(A)}$ defined as the restriction of $\mathfrak{A}$ to any domain $D(A) \subseteq D(\mathfrak{A}) = \{ x \in L_2(0,1;\mathbb{K}^{2m}): \, \H x \in H^2(0,1;\mathbb{K}^{2m}) \}$ of the form \[ D(A) = \{ x \in D(\mathfrak{A}): \, W \left( \begin{array}{c} (\H x)(0) \\ (\H x)(1) \end{array} \right) = 0 \} \quad \text{for some full rank matrix } W \in \mathbb{K}^{2m \times 4m} \] generates a contractive $C_0$-semigroup on $X$ if and only if $A$ is dissipative. That is, the boundary (and here: interconnection) conditions restricting $D(\mathfrak{A})$ to a linear subspace $D(A)$ need to ensure that
\[
\Re \sp{Ax}{x}_X
\leq 0,
\quad
x \in D(A).
\]
Note that thanks to the uniform boundedness and positivity of $\rho$ and $EI$, $\sp{\cdot}{\cdot}_X$ induces a norm on $X$ which is equivalent to the norm induced by the standard inner product on $L_2(0,1;\mathbb{K}^{2m})$.
A similar statement holds true whenever the system of beams is interconnected by a finite dimensional linear control system $\Sigma_c = (A_c, B_c, C_c, D_c)$, i.e.\
\begin{align*}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} x_c(t)
&= A_c x_c(t) + B_c u_c(t),
\quad
t \geq 0
\\
y_c(t)
&= C_c x_c(t) + D_c u_c(t),
\quad
t \geq 0.
\end{align*}
Here, $x_c(t)$ lies in the controller state space $X_c$, a finite dimensional Hilbert space, and the Hilbert spaces $U_c$ and $Y_c$ are the finite dimensional control and observation spaces for the control system.
To formulate this well-posedness result rigorously, we introduce the following boundary control and observation operators for the Euler-Bernoulli beam system in port-Hamiltonian form.
\begin{dfntn}[Pointwise Control and Observation Operators]
Let matrices $W_B^j, W_C^j \in \mathbb{K}^{2 \times 4}$ ($j \in \{0, m\}$) be given such that $\left[ \begin{smallmatrix} W_B^j \\ W_C^j \end{smallmatrix} \right] \in \mathbb{K}^{4 \times 4}$ is invertible.
We define the linear operators $\mathfrak{B}^0, \mathfrak{C}^0, \mathfrak{B}^m, \mathfrak{C}^m: D(\mathfrak{A}) \rightarrow \mathbb{K}^2$ by
\begin{align*}
\left( \begin{array}{c} \mathfrak{B}^0 x \\ \mathfrak{C}^0 x \end{array} \right)
&:= \left[ \begin{array}{c} W_B^0 \\ W_C^0 \end{array} \right] \left( \begin{array}{c} (\H^1 x^1)(0) \\ \tilde P_2 (\H^1 x^1)'(0) \end{array} \right)
= \left[ \begin{array}{c} W_B^0 \\ W_C^0 \end{array} \right] \left( \begin{array}{c} \omega_t(t,0) \\ (EI \omega_{\zeta\z})(t,0) \\ - (EI \omega_{\zeta\z})_\zeta(t,0) \\ \omega_{t\zeta}(t,0) \end{array} \right)
\\
\left( \begin{array}{c} \mathfrak{B}^m x \\ \mathfrak{C}^m x \end{array} \right)
&:= \left[ \begin{array}{c} W_B^m \\ W_C^m \end{array} \right] \left( \begin{array}{c} (\H^m x^m)(1) \\ - \tilde P_2 (\H^m x^m)'(1) \end{array} \right)
= \left[ \begin{array}{c} W_B^m \\ W_C^m \end{array} \right] \left( \begin{array}{c} \omega_t(t,L) \\ (EI \omega_{\zeta\z})(t,L) \\ (EI \omega_{\zeta\z})_\zeta(t,L) \\ - \omega_{t\zeta}(t,L) \end{array} \right)
\end{align*}
and for each junction $j \in \{1, \ldots, m-1\}$, depending on the chosen type of continuity conditions, we define linear maps $\mathfrak{B}_0^j$, $\mathfrak{B}^j$ and $\mathfrak{C}^j: D(\mathfrak{A}) \rightarrow \mathbb{K}^2$ as follows:
\begin{enumerate}
\item
For the case where $\omega_t$ and $\omega_{t\zeta}$ are continuous at $l^j$, i.e.\ $\omega_t(t,l^j-) = \omega_t(t,l^j+)$ and $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j+)$:
\begin{align*}
\mathfrak{B}_0^j x
&:= \left( \begin{array}{c} (\H^j_1 x^j_1)(1) - (\H^{j+1}_1 x^{j+1}_1)(0) \\ (\H^j_1 x^j_1)'(1) - (\H^{j+1}_1 x^{j+1}_1)'(0) \end{array} \right)
= \left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t,l^j+) \end{array} \right)
\\
\left( \begin{array}{c} \mathfrak{B}^j x \\ \mathfrak{C}^j x \end{array} \right)
&:= \left( \begin{array}{c} - (\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0) \\ (\H^j_2 x^j_2)(1) - (\H^{j+1}_2 x^{j+1}_2)(0) \\ \tfrac{1}{2} ((\H^j_1 x^j_1)(1) + (\H^{j+1}_1 x^{j+1}_1)(0)) \\ \tfrac{1}{2} ((\H^j_1 x^j_1)'(1) + (\H^{j+1}_1 x^{j+1}_1)'(0)) \end{array} \right)
= \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \\ \omega_t(t,l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right)
\end{align*}
where for any spatially dependent quantity $f$ we write $f(\zeta) := f(\zeta+) = f(\zeta-)$ whenever the two one-sided limits exist and coincide.
\item
For the case $\omega_t(t,l^j-) = \omega_t(t,l^j+)$ and $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$:
\begin{align*}
\mathfrak{B}_0^j x
&:= \left( \begin{array}{c} (\H^j_1 x^j_1)(1) - (\H^{j+1}_1 x^{j+1}_1)(0) \\ (\H^j_2 x^j_2)(1) - (\H^{j+1}_2 x^{j+1}_2)(0) \end{array} \right)
= \left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \end{array} \right)
\\
\left( \begin{array}{c} \mathfrak{B}^j x \\ \mathfrak{C}^j x \end{array} \right)
&:= \left( \begin{array}{c} - (\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0) \\ (\H^j_1 x^j_1)'(1) - (\H^{j+1}_1 x^{j+1}_1)'(0) \\ \tfrac{1}{2} ((\H^j_1 x^j_1)(1) + (\H^{j+1}_1 x^{j+1}_1)(0)) \\ \tfrac{1}{2} ((\H^j_2 x^j_2)(1) + (\H^{j+1}_2 x^{j+1}_2)(0)) \end{array} \right)
= \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t,l^j+) \\ \omega_t(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right)
\end{align*}
\item
For $- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$ and $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)$:
\begin{align*}
\mathfrak{B}_0^j x
&:= \left( \begin{array}{c} - (\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0) \\ (\H^j_1 x^j_1)'(1) - (\H^{j+1}_1 x^{j+1}_1)'(0) \end{array} \right)
= \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t,l^j+) \end{array} \right)
\\
\left( \begin{array}{c} \mathfrak{B}^j x \\ \mathfrak{C}^j x \end{array} \right)
&:= \left( \begin{array}{c} - (\H^j_1 x^j_1)(1) + (\H^{j+1}_1 x^{j+1}_1)(0) \\ (\H^j_2 x^j_2)(1) - (\H^{j+1}_2 x^{j+1}_2)(0) \\ - \tfrac{1}{2} ((\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0)) \\ \tfrac{1}{2} ((\H^j_1 x^j_1)'(1) + (\H^{j+1}_1 x^{j+1}_1)'(0)) \end{array} \right)
= \left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t, l^j +) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \\ - (EI \omega_{\zeta\z})_\zeta(t, l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right)
\end{align*}
\item
For $- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$ and $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$:
\begin{align*}
\mathfrak{B}_0^j x
&:= \left( \begin{array}{c} - (\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0) \\ (\H^j_2 x^j_2)(1) - (\H^{j+1}_2 x^{j+1}_2)(0) \end{array} \right)
= \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t, l^j-) + (EI \omega_{\zeta\z})_\zeta(t, l^j+) \\ (EI \omega_{\zeta\z})(t, l^j-) - (EI \omega_{\zeta\z})(t, l^j+) \end{array} \right)
\\
\left( \begin{array}{c} \mathfrak{B}^j x \\ \mathfrak{C}^j x \end{array} \right)
&:= \left( \begin{array}{c} (\H^j_1 x^j_1)(1) - (\H^{j+1}_1 x^{j+1}_1)(0) \\ (\H^j_1 x^j_1)'(1) - (\H^{j+1}_1 x^{j+1}_1)'(0) \\ - \tfrac{1}{2} ((\H^j_2 x^j_2)'(1) + (\H^{j+1}_2 x^{j+1}_2)'(0)) \\ \tfrac{1}{2} ((\H^j_2 x^j_2)(1) + (\H^{j+1}_2 x^{j+1}_2)(0)) \end{array} \right)
= \left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t, l^j+) \\ - (EI \omega_{\zeta\z})_\zeta(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right).
\end{align*}
\end{enumerate}
\end{dfntn}
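For the first interconnection type, the pairing $\Re \sp{\mathfrak{B}^j x}{\mathfrak{C}^j x}$ reproduces exactly the junction terms of the power balance \eqref{eqn:energy-balance}. This algebraic identity can be checked numerically; the following sketch (a sanity check with randomly sampled boundary traces, where $S = (EI \omega_{\zeta\z})_\zeta$, $M = EI \omega_{\zeta\z}$, $v = \omega_t$ and $w = \omega_{t\zeta}$ are hypothetical complex values) verifies it:

```python
import numpy as np

rng = np.random.default_rng(1)

def junction_power(S_m, S_p, M_m, M_p, v_m, v_p, w_m, w_p):
    """Junction terms of the power balance at l^j:
    Re[-S(l^j-) conj(v(l^j-)) + M(l^j-) conj(w(l^j-))]
    - Re[-S(l^j+) conj(v(l^j+)) + M(l^j+) conj(w(l^j+))]."""
    left = (-S_m) * np.conj(v_m) + M_m * np.conj(w_m)
    right = (-S_p) * np.conj(v_p) + M_p * np.conj(w_p)
    return np.real(left - right)

for _ in range(100):
    # Case 1: v and w continuous across the junction; S and M may jump.
    v = rng.standard_normal() + 1j * rng.standard_normal()
    w = rng.standard_normal() + 1j * rng.standard_normal()
    S_m, S_p, M_m, M_p = rng.standard_normal(4) + 1j * rng.standard_normal(4)

    Bj = np.array([-S_m + S_p, M_m - M_p])   # jump variables (B^j x)
    Cj = np.array([v, w])                    # continuous variables (C^j x)
    pairing = np.real(np.vdot(Cj, Bj))       # Re <B^j x, C^j x>

    assert np.isclose(pairing,
                      junction_power(S_m, S_p, M_m, M_p, v, v, w, w))
```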
When interconnecting with a control system, or closing the system via dissipative boundary feedback and interconnection at the junction points, it is convenient to have an \emph{impedance passive} system.
We define the linear operators $\mathfrak{A}_0$, $\mathfrak{B}$ and $\mathfrak{C}$ by
\begin{align*}
\mathfrak{A}_0
&= \mathfrak{A}|_{\ker \mathfrak{B}_0}
= \mathfrak{A}|_{\cap_{j=1}^{m-1} \ker \mathfrak{B}_0^j}:
\quad
D(\mathfrak{A}_0) \subseteq X \rightarrow X
\\
\mathfrak{B}
&= (\mathfrak{B}^j)_{j=0}^m:
\quad
D(\mathfrak{A}_0) \subseteq X \rightarrow U := \mathbb{K}^{2(m+1)}
\\
\mathfrak{C}
&= (\mathfrak{C}^j)_{j=0}^m:
\quad
D(\mathfrak{A}_0) \subseteq X \rightarrow Y := \mathbb{K}^{2(m+1)}.
\end{align*}
For this choice, the power balance \eqref{eqn:energy-balance} then implies that for sufficiently regular solutions of $x_t(t,\cdot) = \mathfrak{A}_0 x(t,\cdot)$ the energy changes as $\frac{\, \mathrm{d}}{\, \mathrm{d} t} \frac{1}{2} \norm{x(t)}_X^2 = \Re \sp{\mathfrak{A}_0 x(t)}{x(t)}_X$, where
\begin{align*}
\Re \sp{\mathfrak{A}_0 x}{x}_X
&= \sum_{j = 1}^{m-1} \Re \sp{\mathfrak{B}^j x}{\mathfrak{C}^j x}
+ \Re \sp{- \tilde P_2 (\H^1 x^1)'(0)}{(\H^1 x^1)(0)}
\\
&\quad
+ \Re \sp{\tilde P_2 (\H^m x^m)'(1)}{(\H^m x^m)(1)} ,
\quad
x \in D(\mathfrak{A}_0).
\end{align*}
From here, conditions on the matrices $\left[ \begin{array}{c} W_B^0 \\ W_C^0 \end{array} \right]$ and $\left[ \begin{array}{c} W_B^m \\ W_C^m \end{array} \right]$ may lead to impedance passivity of the system $\mathfrak{S}_0 = (\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$.
\begin{lmm}
The triplet $(\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$ is impedance passive, i.e.\
\[
\Re \sp{\mathfrak{A}_0 x}{x}
\leq \Re \sp{\mathfrak{B} x}{\mathfrak{C} x}_U
= \sum_{j=0}^m \Re \sp{\mathfrak{B}^j x}{\mathfrak{C}^j x}_{\mathbb{K}^2},
\quad
x \in D(\mathfrak{A}_0),
\]
if and only if
\[
\left[ \begin{array}{c} W_B^0 \\ W_C^0 \end{array} \right]^{\ast} \left[ \begin{array}{cc} 0 & I \\ I & 0 \end{array} \right] \left[ \begin{array}{c} W_B^0 \\ W_C^0 \end{array} \right] - \left[ \begin{array}{cc} 0 & I \\ I & 0 \end{array} \right]
\quad \text{and} \quad
\left[ \begin{array}{c} W_B^m \\ W_C^m \end{array} \right]^{\ast} \left[ \begin{array}{cc} 0 & I \\ I & 0 \end{array} \right] \left[ \begin{array}{c} W_B^m \\ W_C^m \end{array} \right] - \left[ \begin{array}{cc} 0 & I \\ I & 0 \end{array} \right]
\]
are both positive semidefinite.
\end{lmm}
\textbf{Proof.}
This can easily be proved using the energy balance \eqref{eqn:energy-balance}.
\qed
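One admissible choice can be checked numerically: taking $W_B^0 = [\, I \;\; 0 \,]$ and $W_C^0 = [\, 0 \;\; I \,]$ (a hypothetical example, not singled out in the text), the stacked matrix is the $4 \times 4$ identity and the semidefiniteness condition of the lemma holds with equality. A minimal sketch:

```python
import numpy as np

# Sigma is the 4x4 block matrix [[0, I], [I, 0]] with 2x2 identity blocks.
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
Sigma = np.block([[Z2, I2], [I2, Z2]])

# Hypothetical admissible choice: W_B^0 selects the first pair of boundary
# variables, W_C^0 the second, so the stacked matrix is I_4 (invertible).
WB0 = np.hstack([I2, Z2])
WC0 = np.hstack([Z2, I2])
W = np.vstack([WB0, WC0])

# The matrix from the lemma; here it vanishes, hence is positive semidefinite.
M = W.conj().T @ Sigma @ W - Sigma
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)
assert np.linalg.matrix_rank(W) == 4
```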
\begin{assumption}[Impedance-passivity]
The triplet $\mathfrak{S}_0 = (\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$ is impedance passive.
\end{assumption}
For closing the open-loop system $(\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$ in a linear way, we always consider finite dimensional linear controllers.
\begin{assumption}[Finite-dimensional, linear control system]
Let $X_c$ be any finite dimensional $\mathbb{K}$-Hilbert space (including the possible choice $X_c = \{0\}$ leading to static boundary feedback) and (w.l.o.g.) let $U_c = Y_c = \mathbb{K}^{2(m+1)}$ and $A_c \in \mathcal{B}(X_c)$, $B_c \in \mathcal{B}(U_c,X_c)$, $C_c \in \mathcal{B}(X_c, Y_c)$ and $D_c \in \mathcal{B}(U_c,Y_c)$ be bounded linear operators, defining the finite dimensional linear control system $\Sigma_c = (A_c, B_c, C_c, D_c)$ with dynamics
\begin{align*}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} x_c(t)
&= A_c x_c(t) + B_c u_c(t)
\\
y_c(t)
&= C_c x_c(t) + D_c u_c(t),
\quad
t \geq 0.
\end{align*}
\end{assumption}
We consider the standard feedback interconnection between the chain of Euler-Bernoulli beams and the controller which is given by
\[
\mathfrak{B} x(t)
= - y_c(t),
\quad
u_c(t)
= \mathfrak{C} x(t),
\qquad
t \geq 0.
\]
The dynamics of the interconnected system is then described by the abstract Cauchy problem
\begin{equation}
\begin{cases}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} (x,x_c)(t)
= \mathcal{A} (x,x_c)(t),
&
t \geq 0
\\
(x,x_c)(0)
= (x_0, x_{c,0})
\end{cases}
\tag{ACP}
\label{eqn:ACP}
\end{equation}
for some given initial data $(x_0,x_{c,0}) \in \mathcal{X} := X \times X_c$. Here, the product Hilbert space $\mathcal{X}$ is equipped with the inner product
\[
\sp{(x,x_c)}{(z,z_c)}_{\mathcal{X}}
= \sp{x}{z}_X + \sp{x_c}{z_c}_{X_c},
\quad
(x,x_c), (z,z_c) \in \mathcal{X} = X \times X_c
\]
and the linear operator $\mathcal{A}: D(\mathcal{A}) \subseteq \mathcal{X} \rightarrow \mathcal{X}$ is defined by
\begin{align*}
\mathcal{A} \left( \begin{array}{c} x \\ x_c \end{array} \right)
&= \left[ \begin{array}{cc} \mathfrak{A} & 0 \\ B_c \mathfrak{C} & A_c \end{array} \right] \left( \begin{array}{c} x \\ x_c \end{array} \right)
\\
D(\mathcal{A})
&= \left\{ (x,x_c) \in D(\mathfrak{A}) \times X_c: \quad \mathfrak{B}_0 x = 0, \, \mathfrak{B} x = - (C_c x_c + D_c \mathfrak{C} x) \right\}
\\
&= \left\{ (x,x_c) \in D(\mathfrak{A}_0) \times X_c: \quad \mathfrak{B} x = - (C_c x_c + D_c \mathfrak{C} x) \right\}
\end{align*}
where $\mathfrak{B}_0: D(\mathfrak{B}_0) = D(\mathfrak{A}) \subseteq X \rightarrow \mathbb{K}^{2(m-1)}$ is defined by
\[
\mathfrak{B}_0 x
= (\mathfrak{B}_0^j x^j)_{j=1}^{m-1}.
\]
\begin{prpstn}[Well-posedness of the abstract Cauchy problem]
The operator $\mathcal{A}$ generates a contractive $C_0$-semigroup $(\mathcal{T}(t))_{t \geq 0}$ on $\mathcal{X}$ if and only if $\mathcal{A}$ is dissipative, i.e.\
\[
\Re \sp{\mathcal{A} (x,x_c)}{(x,x_c)}_{\mathcal{X}}
\leq 0,
\quad
(x,x_c) \in D(\mathcal{A}).
\]
Moreover, in that case the operator $\mathcal{A}$ has compact resolvent.
In particular, this is the case if $\mathfrak{S}_0 = (\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$ and $\Sigma_c = (A_c, B_c, C_c, D_c)$ are impedance passive, i.e.\
\begin{align*}
\Re \sp{\mathfrak{A}_0 x}{x}_X
&\leq \Re \sp{\mathfrak{B} x}{\mathfrak{C} x}_U,
&&x \in D(\mathfrak{A}_0)
\\
\Re \sp{A_c x_c + B_c u_c}{x_c}_{X_c}
&\leq \Re \sp{C_c x_c + D_c u_c}{u_c}_{U_c},
&&x_c \in X_c, \, u_c \in U_c.
\end{align*}
\end{prpstn}
\textbf{Proof.}
See \cite[Theorem 3.1]{AugnerJacob_2014}.\qed
In other words, for every linear closure via static feedback or via a dynamic, linear, finite dimensional control system such that the resulting interconnected system is dissipative, and for every initial datum $(x_0, x_{c,0}) \in X \times X_c$, there is a unique strong solution $(x,x_c) \in C(\mathbb{R}_+; X \times X_c)$ of the abstract Cauchy problem with non-increasing energy
\[
H^{\mathrm{tot}}(t)
= \frac{1}{2} \norm{x(t)}_X^2 + \frac{1}{2} \norm{x_c(t)}_{X_c}^2,
\quad
t \geq 0.
\]
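The resulting contraction property can be illustrated by a finite-dimensional analogue (a hypothetical stand-in, not the beam system itself): for a generator $A = J - R$ with $J$ skew-symmetric and $R$ positive semidefinite, mimicking the dissipativity condition above, one has $\Re \sp{Ax}{x} = - \sp{Rx}{x} \leq 0$, so $t \mapsto \frac{1}{2} \norm{\mathrm{e}^{tA} x_0}^2$ is non-increasing:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical finite-dimensional stand-in for the dissipative generator:
# A = J - R with J skew-symmetric and R positive semidefinite.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.diag([0.3, 0.0])
A = J - R

# Energy along the semigroup orbit t -> e^{tA} x0 is non-increasing.
x0 = np.array([1.0, 2.0])
energies = [0.5 * np.linalg.norm(expm(tau * A) @ x0) ** 2
            for tau in np.linspace(0.0, 10.0, 50)]
assert all(e1 <= e0 + 1e-10 for e0, e1 in zip(energies, energies[1:]))
```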
\section{Stability properties}
\label{stability}
Having established well-posedness for every set of dissipative interconnection and boundary conditions, in this section we investigate stability properties under a slightly more restrictive condition on the finite dimensional control system.
Namely, we assume that $\Sigma_c$ has block diagonal form in the following sense: there are control systems $\Sigma_c^j = (A_c^j, B_c^j, C_c^j, D_c^j)$, $j = 0, 1, \ldots, m$, defined on finite-dimensional controller state spaces $X_c^j$ with $X_c = \prod_{j=0}^m X_c^j$ and input and output spaces $U_c^j = Y_c^j = \mathbb{K}^2$ with $U_c = \prod_{j=0}^m U_c^j = \prod_{j=0}^m Y_c^j = Y_c$, such that the linear operators $A_c^j \in \mathcal{B}(X_c^j)$, $B_c^j \in \mathcal{B}(U_c^j, X_c^j)$, $C_c^j \in \mathcal{B}(X_c^j, Y_c^j)$ and $D_c^j \in \mathcal{B}(U_c^j, Y_c^j)$ form the block-diagonal operators $A_c = \operatorname{diag}(A_c^j)_{j=0,\ldots,m}$, $B_c = \operatorname{diag}(B_c^j)_{j=0,\ldots,m}$ etc.
The standard feedback interconnection then reads as
\[
\mathfrak{B}^j x = - y_c^j,
\quad
\mathfrak{C}^j x = u_c^j,
\quad
j = 0, 1, \ldots, m
\]
with the dynamics of the finite dimensional controllers being modelled by
\begin{align*}
\frac{\, \mathrm{d}}{\, \mathrm{d} t} x_c^j(t)
&= A_c^j x_c^j(t) + B_c^j u_c^j(t)
\\
y_c^j(t)
&= C_c^j x_c^j(t) + D_c^j u_c^j(t),
\quad
j = 0, 1, \ldots, m, \,
t \geq 0,
\end{align*}
see Figure \ref{img:interconnection_structure}.
\begin{figure}
\centering
\includegraphics{Schaltdiagramm.pdf}
\caption{At each junction point $l^j$ the beam ends meeting there are interconnected via a finite dimensional control system $\Sigma_c^j$.}
\label{img:interconnection_structure}
\end{figure}
\begin{assumption}[Passive and internally stable controllers]
\label{assmpt:passive_stable_controller}
All control systems $\Sigma_c^j = (A_c^j, B_c^j, C_c^j, D_c^j)$ are impedance passive and for $j = 0, 1, \ldots, m$ the following conditions hold:
\begin{enumerate}
\item
\label{assmpt:passive_stable_controller-i}
There is $\kappa_j > 0$ such that
\[
\Re \sp{A_c^j x_c^j + B_c^j u_c^j}{x_c^j}_{X_c^j}
\leq \Re \sp{C_c^j x_c^j + D_c^j u_c^j}{u_c^j}_{U_c^j} - \kappa_j \abs{D_c^j u_c^j}^2,
\quad
x_c^j \in X_c^j, \, u_c^j \in U_c^j.
\]
\item
$\ker D_c^j \subseteq \ker B_c^j$.
\item
$A_c^j$ has spectrum $\sigma(A_c^j) \subseteq \mathbb{C}_0^- := \{\lambda \in \mathbb{C}: \Re \lambda < 0\}$, i.e.\ the semigroup $(\mathrm{e}^{t A_c^j})_{t \geq 0}$ is exponentially stable.
\end{enumerate}
\end{assumption}
\begin{rmrk}[Collocated input and output]
If $B_c^j = (C_c^j)^\ast$ for all $j = 0, 1, \ldots, m$, i.e.\ input and output are collocated, and each $D_c^j$ either is a diagonal matrix with non-negative entries or has positive definite symmetric part $\He(D_c^j)$, then Assumption \ref{assmpt:passive_stable_controller}.\eqref{assmpt:passive_stable_controller-i} is satisfied.
Namely, impedance passivity implies that each $A_c^j$ is dissipative, and in either case one has $\Re \sp{D_c^j u_c^j}{u_c^j}_{U_c^j} = \abs{(\He D_c^j)^{1/2} u_c^j}_{U_c^j}^2 \geq \kappa \abs{D_c^j u_c^j}^2$ for some $\kappa > 0$.
\end{rmrk}
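The inequality $\Re \sp{D_c^j u_c^j}{u_c^j} \geq \kappa \abs{D_c^j u_c^j}^2$ from the remark can be checked numerically; the following sketch uses a randomly generated hypothetical feedthrough $D$ with positive definite symmetric part and the sufficient constant $\kappa = \lambda_{\min}(\He D) / \norm{D}^2$ (our choice, obtained from $\Re \sp{Du}{u} \geq \lambda_{\min}(\He D) \abs{u}^2$ and $\abs{Du}^2 \leq \norm{D}^2 \abs{u}^2$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feedthrough D: skew part plus a positive definite part.
A = rng.standard_normal((2, 2))
D = (A - A.T) / 2 + (A @ A.T) / 2 + np.eye(2)

HeD = (D + D.T) / 2
lam_min = np.min(np.linalg.eigvalsh(HeD))
assert lam_min > 0                      # He D is positive definite

# Sufficient damping constant kappa = lam_min / ||D||_2^2.
kappa = lam_min / np.linalg.norm(D, 2) ** 2

for _ in range(200):
    u = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    lhs = np.real(np.vdot(u, D @ u))    # Re <D u, u>
    assert lhs >= kappa * np.linalg.norm(D @ u) ** 2 - 1e-12
```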
\begin{rmrk}
\label{rem:on_assmpt_3.1}
The standard feedback interconnection between the impedance passive systems $\mathfrak{S}_0$ and $\Sigma_c$ in the definition of the operator $\mathcal{A}$, combined with Assumption \ref{assmpt:passive_stable_controller}, gives
\begin{align*}
\Re \sp{\mathcal{A} (x,x_c)}{(x,x_c)}_{\mathcal{X}}
&= \Re \sp{\mathfrak{A} x}{x}_X + \Re \sp{A_c x_c + B_c \mathfrak{C} x}{x_c}_{X_c}
\\
&\leq \Re \sp{\mathfrak{B} x}{\mathfrak{C} x}_{U} + \Re \sp{\mathfrak{C} x}{- \mathfrak{B} x}_U - \sum_{j=0}^m \kappa_j \abs{D_c^j \mathfrak{C}^j x}^2
\\
&= - \sum_{j=0}^m \kappa_j \abs{D_c^j \mathfrak{C}^j x}^2,
\quad
(x,x_c) \in D(\mathcal{A}).
\end{align*}
\end{rmrk}
Under this structural assumption and some slight regularity conditions on $\rho$ and $EI$, we are able to formulate the following asymptotic stability result for linear boundary damping feedback at one of the two ends of the chain of Euler-Bernoulli beams.
\begin{thrm}[Asymptotic Stability]
\label{thm:asymptotic_stability}
Assume that $\rho$ and $EI$ have regularity \eqref{R}, the control systems satisfy Assumption \ref{assmpt:passive_stable_controller} and for the operator $\mathcal{A}$ defined in Section \ref{well-posedness} it holds
\begin{equation}
\Re \sp{\mathcal{A} (x,x_c)}{(x,x_c)}_{\mathcal{X}}
\leq - \kappa \abs{\mathfrak{R} x}_{\mathbb{K}^4}^2,
\quad
(x,x_c) \in D(\mathcal{A})
\label{eqn:diss_asymptoctic}
\end{equation}
where $\kappa > 0$ and $\mathfrak{R}: D(\mathfrak{A}) \rightarrow \mathbb{K}^4$ is one of the functions
\[
\left( \begin{array}{c} (\H^1_1 x^1_1)(0) \\ (\H^1_1 x^1_1)'(0) \\ (\H^1_2 x^1_2)(0) \\ (\H^m_2 x^m_2)'(1) \end{array} \right) \, \text{or} \,
\left( \begin{array}{c} (\H^1_1 x^1_1)(0) \\ (\H^1_1 x^1_1)'(0) \\ (\H^1_2 x^1_2)'(0) \\ (\H^m_2 x^m_2)(1) \end{array} \right) \, \text{or} \,
\left( \begin{array}{c} (\H^1_1 x^1_1)(0) \\ (\H_2^1 x_2^1)(0) \\ (\H^1_2 x^1_2)'(0) \\ (\H^m_1 x^m_1)'(1) \end{array} \right) \, \text{or} \,
\left( \begin{array}{c} (\H^1_1 x^1_1)'(0) \\ (\H^1_2 x^1_2)(0) \\ (\H^1_2 x^1_2)'(0) \\ (\H^m_1 x^m_1)(1) \end{array} \right).
\]
Then the $C_0$-semigroup $(\mathcal{T}(t))_{t \geq 0}$ generated by $\mathcal{A}$ is asymptotically stable on $\mathcal{X}$, i.e.\ for every initial value $(x_0, x_{c,0}) \in \mathcal{X}$ one has
\[
\mathcal{T}(t) (x_0, x_{c,0}) \rightarrow 0 \quad \text{as } t \rightarrow \infty.
\]
In particular, $\sigma(\mathcal{A}) = \sigma_p(\mathcal{A}) \subseteq \mathbb{C}_0^-$.
\end{thrm}
\textbf{Proof.}
Since the operator $\mathcal{A}$ has compact resolvent and generates a strongly continuous contraction semigroup on $\mathcal{X}$, by the Arendt-Batty-Lyubich-V\~u Theorem, see e.g.\ \cite[Theorem V.2.21]{EngelNagel_2000}, the semigroup is asymptotically stable if and only if
\[
\sigma_p(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R}
= \emptyset.
\]
Therefore, let $\hat x = (x, x_c) \in D(\mathcal{A})$ and $\beta \in \mathbb{R}$ be such that $\, \mathrm{i} \beta \hat x = \mathcal{A} \hat x$.
Then, in particular
\[
0
= \Re \sp{\, \mathrm{i} \beta \hat x}{\hat x}_{\mathcal{X}}
= \Re \sp{\mathcal{A} \hat x}{\hat x}_{\mathcal{X}}
\leq - \kappa \abs{\mathfrak{R} x}_{\mathbb{K}^4}^2
\leq 0,
\]
i.e.\ $\mathfrak{R} x = 0$.
Then at least three of the four components of $((\H^1 x^1)(0), (\H^1 x^1)'(0))$ are zero.
Moreover, since the systems $\mathfrak{S} = (\mathfrak{A}_0, \mathfrak{B}, \mathfrak{C})$ and $\Sigma_c = (A_c, B_c, C_c, D_c)$ are both impedance passive and interconnected by standard feedback interconnection, it follows from Remark \ref{rem:on_assmpt_3.1} that
\[
0
= \Re \sp{\, \mathrm{i} \beta \hat x}{\hat x}_{\mathcal{X}}
\leq - \sum_{j=0}^m \kappa_j \abs{D_c^j \mathfrak{C}^j x}_{U_c^j}^2,
\]
which means that $D_c^j \mathfrak{C}^j x = 0$ for $j = 0, 1, \ldots, m$ as well.
Since $\ker D_c^j \subseteq \ker B_c^j$, this implies that $B_c^j \mathfrak{C}^j x = 0$ for $j = 0, 1, \ldots, m$ as well and, therefore,
\[
\, \mathrm{i} \beta x_c^j
= A_c^j x_c^j + B_c^j \mathfrak{C}^j x
\quad \Rightarrow \quad
x_c^j
= (\, \mathrm{i} \beta - A_c^j)^{-1} B_c^j \mathfrak{C}^j x
= 0.
\]
Then again, $\mathfrak{B}^j x = - C_c^j x_c^j - D_c^j \mathfrak{C}^j x = 0$ for $j = 0, 1, \ldots m$ follows at once.
By definition of $\mathfrak{B}^j$, this means that $(\H^j x^j)(1) = (\H^{j+1} x^{j+1})(0)$ and $(\H^j x^j)'(1) = (\H^{j+1} x^{j+1})'(0)$ for every junction $j = 1, \ldots, m-1$.
First, assume that $\beta \not= 0$.
After possibly multiplying $\hat x$ by some scalar $\alpha \in \mathbb{C} \setminus \{0\}$, we may and will assume that $(\H_1^1 x^1_1)(0) \geq 0$, $(\H^1_1 x^1_1)'(0) \geq 0$, $\, \mathrm{i} \beta (\H^1_2 x^1_2)(0) \geq 0$ and $\, \mathrm{i} \beta (\H^1_2 x^1_2)'(0) \geq 0$ (since three of these terms equal zero anyway).
By \cite[Lemma 4.2.9]{Augner_2016}, then either $\H^1 x^1 = 0$ on $[0,1]$, or $(\H_1^1 x^1_1)(\zeta) > 0$, $(\H^1_1 x^1_1)'(\zeta) > 0$, $\, \mathrm{i} \beta (\H^1_2 x^1_2)(\zeta) > 0$ and $\, \mathrm{i} \beta (\H^1_2 x^1_2)'(\zeta) > 0$ for all $\zeta \in (0,1]$.
Repeating this procedure and using the continuity conditions $(\H^j x^j)(1) = (\H^{j+1} x^{j+1})(0)$ and $(\H^j x^j)'(1) = (\H^{j+1} x^{j+1})'(0)$ then shows the following:
\begin{align*}
&\H^1 x^1 \equiv 0,
\quad \Rightarrow \quad
(\H^1 x^1)(1) = (\H^1 x^1)'(1) = 0
\\
&\quad \Rightarrow \quad
(\H^2 x^2)(0) = (\H^2 x^2)'(0) = 0
\quad \Rightarrow \quad
\H^2 x^2 \equiv 0
\\
&\quad \Rightarrow \quad
\H^j x^j \equiv 0, \, j = 1, \ldots, m
\quad \Rightarrow \quad
\hat x \equiv 0 \quad \text{ is not an eigenfunction},
\\
\intertext{and}
&\H^1 x^1 \not\equiv 0
\quad \Rightarrow \quad
(\H^1_1 x^1_1)(1) > 0, \, (\H^1_1 x^1_1)'(1) > 0, \, \, \mathrm{i} \beta (\H^1_2 x^1_2)(1) > 0, \, \, \mathrm{i} \beta (\H^1_2 x^1_2)'(1) > 0
\\
&\quad \Rightarrow \quad
(\H^2_1 x^2_1)(0) > 0, \, (\H^2_1 x^2_1)'(0) > 0, \, \, \mathrm{i} \beta (\H^2_2 x^2_2)(0) > 0, \, \, \mathrm{i} \beta (\H^2_2 x^2_2)'(0) > 0
\\
&\quad \Rightarrow \quad
(\H^2_1 x^2_1)(\zeta) > 0, \, (\H^2_1 x^2_1)'(\zeta) > 0, \, \, \mathrm{i} \beta (\H^2_2 x^2_2)(\zeta) > 0, \, \, \mathrm{i} \beta (\H^2_2 x^2_2)'(\zeta) > 0
\quad \text{for all } \zeta \in [0,1]
\\
&\quad \Rightarrow \quad
(\H^j_1 x^j_1)(0) > 0, \, (\H^j_1 x^j_1)'(0) > 0, \, \, \mathrm{i} \beta (\H^j_2 x^j_2)(0) > 0, \, \, \mathrm{i} \beta (\H^j_2 x^j_2)'(0) > 0
\quad \text{for } j = 1, \ldots, m
\\
&\quad \Rightarrow \quad
(\H^j_1 x^j_1)(\zeta) > 0, \, (\H^j_1 x^j_1)'(\zeta) > 0, \, \, \mathrm{i} \beta (\H^j_2 x^j_2)(\zeta) > 0, \, \, \mathrm{i} \beta (\H^j_2 x^j_2)'(\zeta) > 0
\quad \text{for all } \zeta \in [0,1], \, j = 1, \ldots, m
\\
&\quad \Rightarrow \quad
\text{contradiction with the condition } \mathfrak{R} x = 0, \text{ hence, this case is impossible.}
\end{align*}
This excludes the situation where $\sigma_p(\mathcal{A}) \cap (\, \mathrm{i} \mathbb{R} \setminus \{0\}) \neq \emptyset$.
For the case $\beta = 0$, one has $(\H^j x^j)'' = 0$ on $(0,1)$ for each $j = 1, \ldots, m$, but as above one sees that $(\H^{j+1} x^{j+1})(0) = (\H^j x^j)(1)$ and $(\H^{j+1} x^{j+1})'(0) = (\H^j x^j)'(1)$, so that
\[
((\H^j x^j)(\zeta), (\H^j x^j)'(\zeta))
= ((\H^1 x^1)(0), (\H^1 x^1)'(0))
= ((\H^m x^m)(1), (\H^m x^m)'(1)),
\quad
\zeta \in [0,1], \, j = 1, \ldots, m,
\]
but by the condition $\mathfrak{R} x = 0$ this can only be the case if $x = 0$, and then as before also $x_c = 0$.
Hence, we have shown that $\sigma(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R} = \emptyset$.
Asymptotic stability follows by the Arendt-Batty-Lyubich-V\~u Theorem.\qed
\newline
In the language of \cite{AugnerJacob_2014} and as a by-product, the proof of Theorem \ref{thm:asymptotic_stability} shows the following:
\begin{lmm}
Let $\mathfrak{S}_0$, $\Sigma_c$ and $\mathfrak{R} = (\mathfrak{R}_1, \ldots, \mathfrak{R}_4)$ be as in Theorem \ref{thm:asymptotic_stability}.
\begin{enumerate}
\item
The pair $(\mathfrak{A}_0, \mathfrak{R})$ has property \textrm{ASP}, i.e.\
\[
\forall \beta \in \mathbb{R}: \quad
\ker(\mathfrak{A}_0 - \, \mathrm{i} \beta) \cap \ker \mathfrak{R} = \{0\}.
\]
\item
Let $\mathfrak{R}' = (\mathfrak{R}_1, \mathfrak{R}_2, \mathfrak{R}_3): D(\mathfrak{R}') = D(\mathfrak{R}) \rightarrow \mathbb{K}^3$, then
\[
\forall \beta \in \mathbb{R} \setminus \{0\}: \quad
\ker(\mathfrak{A}_0 - \, \mathrm{i} \beta) \cap \ker \mathfrak{R}' = \{0\}.
\]
In particular, in the situation of Theorem \ref{thm:asymptotic_stability}, we have that $\sigma_p(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R} \subset \{0\}$, if the condition \eqref{eqn:diss_asymptoctic} is weakened to
\[
\Re \sp{\mathcal{A} (x,x_c)}{(x,x_c)}_{\mathcal{X}}
\leq - \kappa \abs{\mathfrak{R}' x}_{\mathbb{K}^3}^2,
\quad
x \in D(\mathcal{A}).
\]
\end{enumerate}
\end{lmm}
Asymptotic stability may be seen as the first step towards exponential stability.
To even ensure uniform exponential stability, we adjust the setting by
\begin{enumerate}
\item
imposing further restrictions on the boundary conditions at the left ($\zeta = \zeta_0 = 0$) and right end ($\zeta = \zeta_m = L$) of the chain of beams and
\item
imposing monotonicity conditions on the parameter functions $\rho$ and $EI$ at the junction points $l^j \in (0,L)$, $j = 1, \ldots, m-1$, see condition \eqref{M} resp.\ condition \eqref{M'} below.
\end{enumerate}
Condition \eqref{M} in the abstract port-Hamiltonian setting reads as
\begin{assumption}[Jump conditions in port-Hamiltonian formulation]
We assume that
\[
\H^{j+1}(0) - \H^j(1)
\quad \text{is positive semidefinite, for all }
j = 1, \ldots, m-1.
\tag{{\bf M'}}
\label{M'}
\]
\end{assumption}
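For the Euler-Bernoulli chain, condition \eqref{M'} can be made concrete: with the energy variables used here, $\H^j$ is the diagonal matrix $\operatorname{diag}\, (1/\rho, EI)$ along the $j$-th beam, so the jump matrix $\H^{j+1}(0) - \H^j(1)$ is diagonal and positive semidefiniteness reduces to $\rho(l^j+) \leq \rho(l^j-)$ and $(EI)(l^j+) \geq (EI)(l^j-)$. The following small numerical sketch (NumPy assumed; the parameter values are purely illustrative) checks this reduction:

```python
import numpy as np

def jump_matrix(rho_minus, EI_minus, rho_plus, EI_plus):
    """H^{j+1}(0) - H^j(1) for the diagonal Hamiltonian density
    H = diag(1/rho, EI); the parameter values are hypothetical."""
    H_right = np.diag([1.0 / rho_plus, EI_plus])   # H^{j+1}(0)
    H_left = np.diag([1.0 / rho_minus, EI_minus])  # H^j(1)
    return H_right - H_left

def is_psd(M, tol=1e-12):
    """Positive semidefiniteness via the (real) eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# rho non-increasing and EI non-decreasing across the junction: (M') holds
print(is_psd(jump_matrix(2.0, 1.0, 1.0, 3.0)))  # True
# rho increasing across the junction: (M') fails
print(is_psd(jump_matrix(1.0, 1.0, 2.0, 3.0)))  # False
```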
\begin{thrm}[Uniform Exponential Stability]
\label{thm:exp_stability}
Let Assumption \ref{assmpt:passive_stable_controller} and conditions \eqref{R} and \eqref{M'} be satisfied and the operator $\mathcal{A}$ be such that
\[
\Re \sp{\mathcal{A} (x,x_c)}{(x,x_c)}_{\mathcal{X}}
\leq - \kappa \abs{\mathfrak{R} x}_{\mathbb{K}^5}^2
\]
for some constant $\kappa > 0$ with $\mathfrak{R}: D(\mathfrak{A}) \rightarrow \mathbb{K}^5$ of the form
\[
\mathfrak{R} x
= \left( \begin{array}{c} (\H^1 x^1)(0) \\ (\H^1_1 x^1_1)'(0) \, \text{or} \, (\H^1_2 x^1_2)'(0) \\ (\H^m_1 x^m_1)(1) \, \text{or} \, (\H^m_2 x^m_2)'(1) \\ (\H^m_1 x^m_1)'(1) \, \text{or} \, (\H^m_2 x^m_2)(1) \end{array} \right).
\]
Then, the $C_0$-semigroup $(\mathcal{T}(t))_{t \geq 0}$ generated by $\mathcal{A}$ is uniformly exponentially stable if and only if $(\mathcal{T}(t))_{t \geq 0}$ is asymptotically stable.\newline
In particular, this is the case if $\sigma(A_c^j) \subseteq \mathbb{C}_0^-$ for $j = 0, 1, \ldots, m$ and
\[
\mathfrak{R} x
= \left( \begin{array}{c} (\H^1 x^1)(0) \\ (\H^1_1 x^1_1)'(0) \\ (\H^m_1 x^m_1)(1) \, \text{or} \, (\H^m_2 x^m_2)'(1) \\ (\H^m_2 x^m_2)(1) \end{array} \right)
\quad \text{or} \quad
\mathfrak{R} x
= \left( \begin{array}{c} (\H^1 x^1)(0) \\ (\H^1_2 x^1_2)'(0) \\ (\H^m_1 x^m_1)(1) \, \text{or} \, (\H^m_2 x^m_2)'(1) \\ (\H^m_1 x^m_1)'(1) \end{array} \right).
\]
\end{thrm}
\textbf{Proof.}
First, we show that the pair $(\mathfrak{A}_0, (\mathfrak{R}, \mathfrak{B}))$ has property AIEP as introduced in Definition 2.8 of \cite{AugnerJacob_2014}, i.e.\ for every sequence $(x_n, \beta_n)_{n \geq 1} \subseteq D(\mathfrak{A}_0) \times \mathbb{R}$ with $\sup_{n \in \mathbb{N}} \norm{x_n}_X < \infty$ and $\abs{\beta_n} \rightarrow \infty$ and such that
\[
\, \mathrm{i} \beta_n x_n - \mathfrak{A}_0 x_n
\rightarrow 0 \quad \text{in } X,
\quad
\mathfrak{R} x_n
\rightarrow 0 \quad \text{in } \mathbb{K}^5,
\quad
\mathfrak{B} x_n
\rightarrow 0 \quad \text{in } \mathbb{K}^{2m},
\]
it follows that $x_n \rightarrow 0$ in $X$.
To this end, for $j = 1, \ldots, m$ fix functions $q^j \in C^2([0,1];\mathbb{R})$ which will be specified below.
By the proof of \cite[Proposition 4.3.19]{Augner_2016}, for every sequence $(x_n, \beta_n)_{n \geq 1} \subseteq D(\mathfrak{A}_0) \times \mathbb{R}$ with $\sup_{n \in \mathbb{N}} \norm{x_n}_X < \infty$, $\abs{\beta_n} \rightarrow \infty$ and $\mathfrak{A}_0 x_n - \, \mathrm{i} \beta_n x_n \rightarrow 0$ in $X$, it holds that
\begin{align*}
&\Re \sp{x^j_n}{(2 (q^j)' \H^j - q^j (\H^j)') x^j_n}_{L_2}
\\
&= o(1)
- 2 \Re \left[ \sp{- (\H^j_2 x^j_{n,2})'(\zeta)}{\frac{\, \mathrm{i} q^j(\zeta)}{\beta_n} (\H^j_1 x^j_{n,1})'(\zeta)}_{\mathbb{K}} \right]_0^1
+ \left[ \sp{x^j_n(\zeta)}{q^j \H^j(\zeta) x^j_n(\zeta)}_{\mathbb{K}} \right]_0^1
\\
&\quad
+ \left[ \Re \sp{- (\H^j_2 x^j_{n,2})'(\zeta)}{\frac{\, \mathrm{i} (q^j)'(\zeta)}{\beta_n} (\H^j_1 x^j_{n,1})(\zeta)}_{\mathbb{K}} \right]_0^1
- \left[ \Re \sp{-(\H^j_2 x^j_{n,2})(\zeta)}{\frac{\, \mathrm{i} (q^j)'}{\beta_n} (\H^j_1 x^j_{n,1})'(\zeta)}_{\mathbb{K}} \right]_0^1
\end{align*}
where $o(1)$ denotes further terms which tend to zero as $n \rightarrow \infty$.
(Note that there is a typo in \cite[equation (4.27)]{Augner_2016}: There actually should be a minus sign in front of the last line of the equation.)
Summing up these equalities and writing $Q(\zeta) = \operatorname{diag}\, (q^j(\zeta))_{j=1}^m$, we find that
{\allowdisplaybreaks[1]
\begin{align*}
&\Re \sp{x_n}{(2 Q' \H - Q \H') x_n}_{L_2}
\\
&= o(1)
- 2 \Re \left[ \sp{- (\H^m_2 x^m_{n,2})'(1)}{\frac{\, \mathrm{i} q^m(1)}{\beta_n} (\H^m_1 x^m_{n,1})'(1)}_{\mathbb{K}} \right]
+ \sp{x^m_n(1)}{(q^m \H^m)(1) x^m_n(1)}_{\mathbb{K}}
\\
&\quad
+ \Re \sp{- (\H^m_2 x^m_{n,2})'(1)}{\frac{\, \mathrm{i} (q^m)'(1)}{\beta_n} (\H^m_1 x^m_{n,1})(1)}_{\mathbb{K}}
- \Re \sp{-(\H^m_2 x^m_{n,2})(1)}{\frac{\, \mathrm{i} (q^m)'(1)}{\beta_n} (\H^m_1 x^m_{n,1})'(1)}_{\mathbb{K}}
\\
&\quad
+ 2 \Re \sp{- (\H^1_2 x^1_{n,2})'(0)}{\frac{\, \mathrm{i} q^1(0)}{\beta_n} (\H^1_1 x^1_{n,1})'(0)}_{\mathbb{K}}
- \sp{x^1_n(0)}{q^1 \H^1(0) x^1_n(0)}_{\mathbb{K}}
\\
&\quad
- \Re \sp{- (\H^1_2 x^1_{n,2})'(0)}{\frac{\, \mathrm{i} (q^1)'(0)}{\beta_n} (\H^1_1 x^1_{n,1})(0)}_{\mathbb{K}}
+ \Re \sp{-(\H^1_2 x^1_{n,2})(0)}{\frac{\, \mathrm{i} (q^1)'(0)}{\beta_n} (\H^1_1 x^1_{n,1})'(0)}_{\mathbb{K}}
\\
&\quad
- 2 \sum_{j=1}^{m-1} \Re \left[ \sp{- (\H^j_2 x^j_{n,2})'(1)}{\frac{\, \mathrm{i} q^j(1)}{\beta_n} (\H^j_1 x^j_{n,1})'(1)}_{\mathbb{K}} - \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}} \right]
\\
&\quad
+ 2 \sum_{j=1}^{m-1} \left[ \sp{x^j_n(1)}{q^j \H^j(1) x^j_n(1)}_{\mathbb{K}} - \sp{x^{j+1}_n(0)}{q^{j+1} \H^{j+1}(0) x^{j+1}_n(0)}_{\mathbb{K}} \right]
\\
&\quad
- 2 \sum_{j=1}^{m-1} \left[ \Re \sp{(\H^j_2 x^j_{n,2})'(1)}{\frac{\, \mathrm{i} (q^j)'(1)}{\beta_n} (\H^j_1 x^j_{n,1})(1)}_{\mathbb{K}} \hspace{-0.15cm} - \Re \sp{(\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})(0)}_{\mathbb{K}} \right]
\\
&\quad
+ 2 \sum_{j=1}^{m-1} \left[ \Re \sp{(\H^j_2 x^j_{n,2})(1)}{\frac{\, \mathrm{i} (q^j)'(1)}{\beta_n} (\H^j_1 x^j_{n,1})'(1)}_{\mathbb{K}} \hspace{-0.15cm} - \Re \sp{(\H^{j+1}_2 x^{j+1}_{n,2})(0)}{\frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}} \right].
\end{align*}
}We show that for a suitable choice of the functions $q^j$ and under the imposed dissipation and monotonicity assumptions, all the terms on the right hand side vanish as $n \rightarrow \infty$.
To this end, observe that $\, \mathrm{i} \beta_n x_n - \mathfrak{A}_0 x_n \rightarrow 0$ in $X$ and $(x_n)_{n \geq 1} \subseteq X$ is bounded whereas $\abs{\beta_n} \rightarrow \infty$, so that $\frac{\mathfrak{A}_0 x_n}{\beta_n} = \frac{P_2 (\H x_n)''}{\beta_n}$ remains bounded in $X$, and thus $\frac{(\H x_n)''}{\beta_n}$ is bounded in $X$.
From \cite[Lemma 2.15]{AugnerJacob_2014}, an interpolation argument and embedding theorems for Sobolev Slobodetskii spaces into $C^k$-spaces we conclude that
\begin{equation}
\frac{\H x_n}{\beta_n} \rightarrow 0,
\quad
\text{in } C^1([0,1];\mathbb{K}^{2m}).
\label{eqn:C^1-o(1)}
\end{equation}
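In more detail, a sketch of this step (assuming uniform interpolation constants on $(0,1)$): by the standard interpolation inequality $\norm{u}_{H^s(0,1)} \leq C \norm{u}_{L_2}^{1-s/2} \norm{u}_{H^2}^{s/2}$ for $0 \leq s \leq 2$ and the Sobolev embedding $H^s(0,1) \hookrightarrow C^1([0,1])$ for $s > 3/2$, the bounds $\norm{\H x_n}_{L_2} \leq C$ and $\norm{(\H x_n)''}_{L_2} \leq C (1 + \abs{\beta_n})$ yield, for any fixed $s \in (3/2, 2)$,
\[
\norm{\frac{\H x_n}{\beta_n}}_{C^1([0,1])}
\leq \frac{C'}{\abs{\beta_n}} \norm{\H x_n}_{L_2}^{1-s/2} \norm{\H x_n}_{H^2}^{s/2}
\leq C'' \abs{\beta_n}^{s/2 - 1}
\longrightarrow 0,
\quad
n \rightarrow \infty.
\]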
Only from here on, we additionally assume that $\mathfrak{R} x_n \rightarrow 0$.
\begin{enumerate}
\item
The terms
\[
- 2 \Re \left[ \sp{- (\H^m_2 x^m_{n,2})'(1)}{\frac{\, \mathrm{i} q^m(1)}{\beta_n} (\H^m_1 x^m_{n,1})'(1)}_{\mathbb{K}} \right]
+ \sp{x^m_n(1)}{(q^m \H^m)(1) x^m_n(1)}_{\mathbb{K}}
\]
not only tend to zero, but equal zero for all $n \in \mathbb{N}$, if we choose the function $q^m$ such that
\begin{equation}
q^m(1) = 0.
\label{cond:q-1}
\end{equation}
\item
For the third term, we have
\begin{align*}
\abs{ \Re \sp{- (\H^m_2 x^m_{n,2})'(1)}{\frac{\, \mathrm{i} (q^m)'(1)}{\beta_n} (\H^m_1 x^m_{n,1})(1)}_{\mathbb{K}} }
&\leq |(q^m)'(1)| \frac{1}{\abs{\beta_n}} \abs{(\H^m_2 x^m_{n,2})'(1)} \abs{(\H^m_1 x^m_{n,1})(1)}
\rightarrow 0
\end{align*}
since, by $\mathfrak{R} x_n \rightarrow 0$ and the definition of $\mathfrak{R}$, at least one of the terms $\abs{(\H^m_2 x^m_{n,2})'(1)}$ or $\abs{(\H^m_1 x^m_{n,1})(1)}$ tends to zero, and by \eqref{eqn:C^1-o(1)} in any case $\frac{1}{\abs{\beta_n}} \abs{(\H^m_1 x^m_{n,1})(1)}$ and $\frac{1}{\abs{\beta_n}} \abs{(\H^m_2 x^m_{n,2})'(1)}$ tend to zero.
\item
Similarly,
\begin{align*}
\abs{ \Re \sp{-(\H^m_2 x^m_{n,2})(1)}{\frac{\, \mathrm{i} (q^m)'(1)}{\beta_n} (\H^m_1 x^m_{n,1})'(1)}_{\mathbb{K}} }
&\leq |(q^m)'(1)| \frac{1}{\abs{\beta_n}} \abs{(\H^m_2 x^m_{n,2})(1)} \abs{(\H^m_1 x^m_{n,1})'(1)}
\rightarrow 0
\end{align*}
as $\abs{(\H^m_2 x^m_{n,2})(1)}$ or $\abs{(\H^m_1 x^m_{n,1})'(1)}$ tends to zero due to $\mathfrak{R} x_n \rightarrow 0$ and the definition of $\mathfrak{R}$.
\item
Since at least one of the terms $(\H^1_2 x^1_{n,2})'(0)$ or $(\H^1_1 x^1_{n,1})'(0)$ tends to zero as well, by the same reasoning
\[
\abs{ \Re \sp{- (\H^1_2 x^1_{n,2})'(0)}{\frac{\, \mathrm{i} q^1(0)}{\beta_n} (\H^1_1 x^1_{n,1})'(0)}_{\mathbb{K}} }
\rightarrow 0.
\]
\item
For the terms
\begin{align*}
- \left[ \sp{x^1_n(0)}{q^1 \H^1(0) x^1_n(0)}_{\mathbb{K}} \right]
- \Re \sp{- (\H^1_2 x^1_{n,2})'(0)}{\frac{\, \mathrm{i} (q^1)'(0)}{\beta_n} (\H^1_1 x^1_{n,1})(0)}_{\mathbb{K}}
\\
- \Re \sp{-(\H^1_2 x^1_{n,2})(0)}{\frac{\, \mathrm{i} (q^1)'(0)}{\beta_n} (\H^1_1 x^1_{n,1})'(0)}_{\mathbb{K}}
&\rightarrow 0
\end{align*}
we use that $(\H^1 x^1_n)(0) \rightarrow 0$ since $\mathfrak{R} x_n \rightarrow 0$, together with \eqref{eqn:C^1-o(1)} for the terms carrying the factor $\frac{1}{\beta_n}$.\newline
This concludes the discussion of the boundary terms at the left and right end of the chain of beams.
For the junction points, we use the continuity and jump conditions, and obtain the following.
\item
For each $j = 1, \ldots, m-1$ we have that
\begin{align*}
&+ \Re \sp{- (\H^j_2 x^j_{n,2})'(1)}{\frac{\, \mathrm{i} (q^j)'(1)}{\beta_n} (\H^j_1 x^j_{n,1})(1)}_{\mathbb{K}}
- \Re \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})(0)}_{\mathbb{K}}
\\
&- \Re \sp{- (\H^j_2 x^j_{n,2})(1)}{\frac{\, \mathrm{i} (q^j)'(1)}{\beta_n} (\H^j_1 x^j_{n,1})'(1)}_{\mathbb{K}} + \Re \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})(0)}{\frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}}
\\
&=
\Re \sp{ \mathfrak{B}^j x_n}{ \frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} \mathfrak{C}^j x_n}
\end{align*}
if we choose the functions $q^j$ and $q^{j+1}$ such that their derivatives at the junction points match, i.e.\
\begin{equation}
(q^j)'(1)
= (q^{j+1})'(0),
\quad
j = 1, \ldots, m-1.
\label{cond:q-2}
\end{equation}
One may easily check this claim for all the cases of the control and observation maps $\mathfrak{B}^j$ and $\mathfrak{C}^j$ we allowed for.
Moreover, since $\mathfrak{B} x_n \rightarrow 0$ by assumption and $\frac{\mathfrak{C} x_n}{\beta_n} \rightarrow 0$ by \eqref{eqn:C^1-o(1)}, we find that these terms tend to zero as $n \rightarrow \infty$ as well:
\[
\Re \sp{ \mathfrak{B}^j x_n}{ \frac{\, \mathrm{i} (q^{j+1})'(0)}{\beta_n} \mathfrak{C}^j x_n}
\rightarrow 0.
\]
\item
To handle the terms
\begin{align*}
&\Re \sp{- (\H^j_2 x^j_{n,2})'(1)}{\frac{\, \mathrm{i} q^j(1)}{\beta_n} (\H^j_1 x^j_{n,1})'(1)}_{\mathbb{K}}
- \Re \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}}
\\
&= \Re \sp{- (\H^j_2 x^j_{n,2})'(1) + (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}}
\\
&\quad
- \Re \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} \left( (\H^{j+1}_1 x^{j+1}_{n,1})'(0) - (\H^j_1 x^j_{n,1})'(1) \right)}_{\mathbb{K}}
\\
&= \Re \sp{- (\H^j_1 x^j_{n,1})'(1) + (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}_{\mathbb{K}}
\\
&\quad
- \Re \sp{- (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} \left( (\H^{j+1}_2 x^{j+1}_{n,2})'(0) - (\H^j_2 x^j_{n,2})'(1) \right)}_{\mathbb{K}},
\end{align*}
we note that both terms $(\H^j_1 x^j_{n,1})'(1) - (\H_1^{j+1} x^{j+1}_{n,1})'(0)$ and $(\H^j_2 x^j_{n,2})'(1) - (\H_2^{j+1} x^{j+1}_{n,2})'(0)$ tend to zero (if the corresponding term is a component of $\mathfrak{B}^j x_n$) or even equal zero (if the corresponding term constitutes a component of $\mathfrak{B}_0^j x_n$).
As $\frac{1}{\beta_n} (\H_1^{j+1} x^{j+1}_{n,1})'(0)$ and $\frac{1}{\beta_n} (\H_2^{j+1} x_{n,2}^{j+1})'(0)$ tend to zero by \eqref{eqn:C^1-o(1)}, it follows that
\begin{align*}
&\Re \sp{- (\H^j_2 x^j_{n,2})'(1)}{\frac{\, \mathrm{i} q^j(1)}{\beta_n} (\H^j_1 x^j_{n,1})'(1)}_{\mathbb{K}}
\\
&- \Re \sp{- (\H^{j+1}_2 x^{j+1}_{n,2})'(0)}{\frac{\, \mathrm{i} q^{j+1}(0)}{\beta_n} (\H^{j+1}_1 x^{j+1}_{n,1})'(0)}_{\mathbb{K}}
\longrightarrow 0,
\quad
n \rightarrow \infty.
\end{align*}
\item
Lastly, the terms
\begin{align*}
&\sp{x^j_n(1)}{(q^j \H^j)(1) x^j_n(1)}_{\mathbb{K}} - \sp{x^{j+1}_n(0)}{(q^{j+1} \H^{j+1})(0) x^{j+1}_n(0)}_{\mathbb{K}}
\\
&= q^{j+1}(0) \left[ \sp{x^j_n(1)}{\H^j(1) x^j_n(1)}_{\mathbb{K}} - \sp{x^{j+1}_n(0)}{\H^{j+1}(0) x^{j+1}_n(0)}_{\mathbb{K}} \right]
\end{align*}
have to be handled, where due to previous restrictions on the choice of the functions $q^j$ and $q^{j+1}$, necessarily $q^j(1) = q^{j+1}(0)$.
For the moment, we leave these terms as they are and employ the monotonicity conditions on $\H^j(1) - \H^{j+1}(0)$ later on.
\end{enumerate}
Putting things together, and choosing the functions $q^j$ such that $q^j(\zeta) = q(j-1+\zeta)$ for some function $q \in C^2([0,m];\mathbb{R})$ with $q(m) = 0$, we find that
\begin{align*}
&\Re \sp{x_n}{(2 Q' \H - Q \H') x_n}_{L_2}
\\
&= 2 \sum_{j=1}^{m-1} q(j) \left[ \sp{x^j_n(1)}{\H^j(1) x^j_n(1)}_{\mathbb{K}} - \sp{x^{j+1}_n(0)}{\H^{j+1}(0) x^{j+1}_n(0)}_{\mathbb{K}} \right]
+ o(1).
\end{align*}
We will now choose the function $q$ such that the term on the left hand side defines an equivalent inner product on $L_2(0,1;\mathbb{K}^{2m})$ and to this end choose $q \in C^2([0,m];\mathbb{R})$ with $q(m) = 0$ and $q' > 0$, $q \leq 0$ such that
\[
2 (q^j)' \H^j - q^j (\H^j)'
\geq 2 m_0 (q^j)' + q^j M_1
\geq \varepsilon_0
\]
where
\begin{align*}
m_0
&:= \sup \{ \varepsilon > 0: \H^j(\zeta) - \varepsilon I \text{ positive semidefinite for a.e.\, } \zeta \in (0,1), j = 1, \ldots, m \},
\\
M_1
&:= \inf \{ \varepsilon > 0: \varepsilon I - (\H^j)'(\zeta) \text{ positive semidefinite for a.e.\, } \zeta \in (0,1), j = 1, \ldots, m \}
\end{align*}
and $\varepsilon_0 > 0$.
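One admissible choice — assuming $m_0 > 0$ and $M_1 \in (0, \infty)$, which holds for uniformly positive definite, Lipschitz continuous $\H$ — is
\[
q(\zeta)
= \frac{\varepsilon_0}{M_1} \left( 1 - \mathrm{e}^{- \frac{M_1}{2 m_0} (\zeta - m)} \right),
\quad
\zeta \in [0,m],
\]
which satisfies $q(m) = 0$, $q' > 0$ and $q \leq 0$ on $[0,m]$, and for which $2 m_0 q' + q M_1 = \varepsilon_0$ holds identically.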
We then have the estimate
\[
\varepsilon_0 \norm{x_n}_{L_2}^2
\leq 2 \sum_{j=1}^{m-1} q(j) \left[ \sp{x^j_n(1)}{\H^j(1) x^j_n(1)}_{\mathbb{K}} - \sp{x^{j+1}_n(0)}{\H^{j+1}(0) x^{j+1}_n(0)}_{\mathbb{K}} \right]
+ o(1).
\]
Since for every $j = 1, \ldots, m-1$, the term $q(j) < 0$ is strictly negative, the right hand side is less than or equal to $o(1)$, if for each junction $j = 1, \ldots, m-1$, we have that
\begin{align*}
&\sp{x^j_n(1)}{\H^j(1) x^j_n(1)}_{\mathbb{K}}
- \sp{x^{j+1}_n(0)}{\H^{j+1}(0) x^{j+1}_n(0)}_{\mathbb{K}}
\\
&= \sp{(\H^j x^j_n)(1) - (\H^{j+1} x^{j+1}_n)(0)}{(\H^j (1)^{-1} - \H^{j+1}(0)^{-1}) \left( (\H^j x^j_n)(1) - (\H^{j+1} x^{j+1}_n)(0) \right)}_{\mathbb{K}}
\\
&\quad
+ 2 \sp{(\H^{j+1} x^{j+1}_n)(0)}{(\H^j(1)^{-1} - \H^{j+1}(0)^{-1}) (\H^j x^j_n)(1)}
\\
&= \frac{3}{2} \sp{(\H^j x^j_n)(1) - (\H^{j+1} x^{j+1}_n)(0)}{(\H^j (1)^{-1} - \H^{j+1}(0)^{-1}) \left( (\H^j x^j_n)(1) - (\H^{j+1} x^{j+1}_n)(0) \right)}_{\mathbb{K}}
\\
&\quad
+ \frac{1}{2} \sp{\left( (\H^j x^j_n)(1) + (\H^{j+1} x^{j+1}_n)(0) \right)}{(\H^j(1)^{-1} - \H^{j+1}(0)^{-1}) \left( (\H^j x^j_n)(1) + (\H^{j+1} x^{j+1}_n)(0) \right)}
\end{align*}
is a sum of $o(1)$-terms.
By assumption, $\mathfrak{B}^j x_n \rightarrow 0$ and $\mathfrak{B}_0^j x_n = 0$.
By this, the term $(\H^j x^j_n)(1) - (\H^{j+1} x^{j+1}_n)(0)$ and, hence, the first term of the latter sum converge to zero as $n \rightarrow \infty$.
For the second term, we can at least ensure that it is non-negative, by using the structural monotonicity assumption that
\[
\H^j(1)^{-1} - \H^{j+1}(0)^{-1}
\quad
\text{is positive semidefinite}.
\]
(Recall that in terms of $\rho$ and $EI$ this means that $\rho(l^j-) \geq \rho(l^j+)$ and $(EI)(l^j-) \leq (EI)(l^j+)$.)
Then,
\[
\sp{x_n}{(2 Q' \H - Q \H') x_n}_{L_2}
\rightarrow 0
\]
and since this inner product induces a norm which is equivalent to the standard $L_2$-norm $\norm{\cdot}_{L_2}$ and the energy norm $\norm{\cdot}_X$ on $X$, this implies that $x_n \rightarrow 0$ in $X$. We have successfully proved property AIEP.
Let us state this preliminary result as
\begin{prpstn}
\label{prop:AIEP}
Let the conditions of Theorem \ref{thm:exp_stability} be satisfied.
Then, the pair $(\mathfrak{A}_0, (\mathfrak{R}, \mathfrak{B}))$ has property AIEP, i.e.\ for every sequence $(x_n, \beta_n)_{n \geq 1} \subseteq D(\mathfrak{A}_0) \times \mathbb{R}$ with
\begin{align*}
\sup_{n \in \mathbb{N}} \norm{x_n}_X
&< \infty,
\\
\abs{\beta_n}
&\rightarrow \infty,
\\
(\mathfrak{A}_0 - \, \mathrm{i} \beta_n) x_n
&\rightarrow 0,
\\
\mathfrak{R} x_n, \mathfrak{B} x_n
&\rightarrow 0
\end{align*}
it follows that
\[
\norm{x_n}_{L_2}
\rightarrow 0.
\]
\end{prpstn}
\textbf{Proof of Theorem \ref{thm:exp_stability} (continued).}
We are now in position to show the assertion of Theorem \ref{thm:exp_stability}.
Assume that $(\mathcal{T}(t))_{t \geq 0}$ is asymptotically stable, so that $\sigma(\mathcal{A}) = \sigma_p(\mathcal{A}) \subseteq \mathbb{C}_0^-$ by the Arendt-Batty-Lyubich-V\~u Theorem.
Thanks to the Gearhart-Pr\"uss-Huang Theorem, see e.g.\ \cite[Theorem V.1.11]{EngelNagel_2000}, it suffices to prove that
\[
\sup_{\beta \in \mathbb{R}} \norm{(\, \mathrm{i} \beta - \mathcal{A})^{-1}}_{\mathcal{X}}
< \infty
\]
and this is equivalent to the statement
\[
\left.
\begin{array}{c}
(\hat x_n, \beta_n)_{n \geq 1} \subseteq D(\mathcal{A}) \times \mathbb{R}
\\
\sup_{n \in \mathbb{N}} \norm{\hat x_n}_{\mathcal{X}} < \infty,
\quad
\abs{\beta_n} \rightarrow \infty,
\quad
\mathcal{A} \hat x_n - \, \mathrm{i} \beta_n \hat x_n
\rightarrow 0
\end{array}
\right\}
\quad \Rightarrow \quad
\norm{\hat x_n}_{\mathcal{X}} \rightarrow 0,
\]
where we write $\hat x_n = (x_n, x_{n,c}) \in X \times X_c$; see, e.g.\ \cite[Remark 2.7]{AugnerJacob_2014}.
Let $(\hat x_n, \beta_n)_{n \geq 1}$ be such a sequence. Then, in particular
\[
0
\leftarrow \Re \sp{(\mathcal{A} - \, \mathrm{i} \beta_n) \hat x_n}{\hat x_n}
= \Re \sp{\mathcal{A} \hat x_n}{\hat x_n}
\leq - \kappa \abs{\mathfrak{R} x_n}_{\mathbb{K}^5}^2
\]
and hence $\mathfrak{R} x_n \rightarrow 0$.
Having property AIEP for the pair $(\mathfrak{A}_0, (\mathfrak{R}, \mathfrak{B}))$ at hand, we show that
\begin{enumerate}
\item
$x_{c,n} \rightarrow 0$ in $X_c$ and
\item
$\mathfrak{B} x_n \rightarrow 0$ in $\mathbb{K}^{2(m+1)}$
\end{enumerate}
By property AIEP of the pair $(\mathfrak{A}_0, (\mathfrak{R}, \mathfrak{B}))$, we may then conclude $x_n \rightarrow 0$ in $X$ and hence $\hat x_n \rightarrow 0$ in $\mathcal{X}$.
By \eqref{eqn:C^1-o(1)}, the term $\frac{\mathfrak{C} x_n}{\beta_n}$ tends to zero as $n \rightarrow \infty$ and, hence, we obtain that
\[
x_{c,n}
= (\, \mathrm{i} \beta_n - A_c)^{-1} (B_c \mathfrak{C} x_n + o(1))
= \beta_n (\, \mathrm{i} \beta_n - A_c)^{-1} \left( B_c \frac{\mathfrak{C} x_n}{\beta_n} + \frac{o(1)}{\beta_n} \right)
\rightarrow 0,
\]
as the resolvent operators $\norm{\beta (\, \mathrm{i} \beta - A_c)^{-1}} = \norm{ (\, \mathrm{i} - A_c/\beta)^{-1}}$ are uniformly bounded for $\beta \in \mathbb{R}$.
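This uniform bound is elementary to confirm numerically. The following sketch (NumPy assumed; an arbitrary $2 \times 2$ Hurwitz matrix stands in for $A_c$, which is an assumption for illustration only) evaluates $\norm{\beta (\, \mathrm{i} \beta - A_c)^{-1}}$ over several decades of $\beta$:

```python
import numpy as np

# Hypothetical stand-in for A_c: any Hurwitz matrix (spectrum in the
# open left half-plane) will do for this sanity check.
A_c = np.array([[-1.0, 2.0],
                [0.0, -3.0]])

def scaled_resolvent_norm(beta):
    """Spectral norm of beta * (i*beta*I - A_c)^{-1}."""
    R = np.linalg.inv(1j * beta * np.eye(2) - A_c)
    return np.linalg.norm(beta * R, ord=2)

betas = np.logspace(-2, 6, 200)
norms = [scaled_resolvent_norm(b) for b in betas]

# beta*(i*beta*I - A_c)^{-1} = (i*I - A_c/beta)^{-1} -> -i*I in norm,
# so the values stay bounded and tend to 1 as |beta| -> infinity.
print(max(norms))
```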
Moreover, by Assumption \ref{assmpt:passive_stable_controller} and Remark \ref{rem:on_assmpt_3.1}
\[
0
\leftarrow \Re \sp{(\mathcal{A} - \, \mathrm{i} \beta_n) \hat x_n}{\hat x_n}_{\mathcal{X}}
\leq - \sum_{j=0}^m \kappa_j \abs{D_c^j \mathfrak{C}^j x_n}^2,
\]
so that $D_c^j \mathfrak{C}^j x_n \rightarrow 0$.
Since $\ker D_c^j \subseteq \ker B_c^j$ this implies that $B_c^j \mathfrak{C}^j x_n \rightarrow 0$ as well, thus
\[
\mathfrak{B} x_n
= - (C_c (\, \mathrm{i} \beta_n - A_c)^{-1} B_c + D_c) \mathfrak{C} x_n + \frac{o(1)}{\abs{\beta_n}}
\rightarrow 0.
\]
By property AIEP of the pair $(\mathfrak{A}_0, (\mathfrak{R}, \mathfrak{B}))$ as stated in Proposition \ref{prop:AIEP}, it follows that $x_n \rightarrow 0$ in $X$, therefore, $\hat x_n \rightarrow 0$ in $\mathcal{X}$ and this concludes the proof.\qed
Let us translate the conditions on $\mathfrak{R}$ into suitable (for exponential energy decay) choices of the conservative boundary conditions at the right end of the chain.
\begin{rmrk}
Our approach covers all the conservative boundary conditions mentioned in Subsection 4.1 of \cite{ChenEtAl_1987}:
\begin{enumerate}
\item
$\omega(t,1) = (EI \omega_{\zeta\z})(t,1) = 0$ (simply supported or pinned right end),
\item
$\omega_{\zeta\z}(t,1) = (EI \omega_{\zeta\z})_\zeta(t,1) = 0$ (free right end),
\item
$\omega_\zeta(t,1) = (EI \omega_{\zeta\z})_\zeta(t,1) = 0$ (shear hinge right end),
\item
$\omega_t(t,1) = \omega_\zeta(t,1) = 0$ (clamped right end),
\item
$\omega_t(t,1) = (EI \omega_{\zeta\z})(t,1) = 0$,
\item
$\omega_{t\zeta}(t,1) = (EI \omega_{\zeta\z})_\zeta(t,1) = 0$.
\end{enumerate}
In the energy state space formulation we used for the proofs of well-posedness, asymptotic and exponential stability, these are actually only four cases:
the energy state space formulation distinguishes neither between the cases $\omega(t,1) = 0$ and $\omega_t(t,1) = 0$ (i.e.\ $\omega(t,1) = c$), cf.\ the first and fifth case in the list just above, nor between the cases $\omega_\zeta(t,1) = 0$ and $\omega_{t\zeta}(t,1) = 0$ (i.e.\ $\omega_\zeta(t,1) = c$), cf.\ the third and last case.
In energy state space the conditions above therefore read as:
\begin{enumerate}
\item
$(\H^m x^m)(1) = 0$,
\item
$(\H^m_2 x^m_2)(1) = (\H^m_2 x^m_2)'(1) = 0$,
\item
$(\H^m x^m)'(1) = 0$,
\item
$(\H^m_1 x^m_1)(1) = (\H^m_1 x^m_1)'(1) = 0$.
\end{enumerate}
\end{rmrk}
To make the formulation of the stability results more digestible, we introduce the following
\begin{assumption}[Boundary and interconnection conditions]
At the left and right end and at the junction points we assume the following:
\newline
Condition {\bf (D)}:
\begin{align*}
\left( \begin{array}{c} (EI \omega_{\zeta\z})(t,0) \\ - (EI \omega_{\zeta\z})_\zeta(t,0) \end{array} \right)
&= - K_0 \left( \begin{array}{c} \omega_{t\zeta}(t,0) \\ \omega_t(t,0) \end{array} \right)
\label{eqn:dissipative_end}
\\
\intertext{for some $K_0 \in \mathbb{K}^{2 \times 2}$ with}
K_0
&= \left( \begin{array}{cc} k^0_{11} & 0 \\ 0 & 0 \end{array} \right)
\text{ for some } k^0_{11} > 0
\quad
\text{or}
\quad
\He(K_0) \quad \text{is symmetric positive definite},
\end{align*}
Condition {\bf (C)}:
\begin{enumerate}
\item
$\omega_t(t,1) = (EI \omega_{\zeta\z})(t,1) = 0$ (simply supported or pinned right end) and $\He (K_0)$ is positive definite, or
\item
$\omega_{\zeta\z}(t,1) = (EI \omega_{\zeta\z})_\zeta(t,1) = 0$ (free right end) and $\He(K_0)$ is positive definite, or
\item
$\omega_{t\zeta}(t,1) = (EI \omega_{\zeta\z})_\zeta(t,1) = 0$ (shear hinge right end), or
\item
$\omega_t(t,1) = \omega_{t\zeta}(t,1) = 0$.
\end{enumerate}
Condition {\bf (I)}: For $j = 1, \ldots, m-1$,
\begin{enumerate}
\item
$\omega_t(t,l^j-) = \omega_t(t,l^j +)$, $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)$ and
\[
\left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} \omega_t(t,l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right)
\]
\item
$\omega_t(t,l^j-) = \omega_t(t,l^j +)$, $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$ and
\[
\left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j-) + (EI \omega_{\zeta\z})_\zeta(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t, l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} \omega_t(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right)
\]
\item
$- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$, $\omega_{t\zeta}(t,l^j-) = \omega_{t\zeta}(t,l^j +)$ and
\[
\left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ (EI \omega_{\zeta\z})(t,l^j-) - (EI \omega_{\zeta\z})(t,l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j) \\ \omega_{t\zeta}(t,l^j) \end{array} \right)
\]
\item
$- (EI \omega_{\zeta\z})_\zeta(t,l^j-) = - (EI \omega_{\zeta\z})_\zeta(t,l^j+)$, $(EI \omega_{\zeta\z})(t,l^j-) = (EI \omega_{\zeta\z})(t,l^j+)$ and
\[
\left( \begin{array}{c} \omega_t(t,l^j-) - \omega_t(t,l^j+) \\ \omega_{t\zeta}(t,l^j-) - \omega_{t\zeta}(t, l^j+) \end{array} \right) = - K^j \left( \begin{array}{c} - (EI \omega_{\zeta\z})_\zeta(t,l^j) \\ (EI \omega_{\zeta\z})(t,l^j) \end{array} \right)
\]
\end{enumerate}
where $K^j \in \mathbb{K}^{2 \times 2}$ is a diagonal, positive semidefinite, symmetric matrix or has a positive definite symmetric part $\He(K^j) > 0$.
\end{assumption}
The exponential stability result for static boundary feedback and interconnection conditions then translates as
\begin{thrm}[Stability of Serially Connected Euler-Bernoulli Beams]
\label{exa:EB}
Let conditions \eqref{R}, \eqref{M}, {\bf (C)}, {\bf (D)} and {\bf (I)} be satisfied.
Then, for every initial datum
\[
(\omega(0,\cdot), \omega_t(0,\cdot))
= (\omega_0, \omega_1)
\in H^2((0,L) \setminus \{l^j\}_{j=1}^{m-1} )^2,
\]
there is a unique strong solution
\[
\omega \in C([0,\infty); H^2((0,L) \setminus \{l^j\}_{j=1}^{m-1}))
\]
of the Euler-Bernoulli-Beam system \eqref{EB} with the imposed boundary and interconnection conditions in {\bf (C)}, {\bf (D)} and {\bf (I)}.
The solution depends continuously on the initial data $(\omega_0, \omega_1)$, and there are constants $M \geq 1$ and $\eta < 0$, independent of the initial data, such that the energy
\[
H(t)
:= \int_0^L \rho(\zeta) \abs{\omega_t(t,\zeta)}^2 + EI(\zeta) \abs{\omega_{\zeta\z}(t,\zeta)}^2 \, \mathrm{d} \zeta
\]
decays uniformly exponentially, i.e.\
\[
H(t)
\leq M \mathrm{e}^{\eta t} H(0),
\quad
t \geq 0.
\]
\end{thrm}
\textbf{Proof.}
In either case, we have
\[
\Re \sp{\mathcal{A} \hat x}{\hat x}
\leq - \kappa \left( \abs{(\H^1_2 x^1_2)'(0)}^2 + \abs{(\H^1 x^1)'(0)}^2 \right),
\quad
\hat x \in D(\mathcal{A})
\]
for some $\kappa > 0$.
For the first and fourth case, asymptotic stability follows by Theorem \ref{thm:asymptotic_stability}, whereas for the second and third case, the possibility that $0 \in \sigma_p(\mathcal{A})$ has to be excluded by demanding that $\He(K_0) > 0$, so that
\[
\Re \sp{\mathcal{A} \hat x}{\hat x}
\leq - \kappa \left( \abs{(\H^1 x^1)(0)}^2 + \abs{(\H^1 x^1)'(0)}^2 \right),
\quad
\hat x \in D(\mathcal{A}).
\]
In all cases, uniform exponential stability follows by Theorem \ref{thm:exp_stability}.\qed
\begin{rmrk}
If, in the second and third case of Theorem \ref{exa:EB}, one considers the dissipative feedback with $K_0 = \operatorname{diag}(k^0_{11}, 0)$ for some $k^0_{11} > 0$ at the left end, either $\sigma_p(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R} = \emptyset$, e.g.\ by suitable damping in one of the junction points $l^j$, and then by Theorem \ref{thm:exp_stability} the system is again uniformly exponentially stable, or $\sigma_p(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R} = \{0\}$. (This follows from the proof of Theorem \ref{thm:asymptotic_stability} which shows that $\sigma_p(\mathcal{A}) \cap \, \mathrm{i} \mathbb{R} \subseteq \{0\}$ already for $\Re \sp{\mathcal{A} \hat x}{\hat x}_{\mathcal{X}} \leq - \kappa \abs{\mathfrak{R} x}^2$ where $\mathfrak{R} x = ((\H^1_2 x^1_2)(0), (\H^1 x^1)'(0))$.)
The only candidate for an eigenfunction in this case satisfies
\[
(\H_2 x_2)(\zeta) = 0,
\quad
(\H_1 x_1)(\zeta) = \zeta (\H_1 x_1)'(0)
\]
i.e.\ from $(\H^j_1 x^j_1)(1) = (\H^{j+1}_1 x^{j+1}_1)(0)$ it follows that
\[
(\H^j_1 x^j_1)(\zeta)
= (j - 1 + \zeta) (\H^1_1 x^1_1)'(0)
= (j - 1 + \zeta) c,
\quad \zeta \in [0,1], \, j = 1, \ldots, m
\]
for some $c \in \mathbb{K}$ and the eigenspace $\ker (\mathcal{A})$ is one-dimensional.
This corresponds to the dynamical solution $\omega(t,\zeta) = t (j - 1 + \zeta) c + \omega(0,\zeta)$, clearly a solution which already for moderately large $t > 0$ does not satisfy the underlying modelling assumptions for a linear Euler-Bernoulli beam model. In particular, the assumption that $\abs{\omega(t,\zeta) - \omega^{\mathrm{ref}}(\zeta)} \ll 1$ for some reference configuration $\omega^{\mathrm{ref}}$ will be violated then.
A physical interpretation of this eigenstate would be a beam which is rotating in the transversal plane.
Such phenomena are, as already stated, not covered by the linear beam model.
However, after restricting the initial data to $\ker(\mathcal{A})^\perp$, i.e.\
\begin{align*}
\sum_{j=1}^m \int_0^1 (j-1+\zeta) (\H^j_1 x^j_1)(0,\zeta) \, \mathrm{d} \zeta
&= \sum_{j=1}^m \int_0^1 (j-1+\zeta) \tilde \H_1 ((1-\zeta) l^{j-1} + \zeta l^j) \tilde x_1((1-\zeta) l^{j-1} + \zeta l^j) \, \mathrm{d} \zeta
\\
&= \sum_{j=1}^m \int_{l^{j-1}}^{l^j} \frac{j - 1 + \zeta}{l^j - l^{j-1}} \omega_t(0,\zeta) \, \mathrm{d} \zeta
= 0,
\end{align*}
by linearity, compactness of the resolvent ($\sigma_p(\mathcal{A}) \cap B_1(0)$ is discrete!) and Theorem \ref{thm:exp_stability} also in the second and third case, the solution tends uniformly exponentially to zero, even for the choice $K_0 = \operatorname{diag}\, (k^0_{11}, 0)$ for some $k^0_{11} > 0$.
\end{rmrk}
\section{Conclusion}
\label{conclusion}
In this paper we presented a proof via the resolvent method for the uniform stabilisation of a chain of serially connected inhomogeneous Euler-Bernoulli beams with damping at one end.
We considered several possible interconnection conditions and pairs of dissipative / conservative boundary conditions at the ends of the chain which enforce uniform exponential energy decay for the beam system.
We thereby not only generalised the results in \cite{ChenEtAl_1987} to the case of non-uniform beams (which in this generality seems not to be possible by their method), but also identified several other combinations of dissipative-conservative pairs of boundary conditions at the left and right end of the chain leading to exponential energy decay.
Moreover, we showed that instead of static boundary or feedback interconnections, dynamic feedback interconnections with finite dimensional control systems can be used as well to achieve well-posedness and stability results.
\section*{Acknowledgment}
I am much obliged to Birgit Jacob who introduced me to the topic of infinite-dimensional port-Hamiltonian systems and encouraged me to carry on research in this area. Moreover, I would like to thank the anonymous referees for their careful reading and their valuable remarks, leading to significant improvement of the manuscript.
\section{Preliminaries}
Almost distance-regular graphs, recently studied in the literature, are graphs
which share some, but not necessarily all, of the regularity properties that
characterize distance-regular graphs. Two examples of the former are partially
distance-regular graphs \cite{p91} and $m$-walk-regular graphs \cite{dfg09}.
In this paper we propose and characterize two dual concepts of almost
distance-regularity, and study some cases where distance-regularity is
attained. As in the theory of distance-regular graphs, the two proposed
concepts lead to several duality results. Our results can also be seen as a
generalization of the so-called spectral excess theorem for distance-regular
graphs (see \cite{fg97}; for short proofs, see \cite{vd08,fgg09}). This theorem
characterizes distance-regular graphs by their spectra and the average number
of vertices at extremal distance. A dual version of this theorem is also
derived.
We use standard concepts and results for distance-regular graphs
\cite{biggs,bcn}, spectral graph theory \cite{cds80,g93}, and spectral and
algebraic characterizations of distance-regular graphs \cite{f02}. Moreover,
for some more details and other concepts of almost distance-regularity (such
as distance-polynomial and partially distance-regular graphs), we refer the
reader to our recent paper \cite{ddfgg10}. In what follows, we recall the main
concepts, terminology, and results involved.
Let $\Gamma$ be a simple, connected, $\delta$-regular graph, with vertex set
$V$, order $n=|V|$, and adjacency matrix $\textbf{\emph{A}}$. The {\it
distance} between two vertices $u$ and $v$ is denoted by $\mathop{\rm dist }\nolimits (u,v)$, so the
{\it diameter} of $\Gamma$ is $D=\textrm{max}_{u,v\in V}\mathop{\rm dist }\nolimits(u,v)$. The set of
vertices at distance $i$ from a given vertex $u\in V$ is denoted by
$\Gamma_i(u)$, for $i=0,1,\ldots,D$. The {\em distance-$i$ graph} $\Gamma_i$ is the
graph with vertex set $V$ and where two vertices $u$ and $v$ are adjacent if
and only if $\mathop{\rm dist }\nolimits(u,v)=i$ in $\Gamma$. Its adjacency matrix $\textbf{\emph{A}}_i$
is usually referred to as the {\em distance-$i$ matrix} of $\Gamma$. The spectrum
of $\Gamma$ is denoted by $ \textrm{sp}\,\Gamma =
\{\lambda_0^{m_0},\lambda_1^{m_1},\ldots, \lambda_d^{m_d}\}, $ where the
different eigenvalues of $\Gamma$ are in decreasing order,
$\lambda_0>\lambda_1>\cdots >\lambda_d$, and the superscripts stand for their
multiplicities $m_i=m(\lambda_i)$.
\subsection{The predistance and preidempotent polynomials}
From the spectrum of $\Gamma$, we consider the {\em
predistance polynomials} $\{p_i\}_{0\le i\le d}$ which are orthogonal with respect to the
following scalar product in $\mathbb{R}_d[x]$:
\begin{equation}
\label{product}
\langle f, g\rangle_{\vartriangle} =\frac{1}{n}\textrm{tr}\,
(f(\textbf{\emph{A}})g(\textbf{\emph{A}}))=\frac{1}{n} \sum_{i=0}^d m_i f(\lambda_i) g(\lambda_i),
\end{equation}
and which satisfy $\textrm{deg}\,p_i=i$ and $\langle p_i,p_j
\rangle_\vartriangle= \delta_{ij}p_i(\lambda_0)$, for all $i,j=0,1,\ldots,d$.
For more details, see \cite{fg97}. Like every sequence of orthogonal
polynomials, the predistance polynomials satisfy a three-term recurrence of the
form
\begin{equation}
\label{recur-pol}
xp_i=\beta_{i-1}p_{i-1}+\alpha_i p_i+\gamma_{i+1}p_{i+1},\qquad i=0,1,\ldots,d,
\end{equation}
with $\beta_{-1}=\gamma_{d+1}=0$. Some basic properties of these coefficients,
such as $\alpha_i+\beta_i+\gamma_i=\lambda_0$ for $i=0,1,\ldots, d$, and
$\beta_i n_i=\gamma_{i+1}n_{i+1}\neq0$ for $i=0,1,\ldots, d-1$, where
$n_i=\|p_i\|_{\vartriangle}^2=p_i(\lambda_0)$, can be found in \cite{cffg09}.
Let $\omega_i$ be the leading coefficient of $p_i$. Then, from the above
recurrence and since $p_0=1$, it is immediate that $\omega_i=
(\gamma_1\gamma_2\cdots \gamma_i)^{-1}$ for $i=1,\ldots,d$.
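Since the predistance polynomials are determined entirely by the spectrum, they can be computed by a Gram--Schmidt orthogonalisation of the monomial basis with respect to the scalar product (\ref{product}). The following sketch (Python/NumPy, for illustration only; the function name and the choice of graph are ours, not from the paper) does this for the $5$-cycle $C_5$, with spectrum $\{2^1,(2\cos\frac{2\pi}{5})^2,(2\cos\frac{4\pi}{5})^2\}$, and recovers $p_0=1$, $p_1=x$, $p_2=x^2-2$, with the numbers $n_i=p_i(\lambda_0)$ summing to $H(\lambda_0)=n=5$.

```python
import numpy as np

def predistance_polys(eigs, mults):
    """Gram-Schmidt on 1, x, ..., x^d w.r.t. the weighted product
    <f,g> = (1/n) sum_i m_i f(l_i) g(l_i), normalised so that
    <p_i, p_i> = p_i(l_0)."""
    eigs, mults = np.asarray(eigs, float), np.asarray(mults, float)
    n = mults.sum()
    sp = lambda f, g: (mults * np.polyval(f, eigs)
                             * np.polyval(g, eigs)).sum() / n
    polys = []
    for i in range(len(eigs)):
        r = np.zeros(i + 1); r[0] = 1.0    # the monomial x^i
        for p in polys:                    # orthogonalise against p_0,...,p_{i-1}
            r = np.polysub(r, sp(r, p) / sp(p, p) * p)
        polys.append(np.polyval(r, eigs[0]) / sp(r, r) * r)
    return polys

# spectrum of the 5-cycle C_5
eigs = [2.0, 2 * np.cos(2 * np.pi / 5), 2 * np.cos(4 * np.pi / 5)]
p = predistance_polys(eigs, [1, 2, 2])
n_i = [np.polyval(q, eigs[0]) for q in p]  # n_i = p_i(lambda_0)
print(np.round(n_i, 6), round(sum(n_i), 6))
```

The last line prints the values $n_0,n_1,n_2$ and their sum, which equals $n$ by the Hoffman-polynomial identity recalled below.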
For any graph, the sum of all the predistance polynomials gives the {\em
Hoffman polynomial} $H$ satisfying $H(\lambda_i)=n\delta_{0i}$,
$i=0,1,\ldots,d$, which characterizes regular graphs via the condition
$H(\textbf{\emph{A}})=\textbf{\emph{J}}$, the all-$1$ matrix \cite{hof63}. Note
that the leading coefficient $\omega_d$ of $H$ (and also of $p_d$) is
$\omega_d=n/\pi_0$.
From the predistance polynomials, we define the so-called {\em preidempotent
polynomials} $q_j$, $j=0,1,\ldots, d$, by
$$
q_j(\lambda_i)= \frac{m_j}{n_i}p_i(\lambda_j),\qquad i=0,1,\ldots, d,
$$
which are orthogonal with respect to the scalar product
\begin{equation}
\label{product-preadj}
\langle f,g\rangle_\blacktriangle =\frac{1}{n}\textrm{tr}\,
(f\{\textbf{\emph{A}}\}g\{\textbf{\emph{A}}\})=\frac{1}{n}\sum_{i=0}^d n_i f(\lambda_i) g(\lambda_i),
\end{equation}
where $f\{\textbf{\emph{A}}\}=\frac{1}{\sqrt{n}}\sum_{i=0}^d f(\lambda_i)
p_i(\textbf{\emph{A}})$. Note that, since $q_j(\lambda_0)=m_j$, the duality
between the two scalar products (\ref{product}) and (\ref{product-preadj}) and
their associated polynomials is made apparent by writing
\begin{eqnarray}
\langle p_i, p_j\rangle_\vartriangle &=& \frac{1}{n}\sum_{l=0}^d m_l p_i(\lambda_l) p_j(\lambda_l)=\delta_{ij} n_i,\qquad i,j=0,1,\ldots,d, \label{basic-predistance} \\
\langle q_i, q_j\rangle_{\blacktriangle} &=& \frac{1}{n}\sum_{l=0}^d n_l q_i(\lambda_l) q_j(\lambda_l)=\delta_{ij} m_i,\qquad i,j=0,1,\ldots,d. \label{basic-preadj}
\end{eqnarray}
\subsection{Vector spaces, algebras and bases}
\label{subsec_alg}
Let $\Gamma$ be a graph with diameter $D$, adjacency matrix $\textbf{\emph{A}}$ and
$d+1$ distinct eigenvalues. We consider the vector spaces ${\cal A}=
\mathbb{R}_{d}[\textbf{\emph{A}}] = \textrm{span}
\{\textbf{\emph{I}}, \textbf{\emph{A}}, \textbf{\emph{A}}^2, \ldots,
\textbf{\emph{A}}^{d}\}$ and ${\cal D}= \textrm{span}
\{\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}_2,\ldots,\textbf{\emph{A}}_D\}$,
with dimensions $d+1$ and $D+1$, respectively. Then, ${\cal A}$ is an algebra
with the ordinary product of matrices, known as the {\it adjacency algebra},
with orthogonal bases
$A_p=\{p_0(\textbf{\emph{A}}),p_1(\textbf{\emph{A}}),p_2(\textbf{\emph{A}}),\ldots,
p_d(\textbf{\emph{A}})\}$ and
$A_\lambda=\{\textbf{\emph{E}}_0,\textbf{\emph{E}}_1,\ldots,
\textbf{\emph{E}}_d\}$, where the matrices $\textbf{\emph{E}}_i$,
$i=0,1,\ldots,d$, corresponding to the orthogonal projections onto the
eigenspaces, are the {\it $($principal\/$)$ idempotents} of
$\textbf{\emph{A}}$. Besides, since
$\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}^2,\ldots,\textbf{\emph{A}}^D$
are linearly independent, we have that $\textrm{dim}\, \mathcal{A}=d+1\ge D+1$
and, therefore, we always have $D\le d$ \cite{biggs}. Moreover, ${\mathcal D}$
forms an algebra with the entrywise or Hadamard product of matrices, defined by
$(\textbf{\emph{X}}\circ\textbf{\emph{Y}})_{uv}=\textbf{\emph{X}}_{uv}\textbf{\emph{Y}}_{uv}$.
We call ${\mathcal D}$ the {\em distance $\circ$-algebra}, which has orthogonal
basis $D_{\lambda}=
\{\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}_2,\ldots,\textbf{\emph{A}}_d\}$.
From now on, we work with the vector space ${\cal T}={\cal A}+{\cal D}$, and
relate the distance-$i$ matrices $\textbf{\emph{A}}_i \in {\mathcal D}$ to the
matrices $p_i(\textbf{\emph{A}}) \in {\mathcal A}$. Note that
$\textbf{\emph{I}}$, $\textbf{\emph{A}}$, and $\textbf{\emph{J}}$ are matrices
in ${\cal A}\cap{\cal D}$ since $\textbf{\emph{J}}=H(\textbf{\emph{A}})\in
\mathcal{A}$. Recall that ${\mathcal A}={\mathcal D}$ if and only if $\Gamma$ is
distance-regular (see \cite{biggs,bcn}). In this case, we have $D=d$, and the
predistance polynomials become the {\em distance polynomials} satisfying
$\textbf{\emph{A}}_i=p_i(\textbf{\emph{A}})$. In ${\cal T}$, we consider the
following scalar product:
\begin{equation}
\label{equationscalarproduct}
\langle\textbf{\emph{R}},\textbf{\emph{S}}\rangle=
\frac 1n\textrm{tr}\, (\textbf{\emph{RS}})= \frac
1n\textrm{sum}\,(\textbf{\emph{R}}\circ\textbf{\emph{S}}),
\end{equation}
where $\textrm{sum}\,(\textbf{\emph{M}})$ denotes the sum of all entries of
$\textbf{\emph{M}}$. Observe that the factor $1/n$ assures that
$\|\textbf{\emph{I}}\|^2=1$, whereas $\|\textbf{\emph{J}}\|^2=n$. Note also
that the {\em average degree} of $\Gamma_i$ is
$\overline{\delta}_i=\|\textbf{\emph{A}}_i\|^2$ and the {\em average
multiplicity} of $\lambda_j$ is
$\overline{m}_j=\frac{m_j}{n}=\|\textbf{\emph{E}}_j\|^2$. According to
(\ref{product}), this scalar product of matrices satisfies $\langle
f(\textbf{\emph{A}}),g(\textbf{\emph{A}})\rangle=\langle
f,g\rangle_\vartriangle$.
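These identities are easy to verify numerically. The sketch below (Python/NumPy, an illustrative check of ours) builds the distance matrices of the $5$-cycle and confirms that $\|\textbf{\emph{A}}_1\|^2=\overline\delta_1$, that $\|\textbf{\emph{I}}\|^2=1$ and $\|\textbf{\emph{J}}\|^2=n$, and, since $C_5$ is distance-regular, that $\textbf{\emph{A}}_2=p_2(\textbf{\emph{A}})=\textbf{\emph{A}}^2-2\textbf{\emph{I}}$.

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for u in range(n):                       # adjacency matrix of the 5-cycle
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0
I, J = np.eye(n), np.ones((n, n))
A2 = J - I - A                           # distance-2 matrix of C_5

ip = lambda R, S: np.trace(R @ S) / n    # the product <R,S> = tr(RS)/n

print(ip(A, A))                          # average degree of Gamma_1: 2.0
print(ip(I, I), ip(J, J))                # ||I||^2 = 1, ||J||^2 = n = 5
# C_5 is distance-regular, so A_2 = p_2(A) with p_2 = x^2 - 2:
print(np.allclose(A2, A @ A - 2 * I))    # True
```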
\section{Two dual approaches to almost distance-regularity}
Here we limit ourselves to the case of graphs with spectrally maximum diameter
(or the `non-degenerate' case) $D=d$. Consequently, we will use the two symbols
$D$ and $d$ interchangeably, depending on which quantity we are
referring to. In this context, let us consider the following two definitions of
almost distance-regularity:
\begin{defi}
\label{D1}
For a given $i$, $0\le i\le D$, a graph $\Gamma$ is {\em $i$-punctually distance-regular} when there exist constants $p_{ji}$ such that
\begin{equation}
\label{def(a)}
\textbf{A}_i\textbf{E}_j = p_{ji}\textbf{E}_j
\end{equation}
for every $j=0,1,\ldots,d$;
and $\Gamma$ is {\em $m$-partially distance-regular} when it is $i$-punctually distance-regular for all $i\le m$.
\end{defi}
\begin{defi}
\label{D2}
For a given $j$, $0\le j\le d$, a graph $\Gamma$ is {\em $j$-punctually eigenspace distance-regular} when
there exist constants $q_{ij}$ such that
\begin{equation}\label{def(b)}
\textbf{E}_j\circ \textbf{A}_i = q_{ij}\textbf{A}_i
\end{equation}
for every $i=0,1,\ldots,D$;
and $\Gamma$ is {\em $m$-partially eigenspace distance-regular} when it is $j$-punctually eigenspace distance-regular for all $j\le m$.
\end{defi}
Notice that the concepts of $D$-partial distance-regularity and $d$-partial
eigenspace distance-regular coincide with the known dual
definitions of distance-regularity (see \cite{bcn}).
Some basic characterizations of punctual distance-regularity, in terms of the
distance matrices and the idempotents, were given in \cite{ddfgg10}.
\begin{pro}[\cite{ddfgg10}]
\label{first-charac(D1)} Let $D=d$. Then, $\Gamma$ is $i$-punctually
distance-regular if and only if any of the following conditions holds:
\begin{enumerate}
\item[$(a1)$] $\textbf{A}_i\in {\cal A}$,
\item[$(a2)$] $p_i(\textbf{A})\in {\cal D}$,
\item[$(a3)$] $\textbf{A}_i=p_i(\textbf{A})$.
\end{enumerate}
\end{pro}
Following the
duality between Definitions \ref{D1} and \ref{D2}, it seems natural to conjecture
the dual of this proposition:
A graph $\Gamma$ is {\em $j$-punctually eigenspace
distance-regular}
if and only if any of the following conditions is satisfied:
\begin{enumerate}
\item[$(b1)$] $\textbf{\emph{E}}_j\in {\cal D}$,
\item[$(b2)$] $q_j[\textbf{\emph{A}}]\in {\cal A}$,
\item[$(b3)$] $\textbf{\emph{E}}_j=q_j[\textbf{\emph{A}}]$,
\end{enumerate}
where $f[\textbf{\emph{A}}]=\frac{1}{n}\sum_{i=0}^d
f(\lambda_i) \textbf{\emph{A}}_i$.
However, although $(b1)$ is clearly equivalent to Definition
\ref{D2} and $(b3)\Rightarrow (b1),(b2)$, until now we have not
been able to prove any of the other equivalences and we leave
them as conjectures.
In order to derive some new characterizations of punctual distance-regularity,
besides the already defined $\overline \delta_i$ and $\overline m_j$, we
consider the following average numbers:
\begin{itemize}
\item
The {\em average crossed
local multiplicities} are
\begin{equation}\label{averagecrossed}
\overline{m}_{ij}=\frac1{n\overline{\delta}_i}\sum_{\mathop{\rm dist }\nolimits(u,v)=i}m_{uv}(\lambda_j)
=\frac{\langle
\textbf{\emph{E}}_j,\textbf{\emph{A}}_i\rangle}{\|\textbf{\emph{A}}_i\|^2},
\end{equation}
where $m_{uv}(\lambda_j)=(\textbf{\emph{E}}_j)_{uv}$ are the {\em crossed local
multiplicities}.
\item
The {\em average number of shortest $i$-paths from a vertex} is
\begin{equation}
\label{meanPi}
\overline P_i= \frac{1}{n}\sum_{u\in V} P_i(u)=\frac{1}{n}\textrm{sum}\,(\textbf{\emph{A}}^i\circ \textbf{\emph{A}}_i)=\langle \textbf{\emph{A}}^i, \textbf{\emph{A}}_i\rangle=
\frac{1}{\omega_i}\langle p_i(\textbf{\emph{A}}), \textbf{\emph{A}}_i\rangle,
\end{equation}
where $P_i(u)$ denotes the number of shortest paths from a vertex $u$ to
the vertices in $\Gamma_i(u)$ and $\omega_i=(\gamma_1 \gamma_2 \cdots
\gamma_i)^{-1}$ is the leading coefficient of $p_i$, $i=1,\ldots,d$.
\item
The {\em average
number of shortest $i$-paths} is
\begin{equation}
\label{mean-aii}
\overline a_i^{(i)}=\frac{1}{n\overline\delta_i}\textrm{sum}\,(\textbf{\emph{A}}^i \circ \textbf{\emph{A}}_i)=\frac{\overline P_i}{\overline\delta_i}.
\end{equation}
\end{itemize}
\begin{pro}
\label{propo-punt-dr} Let $\Gamma$ be a graph with predistance polynomials $p_i$ and
recurrence coefficients $\gamma_i,\alpha_i,\beta_i$,
$i=0,1,\ldots, d$. Then, $\Gamma$ is $i$-punctually distance-regular if and only if any of the following equalities holds:
\begin{itemize}
\item[$(a1)$]
$\displaystyle \frac{1}{\overline{\delta}_i} = \sum_{j=0}^d\frac{\overline{m}_{ij}^2}{\overline m_j}$.
\item[$(a2)$]
$
\overline P_i= \frac{1}{\omega_i}\sqrt{p_i(\lambda_0)\overline \delta_i}
=\sqrt{\beta_0\beta_1\cdots \beta_{i-1}\overline \delta_i \gamma_i\gamma_{i-1}\cdots \gamma_1}$.
\item[$(a3)$]
$
\label{bound-aii}
\omega_i\overline a_i^{(i)}=1 \quad\mbox{and}\quad \overline \delta_i=p_i(\lambda_0)$.
\end{itemize}
Moreover, $\Gamma$ is $j$-punctually eigenspace distance-regular if and only if
\begin{itemize}
\item[$(b1)$]
$\displaystyle \overline m_j=\sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2$.
\end{itemize}
\end{pro}
\begin{pf}
$(a1)$ This is a result from \cite{ddfgg10}.
$(a2)$ From (\ref{meanPi}) and the Cauchy-Schwarz inequality, we get
\begin{equation}
\label{boundPi}
\omega_i \overline P_i =\langle p_i(\textbf{\emph{A}}), \textbf{\emph{A}}_i\rangle
\le \|p_i(\textbf{\emph{A}})\|\,\|\textbf{\emph{A}}_i\|=\sqrt{p_i(\lambda_0)\overline \delta_i}
= \sqrt{\frac{\beta_0\beta_1\cdots \beta_{i-1}}{\gamma_1\gamma_2\cdots \gamma_i}\overline \delta_i}.
\end{equation}
Moreover, equality occurs if and only if the
matrices $p_i(\textbf{\emph{A}})$ and $\textbf{\emph{A}}_i$ are
proportional, which is equivalent to $\Gamma$ being $i$-punctually
distance-regular by Proposition \ref{first-charac(D1)}.
$(a3)$ From (\ref{mean-aii}) and (\ref{boundPi}) we have that
$\omega_i \overline a_i^{(i)}\le \sqrt{p_i(\lambda_0)/\overline\delta_i}$,
with equality if and only if $\Gamma$ is $i$-punctually distance-regular. Thus, if
the conditions in $(a3)$ hold, $\Gamma$ satisfies the claimed property. Conversely,
if $\Gamma$ is $i$-punctually distance-regular, both equalities in $(a3)$ are
simple consequences of $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$. Indeed,
the first one comes from considering the $uv$-entries, with $\mathop{\rm dist }\nolimits(u,v)=i$, in
the above matrix equation, whereas the second one is obtained by taking square
norms.
$(b1)$ From (\ref{averagecrossed}), we find that the orthogonal projection of
$\textbf{\emph{E}}_j$ on ${\cal D}$ is $ \widehat{\textbf{\emph{E}}_j}
=\sum_{i=0}^D \overline m_{ij}\textbf{\emph{A}}_i $. Now, from
$\|\widehat{\textbf{\emph{E}}_j}\|^2\le \|\textbf{\emph{E}}_j\|^2$ we get
$$
\sum_{i=0}^D \overline m_{ij}^2\|\textbf{\emph{A}}_i\|^2
= \sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2\le \overline{m}_j
$$
and, in the case of equality, Definition \ref{D2} applies with
$q_{ij}=\overline m_{ij}$.
\end{pf}
Notice the duality between $(a1)$ and $(b1)$ under the interchange of $\frac{1}{\overline{\delta}_i}$ and $\overline{m}_j$.
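Both characterizations can be checked numerically. The following sketch (Python/NumPy, an illustration of ours, not part of the paper) verifies $(a1)$ and $(b1)$ for the $5$-cycle which, being distance-regular, must satisfy equality for every $i$ and $j$.

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for u in range(n):                                  # 5-cycle adjacency matrix
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0
I, J = np.eye(n), np.ones((n, n))
dist = [I, A, J - I - A]                            # distance matrices A_0, A_1, A_2

w, V = np.linalg.eigh(A)
lams = [2.0, 2 * np.cos(2 * np.pi / 5), 2 * np.cos(4 * np.pi / 5)]
E = [sum(np.outer(V[:, k], V[:, k]) for k in range(n)
         if abs(w[k] - lam) < 1e-8) for lam in lams]  # idempotents E_0, E_1, E_2

ip = lambda R, S: np.trace(R @ S) / n
delta = [ip(Ai, Ai) for Ai in dist]                 # average degrees
mbar = [ip(Ej, Ej) for Ej in E]                     # average multiplicities m_j/n
m = [[ip(E[j], dist[i]) / delta[i] for j in range(3)] for i in range(3)]

# (a1): 1/delta_i = sum_j m_ij^2 / mbar_j   (here for i = d = 2)
print(1 / delta[2], sum(m[2][j] ** 2 / mbar[j] for j in range(3)))
# (b1): mbar_j = sum_i delta_i m_ij^2       (here for j = 1)
print(mbar[1], sum(delta[i] * m[i][1] ** 2 for i in range(3)))
```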
Now, let us consider the more global concept of partial distance-regularity. In
this case, we also have the following new result where, for a given $0\le i\le
d$, $s_i=\sum_{j=0}^i p_j$, $t_i=H-s_{i-1}=\sum_{j=i}^d p_j$,
$\textbf{\emph{S}}_i=\sum_{j=0}^i \textbf{\emph{A}}_j$, and
$\textbf{\emph{T}}_i=\textbf{\emph{J}}-\textbf{\emph{S}}_{i-1}=\sum_{j=i}^d
\textbf{\emph{A}}_j$.
\begin{pro}
\label{charac-pardr}
A graph $\Gamma$ is $m$-partially distance-regular if and only if any of the following conditions holds:
\begin{itemize}
\item[$(a1)$]
$\Gamma$ is $i$-punctually distance-regular for $i=m,m-1,\ldots,\max\{2,2m-d\}$.
\item[$(a2)$] $\Gamma$ is $m$-punctually distance-regular and
$t_{m+1}(\textbf{A})\circ \textbf{S}_m=\textbf{O}$.
\item[$(a3)$]
$s_i(\textbf{A})=\textbf{S}_i$ for $i=m,m-1$.
\end{itemize}
\end{pro}
\begin{pf}
In all cases, the necessity is clear since
$p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ for every $0\le i\le m$ (for
$(a2)$, note that
$t_{m+1}(\textbf{\emph{A}})=\textbf{\emph{J}}-s_m(\textbf{\emph{A}})$). Then,
let us prove sufficiency. The result in $(a1)$ is basically Proposition 3.7 in
\cite{ddfgg10}. In order to prove $(a2)$, we show by (backward) induction that
$p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ and
$t_{i+1}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_i=\textbf{\emph{O}}$ for $i=
m,m-1,\ldots,0$. By assumption, these equations are valid for $i=m$. Suppose now
that $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ and
$t_{i+1}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_i=\textbf{\emph{O}}$ for
some $i>0$. Then, $t_i(\textbf{\emph{A}})\circ
\textbf{\emph{S}}_i=\textbf{\emph{A}}_i$ and, multiplying both terms by
$\textbf{\emph{S}}_{i-1}$ (with the Hadamard product), we get
$t_{i}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_{i-1}=\textbf{\emph{O}}$. So,
what remains is to show that
$p_{i-1}(\textbf{\emph{A}})=\textbf{\emph{A}}_{i-1}$. To this end, let us
consider the following three cases:
\begin{itemize}
\item[$(i)$] For $\mathop{\rm dist }\nolimits(u,v)>i-1$, we have
$(p_{i-1}(\textbf{\emph{A}}))_{uv}=0$.
\item[$(ii)$] For $\mathop{\rm dist }\nolimits(u,v)=i-1$, we have
$(t_{i+1}(\textbf{\emph{A}}))_{uv}=0$, so
$(p_{i-1}(\textbf{\emph{A}}))_{uv}=(s_{i-1}(\textbf{\emph{A}}))_{uv}$ $=
(s_{i-1}(\textbf{\emph{A}}))_{uv}+(\textbf{\emph{A}}_{i})_{uv}=(s_{i}(\textbf{\emph{A}}))_{uv}=
1-(t_{i+1}(\textbf{\emph{A}}))_{uv}=1$.
\item[$(iii)$] For $\mathop{\rm dist }\nolimits(u,v)<i-1$, we use the recurrence
(\ref{recur-pol}) to write
\begin{eqnarray*}
xt_i=\sum_{j=i}^d xp_j & = & \sum_{j=i}^d (\beta_{j-1}p_{j-1}+\alpha_j p_j+ \gamma_{j+1}p_{j+1})\\
& = & \beta_{i-1}p_{i-1}- \gamma_i p_i + \sum_{j=i}^d(\alpha_j+\beta_j+\gamma_j) p_j \\
& = & \beta_{i-1}p_{i-1}- \gamma_ip_i+ \delta t_i ,
\end{eqnarray*}
which gives
$$
\textbf{\emph{A}}t_i(\textbf{\emph{A}})=\beta_{i-1}p_{i-1}(\textbf{\emph{A}}) -\gamma_i \textbf{\emph{A}}_i+ \delta t_i(\textbf{\emph{A}}).
$$
Then, since
$(t_i(\textbf{\emph{A}}))_{uv}=(\textbf{\emph{A}}_i)_{uv}=0$
and $\beta_{i-1}\neq 0$, we get
$$
(p_{i-1}(\textbf{\emph{A}}))_{uv}=\frac1{\beta_{i-1}}(\textbf{\emph{A}}t_i(\textbf{\emph{A}}))_{uv}=
\frac1{\beta_{i-1}}\sum_{w\in
\Gamma(u)}(t_i(\textbf{\emph{A}}))_{wv}=0,
$$
because $\mathop{\rm dist }\nolimits(v,w)\le \mathop{\rm dist }\nolimits(v,u)+\mathop{\rm dist }\nolimits(u,w) \le i-1$ for
the relevant $w$.
\end{itemize}
From $(i),(ii),$ and $(iii)$, we have that
$p_{i-1}(\textbf{\emph{A}})=\textbf{\emph{A}}_{i-1}$, so by
induction $\Gamma$ is $m$-partially distance-regular, and the
sufficiency of $(a2)$ is proven. Finally, the sufficiency of $(a3)$
follows from that of $(a2)$ because
$s_i(\textbf{\emph{A}})=\textbf{\emph{S}}_i$ for every $i\in\{m-1,m\}$
implies that
$p_m(\textbf{\emph{A}})=(s_m-s_{m-1})(\textbf{\emph{A}})=\textbf{\emph{S}}_m-\textbf{\emph{S}}_{m-1}=\textbf{\emph{A}}_m$
and $t_{m+1}(\textbf{\emph{A}})\circ
\textbf{\emph{S}}_m=(\textbf{\emph{J}}-s_m(\textbf{\emph{A}}))\circ
\textbf{\emph{S}}_m=(\textbf{\emph{J}}-\textbf{\emph{S}}_m)\circ
\textbf{\emph{S}}_m= \textbf{\emph{O}}$.
\end{pf}
Given some vertex $u$ and an integer $i\le \textrm{ecc}(u)$, we denote by
$N_i(u)$ the {\em $i$-neighborhood} of $u$, which is the set of vertices that
are at distance at most $i$ from $u$. In \cite{f02} it was proved that
$s_i(\lambda_0)$ is upper bounded by the harmonic mean of the numbers
$|N_i(u)|$ and equality is attained if and only if
$s_i(\textbf{\emph{A}})=\textbf{\emph{S}}_i$. A direct consequence of this
property and Proposition \ref{charac-pardr}$(a3)$ is the following
characterization.
\begin{thm}
\label{thm-pdr}
A graph $\Gamma$ is $m$-partially distance-regular if and only if, for every $i\in \{m-1,m\}$,
$$
s_i(\lambda_0) = \frac{n}{\sum_{u\in V}|N_i(u)|^{-1}}.
$$
\end{thm}
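As a sanity check (an illustrative Python/NumPy sketch of ours), for the $5$-cycle and $i=1$ one has $s_1(\lambda_0)=p_0(\lambda_0)+p_1(\lambda_0)=1+2=3$, while $|N_1(u)|=3$ for every vertex $u$, so the harmonic mean on the right-hand side is also $3$, as it must be since $C_5$ is distance-regular.

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for u in range(n):                      # 5-cycle
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 1.0

# |N_1(u)| = number of vertices at distance at most 1 from u
N1 = ((np.eye(n) + A) > 0).sum(axis=1)
harmonic = n / (1.0 / N1).sum()         # harmonic mean of the |N_1(u)|

s1_at_lam0 = 1 + 2                      # s_1(lambda_0) = p_0(l_0) + p_1(l_0)
print(s1_at_lam0, harmonic)             # 3 3.0
```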
\section{Distance-regular graphs}
Let us particularize our results to the case of distance-regular graphs. With
this aim, we use the following theorem giving some known characterizations.
\begin{thm}[\cite{f01,fgy1b}]
\label{charac-drg} A graph
$\Gamma$ with $d+1$ distinct eigenvalues and diameter $D=d$ is
distance-regular if and only if any of the following statements
is satisfied:
\begin{itemize}
\item[$(a)$] $\Gamma$ is $D$-punctually distance-regular.
\item[$(b)$] $\Gamma$ is $j$-punctually eigenspace distance-regular for
$j=1,d$.
\end{itemize}
\end{thm}
In fact, notice that $(a)$ corresponds to any of the conditions in Proposition \ref{charac-pardr} with $m=d$.
Moreover, the duality between $(a)$ and $(b)$ is made apparent when they are
stated as follows:
\begin{itemize}
\item[$(a)$] $\textbf{\emph{A}}_0(=\textbf{\emph{I}}),\textbf{\emph{A}}_1(=\textbf{\emph{A}}),\textbf{\emph{A}}_D\in {\cal A}$;
\item[$(b)$] $\textbf{\emph{E}}_0(=\frac{1}{n}\textbf{\emph{J}}),\textbf{\emph{E}}_1,\textbf{\emph{E}}_d\in {\cal D}$.
\end{itemize}
Then, by using Theorem \ref{charac-drg} and Proposition
\ref{propo-punt-dr}$(a1)$ and $(b1)$, and Theorem \ref{thm-pdr} (with $m=d$),
we have the spectral excess theorem \cite{fg97} in the next condition $(a)$,
its dual form in $(b)$, and its harmonic mean version \cite{f02,vd08} in $(c)$.
\begin{thm}
A regular graph $\Gamma$ with $D=d$ is distance-regular if and only if any of the
following equalities holds:
\begin{itemize}
\item[$(a)$]
$\displaystyle \frac{1}{\overline{\delta}_d}= \sum_{j=0}^d\frac{\overline{m}_{dj}^2}{\overline m_j}$.
\item[$(b)$]
$\displaystyle \overline m_j=\sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2\ \mbox{ for $j=1,d$}$.
\item[$(c)$] $\displaystyle s_{d-1}(\lambda_0) = \frac{n}{\sum_{u\in V}|N_{d-1}(u)|^{-1}}$.
\end{itemize}
\end{thm}
In fact, condition $(a)$ is usually written in its equivalent form
$\overline{\delta}_d=p_d(\lambda_0)$ as, when $i=d$, the first condition in
Proposition \ref{propo-punt-dr}$(a3)$ always holds since
$$
\overline a_d^{(d)}=\frac{1}{\overline \delta_d}\langle\textbf{\emph{A}}^d, \textbf{\emph{A}}_d\rangle=
\frac{1}{\overline \delta_d \omega_d}\langle H(\textbf{\emph{A}}), \textbf{\emph{A}}_d\rangle=
\frac{1}{\overline \delta_d \omega_d}\langle \textbf{\emph{J}}, \textbf{\emph{A}}_d\rangle=
\frac{1}{\overline \delta_d \omega_d}\| \textbf{\emph{A}}_d\|^2=\frac{1}{\omega_d}.
$$
Notice also that, in $(c)$, we do not need to impose the condition of Theorem \ref{thm-pdr} for $i=d$ since $s_d(\lambda_0)=H(\lambda_0)=n$ and $|N_d(u)|=n$ for every $u\in V$.
\section{Introduction}
\label{sec:introduction}
Recently there has been a resurgence in attempts to model the dispersion
interaction between low-dimensional nano-scale objects more accurately.
Using an array of electronic structure \cite{Spencer09:thesis,MisquittaSSA10,
DrummondN07} and analytical \cite{DobsonWR06} techniques, several groups have
demonstrated that the dispersion interaction between one- and two-dimensional
systems can deviate strongly from that expected from the well-known
{\em additive} picture of $r^{-6}$-type interactions
\cite{Stone:book:13,Kaplan05:book}.
For the case of parallel one-dimensional (1D) metallic wires separated by distance $d$,
Dobson \emph{et al.}\xspace \cite{DobsonWR06} demonstrated that the van der Waals dispersion
interaction should decay as $\sim - d^{-2}[\ln (\gamma d)]^{-3/2}$, where
$\gamma$ is a constant that depends on the wire width.
This analytic result was subsequently verified by Drummond and Needs
\cite{DrummondN07} using diffusion quantum Monte Carlo (DMC) calculations
\cite{FoulkesMNR01}. This change in the power-law of the dispersion energy
can be understood as arising from correlations in extended plasmon modes
in the metallic wires \cite{DobsonWR06,Dobson07a,DobsonMcLRWGLD01}.
These plasmon modes would be expected in any low-dimensional system with a
delocalized electron density.
Misquitta \emph{et al.}\xspace \cite{MisquittaSSA10} have recently extended these results
to the more general case of finite- and infinite-length wires with arbitrary
band gap. Using dispersion models that include non-local charge-flow
polarizabilities they were able to describe the dispersion interactions in all
cases, including the insulating and semi-metallic wires. In these models the plasmon-like
fluctuations are modelled by the charge-flow polarizabilities which, at
lowest order, result in a $-d^{-2}$ dispersion interaction
\cite{Stone:book:13,MisquittaSSA10}. For metallic wires these terms are dominant
at all separations and yield the result of Dobson \emph{et al.}\xspace for the dispersion.
Curiously, many of these results were known as early as 1952. Using a tight-binding
H\"{u}ckel-type model for linear polyenes, Coulson and Davies \cite{CoulsonD52}
investigated the dispersion interactions between the chains in a variety of
configurations and with a range of highest occupied to lowest unoccupied molecular
orbital (HOMO--LUMO) gaps. Their conclusions about the non-additivity of the
dispersion interaction and the changes in power law (deviations from the expected
effective $-d^{-5}$ London behaviour) are essentially
identical to those reached by Misquitta \emph{et al.}\xspace \cite{MisquittaSSA10}. A few years
later Longuet-Higgins and Salem \cite{Longuet-HigginsS61} reached similar conclusions
and related the non-additivity of the dispersion to the existence of long-range
correlations within the system. A decade later Chang \emph{et al.}\xspace \cite{ChangCDY71}
used Lifshitz theory to derive an analytic form of the dispersion interaction
between two metallic wires that is identical to the expression of
Dobson \emph{et al.}\xspace \cite{DobsonWR06}, though the latter considered many more cases.
The current interest in this field stems from two sources.
First we have recently witnessed an explosion of work on nano-scale devices
confined in one or two dimensions. Examples are carbon nanotubes and devices
based on graphene and related materials. To model accurately the self-assembly
of these materials we need to describe correctly their interactions, particularly
the ubi\-qui\-tous dispersion interaction.
Second, {\em ab initio}\xspace electronic structure methods have now achieved a level of
accuracy and computational efficiency that allows them to be applied to such systems.
These methods have exposed the inadequacies of assumptions and approximations
made in many empirical models. From the research cited above we now know that
the dispersion energy exhibits much more substantial non-additivity than assumed
previously.
We emphasise here that empirical models for the dispersion energy prove inadequate
because they rely on the assumption of additivity through the pair-wise
$C_6^{ab}/r_{ab}^6$ model with van der Waals coefficients $C_6^{ab}$ between
sites $a$ and $b$ assumed to be isotropic constants, with little or no variation
with changes in chemical environment.
Part of the missing non-additivity arises from the local
chemical environment changes and from through-space coupling between the dipole
oscillators. The remainder arises from the metallic-like contributions that
are responsible for the anomalous dispersion effects that are the subject of
this paper. We stress that while the first kind of non-additivity can be described
by coupled-oscillator models \cite{GobreT13} and {\em ab initio}\xspace derived dispersion
models such as those obtained from the Williams--Stone--Misquitta
\cite{MisquittaS08a,MisquittaSP08} effective local polarizability models,
the second kind, that is, the non-additivity arising from
metallic contributions, requires models that take explicit account of
extended charge fluctuations, as we shall see next.
\begin{figure}
\includegraphics[width=0.4\textwidth,clip]{Triwires-vdW-Fig1.png}
\caption[Physical origin of anomalous dispersion energy between two parallel
1-D wires]{
Electronic fluctuations in (infinite) 1D wires (in blue) arise from
the tightly bound electrons (not shown) and electrons at the band edge
(represented by the red arrows). The extent of these fluctuations will
depend on the band gap (see text) and will have a typical length scale
$l_c$.
An extended fluctuation of $+\cdots-$ in one wire will induce
a $-\cdots+$ fluctuation in the other.
If $d$ is the separation, we can identify two cases: (1) $d < l_c$ and
(2) $d \gg l_c$. As explained in the text, the leading-order dispersion
interaction in the former is associated with charge-induced-charge
interactions, and that of the latter with dipole-induced-dipole interactions.
\label{fig:edisp2-physics}
}
\end{figure}
The unusual nature of the second-order dispersion
energy, \Edisp{2}, for infinite, parallel 1D wires of arbitrary
band gap can be understood as follows. The electronic fluctuations in the
wire are broadly of two types: the short-range fluctuations associated
with tightly bound electrons and the long-range plasmon-type fluctuations
associated with electrons at the band edge. The former give rise to the
standard dispersion model while the latter are responsible for the effects
discussed in this paper and those cited above. For systems with a finite
gap, the plasmon-like modes will be associated with a finite length scale,
$l_c$, defined, for example, via the Resta localization tensor \cite{Angyan09a}.
For metallic systems this length scale is expected to diverge.
Consider now the two cases depicted in Fig.~\ref{fig:edisp2-physics}.
In the first case the wires are separated by $d < l_c$. Here, the
leading-order contribution from the spontaneous extended fluctuation
depicted in the figure is that between charges and leads to the $-d^{-2}$
behaviour of \Edisp{2}: the spontaneous fluctuation at the first wire
results in a field $\sim d^{-1}$ at the second and this interacts with the
first via another $d^{-1}$ interaction leading to the favourable $-d^{-2}$
dispersion energy.
Only local charge-pairs contribute to this leading-order interaction;
consequently, the dispersion interaction per unit length remains $-d^{-2}$.
If, on the other hand, $d \gg l_c$, the extended fluctuation
at the first wire generates a dipole field of strength $\sim d^{-3}$ at the second,
and the resulting induced (extended) dipole interacts with the first via
a dipole-dipole interaction leading to another factor of $d^{-3}$.
This gives a nett favourable dispersion interaction of $-d^{-6}$.
In this case, to find the nett dispersion interaction per unit wire length
we need to sum over all the interactions between an element of one
wire and all elements of the other, which
leads to an effective $-d^{-5}$ dispersion interaction just as for the
point-like fluctuating dipoles of the tightly-bound electrons
\cite[p.173]{Parsegian05:book}.
In both cases, the usual $-d^{-5}$ effective dispersion interaction
from the tightly bound electrons must be included too.
The length-scale $l_c$ is expected to diverge in a metal, leading to a single power law
$-d^{-2}$ for \Edisp{2}. For finite-gap wires we expect the two regimes
described above. This is exactly the conclusion reached by Misquitta \emph{et al.}\xspace
\cite{MisquittaSSA10} and, much earlier, by Coulson and Davies \cite{CoulsonD52}.
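The step from the point-dipole $-d^{-6}$ law to the $-d^{-5}$ interaction per unit length can be checked by summing the pairwise contributions along one wire. The sketch below (illustrative only, with an arbitrary unit $C_6$; not code from this work) integrates $-C_6(d^2+z^2)^{-3}$ over the relative coordinate $z$, for which the exact result is $-3\pi C_6/(8d^5)$:

```python
import numpy as np

def u_per_unit_length(d, c6=1.0, zmax=1.0e4, dz=0.01):
    """Sum (integrate) the local -C6/r^6 dipole--dipole interaction between
    an element of one wire and all elements of a parallel wire at
    perpendicular separation d."""
    z = np.arange(-zmax, zmax, dz)
    return -c6 * np.sum((d**2 + z**2) ** -3) * dz

d = np.array([2.0, 4.0, 8.0, 16.0])
u = np.array([u_per_unit_length(x) for x in d])

# Effective power law: slope of log|u| against log d
slope = np.polyfit(np.log(d), np.log(-u), 1)[0]
print(slope)             # close to -5
print(u[0] * d[0] ** 5)  # close to -3*pi/8, the exact prefactor
```

The same bookkeeping (one power of $d$ removed per integration over a wire coordinate) is used repeatedly below.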
The second-order dispersion energy is, however, only part of the story. For a group
of interacting monomers (possibly of different types) the dispersion energy
includes contributions from second-order as well as third- and higher-order terms.
The third-order dispersion includes two- and three-body terms \cite{Stogryn71a};
the former will be denoted by $\Edisp{3}[2]$ and the latter by $\Edisp{3}[3]$.
$\Edisp{3}[2]$ is expected to be important for small-gap systems,
since these are associated with large hyperpolarizabilities, but we may
expect \emph{a priori} that, as long as $\Edisp{3}[3]$ decays slowly enough
with trimer separation, the three-body non-additive energy will be the
dominant contributor in the condensed phase, owing to the far larger
number of trimers than dimers.
The three-body non-additive energy $\Edisp{3}[3]$ is usually modelled using
the triple-dipole Axilrod--Teller--Muto expression (see Sec.\ \ref{sec:theory})
\cite{AxilrodT43,Muto43} from which $\Edisp{3}[3] \sim R^{-9}$, that is, the
non-additivity decays very rapidly with separation.
As will be demonstrated below, this expression is not valid for small-gap
systems; instead a more general expression is derived that includes contributions
from correlations between the long-wavelength plasmon-like modes.
From the physical picture of the second-order dispersion energy given above
we may {\em a priori} expect that the true $\Edisp{3}[3]$ will be qualitatively
different from that suggested by the triple-dipole expression. As we shall see
below, this is indeed the case.
The multipole expansion is a powerful method, but it would be reassuring to
verify its predictions using a non-expanded \textit{ab initio} approach.
In order to obtain hard numerical data describing the nonadditivity of the
dispersion interactions between metallic wires, we have evaluated the binding
energy of
three parallel, metallic wires in an equilateral-triangle configuration using
the variational and diffusion quantum Monte Carlo (VMC and DMC) methods. VMC
allows one to take expectation values with respect to explicitly correlated
many-electron wave functions by using a Monte Carlo technique to evaluate the
multidimensional integrals. The DMC method projects out the ground-state
component of a trial wave function by simulating drift, diffusion, and
branching processes governed by the Schr\"{o}dinger equation in imaginary
time. In our QMC calculations each wire was modelled as a 1D homogeneous
electron gas (HEG)\@. The dependences of the biwire and triwire interactions
on the wire separation $d$ were evaluated in order to determine the asymptotic
power law for the interaction and the non-additive three-body contribution.
We find that the long-range non-additivity is repulsive and scales as a power
law in $d$ with an exponent slightly less than three.
The paper is organised as follows. The underlying theory is described in
Sec.\ \ref{sec:theory}. In Sec.\ \ref{sec:results} we describe the
computational details and present our results. Finally, we discuss the
physical consequences of our results in Sec.\ \ref{sec:discussion}.
\section{Theory}
\label{sec:theory}
The non-expanded three-body, non-additive dispersion energy has been shown \cite{Stogryn71a} to be
(all formulae will be given in SI units, but results will be quoted in atomic units)
\begin{align}
\Edisp{3}[3] & = -\frac{\hbar}{\pi(\ensuremath{4\pi \epsilon_{0}})^3} \int_{0}^{\infty} du \int
{\mathrm d}^{3}{\mathbf r}_{1} {\mathrm d}^{3}{\mathbf r}_{1'} {\mathrm d}^{3}{\mathbf r}_{2} {\mathrm d}^{3}{\mathbf r}_{2'} {\mathrm d}^{3}{\mathbf r}_{3} {\mathrm d}^{3}{\mathbf r}_{3'} \nonumber \\
& \frac{\alpha^{A}({\mathbf r}_{1},{\mathbf r}_{1'};iu)
\alpha^{B}({\mathbf r}_{2},{\mathbf r}_{2'};iu)
\alpha^{C}({\mathbf r}_{3},{\mathbf r}_{3'};iu)}
{|{\mathbf r}_{1'}-{\mathbf r}_{2}| |{\mathbf r}_{2'}-{\mathbf r}_{3}| |{\mathbf r}_{3'}-{\mathbf r}_{1}|}.
\label{eq:Edisp33}
\end{align}
Here $\alpha^{X}({\mathbf r}_{1},{\mathbf r}_{1'};iu)$ is the frequency-dependent density susceptibility
(FDDS) function for monomer $X$ evaluated at imaginary frequency $iu$
\cite{Longuet-Higgins65,Stone85}.
The sign of the above expression has been chosen so that the polarizability tensor
defined as
\begin{align}
\A{aa'}{\gra\gra'}(\omega) = - \iint \oQ{a}{\gra}({\mathbf r}_1) \alpha({\mathbf r}_{1},{\mathbf r}_{1'};\omega)
\oQ{a'}{\gra'}({\mathbf r}_{1'}) {\mathrm d}^{3}{\mathbf r}_{1}{\mathrm d}^{3}{\mathbf r}_{1'}
\label{eq:pol}
\end{align}
is positive-definite.
Here $\oQ{a}{\gra}$ is the multipole moment operator for site $a$ with component
$\gra=00,10,11c,11s,\cdots$ using the notation described by Stone
\cite{Stone:book:13}.
As defined, $\A{aa'}{\gra\gra'}(\omega)$ is the {\em distributed} polarizability
for sites $a$ and $a'$. It describes the linear response of the expectation
value of the local operator $\oQ{a}{\gra}$ to the frequency-dependent
(local) perturbation $\oQ{a'}{\gra'} \cos(\omega t)$ \cite{MisquittaS06}.
That is, the distributed polarizability $\A{aa'}{\gra\gra'}(\omega)$ describes
the first-order change in multipole moment of component $\gra$ at site $a$ in
response to the frequency-dependent perturbation of component $\gra'$ at a site $a'$.
For the sake of clarity we will use the following notation in subsequent
expressions: sites associated with monomers $A$, $B$ and $C$ will be
designated by $a,a'$, $b,b'$ and $c,c'$, and angular
momentum labels by $\gra,\gra'$, $\grb,\grb'$ and $\grg,\grg'$, respectively.
Molecular labels are hence redundant and will be used only if there is the possibility of
confusion.
The multipole expansion of $\Edisp{3}[3]$ is obtained by expanding the
Coulomb terms in Eq.~\eqref{eq:Edisp33} as follows
\begin{align}
\frac{1}{|{\mathbf r}_1 - {\mathbf r}_2|} = \oQ{a}{\gra}({\mathbf r}_1) \T{ab}{\gra\grb} \oQ{b}{\grb}({\mathbf r}_2),
\label{eq:multipole}
\end{align}
where $\T{ab}{\gra\grb}$ is the interaction function \cite{Stone:book:13}
between multipole $\gra$ on site $a$ (in subsystem $A$) and multipole
$\grb$ on site $b$ (in subsystem $B$).
At lowest order, the interaction function $\T{ab}{00,00} = |{\mathbf r}^a-{\mathbf r}^b|^{-1}$ describes
the interaction of the charge on $a$ with that on $b$.
With this multipole expansion (MP) Eq.~\eqref{eq:Edisp33} takes the form
\begin{align}
\Edisp{3}[3] & \rightarrow \EdispMP{3}[3] =
+ \frac{\hbar}{\pi(\ensuremath{4\pi \epsilon_{0}})^3}
\T{a'b}{\gra'\grb} \T{b'c}{\grb'\grg} \T{c'a}{\grg'\gra} ~ \times \nonumber \\
& \int_0^{\infty} \left[ \iint {\mathrm d}^{3}{\mathbf r}_1 {\mathrm d}^{3}{\mathbf r}_{1'} \oQ{a}{\gra}({\mathbf r}_1) \alpha^{A}({\mathbf r}_1,{\mathbf r}_{1'};iu)
\oQ{a'}{\gra'}({\mathbf r}_{1'}) \right] \nonumber \\
& ~~~~~~~ \left[ \iint {\mathrm d}^{3}{\mathbf r}_2 {\mathrm d}^{3}{\mathbf r}_{2'} \oQ{b}{\grb}({\mathbf r}_2) \alpha^{B}({\mathbf r}_2,{\mathbf r}_{2'};iu)
\oQ{b'}{\grb'}({\mathbf r}_{2'}) \right] \nonumber \\
& ~~~~~~~ \left[ \iint {\mathrm d}^{3}{\mathbf r}_3 {\mathrm d}^{3}{\mathbf r}_{3'} \oQ{c}{\grg}({\mathbf r}_3) \alpha^{C}({\mathbf r}_3,{\mathbf r}_{3'};iu)
\oQ{c'}{\grg'}({\mathbf r}_{3'}) \right] du \nonumber \\
& = + \frac{\hbar}{\pi(\ensuremath{4\pi \epsilon_{0}})^3}
\T{a'b}{\gra'\grb} \T{b'c}{\grb'\grg} \T{c'a}{\grg'\gra} \nonumber \\
& ~~~~~~~~ \times ~
\int_0^{\infty} \A{aa'}{\gra\gra'}(iu) \A{bb'}{\grb\grb'}(iu)
\A{cc'}{\grg\grg'}(iu) du.
\label{eq:Edisp33MP}
\end{align}
This is the generalized (distributed) multipole expansion for the three-body non-additive
dispersion energy.
For systems with large HOMO--LUMO gaps (band gaps in infinite systems) Misquitta \emph{et al.}\xspace
\cite{MisquittaSSA10} have shown that the non-local polarizabilities decay rapidly
with inter-site separation. The characteristic decay length becomes smaller as the
gap increases. In this case, the non-local polarizabilities can be {\em localized} using
a multipole expansion \cite{LeSueurS94,LillestolenW07} and we can replace
$\A{aa'}{\gra\gra'}$ by a local equivalent $\A{a}{\gra\gra'} \delta_{aa'}$ in
Eq.~\eqref{eq:Edisp33MP} to give:
\begin{align}
\EdispMP{3}[3](\mbox{loc}) & =
+ \frac{\hbar}{\pi(\ensuremath{4\pi \epsilon_{0}})^3} \T{ab}{\gra'\grb} \T{bc}{\grb'\grg} \T{ca}{\grg'\gra}
\nonumber \\
& ~~~~~ \times ~ \int_0^{\infty} \A{a}{\gra\gra'}(iu) \A{b}{\grb\grb'}(iu) \A{c}{\grg\grg'}(iu) du.
\label{eq:Edisp33MPloc}
\end{align}
This is the form of the three-body non-additive dispersion energy derived by
Stogryn \cite{Stogryn71a}, which is valid for large-gap systems only.
If we retain only the dipole--dipole terms in the Stogryn expression and make the
further assumption that we are dealing with systems of isotropic sites of
(average) polarizability $\bar{\alpha}^{a}$, so that
$\A{a}{\gra\gra'} = \bar{\alpha}^{a}\delta_{\gra\gra'}$,
we obtain the Axilrod--Teller--Muto triple-dipole term
\cite{AxilrodT43,Muto43}:
\begin{align}
\EdispMP{3}[3,\mbox{ATM}] & = \sum_{abc} C^{abc}_9
\frac{1 + 3 \cos \hat{a} \cos \hat{b} \cos \hat{c}}
{(\ensuremath{4\pi \epsilon_{0}})^3 R_{ab}^3 R_{ac}^3 R_{bc}^3}
\label{eq:ATM}
\end{align}
where the $C^{abc}_9$ dispersion coefficient is defined by
\begin{align}
C^{abc}_9 = \frac{3\hbar}{\pi} \int_0^{\infty} \bar{\alpha}^{a}(iu) \bar{\alpha}^{b}(iu)
\bar{\alpha}^{c}(iu) du
\end{align}
and $\hat{a}$ is the angle subtended at site $a$ by unit vectors $\hat{{\mathbf r}}^{ab}$ and
$\hat{{\mathbf r}}^{ac}$, with similar definitions for the angles $\hat{b}$ and $\hat{c}$.
This is the more commonly used form of the non-additive dispersion energy, though,
as we see from this derivation, like the Stogryn expression, Eq.~\eqref{eq:ATM}
is valid only for large-gap systems (insulators).
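The geometry dependence of Eq.~\eqref{eq:ATM} is carried entirely by the factor $(1 + 3\cos\hat{a}\cos\hat{b}\cos\hat{c})/(R_{ab}R_{bc}R_{ca})^3$. The short sketch below (illustrative only, with $C_9$ set to unity) evaluates this factor and confirms the sign behaviour used later in the text: repulsive for an equilateral triangle, favourable for a linear arrangement:

```python
import numpy as np

def atm_factor(ra, rb, rc):
    """Geometric factor (1 + 3 cos a cos b cos c)/(R_ab R_bc R_ca)^3 of the
    Axilrod--Teller--Muto triple-dipole term for sites at ra, rb, rc."""
    ra, rb, rc = (np.asarray(r, dtype=float) for r in (ra, rb, rc))
    Rab, Rbc, Rca = rb - ra, rc - rb, ra - rc
    ab, bc, ca = (np.linalg.norm(v) for v in (Rab, Rbc, Rca))
    cos_a = np.dot(Rab, -Rca) / (ab * ca)   # interior angle at site a
    cos_b = np.dot(-Rab, Rbc) / (ab * bc)   # interior angle at site b
    cos_c = np.dot(-Rbc, Rca) / (bc * ca)   # interior angle at site c
    return (1.0 + 3.0 * cos_a * cos_b * cos_c) / (ab * bc * ca) ** 3

# Equilateral triangle of unit side: factor = 1 + 3/8 = 1.375 > 0 (repulsive)
print(atm_factor([0, 0, 0], [1, 0, 0], [0.5, np.sqrt(3) / 2, 0]))
# Linear arrangement: factor = (1 - 3)/8 = -0.25 < 0 (favourable)
print(atm_factor([0, 0, 0], [1, 0, 0], [2, 0, 0]))
```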
\section{Computational details and results}
\label{sec:results}
\subsection{$\Edisp{3}[3]$ from non-local polarizabilities}
\label{ssec:results-pol}
The na\"{i}ve evaluation of Eq.~\eqref{eq:Edisp33MP} incurs a computational cost that
scales as $\mathcal{O}(n^6 (l+1)^{12} K)$, where $n$ is the number of sites,
$l$ is the maximum rank of the polarizability matrix, and $K$ is the number of
quadrature points, typically 10. The
scaling may be improved by calculating and storing the following
intermediates:
\begin{align}
\IIu{ab}{\gra\grg} & = \sum_{a',\grb} \Au{aa'}{\gra\grb} \T{ba'}{\grg\grb}
\nonumber \\
\IIu{bc}{\grg\gre} & = \sum_{b',\grd} \Au{bb'}{\grg\grd} \T{cb'}{\gre\grd}
\nonumber \\
\IIu{ca}{\gre\gra} & = \sum_{c',\grp} \Au{cc'}{\gre\grp} \T{ac'}{\gra\grp} .
\end{align}
The total computational cost of calculating these intermediates is
$\mathcal{O}(n^3(l+1)^6 K)$. Equation \eqref{eq:Edisp33MP} now takes
the form
\begin{align}
\EdispMP{3}[3] & = \frac{\hbar}{\pi (\ensuremath{4\pi \epsilon_{0}})^3}
\int_{0}^{\infty} \IIu{ab}{\gra\grg} \IIu{bc}{\grg\gre} \IIu{ca}{\gre\gra}
du \nonumber \\
& = \frac{\hbar}{\pi (\ensuremath{4\pi \epsilon_{0}})^3} \int_{0}^{\infty}
\JJu{ac}{\gra\gre} \IIu{ca}{\gre\gra} du
\label{eq:3bodydisp-efficient}
\end{align}
where we have defined yet another intermediate
\begin{align}
\JJu{ac}{\gra\gre} = \sum_{b,\grg} \IIu{ab}{\gra\grg} \IIu{bc}{\grg\gre}
\end{align}
which incurs a computational cost of $\mathcal{O}(n^3 (l+1)^6 K)$.
Equation \eqref{eq:3bodydisp-efficient} is then evaluated at a
cost of $\mathcal{O}(n^2 (l+1)^4 K)$, so the overall cost of the
calculation is only $\mathcal{O}(n^3 (l+1)^6 K)$, a significant improvement
on the na\"{i}ve cost reported above.
We have studied the interactions between parallel {\it finite}
\HH{64} chains with bond-alternation parameters $\ensuremath{\eta}\xspace = 2.0$, $1.5$ and $1.0$,
where \ensuremath{\eta}\xspace is the ratio of the alternating bond lengths.
Frequency-dependent polarizability calculations were performed with
coupled Kohn--Sham perturbation theory using the PBE functional and the adiabatic
LDA linear-response kernel with the Sadlej-pVTZ basis set \cite{Sadlej88}.
Calculations on shorter chains indicated that the PBE results were qualitatively the
same as those from the more computationally demanding PBE0 functional.
The Kohn--Sham DFT calculations were performed using the {\sc NWChem} program
\cite{NWChem} and the coupled Kohn--Sham perturbation theory and polarizability
calculations were performed with the {\sc CamCASP}\xspace program \cite{CamCASP}.
Dispersion energies were calculated with the {\sc Dispersion} program that
is available upon request.
Finite hydrogen chains with bond-length alternation are a convenient model
for 1D wires, as the metallicity of the system can be controlled
through the alternation parameter $\ensuremath{\eta}\xspace$: with $\ensuremath{\eta}\xspace=2.0$, $1.5$ and
$1.0$, the Kohn--Sham HOMO--LUMO gap of the chain is $7.5$, $3.1$ and $1.6$~eV,
respectively, the undistorted chain being the most metallic.
We have calculated distributed non-local polarizabilities with terms from
rank $0$ (charge) to $4$ (hexadecapole) using a constrained density-fitting algorithm \cite{MisquittaS06}.
This technique has been demonstrated to result in a compact and accurate description
of the frequency-dependent
polarizabilities, with relatively small charge-flow terms. Furthermore,
Misquitta \emph{et al.}\xspace \cite{MisquittaSSA10} have demonstrated that these polarizabilities
can accurately model the two-body dispersion energies between hydrogen chains,
for which terms of rank $0$ are sufficient;
the agreement with non-expanded SAPT(DFT) \Edisp{2} energies is excellent
even for chain separations as small as $6$~a.u. We expect a similar accuracy for the
three-body non-additive dispersion energy investigated in this paper.
\begin{figure}
\includegraphics[width=0.5\textwidth,clip]{Triwires-vdW-Fig2.pdf}
\caption[Three-body non-additive dispersion from polarizabilities of \HH{64}-chains]{
The third-order non-additive dispersion energy calculated using the
non-local charge-flow (rank $0$) polarizabilities of \HH{64} chains with bond alternation parameters
$\ensuremath{\eta}\xspace=1$, $1.5$ and $2$.
The wires are parallel and arranged in an equilateral triangular configuration
with side $d$.
Each set of data is associated with two straight-line fits of the form $\sim d^{-x}$
to the data in the near (solid lines) and far (dashed lines) regions.
Broadly, the transition from the short- to long-range behaviour is
in the region of the intersection of these lines.
\label{fig:triwire-tri-n64}
}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth,clip]{Triwires-vdW-Fig3.pdf}
\caption[Three-body non-additive dispersion from polarizabilities of \HH{64}-chains]{
The third-order non-additive dispersion energy calculated using the
non-local charge-flow (rank $0$) polarizabilities of \HH{64} chains with bond alternation parameters
$\ensuremath{\eta}\xspace=1$, $1.5$ and $2$.
The wires are parallel, coplanar and equally spaced.
\label{fig:triwire-par-n64}
}
\end{figure}
In Figs.~\ref{fig:triwire-tri-n64} and \ref{fig:triwire-par-n64} we report
$\EdispMP{3}[3]$ energies per H$_2$ unit for the equilateral triangular
and coplanar configurations of the \HH{64} trimer.
The broad features of these figures are:
\begin{itemize}
\item
There is no single power law that fits the data. Instead we have two distinct
regions: for separations much larger than the chain length (much greater than
70--100 a.u.)
the non-additive dispersion energy decays as $\sim d^{-9}$, consistent with the
Axilrod--Teller--Muto expression (Eq.~\eqref{eq:ATM}). This is because
at such large separations the chains appear to each other as point particles.
\item
At sufficiently short separations we see another power-law decay, but with an exponent
that varies with the bond alternation, \ensuremath{\eta}\xspace, of the wire. For the most
insulating wire with $\ensuremath{\eta}\xspace=2.0$ the short-separation exponent is relatively
close to $7$, the value expected from the summation of trimers of atoms,
while for the most metallic wire with $\ensuremath{\eta}\xspace=1.0$ the exponent is close
to $3$.
\item
The non-additive dispersion energy is {\em enhanced} as the degree of metallicity
increases, and for the most metallic wires is nearly four orders of magnitude larger
than that for the most insulating wire.
\item
The charge-flow polarizabilities are responsible for both the change in
power-law exponent at short range and the enhancement at long range.
Contributions from non-local dipole fluctuations, that is, terms of rank $1$
(not shown in the figures), are insignificant by comparison.
This was also the observation of
Misquitta \emph{et al.}\xspace \cite{MisquittaSSA10} for the two-body dispersion energy.
\item
The Axilrod--Teller--Muto triple dipole expression leads to a favourable
three-body non-additive dispersion energy for three atoms in a linear
configuration. However, for three wires in such a configuration
(Fig.~\ref{fig:triwire-par-n64}) the non-additivity is positive, i.e., unfavourable.
\end{itemize}
These observations should perhaps not come as a surprise as they are analogous to
those obtained by Misquitta \emph{et al.}\xspace \cite{MisquittaSSA10} for the two-body dispersion energy
between 1D wires. However, the deviations from the standard picture are much
more dramatic here. In going from the insulating ($\ensuremath{\eta}\xspace=2.0$) to the near-metallic
wire, the two-body dispersion exhibits a large-separation enhancement of two orders
of magnitude, compared with four orders for the three-body non-additive dispersion;
for small wire separations the power law changes from $d^{-5}$ to $d^{-2}$ for the
two-body energy, while it changes from $d^{-7}$ to $d^{-3}$ for the three-body non-additivity.
\begin{figure}
\includegraphics[width=0.17\textwidth,clip]{Triwires-vdW-Fig4.png}
\caption[Physical origin of anomalous non-additive three-body dispersion energy
between three parallel 1-D wires]{
The anomalous three-body non-additive dispersion interaction between three
parallel 1D wires (in blue) in an equilateral arrangement can be
rationalised on the basis of correlations in long-range fluctuations
(red arrows). Here $d$ is the side of the triangle and $l_c$ is the typical
correlation length for electronic fluctuations.
The spontaneous and induced extended fluctuations are indicated by the
double-headed arrows, and their signs by the $+\cdots-$ labels.
\label{fig:edisp33-physics}
}
\end{figure}
In an analogous manner to the second-order dispersion energy \Edisp{2}, the
anomalous nature of $\Edisp{3}[3]$ can be explained using a simple charge-fluctuation
picture. In Fig.~\ref{fig:edisp33-physics} we depict the
plasmon-like long-range electronic fluctuations in the wires arranged in the
equilateral triangular geometry. The dispersion interaction will be associated
with both local and extended fluctuations. The local fluctuations give rise
to the standard model for $\Edisp{3}[3]$. Here we are concerned with the
extended, plasmon-like fluctuations of typical length-scale $l_c$,
as depicted in the figure.
An extended $+\cdots-$ spontaneous fluctuation in one wire induces a
$-\cdots+$ fluctuation in the second, which, in turn, induces a $+\cdots-$
fluctuation in the third. The interaction between the first and third will
always be repulsive, leading to a positive $\Edisp{3}[3]$ energy.
If the wire separation satisfies $d < l_c$, the extended fluctuations
cannot be regarded as dipoles; instead, as shown in
Fig.~\ref{fig:edisp33-physics}, their interactions are modelled as those
between two trimers of charges resulting from the extended charge fluctuations.
Each pair of charges in a trimer interacts as $d^{-1}$, leading to an
effective three-body non-additive dispersion of $\U{3} \sim +d^{-3}$.
For wire separations much larger than $l_c$, the extended
fluctuations can be modelled as dipoles. Each pair of these dipoles interacts
as $\pm d^{-3}$, giving rise to a $+d^{-9}$ contribution to the non-additive
dispersion energy.
However, all such interactions must be summed over, leading to the effective
$\U{3} \sim +d^{-7}$ behaviour.
If the wires are finite in extent, we recover the $\U{3} \sim +d^{-9}$ power law
for separations much larger than the wire length.
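The step from the $+d^{-9}$ interaction of a single dipole trimer to the effective $+d^{-7}$ law per wire element can be verified numerically. The sketch below (illustrative; arbitrary units, unit $C_9$) integrates the triple-dipole geometric factor over the positions of elements on the second and third wires, with the integration window scaled with $d$ so that the $d^{-7}$ scaling of the integral is exposed directly:

```python
import numpy as np

def u3_element(d, zfac=60.0, n=601):
    """Integrate the ATM geometric factor over elements z2, z3 of wires 2
    and 3 for a fixed element of wire 1 at the origin; the three wires form
    an equilateral triangle of side d in cross-section."""
    zmax = zfac * d
    z2, z3 = np.meshgrid(np.linspace(-zmax, zmax, n),
                         np.linspace(-zmax, zmax, n), indexing="ij")
    r12 = np.stack([np.full_like(z2, d), np.zeros_like(z2), z2])
    r13 = np.stack([np.full_like(z3, d / 2),
                    np.full_like(z3, d * np.sqrt(3) / 2), z3])
    r23 = r13 - r12
    R12, R13, R23 = (np.sqrt((r ** 2).sum(0)) for r in (r12, r13, r23))
    cos1 = (r12 * r13).sum(0) / (R12 * R13)   # angle at the wire-1 element
    cos2 = (-r12 * r23).sum(0) / (R12 * R23)  # angle at the wire-2 element
    cos3 = (r13 * r23).sum(0) / (R13 * R23)   # angle at the wire-3 element
    f = (1 + 3 * cos1 * cos2 * cos3) / (R12 * R13 * R23) ** 3
    dz = 2 * zmax / (n - 1)
    return f.sum() * dz * dz

d = np.array([2.0, 4.0, 8.0])
u3 = np.array([u3_element(x) for x in d])
slope = np.polyfit(np.log(d), np.log(np.abs(u3)), 1)[0]
print(slope)  # -7: two powers of d milder than the d^-9 of one trimer
```

Because the integration window scales with $d$, the integral obeys $u_3(d) \propto d^{-7}$ exactly under the change of variables $z \to z/d$, which is what the fitted slope reproduces.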
It is now well known that Kohn--Sham time-dependent linear-response theory is not
quantitatively accurate for strongly delocalized systems, with polarizabilities typically
overestimated \cite{ChampagnePvGBSS-GRK98,ChampagneMVA95a,ChampagneMVA95b}, and
hyperpolarizabilities even more so. One may therefore question the validity of
our calculations. We seek, however, a description of the {\em physical} effect
and make no claims to being quantitatively accurate. We know from the range of calculations
described in the Introduction that our hydrogen chain models are able to
describe the physics of the two-body dispersion energy between 1D wires
and we see no reason to doubt their validity for trimers of such wires.
Nevertheless, to remove any possibility of doubt, we have used
QMC techniques to corroborate the results obtained with these
models.
\subsection{Diffusion Monte Carlo (DMC) calculations}
\label{ssec:results-DMC}
In our DMC calculations we considered parallel biwires and parallel triwires
in an equilateral-triangle configuration with interwire spacing $d$.
Each wire was modelled by a
single-component 1D HEG of density parameter $r_s$ in a cell of length
$L(r_s,N) = 2Nr_s$ subject to periodic boundary conditions, where $N$ is the
number of electrons per wire in the cell.
The electron-electron interaction was modelled
by a 1D Coulomb potential \cite{Saunders94}. The charge neutrality of each
wire was maintained by
introducing a uniform line of positive background charge. To estimate the
asymptotic binding behaviour between long, metallic wires we must have
\begin{equation}
L\left(r_s,N \right) \gg d \gg r_s .
\end{equation}
We chose to work with real wave functions at the $\Gamma$ point of the
simulation-cell Brillouin zone, and the largest systems we considered had $N =
111$ electrons per wire (333 electrons in total for the triwire). To
investigate finite-size errors we also performed calculations with $N = 5$,
11, 21, and 55 electrons per wire.
We used many-body trial wave functions of Slater--Jastrow--backflow
type. Each Slater determinant contained plane-wave orbitals of the
form $\exp(ikx)$. The use of single-component (i.e., fully
spin-polarised) HEGs is justified in Ref.\ \onlinecite{DrummondN07}. DMC
calculations for strictly 1D systems do not suffer from a fermion sign problem
because the
nodal surface is completely defined by electron coalescence points,
where the trial wave function goes to zero. Our DMC calculations are
therefore essentially exact for the systems studied, although these
systems are finite wires subject to periodic boundary conditions rather
than infinite wires. Electrons in different wires were treated as
distinguishable, so the triwire (biwire) wave function involves the
product of three (two) Slater determinants. Our Jastrow exponent
\cite{DrummondTN04} was the sum of a two-body function consisting
of an expansion in powers of inter-electron in-wire separation up to
10th order, and a two-body function consisting of a Fourier expansion
with 14 independent reciprocal-lattice points. These functions
contained optimisable parameters whose values were allowed to differ
for intrawire and interwire electron pairs.
We employed a backflow transformation in which the electron coordinates
in the Slater determinants were replaced by ``quasiparticle coordinates'' that
depend on the positions of all the electrons. We used the two-body backflow
function of
Ref.\ \onlinecite{Lopez-RiosMDTN06}, which consists of an expansion in powers
of inter-electron in-wire separation up to 10th order, again with separate
terms for intrawire and interwire electron pairs. Backflow functions are
normally used to improve the nodal surfaces of Slater determinants in QMC
trial wave functions \cite{Lopez-RiosMDTN06}. In the strictly 1D case the
backflow transformation leaves the (already exact) nodal surface unchanged,
but it provides a
compact parameterisation of three-body correlations \cite{LeeD11}.
The values of the optimisable parameters in the Jastrow factor and backflow
function were determined within VMC by minimising the mean absolute deviation
of the local energy from the median local energy \cite{NeedsTDL-R10}. The
optimisations were performed using 32,000 statistically independent electron
configurations to obtain statistical estimators, while 3,200 configurations
were used to determine updates to the parameters
\cite{Trail08a,TrailM10}.
Our DMC calculations were performed with a target population of 1,280
configurations. The first 500 steps were discarded as equilibration.
To aid comparison of the present results with a previous study \cite{DrummondN07}, we
used the same time steps: 0.04, 0.2, and 2.5 a.u.\ at $r_s = 1$, 3, and 10,
respectively. These are sufficiently small that the time-step bias in our
results is negligible.
Our QMC calculations were performed using the \textsc{casino} code
\cite{NeedsTDL-R10}.
\subsection{DMC results}
We denote the total energy of the $N$-electron $M$-wire system
as $E_M$, and the total energy per electron as $e_M$, so $e_1 = E_1/N$.
The parallel 2-wire system has an additional interaction energy $\Delta E_2(d)$,
so the energy per electron is
\begin{equation}
e_2(d) = (2E_1 + \Delta E_2(d))/2N \equiv e_1 + \U{2}(d),
\end{equation}
consequently the biwire interaction energy per electron $\U{2}(d)$ is
\begin{equation}
\U{2}(d) = e_2(d) - e_1 .
\end{equation}
Similarly, the parallel three-wire system in the equilateral-triangle
configuration has an energy per electron of
\begin{equation}
e_3(d) = (3E_1 + 3\Delta E_2(d) + \Delta E_3(d)) / 3N \equiv e_1 + 2\U{2}(d) + \U{3}(d),
\end{equation}
from which the non-additive contribution to the triwire energy per
electron is found to be
\begin{equation}
\U{3}(d) = e_3(d) - e_1 - 2 \U{2}(d) = e_3(d) - 2e_2(d) + e_1.
\end{equation}
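These definitions reduce to simple energy differences. A minimal sketch of the bookkeeping (the numbers are invented solely to illustrate the algebra, not DMC data):

```python
# Hypothetical per-electron energies (arbitrary units) at one separation d
e1 = -0.5000   # isolated wire
e2 = -0.5010   # biwire
e3 = -0.5018   # equilateral triwire

u2 = e2 - e1              # biwire interaction energy per electron
u3 = e3 - 2 * e2 + e1     # non-additive triwire contribution per electron

print(u2 < 0)  # True: pairwise attraction
print(u3 > 0)  # True: repulsive non-additivity, as found in the DMC results
```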
We fitted
\begin{equation}
\label{eq:power_law}
u(d) = \frac{\exp(C)}{d^\alpha},
\end{equation}
where $C$ and $\alpha$ are fitting parameters, to our DMC results for
$|\U{2}(d)|$ and $|\U{3}(d)|$ (extrapolated to the thermodynamic limit), for
$d$ in the asymptotic regime.
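Equation \eqref{eq:power_law} is linear after taking logarithms, $\ln|u| = C - \alpha \ln d$, so $C$ and $\alpha$ follow from a linear least-squares fit. A sketch on synthetic data (the "true" values below are invented for the check, chosen near the $r_s=1$ biwire entries of Table \ref{tab:Fit1}):

```python
import numpy as np

C_true, alpha_true = -6.07, 2.31
d = np.array([20.0, 30.0, 45.0, 70.0, 100.0])
u_abs = np.exp(C_true) * d ** -alpha_true   # synthetic |u(d)| data

# ln|u| = C - alpha * ln d  ->  slope = -alpha, intercept = C
slope, intercept = np.polyfit(np.log(d), np.log(u_abs), 1)
alpha_fit, C_fit = -slope, intercept
print(alpha_fit, C_fit)   # recovers alpha = 2.31, C = -6.07
```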
As shown in
Figs.\ \ref{Fig04}--\ref{Fig06}, the asymptotic binding energies $\U{2}(d)$
and $\U{3}(d)$ show power-law behaviour as a function of $d$ at all
densities.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig5a.pdf}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig5b.pdf}
\end{center}
\caption{DMC results for the asymptotic behaviour of the biwire interaction \U{2} (left panel)
and the nonadditive triwire contribution \U{3} (right panel) at $r_s=1$.
}
\label{Fig04}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig6a.pdf}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig6b.pdf}
\end{center}
\caption{DMC results for the asymptotic behaviour of the biwire interaction \U{2} (left panel)
and the nonadditive contribution \U{3} (right panel) at $r_s=3$.
}
\label{Fig05}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig7a.pdf}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig7b.pdf}
\end{center}
\caption{DMC results for the asymptotic behaviour of the biwire interaction \U{2} (left panel)
and the non-additive contribution \U{3} (right panel) at $r_s=10$.
}
\label{Fig06}
\end{figure}
To estimate the finite-size errors at a given wire separation $d$, we examined
the variation of the energy with the number $N$ of electrons per wire. It has
recently been reported \cite{LeeD11} that the finite-size error in the total
energy per electron of the 1D HEG scales as
\begin{equation}
e_1(N) = e_1(\infty) + \frac{c}{N^2}, \label{eq:N-2_scaling}
\end{equation}
where $c$ is a constant, over the range of $N$ considered here. Our results
for $e_2$ and $e_3$, shown in
Fig.\ \ref{Fig07}, are consistent with this dependence. However, we find that
the interaction energies $u_2$ and $u_3$ at a given $d$ show a more slowly
decaying finite-size error:
\begin{equation}
u_M\left( N \right) = u_M(\infty) + \frac{c^\prime}{N}, \label{eq:fs_extrap}
\end{equation}
where $c^\prime$ is a constant. Hence Eq.\ (\ref{eq:N-2_scaling}) cannot
give the asymptotic form of the finite-size error in the total energy of a 1D
system in the limit of large $N$.
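The extrapolation of Eq.\ (\ref{eq:fs_extrap}) amounts to a linear fit in $1/N$ whose intercept is the thermodynamic-limit estimate. A sketch with invented numbers (not the DMC data):

```python
import numpy as np

N = np.array([11.0, 21.0, 55.0, 111.0])   # electrons per wire
u_inf, c_prime = 4.2e-5, -3.0e-4          # assumed "true" values
u = u_inf + c_prime / N                   # synthetic u_M(N) at fixed d

# Fit u against 1/N; the intercept estimates u_M(infinity)
slope, intercept = np.polyfit(1.0 / N, u, 1)
print(intercept)   # recovers 4.2e-5
```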
We have extrapolated the binding-energy data shown in
Figs.\ \ref{Fig04}--\ref{Fig06} to the thermodynamic limit at each $d$ using
Eq.\ (\ref{eq:fs_extrap}). We have then fitted Eq.\ (\ref{eq:power_law})
to the extrapolated binding-energy data for triwires and
biwires, respectively. The resulting fitting parameters, including the
asymptotic exponents, are given in Table \ref{tab:Fit1}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig8a.pdf}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig8b.pdf}\\
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig8c.pdf}
\includegraphics[width=80mm,angle=0]{Triwires-vdW-Fig8d.pdf}
\end{center}
\caption{DMC results for the $N$-dependence of the total biwire ($e_2$)
and triwire ($e_3$) energies and interaction energies (\U{2} and \U{3})
at $r_s=10$ and at interwire spacing $d=30$ a.u.
The data at $N=5$ ($1/N=0.2$, $1/N^2 = 0.04$) were excluded from the fits
(solid lines).}
\label{Fig07}
\end{figure}
\begin{table}[ht]
\caption{Values of power-law parameters in Eq.\ (\ref{eq:power_law}) for the two-body
and three-body energies. }
\label{tab:Fit1}
\begin{center}
\begin{tabular}{l Z{8} Z{8} c Z{8} Z{8} }
\toprule
& \multicolumn{2}{c}{$\U{2} < 0$}
& & \multicolumn{2}{c}{$\U{3} > 0$}\\
\cline{2-3}\cline{5-6}
&\multicolumn{1}{c}{$C$}
&\multicolumn{1}{c}{$\alpha$}
& & \multicolumn{1}{c}{$C$}
&\multicolumn{1}{c}{$\alpha$} \\
\colrule
$r_s=1$ & -6.0685(6) & 2.310(1) & & -7.942(5) & 2.435(8) \\
$r_s=3$ & -4.084(1) & 2.5410(7) & & -5.565(8) & 2.670(5) \\
$r_s=10$ & -2.114(6) & 2.649(2) & & -2.98(5) & 2.88(2) \\
\botrule
\end{tabular}
\end{center}
\end{table}
\section{Discussion}
\label{sec:discussion}
We have investigated the nature of the non-additive dispersion between three
parallel wires and we have demonstrated that as the HOMO--LUMO gap (band gap
in infinite wires) decreases, the deviations of $\Edisp{3}[3]$ from the
conventional triple-dipole Axilrod--Teller--Muto model increase. These
deviations occur mainly in two ways:
\begin{itemize}
\item
For wire separations smaller than the typical electron correlation
length, the effective three-body non-additive dispersion behaves
as $\U{3}(d) \sim d^{-\beta}$, where $\beta \rightarrow 3$ as the
HOMO--LUMO gap decreases.
This power law arises from correlations between the extended charge
fluctuations associated with the plasmon-like modes in the wires.
This is a substantially slower decay than the
$\U{3}(d) \sim d^{-7}$ behaviour expected from the standard triple-dipole
summations associated with local dipole fluctuations.
For finite wires, $\U{3}(d) \sim d^{-9}$ for separations much
larger than the wire length.
\item
$\U{3}(d)$ is substantially enhanced as the gap reduces. This is most
dramatic for large separations, where we observed an enhancement of four
orders of magnitude for the near-metallic wires compared with the wires with
the largest HOMO--LUMO gap.
\end{itemize}
These observations are analogous to those obtained by Misquitta
\emph{et al.}\xspace \cite{MisquittaSSA10} with regard to the second-order dispersion
energy \Edisp{2}, though the effects of metallicity are more dramatic
for the three-body non-additivity.
We have provided a simple physical picture of correlations in extended
charge fluctuations with which both of these observations can be
understood.
We have established these results using two techniques: (1) a generalised
multipole expansion for $\Edisp{3}[3]$ that includes contributions from
charge-flow polarizabilities responsible for the long-wavelength, plasmon-like
fluctuations, and (2) DMC\@. The former has the advantage
that we can directly calculate $\Edisp{3}[3]$, but it is applicable only to
finite systems with non-zero HOMO--LUMO gaps.
By contrast, DMC
is applicable to infinite systems (modelled in cells subject to periodic
boundary conditions) with zero gaps, and in principle is
able to describe the third-order correlation energy exactly.
However, like any supermolecular technique, that is, any technique that
calculates the interaction energy from total-energy differences, DMC is unable
to separate the two-body energy $\Edisp{3}[2]$ from the three-body
non-additive dispersion $\Edisp{3}[3]$.
Nevertheless, there is a consistency in the results from these two methods.
At short range (i.e., at separations less than the correlation length)
the multipole expansion used on trimers of finite \HH{64} chains
yields a power-law of $\U{3}(d) \sim d^{-\beta}$ where $\beta \rightarrow 3^{+}$,
that is, $\beta$ approaches $3$ from above, while in the DMC results,
$\beta \rightarrow 3^{-}$ as $r_s$ increases. For small $r_s$ the exponent is
significantly smaller than $3$. This could be because of finite-size effects,
contributions from $\Edisp{3}[2]$,
or it could be a genuine effect not captured by the multipole expansion.
The increased effect of the plasmon-like, charge-flow fluctuations on
$\Edisp{3}[3]$ compared with $\Edisp{2}$ is related to the long range
of these fields produced by the fluctuations. The dipole fluctuations
in insulators produce electric fields that decay as $r^{-3}$,
a rapid decay compared with the $r^{-1}$ behaviour of the electric
fields from the plasmon-type fluctuations. Consequently, we expect
the many-body expansion to be slowly convergent for conglomerates of
low-dimensional semi-metallic systems. As we have demonstrated, the
three-body non-additivity quenches the already enhanced two-body
dispersion. Likewise, by extending our physical model for these
anomalous dispersion effects, we expect that the four-body non-additivity
will be attractive and decay as $-d^{-4}$ for 1D metallic systems,
and will consequently quench the three-body non-additivity.
The slow decay and alternating signs of the $N$-body non-additive dispersion
suggest that the many-body expansion may not be a useful way of modelling
the dispersion interaction in, say, a bundle of 1D semi-metallic wires.
An alternative may be a generalisation of the self-consistent polarization model
proposed by Silberstein \cite{Silberstein17} and Applequist \cite{ApplequistCF72},
and recently significantly developed by Tkatchenko \emph{et al.}\xspace \cite{TkatchenkoDiSCS12}.
However, models such as these would have to be modified to include the
charge-flow polarizabilities to be able to describe the metallic
effects described in this article.
For finite molecular systems, the changes in power-law described here are,
to an extent, of academic interest only. In practice, subtle power-law changes
in the dispersion interaction can be easily masked by the other, often larger,
components of the interaction energy, particularly the first-order
electrostatic energy. While this may be the case, it is the second
effect---the enhancement of the dispersion energy that arises from the
plasmon-like modes---that may have a perceptible effect. The long-wavelength
fluctuations cause an enhancement of the {\em effective} two- and three-body
dispersion coefficients.
We believe that this effect, which is captured by techniques such as
the Williams--Stone--Misquitta method\cite{MisquittaS08a,MisquittaSP08},
may prove significant even for relatively small molecular systems.
We are currently working to investigate this phenomenon.
\section{Acknowledgments}
Financial support was provided by the UK Engineering and Physical Sciences
Research Council (EPSRC)\@.
Part of the computations have been performed using the K computer at
Advanced Institute for Computational Science, RIKEN. R.M. is grateful for financial support
from KAKENHI grants (23104714, 22104011, and 25600156), and from the Tokuyama Science Foundation.
\newcommand{\JCP}[0]{J. Chem. Phys.\ }
\newcommand{\JPCA}[0]{J. Phys. Chem. A\ }
\newcommand{\JPCB}[0]{J. Phys. Chem. B\ }
\newcommand{\JPCC}[0]{J. Phys. Chem. C\ }
\newcommand{\JPC}[0]{J. Phys. Chem.\ }
\newcommand{\JCTC}[0]{J. Chem. Theory Comput.\ }
\newcommand{\IJQC}[0]{Int. J. Quantum Chem.\ }
\newcommand{\CPL}[0]{Chem. Phys. Lett.\ }
\newcommand{\TCA}[0]{Theor. Chim. Acta\ }
\newcommand{\PR}[0]{Phys. Rev.\ }
\newcommand{\PRA}[0]{Phys. Rev. A\ }
\newcommand{\PRB}[0]{Phys. Rev. B\ }
\newcommand{\PRE}[0]{Phys. Rev. E\ }
\newcommand{\PRL}[0]{Phys. Rev. Lett.\ }
\newcommand{\CR}[0]{Chem. Rev.\ }
\newcommand{\ChemRev}[0]{Chem. Rev.\ }
\newcommand{\NuP}[0]{Nucl. Phys.\ }
\newcommand{\MolP}[0]{Mol. Phys.\ }
\newcommand{\AdQC}[0]{Adv. Quantum Chem.\ }
\newcommand{\CPC}[0]{Comput. Phys. Commun.\ }
\newcommand{\JMS}[0]{J. Mol. Struct.\ }
\newcommand{\CJC}[0]{Can. J. Chem.\ }
\newcommand{\CP}[0]{Chem. Phys.\ }
\newcommand{\JPB}[0]{J. Phys. B: At. Mol. Opt. Phys.\ }
\newcommand{\PCCP}[0]{Phys. Chem. Chem. Phys.\ }
\newcommand{\JCC}[0]{J. Comp. Chem.\ }
\newcommand{\JACS}[0]{J. Am. Chem. Soc.\ }
\newcommand{\AngChemInt}[0]{Angew. Chem. Int. Ed.\ }
\newcommand{\AngChem}[0]{Angew. Chem.\ }
\newcommand{\ActCrysB}[0]{Acta Cryst. B\ }
\newcommand{\IRPC}[0]{Int. Revs. Phys. Chem.\ }
\newcommand{\PNAS}[0]{Proc. Natl. Acad. Sci.\ }
\newcommand{\PRSLA}[0]{Proc. R. Soc. Lond. A\ }
\newcommand{\CEC}[0]{CrystEngComm.\ }
\newcommand{\CGD}[0]{Cryst. Growth Des.\ }
\newcommand{\NJC}[0]{New J. Chem.\ }
\newcommand{\ChemPhysChem}[0]{ChemPhysChem\ }
\newcommand{\APL}[0]{Appl. Phys. Lett.\ }
\newcommand{\ChemComm}[0]{Chem. Commun.\ }
\newcommand{\RevModPhys}[0]{Rev. Mod. Phys.\ }
\newcommand{\AccChemRes}[0]{Acc. Chem. Res.\ }
\newcommand{\SurfSciLett}[0]{Surf. Sci. Lett.\ }
\newcommand{\JPhysCondMat}[0]{J. Phys.: Condens. Matter\ }
\newcommand{\Nature}[0]{Nature\ }
\newcommand{\NatureMat}[0]{Nature Materials\ }
\newcommand{\NaturePhy}[0]{Nature Physics\ }
\newcommand{\NatureComms}[0]{Nature Communications\ }
\newcommand{\JMathChem}[0]{J. Math. Chem.\ }
\newcommand{\PhysLett}[0]{Phys. Lett.\ }
\bibliographystyle{apsrev}
\section{Introduction}
We study heat equation associated to inverse square potential:
\begin{equation}\label{0}
\begin{cases}
u_t(t,x) +\mathcal{L}_au(t,x)=\frac{\mu}{|x|^{b}}|u(t,x)|^{\alpha}u(t,x) \\
u(0,x)=\varphi(x)
\end{cases} (t,x) \in {\mathbb R}^+ \times {\mathbb R}^d
\end{equation}
where $d\geq2$, $\mu \in \{ \pm 1 \}$ and $b, \alpha>0.$ Equation \eqref{0} is also known as the Hardy parabolic equation.
The Schr\"odinger operator
$$\mathcal{L}_{a}=-\Delta + \frac{a}{|x|^2},$$
with $a\geq- \left(\frac{d-2}{2} \right)^2$, is initially defined
with domain $C_{c}^{\infty}({\mathbb R}^d \setminus \{ 0\}).$ See \cite[Section 1.1]{Killipatal}. It is then
extended as an unbounded operator in $L^p({\mathbb R}^d)$ that
generates a positive semigroup $\{e^{-t{\mathcal L}_a}\}_{t\geq0}$ in $L^p({\mathbb R}^d)$ for $s_1<d/p<s_2+2.$ Here,
\[
s_1:=s_1(a)=\frac{d-2}{2}-\sqrt{\frac{(d-2)^2}{4}+a} \quad \text{and} \quad s_2:=s_2(a)=\frac{d-2}{2}+\sqrt{\frac{(d-2)^2}{4}+a}
\]
are the roots of $s^2-(d-2)s-a=0$, see \cite[Theorems 1.1, 1.3]{metafune2016scale}.
Moreover, the semigroup $\{e^{-t{\mathcal L}_a}\}_{t\geq0}$ has the following smoothing effect (see also Remark \ref{rfu} below).
\begin{itheorem}[Decay estimate, Theorem 5.1 in \cite{ioku2016estimates}]\label{ste}
Assume $d\geq2$, $a\geq- \left(\frac{d-2}{2} \right)^2$ and
\begin{equation}\label{r}
\tilde{s}_1=\max(s_1,0) \quad \text{and} \quad \tilde{s}_2=\min(s_2,d-2).
\end{equation}
Then, for all $\tilde{s}_1<\frac{d}{q}\leq\frac{d}{p}<\tilde{s}_2+2$ and $t>0,$ we have
\begin{equation}\label{d}
\|e^{-t{\mathcal L}_a}f\|_{L^q}\leq ct^{-\frac{d}{2}(\frac{1}{p}-\frac{1}{q})}\|f\|_{L^p}.
\end{equation}
\end{itheorem}
This smoothing effect for $e^{-t{\mathcal L}_a}$ will play a vital role in our analysis of short and long time behaviour of (mild) solutions of \eqref{0}. We note that the inverse-square potential breaks space-translation symmetry for \eqref{0}. However, it retains the scaling symmetry. Specifically,
if $u (t,x)$ solves \eqref{0}, then
\begin{equation}\label{scl}
u_{\lambda} (t,x) = \lambda^{\frac{2-b}{\alpha}} u(\lambda^{2} t, \lambda x)
\end{equation}
also solves \eqref{0} with data $u_\lambda(0).$ The Lebesgue space $L^q$ is invariant under the above scaling only when $q=q_c:= \frac{d \alpha}{2-b}.$ We shall see that this constant $q_c$ plays an important role in the study of \eqref{0}. The problem \eqref{0}
is $L^q$-
\begin{equation*}
\begin{cases} \text{sub-critical} & \text{if} \ 1\leq q <q_c\\
\text{critical} & \text{if} \ q=q_c\\
\text{super-critical} & \text{if} \ q> q_c.
\end{cases}
\end{equation*}
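Indeed, a direct computation with \eqref{scl} shows why $q_c$ is the unique scale-invariant Lebesgue exponent:
\begin{equation*}
\|u_\lambda(0)\|_{L^q}^q
=\lambda^{\frac{(2-b)q}{\alpha}}\int_{{\mathbb R}^d}|\varphi(\lambda x)|^q\,dx
=\lambda^{\frac{(2-b)q}{\alpha}-d}\|\varphi\|_{L^q}^q,
\end{equation*}
so that $\|u_\lambda(0)\|_{L^q}=\|\varphi\|_{L^q}$ for all $\lambda>0$ precisely when $\frac{(2-b)q}{\alpha}=d$, i.e. $q=q_c$.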
We point out that, with $a=0$, the heat equation \eqref{0} has been extensively studied, see \cite{chikami2021well} and the references therein. Corresponding results in the context of the Schr\"odinger equation are investigated in \cite{guzman2021scattering,bhimani2023sharp} and the references therein.
With $a\neq0$, the operator $\mathcal{L}_a$ represents a mathematically intriguing borderline situation that arises in a number of physical contexts, from geometry and combustion theory to the Dirac equation with Coulomb potential, and in the study of perturbations of classical space-time metrics such as Schwarzschild and Reissner--Nordstr\"om. See \cite{Killipatal, Zhang, Burq, Kalf, Luis} for detailed discussions.
In fact, substantial progress has been made in understanding the well-posedness theory for the nonlinear Schr\"odinger and wave equations associated to $\mathcal{L}_a$, see e.g. \cite{Jason, Haque, Killip, Zhang} and references therein. The mathematical interest in these equations with the potential $a|x|^{-2}$, however, comes mainly from the fact that the potential term is homogeneous of degree $-2$ and therefore scales exactly like the Laplacian. On the other hand, it appears that very little is known about well-posedness results for the heat equation associated to $\mathcal{L}_a,$ even when $b=0$ in \eqref{0}. In this note, we aim to initiate a systematic study of the well-posedness theory for \eqref{0}.
In this article, by a solution to \eqref{0} we mean mild solution of \eqref{0}, that is, a solution of the integral equation
\begin{equation*}
u(t)=e^{-t{\mathcal L}_a}\varphi+\mu\int_0^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds.
\end{equation*}
We now state our first theorem.
\begin{theorem}[Local theory]\label{local} Let $d\geq2$, $a\geq-\frac{(d-2)^2}{4}$, $0\leq b<\min(2,d),$ $\tilde{s}_1,\tilde{s}_2$ be as in \eqref{r} and $0<\alpha<\frac{2-b}{\tilde{s}_1}$ and $\varphi\in L^q({\mathbb R}^d).$
\begin{enumerate}
\item \label{i}
Assume that $$ \max\left\{\frac{d(\alpha+1)}{\tilde{s}_2+2-b},q_c\right\}<q<\frac{d}{\tilde{s}_1}.$$
Then there exists a maximal time $T_{\max}>0$ and a unique solution $u$ of \eqref{0} such that $$u \in C([0,T_{\max} ); L^q ({\mathbb R}^d )).$$ Moreover, the blow-up alternative holds: if $T_{\max}<\infty,$
then $\lim_{t\to T_{\max}} \|u(t)\|_q = \infty$.
\item \label{ii} Assume that
$$q_c \leq q< \frac{d}{\tilde{s}_1},\qquad q>\frac{d}{\tilde{s}_2+2}$$
(cf. Figure \ref{f2}). Then\begin{itemize}
\item[(i)] there exist $T > 0$ and a solution $u$ of \eqref{0} such that $u \in C ([0, T ], L^q ({\mathbb R}^d )).$
\item[(ii)] uniqueness in part (i) is guaranteed only among functions in
\begin{itemize}
\item $\{u \in C([0,T] ; L^q({\mathbb R}^d)):\sup_{t\in[0,T ]} t^{\frac{d}{2}(\frac{1}{q}-\frac{1}{r})}\|u(t)\|_r<\infty\}$, where $r$ satisfies \eqref{2} and $q>q_c.$
\item $\{u \in C([0,T] ; L^q({\mathbb R}^d)):\sup_{t\in[0,T ]} t^{\frac{d}{2}(\frac{1}{q_c}-\frac{1}{r})}\|u(t)\|_r\leq M \}$ for some $M>0$, where $r$ satisfies \eqref{6} and $q=q_c$.
\end{itemize}
\end{itemize} Moreover, for the super-critical exponents, continuous dependence on the data and the blow-up alternative hold.
\item\label{iii} In all the above cases, except when $q = q_c$, the maximal existence time $T_{\max}$ of the solution depends only on $\|\varphi\|_q$, not on $\varphi$ itself.
\end{enumerate}
\end{theorem}
\begin{figure}
\subfigure[The case $d=3$, $a=-\frac{1}{8}$, $b=1$] {
\begin{tikzpicture}[scale=2]
\fill [gray!90!white](0,1/6-2^.5/12)--(0,5/3-2*2^.5/3)--plot[domain=0:4-2*2^.5] ({\x},{(6+2^.5)/(12*\x+12)})--plot[domain=4-2*2^.5:4+2*2^.5] ({\x},{(1/3)*(1/\x)})--(4-2*2^.5,1/6-2^.5/12);
\fill [gray!50!white](0,1/6-2^.5/12)--(0,10/12+2^.5/12)--(1/3,10/12+2^.5/12)--plot[domain=20/49-2*2^.5/49:4-2*2^.5] ({\x},{(1/3)*(1/\x)})--plot[domain=4-2*2^.5:0] ({\x},{(6+2^.5)/(12*\x+12)});
\draw[][] (-.25,0)--(7,0) node[anchor=west] {\tiny{$\alpha$}};
\draw[][-] (0,-.25)--(0,2) node[anchor=east] {\tiny{$\frac{1}{q}$}};
\draw[dashed](0,1)--(3,1) node[anchor=south west] {\tiny{$\frac{1}{q}=1$}};
\draw[dashed](4+2*2^.5,2)--(4+2*2^.5,0) node[anchor=north] {\tiny{$\frac{2-b}{s_1}$}};
\draw[dashed](4-2*2^.5,2)--(4-2*2^.5,0) node[anchor=north] {\tiny{$\frac{2-b}{s_2}$}};
\draw[dotted][-] [domain=1:.25] plot ({\x}, {(1/3)*(1/\x)})node[anchor=south] {\tiny{{$\frac{1}{q}=\frac{1}{q_c}$}}};
\draw[][-] [domain=1:20/49-2*2^.5/49] plot ({\x}, {(1/3)*(1/\x)});
\draw[][-] [domain=7:1] plot ({\x}, {(1/3)*(1/\x)});
\draw[dashed][] [domain=0:2.8] plot ({\x}, {(6+2^.5)/(12*\x+12)})node[anchor=south west] {\tiny{{$\frac{1}{q}=\frac{s_2+2-b}{d(\alpha+1)}$}}};
\draw[dashed](7,1/6-2^.5/12)--(0,1/6-2^.5/12) node[anchor=east] {\tiny{$\frac{s_1}{d}$}};
\draw[dashed](0,10/12+2^.5/12)--(5,10/12+2^.5/12) node[anchor=west] {\tiny{$\frac{1}{q}=\frac{s_2+2}{d}$}};
\end{tikzpicture}
}
\subfigure[The case $d=3$, $a\geq0$, $b=1$] {
\begin{tikzpicture}[scale=2]
\fill [gray!90!white](0,0)--(0,2/3)--plot[domain=0:1] ({\x},{2/(3*\x+3)})--plot[domain=1:7]({\x},{1/(3*\x)})--(7,0);
\fill [gray!50!white](0,2/3)--(0,1)--(1/3,1)--plot[domain=1/3:1] ({\x},{1/(3*\x)})--plot[domain=1:0] ({\x},{2/(3*\x+3)});
\draw[][] (-.25,0)--(7,0) node[anchor=west] {\tiny{$\alpha$}};
\draw[][-] (0,-.25)--(0,2) node[anchor=east] {\tiny{$\frac{1}{q}$}};
\draw[dashed](0,1)--(3,1) node[anchor=west] {\tiny{$\frac{1}{q}=1$}};
\draw[dashed](1,2)--(1,0) node[anchor=north] {\tiny{$\frac{2-b}{d-2}$}};
\draw[dotted][-] [domain=1:.25] plot ({\x}, {(1/3)*(1/\x)})node[anchor=south] {\tiny{{$\frac{1}{q}=\frac{1}{q_c}$}}};
\draw[dashed][] [domain=0:2.8] plot ({\x}, {(2/3)*1/(\x+1)})node[anchor=south west] {\tiny{{$\frac{1}{q}=\frac{d-b}{d(\alpha+1)}$}}};
\draw[][-] [domain=7:1] plot ({\x}, {(1/3)*(1/\x)});
\draw[][-] [domain=1:1/3] plot ({\x}, {(1/3)*(1/\x)});
\end{tikzpicture}
}
\caption{\tiny{Local well-posedness in mere $L^q({\mathbb R}^d)$ occurs in the darkly shaded (open) region by part \eqref{i} of Theorem \ref{local}. Local existence in $L^q({\mathbb R}^d)$ is guaranteed by part \eqref{ii} of Theorem \ref{local} in the total shaded (dark \& light) region along with the open segment on boundary which is part of the curve $\frac{1}{q}=\frac{1}{q_c}$.}} \label{f2}
\end{figure}
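For concreteness, the abscissae and curves marked in Figure \ref{f2} can be checked directly. In panel (a), where $d=3$, $a=-\frac18$ and $b=1$,
\begin{equation*}
s_{1,2}=\frac12\mp\sqrt{\frac14-\frac18}=\frac12\mp\frac{\sqrt2}{4},
\qquad
q_c=\frac{d\alpha}{2-b}=3\alpha,
\qquad
\frac{2-b}{s_1}=4+2\sqrt2,
\qquad
\frac{2-b}{s_2}=4-2\sqrt2,
\end{equation*}
which gives the critical curve $\frac1q=\frac1{3\alpha}$ and the marked positions on the $\alpha$-axis. In panel (b), $a\geq0$ gives $\tilde{s}_1=0$ and $\tilde{s}_2=d-2=1$, so the marked abscissa is $\frac{2-b}{d-2}=1$.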
As far as we know, Theorem \ref{local} is new for $a\neq 0.$
In \cite[Theorem 1.1]{slimene2017well}, Slimene, Tayachi and Weissler proved
Theorem \ref{local} when $a=0.$ Our method of proof is based on a standard contraction argument and is inspired by the work in \cite{slimene2017well}. The key ingredient in the proof of Theorem \ref{local} is Proposition \ref{est0}, whose proof relies on Theorem \ref{ste} and the scaling properties of $e^{-t{\mathcal L}_a}$.
\begin{remark}\label{r1} For $a=0$, \eqref{d} is valid at the endpoints, see Remark \ref{rfu} below. Thus, in this case, whenever a strict inequality $<$ involving $s_1$ appears in Theorem \ref{local}, it can be relaxed to a non-strict one $\leq$.
\end{remark}
\begin{remark} The case $a=b=0$ corresponds to the standard nonlinear heat equation. This has been extensively studied in the literature since the pioneering work in the early 1980s, see \cite{weissler1980local, brezis1996nonlinear}.
In this situation, Weissler \cite{weissler1980local} proved local well-posedness for \eqref{0} in $L^q({\mathbb R}^d)$ for $q> q_c\geq 1.$ In the sub-critical case, there is no general theory of existence, see \cite{weissler1980local, brezis1996nonlinear}. In fact, Haraux--Weissler \cite{haraux1982non} proved that if $1<q_c < \alpha +2$, then there is a global solution (with zero initial data) in $L^q({\mathbb R}^d)$ for $1\leq q < q_c,$ but no such solution exists when $\alpha +2 < q_c.$ In the critical case $q=q_c$, the solution exists globally in time for small initial data.
\end{remark}
\begin{remark} For the sub-critical exponents, i.e. for $q<q_c,$ the classical inhomogeneous heat equation, i.e. \eqref{0} with $a=0,$ is known to be ill-posed in $L^q({\mathbb R}^d)$. We believe that a similar result holds for \eqref{0}; however, we shall not pursue this issue in the present paper.
\end{remark}
Having achieved local well-posedness via a contraction mapping, one then obtains the following lower estimate for blow-up in the super-critical case $q>q_c$, as in the classical case $a=b=0$.
\begin{theorem} [Lower blow-up rate] Assume that $q> \max (1, q_c)$ and $T_{\max}< \infty,$ where $T_{\max}$ is the existence time of the resulting maximal solution of \eqref{0}. Then
under the hypotheses of Theorem \ref{local}, we have
$$\|u(t)\|_q \gtrsim (T_{\max}-t)^{\frac{d}{2q}-\frac{2-b}{2\alpha}},\quad \forall t\in[0,T_{\max}).$$
\end{theorem}
On the other hand in the critical case $q=q_c$, we achieve `small' data global well-posedness where the smallness is in the sense described in the theorem below:
\begin{theorem}[Small data global existence]\label{global} Let $d \geq 2,$ $0\leq b<\min(2,d)$ and $\frac{d}{\tilde{s}_2+2}<q_c<\frac{d}{\tilde{s}_1}$ i.e. $$\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}.$$
Then we have the following.\begin{enumerate}
\item\label{global-1} If $\varphi \in L^{q_c} ({\mathbb R}^d )$ and $\|\varphi\|_{q_c}$ is sufficiently small, then $T_{\max}(\varphi)= \infty.$
\item\label{global-2} If $\varphi\in \mathcal{S}'({\mathbb R}^d)$ is such that $|\varphi|\leq c( 1+|\cdot|^2)^{-\sigma/2}$ with
$c$ sufficiently small and $\sigma>\frac{2-b}{\alpha}$, then $T_{\max}(\varphi) = \infty$.
\item\label{global-3}
Let $\varphi\in \mathcal{S}' ({\mathbb R}^d )$ be such that $|\varphi| \leq c|\cdot|^{- \frac{2-b}{\alpha}}$, for $c$ sufficiently small. Then there exists a
global-in-time solution $u\in C([0,\infty);L^q({\mathbb R}^d))$ of \eqref{0} for all $q\in(q_c,\frac{d}{\tilde{s}_1})$. Moreover, $u(t)\to\varphi$ in $\mathcal{S}'({\mathbb R}^d)$ as $t\to0$.
\end{enumerate}
\end{theorem}
Theorem \ref{global} is proved using the more general Theorem \ref{global2} below. This is inspired by \cite[Theorem 4.1]{slimene2017well} (which deals with the case $a=0$), which in turn uses ideas from the earlier work \cite[Theorem 6.1]{cazenave1998asym} (which deals with the case $a=b=0$).
A solution of \eqref{0} is \textit{self-similar} if $u_\lambda = u$ for all $\lambda > 0$, where $u_\lambda$ is defined in \eqref{scl}. In \cite{Hirose, Wang}, the authors established the existence of radially symmetric self-similar solutions; later, in \cite[Theorem 1.4]{slimene2017well}, self-similar solutions that are not necessarily radially symmetric were constructed for the classical inhomogeneous heat equation. In the next theorem, we establish a similar result in the presence of the inverse-square potential.
\begin{theorem}[Self-similar solutions]\label{selfsimilar}
Let $d\geq2,$ $0 < b < \min(2, d )$ and $\frac{d}{\tilde{s}_2+2}<q_c<\frac{d}{\tilde{s}_1}$ i.e. $$\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}.$$
Let $\varphi(x) = \omega(x)|x|^{-\frac{2-b}{\alpha}}$, where $\omega \in L^\infty({\mathbb R}^d)$
is homogeneous of degree $0$ and $\|\omega\|_\infty$ is sufficiently small. Then there exists a global mild self-similar solution $u_S$ of \eqref{0} and $u_S(t)\to\varphi$ in $\mathcal{S}'({\mathbb R}^d)$ as $t\to 0$.
\end{theorem}
Using this self-similar solution, the next theorem gives information about the asymptotic behaviour of the global solutions obtained in Theorem \ref{global}, provided the data satisfies certain bounds.
\begin{theorem}[Asymptotic behaviour]\label{asym}
Let $d\geq2,$ $0 < b < \min(2, d )$ and $\frac{d}{\tilde{s}_2+2}<q_c<\frac{d}{\tilde{s}_1}$ i.e. $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$ and
\[
\frac{2-b}{\alpha}\leq\sigma<\tilde{s}_2+2.
\]
Let $\varphi\in \mathcal{S}'({\mathbb R}^d)$ be such that
\[
|\varphi(x)|\leq c( 1+|x|^2)^{-\sigma/2}, \quad \forall x\in{\mathbb R}^d
\]
for $c > 0$ sufficiently small, and
\[|\varphi(x)|= \omega(x)|x|^{-\sigma}, \quad \forall |x|\geq A \]
for some constant $A>0$ and some $\omega\in L^\infty({\mathbb R}^d)$ homogeneous of degree $0$ with $\|\omega\|_\infty$ sufficiently small.
Let $u,u_{\mathcal{S}}$ be the unique solutions to \eqref{0} with data $\varphi$ and $\omega(x)|x|^{-\frac{2-b}{\alpha}}$ given by Theorems \ref{global} and \ref{selfsimilar}, respectively. Then
\begin{enumerate}
\item\label{asym1} if $\sigma=\frac{2-b}{\alpha}$ and $q\in[r,\frac{d}{\tilde{s}_1})$ there exists $\delta > 0$ such that \begin{equation}\label{p10}
\|u(t)-u_{\mathcal{S}}(t)\|_q \leq Ct^{-(\frac{\sigma}{2}-\frac{d}{2q})-\delta}, \text{ for all }t>0
\end{equation}
where $C$ is a positive constant.
In particular, if $\omega \neq 0$, there exist $c_1, c_2>0$ such that for $t$ large
\[
c_1 t^{-(\frac{2-b}{2\alpha}-\frac{d}{2q})} \leq \|u(t)\|_q \leq c_2 t^{-(\frac{\sigma}{2}-\frac{d}{2q})}.
\]
\item\label{asym2} if $\frac{2-b}{\alpha}<\sigma<(2-b)(\frac{\tilde{s}_2+2-b}{\tilde{s}_1\alpha}-1)$ and $q\in[r_1,\frac{d}{\tilde{s}_1})$, where $r_1$ is as in Lemma \ref{11}, there exists $\delta > 0$ such that
$$\|u(t)-e^{-t{\mathcal L}_a}(\omega(x)|x|^{-\sigma})\|_q \leq Ct^{-(\frac{\sigma}{2}-\frac{d}{2q})-\delta}
$$
for all $t$ large enough.
In particular, if $\omega \neq 0$, there exist $c_1, c_2>0$ such that for $t$ large
\[
c_1 t^{-(\frac{\sigma}{2}-\frac{d}{2q})} \leq \|u(t)\|_q \leq c_2 t^{-(\frac{\sigma}{2}-\frac{d}{2q})}.
\]
\end{enumerate}
\end{theorem}
\begin{remark}
As remarked earlier (see Remark \ref{r1}), when $a=0$ one can actually obtain the above results even for $q=\frac{d}{\tilde{s}_1}=\infty$, recovering the result in \cite{slimene2017well}.
\end{remark}
\begin{remark}
In the case \eqref{asym1}, since $\frac{\sigma}{2}-\frac{d}{2q}=\frac{2-b}{2\alpha}-\frac{d}{2q}>0$, it follows from \eqref{p10} that, for $t$ large, the solution is close to a nonlinear self-similar solution. Thus, in this case, the solution has nonlinear behaviour near $t=\infty$. By similar reasoning, in the case \eqref{asym2}, the solution has linear behaviour near $t=\infty$, which can be related to a scattering phenomenon. In both cases, the final conclusion gives $\|u(t)\|_q\to0$ as $t\to\infty$.
\end{remark}
\begin{remark}
The method of proof of Theorem \ref{asym} \eqref{asym2} differs at a certain stage from the one in \cite{slimene2017well} (which deals with the case $a=0$) due to the fact that, in the case $a\neq0$, one does not have the decay estimate \eqref{d} for $q=\infty$, see Remark \ref{r2}.
\end{remark}
\begin{remark}\label{rfu} \
\begin{enumerate}
\item Notice that
\begin{equation*}
\tilde{s}_1=\tilde{s}_1(a)=\begin{cases}
s_1(a)&\text{ for }a\in[-\frac{(d-2)^2}{4},0)\\
0 &\text{ for }a\in[0,\infty)
\end{cases},
\qquad
\tilde{s}_2=\tilde{s}_2(a)=\begin{cases}
s_2(a)&\text{ for }a\in[-\frac{(d-2)^2}{4},0)\\
d-2 &\text{ for }a\in[0,\infty)
\end{cases}.
\end{equation*}
\item\label{rfu2}
For $a=0,$
Theorem \ref{ste} says that \eqref{d} is valid for $1<p\leq q<\infty$. However, since $e^{-t\mathcal{L}_0}f=e^{t\Delta}f=k_t*f$, where $k_t(x)=(4\pi t)^{-d/2}\exp(-|x|^2/(4t))$, the inequality \eqref{d} in fact holds for all $1\leq p\leq q\leq\infty$:
by Young's inequality, $\|e^{t\Delta}f\|_q\leq\|k_t\|_r\|f\|_p=t^{-\frac{d}{2r'}}\|k_1\|_r\|f\|_p$ with $\frac{1}{r}=1+\frac{1}{q}-\frac{1}{p}$, i.e. $\frac{1}{r'}=\frac{1}{p}-\frac{1}{q}$ (the assumption $p\leq q$ ensures that $r\geq1$).
\item
For $-\frac{(d-2)^2}{4}\leq a<0$,
from Theorem \ref{ste}, \eqref{d} is valid for $1<\frac{d}{s_2+2}<p\leq q<\frac{d}{s_1}<\infty$. Therefore in this case, we get results valid with more restrictions (involving $s_1,s_2$) compared to the case $a=0$. For example, for the local theory, compare the regions for the case (a) and the case (b) in Figure \ref{f2}.
\item On the other hand, for $a>0$,
from Theorem \ref{ste}, \eqref{d} is valid for all $1<p\leq q<\infty$.
Thus, in this case, the results match those for the case $a=0$ (and carry no restrictions involving $s_1,s_2$), except for the endpoint restrictions.
\end{enumerate}
\end{remark}
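The Young's-inequality computation in item \eqref{rfu2} is also easy to test numerically. The following sketch (an illustration only; it treats the free case $a=0$ in dimension one, where the free estimate also holds, and all grid parameters are arbitrary) checks the bound $\|e^{t\Delta}f\|_\infty\leq(4\pi t)^{-1/2}\|f\|_1$ by direct quadrature:

```python
import numpy as np

def heat(f_vals, x, t):
    """Apply e^{t Delta} in d = 1 to samples f_vals on the grid x by quadrature."""
    dx = x[1] - x[0]
    kernel = lambda y: np.exp(-y**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.array([np.sum(kernel(xi - x) * f_vals) * dx for xi in x])

x = np.linspace(-40, 40, 2001)
f = np.where(np.abs(x) < 1, 1.0, 0.0)        # indicator of (-1, 1): ||f||_1 = 2
sups = {t: np.max(np.abs(heat(f, x, t))) for t in (1.0, 4.0, 16.0)}
bounds = {t: (4 * np.pi * t) ** -0.5 * 2.0 for t in sups}  # (4 pi t)^{-1/2} ||f||_1
```

The computed sup norms stay below the kernel bound and decay in $t$, in line with \eqref{d} for $p=1$, $q=\infty$.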
The paper is organized as follows. In Section \ref{keat}, we prove the key estimate
for $e^{-t{\mathcal L}_a}(|\cdot|^{-b} f)$. In Section \ref{es}, we prove Theorems \ref{local}, \ref{global} and \ref{selfsimilar}. In Section \ref{AB}, we prove Theorem \ref{asym}. \\
\noindent
\textit{Notations}. The notation $A \lesssim B $ means $A \leq cB$ for some universal constant $c > 0.$ The Schwartz space is denoted by $\mathcal{S}(\mathbb R^{d})$, and the space of tempered distributions is denoted by $\mathcal{S'}(\mathbb R^{d}).$
\section{Key Ingredient}\label{keat}
In order to incorporate the inhomogeneous nonlinearity, we first establish a fixed-time estimate for $e^{-t\mathcal{L}_a}|\cdot|^{-b}$ in the next proposition.
\begin{proposition}\label{est0}
Let $d\geq2$, $a\geq-\frac{(d-2)^2}{4}$, $0\leq b<d$ and
\begin{equation}\label{p1}
\tilde{s}_1<\frac{d}{q}\leq b+\frac{d}{p}<\tilde{s}_2+2.
\end{equation}
Then, for $t>0$,
\[
\|e^{-t{\mathcal L}_a}(|\cdot|^{-b}f)\|_{L^q}\leq ct^{-\frac{d
}{2}(\frac{1}{p}-\frac{1}{q})-\frac{b}{2}}\|f\|_{L^p}.
\]
\end{proposition}
\begin{proof}
Assume first that $t=1$. Put $m=\frac{d}{b}.$ Since $\frac{1}{q}<\frac{1}{m}+\frac{1}{p}<\frac{\tilde{s}_2+2}{d},$ we can choose $\epsilon,\delta>0$ so that
\begin{equation}\label{p2}
\frac{1}{q}<\frac{1}{m+\delta}+\frac{1}{p}<\frac{1}{m-\epsilon}+\frac{1}{p}<\frac{\tilde{s}_2+2}{d}.
\end{equation}
Split $|\cdot|^{-b}$ as follows
\[
|\cdot|^{-b}=k_1+k_2 \quad\text{with }k_1\in L^{m-\epsilon}({\mathbb R}^d), \quad k_2\in L^{m+\delta}({\mathbb R}^d),
\]
for example one may take
$$k_1= \chi_{ \{|x|\leq 1 \}} |\cdot|^{-b}, \quad k_2= \chi_{ \{|x|> 1 \}} |\cdot|^{-b}. $$
Let $$\frac{1}{r_1}=\frac{1}{m-\epsilon}+\frac{1}{p}, \quad \frac{1}{r_2}=\frac{1}{m+\delta}+\frac{1}{p}$$ and note
that $\tilde{s}_1<\frac{d}{q}<\frac{d}{r_i}<\tilde{s}_2+2$.
By Theorem \ref{ste} and H\"older's inequality, we have
\begin{eqnarray*}
\|e^{-{\mathcal L}_a}(|\cdot|^{-b}f)\|_q&\leq&\|e^{-{\mathcal L}_a}(k_1f)\|_q+\|e^{-{\mathcal L}_a}(k_2f)\|_q\\
&\lesssim&\|k_1f\|_{r_1}+\|k_2f\|_{r_2}\\
&\leq&(\|k_1\|_{m-\epsilon}+\|k_2\|_{m+\delta})\|f\|_{p}\lesssim\|f\|_p.
\end{eqnarray*}
Thus the case $t=1$ is proved.
For $\varphi\in\mathcal{S}({\mathbb R}^d)$, set $D_\lambda\varphi=\varphi(\lambda\,\cdot)$. Then we claim that $$e^{-t{\mathcal L}_a}(D_\lambda\varphi) =D_\lambda(e^{-\lambda^2t{\mathcal L}_a}\varphi).$$
In fact if $v(t)=e^{-t{\mathcal L}_a}(D_\lambda\varphi)$, then $v$ solves
\begin{equation}\label{si}
\begin{cases} v_t=-{\mathcal L}_a v\\
v(0,x)=D_\lambda\varphi(x)=\varphi(\lambda x).
\end{cases}
\end{equation}
Put $w(t)=u(\lambda^2t)$ where $u(t)=e^{-t{\mathcal L}_a}\varphi$. Then
we have
\begin{eqnarray*}
(D_\lambda w(t))_t & = & D_\lambda w_t(t)\\
& = & \lambda^2D_\lambda u_t(\lambda^2t)\\
&= & -\lambda^2D_\lambda {\mathcal L}_a u(\lambda^2t) \quad (\because u_t=-{\mathcal L}_a u)\\
&= & -\lambda^2D_\lambda {\mathcal L}_a w(t)\\
& = & -\lambda^2({\mathcal L}_a w(t))(\lambda\cdot)\\
& = & -{\mathcal L}_a(w(t,\lambda \cdot)) \\
& = & -{\mathcal L}_a(D_\lambda w(t)),
\end{eqnarray*}
and
$$(D_\lambda w)(0)=(D_\lambda u)(0)=D_\lambda \varphi.$$
Thus, $D_\lambda w$ also satisfies \eqref{si}.
Therefore, by uniqueness of solutions of the Banach-space-valued ODE (semigroup theory), one has $$v(t)=D_\lambda w(t).$$ This establishes the claim.
Using the above claim, we have $e^{-\lambda^2t{\mathcal L}_a}\varphi=D_\lambda^{-1}e^{-t{\mathcal L}_a}(D_\lambda\varphi)=D_{\lambda^{-1}}e^{-t{\mathcal L}_a}(D_\lambda\varphi)$; putting $\lambda=1/\sqrt{t}$, one has $e^{-{\mathcal L}_a}\varphi=D_{\sqrt{t}}e^{-t{\mathcal L}_a}(D_{1/\sqrt{t}}\varphi)$. Then from the case $t=1$ it follows that
\[
\|D_{\sqrt{t}}e^{-t{\mathcal L}_a}D_{1/\sqrt{t}}(|\cdot|^{-b}\varphi)\|_q\lesssim\|\varphi\|_p.
\]
Since $D_\lambda|\cdot|^{-b}=\lambda^{-b}|\cdot|^{-b}$, we get,
\[
t^{\frac{b}{2}} \|D_{\sqrt{t}}e^{-t{\mathcal L}_a}(|\cdot|^{-b}D_{1/\sqrt{t}}\varphi)\|_q\lesssim\|\varphi\|_p.
\]
Replacing $\varphi$ by $D_{\sqrt{t}}\varphi$ we have
\[
t^{\frac{b}{2}} \|D_{\sqrt{t}}e^{-t{\mathcal L}_a}(|\cdot|^{-b}\varphi)\|_q\lesssim\|D_{\sqrt{t}}\varphi\|_p
\]which implies
\[
t^{\frac{b}{2}-\frac{d}{2q}} \|e^{-t{\mathcal L}_a}(|\cdot|^{-b}\varphi)\|_q\lesssim t^{-\frac{d}{2p}}\|\varphi\|_p
\]using $\|D_\lambda f\|_p=\lambda^{-d/p}\|f\|_p$. This completes the proof.
\end{proof}
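In the free case $a=0$, the scaling identity $e^{-t{\mathcal L}_0}(D_\lambda\varphi)=D_\lambda(e^{-\lambda^2t{\mathcal L}_0}\varphi)$ used above can also be checked against the closed-form action of $e^{t\Delta}$ on Gaussians, $e^{t\Delta}e^{-cx^2}=(1+4ct)^{-1/2}e^{-cx^2/(1+4ct)}$ in $d=1$. A minimal numerical sketch (the parameter values are arbitrary):

```python
import numpy as np

def heat_gaussian(c, t, x):
    """e^{t Delta} applied to x -> exp(-c x^2) in d = 1, evaluated at x."""
    return (1 + 4 * c * t) ** -0.5 * np.exp(-c * x**2 / (1 + 4 * c * t))

x = np.linspace(-5, 5, 101)
t, lam = 0.7, 3.0
# Left-hand side:  e^{t Delta}(D_lam phi) with phi(x) = exp(-x^2),
# so that D_lam phi(x) = exp(-lam^2 x^2).
lhs = heat_gaussian(lam**2, t, x)
# Right-hand side: D_lam(e^{lam^2 t Delta} phi) = (e^{lam^2 t Delta} phi)(lam x).
rhs = heat_gaussian(1.0, lam**2 * t, lam * x)
```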
\begin{remark}
In view of Remark \ref{rfu} \eqref{rfu2}, the first strict inequality in \eqref{p1}, namely $\tilde{s}_1<\frac{d}{q}$, can be relaxed to $\tilde{s}_1\leq\frac{d}{q}$ when $a=0$. However, the same cannot be done for the last inequality $b+\frac{d}{p}<\tilde{s}_2+2$, as a continuity argument is used to achieve \eqref{p2}.
\end{remark}
\begin{remark} When $a=0$, the above result holds even in dimension $d=1$. The restriction $d\geq2$ for $a\neq0$ is due to the fact that the operator ${\mathcal L}_a$ is not defined on all of ${\mathbb R}$.
\end{remark}
\section{Existence of Solution}\label{es}
\subsection{Local Existence}
\begin{proof}[{\bf Proof of Theorem \ref{local} \eqref{i}}]
Let $K_t(\varphi)=e^{-t{\mathcal L}_a}(|\cdot|^{-b}|\varphi|^\alpha \varphi)$. Since
$\frac{d(\alpha+1)}{\tilde{s}_2+2-b}<q<\frac{d}{\tilde{s}_1},$ we have \[
\tilde{s}_1<\frac{d}{q}<b+\frac{d}{q/(\alpha+1)}<\tilde{s}_2+2.
\]
By Proposition \ref{est0}, we may obtain
\begin{eqnarray*}
\|K_t(\varphi)-K_t(\psi)\|_q&\lesssim&t^{-\frac{d
}{2}(\frac{\alpha+1}{q}-\frac{1}{q})-\frac{b}{2}}\||\varphi|^\alpha \varphi-|\psi|^\alpha \psi\|_{\frac{q}{\alpha+1}}\\
&\lesssim &t^{-\frac{d\alpha}{2q}-\frac{b}{2}}\|(|\varphi|^\alpha+|\psi|^\alpha)|\varphi-\psi|\|_{\frac{q}{\alpha+1}}\\
&\lesssim &t^{-\frac{d\alpha}{2q}-\frac{b}{2}}(\|\varphi\|_q^\alpha+\|\psi\|_q^\alpha)\|\varphi-\psi\|_q\\
&\lesssim &t^{-\frac{d\alpha}{2q}-\frac{b}{2}}M^\alpha\|\varphi-\psi\|_q,
\end{eqnarray*}
provided $\|\varphi\|_q,\|\psi\|_q\leq M$. Note that
\begin{itemize}
\item $K_t:L^q({\mathbb R}^d)\to L^q({\mathbb R}^d)$ is locally Lipschitz with Lipschitz constant $C_M(t)=t^{-\frac{d\alpha}{2q}-\frac{b}{2}}M^\alpha$ on $\{\varphi\in L^q({\mathbb R}^d):\|\varphi\|_q\leq M\}$
\item
$C_M\in L^1(0,\epsilon)$ as $\frac{d\alpha}{2q}+\frac{b}{2}<1,$ i.e. $q>\frac{d\alpha}{2-b}=q_c$
\item $e^{-s{\mathcal L}_a }K_t=K_{s+t}$
\item $t\mapsto K_t(0)\equiv0\in L_t^1(0,\epsilon).$
\end{itemize}
By \cite[Theorem 1, p. 279]{weissler1979semilinear}, the result follows.
In order to ensure room for $q$ between $\max(\frac{d(\alpha+1)}{\tilde{s}_2+2-b},q_c)$ and $\frac{d}{\tilde{s}_1},$ the hypothesis $0< \alpha < \frac{2-b}{\tilde{s}_1}$ is imposed, while $b< 2$ ensures that $q_c>0$.
\end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{local} \eqref{ii}}] {\bf Part (i) in the case $q>q_c$.} (The case $q=q_c$ will be treated along with the global existence proof.)
Since $\alpha < \frac{2-b}{\tilde{s}_1}< \frac{2-b}{\tilde{s}_1}+ ( \frac{\tilde{s}_2}{\tilde{s}_1}-1)$ and $\frac{d}{\tilde{s}_2+2}<q<\frac{d}{\tilde{s}_1},$ we have \[
\max\left(\tilde{s}_1,\frac{{d}/{q}-b}{\alpha+1}\right)<\min\left(\frac{\tilde{s}_2+2-b}{\alpha+1},\frac{d}{q}\right).
\] Therefore, one can choose $r$ such that\[
\max\left(\tilde{s}_1,\frac{{d}/{q}-b}{\alpha+1}\right)<\frac{d}{r}<\min\left(\frac{\tilde{s}_2+2-b}{\alpha+1},\frac{d}{q}\right),
\] which implies
\begin{equation}\label{2}
\tilde{s}_1<\frac{d}{r}<\frac{d}{q}<b+\frac{d(\alpha+1)}{r}<\tilde{s}_2+2.
\end{equation}
Thus Proposition \ref{est0} can be used with exponent pairs $(\frac{r}{\alpha+1},q)$ and $(\frac{r}{\alpha+1},r)$. From $q>q_c$, it follows that
\[
\frac{d(\alpha+1)}{q}-2<\frac{d}{q}-b.
\]
This and \eqref{2} imply that $\frac{d(\alpha+1)}{q}-2<\frac{d}{q}-b<\frac{d(\alpha+1)}{r}$, which gives $d(\frac{1}{q}-\frac{1}{r})(\alpha+1)<2$, i.e. $\beta(\alpha+1)<1$, where $\beta=\frac{d}{2}(\frac{1}{q}-\frac{1}{r})$.
Introduce the space
$$B_M^T=\{u\in C([0,T];L^q({\mathbb R}^d))\cap C((0,T];L^r({\mathbb R}^d)):\max[\sup_{t\in [0,T ]} \|u(t)\|_q, \sup_{t\in[0,T ]} t^\beta\|u(t)\|_r]\leq M\}.$$ This space is endowed with the metric
\[
d(u,v):=\max[\sup_{t\in [0,T ]} \|u(t)-v(t)\|_q, \sup_{t\in[0,T ]} t^\beta\|u(t)-v(t)\|_r].
\]
Consider the mapping
\[
\mathcal{J}_\varphi(u)(t)=e^{-t{\mathcal L}_a}\varphi+\mu\int_0^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds.
\]
Let $\varphi,\psi\in L^q({\mathbb R}^d)$ and $u,v\in B_M^T.$ By Proposition \ref{est0} with exponent pairs $(\frac{r}{\alpha+1},q)$ we get
\begin{eqnarray*}
&&\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_q\\
&\lesssim&\|\varphi-\psi\|_q+\int_0^t(t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}\||u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)\|_{\frac{r}{\alpha+1}}ds\\
&\lesssim&\|\varphi-\psi\|_q+\int_0^t(t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}(\|u(s)\|_r^\alpha+\|v(s)\|_r^\alpha)\| u(s)- v(s)\|_rds\\
&\lesssim&\|\varphi-\psi\|_q+M^\alpha\int_0^t(t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}s^{-\beta\alpha}\| u(s)- v(s)\|_rds\\
&\lesssim&\|\varphi-\psi\|_q+M^\alpha d(u,v)\int_0^t(t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}s^{-\beta(\alpha+1)}ds\\
&\lesssim&\|\varphi-\psi\|_q+M^\alpha d(u,v)t^{1-\frac{d\alpha}{2q}-\frac{b}{2}}\int_0^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma
\end{eqnarray*}
as $\beta=\frac{d}{2}(\frac{1}{q}-\frac{1}{r})$. It follows from $q>q_c$ that $1-\frac{d\alpha}{2q}-\frac{b}{2}>0$ and from $r>q$ that
\[
\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})+\frac{b}{2}<\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{r})+\frac{b}{2}=\frac{d\alpha}{2r}+\frac{b}{2}<\frac{d\alpha}{2q}+\frac{b}{2}<1.
\]
This together with $\beta(\alpha+1)<1$ implies that $\int_0^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma<\infty.$ Hence,
\begin{equation}\label{3}
\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_q\lesssim\|\varphi-\psi\|_q+M^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,v).
\end{equation}
Similarly, by Proposition \ref{est0} with exponent pairs $(\frac{r}{\alpha+1},r)$ we get
\begin{eqnarray*}
&&\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_r\\
&\lesssim&t^{-\beta}\|\varphi-\psi\|_q+\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}(\|u(s)\|_r^\alpha+\|v(s)\|_r^\alpha)\| u(s)- v(s)\|_r ds\\
&\lesssim&t^{-\beta}\|\varphi-\psi\|_q+M^\alpha d(u,v)\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta(\alpha+1)} ds.\\
\end{eqnarray*}
Hence,
\begin{eqnarray}\label{4}
&&t^\beta\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_r \nonumber \\
&\lesssim&\|\varphi-\psi\|_q+M^\alpha d(u,v)t^\beta\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta(\alpha+1)} ds \nonumber \\
&\lesssim&\|\varphi-\psi\|_q+M^\alpha d(u,v)t^{1-\frac{d\alpha}{2q}-\frac{b}{2}}\int_0^1(1-\sigma)^{-\frac{d\alpha}{2r}-\frac{b}{2}}\sigma^{-\beta(\alpha+1)} d\sigma \nonumber \\
& \lesssim & \|\varphi-\psi\|_q+M^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,v).
\end{eqnarray}
By \eqref{3} and \eqref{4}, we obtain
\begin{equation}\label{5}
d(\mathcal{J}_\varphi(u),\mathcal{J}_\psi(v))\leq c\|\varphi-\psi\|_q+cM^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,v)
\end{equation} for some $c>0$.
For a given $\varphi\in L^q({\mathbb R}^d)$, choose $\rho>0$ so that $\|\varphi\|_q\leq\rho$. Take $M=2c\rho$. Then for $u\in B_M^T$, from \eqref{5} it follows that
\[
d(\mathcal{J}_\varphi(u),0)\leq c\rho+cM^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,0)\leq \frac{M}{2}+cM^{\alpha+1} T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}\leq M
\]
provided $T>0$ is small enough that $cM^{\alpha} T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}\leq\frac{1}{2}$. This is possible since $1-\frac{d\alpha}{2q}-\frac{b}{2}>0$, as a consequence of $q>q_c$ mentioned earlier. Thus $\mathcal{J}_\varphi(u)\in B_M^T$. Note that $T\sim M^{-\alpha(1-\frac{d\alpha}{2q}-\frac{b}{2})^{-1}}\sim \rho^{-\alpha(1-\frac{d\alpha}{2q}-\frac{b}{2})^{-1}}$, so the existence time $T$ depends only on $\|\varphi\|_q$ rather than on the profile of $\varphi$ itself.
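Indeed, writing $\kappa:=1-\frac{d\alpha}{2q}-\frac{b}{2}>0$ (an auxiliary symbol introduced here only for brevity), the smallness condition can be solved for $T$:
\[
cM^{\alpha}T^{\kappa}\leq\frac{1}{2}
\quad\Longleftrightarrow\quad
T\leq(2cM^{\alpha})^{-1/\kappa},
\qquad\text{so that }T\sim M^{-\alpha/\kappa}\sim\rho^{-\alpha/\kappa}
\]
since $M=2c\rho$.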
Also for $u,v\in B_M^T$, from \eqref{5} we have
\begin{align*}
d(\mathcal{J}_\varphi(u),\mathcal{J}_\varphi(v))\leq cM^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,v)\leq\frac{1}{2}d(u,v)
\end{align*} which says that $\mathcal{J}_\varphi$ is a contraction on $B_M^T$. Therefore it has a unique fixed point, say $u$; i.e., there is a unique $u\in B_M^T$ such that $\mathcal{J}_\varphi(u)=u$. This completes the proof of part (i).\\
\noindent
\textbf{Part (ii) (i.e. uniqueness) in the case $q>q_c$.} Let $u_1,u_2$ be two solutions satisfying
\[
\sup_{t\in[0,T ]} \|u_j(t)\|_q<\infty,\qquad \sup_{t\in[0,T ]} t^{\frac{d}{2}(\frac{1}{q}-\frac{1}{r})}\|u_j(t)\|_r<\infty.
\]
Choose $\tilde{M}$ big enough so that
\[\sup_{t\in[0,T ]} \|u_j(t)\|_q<\tilde{M},\qquad\sup_{t\in[0,T ]} t^\beta\|u_j(t)\|_r<\tilde{M}.\]
Then $u_1,u_2\in B_{\tilde{M}}^T$. Suppose $u_1\neq u_2$, and let $t_0$ be the infimum of those $t\in[0,T)$ for which $u_1(t)\neq u_2(t)$. Then, as above, one can choose $0<\tilde{T}<T-t_0$, depending on $\|u_1(t_0)\|_q=\|u_2(t_0)\|_q$, so that $\tilde{\mathcal{J}}_{u_1(t_0)}$, given by
\[
\tilde{\mathcal{J}}_{u_1(t_0)}(u)(t)=e^{-(t-t_0){\mathcal L}_a}u_1(t_0)+\mu\int_{t_0}^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds,
\]
has a unique fixed point in $B_{\tilde{M}}^{\tilde{T}}$. Since $u_1,u_2$ are solutions to \eqref{0}, they are also fixed points of $\tilde{\mathcal{J}}_{u_1(t_0)}$, and hence $u_1(t)=u_2(t)$ for $t\in[t_0,t_0+\tilde{T})$, which is a contradiction.
Let $u,v$ be the unique solutions with data $\varphi,\psi$ respectively; then from \eqref{5} it follows that
\begin{align*}
d(u,v)\leq c\|\varphi-\psi\|_q+cM^\alpha T^{1-\frac{d\alpha}{2q}-\frac{b}{2}}d(u,v)\leq c\|\varphi-\psi\|_q+\frac{1}{2}d(u,v)\Longrightarrow d(u,v)\leq 2c\|\varphi-\psi\|_q
\end{align*}implying continuous dependence of the solution on the data.
The blowup alternative, part \eqref{iii} regarding $T_{max}$, is standard.
\end{proof}
\subsection{Global Existence}
Using \eqref{2} for $q=q_c$: if $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$, i.e. $\frac{d}{\tilde{s}_2+2}<q_c<\frac{d}{\tilde{s}_1}$, then there exists $r>q_c$ such that
\begin{equation}\label{6}
\tilde{s}_1<\frac{d}{r}<\frac{d}{q_c}<b+\frac{d(\alpha+1)}{r}<\tilde{s}_2+2.
\end{equation}
Set \begin{equation}\label{b1}
\beta=\frac{d}{2}(\frac{1}{q_c}-\frac{1}{r})=\frac{2-b}{2\alpha}-\frac{d}{2r}
\end{equation} as before. Note that using $q_c=\frac{d\alpha}{2-b}$ and \eqref{6}, $$\frac{d(\alpha+1)}{q_c}-2=\frac{d}{q_c}-b<\frac{d(\alpha+1)}{r}$$ which implies $\beta(\alpha+1)<1$. Also $\frac{d\alpha}{2r}+\frac{b}{2}<\frac{d\alpha}{2q_c}+\frac{b}{2}=1.$
\begin{theorem}[global existence]\label{global2}
Let $0 < b < \min(2,d)$ and $\tilde{s}_1<\frac{\tilde{s}_2+2-b}{\alpha+1}$, $\frac{d}{\tilde{s}_2+2}<q_c<\frac{d}{\tilde{s}_1} $. Let $r$ verify \eqref{6} and $\beta$ be as defined in \eqref{b1}.
Suppose that $\rho > 0$ and $M > 0$ satisfy the inequality\[
c\rho+cM^{\alpha+1} \leq M,
\] where $c = c(\alpha, d, b, r) > 0$
is a constant that can be computed explicitly. Let $\varphi\in\mathcal{S}'({\mathbb R}^d)$ be
such that
\begin{equation}\label{8}
\sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a} \varphi\|_r \leq\rho.
\end{equation}
Then there exists a unique global solution u of \eqref{0} such that
\[
\sup_{t>0}t^\beta\|u(t)\|_r \leq M.
\]
Furthermore,
\begin{enumerate}
\item $u(t)-e^{-t{\mathcal L}_a}\varphi\in C([0,\infty);L^s({\mathbb R}^d))$, for $\tilde{s}_1<\frac{d}{q_c} <\frac{d}{s}<b +\frac{(\alpha+1)d}{r}<\tilde{s}_2+2$\label{g21}
\item $u(t)-e^{-t{\mathcal L}_a}\varphi\in L^\infty((0,\infty);L^{q_c}({\mathbb R}^d))$, if $\tilde{s}_1<\frac{d}{q_c}< b +\frac{d(\alpha+1)}{r}<\tilde{s}_2+2$ \label{g22}
\item $\lim_{t\to0} u(t) = \varphi$ in $L^{s}({\mathbb R}^d)$ if $\varphi\in L^{s}({\mathbb R}^d)$ for $s$ satisfying $\frac{d}{q_c}\leq\frac{d}{s} < b +\frac{d(\alpha+1)}{r}<\tilde{s}_2+2$\label{g23}
\item $\lim_{t\to0} u(t) = \varphi$ in $\mathcal{S}'({\mathbb R}^d)$ for $\varphi\in\mathcal{S}'({\mathbb R}^d)$ \label{g25}
\item $\sup_{t>0} t^{\frac{2-b}{2\alpha}-\frac{d}{2q}}\|u(t)\|_q \leq C_M<\infty,$ for all $q\in [r,\frac{d}{\tilde{s}_1})$ with $C_M\to0$ as $M\to0$\label{g24}.
\end{enumerate}
Moreover, let $\varphi$ and $\psi$ satisfy \eqref{8}, and let $u$ and $v$ be, respectively, the solutions of \eqref{0} with initial data $\varphi$ and $\psi$. Then
\[
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2q}} \|u(t)-v(t)\|_q \leq C \sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r , \text{ for all }q\in[r,\frac{d}{\tilde{s}_1}).
\]
If in addition, $e^{-t{\mathcal L}_a}(\varphi-\psi)$ has the stronger decay property
\[
\sup_{t>0}t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r <\infty,
\]
for some $\delta > 0$ such that $\beta(\alpha + 1) + \delta < 1$, and with $M$ perhaps smaller, then\begin{equation}\label{9} \sup_{t>0}t^{\beta+\delta}\|u(t)-v(t)\|_r \leq C\sup_{t>0}t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r
\end{equation}
where $C>0$ is a constant.
\end{theorem}
\begin{proof}[{\bf Proof}]
Let
$$B_M=\{u:(0,\infty)\to L^r({\mathbb R}^d): \sup_{t>0} t^\beta\|u(t)\|_r\leq M\}$$ and
\[
d(u,v):= \sup_{t>0} t^\beta\|u(t)-v(t)\|_r.
\]
Let $\varphi,\psi\in\mathcal{S}'({\mathbb R}^d)$ and $u,v\in B_M.$ Using Proposition \ref{est0} with the exponent pair $(\frac{r}{\alpha+1},r)$ we get
\begin{eqnarray}\label{25}
&& t^\beta \|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_r\nonumber\\
& \lesssim& t^\beta \|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+ t^\beta \int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}(\|u(s)\|_r^\alpha+\|v(s)\|_r^\alpha)\| u(s)- v(s)\|_r ds\\
&\lesssim& t^\beta \|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+ t^\beta M^\alpha d(u,v)\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta(\alpha+1)} ds\nonumber\\
& \lesssim & t^\beta\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha d(u,v)t^{\beta+1-\frac{d\alpha}{2r}-\frac{b}{2}-\beta(\alpha+1)}\int_0^1(1-\sigma)^{-\frac{d\alpha}{2r}-\frac{b}{2}}\sigma^{-\beta(\alpha+1)} d\sigma.
\end{eqnarray}
Since $\beta+1-\frac{d\alpha}{2r}-\frac{b}{2}-\beta(\alpha+1)=0$, we get
\begin{equation}\label{7}
d(\mathcal{J}_\varphi(u),\mathcal{J}_\psi(v))\lesssim\sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha d(u,v).
\end{equation}
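The exponent identity invoked above can be verified directly from \eqref{b1}: since $\alpha\beta=\frac{2-b}{2}-\frac{d\alpha}{2r}$,
\[
\beta+1-\frac{d\alpha}{2r}-\frac{b}{2}-\beta(\alpha+1)
=1-\frac{b}{2}-\frac{d\alpha}{2r}-\alpha\beta
=1-\frac{b}{2}-\frac{d\alpha}{2r}-\frac{2-b}{2}+\frac{d\alpha}{2r}=0.
\]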
Putting $\psi=0$ and $v=0$ in \eqref{7}, and using the hypothesis, we have
\[
\sup_{t>0}t^\beta\|\mathcal{J}_\varphi(u)(t)\|_r=d(\mathcal{J}_\varphi(u),0)\leq c\sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a}\varphi\|_r+cM^\alpha d(u,0)\leq c\rho+cM^\alpha\leq M
\]implying $\mathcal{J}_\varphi(u)\in B_M$. Putting $\psi=\varphi$ in \eqref{7}
\[
d(\mathcal{J}_\varphi(u),\mathcal{J}_\varphi(v))\leq cM^\alpha d(u,v)
\]implying that $\mathcal{J}_\varphi$ is a contraction on $B_M$, as $cM^\alpha<1$. Hence it has a unique fixed point in $B_M$.
Now we prove \eqref{g21}. Let us first prove continuity at $t=0$. Take $s$ satisfying \[
\tilde{s}_1<\frac{d}{q_c}\leq\frac{d}{s}<b+\frac{d(\alpha+1)}{r}<\tilde{s}_2+2.
\]
Using Proposition \ref{est0} for the pair $(\frac{r}{\alpha+1},s)$ we have
\begin{eqnarray*}
\|u(t)-e^{-t{\mathcal L}_a}\varphi\|_s
&\lesssim &\int_0^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\|u(\tau)\|_r^{\alpha+1}d\tau\\
&\leq&M^{\alpha+1}\int_0^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\tau^{-\beta(\alpha+1)}d\tau\\
&=&M^{\alpha+1}t^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\int_0^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma\\
&=&M^{\alpha+1}t^{\frac{d}{2s}-\frac{2-b}{2\alpha}}\int_0^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma.
\end{eqnarray*}
Note that
\[
\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})+\frac{b}{2}<\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{q_c})+\frac{b}{2}<\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{r})+\frac{b}{2}=\frac{d\alpha}{2r}+\frac{b}{2}<\frac{d\alpha}{2q_c}+\frac{b}{2}=1
\]and $\beta(\alpha+1)<1$ implying $\int_0^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma<\infty$. Thus
\begin{equation}\label{p5}
\|u(t)-e^{-t{\mathcal L}_a}\varphi\|_s\lesssim M^{\alpha+1}t^{\frac{d}{2s}-\frac{2-b}{2\alpha}}.
\end{equation}
Since $\frac{d}{2s}-\frac{2-b}{2\alpha}>0$ (this follows from the condition $\frac{d}{q_c}<\frac{d}{s}$ in \eqref{g21}), we get \begin{equation}\label{p6}
u(t)-e^{-t{\mathcal L}_a}\varphi\to0
\end{equation} in $L^s({\mathbb R}^d)$ as $t\to0+$ and hence continuity at $t=0$.
For $t_0>0$, first note that $u(t)-e^{-t{\mathcal L}_a}\varphi-u(t_0)+e^{-t_0{\mathcal L}_a}\varphi=\mu\int_0^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds-\mu\int_0^{t_0} e^{-(t_0-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds$. For $0<t<t_0$,
\begin{align}
&\|\int_0^t e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau-\int_0^t e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\nonumber\\
&\leq\int_0^{t_0}\chi_{[0,t]}(\tau) \|e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))- e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))\|_sd\tau\to 0\label{p3}
\end{align}as $t\uparrow t_0$, using dominated convergence (indeed, for each $\tau$ the integrand, say $I(\tau)$, tends to zero as $t\uparrow t_0$, and
\begin{align*}
I(\tau)&\leq\chi_{[0,t]}(\tau)\left[ \|e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))\|_s+\| e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))\|_s\right]\\
&\lesssim\chi_{[0,t]}(\tau)\left[(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}+(t_0-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\right]\|u(\tau)\|_r^{\alpha+1}\\
&\leq M^{\alpha+1}\chi_{[0,t]}(\tau)\left[(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}+(t_0-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\right]\tau^{-\beta(\alpha+1)}
\end{align*} which is integrable).
On the other hand for $\frac{t_0}{2}\leq t<t_0$,
\begin{align*}
&\|\int_0^t e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau-\int_0^{t_0} e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\\
&=\|\int_t^{t_0} e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\\
&\leq M^{\alpha+1}\int_t^{t_0}(t_0-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\tau^{-\beta(\alpha+1)}d\tau\\
&=M^{\alpha+1}t_0^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\int_{t/t_0}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma\\
&\leq M^{\alpha+1}2^{2\beta(\alpha+1)}t_0^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\int_{t/t_0}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}d\sigma\\
&= M^{\alpha+1}2^{2\beta(\alpha+1)}t_0^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\int_0^{1-t/t_0}\sigma^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}d\sigma\\
&= M^{\alpha+1}2^{2\beta(\alpha+1)}t_0^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\frac{1}{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}(1-\frac{t}{t_0})^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}
\to0
\end{align*}as $t\uparrow t_0$. Using this and \eqref{p3} we get $u(t)-e^{-t_0{\mathcal L}_a}\varphi\to u(t_0)-e^{-t_0{\mathcal L}_a}\varphi$ as $t\uparrow t_0$ and so left continuity is established. Now for $0<t_0<t$,
\begin{align}
&\|\int_0^t e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau-\int_0^{t_0} e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\nonumber\\
&=\|\int_{t_0}^t e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\nonumber\\
&\leq\int_{t_0}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\tau^{-\beta(\alpha+1)}d\tau\nonumber\\
&=t^{1-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}-\beta(\alpha+1)}\int_{t_0/t}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{r}-\frac{1}{s})-\frac{b}{2}}\sigma^{-\beta(\alpha+1)}d\sigma\to0\label{p4}
\end{align}as $t\downarrow t_0$. Arguing as in \eqref{p3},
\begin{align*}
&\|\int_0^{t_0} e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau-\int_0^{t_0} e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))d\tau\|_s\\
&\leq\int_0^{t_0}\| e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))- e^{-(t_0-\tau){\mathcal L}_a}(|\cdot|^{-b}|u(\tau)|^\alpha u(\tau))\|_sd\tau\to0
\end{align*}as $t\downarrow t_0$. This together with \eqref{p4} implies $u(t)-e^{-t_0{\mathcal L}_a}\varphi\to u(t_0)-e^{-t_0{\mathcal L}_a}\varphi$ as $t\downarrow t_0$ so the right continuity is established. This proves \eqref{g21}.
\eqref{g22} follows from \eqref{p5}.
If $\varphi\in L^{s}({\mathbb R}^d)$, then $e^{-t{\mathcal L}_a}\varphi\to\varphi$ in $L^s({\mathbb R}^d)$ as $t\to0+$ (strong continuity of the semigroup).
Thus, using \eqref{p6}, we conclude $u(t) \to\varphi$ in $L^s({\mathbb R}^d)$.
This proves \eqref{g23} and hence \eqref{g25}.
For part \eqref{g24} we use an iteration. Set $r_0=r$ and $M_0=M$; then we have
\[
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2r_0}}\|u(t)\|_{r_0}\leq M_0.
\]
Take
\begin{equation}\label{24}
\frac{1}{r_1}=\begin{cases}
\frac{1}{r_0}-\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})&\text{ if }\frac{1}{r_0}-\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})>\frac{\tilde{s}_1}{d}\\
\text{any number in }(\frac{\tilde{s}_1}{d},\frac{1}{r_0})&\text{ otherwise}.
\end{cases}
\end{equation}
Then it is easy to see that $\tilde{s}_1 <\frac{d}{r_1}<\frac{d}{r_0}$, since $\frac{2-b}{d}-\frac{\alpha}{r_0}>0\Leftrightarrow r_0>q_c$. Thus using \eqref{6} we have \begin{equation}\label{27}
\tilde{s}_1 <\frac{d}{r_1}<\frac{d}{r_0} <b+ \frac{d(\alpha+1)}{r_0} < \tilde{s}_2+2.
\end{equation}
If the first case occurs in \eqref{24}, then $\frac{d}{2}\big(\frac{1}{r_0}-\frac{1}{r_1}\big)=\frac{1}{2}(\frac{2-b}{2}-\frac{d\alpha}{2r_0})<\frac{2-b}{2}-\frac{d\alpha}{2r_0}\Leftrightarrow\frac{d}{2}\big(\frac{\alpha+1}{r_0}-\frac{1}{r_1}\big)<1-\frac{b}{2}$.
If the second case occurs in \eqref{24}, then $\frac{1}{r_0}-\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})<\frac{\tilde{s}_1}{d}<\frac{1}{r_1}$. Therefore
$\frac{d}{2}\big(\frac{1}{r_0}-\frac{1}{r_1}\big)<\frac{d}{2}\big(\frac{1}{r_0}-\frac{1}{r_0}+\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})\big)=\frac{1}{2}(\frac{2-b}{2}-\frac{d\alpha}{2r_0})<\frac{2-b}{2}-\frac{d\alpha}{2r_0}\Rightarrow\frac{d}{2}\big(\frac{\alpha+1}{r_0}-\frac{1}{r_1}\big)<1-\frac{b}{2}$. Thus in both cases of \eqref{24},
\[
\frac{d}{2}\big(\frac{\alpha+1}{r_0}-\frac{1}{r_1}\big)<1-\frac{b}{2}.
\]
In view of this and \eqref{27}, using Lemma \ref{23} with $(s,q)=(r_0,r_1)$ we have
\begin{equation*}
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2r_1}}\|u(t)\|_{r_1}\leq cM_0(1+M_0^\alpha)=:M_1.
\end{equation*}
If the second case occurs in \eqref{24}, we stop the iteration; otherwise we next choose $r_2$ from $r_1$ as we have chosen $r_1$ from $r_0$ above.
Note that this iteration must stop (the second case must occur) after finitely many steps.
If not, we would find $\tilde{s}_1<\cdots<\frac{d}{r_{i+1}}<\frac{d}{r_i}<\cdots<\frac{d}{r_0}$ with
\begin{equation}\label{26}
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2r_{i+1}}}\|u(t)\|_{r_{i+1}}\leq cM_i(1+M_i^\alpha)=:M_{i+1}
\end{equation}
and $\frac{1}{r_i}-\frac{1}{r_{i+1}}=\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_i})\geq\frac{1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})>0$. Then $\frac{1}{r_0}=(\frac{1}{r_0}-\frac{1}{r_1})+(\frac{1}{r_1}-\frac{1}{r_2})+\cdots+(\frac{1}{r_i}-\frac{1}{r_{i+1}})+\frac{1}{r_{i+1}}\geq\frac{i+1}{2}(\frac{2-b}{d}-\frac{\alpha}{r_0})\to\infty$ as $i\to\infty$, which is a contradiction since $\frac{1}{r_0}<\infty$.
For $q\in(r_i,r_{i+1})$ we use the interpolation inequality together with \eqref{26}:
$\|u(t)\|_q\leq\|u(t)\|_{r_i}^\theta\|u(t)\|_{r_{i+1}}^{1-\theta}$, where $\frac{1}{q}=\frac{\theta}{r_i}+\frac{1-\theta}{r_{i+1}}$.
Proof of \eqref{9}: using $\mathcal{J}_\varphi(u)=u$, $\mathcal{J}_\psi(v)=v$ in \eqref{25} we have
\begin{eqnarray*}
\|u(t)-v(t)\|_r&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}(\|u(s)\|_r^\alpha+\|v(s)\|_r^\alpha)\| u(s)- v(s)\|_r ds\\
&\leq&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta\alpha}\| u(s)- v(s)\|_r ds.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
t^{\beta+\delta}\|u(t)-v(t)\|_r&\leq&t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha t^{\beta+\delta}\\
&&\cdot\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta\alpha-\beta-\delta}s^{\beta+\delta}\| u(s)- v(s)\|_r ds\\
&\leq&t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha t^{\beta+\delta}\sup_{\tau>0} \tau^{\beta+\delta}\| u(\tau)- v(\tau)\|_r\\
&&\cdot\int_0^t(t-s)^{-\frac{d\alpha}{2r}-\frac{b}{2}}s^{-\beta(\alpha+1)-\delta} ds\\
&\leq&t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+M^\alpha t^{\beta+\delta -\frac{d\alpha}{2r}-\frac{b}{2}-\beta(\alpha+1)-\delta+1}\\
&&\cdot\sup_{\tau>0} \tau^{\beta+\delta}\| u(\tau)- v(\tau)\|_r\int_0^1(1-\sigma)^{-\frac{d\alpha}{2r}-\frac{b}{2}}\sigma^{-\beta(\alpha+1)-\delta} d\sigma.
\end{eqnarray*}
Note that $\int_0^1(1-\sigma)^{-\frac{d\alpha}{2r}-\frac{b}{2}}\sigma^{-\beta(\alpha+1)-\delta} d\sigma<\infty$ as $\frac{d\alpha}{2r}+\frac{b}{2}<1$, $\beta(\alpha+1)+\delta<1$. Thus
\begin{equation*}
\sup_{t>0}t^{\beta+\delta}\|u(t)-v(t)\|_r\leq c\sup_{t>0}t^{\beta+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r+cM^\alpha\sup_{\tau>0} \tau^{\beta+\delta}\| u(\tau)- v(\tau)\|_r.
\end{equation*}
Choosing $M>0$ so that $cM^\alpha<\frac{1}{2}$, we achieve \eqref{9}.
Proof of the moreover part: note that
\[
u(t)-v(t)=e^{-t{\mathcal L}_a/2}(u(t/2)-v(t/2))+\mu\int_\frac{t}{2}^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}[|u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)])ds.
\]
Therefore for $q\in[r,\frac{d}{\tilde{s}_1})$ using \eqref{6}, one has \[\tilde{s}_1<\frac{d}{q}<\frac{d}{r} <\tilde{s}_2+2\qquad\text{and}\qquad \tilde{s}_1<\frac{d}{q}<b+\frac{d(\alpha+1)}{q}<\tilde{s}_2+2.\]
Then using Theorem \ref{ste} for $(r,q)$ and Proposition \ref{est0} for $(\frac{q}{\alpha+1},q)$ we have
\begin{eqnarray*}
&&\|u(t)-v(t)\|_q\\
&\lesssim&t^{-\frac{d}{2}(\frac{1}{r}-\frac{1}{q})}\|u(t/2)-v(t/2)\|_r+\int_\frac{t}{2}^t (t-s)^{-\frac{d\alpha}{2q}-\frac{b}{2}}(\| u(s)\|_q^\alpha+\|v(s)\|_q^\alpha)\| u(s)- v(s)\|_qds\\
&\lesssim&t^{-\frac{d}{2}(\frac{1}{r}-\frac{1}{q})}\|u(t/2)-v(t/2)\|_r+C_M^\alpha\int_\frac{t}{2}^t (t-s)^{-\frac{d\alpha}{2q}-\frac{b}{2}}s^{-\alpha\beta(q)}\| u(s)- v(s)\|_qds,
\end{eqnarray*}
using \eqref{g24}, where $\beta(q):=\frac{2-b}{2\alpha}-\frac{d}{2q}$. Then
\begin{eqnarray*}
t^{\beta(q)}\|u(t)-v(t)\|_q
&\lesssim&t^\beta\|u(t/2)-v(t/2)\|_r\\
&&+C_M^\alpha t^{\beta(q)}\sup_{\tau>0} \tau^{\beta(q)}\| u(\tau)- v(\tau)\|_q\int_\frac{t}{2}^t (t-s)^{-\frac{d\alpha}{2q}-\frac{b}{2}}s^{-(\alpha+1)\beta(q)}ds\\
&\lesssim&(t/2)^\beta\|u(t/2)-v(t/2)\|_r+C_M^\alpha t^{\beta(q)-\frac{d\alpha}{2q}-\frac{b}{2}-(\alpha+1)\beta(q)+1}\\
&&\cdot\sup_{\tau>0} \tau^{\beta(q)}\| u(\tau)- v(\tau)\|_q\int_\frac{1}{2}^1 (1-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-(\alpha+1)\beta(q)}d\sigma.
\end{eqnarray*}
Since $\beta(q)-\frac{d\alpha}{2q}-\frac{b}{2}-(\alpha+1)\beta(q)+1=0$ and $\frac{d\alpha}{2q}+\frac{b}{2}<\frac{d\alpha}{2r}+\frac{b}{2}<1$, we have
\begin{align*}
t^{\beta(q)}\|u(t)-v(t)\|_q\leq c\sup_{\tau>0}\tau^\beta\|u(\tau)-v(\tau)\|_r+cC_M^\alpha\sup_{\tau>0} \tau^{\beta(q)}\| u(\tau)- v(\tau)\|_q.
\end{align*}
Choosing $M>0$ small enough so that $cC_M^\alpha<\frac{1}{2}$, we get
\[
\sup_{t>0}t^{\beta(q)}\|u(t)-v(t)\|_q\lesssim\sup_{t>0}t^\beta\|u(t)-v(t)\|_r\lesssim\sup_{t>0}t^{\beta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r
\]
by using \eqref{9} with $\delta=0$.
This completes the proof.
\end{proof}
\begin{lemma}[A priori estimate]\label{23}
Suppose $ s < q $,
\[
\tilde{s}_1 <\frac{d}{q} <b+ \frac{d(\alpha+1)}{s} < \tilde{s}_2+2,\qquad\frac{d}{2}\big(\frac{\alpha+1}{s}-\frac{1}{q}\big)<1-\frac{b}{2}.
\]
Assume $u$ is a solution of \eqref{0}
satisfying
\[
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2s}}\|u(t)\|_s\leq A<\infty
\]
Then
\[
\sup_{t>0}t^{\frac{2-b}{2\alpha}-\frac{d}{2q}}\|u(t)\|_q\lesssim A(1+A^\alpha)=:C_A<\infty
\]
with $C_A\to0$ as $A\to0$.
\end{lemma}
\begin{proof}[{\bf Proof}]
Note that
\[
u(t)=e^{-\frac{t}{2}{\mathcal L}_a}u(t/2)+\mu\int_{t/2}^te^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} |u(\sigma)|^\alpha u(\sigma))d\sigma.
\]
Using Theorem \ref{ste} for $(p,q)=(s,q)$ and Proposition \ref{est0} for $(p,q)=(\frac{s}{\alpha+1},q)$ we have
\begin{eqnarray*}
\|u(t)\|_q
&\lesssim&t^{-\frac{d}{2}(\frac{1}{s}-\frac{1}{q})}\|u(t/2)\|_s+\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\||u(\tau)|^\alpha u(\tau)\|_{\frac{s}{\alpha+1}}d\tau\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}A+\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\| u(\tau)\|_s^{\alpha+1}d\tau\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}A+A^{\alpha+1}\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\tau^{-\beta(s)(\alpha+1)}d\tau\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}A+A^{\alpha+1}t^{1-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}-\beta(s)(\alpha+1)}\int_{1/2}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta(s)(\alpha+1)}d\sigma\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}A+A^{\alpha+1}t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}=A(1+A^\alpha)t^{\frac{d}{2q}-\frac{2-b}{2\alpha}}
\end{eqnarray*} as $\int_{1/2}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta(s)(\alpha+1)}d\sigma<\infty$.
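For completeness, the time exponent in the penultimate line simplifies as claimed: with $\beta(s)=\frac{2-b}{2\alpha}-\frac{d}{2s}$,
\[
1-\frac{d}{2}\Big(\frac{\alpha+1}{s}-\frac{1}{q}\Big)-\frac{b}{2}-\beta(s)(\alpha+1)
=\frac{2-b}{2}+\frac{d}{2q}-(\alpha+1)\frac{2-b}{2\alpha}
=\frac{d}{2q}-\frac{2-b}{2\alpha}.
\]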
This completes the proof.
\end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{local} \eqref{ii}}] {\bf The case $q=q_c$.} First note that if the condition \eqref{8} of Theorem \ref{global2} is satisfied on $[0,T)$ (in place of $(0,\infty)$), i.e. if
\begin{equation}\label{8a}
\sup_{t\in[0,T)}t^\beta\|e^{-t{\mathcal L}_a} \varphi\|_r \leq\rho
\end{equation}
holds, then the conclusion holds with $(0,\infty)$ replaced by $(0,T)$, i.e. there exists a unique solution $u$ on $[0,T)$ such that \begin{equation*}
\sup_{t\in[0,T)}t^\beta\|u(t)\|_r \leq M.
\end{equation*}
In view of this, it is enough to prove \eqref{8a} for some $T>0$, given $\varphi\in L^{q_c}({\mathbb R}^d)$. For $\epsilon>0$, there exists $\psi\in L^{q_c}({\mathbb R}^d)\cap L^r({\mathbb R}^d)$ such that $\|\varphi-\psi\|_{q_c}<\epsilon$ (by density). Then for $0\leq t<T$, by Theorem \ref{ste},
\begin{align*}
t^\beta\|e^{-t{\mathcal L}_a} \varphi\|_r &\leq t^\beta\|e^{-t{\mathcal L}_a}( \varphi-\psi)\|_r +t^\beta\|e^{-t{\mathcal L}_a} \psi\|_r \\
&\leq ct^\beta t^{-\beta}\| \varphi-\psi\|_{q_c} +ct^\beta\| \psi\|_r\\
&\leq c\epsilon+cT^\beta\| \psi\|_r\leq 2c\epsilon
\end{align*} by choosing $T > 0$ small enough depending on $\epsilon$, $\|\psi\|_r$ (which again depends on $\epsilon$). Now take $\epsilon=\frac{\rho}{2c}$, to achieve \eqref{8a}. This completes the proof.
\end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{global}}]
For part \eqref{global-1}, using Theorem \ref{ste} note that
\[
t^\beta\|e^{-t{\mathcal L}_a} \varphi\|_r\leq ct^\beta t^{-\frac{d}{2}(\frac{1}{q_c}-\frac{1}{r})}\|\varphi\|_{q_c}=c\|\varphi\|_{q_c}
\]for all $t>0$ therefore it satisfies the hypothesis \eqref{8} of Theorem \ref{global2} for $\|\varphi\|_{q_c}$ small enough. Hence the result follows from Theorem \ref{global2}.
For part \eqref{global-2}, with the given condition on $\sigma$, the data satisfies the condition in part \eqref{global-1}.
For part \eqref{global-3}, write $|\cdot|^{-\frac{2-b}{\alpha}}=\varphi_1+\varphi_2$ with $\varphi_1=|\cdot|^{-\frac{2-b}{\alpha}}\chi_{\{|x|\leq1\}}$ and $\varphi_2=|\cdot|^{-\frac{2-b}{\alpha}}\chi_{\{|x|>1\}}$; then $\varphi_1\in L^s({\mathbb R}^d)$ for $1\leq s<\frac{d\alpha}{2-b}=q_c$ and $\varphi_2\in L^\sigma({\mathbb R}^d)$ for $q_c<\sigma\leq\infty$. Using Theorem \ref{ste} and the fact that $\tilde{s}_1<\frac{d}{r}<\frac{d}{q_c}<\tilde{s}_2+2$ (see \eqref{6}), we get $e^{-{\mathcal L}_a}\varphi_j\in L^r({\mathbb R}^d)$, hence $e^{-{\mathcal L}_a}|\cdot|^{-\frac{2-b}{\alpha}}\in L^r({\mathbb R}^d)$. By homogeneity of $|\cdot|^{-\frac{2-b}{\alpha}}$ it follows that $\sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a}|\cdot|^{-\frac{2-b}{\alpha}}\|_r<\infty$. Using the positivity of $e^{-t{\mathcal L}_a}$ and $|\varphi|\leq c|\cdot|^{-\frac{2-b}{\alpha}}$, it follows that $\sup_{t>0}t^\beta\|e^{-t{\mathcal L}_a}\varphi\|_r<\infty$. Choosing $c>0$ small enough, $\varphi$ satisfies condition \eqref{8} of Theorem \ref{global2}.
\end{proof}
\subsection{Selfsimilar Solution}
\begin{proof}[{\bf Proof of Theorem \ref{selfsimilar}}]
Let $r$ be as in \eqref{6}. Since $|\varphi|\leq\|\omega\|_\infty|\cdot|^{-\frac{2-b}{\alpha}}$, proceeding as in the proof of Theorem \ref{global} \eqref{global-3} above, we see that $\varphi$ satisfies \eqref{8}. Then \eqref{0} has a unique global solution $u$ by Theorem \ref{global2} satisfying $\sup_{t>0}t^\beta\|u(t)\|_r \leq M$.
Since $\varphi$ is homogeneous of degree $-\frac{2-b}{\alpha}$, it follows that $\varphi_\lambda=\varphi$ for all $\lambda>0$, where $\varphi_\lambda(x) = \lambda^{\frac{2-b}{\alpha}} \varphi(\lambda x)$. Therefore $\varphi_\lambda$ also satisfies \eqref{8} and has a unique global solution $\tilde{u}_\lambda$ by Theorem \ref{global2} satisfying $\sup_{t>0}t^\beta\|\tilde{u}_\lambda(t)\|_r \leq M$. A direct computation shows that $\tilde{u}_\lambda=u_\lambda$, where $u_\lambda$ is as defined in \eqref{scl}.
Since $\varphi_\lambda=\varphi$, by uniqueness of solution, we have $u_\lambda=u$.
\end{proof}
\section{Asymptotic Behaviour}\label{AB}
\subsection{Case I: Nonlinear Behaviour}\label{AB1}
\begin{proof}[{\bf Proof of Theorem \ref{asym} \eqref{asym1}}]
Set $\beta(q)=\frac{2-b}{2\alpha}-\frac{d}{2q}$ and $\psi(x)=\omega(x)|x|^{-\frac{2-b}{\alpha}}$.
Note that $|\varphi(x)-\psi(x)|=0$ for $|x|\geq A$ and $|\varphi(x)-\psi(x)|\leq (c+\|\omega\|_\infty)|x|^{-\frac{2-b}{\alpha}}.$ Hence,
\[
|\varphi-\psi|\leq (c+\|\omega\|_\infty)\varphi_1
\]
where $\varphi_1=|x|^{-\frac{2-b}{\alpha}}\chi_{\{|x|\leq A\}}\in L^s({\mathbb R}^d)$ for any $1\leq s<\frac{d\alpha}{2-b}=q_c$. For any choice of $s$ with $\frac{d}{q_c}<\frac{d}{s}<\tilde{s}_2+2$, we have by Theorem \ref{ste} and \eqref{6} that
\[
t^{\frac{d}{2}(\frac{1}{s}-\frac{1}{r})}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r\lesssim\| \varphi-\psi\|_s\leq(c+\|\omega\|_\infty)\|\varphi_1\|_s<\infty.
\]
This implies (letting $\delta:=\frac{d}{2}(\frac{1}{s}-\frac{1}{r})-\beta(r)=\frac{d}{2s}-\frac{2-b}{2\alpha}$)
\begin{equation}\label{12}
\sup_{t>0} t^{\beta(r)+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_r<\infty\quad\text{for all }0<\delta<\frac{\tilde{s}_2+2}{2}-\frac{2-b}{2\alpha}.
\end{equation}
Note that $\beta(r)(\alpha + 1)<1$, and therefore for $0<\delta<\frac{\tilde{s}_2+2}{2}-\frac{2-b}{2\alpha}$ small enough one has $\beta(r)(\alpha + 1)+\delta<1$. Applying Theorem \ref{global2} (specifically \eqref{9}) we then have
\begin{equation}\label{10}
\sup_{t>0} t^{\beta(r)+\delta}\|u(t)-u_\mathcal{S}(t)\|_r<\infty
\end{equation}
for $\delta>0$ small enough. This proves the result for $q=r$.
Note that
\[
u(t) - u_\mathcal{S}(t) = e^{-\frac{t}{2}{\mathcal L}_a }(u(t/2) - u_\mathcal{S}(t/2)) + \mu\int_{t/2}^te^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} (|u(\sigma)|^\alpha u(\sigma) -|u_\mathcal{S}(\sigma)|^\alpha u_\mathcal{S}(\sigma)))d\sigma
\]
Therefore, for $q\in[r,\frac{d}{\tilde{s}_1})$, using \eqref{6} one has \[\tilde{s}_1<\frac{d}{q}<\frac{d}{r} <\tilde{s}_2+2\qquad\text{and}\qquad \tilde{s}_1<\frac{d}{q}<b+\frac{d(\alpha+1)}{q}<\tilde{s}_2+2.\]
Thus applying Theorem A with the pair $(r,q)$ and Proposition \ref{est0} with the pair $(\frac{r}{\alpha+1},q)$ we get
\begin{eqnarray*}
t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}&\lesssim& t^{\beta(q)+\delta-\frac{d}{2}(\frac{1}{r}-\frac{1}{q})}\|u(t/2) - u_\mathcal{S}(t/2)\|_r+t^{\beta(q)+\delta}\\
&&\cdot\int_{t/2}^t(t-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{q}-\frac{1}{q})-\frac{b}{2}}\||u(\sigma)|^\alpha u(\sigma) -|u_\mathcal{S}(\sigma)|^\alpha u_\mathcal{S}(\sigma)\|_{\frac{q}{\alpha+1}}d\sigma.
\end{eqnarray*}
Using \eqref{10} for the first term on the RHS we have
\begin{eqnarray*}
t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}&\lesssim&t^{\beta(q)+\delta-\frac{d}{2}(\frac{1}{r}-\frac{1}{q})-\beta(r)-\delta}+t^{\beta(q)+\delta}\\
&&\cdot\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}(\|u(\sigma)\|_{q}^\alpha+\| u_\mathcal{S}(\sigma)\|_{q}^\alpha)\|u(\sigma) - u_\mathcal{S}(\sigma)\|_{q}d\sigma.
\end{eqnarray*}
Using Theorem \ref{global2} part \eqref{g24} we further have
\begin{eqnarray*}
t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}
&\lesssim&t^{0}+C_M^\alpha t^{\beta(q)+\delta}\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-\frac{2-b}{2}+\frac{d\alpha}{2q}}\|u(\sigma) - u_\mathcal{S}(\sigma)\|_{q}d\sigma\\
&\lesssim&1+C_M^\alpha t^{\beta(q)+\delta}\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-\frac{2-b}{2}+\frac{d\alpha}{2q}-\beta(q)-\delta}\\
&&\cdot\sigma^{\beta(q)+\delta}\|u(\sigma) - u_\mathcal{S}(\sigma)\|_{q}d\sigma\\
&\lesssim&1+C_M^\alpha t^{\beta(q)+\delta}\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-\frac{2-b}{2}+\frac{d\alpha}{2q}-\beta(q)-\delta}d\sigma\\
&&\cdot\sup_{\tau>0}\tau^{\beta(q)+\delta}\|u(\tau) - u_\mathcal{S}(\tau)\|_{q}\\
&\lesssim&1+C_M^\alpha t^{0}\int_{1/2}^1(1-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-\frac{2-b}{2}+\frac{d\alpha}{2q}-\beta(q)-\delta}d\sigma\\
&&\cdot\sup_{\tau>0}\tau^{\beta(q)+\delta}\|u(\tau) - u_\mathcal{S}(\tau)\|_{q}
\end{eqnarray*}
Since $\frac{d\alpha}{2q}+\frac{b}{2}\leq\frac{d\alpha}{2r}+\frac{b}{2}<1$ (as $q\geq r$), the last integral is finite and we obtain
\begin{equation*}
\sup_{t>0}t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}\leq c+cC_M^\alpha \sup_{t>0}t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}
\end{equation*} with $C_M\to0$ as $M\to0$. Choosing $M>0$ small enough so that $cC_M^\alpha<1$, we obtain \begin{equation}\label{p7}
\sup_{t>0}t^{\beta(q)+\delta}\|u(t) - u_\mathcal{S}(t)\|_{q}\lesssim1 \Longrightarrow\|u(t) - u_\mathcal{S}(t)\|_{q}\leq ct^{-\beta(q)-\delta}\ \forall\ t>0.
\end{equation}
Now $u_{\mathcal{S}}(t,x)=\lambda^{\frac{2-b}{\alpha}}u_{\mathcal{S}}(\lambda^2t,\lambda x)$ for all $\lambda>0$, and hence, taking $\lambda=\frac{1}{\sqrt{t}}$, we have $u_{\mathcal{S}}(t,x)=t^{-\frac{2-b}{2\alpha}}u_{\mathcal{S}}(1,\frac{x}{\sqrt{t}})$ and hence
\begin{align*}
\|u_{\mathcal{S}}(t)\|_q=t^{-\frac{2-b}{2\alpha}}\left(\int_{{\mathbb R}^d}|u_{\mathcal{S}}(1,{x}/{\sqrt{t}})|^qdx\right)^{1/q}=t^{-\frac{2-b}{2\alpha}}\left(t^{\frac{d}{2}}\int_{{\mathbb R}^d}|u_{\mathcal{S}}(1,y)|^qdy\right)^{1/q}=t^{-\beta(q)}\|u_{\mathcal{S}}(1)\|_q.
\end{align*} Therefore, using \eqref{p7}, for large $t>0$ we have $\|u(t)-u_{\mathcal{S}}(t)\|_q\leq ct^{-\beta(q)-\delta}\leq\frac{1}{2}t^{-\beta(q)}\|u_{\mathcal{S}}(1)\|_q=\frac{1}{2}\|u_{\mathcal{S}}(t)\|_q$ and thus
\begin{align}\label{p8}
\|u(t)\|_q\geq\|u_{\mathcal{S}}(t)\|_q-\|u(t) - u_\mathcal{S}(t)\|_q\geq \frac{1}{2}\|u_{\mathcal{S}}(t)\|_q=\frac{1}{2}t^{-\beta(q)}\|u_{\mathcal{S}}(1)\|_q.
\end{align}
Also
\begin{align}\label{p9}
\|u(t)\|_q\leq\|u_{\mathcal{S}}(t)\|_q+\|u(t) - u_\mathcal{S}(t)\|_q\leq \frac{3}{2}\|u_{\mathcal{S}}(t)\|_q=\frac{3}{2}t^{-\beta(q)}\|u_{\mathcal{S}}(1)\|_q.
\end{align}
This completes the proof.
\end{proof}
\subsection{Case II: Linear Behaviour}
For this case, we need the following technical result, which will be proved in the Appendix.
\begin{lemma}\label{11}
Assume that $0<b<\min(2,d)$ and $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$. Let $\alpha_1$ be a real number such that \[\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1 < \alpha<\frac{2-b}{\tilde{s}_1}.
\]
Let $r_1$ be a real number satisfying
\[
\max\left(\frac{(\alpha_1+1)d}{\tilde{s}_2+2-b},\frac{d\alpha_1}{2-b}\right)<r_1<\min\left(\frac{d\alpha_1(\alpha_1+1)}{(2-b(\alpha_1+1))_+},\frac{d\alpha_1}{\tilde{s}_1\alpha}\right).
\]
Let \begin{align*}
r_2&=\frac{\alpha}{\alpha_1}r_1\\
\beta_1&=\frac{2-b}{2\alpha_1}-\frac{d}{2r_1}\\
\beta_2&=\frac{2-b}{2\alpha}-\frac{d}{2r_2}\\
r_{12}&=\frac{\alpha+1}{\alpha_1+1}r_1\\
\beta_{12}&=\frac{\alpha_1+1}{\alpha+1}\beta_1.
\end{align*}
Then one has the following
\begin{enumerate}
\item $\beta_1,\beta_2,\beta_{12}>0$
\item $\tilde{s}_1<\frac{d}{r_1}<b+\frac{(\alpha+1)d}{r_{12}}<\tilde{s}_2+2$, $\tilde{s}_1<\frac{d}{r_2}<b+\frac{(\alpha+1)d}{r_2}<\tilde{s}_2+2$\label{11ii}
\item $ \frac{d}{2} (\frac{\alpha+1}{r_{12}} - \frac{1}{r_1} ) + \frac{b}{2} = \frac{d\alpha}{2r_2} + \frac{b}{2} < 1$\label{11iii}
\item $\beta_2(\alpha+1),\beta_{12}(\alpha+1)<1$\label{11iv}
\item $\beta_2 -\frac{d\alpha}{2r_2} -\frac{b}{2} - \beta_2(\alpha + 1) + 1 = 0$
\item $\beta_1- \frac{d}{2} (\frac{\alpha+1}{r_{12}} - \frac{1}{r_1} ) -\frac{b}{2} - \beta_{12}(\alpha + 1) + 1 = 0$.
\end{enumerate}
\end{lemma}
With the above lemma in hand, we now prove a variant of Theorem \ref{global2} which will be useful in proving Theorem \ref{asym} \eqref{asym2}.
\begin{theorem}\label{asym 1}
Let $0 < b < \min(2, d )$ and $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$. Suppose that \[\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1 < \alpha<\frac{2-b}{\tilde{s}_1}.
\]
Let $r_1, r_2, r_{12},\beta_1,\beta_2$ be real numbers as in Lemma \ref{11}. Suppose further that $M > 0$ satisfies the
inequality
$KM^\alpha <1$, where $K$ is a positive constant. Choose $R > 0$ such that
\[cR+KM^{\alpha+1} \leq M.
\]
Let $\varphi$ be a tempered distribution such that
\begin{equation}\label{16}
\sup_{t>0} t^{\beta_1} \|e^{-t{\mathcal L}_a}\varphi\|_{r_1} \leq R,\qquad \sup_{t>0} t^{\beta_2} \|e^{-t{\mathcal L}_a}\varphi\|_{r_2} \leq R.
\end{equation}
Then there exists a unique global solution $u$ of \eqref{0} such that
\begin{equation}\label{20}
\sup_{t>0} t^{\beta_1} \| u(t)\|_{r_1}\leq M ,\qquad \sup_{t>0} t^{\beta_2} \| u(t)\|_{r_2}\leq M.
\end{equation}
Furthermore, \begin{enumerate}
\item $\sup_{t\geq t_q} t^{ \frac{2-b}{2\alpha_1}-\frac{d}{2q}} \|u(t)\|_q \leq C_M<\infty$, for all $q \in [r_1, \frac{d}{\tilde{s}_1})$ with $C_M\to0$ as $M\to0$ \label{as1}
\item $\sup_{t>0} t^{ \frac{2-b}{2\alpha}-\frac{d}{2q}} \|u(t)\|_q\leq C_M < \infty$, for all $q \in [r_2,\frac{d}{\tilde{s}_1})$ with $C_M\to0$ as $M\to0$\label{as2}.
\end{enumerate}
\end{theorem}
\begin{proof}[{\bf Proof}]
Let
$$B_M:=\{u:(0,\infty)\to L^{r_1}({\mathbb R}^d)\cap L^{r_2}({\mathbb R}^d): \sup_{t>0} t^{\beta_1}\|u(t)\|_{r_1}\leq M, \sup_{t>0} t^{\beta_2}\|u(t)\|_{r_2}\leq M\}$$ and
\[
d(u,v):= \max\left(\sup_{t>0} t^{\beta_1}\|u(t)-v(t)\|_{r_1},\sup_{t>0} t^{\beta_2}\|u(t)-v(t)\|_{r_2}\right)
\]
and
\[
\mathcal{J}_\varphi(u)(t)=e^{-t{\mathcal L}_a}\varphi+\mu\int_0^t e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}|u(s)|^\alpha u(s))ds.
\]for $\varphi$ satisfying \eqref{16} and $u\in B_M$.
Let $\varphi,\psi$ satisfy \eqref{16} and let $u,v\in B_M$. Then, using Lemma \ref{11} and Proposition \ref{est0} for $(p,q)=(\frac{r_{12}}{\alpha+1},r_1)$, we have
\begin{eqnarray*}
&&\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_{r_1}\\
&\lesssim&\|e^{-t{\mathcal L}_a}\varphi-e^{-t{\mathcal L}_a}\psi\|_{r_1}+\int_0^t \|e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}[|u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)])\|_{r_1}ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}+\int_0^t (t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r_{12}}-\frac{1}{r_1})-\frac{b}{2}}\||u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)\|_{\frac{r_{12}}{\alpha+1}}ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}+\int_0^t (t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r_{12}}-\frac{1}{r_1})-\frac{b}{2}}(\|u(s)\|_{r_{12}}^\alpha+\|v(s)\|_{r_{12}}^\alpha)\|u(s)- v(s)\|_{r_{12}}ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}+M^\alpha d(u,v)\int_0^t (t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r_{12}}-\frac{1}{r_1})-\frac{b}{2}}s^{-(\alpha+1)\beta_{12}}ds
\end{eqnarray*} since $\frac{1}{r_{12}}=\frac{1/(\alpha+1)}{r_1}+\frac{\alpha/(\alpha+1)}{r_2}$ and hence, for $u,v\in B_M$,
\[
\|u(s)\|_{r_{12}}\leq\|u(s)\|_{r_1}^{\frac{1}{\alpha+1}}\|u(s)\|_{r_2}^{\frac{\alpha}{\alpha+1}}\leq Ms^{-\beta_1/(\alpha+1)-\alpha\beta_2/(\alpha+1)}=Ms^{-\frac{\beta_1+\alpha\beta_2}{\alpha+1}}
=Ms^{-\beta_{12}},
\]
\[
\|u(s)-v(s)\|_{r_{12}}\leq\|u(s)-v(s)\|_{r_1}^{\frac{1}{\alpha+1}}\|u(s)-v(s)\|_{r_2}^{\frac{\alpha}{\alpha+1}}\leq d(u,v)s^{-\frac{\beta_1+\alpha\beta_2}{\alpha+1}}=d(u,v)s^{-\beta_{12}}.
\]
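The exponent identities used above follow directly from the definitions in Lemma \ref{11}: since $r_2=\frac{\alpha}{\alpha_1}r_1$ and $r_{12}=\frac{\alpha+1}{\alpha_1+1}r_1$,
\[
\frac{1/(\alpha+1)}{r_1}+\frac{\alpha/(\alpha+1)}{r_2}=\frac{1}{(\alpha+1)r_1}+\frac{\alpha_1}{(\alpha+1)r_1}=\frac{\alpha_1+1}{(\alpha+1)r_1}=\frac{1}{r_{12}},
\]
and, since $\beta_2=\frac{\alpha_1}{\alpha}\beta_1$, also $\frac{\beta_1+\alpha\beta_2}{\alpha+1}=\frac{(1+\alpha_1)\beta_1}{\alpha+1}=\beta_{12}$.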
Thus
\begin{eqnarray}\label{17}
t^{\beta_1}\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_{r_1}&\lesssim&t^{\beta_1}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}+M^\alpha d(u,v) t^{\beta_1-\frac{d}{2}(\frac{\alpha+1}{r_{12}}-\frac{1}{r_1})-\frac{b}{2}-(\alpha+1)\beta_{12}+1}\nonumber\\
&&\cdot\int_0^1 (1-s)^{-\frac{d}{2}(\frac{\alpha+1}{r_{12}}-\frac{1}{r_1})-\frac{b}{2}}s^{-(\alpha+1)\beta_{12}}ds\nonumber\\
&\lesssim&t^{\beta_1}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}+M^\alpha d(u,v)
\end{eqnarray}using Lemma \ref{11} (3), (4), (6).
Now using Lemma \ref{11} (2), Proposition \ref{est0} for $(p,q)=(\frac{r_2}{\alpha+1},r_2)$ we have
\begin{eqnarray*}
&&\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_{r_2}\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+\int_0^t \|e^{-(t-s){\mathcal L}_a}(|\cdot|^{-b}[|u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)])\|_{r_2}ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+\int_0^t(t-s)^{-\frac{d}{2}(\frac{\alpha+1}{r_2}-\frac{1}{r_2})-\frac{b}{2}} \||u(s)|^\alpha u(s)-|v(s)|^\alpha v(s)\|_{\frac{r_2}{\alpha+1}}ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+\int_0^t(t-s)^{-\frac{d\alpha}{2r_2}-\frac{b}{2}}(\|u(s)\|_{r_2}^\alpha+\|v(s)\|_{r_2}^\alpha)\|u(s)- v(s)\|_{r_2} ds\\
&\lesssim&\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+M^\alpha d(u,v)\int_0^t(t-s)^{-\frac{d\alpha}{2r_2}-\frac{b}{2}}s^{-(\alpha+1)\beta_2} ds
\end{eqnarray*}
and hence using Lemma \ref{11} (3), (4), (5) we get
\begin{eqnarray}\label{18}
&&t^{\beta_2}\|\mathcal{J}_\varphi(u)(t)-\mathcal{J}_\psi(v)(t)\|_{r_2}\nonumber\\
&\lesssim&t^{\beta_2}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+M^\alpha d(u,v)t^{\beta_2-\frac{d\alpha}{2r_2}-\frac{b}{2}-(\alpha+1)\beta_2+1}\int_0^1(1-s)^{-\frac{d\alpha}{2r_2}-\frac{b}{2}}s^{-(\alpha+1)\beta_2} ds\nonumber\\
&\lesssim&t^{\beta_2}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}+M^\alpha d(u,v).
\end{eqnarray}
Using \eqref{17}, \eqref{18} we have
\begin{align}\label{19}
&d(\mathcal{J}_\varphi(u),\mathcal{J}_\psi(v))\nonumber\\
&\leq c\max\left(\sup_{t>0}t^{\beta_1}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1},\sup_{t>0}t^{\beta_2}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_2}\right)+KM^\alpha d(u,v).
\end{align}
Putting $\psi=0,v=0$ in \eqref{19} one has
\[
d(\mathcal{J}_\varphi(u),0)\leq c\max\left(\sup_{t>0}t^{\beta_1}\|e^{-t{\mathcal L}_a}\varphi\|_{r_1},\sup_{t>0}t^{\beta_2}\|e^{-t{\mathcal L}_a}\varphi\|_{r_2}\right)+KM^\alpha d(u,0)\leq cR+KM^{\alpha+1}\leq M
\] and hence $\mathcal{J}_\varphi(u)\in B_M$. Thus $\mathcal{J}_\varphi$ maps $B_M$ into itself. Putting $\psi=\varphi$ in \eqref{19}
\[
d(\mathcal{J}_\varphi(u),\mathcal{J}_\varphi(v))\leq KM^\alpha d(u,v)<d(u,v)
\]and hence $\mathcal{J}_\varphi$ is a contraction in $B_M$. Thus \eqref{0} has a unique solution in $B_M$ satisfying \eqref{20}.
Part \eqref{as1} follows from Lemma \ref{c1} and iteration, as in the proof of Theorem \ref{global2}.
Part \eqref{as2} follows from Lemma \ref{23} and iteration, as in the proof of Theorem \ref{global2}.
\end{proof}
The following is a variant of Lemma \ref{23}, used in the above result.
\begin{lemma}[A priori estimate]\label{c1}
Suppose $s < q $ and
\[
\tilde{s}_1 <\frac{d}{q} <b+ \frac{d(\alpha+1)}{s} < \tilde{s}_2+2,\qquad\frac{d}{2}\big(\frac{\alpha+1}{s}-\frac{1}{q}\big)<1-\frac{b}{2}.
\]
Assume $t_0\geq1$ and let $u$ be a solution to \eqref{0}
satisfying
\[
\sup_{t>t_0}t^{\frac{2-b}{2\alpha_1}-\frac{d}{2s}}\|u(t)\|_s\leq A<\infty.
\]
Then
\[
\sup_{t\geq 2t_0}t^{\frac{2-b}{2\alpha_1}-\frac{d}{2q}}\|u(t)\|_q\lesssim A(1+A^\alpha)=:C_A<\infty
\]
with $C_A\to0$ as $A\to0$.
\end{lemma}
\begin{proof}[{\bf Proof}]
Note that
\[
u(t)=e^{-\frac{t}{2}{\mathcal L}_a}u(t/2)+\mu\int_{t/2}^te^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} |u(\sigma)|^\alpha u(\sigma))d\sigma
\]
and therefore using Theorem \ref{ste} for $(p,q)=(s,q)$ and Proposition \ref{est0} for $(p,q)=(\frac{s}{\alpha+1},q)$ we have
\begin{eqnarray*}
\|u(t)\|_q&\lesssim&\|e^{-\frac{t}{2}{\mathcal L}_a}u(t/2)\|_q+\int_{t/2}^t\|e^{-(t-\tau){\mathcal L}_a}(|\cdot|^{-b} |u(\tau)|^\alpha u(\tau))\|_q d\tau\\
&\lesssim&t^{-\frac{d}{2}(\frac{1}{s}-\frac{1}{q})}\|u(t/2)\|_s+\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\||u(\tau)|^\alpha u(\tau)\|_{\frac{s}{\alpha+1}}d\tau\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha_1}}A+\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\| u(\tau)\|_s^{\alpha+1}d\tau.
\end{eqnarray*}
Now with $\beta_1(s)=\frac{2-b}{2\alpha_1}-\frac{d}{2s}$, and using $t/2\geq t_0$
\begin{eqnarray*}
\|u(t)\|_q
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha_1}}A+A^{\alpha+1}\int_{t/2}^t(t-\tau)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\tau^{-\beta_1(s)(\alpha+1)}d\tau\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha_1}}A+A^{\alpha+1}t^{1-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}-\beta_1(s)(\alpha+1)}\int_{1/2}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta_1(s)(\alpha+1)}d\sigma\\
&\lesssim&t^{\frac{d}{2q}-\frac{2-b}{2\alpha_1}}A+A^{\alpha+1}t^{\frac{d}{2q}-\frac{2-b}{2\alpha_1}-\frac{2-b}{2}(\frac{\alpha}{\alpha_1}-1)}
\end{eqnarray*} as $\int_{1/2}^1(1-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{s}-\frac{1}{q})-\frac{b}{2}}\sigma^{-\beta_1(s)(\alpha+1)}d\sigma<\infty$.
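For clarity, the exponent in the last display arises from the following bookkeeping, using $\beta_1(s)=\frac{2-b}{2\alpha_1}-\frac{d}{2s}$:
\begin{align*}
1-\frac{d}{2}\Big(\frac{\alpha+1}{s}-\frac{1}{q}\Big)-\frac{b}{2}-\beta_1(s)(\alpha+1)
&=1+\frac{d}{2q}-\frac{b}{2}-(\alpha+1)\frac{2-b}{2\alpha_1}\\
&=\frac{d}{2q}-\frac{2-b}{2\alpha_1}-\frac{2-b}{2}\Big(\frac{\alpha}{\alpha_1}-1\Big),
\end{align*}
and, since $\alpha>\alpha_1$ and $t\geq1$, $t^{-\frac{2-b}{2}(\frac{\alpha}{\alpha_1}-1)}\leq1$.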
This completes the proof, since $t\geq 2t_0\geq2$ whenever $t/2\geq t_0$.
\end{proof}
The following is again a technical result, used to prove Theorem \ref{b2}, which in turn yields the final result.
\begin{lemma}\label{13}
Let $0 < b < \min(2, d)$ and $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$. Let the real numbers $\alpha_1$ and $\alpha$ be such that\[\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1 < \alpha<\frac{2-b}{\tilde{s}_1}.
\]
Let $r_1,r_2,\beta_1,\beta_2$ be real numbers as in Lemma \ref{11}. Then there exists a real number $\delta_0 > 0$ such that, for all $0 < \delta < \delta_0$, there exists a real number $
0<\theta_\delta <1$
with the property that the two real numbers $\tilde{r}$ and $\tilde{\beta}$ given by
\begin{equation}\label{21}
\frac{1}{\tilde{r}}=\frac{\theta_\delta}{r_1}+\frac{1-\theta_\delta}{r_2},\qquad\tilde{\beta}=\theta_\delta\beta_1+(1-\theta_\delta)\beta_2
\end{equation}
satisfy the following conditions:\begin{itemize}
\item[(i)]$\tilde{s}_1<\frac{d}{r_1} < b + \frac{d(\alpha+1)}{\tilde{r}} < \tilde{s}_2+2$
\item[(ii)]$\beta_1 +\delta-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}-\tilde{\beta}(\alpha+1)+1=0$
\item[(iii)]$\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})+\frac{b}{2}<1$, $\tilde{\beta}(\alpha+1)<1$.
\end{itemize}
Moreover this $\theta_\delta$ is given by
\begin{equation}\label{22}
\theta_\delta=\frac{1}{\alpha+1}+\frac{2\alpha_1\alpha}{(2-b)(\alpha-\alpha_1)(\alpha+1)}\delta.
\end{equation}
\end{lemma}
\begin{proof}[{\bf Proof}]
{\bf Step I:} If \eqref{21} holds, then (ii) is equivalent to \eqref{22}:\\
First note that
$\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1}=(\alpha+1)(\frac{1}{r_1}-\frac{1}{r_2})\theta_\delta+\frac{\alpha+1}{r_2}-\frac{1}{r_1}$, $\tilde{\beta}=\theta_\delta(\beta_1-\beta_2)+\beta_2$. Thus
\begin{align*}
&\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})+\tilde{\beta}(\alpha+1)\\
=&\frac{d(\alpha+1)}{2}(\frac{1}{r_1}-\frac{1}{r_2})\theta_\delta+\frac{d(\alpha+1)}{2r_2}-\frac{d}{2r_1}+\theta_\delta(\beta_1-\beta_2)(\alpha+1)+\beta_2(\alpha+1).
\end{align*}
Then
%
\begin{align*}
&\beta_1 -\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\tilde{\beta}(\alpha+1)\\
&=\beta_1-\frac{d(\alpha+1)}{2}(\frac{1}{r_1}-\frac{1}{r_2})\theta_\delta-\frac{d(\alpha+1)}{2r_2}+\frac{d}{2r_1}-\theta_\delta(\beta_1-\beta_2)(\alpha+1)-\beta_2(\alpha+1)\\
&=(\beta_1+\frac{d}{2r_1})-(\alpha+1)(\beta_2+\frac{d}{2r_2})-\theta_\delta(\alpha+1)(\frac{d}{2}(\frac{1}{r_1}-\frac{1}{r_2})+\beta_1-\beta_2)\\
&=\frac{2-b}{2\alpha_1}-(\alpha+1)\frac{2-b}{2\alpha}-\theta_\delta(\alpha+1)(\frac{2-b}{2\alpha_1}-\frac{2-b}{2\alpha}).
\end{align*}
Therefore (ii) is equivalent to
\begin{align*}
&\frac{2-b}{2\alpha_1}-(\alpha+1)\frac{2-b}{2\alpha}+\delta-\frac{b}{2}+1=\theta_\delta(\alpha+1)(\frac{2-b}{2\alpha_1}-\frac{2-b}{2\alpha})\\
&\Longleftrightarrow\frac{1}{\alpha_1}-\frac{\alpha+1}{\alpha}+\frac{2}{2-b}\delta+1=\theta_\delta(\alpha+1)(\frac{1}{\alpha_1}-\frac{1}{\alpha})\\
&\Longleftrightarrow\frac{1}{\alpha_1}-\frac{1}{\alpha}+\frac{2}{2-b}\delta=\theta_\delta(\alpha+1)(\frac{1}{\alpha_1}-\frac{1}{\alpha})
\end{align*}which is equivalent to \eqref{22}.
{\bf Step II:} Validity of (i), (iii):
Note that \[\theta_\delta=\frac{1}{\alpha+1}+\epsilon(\delta),\qquad\text{where }\epsilon(\delta):=\frac{2\alpha_1\alpha}{(2-b)(\alpha-\alpha_1)(\alpha+1)}\delta.
\]
Now $\theta_0=\frac{1}{\alpha+1}+\epsilon(0)=\frac{1}{\alpha+1}$, and for this choice one has $\tilde{r}=r_{12}$, $\tilde{\beta}=\beta_{12}$. So at $\delta=0$ the inequalities (i), (iii) hold by Lemma \ref{11}. Then, by continuity of $\epsilon$ with respect to $\delta$, (i), (iii) hold for $\delta>0$ small enough.
\end{proof}
\begin{theorem}\label{b2}
Let $0 < b < \min(2, d )$ and $\frac{2-b}{\tilde{s}_2+2}<\alpha<\frac{2-b}{\tilde{s}_1}$. Let the real numbers $\alpha_1$ and $\alpha$ be such that\[\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1 < \alpha<\frac{2-b}{\tilde{s}_1}.
\]
Let $r_1, r_2$ be two real numbers as in Lemma \ref{11}. Let $\beta_1, \beta_2$ be given by Lemma \ref{11} and define $\beta_1(q)$ by
\[\beta_1(q)=\frac{2-b}{2\alpha_1}-\frac{d}{2q},\qquad q>1.
\]
Let $\psi(x) = \omega(x)|x|^{-\frac{2-b}{\alpha_1}}$, where $\omega \in L^\infty(S^{d-1})$ is homogeneous of degree $0$. Let $\varphi\in C_0({\mathbb R}^d)$ be such that\[
|\varphi(x)|\leq c( 1+|x|^2)^{-\frac{2-b}{2\alpha_1}}\text{ for all }x\in{\mathbb R}^d,\qquad \varphi(x)= \omega(x)|x|^{-\frac{2-b}{\alpha_1}}\text{ for all }|x|\geq A
\]for some constant $A > 0$, where $c$ is a small positive constant and $\|\omega\|_\infty$ is sufficiently small.
Let $u$ be the solution of \eqref{0} with initial data $\varphi$, constructed by Theorem \ref{asym 1}, and let $w$ be the self-similar solution of \eqref{0} constructed by Theorem \ref{asym 1} with $\mu = 0$ and with initial data $\psi$. Then there exists $\delta_0 > 0$ such that for all $0<\delta<\delta_0$, and with $M$ perhaps smaller, there exists $C_\delta > 0$ such that
\begin{equation}\label{28}
\|u(t)- w(t)\|_q \leq C_\delta t^{-\beta_1(q)-\delta}, \forall t \geq t_q,
\end{equation}
for all $q \in [r_1, \frac{d}{\tilde{s}_1})$. In particular, if $\omega\neq 0$,
for large time $t$
\[
c_1t^{-\beta_1(q)} \leq \|u(t)\|_q \leq c_2t^{-\beta_1(q)}.
\]
\end{theorem}
\begin{proof}[{\bf Proof}]
Let $\psi(x)=\omega(x)|x|^{-\frac{2-b}{\alpha_1}}$, $x\in{\mathbb R}^d$.
Let $\varphi_1=\varphi\chi_{\{|x|\leq1\}}$ and $\varphi_2=\varphi\chi_{\{|x|>1\}}$ so that $\varphi=\varphi_1+\varphi_2$.
Note that
\[
|\varphi(x)|\leq c( 1+|x|^2)^{-\frac{2-b}{2\alpha_1}}\leq c|x|^{-\frac{2-b}{\alpha_1}}, \forall\ x\in{\mathbb R}^d
\]
and thus $\varphi_1\in L^s({\mathbb R}^d)$ for $1\leq s<\frac{d\alpha_1}{2-b}$ and $\varphi_2\in L^\sigma({\mathbb R}^d)$ for $\frac{d\alpha_1}{2-b}<\sigma\leq\infty$.
Since $\alpha>\alpha_1$,
\[
|\varphi(x)|\leq c( 1+|x|^2)^{-\frac{2-b}{2\alpha_1}}\leq c( 1+|x|^2)^{-\frac{2-b}{2\alpha}}\leq c|x|^{-\frac{2-b}{\alpha}}, \forall\ x\in{\mathbb R}^d.
\]
Then applying Theorem A
\begin{align*}
\|e^{-{\mathcal L}_a}\varphi\|_{r_1}&\leq \|e^{-{\mathcal L}_a}\varphi_1\|_{r_1}+\|e^{-{\mathcal L}_a}\varphi_2\|_{r_1}
\lesssim \|\varphi_1\|_s+\|\varphi_2\|_\sigma<\infty
\end{align*}by choosing $s,\sigma$ so that $\tilde{s}_1<\frac{d}{r_1}<\frac{d}{\sigma}<\frac{2-b}{\alpha_1}<\frac{d}{s}<\tilde{s}_2+2$. Similarly using $r_2>\frac{d\alpha}{2-b}$ one finds $\|e^{-{\mathcal L}_a}\varphi\|_{r_2}<\infty$. Then using the homogeneity of $|\cdot|^{-\frac{2-b}{\alpha_1}}$, $|\cdot|^{-\frac{2-b}{\alpha}}$ (and positivity of $e^{-t{\mathcal L}_a}$) we achieve
\[
\sup_{t>0}t^{\beta_1}\|e^{-t{\mathcal L}_a}\varphi\|_{r_1}\leq R,\qquad \sup_{t>0}t^{\beta_2}\|e^{-t{\mathcal L}_a}\varphi\|_{r_2}\leq R
\]after possibly choosing $c$ smaller.
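The last step can be sketched as follows, using the scaling relation $e^{-t{\mathcal L}_a}f=\big(e^{-{\mathcal L}_a}f(\sqrt{t}\,\cdot)\big)(\cdot/\sqrt{t})$, valid since ${\mathcal L}_a$ is homogeneous of degree $-2$. By positivity of $e^{-t{\mathcal L}_a}$ and $|\varphi|\leq c|\cdot|^{-\frac{2-b}{\alpha_1}}$,
\[
\|e^{-t{\mathcal L}_a}\varphi\|_{r_1}\leq c\,\|e^{-t{\mathcal L}_a}|\cdot|^{-\frac{2-b}{\alpha_1}}\|_{r_1}
= c\,t^{-\frac{2-b}{2\alpha_1}+\frac{d}{2r_1}}\|e^{-{\mathcal L}_a}|\cdot|^{-\frac{2-b}{\alpha_1}}\|_{r_1}
= c\,t^{-\beta_1}\|e^{-{\mathcal L}_a}|\cdot|^{-\frac{2-b}{\alpha_1}}\|_{r_1},
\]
the last norm being finite by the same splitting argument as above; the bound for $\beta_2,r_2$ is analogous with $|\cdot|^{-\frac{2-b}{\alpha}}$.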
Proceeding as in \eqref{12}, one finds \begin{equation}\label{14}
\sup_{t>0} t^{\beta_1+\delta}\|e^{-t{\mathcal L}_a}(\varphi-\psi)\|_{r_1}<\infty\quad\text{for all }0<\delta\leq\frac{\tilde{s}_2+2}{2}-\frac{2-b}{2\alpha_1}.
\end{equation}
Let $v(t)=e^{-t{\mathcal L}_a}\varphi$. Then $u(t)=v(t)+\mu\int_0^te^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} |u(\sigma)|^\alpha u(\sigma))d\sigma$; therefore, using Proposition \ref{est0} and Lemma \ref{13},
\begin{eqnarray*}
\|u(t)-v(t)\|_{r_1}&\lesssim&\int_0^t\|e^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} |u(\sigma)|^\alpha u(\sigma))\|_{r_1}d\sigma\\
&\lesssim&\int_0^t(t-\sigma )^{-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}}\||u(\sigma)|^\alpha u(\sigma)\|_{\frac{\tilde{r}}{\alpha+1}}d\sigma\\
&=&\int_0^t(t-\sigma )^{-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}}\| u(\sigma)\|_{\tilde{r}}^{\alpha+1}d\sigma.
\end{eqnarray*}
Now note that
\[
\| u(\sigma)\|_{\tilde{r}}\leq\| u(\sigma)\|_{r_1}^{\theta_\delta}\| u(\sigma)\|_{r_2}^{1-\theta_\delta}
\leq M \sigma^{-\beta_1\theta_\delta-\beta_2(1-\theta_\delta)}=M \sigma^{-\tilde{\beta}}\]
and hence
\begin{eqnarray*}
t^{\beta_1+\delta}\|u(t)-v(t)\|_{r_1}&\lesssim&M^{\alpha+1}t^{\beta_1+\delta}\int_0^t(t-\sigma )^{-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}}\sigma^{-(\alpha+1)\tilde{\beta}}d\sigma\\
&\lesssim&M^{\alpha+1}t^{\beta_1+\delta+1-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}-(\alpha+1)\tilde{\beta}}\int_0^1(1-\sigma )^{-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}}\sigma^{-(\alpha+1)\tilde{\beta}}d\sigma\\
&=&M^{\alpha+1}\int_0^1(1-\sigma )^{-\frac{d}{2}(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1})-\frac{b}{2}}\sigma^{-(\alpha+1)\tilde{\beta}}d\sigma<\infty
\end{eqnarray*}using Lemma \ref{13}. This together with \eqref{14} imply
\begin{equation}\label{15}
\|u(t)-w(t)\|_{r_1}\leq\|u(t)-v(t)\|_{r_1}+\|e^{-t{\mathcal L}_a}\varphi-e^{-t{\mathcal L}_a}\psi\|_{r_1}\lesssim t^{-\beta_1-\delta}
\end{equation}
which proves \eqref{28} for $q=r_1$.
Now
\[
u(t)-w(t)=e^{-\frac{t}{2}{\mathcal L}_a}(u(t/2)-w(t/2))+\mu\int_{t/2}^te^{-(t-\sigma ){\mathcal L}_a}(|\cdot|^{-b} |u(\sigma)|^\alpha u(\sigma))d\sigma
\]
Using Lemma \ref{13}, for $q\in[r_2,\frac{d}{\tilde{s}_1})$ one has
\[
\tilde{s}_1<\frac{d}{q}<\frac{d}{r_1} <\tilde{s}_2+2\qquad\text{and}\qquad \tilde{s}_1<\frac{d}{q}<b+\frac{d(\alpha+1)}{q}<b+\frac{d(\alpha+1)}{r_2}
<\tilde{s}_2+2.
\]
\begin{eqnarray*}
&&\|u(t)-w(t)\|_q\\
&\lesssim& t^{-\frac{d}{2}(\frac{1}{r_1}-\frac{1}{q})}\|u(t/2) - w(t/2)\|_{r_1}+\int_{t/2}^t(t-\sigma)^{-\frac{d}{2}(\frac{\alpha+1}{q}-\frac{1}{q})-\frac{b}{2}}\||u(\sigma)|^\alpha u(\sigma) \|_{\frac{q}{\alpha+1}}d\sigma\\
&\lesssim& t^{-\frac{d}{2}(\frac{1}{r_1}-\frac{1}{q})-\beta_1-\delta}+\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\|u(\sigma) \|_q^{\alpha+1} d\sigma.
\end{eqnarray*}using \eqref{15} in the last inequality.
Now from \eqref{as1}, \eqref{as2} in Theorem \ref{asym 1}, for $\sigma\geq t_q$ we have $\|u(\sigma) \|_q\leq C_M\sigma^{-\beta_1(q)}$ as well as $\|u(\sigma) \|_q\leq C_M\sigma^{-\beta(q)}$ for all $q\in[r_2,\frac{d}{\tilde{s}_1})$, where $\beta(q)=\frac{2-b}{2\alpha}-\frac{d}{2q}$ as in Subsection \ref{AB1}. Therefore for $\sigma\geq t_q$,
\[
\|u(\sigma) \|_q^{\alpha+1}\leq C_M^{\alpha+1}\sigma^{-(\alpha+1)[\beta_1(q)\theta_\delta+\beta(q)(1-\theta_\delta)]}
\]
and
\begin{align*}
\beta_1(q)\theta_\delta+\beta(q)(1-\theta_\delta)&=\beta_1\theta_\delta+\beta_2(1-\theta_\delta)+\frac{d}{2}\Big(\frac{1}{r_1}-\frac{1}{q}\Big)\theta_\delta+\frac{d}{2}\Big(\frac{1}{r_2}-\frac{1}{q}\Big)(1-\theta_\delta)\\
&=\tilde{\beta}+\frac{d}{2}\Big(\frac{\theta_\delta}{r_1}+\frac{1-\theta_\delta}{r_2}\Big)-\frac{d}{2q}=\tilde{\beta}+\frac{d}{2\tilde{r}}-\frac{d}{2q}.
\end{align*}
Therefore for $t\geq 2t_q$,
\begin{eqnarray*}
&&t^{\beta_1(q)+\delta}\|u(t)-w(t)\|_q\\
&\lesssim& t^{\beta_1(q)-\beta_1-\frac{d}{2}(\frac{1}{r_1}-\frac{1}{q})}+t^{\beta_1(q)+\delta}\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\|u(\sigma) \|_q^{\alpha+1} d\sigma\\
&\lesssim& t^0+C_M^{\alpha+1}t^{\beta_1(q)+\delta}\int_{t/2}^t(t-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-(\alpha+1)(\tilde{\beta}+\frac{d}{2\tilde{r}}-\frac{d}{2q})} d\sigma\\
&\lesssim& 1+C_M^{\alpha+1}t^{\beta_1(q)+\delta-\frac{d\alpha}{2q}-\frac{b}{2}-(\alpha+1)(\tilde{\beta}+\frac{d}{2\tilde{r}}-\frac{d}{2q})+1}\int_{1/2}^1(1-\sigma)^{-\frac{d\alpha}{2q}-\frac{b}{2}}\sigma^{-(\alpha+1)(\tilde{\beta}+\frac{d}{2\tilde{r}}-\frac{d}{2q})} d\sigma.
\end{eqnarray*}
But, using $\beta_1(q)=\beta_1+\frac{d}{2}(\frac{1}{r_1}-\frac{1}{q})$, $\frac{d\alpha}{2q}=\frac{d}{2}(\frac{\alpha+1}{q}-\frac{1}{q})$ and Lemma \ref{13} (ii),
\begin{align*}
&\beta_1(q)+\delta-\frac{d\alpha}{2q}-\frac{b}{2}-(\alpha+1)\Big(\tilde{\beta}+\frac{d}{2\tilde{r}}-\frac{d}{2q}\Big)+1\\
&=\beta_1+\frac{d}{2}\Big(\frac{1}{r_1}-\frac{1}{q}\Big)+\delta-\frac{d}{2}\Big(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1}\Big)-\frac{d}{2}\Big(\frac{1}{r_1}-\frac{1}{q}\Big)-\frac{b}{2}-(\alpha+1)\tilde{\beta}+1\\
&=\beta_1+\delta-\frac{d}{2}\Big(\frac{\alpha+1}{\tilde{r}}-\frac{1}{r_1}\Big)-\frac{b}{2}-(\alpha+1)\tilde{\beta}+1=0,
\end{align*}
and $\frac{d\alpha}{2q}+\frac{b}{2}<1$. Thus $t^{\beta_1(q)+\delta}\|u(t)-w(t)\|_q\lesssim1$ for $t\geq 2t_q$.
This proves \eqref{28} for $q\in[r_2,\frac{d}{\tilde{s}_1})$. To prove \eqref{28} for $q\in(r_1,r_2)$, we use interpolation. The final conclusion follows as in the proof of the nonlinear case; see \eqref{p8}, \eqref{p9} in Subsection \ref{AB1}.
\end{proof}
\begin{remark}\label{r2}
After proving \eqref{28} for $q=r_1$ (i.e. \eqref{15}), the authors in \cite{slimene2017well} prove it for $q=\infty$ and interpolate to achieve \eqref{28} for $q\in(r_1,\infty)$. Since we do not have the decay estimate \eqref{d} for $q=\infty$, we cannot achieve \eqref{28} for $q=\infty$, and hence we need to take a different path to achieve the result.
\end{remark}
\begin{proof}[{\bf Proof of Theorem \ref{asym} \eqref{asym2}}]
Since $\sigma>\frac{2-b}{\alpha}$, we can find $\alpha_1<\alpha$ so that $\sigma=\frac{2-b}{\alpha_1}$. To apply Theorem \ref{b2} we need
\[\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1 < \alpha<\frac{2-b}{\tilde{s}_1}.
\]which will follow if
$\frac{2-b}{\tilde{s}_2+2}<\frac{2-b}{\sigma}$ i.e. $\sigma< \tilde{s}_2+2$ and $\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}<\frac{2-b}{\sigma}$ i.e. $\sigma<\frac{2-b}{\tilde{s}_1\alpha}(\tilde{s}_2+2-b-\tilde{s}_1\alpha)$.
\end{proof}
\section*{Appendix}
\renewcommand*{\theAL}{A\arabic{AL}}
\begin{AL}\label{a1}
Let $\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1<\alpha<\frac{2-b}{\tilde{s}_1}$. Then one can choose $r_1$ so that \begin{itemize}
\item[(i)] $\tilde{s}_1<\frac{d\alpha_1}{r_1\alpha}<\frac{d}{r_1}<b+\frac{(\alpha_1+1)d}{r_1}<\tilde{s}_2+2$
\item[(ii)]$\frac{d\alpha_1}{2r_1} + \frac{b}{2} < 1$
\item[(iii)] $\beta_1(\alpha_1+1)<1$.
\end{itemize}
\end{AL}
\begin{proof}[{\bf Proof}]
(i), (ii), (iii) are equivalent to $\frac{(\alpha_1+1)d}{\tilde{s}_2+2-b}<r_1<\frac{d\alpha_1}{\tilde{s}_1\alpha}$, $\frac{d\alpha_1}{2-b}<r_1$ and $r_1<\frac{d\alpha_1(\alpha_1+1)}{2-b(\alpha_1+1)}$ (the last when $2-b(\alpha_1+1)>0$), respectively. To make room for $r_1$, one thus needs \begin{equation*}
\begin{rcases}
\frac{(\alpha_1+1)d}{\tilde{s}_2+2-b}\\
\frac{d\alpha_1}{2-b}
\end{rcases}
<r_1<
\begin{cases}\frac{d\alpha_1}{\tilde{s}_1\alpha}\\
\frac{d\alpha_1(\alpha_1+1)}{2-b(\alpha_1+1)}
\end{cases}
\end{equation*}which is possible if $\max(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha})<\alpha_1<\alpha<\frac{2-b}{\tilde{s}_1}$.
\end{proof}
\begin{AL}\label{a2}
Let $\max\left(\frac{2-b}{\tilde{s}_2+2},\frac{\tilde{s}_1\alpha}{\tilde{s}_2+2-b-\tilde{s}_1\alpha}\right)<\alpha_1<\alpha<\frac{2-b}{\tilde{s}_1}$.
Let $r_1$, $r_2$ be as in Lemma \ref{11}. Then
\begin{itemize}
\item[(i)] $r_1<r_2$
\item[(ii)] $\tilde{s}_1<\frac{d}{r_1}<b +\frac{(\alpha_1+1)d}{r_1} <\tilde{s}_2+2$, $\tilde{s}_1<\frac{d}{r_2}<b +\frac{(\alpha+1)d}{r_2} <\tilde{s}_2+2$
\item[(iii)] $ \frac{d\alpha_1}{2r_1} + \frac{b}{2} < 1$, $\frac{d\alpha}{2r_2} + \frac{b}{2} < 1$
\item[(iv)] $\beta_1(\alpha_1+1)<1$, $\beta_2(\alpha+1)<1$
\end{itemize}
\end{AL}
\begin{proof}[{\bf Proof}]
(i) follows from $\alpha>\alpha_1$.
The first part of (ii) is a consequence of Lemma \ref{a1}. Note that $\frac{(\alpha+1)\alpha_1}{\alpha(\alpha_1+1)}<1$ and hence $\frac{\alpha+1}{r_2}<\frac{\alpha_1+1}{r_1}$. This, together with $\tilde{s}_1<\frac{d\alpha_1}{r_1\alpha}$,
implies the second part of (ii).
(iii) follows from (i) and Lemma \ref{a1} (ii).
The first inequality in (iv) is exactly part (iii) of Lemma \ref{a1}. Since $\beta_2(\alpha+1)=\beta_1(\alpha_1+1)\frac{(\alpha+1)\alpha_1}{\alpha(\alpha_1+1)}$, the last inequality in (iv) follows from the fact that $\frac{(\alpha+1)\alpha_1}{\alpha(\alpha_1+1)}<1$.
\end{proof}
\begin{proof}[{\bf Proof of Lemma \ref{11}}]
(1): $\beta_1>0$ follows from $r_1>\frac{d\alpha_1}{2-b}$; then $\beta_2,\beta_{12}>0$ follow from their definitions.
(2), (3), (4) are essentially consequences of parts (ii), (iii), (iv) of Lemma \ref{a2}, respectively.
(5), (6) follow by a simple computation.
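For the reader's convenience, the computation behind (5) and (6) can be sketched as follows, using $\beta_2+\frac{d}{2r_2}=\frac{2-b}{2\alpha}$, $\beta_1+\frac{d}{2r_1}=\frac{2-b}{2\alpha_1}$, $\frac{\alpha+1}{r_{12}}=\frac{\alpha_1+1}{r_1}$ and $\beta_{12}(\alpha+1)=\beta_1(\alpha_1+1)$:
\begin{align*}
\beta_2 -\frac{d\alpha}{2r_2} -\frac{b}{2} - \beta_2(\alpha + 1) + 1
&= -\alpha\Big(\beta_2+\frac{d}{2r_2}\Big)+\frac{2-b}{2}
= -\alpha\cdot\frac{2-b}{2\alpha}+\frac{2-b}{2}=0,\\
\beta_1- \frac{d}{2} \Big(\frac{\alpha+1}{r_{12}} - \frac{1}{r_1} \Big) -\frac{b}{2} - \beta_{12}(\alpha + 1) + 1
&= -\alpha_1\Big(\beta_1+\frac{d}{2r_1}\Big)+\frac{2-b}{2}
= -\alpha_1\cdot\frac{2-b}{2\alpha_1}+\frac{2-b}{2}=0.
\end{align*}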
\end{proof}
\noindent
{\textbf{Acknowledgement}:} D.G. B is thankful to DST-INSPIRE (DST/INSPIRE/04/2016/001507) for the research grant. S. H. acknowledges Dept of Atomic Energy, Govt of India, for the financial support and Harish-Chandra Research Institute for the research facilities provided.
\bibliographystyle{siam}
\section{Introduction}
Emmy Noether's paper ``Invariante Variationsprobleme'' \cite{Noether1918b} is regarded today as one of her most important works, especially in view of its relevance for mathematical physics \cite{Uhl}. Those familiar with her many other achievements might wonder why these have largely been cast in the shadows by Noether's Theorem, the famous result accounting for the relationship between symmetries in physical systems and their related conservation laws.\footnote{For a detailed analysis of \cite{Noether1918b} and its slow reception by the mathematical community, see \cite{Kosmann-Schwarzbach2006}. See also the commentary in \cite{Siegmund-Schultze2011}.}
To be sure, standard accounts of Emmy Noether's life have never claimed that her 1918 paper was particularly significant, and for good reason. Her
influence and eventual fame as a mathematician had virtually nothing to do with physics or the calculus of variations; these stemmed instead from her contributions to modern algebra.\footnote{See \cite{Alex}, \cite{Weyl-3}, \cite{Dick}, \cite{Kim}, and \cite{Kor}.}
Considering these circumstances, it is natural to ask what motivated Noether to take up this topic in the first place. \cite{Rowe1999} deals with how Noether's paper arose from discussions in G\"ottingen concerning the status of energy conservation laws in general relativity. This paper focusses on an earlier discussion that
arose in 1916 after Albert Einstein and David Hilbert published their first papers addressing the role of energy conservation in general relativity.
As will be shown below, the approaches taken by Einstein and Hilbert to this aspect of the theory
differed strikingly. Hilbert's short paper \cite{Hilbert1915} was written in great haste and afterward substantially revised when he read the page proofs. Its contents
baffled many readers, including Einstein. In 1918, Emmy Noether was working closely with Felix Klein, who was determined to decipher the mathematical meaning of Hilbert's invariant energy vector. Allusions to Noether's role in earlier discussions with Hilbert can be found in Klein's published papers, \cite{Kl-1} and \cite{Kl-2}.
Little has been written, however, about what Noether contributed to these conversations from 1916, mainly due to lack of documentary evidence that might shed more light on her activities during the war years.
It is my hope that the present paper will help to clarify an important episode in this story. Here I will mainly focus on her efforts, starting in early 1916, to assist Hilbert's researches on general relativity, while touching only briefly on her subsequent work with Klein, which culminated with the publication of \cite{Noether1918b}.
In the course of exploring the foundations of his general theory of relativity, Einstein had experimented with variational methods \cite{Einstein1914}. Hilbert was the first, however, to use a variational principle to derive fully covariant gravitational field equations in the form of Euler-Lagrange equations. He
published this result in the first of his two papers on ``The Foundations of Physics''
\cite{Hilbert1915}. There he emphasized that the resulting system of 14 field equations was {\it not} independent; instead it
satisfied four identical relations, which he interpreted as establishing a linkage between gravity and electromagnetism. However, the precise nature of these relations, and in particular their physical significance, remained obscure up until the publication of \cite{Noether1918b}, which completely clarified this question.
More mysterious still was what Hilbert called his invariant energy equation, which he based on a complicated construct that came to be known as Hilbert's energy vector $e^l$.\footnote{As noted below, its definition
(\ref{eq:Hilbert-vector}) depends linearly on an arbitrary infinitesimal vector $p^l$, so $e^l$ should be conceived as a vector field. For a detailed analysis of Hilbert's approach to conservation of energy-momentum, both in \cite{Hilbert1915} and in the earlier unpublished version in \cite{Hilbert2009},
see \cite{Renn/Stachel2007}.}
He derived this vector using classical techniques for producing differential invariants, an approach that differed sharply from Einstein's much more physically motivated derivation of energy conservation in \cite{Ein-6}. Klein later showed how Hilbert's energy vector arose naturally from the variational framework used in his theory \cite{Kl-2}. Six years later, when Hilbert decided to publish a modified account of his original theory in \cite{Hilbert1924}, he dropped all reference to his earlier approach to energy conservation, a clear indication that he no longer felt it had any importance for his unified field theory. Already in January 1918 Klein had exposed various hasty claims made in \cite{Hilbert1915}. His critique in \cite{Kl-1} set the stage for Noether's insightful analysis, which showed precisely how conservation laws and certain identities based on them arise in theories founded on a variational principle. Klein took it upon himself to analyze the various
proposals for conservation laws in
differential form in \cite{Kl-2}. In the course of doing so, he gave a simplified
and much clearer
derivation of Hilbert's invariant energy equation (\ref{eq:Hilbert-energy}). He also succeeded in characterizing Einstein's formulation of energy conservation as presented in \cite{Ein-8}. Soon afterward, Klein took up Einstein's integral form for energy-momentum conservation in \cite{Kl-7}.\footnote{For a summary account of these issues as seen three years later, see \cite[175--178]{Pau-2}.} In all of these studies he was assisted by Emmy Noether.
When
Einstein began to study Hilbert's paper \cite{Hilbert1915} in earnest in May 1916, he naturally wondered whether there might be some deep connection between his own findings and Hilbert's energy equation. Hilbert thought this was probably the case, and he said as much to Einstein. He also informed him that he had already asked Emmy Noether to investigate this question, a circumstance that suggests he may have been disinclined to pursue this matter himself.
What transpired afterward remains somewhat shrouded in mystery, but the present account will show that already by 1916 Noether had taken an important step toward solving this problem. In that year, she discovered that
Hilbert's energy theorem as well as Einstein's formula for energy conservation shared a formal property closely connected with the field equations for gravitation. Although she never published on this topic, direct allusions to her discovery came to the surface in
early 1918 when Klein and Hilbert published \cite{Kl-1}, an exchange of letters concerning the status of energy conservation in general relativity. Thanks to
the recovery of a 9-page manuscript based on Noether's work, we can now reconstruct in outline
the arguments she set forth in response to Hilbert's inquiry.
In recounting this story, I have shifted the focus away from the immediate events of 1918 that led to
Noether's seminal achievement, her paper \cite{Noether1918b}. In the course of its telling, however, it will become clear that the earlier events from 1916 -- in particular Noether's findings with respect to energy conservation in the theories of Hilbert and Einstein -- directly presaged her later work, which arose from Klein's determination to clarify these issues.
\section{On the Research Agendas of Hilbert and Klein}
In the late
spring of 1915, only shortly after Emmy Noether's arrival, Einstein came to G\"ottingen to deliver a series of six two-hour
lectures on his new theory of gravitation, the general theory of
relativity.\footnote{Einstein had been invited to G\"ottingen once before by Hilbert, in 1912,
but declined that invitation (Albert Einstein to
David Hilbert, 4 October 1912, in \cite[321--322]{Ein-3}).}
Einstein was pleased with the reception he was accorded, and
expressed particular pleasure with Hilbert's reaction. ``I am very
enthusiastic about Hilbert,'' he wrote Arnold Sommerfeld, ``an important
man!'' \cite[147]{Einstein1998a}.
Hilbert had a long-standing interest in mathematical physics
(see \cite{Corry2004}, \cite{Corry2007}).
Following Hermann Minkowski's lead, he and other G\"ottingen mathematicians felt
strongly drawn to the formal elegance of relativity theory. For Hilbert, who had been advocating an axiomatic approach to physics for many years, relativity was ready-made for this program. In later years, he liked to joke that ``physics had become too difficult
for the physicists,'' a quip that was probably not meant entirely seriously (\cite[347]{Weyl-7}).
Although little is known about what transpired during the week of his visit to G\"ottingen, Einstein was clearly delighted by the response he received: ``to my great joy, I succeeded in convincing
Hilbert and Klein completely.''\footnote{Einstein to
W.J. de Haas, undated,
probably August 1915, \cite[162]{Einstein1998a}.} As for Hilbert's reaction to Einstein's visit, this encounter inspired him to consider whether general relativity
might provide a fruitful framework for combining Einstein's gravitational theory with Gustav Mie's electromagnetic theory
of matter.
By the fall of 1915, however, Einstein was no longer expressing the kind of
self-satisfaction he felt immediately after
delivering his G\"ottingen lectures.
On 7 November, he wrote
Hilbert: ``I realized about four weeks ago
that my methods of proof used until then were deceptive'' \cite[191]{Einstein1998a}.
Thus began a flurry of exchanges in which Einstein and
Hilbert corresponded directly with
one another as well as through Arnold Sommerfeld \cite{Rowe2001}.
On November 20, Hilbert presented
the first of his two communications
to the
G\"ottingen Scientific Society.
Five days later, Einstein submitted \cite{Ein-1}, the last of his four
notes on general relativity to the Berlin
Academy.
Abandoning the basic
assumptions of his theory, he reaffirmed the centrality of
general covariance while seeking a corresponding set of
field equations for gravitation by making use of the Ricci tensor.\footnote{On Einstein's struggles from this period, see
\cite{Stach}, \cite{Janssen2014}, \cite{Janssen/Renn2007}, and \cite{Janssen/Renn2015}.}
The note \cite{Ein-1} contains the fundamental equations:
\begin{equation}\label{eq:einst}
R_{\mu\nu}= -\kappa(T_{\mu\nu}-\frac{1}{2}g_{\mu\nu}T),
\end{equation}
where $g_{\mu\nu}$ and $R_{\mu\nu}$ are the metric and Ricci
tensors, respectively. Einstein's argument for these equations was
highly heuristic in nature, but he had already shown how, by using them
in a simplified form, he could
calculate the displacement in Mercury's perihelion, a major breakthrough for precision
measurements in solar astronomy. Hilbert, on the other hand, was able to derive gravitational field equations from a variational principle, an important mathematical achievement. Much as he had done in his other physical research, Hilbert hoped that by exploiting axiomatic and variational methods he would be able to place relativistic field theory on a firm footing.\footnote{While some authors have portrayed the events of November 1915 as a race to arrive at the equations (\ref{eq:einst}), recent research has made clear that this was indeed a major concern for Einstein, but much less so for Hilbert; see \cite{Sauer1999} and
\cite[9--17]{Sauer/Majer2009}.}
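For the modern reader it may be worth noting that (\ref{eq:einst}) is equivalent to the trace-reversed form in which the field equations are usually written today; a quick check (assuming only four spacetime dimensions, so that $g^{\mu\nu}g_{\mu\nu}=4$) runs as follows. Contracting (\ref{eq:einst}) with $g^{\mu\nu}$ gives
$$R = -\kappa\,(T - 2T) = \kappa T, \qquad R = g^{\mu\nu}R_{\mu\nu},\quad T = g^{\mu\nu}T_{\mu\nu},$$
and substituting $T = R/\kappa$ back into (\ref{eq:einst}) yields
$$R_{\mu\nu} - \tfrac{1}{2}\,g_{\mu\nu}R = -\kappa\,T_{\mu\nu}.$$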
Initially, Emmy Noether worked closely with Hilbert, but she also assisted Felix Klein in preparing his lectures on the
development of mathematics during the nineteenth century \cite{Kl-5}.
Starting in the summer of 1916, Klein broke off these lectures in order to begin a 3-semester course
on the mathematical foundations of
relativity theory (published posthumously in \cite{Kl-6}). Much of what he presented during the first two semesters centered on the background to special relativity, including Maxwell's theory, but also the classical theory of algebraic and differential invariants. By the third semester, though, he entered the mathematical terrain of general relativity: Riemannian geometry and Ricci's absolute differential calculus.
Klein's interests diverged rather strikingly from those of Minkowski and Hilbert, both of whom hoped to break new ground in electrodynamics. Unlike them, he was exclusively interested in the mathematical underpinnings of the new physics.
Once Einstein pointed the way to a gravitational theory based on generalizing Minkowski space to a Riemannian manifold, Klein began to explore the purely mathematical foundations underlying Einstein's new Ansatz.\footnote{One year after
Minkowski's premature death in 1909, Klein took up the connection between
Minkowski's spacetime geometry, based on the invariance properties of the
Lorentz group, and the ideas in his ``Erlangen Program'' \cite{Kl-3}.
Klein's ``Erlangen Program'' \cite{Kl-4}
was republished many times, e.g. in
\cite[460--497]{Kl-GMA}.}
By the end of 1917, Klein sent Einstein a copy of the
{\it Ausarbeitungen} of his lectures on the
mathematical foundations of relativity. The latter's
response was not very flattering:
``it seems to me that you highly overrate the value of formal points of view. These
may be valuable when an {\it already found} truth needs to be
formulated in a final form, but they
fail almost always as heuristic aids'' \cite[569]{Einstein1998a}.
Compared with Hilbert's research program, Klein's
agenda was rather
modest. Indeed,
Hilbert was pursuing the far more ambitious goal of trying to find a connection between gravity and electromagnetism. His guiding ideas regarding the latter came from Gustav Mie's theory of matter.\footnote{Mie's approach to field physics also exerted a strong influence on Hermann Weyl up until around 1920. See Weyl's remarks in \cite[1952: 211]{Weyl-2} and the note he later added on p. 216 after he became disillusioned with this program.}
Hilbert was especially attracted to Max Born's presentation of Mie's theory in \cite{Born1914}\footnote{For discussions of this paper, see
\cite[309--315]{Corry2004} and \cite{Smeenk/Martin2007}.} because of its
mathematical elegance and reliance on variational methods. Variational principles had a longstanding place in classical mechanics, particularly due to the influential work of J. L. Lagrange, but their use in electrodynamics and field physics brought about numerous challenges.
In the context of Mie's theory, Born showed how to derive its fundamental equations from a variational principle by varying the field variables
rather than varying the coordinates for space and time.
Emmy Noether presumably had little knowledge of variational methods when she joined Hilbert's research group in 1915. What she knew very well, however, were related methods for using formal differential operators to generate algebraic and differential invariants.\footnote{For an introduction to this field, see \cite{Olver}.} In November 1915 she wrote to her friend and former Erlangen mentor, Ernst Fischer, to tell him about her work in G\"ottingen. Fischer had studied in Vienna under Franz Mertens, a leading expert on invariant theory whose work had influenced the young
David Hilbert.\footnote{Mertens's influence on Hilbert is recounted in \cite[163--164]{Rowe2018}.} From Noether he now learned that
Hilbert had created a buzz of excitement about invariant theory, so that even the physicist Gustav Hertz was studying
the classical literature \cite[30--31]{Dick}. She herself had learned these older methods in Erlangen from Paul Gordan, who supervised her dissertation, a tedious study of the invariants and covariants associated with a ternary biquadratic form.
Hertz was learning them from
her {\it Doktorvater's} old lectures, edited by Georg Kerschensteiner in \cite{Kersch}. Noether knew that Hilbert was pushing his team on with hopes for a breakthrough in physics, but she freely admitted that none of them had any idea what good their calculations might be \cite[30--31]{Dick}.
No doubt Hilbert thought about this along lines first explored by Gustav Mie in his search for a
suitable ``world function'' $\Phi$ that would lead to an electromagnetic theory of matter
\cite{Mie1912}. Mie assumed that such a function $\Phi$ would have to be Lorentz covariant, thus compatible with the special theory of relativity, and furthermore that it should depend on the field variables alone. As Max Born pointed out, the latter assumption represented an important deviation from classical electron theory, in which the space and time coordinates enter the Lagrangian formalism. ``In Mie's theory,'' he writes, ``the forces that hold atoms and electrons together should arise naturally from the formulation of $\Phi$, whereas in the classical theory of electrons the forces have to be specifically added'' \cite[753]{Born1914}. As for the demand that $\Phi$ be Lorentz covariant, Mie showed that this meant it had to be a function of just four invariant quantities.\footnote{Several years later, it was discovered that Mie and his contemporaries had overlooked a fifth invariant; see \cite[627, footnote 9]{Smeenk/Martin2007}.}
This same feature applied to Hilbert's world function $H$, which was invariant under general coordinate transformations. In his lecture course on foundations of physics from 1916/17, Hilbert emphasized the importance of restricting the possibilities for
the Lagrangian $H$ \cite[287--290]{Sauer/Majer2009}. He took this to be of the form $H=K+L$, where
$K$ is the Riemannian curvature scalar and the electromagnetic Lagrangian $L$ depends on the metric tensor $g_{\mu\nu}$, but not on its derivatives. Hilbert noted that the $g_{\mu\nu}$ had to be present in $L$, as otherwise one could not construct any invariants from the electromagnetic potentials alone. By the same token: ``this assumption leads to a truly {\it powerful simplification},'' since it means that $L$ has to be a function of just four known invariants \cite[287]{Sauer/Majer2009}. Since the gravitational part of $H$ was given by $K$, this meant that Hilbert's program rested on finding the requisite properties of these invariants in order to construct $L$.\footnote{In this course from 1916/17, Hilbert made the additional assumption that the derivatives $q_{hk}$ only enter $L$ quadratically, from which he deduced that $L$ takes the form $L=aQ+a_1Q_1+a_2Q_2+ f(q)$, where $Q,\,Q_1,\,Q_2, \,q$ are the four known invariants underlying his theory.}
His initial enthusiasm for these ideas did not last long, however, and by 1917 Hilbert's ambitions for a unified field theory
of ``everything'' passed over to a ready acceptance of Einstein's position, namely that general relativity had no immediate relevance for microphysics \cite{Renn/Stachel2007}.
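In a notation close to that of \cite{Hilbert1915}, the variational Ansatz just described can be summarized schematically as follows (constants and sign conventions are suppressed here). Hilbert postulated
$$\delta \int H\,\sqrt{g}\;d\omega = 0, \qquad H = K + L,$$
where variation with respect to the ten gravitational potentials $g^{\mu\nu}$ gives the gravitational field equations as Lagrangian (Euler--Lagrange) equations,
$$\frac{\partial(\sqrt{g}\,H)}{\partial g^{\mu\nu}} - \sum_k \frac{\partial}{\partial x_k}\,\frac{\partial(\sqrt{g}\,H)}{\partial g^{\mu\nu}_k} + \sum_{k,l} \frac{\partial^2}{\partial x_k\,\partial x_l}\,\frac{\partial(\sqrt{g}\,H)}{\partial g^{\mu\nu}_{kl}} = 0,$$
while variation with respect to the four electromagnetic potentials $q_h$ yields the generalized Maxwell equations. These $10+4$ equations are the 14 field equations of his theory, linked, as he emphasized, by four identical relations.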
Emmy Noether presumably gained some understanding of what Hilbert hoped to achieve for physics by drawing on advances in invariant theory. Yet if so, she surely never thought of her own work as motivated by Hilbert's physical program.
In fact, she was already pursuing a program for invariant theory that was inspired by her collaboration with Ernst Fischer.\footnote{For an idea of the scope of her research program in invariant theory, see \cite{Noether1923}.}
On 22 August 1917, she wrote him to announce that she had finally solved a problem that had occupied her attention since spring, namely the extension of a theorem proved by E.B. Christoffel and G. Ricci for quadratic differential forms to forms with any finite number of variables \cite[33]{Dick}. On 15 January 1918, Noether presented a lecture on her ``Reduction Theorem''
at a meeting of the G\"ottingen Mathematical Society, and ten days later Felix Klein submitted her paper \cite{Noether1918a} for publication. Drawing on methods in the calculus of variations introduced by Lagrange, Riemann, and Lipschitz, she shows how problems involving systems of differential invariants can be reduced to classical invariant theory, i.e. invariants of the projective group. Her treatment of Lagrangian derivatives as formal invariants reveals that this paper is closely related to the far more famous \cite{Noether1918b}.
\section{On Conservation Laws in General Relativity}
Whereas Hilbert hoped to use Einstein's gravitational theory as a framework for a new unified field theory, Noether remained what she had always been: a pure mathematician. Her work thus
aimed to clarify the mathematical underpinnings of general relativity, an effort strongly promoted by Felix Klein, who took up this challenge around the time that Hilbert's interests were turning back to the foundations of mathematics \cite[22]{Sauer/Majer2009}.
In March of 1917, Klein
initiated a correspondence with Einstein
that sheds considerable light on
how both
thought about relativity theory and the
general relationship between
mathematical and physical reasoning.
Their letters mainly reflect the
three topics which were then uppermost in Klein's mind:
1) the conceptual links between relativity theory and
his ``Erlangen Program''; 2) the cosmological models proposed by Einstein and Willem de Sitter, in particular as these related to
non-Euclidean geometries;
and 3) the role of conservation
laws in classical and relativistic physics. Only this last topic will be discussed here, but the others are suggestive of the broader range of issues central to the reception of general relativity in G\"ottingen.\footnote{For a discussion of topic 2), see \cite[279--299]{Rowe2018}.}
Beginning in March 1918, the correspondence between Klein and Einstein
intensified
markedly
following the appearance of \cite{Kl-1}.
This paper arose from a presentation Klein made on 22 January 1918 to members of the G\"ottingen
Mathematical Society, a talk that elicited a reaction
from Hilbert one week later. The conclusions drawn from these two
sessions were later summarized in the journal of the German
Mathematical Society: ``The `conservation laws' valid for continua
in classical mechanics (the impulse-energy theorems) are already
contained in the field equations in
Einstein's newly inaugurated theory; they thereby lose their
independent
significance.''\footnote{{\it Jahresbericht der
Deutschen Mathematiker--Vereinigung}, 27 (1918),
(``Mitteilungen und Nachrichten''), p. 28.}
Klein and Hilbert afterward agreed to publish their respective viewpoints in the {\it G\"ottinger Nachrichten}
as an epistolary exchange.\footnote{Klein had already presented a preliminary version of \cite{Kl-1} at a meeting of the scientific society on 25 January.} Considering that they both lived only a short distance from one another on the Wilhelm Weber Strasse, one might wonder why they chose to publish the gist of their discussions as an exchange of correspondence. In any event, the views they set forth harmonized and were surely meant to be seen as representing
the consensus opinion on these matters in G\"ottingen.
In \cite{Kl-1}, Klein underscored that Hilbert's invariant energy equation should not be viewed as a conservation law in the sense of classical mechanics. The latter could only be derived by invoking physical properties of matter, whereas Hilbert's equation followed directly from the gravitational field equations by means of purely formal considerations. Klein further remarked that Emmy Noether had already noticed this and had worked out all the details in a manuscript, a text she had shown him.
``You know,'' he wrote, ``that Miss Noether advises me continually regarding my work, and that, in fact, it is only thanks to her that I have understood these questions'' \cite[559]{Kl-1}.
Hilbert was certainly very well aware of this, and he responded as follows:
\begin{quote}
I fully agree with the substance of your statements on the energy theorems. Emmy Noether, on whom I have called for assistance more than a year ago to clarify this type of analytical question concerning my energy theorem, found at that time that the energy components that I had proposed -- as well as those of Einstein -- could be formally transformed, using the Lagrangian differential equations . . . of my first note, into expressions whose divergence vanishes identically . . .. \cite[560--561]{Kl-1}.
\end{quote}
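Stated in modern (and admittedly anachronistic) terms, the formal property to which Hilbert alludes can be traced to the contracted Bianchi identities: the left-hand side of the gravitational field equations satisfies
$$\nabla_{\nu}\Bigl(R^{\mu\nu}-\tfrac{1}{2}\,g^{\mu\nu}R\Bigr)\equiv 0$$
identically, i.e., independently of whether the field equations hold. Any energy expression obtained from this left-hand side by means of the field equations therefore has identically vanishing divergence, which is the sense in which Klein could insist that such ``conservation laws'' carry no independent physical significance.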
As it happens, a Swiss student named Rudolf Jakob Humm attended Klein's lecture and was impressed by what he heard.
Humm was studying relativity under Hilbert, but he had also spent a semester in Berlin attending Einstein's lecture course. Since he was particularly interested in energy conservation, Humm surely read \cite{Kl-1}, which would have made him aware of Noether's manuscript had he not already known about it. In any event, Humm must have approached her at some point to ask if he could copy part of this text.\footnote{This copy, written in Humm's hand, is just one of the many documents in his posthumous papers relating to his interest in general relativity during the war years (Nachlass Rudolf Jakob Humm, Zentralbibliothek Z\"urich).}
Humm grew up in Modena and later
completed his secondary education at the Kantonsschule in
Aarau. This was the same
institution Einstein had attended for one year
before he entered the Polytechnicum in Z\"urich, a
circumstance that possibly inspired Humm to study relativity
in Germany. He first studied mathematics in Munich in 1915, before moving on to
G\"ottingen. By the winter semester of 1916/17 he was
thoroughly steeped in theoretical physics.
During wartime, university enrollments plummeted, so Humm was moving in a
small world in which people saw one another nearly every day. His contacts with
Emmy Noether were apparently
rather fleeting, whereas he regularly interacted with several fellow natives of Switzerland, including
the physicist Paul Scherrer and his wife, Paul Finsler, and
Richard B\"ar. Humm also socialized with Vsevolod
Frederiks, one of several Russians studying physics and
mathematics in G\"ottingen \cite[114--115]{Rowe2004}, and he befriended
Willy Windau, a blind mathematician who went on
to take his doctorate under Hilbert in 1920. Another sometime companion
was the astronomer Walter Baade, who took his
degree in 1919.
One evening in
April 1917, he and Baade met for drinks at
the Hotel National. Humm was
somewhat despondent on this occasion, in part because of
the meager course offerings for the coming semester. He
had been following Hilbert's relativity course with enthusiasm,
but for the summer, the master would be teaching
only a four-hour course on set theory. Over the course of that evening, Baade
convinced him to leave G\"ottingen for Berlin, where
Einstein had already begun
teaching a course on relativity. A few days later, Humm was already settled in and looking forward
to Einstein's course, which was held on Thursdays from 2 to 4. He also made plans to
attend the physics colloquium, which was run by Heinrich Rubens.
Humm had missed the first two lectures, so he had some
questions after hearing the third. Evidently, Einstein offered
to meet with him the following Saturday, an encounter that
led to a series of remarks by the physicist that Humm tried
to reconstruct in his diary. Einstein had recently read Hilbert's
second note on the foundations of physics \cite{Hilbert1917}. There, Hilbert had introduced a special coordinate
system in order to preserve causal relations in general relativity,
but Einstein thought this was inadmissible because
it could lead to worldlines that converge, thereby yielding
space-time singularities. He had already mentioned this
criticism two weeks earlier in a letter to Felix Klein.\footnote{Einstein to Klein, 24
April 1917, \cite[426]{Einstein1998a}.}
This was only one of several conversations Humm had
with Einstein during his three-month stay in Berlin.
Alongside Einstein's course, he also attended Max Planck's
lectures on quantum theory as well as Rubens's weekly
colloquium, which met on Wednesdays. He found this all
quite stimulating, but he also missed the conveniences of
G\"ottingen's Lesezimmer. In Berlin one had to order books
from the library, so there was no opportunity to browse
open shelves to pick out the volumes one might want to
read. Rubens had asked Humm to speak in the colloquium
in early August, but this plan was aborted after Einstein fell
ill in mid-July. His assistant, Jakob Grommer, then took
over the course, while Einstein left for Switzerland to
recover from an intestinal ailment. For Humm, this sudden
turn of events meant that he had little incentive to stay in
Berlin any longer. So he canceled his colloquium lecture
and soon thereafter returned to G\"ottingen.
Humm was surely well-versed when it came to the various proposals for dealing with energy conservation in general relativity. He had attended Hilbert's year-long course on ``Die Grundlagen der Physik'', in which this topic received renewed attention \cite[304--306]{Sauer/Majer2009}. When he arrived back
in G\"ottingen for the winter semester 1917/18, Hilbert appointed him to prepare the official {\it Ausarbeitung} for his lecture course on electron theory.
At the end of Einstein's final lecture, Humm was keen to learn his opinion about
one of the most controversial parts of his theory, namely Einstein's pseudo-tensor for representing gravitational energy. Throughout 1917 and 1918, Einstein argued against much skeptical opinion that the expression for gravitational energy could not be a general tensor; on the contrary, it needed to vary with the coordinate frame. Humm recorded this response in his notes from Einstein's lecture:
\begin{quote}
I asked Einstein if it would be possible to generalize the conservation equation
$$\frac{\partial{({\frak T}^{\sigma}_{\mu}+{\frak t}^{\sigma}_{\mu})}}{\partial{x_{\sigma}}}=0$$
so that it would contain only real tensors. He thought not: one does not shy from writing
$$\frac{\partial{(T+U)}}{\partial{t}}=0$$
in classical mechanics, where $U$ is an invariant under Galilean transformations, but $T$ is not. So it is not so terrible to have the general tensor ${\frak T}_{\mu}^{\sigma}$ next to the special ${\frak t}_{\mu}^{\sigma}$. If one considers an accelerative field, then there will be a ${\frak t}_{\mu}^{\sigma}$, even though the field can be transformed away. In the end, one can operate with any arbitrary concept, and it cannot be said that they have to be tensor quantities; the [Christoffel symbols] are also not tensors, but one operates with them. The ${\frak t}_{\mu}^{\sigma}$ are the quantities that deliver the most. (Nachlass Rudolf Jakob Humm, Zentralbibliothek Z\"urich)
\end{quote}
Humm was strongly drawn to Einstein's highly conceptual
way of thinking about fundamental physical
problems, an approach he contrasted with Hilbert's purely
mathematical approach. Energy conservation and the
equations of motion in general relativity would thenceforth
become his principal research agenda.
Meanwhile, Humm continued to stay in contact with
Einstein, who submitted two of his papers for publication
in {\it Annalen der Physik}. The first of these was \cite{Humm1918},
written in May 1918, just two months before Emmy Noether
presented her paper \cite{Noether1918b}. In it, Humm takes
considerable care to explain how one can apply different
variational methods to obtain results adapted to a particular
physical setting. Among his findings, he could show that
Einstein's differential equations for conservation of energy
were derivable from the equations of motion, i.e., the
assumption that a test particle moves along a geodesic in
curved space-time. Humm submitted
\cite{Humm1919}, his second paper, just one month after Noether completed hers;
again, one finds striking parallels between them. This
second contribution aimed to show that Einstein's energy
equations could be seen as equivalent to equations of
motion, an argument based on certain analogies with
Lagrangian mechanics.
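The ``equations of motion'' at issue here are the geodesic equations of the spacetime metric: in modern notation, a test particle's worldline $x^{\mu}(s)$, parametrized by proper time $s$, satisfies
$$\frac{d^2 x^{\mu}}{ds^2} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{ds}\,\frac{dx^{\beta}}{ds} = 0,$$
where the $\Gamma^{\mu}_{\alpha\beta}$ are the Christoffel symbols of the metric $g_{\mu\nu}$.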
Humm's transcription of Noether's results from 1916 will be discussed below. First, however, it will be necessary to consider how Hilbert derived his invariant energy vector in \cite{Hilbert1915}, after which I will briefly describe Einstein's handling of energy conservation in general relativity in \cite{Ein-6}.
\section{Hilbert's Approach to Energy in \cite{Hilbert1915} }
When Einstein delivered his six Wolfskehl lectures in G\"ottingen in the late spring of 1915, he was advancing a version of a gravitational theory that he had earlier worked out with the help of the mathematician Marcel Grossmann \cite{E/G1913}. At this time, he was convinced that if such a theory were based entirely on the principle of general covariance, then it would necessarily be undetermined. For this reason, he and Grossmann reached the conclusion that the gravitational field equations could not be generally covariant. Instead, their equations were covariant only with respect to a more restricted group that included the linear transformations. In G\"ottingen Einstein quite possibly spoke about the possibility of using energy conservation in order to bring about this restriction, one of several problems he had yet to solve.
In any event, Hilbert's initial attempt to subsume Mie's theory within the context of general relativity followed
Einstein's then current belief that the field equations themselves could not be
generally covariant. Moreover, to avoid this problem he hit on the idea of using
energy conservation to restrict the system of allowable coordinates, a method designed to preserve causal relations. This initial foray into general relativity, however, never found its way into print. Hilbert's contemporaries were therefore unaware that his original approach to energy conservation differed fundamentally from the one that appeared in his published note, \cite{Hilbert1915}. The discrepancy only came to light in the late 1990s, when historians found that, although this note still bore the original date of submission (20 November 1915),
Hilbert had heavily revised it after receiving the page proofs in December 1915 \cite{Cor}.\footnote{The extant page proofs are incomplete, however. What they likely once contained has been discussed in \cite{Sauer2005} and in \cite{Renn/Stachel2007}.}
In these page proofs, published as \cite{Hilbert2009}, Hilbert introduces a linear invariant as the ``energy form,'' from which he derives four coordinate conditions from a divergence equation. He then introduces this coordinate system as an ``axiom for space and time,'' thereby obtaining a total of 14 differential equations for the 14 field variables required for combining Einstein's and Mie's theories. He adopted this strategy in order to circumvent the problem he foresaw if the theory only admitted 10 equations, which would leave four degrees of freedom for the motion of a physical system.
Hilbert dropped all this, however, in the published version of \cite{Hilbert1915}, where his modified energy law is fully covariant.
Nevertheless, as described in
\cite{Br-Ryck2018}, he continued to struggle with the problem of reconciling general covariance with causality, which returns to the fore in \cite{Hilbert1917}, the sequel to his first note on ``Die Grundlagen der Physik''. As John Stachel notes in
\cite{Stachel1992}, this paper was the first attempt to deal with the Cauchy problem in general relativity. In it, Hilbert commented:
\begin{quote}
As far as the causality principle is concerned, if the physical quantities and their time derivatives are known in the present in any given coordinate system, then a statement will only have physical meaning if it is invariant with respect to those transformations for which the coordinates used are precisely those for which the known present values remain invariant. I claim that all assertions of this kind are uniquely determined for the future as well, i.e., that the causality principle is valid in the following formulation:
From knowledge of the fourteen potentials \dots \,\,
in the present all statements about them in the future follow necessarily and uniquely insofar as they have physical meaning.
\cite[61]{Hilbert1917}
\end{quote}
Hilbert's theory in \cite{Hilbert1915} was based on
two axioms that concern the properties
of a ``world
function'' $$H(g_{\mu\nu}, g_{\mu\nu, l}, g_{\mu\nu, lk}, q_s, q_{s,l}).$$ This
$H$ is taken to be
a scalar-valued function that does not depend explicitly on the spacetime coordinates $w_s$ but rather on
the ten components of the symmetric metric tensor
$g_{\mu\nu}$ and its first and second derivatives as
well as four electromagnetic potentials $q_s$ and their first
derivatives. Hilbert notes that $H$ could just as well be defined by means of the contravariant arguments $g^{\mu\nu}, \, g^{\mu\nu}_l,
\, g^{\mu\nu}_{lk}$, which he adopts afterward.
Axiom I then
asserts that
under infinitesimal variations of the field functions $g^{\mu\nu}
\rightarrow g^{\mu\nu} +\delta g^{\mu\nu}$ and $q_s \rightarrow q_s
+ \delta q_s$,
$$\delta\int H\sqrt g d\omega =0,$$
where $g= | g_{\mu\nu}|$ and $ d\omega = dw_1dw_2dw_3dw_4.$
This variational principle is understood to apply throughout
a finite region of space-time.
Axiom II then simply states that this world function $H$ is taken to be
invariant under general coordinate
transformations.
By virtue of Axiom I, Hilbert obtained ten Lagrangian differential equations for the
ten gravitational potentials $g^{\mu\nu}$:
\begin{equation}\label{eq:grav-Lagrange}
\frac{\partial \sqrt g H}{\partial g^{\mu\nu}} - \sum_k
\frac{\partial}{\partial w_k}\frac{\partial \sqrt g H}
{\partial g_k^{\mu\nu}} + \sum_{k,l}\frac{\partial^2}
{\partial w_k \partial w_l} \frac{\partial \sqrt g H}
{\partial g_{kl}^{\mu\nu}}=0.
\end{equation}
Similarly, for the four electrodynamic potentials $q_s$, he derived the four equations:
\begin{equation}\label{eq:ed-Lagrange}
\frac{\partial \sqrt g H}{\partial q_h} - \sum_k \frac{\partial}
{\partial w_k}\frac{\partial \sqrt g H}{\partial q_{hk}} =0.
\end{equation}
Hilbert called the first set of equations the fundamental equations
of gravitation and the second the fundamental equations of
electrodynamics,
abbreviating these to read:
\begin{equation}\label{eq:Lagrange-derivs}
[\sqrt g H]_{\mu\nu} =0, \quad [\sqrt g H]_h =0.
\end{equation}
In the course of developing his theory, Hilbert focused on the special case where the Lagrangian $H$ takes the form $H= K+L$.
Here $K$ is the curvature scalar obtained by contracting the Ricci tensor $K_{\mu\nu}$, i.e.,
$K= \sum_{\mu\nu} g^{\mu\nu}K_{\mu\nu}$.
He placed no special conditions on the Lagrangian $L$, but noted that it contained no derivatives of the metric tensor, so that
$H = K + L(g^{\mu\nu}, q_s, q_{s,l})$. Utilizing a general theorem for constructing differential invariants from a given invariant, Hilbert showed how $L$ led to a differential equation that served as a generalized Maxwell equation, as in Gustav Mie's electromagnetic theory of matter.
Hilbert's central claim concerned four independent linear relations satisfied by the $[\sqrt g L]_h$ and their derivatives. He showed toward the end of \cite{Hilbert1915} how these can be derived from the fundamental equations of gravitation. His argument depended on first identifying the electromagnetic part of his energy vector $e^l$ with Mie's expression for energy. Hilbert's more general expression passed over to Mie's when the metric tensor $g_{\mu\nu}$ took on values for a flat spacetime. He then showed that this same expression could be linked to the gravitational field equations, which for $H=K+L$ take the form:
\begin{equation}\label{eq:Hilbert-grav-eqns}
[\sqrt g K]_{\mu\nu} + \frac{\partial \sqrt g L}{\partial g^{\mu\nu}} =0.\footnote{Much has been written about how Hilbert came to recognize the Lagrangian derivative $[\sqrt g K]_{\mu\nu}$ as being identical with the Einstein tensor $\sqrt g (K_{\mu\nu}-\frac{1}{2}g_{\mu\nu}K)$, but it should not be overlooked that this claim plays no role whatsoever in the arguments presented in \cite{Hilbert1915}. Hilbert surely felt it important to establish this linkage with Einstein's theory of gravitation, but the results he set forth did not make use of the specific form of Einstein's field equations.}
\end{equation}
The electromagnetic part of $e^l$ could then be written:
\begin{equation}\label{eq:Hilbert-EM-Energie}
\frac{-2}{\sqrt g}\sum_{\mu, s}\frac{\partial \sqrt g L}{\partial g^{\mu s}}g^{\mu l}p^s.
\end{equation}
After carrying out a number of quite complicated transformations and making use of (\ref{eq:Hilbert-grav-eqns}), Hilbert obtained four identities involving the Lagrangian expressions
$ [\sqrt g L]_m$ and their first derivatives:
\begin{equation}\label{eq:Lagrange-identities}
\sum_{m}\left(M_{m\nu}\, [\sqrt g L]_{m} + q_{\nu}\, \frac{\partial [\sqrt g L]_{m}}{\partial w_m}\right) =0,\footnote{His derivation of this equation is found on \cite[405--406]{Hilbert1915}.}
\end{equation}
where $M_{\mu\nu} = q_{\mu\nu}-q_{\nu\mu}.$
Somewhat misleadingly, however, he related this specific result to a much more general one, announced but not proved at the outset. This was his Theorem I, which generalized the situation described by axioms I and II to any invariant $J$ in $n$ variables and their derivatives. From such an invariant variational framework one obtains $n$ Lagrangian differential equations, of which four are always consequences of the remaining $n-4$, in the sense that four mutually independent linear combinations of these $n$ equations and their total derivatives vanish identically. So stated, this theorem asserts that
four of the fourteen equations $[\sqrt g H]_{\mu\nu} =0, \,\, [\sqrt g H]_h =0$
can be deduced directly from the other ten. Hilbert seized on this result to make a strong physical claim:
\begin{quote}
\dots on account of that theorem
we can immediately make the assertion, {\it that in the
sense indicated the electrodynamic phenomena are the effects of gravitation.}
In recognizing this, I discern the simple and very surprising solution of
the problem of Riemann, who was the first to search for a theoretical
connection between gravitation and light. \cite[397--398]{Hilbert1915}\footnote{Hilbert was alluding here to Riemann's
posthumously published ``Gravitation und Licht,''
in \cite[496]{Riem}.}
\end{quote}
Hilbert would later drop this passage in \cite{Hilbert1924}, a new version of his two notes, although the theorem he stated was correct and certainly important.
A more immediately controversial and confusing aspect, however, concerned Hilbert's handling of energy conservation. In the first part of his paper, he constructed a complicated invariant $e^l$, his energy vector, and proved that its divergence vanished; this was his invariant energy theorem:
\begin{equation}\label{eq:Hilbert-energy}
\sum_l \frac {\partial \sqrt g e^l}
{{\partial w_l} } =0.
\end{equation}
The energy vector $e^l$ is defined by starting with an arbitrary vector $p^l$ and then
building four other vectors by means of differential invariants.
The resulting construction takes this form:
\begin{equation}\label{eq:Hilbert-vector}
e^l= H\, p^l-a^l-b^l-c^l-d^l.
\end{equation}
Each of these five terms is an invariant, but only the first depends on both the gravitational and electromagnetic potentials. The vectors $a^l, \, b^l$ contain expressions without the $q_s$, whereas the $c^l, \, d^l$ are independent of the $g^{\mu\nu}$. Hilbert
emphasized that his energy equation holds for any $H$ satisfying the first two axioms, even though the construction of $e^l$ clearly reveals that he had the special case $H=K+L$ in mind. Thus, while he formulates Theorem II for a general invariant $J$ of the type $H$, Hilbert decomposes the operator $P$ that acts as the first polar:
\begin{equation}\label{eq:polar}
\sum_s \frac {\partial J}
{\partial w_s} p^s =P(J)\footnote{This equation vexed Einstein, who wrote to Hilbert on 25 May 1916 (see below). Hilbert noted in his reply that the coefficients in the power series expansion arising from a displacement of the variables in the invariant $J$ will themselves be invariant, and furthermore that the first order terms are those given by $P=P_g+P_q$. }
\end{equation}
by writing $P=P_g+P_q$ in order to separate the gravitational and electromagnetic terms, where
$$P_g= \sum_{\mu,\nu,l,k } (p^{\mu\nu}\frac {\partial }{\partial g^{\mu\nu}}+p^{\mu\nu}_l\frac {\partial }{\partial g^{\mu\nu}_l}+p^{\mu\nu}_{lk}\frac {\partial }{\partial g^{\mu\nu}_{lk}})$$ and
$$P_q= \sum_{l,k } (p_l\frac {\partial }{\partial q_l}+p_{lk}\frac {\partial }{\partial q_{lk}} ),$$
where the lower suffixes in $p$ denote coordinate derivatives.
Hilbert then applies the first operator to polarize the expression $\sqrt g H$:
\begin{equation}\label{eq:polar-g}
P_g(\sqrt g H) = \sum_{\mu,\nu,l,k } (p^{\mu\nu}\frac {\partial \sqrt g H}{\partial g^{\mu\nu}}+p^{\mu\nu}_l\frac {\partial \sqrt g H}{\partial g^{\mu\nu}_l}+p^{\mu\nu}_{lk}\frac {\partial \sqrt g H}{\partial g^{\mu\nu}_{lk}}).
\end{equation}
Using formal properties of tensors, Hilbert introduces the two vectors $a^l, \, b^l$ and shows that they satisfy the equation
$$P_g(\sqrt g H) - \sum_{l} \frac {\partial \sqrt g (a^l + b^l)}{\partial w_l} = \sum_{\mu ,\nu} [ \sqrt g H]_{\mu\nu} p^{\mu\nu} .$$
In constructing the vector $a^l$, he first notes that the coefficient of $p^{\mu\nu}_{lk}$ in the expression for (\ref{eq:polar-g}), namely $\frac {\partial \sqrt g H}{\partial g^{\mu\nu}_{lk}}$, is a mixed fourth-order tensor, which enables him to produce $a^l$ by multiplying this tensor with another of the third rank. Emmy Noether would point out in 1916 that the second derivatives of the metric tensor that appear in the definition of $a^l$ cannot be eliminated by means of the field equations; she took this as an indication that Hilbert's energy vector was not analogous to a first integral in classical mechanics.
By an analogous argument using the second operator, Hilbert obtains the vector $c^l$, which satisfies the equation:
$$P_q(\sqrt g H) - \sum_{l} \frac {\partial \sqrt g (c^l)}{\partial w_l} = \sum_{k} [ \sqrt g H]_{k} p_k .$$
Adding these two equations and applying the fundamental field equations (\ref{eq:Lagrange-derivs}), it follows that
$$P(\sqrt g H) = \sum_{l} \frac {\partial \sqrt g (a^l + b^l +c^l)}{\partial w_l}.$$
Hilbert now applies the identity (\ref{eq:polar}) to this equation to obtain
$$P(\sqrt g H) = \sum_{s} \frac {\partial \sqrt g Hp^s}{\partial w_s},$$ which leads immediately to the divergence equation:
$$\frac {\partial }{\partial w_l}\sqrt g (Hp^l-a^l-b^l-c^l)=0.$$ To complete the construction of the energy vector (\ref{eq:Hilbert-vector}), Hilbert defined $d^l$ by making use of the skew symmetric tensor
$\frac {\partial H}{\partial q_{lk}}-\frac{\partial H}{\partial q_{kl}}$. Since this $d^l$ has vanishing divergence, it follows immediately that the vector $e^l$ does as well, which completes his proof of (\ref{eq:Hilbert-energy}).
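The vanishing divergence of $d^l$ rests on a standard fact, sketched here under the assumption that $\sqrt g\, d^l$ is itself the divergence of a skew-symmetric quantity: for any sufficiently smooth array $A^{lk}=-A^{kl}$,
$$\sum_{l,k}\frac{\partial^2 A^{lk}}{\partial w_l\,\partial w_k}=0,$$
since the mixed partial derivatives are symmetric in $l$ and $k$ while $A^{lk}$ is antisymmetric, so each term cancels against its partner with $l$ and $k$ interchanged.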
Hilbert wrote at the outset of this derivation that it yielded a fundamental result for his theory, one that followed from his two axioms alone, though he of course also made use of Theorem II. His readers must have been quite mystified, however, by the fact that he derived another result, Theorem III, before taking up his energy theorem.\footnote{Hilbert used Theorem III to show how electromagnetic energy (\ref{eq:Hilbert-EM-Energie}), expressed in terms of the derivatives of $L$ with respect to the gravitational potentials $g^{\mu\nu}$, leads by virtue of the gravitational equations (\ref{eq:Hilbert-grav-eqns}) to the identities (\ref{eq:Lagrange-identities}).} This theorem plays no role in Hilbert's treatment of energy conservation but, as we shall see below, it forms the starting point for Emmy Noether's analysis of Hilbert's energy vector.
\section{Einstein's Approach to Energy Conservation}
As is well known, Einstein originally introduced gravitational effects into his special theory of relativity (SR) by means of the equivalence principle. Once he accepted Minkowski's approach to SR, he eventually found a way to adapt the equivalence principle to it. In SR, force-free motion in an inertial frame of reference takes place along a straight-line path with constant velocity.
Viewed from a non-inertial frame, on the other hand, this path of motion will be a geodesic curve in a flat spacetime
\begin{equation}\label{eq:geodesic}
\frac{d^2x_{\tau}}{ds^2}= \Gamma^{\tau}_{\mu\nu}\frac{dx_{\mu}}{ds}\frac{dx_{\nu}}{ds},
\end{equation}
since this equation is independent of the coordinate system. Einstein made the plausible assumption that this geodesic motion also holds in the non-flat case, i.e. in a spacetime region for which it is impossible to find a coordinate system that leads to the Minkowski metric in SR.\footnote{A number of investigators, including Hermann Weyl, afterward showed how the geodesic equation for motion could be deduced from the field equations (see \cite{Havas1989}). Einstein, however, was reluctant to follow this lead for reasons discussed in \cite{Kennefick2005} and \cite{Lehmkuhl2017}.} This geometrical assumption served as the starting point for his gravitational theory; afterward it stood as a sturdy bridge that joined the special and general theories of relativity.
Einstein's classic paper \cite{Ein-6} was published as a separate brochure that came out just before Einstein began an interesting
correspondence with Hilbert, which will be discussed below.
In \cite{Ein-6} one encounters a number of arguments leading to different formulations of the gravitational field equations.
From the outset, Einstein posed the unimodular coordinate condition $\sqrt{-g}=1$,\footnote{On Einstein's use of unimodular coordinates, see the discussion in \cite{Janssen/Renn2007}.} which leads to a significant simplification of the field equations (\ref{eq:einst}).
He first considered the matter-free case ($T_{\mu\nu}=0$), in which the field equations reduce to
$R_{\mu\nu}=0$. Here $R_{\mu\nu}$ is the Ricci tensor, which simplifies in unimodular coordinates, so the ten differential field equations can be written
\begin{equation}\label{eq:modular1}
\frac{\partial{\Gamma^{\alpha}_{\mu\nu}}}{\partial{x_{\alpha}}}+\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha} =0,
\end{equation}
where, following Einstein's sign convention, the $\Gamma^{\alpha}_{\mu\nu}$ are the negatives of the Christoffel symbols of the second kind: $\Gamma^\sigma_{\mu\lambda} = -\left \{ {\mu\lambda \atop \sigma
}\right \}$.
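For reference, the Christoffel symbols of the second kind appearing here are built from the metric and its first derivatives:
$$\left \{ {\mu\lambda \atop \sigma}\right \}= \frac{1}{2}\sum_{\tau}g^{\sigma\tau}\left(\frac{\partial g_{\mu\tau}}{\partial x_{\lambda}}+\frac{\partial g_{\lambda\tau}}{\partial x_{\mu}}-\frac{\partial g_{\mu\lambda}}{\partial x_{\tau}}\right).$$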
Einstein derives another form of the equations (\ref{eq:modular1}) by using variational methods. Assuming $\sqrt{-g}=1$, he takes the scalar
\begin{equation}\label{eq:modular-H}
H= \sum_{\alpha\beta\mu\nu}g^{\mu\nu}\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha}
\end{equation}
and
writes $$\delta \left\{\int H d\tau \right\}=0.$$ Carrying out the variation yields field equations in the form:
\begin{equation}\label{eq:modular2}
\frac{\partial}{\partial{x_{\alpha}}} \left\{\frac{\partial H}{\partial{g^{\mu\nu}_{\alpha}}}\right\}
-\frac{\partial H}{\partial{g^{\mu\nu}}}=0.
\end{equation}
After a series of intermediate calculations, he obtains:
\begin{equation}\label{eq:modular3}
\sum_{\alpha}\frac {\partial t^{\alpha}_{\sigma}}{\partial x_{\alpha}}=0;\,\,
-2\chi t^{\alpha}_{\sigma}= \sum_{\mu\nu}\left \{g^{\mu\nu}_{\sigma}\frac{\partial H}{\partial {g^{\mu\nu}_{\alpha}}}\right \}-\delta^{\alpha}_{\sigma}H.
\end{equation}
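The passage from (\ref{eq:modular2}) to (\ref{eq:modular3}) can be sketched as follows. Since $H$ depends on the coordinates only through the $g^{\mu\nu}$ and $g^{\mu\nu}_{\alpha}$,
$$\frac{\partial H}{\partial x_{\sigma}}= \sum_{\mu\nu}\frac{\partial H}{\partial g^{\mu\nu}}\,g^{\mu\nu}_{\sigma}+\sum_{\mu\nu\alpha}\frac{\partial H}{\partial g^{\mu\nu}_{\alpha}}\,g^{\mu\nu}_{\alpha\sigma}.$$
Multiplying (\ref{eq:modular2}) by $g^{\mu\nu}_{\sigma}$, summing over $\mu, \nu$, and eliminating $\partial H/\partial g^{\mu\nu}$ by means of this relation yields
$$\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\left(\sum_{\mu\nu}g^{\mu\nu}_{\sigma}\frac{\partial H}{\partial g^{\mu\nu}_{\alpha}}-\delta^{\alpha}_{\sigma}H\right)=0,$$
which is the divergence law in (\ref{eq:modular3}), the expression in parentheses being $-2\chi t^{\alpha}_{\sigma}$.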
Einstein noted that although
$t^{\alpha}_{\sigma}$ is not a general tensor, the equations (\ref{eq:modular3}) are valid whenever $\sqrt{-g}=1$. He interpreted the
$t^{\alpha}_{\sigma}$ pseudo-tensor as representing the energy components of the gravitational field and (\ref{eq:modular3}) as expressing the equation for conservation of momentum and energy in the vacuum case. For the pseudo-tensor, he derives the equation
\begin{equation}\label{pseudo-tensor}
\chi t^{\alpha}_{\sigma}= \sum_{\mu\nu\beta\lambda}\frac{1}{2}\delta^{\alpha}_{\sigma}g^{\mu\nu}\Gamma^{\lambda}_{\mu\beta}\Gamma^{\beta}_{\nu\lambda}-\sum_{\mu\nu\beta}g^{\mu\nu}\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\sigma}.
\end{equation}
Einstein's generalization of (\ref{eq:modular3}) in the presence of a matter tensor $T^{\sigma}_{\mu}$ takes the form
\begin{equation}\label{energy conservation}
\frac{\partial{(T^{\sigma}_{\mu}+t^{\sigma}_{\mu})}}{\partial{x_{\sigma}}}=0.
\end{equation}
He obtains this by deriving yet another form for the field equations (\ref{eq:modular1}), still assuming the condition
$\sqrt{-g}=1$:
\begin{equation}\label{eq:modular4}
\sum_{\alpha\beta}\frac{\partial}{\partial{x_{\alpha}}} \left (g^{\sigma\beta}\Gamma^{\alpha}_{\mu\beta} \right ) =-\chi
\left (t^{\sigma}_{\mu}-\frac{1}{2}\delta^{\sigma}_{\mu} t \right ),
\end{equation}
where $t=\sum_{\alpha}t^{\alpha}_{\alpha}$.
He then modifies equations (\ref{eq:modular4}) by replacing $t^{\sigma}_{\mu}$ with $t^{\sigma}_{\mu}+T^{\sigma}_{\mu}$ to obtain:
\begin{equation}\label{eq:modular5}
\sum_{\alpha\beta}\frac{\partial}{\partial{x_{\alpha}}} \left (g^{\sigma\beta}\Gamma^{\alpha}_{\mu\beta} \right ) =-\chi
\left [(t^{\sigma}_{\mu}+T^{\sigma}_{\mu})-\frac{1}{2}\delta^{\sigma}_{\mu}(t+T) \right ].
\end{equation}
By means of (\ref{eq:modular5}) and some intermediate calculations, Einstein derives the differential form for conservation of momentum and energy (\ref{energy conservation}). These results from \cite{Ein-6} clearly differ sharply from the findings in \cite{Hilbert1915} discussed above. Nevertheless, Emmy Noether was able to show that Hilbert's $e^l$ and Einstein's $t^{\alpha}_{\sigma}$ both possessed a common property, one which seemed to reflect the fact that the energy laws in general relativity differ from those in classical mechanics or special relativity.
Soon after he published \cite{Ein-6} in May 1916, Einstein made a conscientious attempt to understand how Hilbert developed his far more complicated approach to energy-momentum conservation published in \cite{Hilbert1915}.\footnote{The complications were in part due to the fact that Hilbert decided to alter his definition of energy in the page proofs of his original submission from 20 November 1915. Thus the version in \cite{Hilbert1915} actually reflects an important shift in Hilbert's understanding of this aspect of his theory. For details, see Tilman Sauer's commentary in
\cite{Sauer/Majer2009}, pp. 11--13.}
Einstein struggled to understand the arguments in Hilbert's first note as he prepared to speak about it in Heinrich Rubens's colloquium. Twice he turned to Hilbert for clarifications, writing: ``I admire your method, as far as I have understood it. But at certain points I cannot progress and therefore ask that you assist me with brief instructions''
(Einstein to Hilbert, 25 May 1916, \cite[289]{Einstein1998a}). He was particularly baffled by Hilbert's energy theorem,
admitting that he could not comprehend it at all -- not
even what it asserted.\footnote{Hilbert not only claimed that the energy vector $e^l$ depended solely on the metric tensor and its derivatives,
but also showed that by passing to a flat metric its electromagnetic part turned out to be closely related to a formulation for energy derived from Mie's theory. Einstein was puzzled by this derivation, since the argument seemed to show that not only the divergence of the energy term but this term itself would have to vanish.}
Hilbert wrote back just two days later. He easily explained how, via the operation of polarization, an invariant $J$ will lead to a new invariant $P(J)$, its first polar. He then went on to say:
\begin{quote}
My energy law is probably related to yours; I have already assigned this question to Miss Noether. As concerns your objection, however, you must consider that in the boundary case $g^{\mu\nu} = 0,\, 1$ the vectors $a^l,\, b^l$ by no means vanish, as $K$ is linear in the $g^{\mu\nu}_{\sigma\kappa}$ terms and is differentiated with respect to these quantities. For brevity I give you the enclosed paper from Miss Noether.
\end{quote}
Hilbert's conjecture regarding the relationship between his and Einstein's versions of energy conservation was surely
no more than a first guess. Even on the purely formal level, he could hardly assert that his energy vector $e^l$ stood in some obvious relation to Einstein's pseudo-tensor $t^{\alpha}_{\sigma}$.
Einstein was well aware that Noether was working closely with Hilbert and that the latter had been trying to break the resistance in the faculty to her appointment as a {\it Privatdozent}. Despite strong support from the members of the natural sciences division, however, all such efforts proved futile during wartime. Only after the fall of the German Reich and the advent of the Weimar Republic did these efforts succeed (see \cite{Tollmien1990}). Einstein responded to Hilbert's letter shortly afterward:
\begin{quote}
Your explanation of equation [(\ref{eq:polar})] in your paper delighted me. Why do you make it so hard for poor mortals by withholding the technique behind your ideas? It surely does not suffice for the thoughtful reader if, although able to verify the correctness of the equations, he cannot have a clear view of the overall plan of the analysis.
\end{quote}
Einstein was far more blunt about this in a letter he wrote to Paul Ehrenfest on May 24:
``Hilbert's description doesn't appeal to me. It is unnecessarily specialized regarding `matter,' is unnecessarily complicated, and not straightforward (= Gauss-like) in set-up (feigning the super-human through camouflaging the methods)'' \cite[288]{Einstein1998a}.
After receiving Hilbert's explanations, he may have felt somewhat more conciliatory. Certainly he made every effort to understand Hilbert's arguments, and could report: ``In your paper everything is understandable to me now except for the energy theorem. Please do not be angry with me that I ask you about this again'' \cite[293]{Einstein1998a}.
After explaining the difficulty he still had, Einstein ended by writing that it
would suffice if Hilbert asked Emmy Noether to clarify the point that was troubling him. This turned out to be a quite trivial matter, so Hilbert answered Einstein directly. The latter then responded with thanks, adding that ``now your entire fine analysis is clear to me, also with respect to the heuristics. Our results are in complete agreement'' \cite[295]{Einstein1998a}.
What Einstein meant by this seems quite obscure. Perhaps he only meant to assure Hilbert that he would no longer be pestering him about these matters. One must assume that Hilbert had just as little interest in entering these waters further, for how else to account for the fact that he failed to publicize Emmy Noether's findings, which clearly stemmed from this correspondence with Einstein? Not until Felix Klein began to take an interest in the status of conservation theorems in GR more than a year later did Noether's name receive any attention in this connection.
\section{On Noether's Unpublished Manuscript from 1916}
Noether's original manuscript no longer survives, but fortunately R. J. Humm made
a partial transcription, probably in early 1918. He also recorded the original pagination, which indicates that his transcription begins with page 15 of her text. Since a number of steps in Noether's arguments are based on equations from the first 14 pages, any attempt to reconstruct how she obtained these results would necessarily be conjectural. Here I will simply take such claims as established facts; I will follow the same procedure when Noether draws on results in \cite{Hilbert1915} and \cite{Ein-6}.
By so doing, the general train of her arguments is not difficult to follow. They show that Hilbert's energy vector as well as Einstein's pseudotensor representing gravitational energy can both be decomposed into two parts, one of which will have vanishing divergence, whereas the other vanishes as a result of the field equations.
Her analysis draws closely on Hilbert's own techniques in
\cite{Hilbert1915}, which she then applies in order to analyze Einstein's construct in \cite{Ein-6}.
Noether's analysis of Hilbert's energy vector exploited the fact that his ``world
function'' takes the form $H=K+L$ and that $K$ is defined solely by the metric tensor and its first and second derivatives.
In her manuscript,
she employs notation that deviates only slightly from that found in the two papers she discusses.
For \cite{Hilbert1915} she begins her discussion of Hilbert's energy vector (\ref{eq:Hilbert-vector}) by looking at the vacuum case, $H=K$, where the last two terms $c^l=d^l=0$, since these only enter through the electromagnetic potentials. She proceeds then to produce a decomposition of Hilbert's expression into a sum of two vectors, one of which vanishes by virtue of the field equations, whereas the divergence of the other vanishes identically, i.e., independent of the field equations.
Hilbert writes $p_s^i$ for $\frac {\partial p^i}{\partial w_s}$, and
for the Lie variation:
\begin{equation}\label{eq:Lie-var}
\delta g^{\mu\nu} \equiv p^{\mu\nu}= \sum_s (g^{\mu\nu}_s p^s - g^{\mu s}p^{\nu}_s -g^{\nu s}p^{\mu}_s).
\end{equation}
Noether follows Hilbert's Theorem III, writing:
\begin{equation}\label{eq:i_s}
i_s= \sum_{\mu\nu} [ \sqrt g K]_{\mu\nu} g^{\mu\nu}_s
\end{equation}
\begin{equation}\label{eq:i_s^l}
i_s^l= -2 \sum_{\mu} [ \sqrt g K]_{\mu s} g^{\mu l} ,
\end{equation}
and then noting that
\begin{equation}\label{eq:Hilbert-ident-1}
\frac{1}{\sqrt g }
\sum_{\mu\nu} [ \sqrt g K]_{\mu\nu} p^{\mu\nu} =\frac{1}{\sqrt g }
\sum_{s,l} \left(i_s \,p^s+ i_s^l\, p^s_l\right).
\end{equation}
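The identity (\ref{eq:Hilbert-ident-1}) results from substituting the Lie variation (\ref{eq:Lie-var}) into its left-hand side; multiplying through by $\sqrt g$ and using the symmetry of $[\sqrt g K]_{\mu\nu}$ in $\mu, \nu$, one finds
$$\sum_{\mu\nu}[\sqrt g K]_{\mu\nu}\,p^{\mu\nu}= \sum_{\mu\nu s}[\sqrt g K]_{\mu\nu}\,g^{\mu\nu}_s\,p^s - 2\sum_{\mu s l}[\sqrt g K]_{\mu s}\,g^{\mu l}\,p^s_l = \sum_{s} i_s\,p^s + \sum_{s,l} i_s^l\,p^s_l,$$
where the second equality follows from the definitions (\ref{eq:i_s}) and (\ref{eq:i_s^l}) after relabeling the summation indices.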
Hilbert's Theorem III asserts that $i_s = \sum_{l}\frac{\partial i_s^l}{\partial w_l },$\footnote{\cite[895]{Renn/Stachel2007} note that Theorem III ``corresponds to the contracted Bianchi identities,'' an insight that Hilbert and his contemporaries failed to notice, although the Bianchi identities had been discovered decades earlier. For the story of their recovery in the context of general relativity, see \cite[263--272]{Rowe2018}.} which means that $i_s$ can be written as a divergence or expressed in the form of the identity:
\begin{equation}\label{eq:Hilbert-ident-2}
\sum_{\mu\nu} [ \sqrt g K]_{\mu\nu} g^{\mu\nu}_s + 2\sum_{\mu, l}\frac{ \partial ([ \sqrt g K]_{\mu s} g^{\mu l})}{\partial w_l} =0.
\end{equation}
Noether exploits this in showing that the left side of (\ref{eq:Hilbert-ident-1}) can be written as a divergence. To do this she
introduces the vector
\begin{equation}\label{eq:Noether-vector}
i^l= \sum_{s}\frac{i_s^l}{\sqrt g }p^s,
\end{equation}
in order to rewrite equation (\ref{eq:Hilbert-ident-1}) as
\begin{equation}\label{eq:Noether-ident-1}
\frac{1}{\sqrt g }
\sum_{\mu\nu} [ \sqrt g K]_{\mu\nu} p^{\mu\nu} = \frac{1}{\sqrt g }
\sum_{sl}( \frac{\partial{i_s^l}}{\partial{w_l}} \,p^s+ i_s^l\, p^s_l)= Div(\sum_s\frac{i_s^l}{\sqrt g} \,p^s)=
Div (i^l).
\end{equation}
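Here $Div$ presumably denotes the invariant divergence $Div(i^l)=\frac{1}{\sqrt g}\sum_l \frac{\partial (\sqrt g\, i^l)}{\partial w_l}$. Since $\sqrt g\, i^l = \sum_s i_s^l\, p^s$ by (\ref{eq:Noether-vector}), this gives
$$Div(i^l)= \frac{1}{\sqrt g}\sum_{s,l}\left(\frac{\partial i_s^l}{\partial w_l}\,p^s + i_s^l\,p^s_l\right),$$
which is exactly the middle expression in (\ref{eq:Noether-ident-1}).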
Drawing on previous calculations, she asserts that for
$e^l= K\, p^l-a^l-b^l$
\begin{equation}\label{eq:Div(e^l)}
\frac{1}{\sqrt g }
\sum_{\mu\nu} [\sqrt g K]_{\mu\nu} p^{\mu\nu} = Div(e^l).
\end{equation}
It follows from equations (\ref{eq:Noether-ident-1}) and (\ref{eq:Div(e^l)})
that $Div (e^l)= Div(i^l)$ and furthermore that
$Div (e^l-i^l)=0$ holds identically. By virtue of the fundamental equations $[\sqrt g K]_{\mu\nu}=0$, it follows that $Div (i^l)=0$ for arbitrary $p^s$, whereas (\ref{eq:i_s^l}) shows
that
$i_s^l$ also vanishes, which means that by definition (\ref{eq:Noether-vector}) $i^l=0$.
From this, Noether concludes that in the vacuum case one can always decompose Hilbert's energy vector as:
\begin{equation}\label{eq:Noether-ident-2}
e^l= i^l+(e^l-i^l),
\end{equation}
where the first part vanishes as a consequence of the fundamental equations $[\sqrt g H]_{\mu\nu} =0$, whereas the divergence of the second part vanishes identically. She then summarizes the physical significance of this result as follows:
``The energy is probably {\it not} to be regarded as a first integral (as in classical mechanics) because it contains the second derivatives of the $g^{\mu\nu}$, and these cannot be eliminated from the $a^l$ by means of the fundamental equations.''\footnote{Hilbert introduced $a^l$ in a purely formal manner; see the discussion of equation (\ref{eq:polar-g}) above.}
From here, Noether makes use of the identity $Div (e^l-i^l)=0$ to count the number of equations that the components of $e^l$ need to satisfy, arriving at 120 such conditions. She then shows that the identical argument goes through for the general Lagrangian $H$, so that Hilbert's energy vector can always be decomposed as above.
Noether next takes up a similar analysis of Einstein's version of the energy laws in general relativity, published in \cite{Ein-6}, arriving at very similar results. She begins by noting how Einstein bases his theory on the demand that the equations of motion be given by (\ref{eq:geodesic}). Noether then
rewrites Einstein's matter-free field equations (\ref{eq:modular2}) with only two small notational differences: her spacetime coordinates appear as $w_{\alpha}$ instead of $x_{\alpha}$, and she suppresses the coefficient $-2\chi$ in the second equation, which Einstein introduced for physical reasons. Likewise, she writes Einstein's law for conservation of momentum and energy (\ref{energy conservation}) in the form
$$\sum_l\frac{\partial({t_s^l+T_s^l})}{\partial{w_l}}=0.$$
Drawing on Hilbert's notation, and noting that for $\sqrt{-g}=1, \,\, H=K$, she writes for the Lagrangian derivative in (\ref{eq:modular2}):
\begin{equation}\label{eq:EN-modular2}
-[\sqrt g H]_{\mu\nu}= \sum_{\alpha}\frac{\partial}{\partial{w_{\alpha}}}\left\{\frac{\partial H}{\partial{g^{\mu\nu}_{\alpha}}}\right\}
-\frac{\partial H}{\partial{g^{\mu\nu}}}.
\end{equation}
Noether now connects (\ref{eq:EN-modular2}) with Einstein's pseudotensor for gravitational energy $t^{\alpha}_{\sigma}$
in (\ref{eq:modular3}).
Multiplying (\ref{eq:EN-modular2}) by $g^{\mu\nu}_{\sigma}$ and summing over the indices $\mu, \nu$ yields:
$$-\sum_{\mu\nu} g^{\mu\nu}_{\sigma}[\sqrt g H]_{\mu\nu}= \sum_{\alpha}\frac{\partial}{\partial{w_{\alpha}}}
\biggl (\sum_{\mu\nu}g^{\mu\nu}_{\sigma}\frac{\partial H}{\partial{g^{\mu\nu}_{\alpha}}}\biggr )
-\frac{\partial H}{\partial{w_{\sigma}}},$$
and thus
\begin{equation}\label{eq:EN-modular3}
-\sum_{\mu\nu} g^{\mu\nu}_{\sigma}[\sqrt g H]_{\mu\nu}=
\sum_{\alpha}\frac {\partial t^{\alpha}_{\sigma}}{\partial w_{\alpha}}
\end{equation}
in view of (\ref{eq:modular3}).
Using Hilbert's Theorem III and the identity (\ref{eq:Hilbert-ident-1}), Noether next obtains:
\begin{equation}\label{eq:Noether-ident-3}
\sum_{\mu\nu} [ \sqrt g H]_{\mu\nu} p^{\mu\nu} =
\sum_{sl} \frac{\partial{i_s^l}}{\partial{w_l}} \,p^s+ \sum_{sl}i_s^l\, p^s_l,
\end{equation}
and in place of (\ref{eq:i_s^l}) she writes:
\begin{equation}\label{eq:Noether-ident-4}
-2 \sum_{\mu} [ \sqrt g H]_{\mu s} g^{\mu l} = t^l_s + r^l_s.
\end{equation}
Her claim is that $i_s^l = t^l_s + r^l_s$ and that $Div(i_s^l)=Div(t^l_s)$, so that $Div(r^l_s)\equiv 0$.
She proves this by multiplying (\ref{eq:EN-modular3}) by $p^s$ and (\ref{eq:Noether-ident-4}) by $p^s_l$, and then adding these two equations to get:
\begin{equation}\label{eq:Noether-ident-5}
\frac{1}{\sqrt g }\sum_{\mu\nu} [ \sqrt g H]_{\mu\nu} p^{\mu\nu} = \sum_{sl}\frac{\partial{t^l_s}}{\partial{w_l}}p^s+ \sum_{sl} (t^l_s + r^l_s)p^s_l.
\end{equation}
Comparing coefficients in (\ref{eq:Noether-ident-3}) and (\ref{eq:Noether-ident-5}), Noether deduces the equations:
\begin{equation}\label{eq:Noether-ident-6}
\sum_{l} \frac{\partial{i_s^l}}{\partial{w_l}}=\sum_{l} \frac{\partial{t_s^l}}{\partial{w_l}};\, i^l_s= t^l_s + r^l_s,
\end{equation}
from which it follows that $$\sum_{l} \frac{\partial{r_s^l}}{\partial{w_l}}= Div(r_s^l) \equiv 0,$$ under the assumption that $\sqrt{-g}=1$ holds.
Summarizing, she concludes that the Einsteinian gravitational pseudo-tensor $t_s^l$ also decomposes into two parts,
$t^l_s = i_s^l - r^l_s$, where by (\ref{eq:i_s^l}) $i_s^l$
vanishes as a consequence of the field equations, whereas the divergence of $r^l_s$ vanishes identically, i.e., independent of the field equations.
Noether actually shows that $i^l_s= t^l_s + r^l_s$, the second equation in (\ref{eq:Noether-ident-6}), is equivalent to Einstein's field equations written in the form (\ref{eq:modular4}). Finally, she briefly notes that the same considerations hold in the presence of matter, just as in the case of Hilbert's theory.
Humm's copy of Noether's manuscript contains no date, so we can only fix bounds for the period during which she must have written it. In his correspondence with Einstein from late May and early June of 1916, Hilbert alluded to Noether's
efforts to reconcile their approaches to energy laws in general relativity. Much later, in January 1918, Hilbert and Klein both made reference to the results she had obtained more than one year earlier, so probably by December 1916 at the latest. Her text, on the other hand, contains no mention of \cite{Ein-5}, which surely circulated in G\"ottingen soon after its publication in early November 1916. Had she known of this text at the time, Noether would have most likely referred to the arguments Einstein set forth therein. These circumstances suggest that she probably completed her manuscript between June and October of 1916. After this date, Einstein published several times on energy conservation,\footnote{In addition to \cite{Ein-5}, see \cite{Ein-9} and \cite{Ein-8}.} which proved to be one of the most hotly debated issues in his theory of gravitation. For the G\"ottingen reception of general relativity, however, the most important of these notes was \cite{Ein-5}, to which we now turn.
\section{Einstein and Weyl respond to Hilbert}
Einstein recognized the importance of deriving his field equations for general relativity from an
appropriate variational principle, but he strongly opposed Hilbert's effort to link the new
theory of gravitation with Mie's electromagnetic theory of matter. He originally thought about addressing this issue in
\cite{Ein-6}, which quickly came to be regarded as a canonical text for the theory \cite{Gutfreund/Renn2015}. Among Einstein's posthumous papers, one finds an unpublished appendix written for
\cite{Ein-6}, in which Einstein adopts Hilbert's variational methods, but with a general matter tensor rather than Hilbert's $L$. In a footnote, he criticizes Hilbert for adopting Mie's matter function, which was based, of course, on the electrodynamic variables alone \cite[346]{Ein-11}. Quite possibly, Einstein withdrew this part of the text so as to avoid any potential polemics. He may have also considered this issue too important to merely appear in an appendix, and so he decided instead to publish a separate note on this topic. In
late October 1916 he submitted \cite{Ein-5} for publication in the {\it Sitzungsberichte der
Preu\ss ischen Akademie}. This provides a much fuller account of methods for
deriving the fundamental equations of general relativity using variational principles. In the introduction, he wrote:
\begin{quote}
The general theory of relativity has recently been given in a
particularly clear form by H.A. Lorentz and D. Hilbert, who have
deduced its equations from one single principle of variation. The
same thing will be done in the present paper. But my purpose here is
to present the fundamental connections in as perspicuous a manner as
possible, and in as general terms as is permissible from the point of
view of the general theory of relativity. In particular we shall make as
few specializing assumptions as possible, in marked contrast to Hilbert's
treatment of
the subject (\cite[165]{Ein-5}).
\end{quote}
Einstein thus employed Lagrangian equations of the type Hilbert derived earlier, but he began with only some
general assumptions about
such functions ${\frak H}$ of the field variables $g^{\mu\nu}, \, q_{\rho}$ and their derivatives. He first noted that
the second derivatives $g^{\mu\nu}_{\sigma\tau}$
in ${\frak H}$ could be removed by partial integration, leading to a new Lagrangian ${\frak H}^*$ which satisfies
$$\int {\frak H} d\tau = \int {\frak H}^* d\tau + F,$$ where
$F$ is a surface term that can be neglected when the integral is suitably varied.
In this way, Einstein was able to substitute ${\frak H}^*$ for ${\frak H}$ in his variational principle
\begin{equation}\label{eq:var-princ}
\delta \biggl \{\int {\frak H}\,d\tau \biggr \} = \delta \biggl \{\int {\frak H}^*\,d\tau \biggr \}=0.
\end{equation}
This leads to the Lagrangian equations:
\begin{equation}\label{eq:gen.Lagr.eqns1}
\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\biggl (\frac{\partial {\frak H}^*}{\partial
g^{\mu\nu}_{\alpha}}\biggr ) - \frac{\partial {\frak H}^*}{\partial g^{\mu\nu}} = 0.
\end{equation}
\begin{equation}\label{eq:gen.Lagr.eqns2}
\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\biggl (\frac{\partial {\frak H}^*}{\partial
q_{\rho\alpha}}\biggr ) - \frac{\partial {\frak H}^*}{\partial q_{\rho}} = 0.
\end{equation}
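Einstein's replacement of ${\frak H}$ by ${\frak H}^*$ rests on the familiar fact that Lagrangians differing by a total derivative yield identical Euler--Lagrange equations. This can be illustrated with a minimal one-dimensional sympy sketch (the toy Lagrangian $q\ddot q$ is purely illustrative, not Einstein's ${\frak H}$): since $q\ddot q = \frac{d}{dt}(q\dot q) - \dot q^2$, dropping the boundary term leaves a first-order Lagrangian with the same equation of motion.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
q = sp.Function('q')(t)

# Toy Lagrangian containing a second derivative linearly (stand-in for H)
L = q * sp.diff(q, t, 2)
# After partial integration, q*q'' = d/dt(q*q') - q'^2; dropping the
# boundary term d/dt(q*q') leaves the first-order stand-in for H*
L_star = -sp.diff(q, t)**2

eq_L = euler_equations(L, q, t)[0]           # Euler-Lagrange equation from L
eq_Lstar = euler_equations(L_star, q, t)[0]  # ... and from L*

# The surface term does not affect the field equations
assert sp.simplify((eq_L.lhs - eq_L.rhs) - (eq_Lstar.lhs - eq_Lstar.rhs)) == 0
```

Both Lagrangians lead to $2\ddot q = 0$, just as $\int {\frak H}\, d\tau$ and $\int {\frak H}^*\, d\tau$ deliver the same field equations once the surface term $F$ is discarded.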
Einstein next
assumed that $\frak H$ can be written as $\frak H = \frak G + \frak M$ in order to treat the
gravitational field as separate from matter. Furthermore, he assumed that $\frak M$ was a
function of the four electrodynamic variables $q_{\rho}$, their derivatives $q_{\rho\alpha}$ and $g^{\mu\nu}$. His $\frak G$ took the form
${\frak G}(g^{\mu\nu}, g^{\mu\nu}_{\sigma}, g^{\mu\nu}_{\sigma\tau})$, where the
coefficients of the
$g^{\mu\nu}_{\sigma\tau}$ were linear in the $g^{\mu\nu}$.
By introducing a function
$\frak G^*$ analogous to $\frak H^*$, Einstein was able to
deduce general gravitational field
equations of the form
\begin{equation}\label{eq:gen.fd.eqns1}
\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\biggl (\frac{\partial {\frak G}^*}{\partial
g^{\mu\nu}_{\alpha}}\biggr ) - \frac{\partial {\frak G}^*}{\partial g^{\mu\nu}} = \frac{\partial {\frak M}}{\partial g^{\mu\nu}}.
\end{equation}
\begin{equation}\label{eq:gen.fd.eqns2}
\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\biggl (\frac{\partial {\frak M}}{\partial
q_{\rho\alpha}}\biggr ) - \frac{\partial {\frak M}}{\partial q_{\rho}} = 0.
\end{equation}
Einstein next proceeded to specify further assumptions of his theory. This required that
$$ds^2 = \sum_{\mu,\nu} g_{\mu\nu}dx_{\mu}dx_{\nu},\quad H= \frac{{\frak H}}{\sqrt {-g}}, \quad
G= \frac{{\frak G}}{\sqrt {-g}}, \quad M= \frac{{\frak M}}{\sqrt {-g}}$$ all be invariants
under general coordinate transformations. This placed only limited restrictions on the
matter fields, but $G$, up to a constant factor, had to be the Riemann curvature scalar, which
entails that ${\frak G}^*$ must also be uniquely determined.\footnote{Einstein never cited a mathematical source for this and other related claims, though he was well aware that his whole theory depended on this uniqueness property (see \cite[167, footnote 1]{Ein-5}). Quite possibly this was part of ``folklore'' knowledge among experts on the Ricci calculus, in which case Einstein might have picked this up from Marcel Grossmann. In 1920 Hermann Weyl published a proof in an appendix to the fourth edition of \cite{Weyl-4} (see \cite{Weyl-5}), noting that the first proof was given by Hermann Vermeil in 1917; see also \cite[43]{Pau-2}.}
In a footnote, Einstein gave an explicit formula for $\frak G^*$:
\begin{equation}\label{eq:G^*}
{\frak G^*} = \sqrt{-g}\sum_{\alpha\beta\mu\nu}g^{\mu\nu}(\Gamma^{\alpha}_{\mu\beta}\Gamma^{\beta}_{\nu\alpha}-\Gamma^{\alpha}_{\mu\nu}\Gamma^{\beta}_{\alpha\beta}).
\end{equation}
This generalizes the Lagrangian (\ref{eq:modular-H}) that Einstein used in
\cite{Ein-6}. In his letter to Weyl, cited above, Einstein repudiated (\ref{eq:modular-H}), noting that $\frak G^*$ is the required gravitational Lagrangian for generally covariant field equations, as Hilbert had shown.
Einstein then went on to carry out the variation $\int {\frak G}^*\,d\tau$, followed by the usual partial integrations, from which he
deduced
four identities ($\sigma = 1,2,3,4$):
\begin{equation}\label{eq:4identities}
\sum_{\nu\alpha}\frac{\partial^2}{\partial x_{\nu}\partial x_{\alpha}}\biggl (\sum_{\mu}g^{\mu\nu}\frac{\partial {\frak G}^*}{\partial
g^{\mu\sigma}_{\alpha}}\biggr ) \equiv 0.
\end{equation}
From the general field equations (\ref{eq:gen.fd.eqns1}) he then derived
\begin{equation}\label{eq:AE-energy1}
\sum_{\alpha}\frac{\partial}{\partial x_{\alpha}}\biggl (\sum_{\mu}g^{\mu\nu}\frac{\partial {\frak G}^*}{\partial
g^{\mu\sigma}_{\alpha}}\biggr ) = - ({\frak T}_{\sigma}^{\nu} + {\frak t}_{\sigma}^{\nu}),
\end{equation}
where the terms on the right side of the equations denote
$${\frak T}_{\sigma}^{\nu} = - \sum_{\mu}\frac{\partial {\frak M}}{\partial g^{\mu\sigma}}g^{\mu\nu}; \,\,
{\frak t}_{\sigma}^{\nu} = \frac{1}{2}\biggl ({\frak G}^*\delta_{\sigma}^{\nu}
-\sum_{\mu\alpha} \frac{\partial {\frak G}^*}{\partial g^{\mu\alpha}_{\nu}}g^{\mu\alpha}_{\sigma}\biggr ).$$
From (\ref{eq:AE-energy1}) and the identities (\ref{eq:4identities}), Einstein could now deduce his version of the conservation laws:
\begin{equation}\label{eq:AE-energy2}
\sum_{\nu}\frac{\partial}{\partial x_{\nu}}({\frak T}_{\sigma}^{\nu} + {\frak t}_{\sigma}^{\nu}) =0.
\end{equation}
As before, he designated the ${\frak T}_{\sigma}^{\nu}$ as the
energy components of matter, whereas
the ${\frak t}_{\sigma}^{\nu}$ he regarded as
the components of the gravitational energy.
In closing, he derived the four equations for the energy components of matter
\begin{equation}\label{eq:AE-energy3}
\sum_{\nu}\frac{\partial{{\frak T}_{\sigma}^{\nu}}}{\partial x_{\nu}}+ \frac{1}{2}\sum_{\mu\nu}g^{\mu\nu}_{\sigma}{\frak T}_{\mu\nu} =0.
\end{equation}
Einstein emphasized that in deriving the conservation laws (\ref{eq:AE-energy2}) and
(\ref{eq:AE-energy3}) he needed only the gravitational field equations (\ref{eq:gen.fd.eqns1}) but not the
field equations for matter
(\ref{eq:gen.fd.eqns2}).
Readers familiar with \cite{Hilbert1915} surely recognized
Einstein's desire to place
his variational approach to the fundamental equations of his gravitational theory
in the sharpest possible contrast with Hilbert's. He had struggled during the late spring of 1916 to understand how Hilbert constructed his invariant energy vector, but openly admitted that its physical significance eluded him entirely.
Apparently he felt no differently one year later when he spoke about it with Rudolf Humm. He wondered how energy could be a vector, but also what sense it made when its very definition was multi-valued, since it contained an arbitrary vector
\cite[70]{Rowe2019}.
Einstein also had deep misgivings about Hilbert's methodological approach.
Writing to Hermann Weyl shortly after \cite{Ein-5} was published, he confessed:
\begin{quote}
To me Hilbert's {\it Ansatz} about matter appears to be childish, just like an infant who is unaware
of the pitfalls of the real world\dots . In any case, one cannot accept the mixture of well-founded
considerations arising from the postulate of general relativity and unfounded, risky hypotheses about the structure of the electron\dots . I am the first to admit that the discovery of the proper hypothesis, or the Hamilton function, of the structure of the electron is one of the most important tasks of the current theory. The ``axiomatic method'', however, can be of little use in
this. (Einstein to Weyl, 23 November 1916, \cite[366]{Ein-11}.)
\end{quote}
Einstein's letter was written in response to a draft of \cite{Weyl-1}, which employed variational methods to deduce conservation laws
in general relativity.
Weyl shared Einstein's criticism of Hilbert's theory, especially its reliance on Mie's theory and the assumption of the special matter tensor
$T_{\mu\nu} = \frac{\partial L}{\partial g^{\mu\nu}}.$ He thus
emphasized the
provisional nature of all efforts to base gravitational theory on variational principles owing to lack of knowledge about elementary particles. ``Under these
circumstances,'' he wrote, ``it appears to me important to
formulate {\it a Hamiltonian principle that carries as far as our present
knowledge of
matter reaches} \dots '' (\cite[118]{Weyl-1}).
Weyl's theory coupled a general matter function to the gravitational and electromagnetic fields. The field effects are then measured by the action integrals: $$\int H\, d\omega, \,\, \int L\, d\omega,$$
where $H$ is given by (\ref{eq:G^*}) and $$L= \frac{1}{2}F_{ik}F^{ik} = \frac{1}{2}g^{ij}g^{kh}F_{ik}F_{jh},
\quad F_{ik}= \frac{\partial \phi_k}{\partial x_i}- \frac{\partial \phi_i}
{\partial x_k}.$$ Alongside these field actions, Weyl introduces analogous substance actions given by integrals based on density functions for matter $dm$ and electricity $de$: $$\int \biggl \{dm \int \sqrt{g_{ik} \,dx_idx_k} \biggr \}, \,\,
\int \biggl \{de \int \phi_i \, dx_i \biggr \}.$$
All of these ingredients enter into Weyl's ``world function'' $F$ defined on a given region $\Omega$, for
which he postulates that $F$ is an extremum under variations of the field variables that vanish on the boundary of $\Omega$, together with infinitesimal spacetime displacements of the substance elements.
From this postulate, he immediately derives corresponding results for gravitation, electromagnetism, and mechanics. Thus, by varying the $g^{ij}$ while holding the $\phi_i$ and the worldlines of substance fixed, one gets Einstein's gravitational equations (\ref{eq:einst}). Varying the $\phi_i$ yields the Maxwell-Lorentz equations
$$\frac{1}{\sqrt g}\frac{\partial(\sqrt g F^{ik})}{\partial x_k} = J^i=\epsilon \frac{dx_i}{ds}.$$ Finally, varying the worldlines of the substance elements leads to the equations of motion for mass points when acted on by electromagnetic forces
\begin{equation}\label{eq:geodesic-forces}
\rho \biggl (\frac{d^2x_i}{ds^2} - \Gamma^i_{hk}\frac{dx_h}{ds}\frac{dx_k}{ds} \biggr ) =p^i.
\end{equation}
Here the $p^i$ are the contravariant components of the force corresponding to the covariant $$p_i= \sum_k F_{ik}J^k.$$ Weyl remarks further that (\ref{eq:geodesic-forces}) can be shown to follow directly from the other two systems of field equations. He regarded these findings as purely phenomenological deductions analogous to those of classical Hamiltonian mechanics.
This approach thus stressed flexibility, and
the following year he
elaborated on some of these ideas in the first edition of {\it Raum--Zeit--Materie} \cite{Weyl-4}.
In section 2 of \cite{Weyl-1}, he introduces a general action integral defined on a region of spacetime $\Omega$ for which $$\int_{\Omega} (H-M)\, d\omega$$ is an extremum. Here the matter-density action $M$ is closely related to Einstein's energy-momentum tensor $T_{ik}$; the latter is defined, however, in connection with the total derivative of the former. Weyl's objective is to deduce Einstein's
energy-momentum equations for matter (\ref{eq:AE-energy3}) by an appropriate variation applied to ${\frak M}= M \sqrt g$.
He was apparently the first author to emphasize that the conservation of energy-momentum in general relativity should be deduced from a variational principle under which the variation of the field quantities is induced by coordinate transformations. In Weyl's language, the field variables are ``mitgenommen'' by means of infinitesimal coordinate transformations \cite[117]{Weyl-1}. One year later, Klein and also Noether alluded to earlier work of Sophus Lie, who introduced this method in his new group-theoretic approach to differential equations (see \cite{Haw}). Within the context of the calculus of variations, this technique came to be known as Lie variation. As was pointed out by Janssen and Renn, Einstein only gradually came to appreciate the importance of this mathematical technique for field physics (\cite[863]{Janssen/Renn2007}).
Adopting Weyl's notation, one considers transformations
$$ x_i \rightarrow x_i+\epsilon\xi_i(x_1,x_2,x_3,x_4)$$ for
an infinitesimal $\epsilon$ and functions $\xi_i$, which along with
their derivatives vanish on the boundary of the region of integration, and then calculates
$\delta g^{ik}$, the induced variation of the field quantities:
$$\delta g^{ik} = \epsilon \biggl ( \sum_{\alpha} g^{\alpha k}\frac{\partial\xi_i}{\partial x_{\alpha}} + \sum_{\beta} g^{i\beta} \frac{\partial\xi_k}{\partial x_{\beta}}\biggr ).$$
Weyl then distinguished this $\delta$-variation from a second $\Delta$-variation given by
\begin{equation}\label{eq:Delta-var}
\Delta g^{ik} = \delta g^{ik} -\epsilon
\sum_{\alpha}\frac{\partial g^{ik}}{\partial x_{\alpha}}\xi_{\alpha}.
\end{equation}
Under a $\Delta$-variation, the domain of
definition $\Xi$ for the coordinates $(x_1,x_2,x_3,x_4)$ corresponding to the
region $\Omega$ remains identical, leading to what Weyl calls a
virtual displacement.
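Weyl's $\delta$-variation can be checked directly against the tensor transformation law: to first order in $\epsilon$, the ``dragged-along'' change of the contravariant components $g^{ik}$ under $x_i \rightarrow x_i + \epsilon\xi_i$ is exactly the formula above. A two-dimensional sympy sketch, with generic component functions (all names here are illustrative placeholders, not Weyl's notation):

```python
import sympy as sp

eps = sp.symbols('epsilon')
X = list(sp.symbols('x0 x1'))
n = 2

# Generic contravariant components g^{ik} and displacement field xi_i
g = [[sp.Function(f'g{i}{k}')(*X) for k in range(n)] for i in range(n)]
xi = [sp.Function(f'xi{i}')(*X) for i in range(n)]

def first_order_mismatch(i, k):
    # Transformation law g'^{ik}(x') = sum_ab g^{ab} dx'_i/dx_a dx'_k/dx_b
    # under x'_i = x_i + eps*xi_i
    transformed = sum(g[a][b]
                      * (sp.KroneckerDelta(i, a) + eps*sp.diff(xi[i], X[a]))
                      * (sp.KroneckerDelta(k, b) + eps*sp.diff(xi[k], X[b]))
                      for a in range(n) for b in range(n))
    # Weyl's delta-variation formula for the induced change
    delta = eps*(sum(g[a][k]*sp.diff(xi[i], X[a]) for a in range(n))
                 + sum(g[i][b]*sp.diff(xi[k], X[b]) for b in range(n)))
    # Coefficient of eps in the difference should vanish
    return sp.expand(transformed - g[i][k] - delta).coeff(eps, 1)

assert all(first_order_mismatch(i, k) == 0
           for i in range(n) for k in range(n))
```

Subtracting the transport term $\epsilon \sum_{\alpha}\xi_{\alpha}\,\partial g^{ik}/\partial x_{\alpha}$, as in (\ref{eq:Delta-var}), then isolates the change in the functional form of $g^{ik}$ at a fixed coordinate point, which is Weyl's virtual displacement.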
Using this
variational technique, Weyl rederives Einstein's
energy-momentum equations for matter (\ref{eq:AE-energy3}) (\cite[124]{Weyl-1}).
Writing $dx$ for $dx_1dx_2dx_3dx_4$, he notes that $\int {\frak M}\, dx$ is an invariant and that
$$\int_{\Xi} \Delta {\frak M}\, dx =0.$$ Furthermore,
$$\int_{\Xi} \Delta {\frak M}\, dx = \int {\frak T}_{ik}\,\Delta g^{ik}\, dx, \quad{\frak T}_{ik} = \sqrt g T_{ik}.$$
Substituting (\ref{eq:Delta-var}) and carrying out the partial integration leads to
$$ \int \sum_i \biggl \{ \sum_{k}\frac{\partial{{\frak T}_i^k}}{\partial {x_k}}+ \frac{1}{2}\sum_{rs}\frac{{\partial {g^{rs}}}}{{\partial {x_i}}}{\frak T}_{rs}\biggr \}\,\xi_i dx =0,$$
and since $\xi_i$ is arbitrary, he gets (\ref{eq:AE-energy3}):
$$\sum_{k}\frac{\partial{{\frak T}_i^k}}{\partial {x_k}}+ \frac{1}{2}\sum_{rs}\frac{{\partial {g^{rs}}}}{{\partial {x_i}}}{\frak T}_{rs}=0.$$
Weyl next points out that a parallel argument using the gravitational action $H$ leads to four analogous equations satisfied by the Einstein tensor
$R^{\mu\nu}- \frac{1}{2}g^{\mu\nu}R$. Written in modern notation, these are $$(R^{\mu\nu}- \frac{1}{2}g^{\mu\nu}R)_{;\nu}=0,$$
known today as contracted
Bianchi identities. Since these are formally identical to the equations (\ref{eq:AE-energy3}),
Weyl made the noteworthy observation that the latter equations are an immediate consequence of Einstein's field equations (\ref{eq:einst}), written in the form
\begin{equation}
R^{\mu\nu}- \frac{1}{2}g^{\mu\nu}R = -\kappa T^{\mu\nu}.
\end{equation}
He further observed that this was a natural consequence of a generally covariant theory, since the freedom to choose any coordinate system is reflected in the fact that these ten gravitational field equations satisfy four differential identities.
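The contracted Bianchi identities can be verified symbolically for a concrete metric. In the sympy sketch below (the FRW-type diagonal metric is an arbitrary test case chosen for simplicity, not one discussed by Weyl), the covariant divergence of the mixed Einstein tensor $G^{\mu}{}_{\nu}$ vanishes identically, independently of the function $a(t)$:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)
n = 4

# An FRW-type diagonal test metric (an arbitrary choice to illustrate the identity)
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

# Christoffel symbols Gamma^l_{mu nu}
Gam = [[[sp.simplify(sum(ginv[l, s]*(sp.diff(g[s, m], coords[nu])
                                     + sp.diff(g[s, nu], coords[m])
                                     - sp.diff(g[m, nu], coords[s]))
                         for s in range(n))/2)
         for nu in range(n)] for m in range(n)] for l in range(n)]

# Ricci tensor R_{mu nu} and curvature scalar R
def ricci(m, nu):
    return sum(sp.diff(Gam[l][m][nu], coords[l]) - sp.diff(Gam[l][m][l], coords[nu])
               + sum(Gam[l][l][s]*Gam[s][m][nu] - Gam[l][nu][s]*Gam[s][m][l]
                     for s in range(n))
               for l in range(n))

Ric = sp.Matrix(n, n, lambda m, nu: sp.simplify(ricci(m, nu)))
R = sp.simplify(sum(ginv[m, nu]*Ric[m, nu] for m in range(n) for nu in range(n)))

# Mixed Einstein tensor G^mu_nu = R^mu_nu - (1/2) delta^mu_nu R
G = sp.Matrix(n, n, lambda m, nu: sp.simplify(
    sum(ginv[m, s]*Ric[s, nu] for s in range(n)) - sp.Rational(1, 2)*int(m == nu)*R))

# Covariant divergence nabla_mu G^mu_nu -- vanishes by the contracted Bianchi identities
def divergence(nu):
    return sp.simplify(sum(sp.diff(G[m, nu], coords[m])
                           + sum(Gam[m][m][s]*G[s, nu] for s in range(n))
                           - sum(Gam[s][m][nu]*G[m, s] for s in range(n))
                           for m in range(n)))

assert all(divergence(nu) == 0 for nu in range(n))
```

The curvature here is nontrivial, so the four vanishing divergences reflect the differential identities themselves, not a flat-space accident.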
Although Weyl clearly recognized the connection between his results and Hilbert's Theorem I, he made no direct comments about the latter. Instead, he cited Hilbert's second note \cite{Hilbert1917}, which addressed the problem of causality in GR while proposing a method for handling Cauchy problems.
Nor did he draw any clear distinction between relativistic conservation laws and their counterparts in classical mechanics.
Working within this novel context, Weyl focused on adapting variational principles to the new field physics, following the lead of Hilbert, Lorentz, and Einstein. None of these mathematicians and physicists was deeply versed in the fine points of Ricci's tensor calculus, including the full Bianchi identities.\footnote{As was pointed out in \cite[274--276]{Pais}; for the ensuing history, see \cite[263--272]{Rowe2018}.} Due to this circumstance, they came to regard the contracted Bianchi identities as a result obtained by using variational methods.
\section{Klein's Critique of \cite{Hilbert1915} }
The papers by Einstein and Weyl discussed above were carefully studied by
Felix Klein, who from early 1917 began
to play an active
role in ongoing discussions of conceptual problems in general relativity.
As noted above in section 3, in January 1918 Klein and Hilbert reached a first consensus
regarding some fundamental issues related to general relativistic physics \cite{Kl-1}.
With regard to variational methods and conservation laws derived from them, Klein
emphasized the importance of separating formal deductions from
physical claims, such as those that form the basis for Einstein's new gravitational theory. Much of what he and Hilbert discussed centered on the distinction between theories based on invariants of the orthogonal group and those that arise from a variational problem based on general invariants, as in Hilbert's adaptation of Einstein's theory.
Klein introduced a special Lagrangian in place of $L$, namely
$$L=\alpha Q=-\alpha\sum_{\mu\nu\rho\sigma}(q_{\mu\nu}-q_{\nu\mu})(q_{\rho\sigma}-q_{\sigma\rho})(g^{\mu\rho}g^{\nu\sigma}-g^{\mu\sigma}g^{\nu\rho}),$$
where $-\alpha$ is Einstein's $\kappa = \frac{8\pi K}{c^2}$ and $K$ the universal gravitational constant from Newton's theory \cite[333]{Ein-6}. He then observes that the tiny value $-\alpha = 1.87\cdot 10^{-27}$ will ensure that the new theory accords with Maxwell's theory, for which $\alpha =0$. Klein next takes the two integrals separately:
$$I_1= \int K d\omega, \,\, I_2= \alpha \int Q d\omega,$$ and carries out the variation in a purely formal manner, writing: $$\delta I_1 = \int K_{\mu\nu}\delta g^{\mu\nu}d\omega;$$
$$\delta I_2 = \alpha \int (\sum_{\mu\nu}Q_{\mu\nu}\delta g^{\mu\nu}+\sum_{\rho}Q^{\rho}
\delta q_{\rho})d\omega.$$ Here $K_{\mu\nu}$ is Hilbert's $[\sqrt g K]_{\mu\nu} : \sqrt g$, whereas
$Q_{\mu\nu}=(\frac{\partial \sqrt g Q}{\partial g^{\mu\nu}}) : \sqrt g$, and the vector
$$Q^{\rho}=-\sum_{\sigma}\frac{\partial(\frac{\partial \sqrt g Q}{\partial q_{\rho\sigma}})}{\partial w^{\sigma}} : \sqrt g.$$ Clearly the $Q_{\mu\nu}$ are the coefficients in (\ref{eq:Hilbert-EM-Energie}), Hilbert's expression for electromagnetic energy, so Klein called these the energy components of the electromagnetic field. He further identified
$Q^{\rho}=0$ as the counterpart to the Maxwell equations.
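Incidentally, the numerical value Klein cites for $-\alpha$ is easy to reproduce: with Newton's constant $K \approx 6.67\cdot 10^{-8}$ and $c \approx 3\cdot 10^{10}$ in CGS units, Einstein's $\kappa = 8\pi K/c^2$ indeed comes out near $1.87\cdot 10^{-27}$ (in cm\,g$^{-1}$). A quick check (the constant values below are modern CGS approximations):

```python
import math

K = 6.674e-8   # Newton's constant in CGS units (cm^3 g^-1 s^-2), modern value
c = 2.998e10   # speed of light in CGS units (cm/s)

kappa = 8 * math.pi * K / c**2   # Einstein's kappa = 8*pi*K/c^2
print(f"kappa = {kappa:.3e}")    # within about 0.3% of Klein's 1.87e-27

assert abs(kappa - 1.87e-27) / 1.87e-27 < 0.01
```

The smallness of this coupling is precisely what lets Klein argue that the theory reduces to Maxwell's in the limit $\alpha = 0$.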
Carrying out the variation for $I_1$ leads almost immediately to the four differential equations that Hilbert had derived using Theorem III (see (\ref{eq:Hilbert-ident-2})):
\begin{equation}\label{Klein-identities-1}
\sqrt g \sum_{\mu\nu}K_{\mu\nu} g^{\mu\nu}_{\sigma}+ 2\sum_{\mu\nu}\frac{\partial(\sqrt g K_{\mu\sigma}
g^{\mu\nu})}{\partial w^{\nu}}
=0, \,\, \sigma = 1,\,2,\,3,\,4,
\end{equation}
which Klein summarizes in the statement that the vectorial divergence of $K_{\mu\nu}$ vanishes. For the variation of
$I_2$ he obtains:
\begin{equation}\label{Klein-identities-2}
\sqrt g \sum_{\mu\nu}Q_{\mu\nu} g^{\mu\nu}_{\sigma}+ 2\sum_{\mu\nu}\frac{\partial(\sqrt g \,Q_{\mu\sigma}
g^{\mu\nu})}{\partial w^{\nu}}+\sum_{\rho}\sqrt g \,Q^{\rho} (
q_{\rho\sigma}-q_{\sigma\rho})
=0, \,\, \sigma = 1,\,2,\,3,\,4.
\end{equation}
Only at this point does Klein make use of the field equations, which here appear in the form:
$$K_{\mu\nu}+\alpha Q_{\mu\nu}=0; \,\,Q^{\rho}=0.$$ Multiplying (\ref{Klein-identities-2}) by $\alpha$ and adding this to (\ref{Klein-identities-1}) yields:
\begin{equation}\label{Klein-identities-3}
\sum_{\mu\nu}\sqrt g ( K_{\mu\nu}+\alpha Q_{\mu\nu}) g^{\mu\nu}_{\sigma}+
2\sum_{\mu\nu}\frac{\partial(\sqrt g ( K_{\mu\sigma} +\alpha Q_{\mu\sigma})
g^{\mu\nu})}{\partial w^{\nu}}
+\alpha \sum_{\rho}\sqrt g \,
Q^{\rho} (q_{\rho\sigma}-q_{\sigma\rho})
=0.
\end{equation}
From the equations (\ref{Klein-identities-3}) Klein immediately deduces that the four equations $Q^{\rho}=0$ follow directly from the ten equations
$K_{\mu\nu}+\alpha Q_{\mu\nu}=0$. If, on the other hand, one takes the generalized Maxwell equations $Q^{\rho}=0$
alongside the four identities (\ref{Klein-identities-2}), then one can conclude that the energy components $Q_{\mu\nu}$ have a vanishing vectorial divergence.
This straightforward analysis pointed to one of the glaring weaknesses in
Hilbert's theory, namely the
use he made of Theorem I to deduce
four identities from his fourteen fundamental equations. Hilbert's idea of reducing electrodynamics to gravitational effects hinged on applying Theorem I to the world function $H=K+L$. What
Klein simply pointed out was that by handling gravity and
electromagnetism separately, one can derive four identities
from each, namely the four Lagrangian equations
derivable from $\delta\int K d\omega =0$ and $\delta\int Q d\omega =0$,
respectively. This meant that
Hilbert's Theorem I led to {\it eight}
identities and not just four, an observation that seriously undermined
his unification program.
Klein was also able to shed new light on Hilbert's invariant energy vector $e^{\nu}$ by slightly transforming equations (\ref{Klein-identities-3}). This led to the recognition that $e^{\nu}$ could be decomposed into a sum of two vectors, the first being
$$e_1^{\nu}= -2\sum_{\mu\sigma} (( K_{\mu\sigma}+\alpha Q_{\mu\sigma}) g^{\mu\nu}+\frac{\alpha}{2}Q^{\nu}q_{\sigma})p^{\sigma}$$
and the second $e_2^{\nu}$ having vanishing divergence.\footnote{Klein also noted certain properties of $e_2^{\nu}$, but he found it too difficult to calculate directly. A few months later he discovered a different way to derive Hilbert's $e^l$ and presented this in \cite{Kl-2}.} Since the first vector vanishes by virtue of the field equations, Klein concludes that Hilbert's invariant energy theorem (\ref{eq:Hilbert-energy}) is merely an identity and thus by no means analogous to conservation of energy in classical mechanics. These findings were clearly in accord with what Emmy Noether had already pointed out to Hilbert more than a year before. Since she still had her manuscript, she was able to show her derivation to Klein. This probably took place on or shortly after 22 January 1918, when he spoke about these matters at a meeting of the G\"ottingen Mathematical Society. After mentioning her earlier results, Klein somewhat dismissively wrote that she had not brought out their importance as decisively as he had done in his lecture \cite[559]{Kl-1}.
\section{Klein's Correspondence with Einstein on Energy Conservation}
Klein's open letter to Hilbert contained similar remarks about Einstein's derivation of the
``conservation laws'' (\ref{eq:AE-energy2}) in \cite{Ein-5} (the quotation marks are Klein's).
Klein claimed that these
four equations should also be regarded as mathematical identities, by which he apparently meant that they were consequences of the field equations. This assertion was disputed by Einstein and led to
some lengthy exchanges between him and Klein during the month of March 1918.
On 13 March, Einstein wrote him:
\begin{quote}
It was with great pleasure that I read your extremely clear and elegant explanations regarding Hilbert's first note. However, I consider your remark about my formulation of the conservation laws to be inaccurate. For equation [(\ref{eq:AE-energy2})] is by no means an identity, any more than [(\ref{eq:AE-energy1})]; only [(\ref{eq:4identities})] is an identity. The conditions [(\ref{eq:AE-energy1})] are the mixed form of the field equations of gravitation. [(\ref{eq:AE-energy2})] follows from [(\ref{eq:AE-energy1})] on the basis of the identity [(\ref{eq:4identities})]. The relations here are exactly analogous to those of nonrelativistic theories. \cite[673]{Einstein1998b}
\end{quote}
Einstein might have noticed that what Klein meant by an identity differed from his own understanding, but he was mainly intent on spelling out the physical importance of the pseudo-tensor
${\frak t}_{\sigma}^{\nu}$
in the conservation laws (\ref{eq:AE-energy2}). The ${\frak t}_{\sigma}^{\nu}$'s not only lead to these laws; together with (\ref{eq:AE-energy1}), they also provide a physical
interpretation entirely analogous to Gauss's law in electrostatics.
\begin{quote}
In the static case the number of ``lines of force'' running from a physical system to infinity is, according to [(\ref{eq:AE-energy1})], only dependent on the 3-dimensional spatial integrals
$$\int ({\frak T}_{\sigma}^{\nu} + {\frak t}_{\sigma}^{\nu})dV$$
to be taken over the system and the gravitational field belonging to the system. This state of affairs can be expressed in the following way. As far as its gravitational influence at a great distance is concerned, any (quasi-static) system can be replaced by a point mass. The gravitational mass of
this point mass is given by
$$\int ({\frak T}_{4}^{4} + {\frak t}_{4}^{4})dV$$
i.e., by the total energy (more precisely, total ``rest energy'') of the system, exactly as the inertial mass of the system. \dots
From [(\ref{eq:AE-energy2})] it can be concluded that the same integral
$\int ({\frak T}_{4}^{4} + {\frak t}_{4}^{4})dV$
also determines the system's inertial mass. Without the introduction and interpretation of ${\frak t}_{\sigma}^{\nu}$, one cannot see that the inertial and gravitational mass of a system agree.
I hope that this anything but complete explanation will enable you to guess what I mean. Above all, though, I hope you will abandon your view that I had formulated an identity, that is, an equation that places no conditions on the quantities in it, as the energy law. \cite[674]{Einstein1998b}
\end{quote}
Regarding this last point, Klein was still thoroughly unpersuaded, and so he sent Einstein
his ``rebuttal'' in a long letter from 20 March \cite[685--688]{Einstein1998b}. Klein's key assertion was that the equations (\ref{eq:AE-energy2}) are completely equivalent to $$\sum_{\nu}\frac{\partial ( K_{\sigma}^{\nu} +\alpha Q_{\sigma}^{\nu})}
{\partial w^{\nu}}=0$$ and that the latter are ``physically contentless.'' He meant by this nothing more than the observation that Einstein's conservation laws followed directly from the gravitational field equations.
Klein further informed Einstein that
Carl Runge had found a way to particularize
the coordinate system to obtain conserved quantities directly from:
$$\sum_{\nu} \frac{\partial T^{\nu}_{\sigma}}{\partial x_{\nu}} =0.$$
Delighted by this apparent breakthrough (``the pure egg of Columbus''), he
was anxious to learn what Einstein thought about Runge's finding.
Emmy Noether already knew about this proposal, and she was highly skeptical. She was visiting her father in Erlangen, so Klein mailed her a draft of \cite{Kl-1} along with a description of Runge's result. She quickly went to work and found from concrete examples that
Runge's coordinate transformation led
to well-known identities that cannot be interpreted as energy
laws.\footnote{E. Noether to F. Klein, 12 March 1918, Nachlass Klein, (SUB), G\"ottingen.}
Einstein clarified his views on these matters in a letter from 24 March. In this reply, he
underscored that the equations above
contained {\it part} of the content of the field equations $$ K_{\sigma}^{\nu} +\alpha Q_{\sigma}^{\nu}=0.$$
The same was true for the equations
$$\sum_{\nu}\frac{\partial ({\frak T}_{\sigma}^{\nu} + {\frak t}_{\sigma}^{\nu})}
{\partial x_{\nu}}=0,$$ though with the important advantage that these equations can be used to obtain an integral formulation for energy conservation on regions of space-time outside of which the ${\frak T}$'s and ${\frak t}$'s vanish. One then obtains $$\frac{d}{dx_4}\Big\{\int ({\frak T}_{\sigma}^{4} + {\frak t}_{\sigma}^{4})\,dV \Big\}=0.$$
Einstein emphasized that ``the temporal constancy of these four integrals is a nontrivial consequence of the field equations and can be looked upon as entirely similar and equivalent to the momentum and energy conservation laws in the classical mechanics of continua'' \cite[697]{Einstein1998b}.
As for Runge's proposal for obtaining the conservation laws by
particularizing the coordinate system, Einstein reported that he had explored that idea himself,
but had
given it up ``because the theory predicts energy losses
due to gravitational waves'' and these losses could not be taken
into account. Einstein included an offprint of his recent paper \cite{Ein-9}, in which he introduced the quadrupole formula for the propagation of gravitational radiation. This was a typical instance showing how Einstein could quickly cast aside a
mathematical idea when he noticed that it failed to conform to his physical understanding. Emmy Noether's reservations regarding Runge's approach were, of course, based on essentially mathematical considerations. Klein and Runge soon thereafter dropped this line of investigation, but Klein continued to explore the mathematical underpinnings of energy conservation in the context of invariant variational principles.
In mid-July, he wrote to Einstein with news of a first breakthrough: ``I have succeeded in finding the organic law of construction for Hilbert's energy vector'' \cite[833]{Einstein1998b}. Klein's innovation
was surprisingly simple. Previously, he and others had carried out infinitesimal variations using a vector field
$p^{\tau}$, which along with its derivatives was required to vanish on the boundary of the integration domain. Klein now dropped this restriction, so that in carrying out the variation he obtained an additional triple integral of the form
$$\int\int\int\sqrt g\{e^1dw^2dw^3dw^4+\dots +e^4dw^1dw^2dw^3\}.$$ He then found that Hilbert's energy vector was essentially identical to $(e^1,e^2,e^3,e^4)$,
differing only by terms with vanishing divergence.
In his letter to Einstein, Klein reported that he hoped now to find his way to Einstein's formulation of energy conservation based on ${\frak T}^{\nu}_{\sigma} +{\frak t}^{\nu}_{\sigma}.$
Einstein answered: ``It is very good that you want to clarify the formal significance of the ${\frak t}^{\nu}_{\sigma}$. For I must admit that the derivation of the energy theorem for field and matter together appears unsatisfying from the mathematical standpoint, so that one cannot
characterize the ${\frak t}^{\nu}_{\sigma}$ formally'' \cite[834]{Einstein1998b}. Einstein was also unhappy about the fact that his pseudotensor was unsymmetric, unlike the matter tensor.\footnote{In 1951 Landau and Lifschitz introduced a symmetric pseudotensor for gravitational energy; unlike the Einstein pseudotensor, it conserves angular momentum.}
It should be emphasized that Klein was working closely with
Emmy Noether during this period, as he acknowledged in \cite{Kl-1} and \cite{Kl-2}. In fact, the latter paper and \cite{Noether1918b} should be seen as complementary studies, and in today's world would surely have been co-authored publications. On Monday, 22 July, Klein spoke on ``Hilberts Energievektor'' before the G\"ottingen Mathematical Society, one day before Noether's talk on ``Invariante Variationsprobleme.'' Klein then submitted the preliminary version of her findings to
the G\"ottingen Scientific Society on Friday, 26 July, having done the same one week earlier with his manuscript for \cite{Kl-2}. Both papers underwent final revision in September and appeared in the {\it G\"ottinger Nachrichten} shortly afterward.
By this time, H.A. Lorentz had also derived differential equations for energy conservation in gravitational fields, so his was a third formulation in addition to those of Einstein and Hilbert. It seemed evident that these different versions must be somehow related, and Klein hoped to explain how. Noether's earlier work on the same question clearly helped to move this project forward.
Klein's framework in \cite{Kl-2} extends the one he utilized in \cite{Kl-1}. He now begins with a general variational problem for a scalar function $K$ viewed as a function of $g^{\mu\nu}, g^{\mu\nu}_{\rho}, g^{\mu\nu}_{\rho\sigma}$ alone. In this general setting he derives a series of identities leading to what he calls the principal theorem, which he writes in the form
\begin{equation}\label{eq:princ-theorem}
\sum_{\mu\nu}\sqrt g (K_{\mu\nu}g^{\mu\nu}_{\tau})\equiv 2\sum_{\sigma}
\frac{\partial \sqrt g U^{\sigma}_{\tau}}{\partial w^{\sigma}},
\end{equation}
where $K_{\mu\nu}$ is the Lagrangian derivative. This identity effectively turns the four expressions on
the left-hand side into what Klein calls elementary divergences because they only involve the first derivatives of the $g^{\mu\nu}$. The right-hand side derives from the triple integral above, which Klein introduced in order to derive Hilbert's energy vector. In the previous derivations this expression simply vanishes due to the conditions imposed on the boundary of integration.
Klein gave a simple extension of equation (\ref{eq:princ-theorem}) after writing it in the abbreviated form:
$$\frak K_{\mu\nu}g^{\mu\nu}_{\tau}\equiv 2
\frac{\partial \frak U^{\sigma}_{\tau}}{\partial w^{\sigma}}.$$ He then noted that the Lagrangian derivative of any elementary divergence $\frak D\frak i\frak v$
vanishes. So for any $\frak K^*= \frak K + \frak D\frak i\frak v$, the left-hand side will remain the same, and the theorem then reads: $$\frak K_{\mu\nu}g^{\mu\nu}_{\tau}\equiv 2
\frac{\partial \frak U^{*\sigma}_{\tau}}{\partial w^{\sigma}}.$$
Only at this point does Klein take up analysis of these expressions as invariants of groups. Those deriving from the left-hand side then correspond to invariants under general coordinate transformations (or, as one would say today, arbitrary diffeomorphisms). The $U^{\sigma}_{\tau}$, resp.
$U^{*\sigma}_{\tau}$, on the other hand, are only invariant under affine transformations. This was also the case with Einstein's pseudo-tensor, but Klein now underscored the key property that Einstein had already noted before, namely that these affine invariants enter into an equation that is valid in all coordinate systems. In the present case, this reads:
\begin{equation}\label{eq:A-beta}
\frac{\partial (\frak K^{\sigma}_{\tau}+\frak U^{\sigma}_{\tau})}{\partial w^{\sigma}}\equiv 0,
\end{equation}
or from the extended theorem,
\begin{equation}\label{eq:A-gamma}
\frac{\partial (\frak K^{\sigma}_{\tau}+\frak U^{*\sigma}_{\tau})}{\partial w^{\sigma}}\equiv 0.
\end{equation}
These are evidently purely mathematical deductions valid for any invariant scalar function $K$.
Klein next turns to physics by introducing the field equations in the simplest form suitable for his purposes, writing: $$\frak K^{\sigma}_{\tau}-\chi \frak T^{\sigma}_{\tau}=0.$$ Substituting into
(\ref{eq:A-beta}) and (\ref{eq:A-gamma}) leads to two forms of the conservation laws,
$$\frac{\partial (\frak T^{\sigma}_{\tau}+\frac{1}{\chi}\frak U^{\sigma}_{\tau})}{\partial w^{\sigma}}= 0,$$
$$\frac{\partial (\frak T^{\sigma}_{\tau}+\frac{1}{\chi}\frak U^{*\sigma}_{\tau})}{\partial w^{\sigma}}= 0.$$ The first of these reflects the form Lorentz derives, whereas Klein shows that Einstein's formulation (\ref{eq:AE-energy2}) conforms with
the second. Analyzing Hilbert's energy vector led to additional complications, but the net result was the same: except for additional terms of no physical significance, its form was also of the second type.
Soon after Klein's paper \cite{Kl-2} on the differential form of the conservation
laws came out in October, he sent a copy to Einstein.
The latter responded with enthusiasm:
``I have already studied your paper most thoroughly and with
true amazement. You have clarified this difficult matter fully.
Everything is wonderfully transparent'' \cite[917]{Einstein1998b}. He was particularly delighted that Klein had not rejected his controversial pseudo-tensor for gravitational energy. Only one question still bothered him: how can one prove that Hilbert's expression is truly a generally covariant vector?
Klein answered with a calculation, but Einstein found the argument behind it insufficient \cite[932]{Einstein1998b}. One week later, after consulting with his {\it Assistent} Hermann Vermeil, Klein sent Einstein a new calculation. He realized that the argument was anything but elegant, but was eager to learn what Einstein thought of it \cite[936--937]{Einstein1998b}. He received this immediate response:
\begin{quote}
Thank you very much for the transparent proof, which I understood completely. The fact that it cannot be realized without calculation does not detract from your overall investigation, of course, since you make no use of the vector character of $e^{\sigma}$.-- In the whole theory, one thing still disturbs me formally, namely, that $T_{\mu\nu}$ must necessarily be symmetric but not $t_{\mu\nu}$, even though both must enter equivalently in the conservation law. Maybe this disparity will disappear when ``matter'' is included, and not just superficially as it has been up to now, but in a real way in the theory. \cite[938]{Einstein1998b}
\end{quote}
A few days later, Klein had the opportunity to discuss this problem with Emmy Noether, who explained that Hilbert had already alluded to a general method for proving that $e^{\sigma}$ transformed as a vector in \cite{Hilbert1915}. Klein immediately wrote to Einstein with a sketch of the proof, which did not depend on special properties of $K$ \cite[942--943]{Einstein1998b}. Once again, Noether emerged as the real expert when it came to unpacking the mysteries surrounding Hilbert's energy vector.
\section{Noether's Two Theorems}
Noether's expertise proved equally decisive in the case of Hilbert's Theorem I and its role in the formulation of
conservation laws.
Klein's main concern in \cite{Kl-1} was the status of conservation laws in general relativity.
Contrary to Einstein, he distinguished sharply between these new findings and traditional
conservation laws in classical mechanics. The latter, he argued, cannot simply be deduced from a variational principle; for example,
one cannot derive $$\frac{d(T+U)}{dt}=0$$ without invoking
specific physical properties or principles, such as Newton's law of motion.
Klein attached great significance to this issue in part because he wanted to promote
ideas from his ``Erlangen Program'' \cite{Kl-4}, which he was adapting into a general
doctrine applicable to the new physics. Relativity theory, according to
Klein, should not be thought of exclusively in terms of two groups -- the
Lorentz group of special relativity and the group of continuous point
transformations of general relativity -- but rather should be
broadly understood as the invariant theory {\it relative to some given
group} that happens to be relevant to a particular physical theory.
This was the mathematical context
Klein had in mind when he emphasized the distinction between conservation laws
in classical mechanics, special relativity, and the general theory of relativity.\footnote{ Klein's articles on
relativity theory originally appeared in the {\it G\"ottinger Nachrichten}
as well, but in 1921 he republished them along with additional commentary
in the first volume of his
collected works. In doing so, he placed them in a special section
entitled ``Zum Erlanger Programm'' (\cite[I: 411--612]{Kl-GMA}).}
Hilbert not only agreed with Klein's assertion, he went even further by expressing the opinion that
the lack of analogy between classical
energy conservation and his own energy equation was a characteristic feature of general relativity.
In his own inimitable manner, he
even claimed {\it one could prove a theorem}
effectively ruling out conservation laws for general transformations
analogous to those that hold for the transformations of the orthogonal group.
Klein replied by saying:
``It would interest me very much to see the mathematical
proof carried out that you alluded to in your answer'' \cite[565]{Kl-1}.
Hilbert's conjecture was resolved some months later when
Emmy Noether published ``Invariante Variationsprobleme'' \cite{Noether1918b}.
Noether was in Erlangen around the time Klein was
putting the last touches on \cite{Kl-1}. From there
she wrote him on 29 February
1918: ``I thank you very much for sending me your note and today's
letter [same day delivery was not uncommon in those times], and I'm very
excited about your second note \cite{Kl-2}; the notes will certainly contribute much
to the understanding of the Einstein--Hilbert theory.''\footnote{E.
Noether
to F. Klein, 29 February 1918, Nachlass Klein, (SUB), G\"ottingen.}
After this she proceeded to explain where
matters stood with regard
to the key question Klein hoped to answer, namely the relationship
between the classical and relativistic energy equations. Clearly, she
was already deeply immersed in this problem.
The fundamental results Noether obtained in \cite{Noether1918b} not only provided a general proof of
Hilbert's Theorem I, they also clarified mathematically how
conservation laws arise in Lagrangian systems
for
classical mechanics as well as modern field theories.
In her introduction,
she described her approach as one that combined the formal methods of the
calculus of variations with techniques from Sophus Lie's theory of continuous groups.
Most of Lie's work was motivated by a vision for solving
general systems of differential equations that admit a given group of
transformations. His pursuit of this program led him to develop what came to be known as the theory of Lie groups.\footnote{For historical
background
on Lie's work and its influence, see
\cite{Haw} and \cite{Hawkins2000}.} Noether pointed out that within
the context of invariant variational systems one could obtain much stronger theorems than in the general cases handled by Lie.
Noether's ``theorem'' is really two theorems, one dealing with
transformation groups determined by finitely many parameters,
the other concerned with groups determined by finitely
many {\it functions} and their derivatives. Following Lie,
she called the first type a finite continuous group, the second
an infinite continuous group. Of particular significance are those groups
containing both types of structures, which Lie called mixed groups.
With regard to physical interpretations, she noted that
her first theorem generalized the formalism underlying the standard results pertaining to first integrals in classical mechanics, whereas her second theorem constituted ``the most general group-theoretic
generalization of `general relativity''' \cite[240]{Noether1918b}.
She formulated these two theorems as follows:
\begin{quote}
Theorem I. Let $G_{\rho}$ be a finite continuous group with $\rho$ parameters. If the integral
$I$ is invariant with respect to $G_{\rho}$, then $\rho$ linearly independent combinations of the
Lagrangian expressions become divergences, and conversely. The theorem also holds in the
limiting case of infinitely many parameters.
\bigskip
Theorem II. Let $G_{\infty\rho}$ be an infinite continuous group depending on $\rho$ continuous
functions. If the integral
$I$ is invariant with respect to $G_{\infty\rho}$, in which arbitrary functions and their derivatives
up to the $\sigma$th order appear, then $\rho$
identical relations are satisfied between the
Lagrangian expressions and their derivatives up to the order $\sigma$. The converse also holds
here. \cite[238--239]{Noether1918b}
\end{quote}
Theorem I (``Noether's Theorem'') is usually the only result cited in the physics literature. Its importance for physical theories is that it precisely
characterizes how conserved quantities arise from symmetries in variational systems. In a letter to Einstein from 7 January 1926, Noether wrote that ``for me, what mattered in `Invariante Variationsprobleme' was the precise formulation of the
scope of the principle and, above all, its converse \dots '' \cite[2011: 164]{Kosmann-Schwarzbach2006}.
Likewise,
Theorem II characterizes the manner in which identities satisfied by a combination of the Lagrangian expressions and their derivatives come into play.
Hilbert's Theorem I may thus be seen as a special case of Noether's second theorem
corresponding to transformations of the group $G_{\infty 4}$ given by four
functions that depend on the four coordinates of the world--points.
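To see Theorem I at work in the simplest classical setting -- an illustration not contained in Noether's text -- consider the action $I=\int L(q,\dot q)\,dt$ for a single coordinate $q$, and suppose $I$ is invariant under the one-parameter translation group $q\mapsto q+\varepsilon$, so that $\partial L/\partial q=0$. The single Lagrangian expression is then itself a divergence,
$$\frac{\partial L}{\partial q}-\frac{d}{dt}\frac{\partial L}{\partial \dot q}=-\frac{d}{dt}\frac{\partial L}{\partial \dot q},$$
just as Theorem I asserts for $\rho=1$; along solutions of the Euler--Lagrange equation the momentum $p=\partial L/\partial\dot q$ is accordingly a first integral.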
Noether combined these two key results in order to distinguish between ``proper'' and ``improper'' conservation laws in physics.
Suppose the integral $I$ is invariant with respect to a group $G_{\infty\rho}$. One can then
particularize the functions $p_{\lambda}, \,\lambda = 1,2,\dots ,\rho$ to obtain a {\it finite}
continuous subgroup $G_{\sigma}$ of $G_{\infty\rho}$. The divergence relations
that arise will then be fully determined by this $G_{\sigma}$. Moreover, the divergence relations
associated with $G_{\sigma}$ must, being
a subgroup of
$G_{\infty\rho}$, also be derivable from identities
connecting the Lagrangian expressions and their total derivatives by suitably particularizing the
$p_{\lambda}$.
Noether called such relations that were derivable from an infinite
group $G_{\infty\rho}$ improper
(``uneigentliche'') divergence relations; all others were proper
(``eigentliche'').
From these considerations, she concluded that:
\begin{quote}
The divergence relations corresponding to a finite group $G_{\sigma}$
are improper if and only if $G_{\sigma}$ is a subgroup of an infinite group
with respect to which $I$ is invariant. \cite[254]{Noether1918b}
\end{quote}
As Noether noted, the conservation laws of classical
mechanics as well as those of special relativity theory are proper
in the above sense. One cannot deduce these as invariants of a
suitably particularized subgroup of an infinite group. In general relativity, on the other hand,
every Lagrangian variational formalism will lead to four identities as a consequence of
the principle of general covariance. In summarizing these findings,
Noether wrote:
\begin{quote}
Hilbert expressed his assertion regarding the absence of actual energy theorems as
a characteristic attribute of `general relativity theory.' If this assertion is to be literally valid, then the term `general relativity' must be taken more broadly than is usual and extended to groups that depend on $n$
arbitrary functions. \cite[256--257]{Noether1918b}
\end{quote}
From the mathematical standpoint, Noether's analysis provided
a strikingly clear and altogether general answer to the question Klein
had raised about the status of conservation laws in Lagrangian systems.
Her study \cite{Noether1918b} remains today nearly the last word
on this subject.\footnote{For remarks on modern refinements of Noether's results, see \cite{Kosmann-Schwarzbach2006}.}
\section{On Hilbert's Revised Theory in \cite{Hilbert1924}}
Klein took deep satisfaction in the part he was
able to play in elucidating the mathematical underpinnings of
key results in the general theory of relativity. In a letter to Pauli,
he related Einstein's remark about how Klein's third note on
general relativity \cite{Kl-7} had made him ``as happy as a child whose
mother had presented him with a piece of chocolate,'' adding
that ``Einstein is always so gracious in his personal remarks,
in complete contrast to the foolish promotional
efforts (``t\"orichten Reklametum'') undertaken to
honor him.''\footnote{Klein to Pauli, 8 March 1921,
in \cite[79]{Pau-3}.}
Klein also made it clear to
Pauli that his article ``could not pass over Hilbert's
efforts in silence.''\footnote{Klein to Pauli, 8 May 1921,
in \cite[31]{Pau-3}.} Pauli was skeptical of the various unified field theories that had been advanced by Mie, Weyl, and Einstein \cite[205--206]{Pau-2}. Regarding Mie's theory, he saw no way to deduce the properties of
the world function $L$ just by knowing its invariants; there were simply far too many alternatives \cite[189--190]{Pau-2}.
By this time, Hilbert had probably drawn a similar conclusion.
Nevertheless, three years later he published a
revised version of \cite{Hilbert1915} and \cite{Hilbert1917}
in {\it Mathematische
Annalen} \cite{Hilbert1924}. There, he advertised this as ``essentially a reprint
of the earlier communications \dots
and my remarks on them that F. Klein published in \dots
\cite{Kl-1} -- with only minor editorial alterations and changes in
order to ease understanding'' \cite[1]{Hilbert1924}.
In truth,
however, this ``reprint'' contains major changes
that no careful reader could possibly
miss. These pertain mainly to \cite{Hilbert1915}, the focus of discussion for the present account.
As noted above, Hilbert's invariant energy vector disappears entirely in this revised account. Furthermore, he
softened the physical claim that had been so central for the original theory, namely ``electrodynamic phenomena are the effects of gravitation.'' In place of this, he now wrote that the four independent identities that derive from the gravitational
equations $[\sqrt g H]_{\mu\nu} =0$
signify the ``connection between gravity and
electrodynamics'' \cite[10]{Hilbert1924}. Hilbert noted that his earlier
Theorem I had served as the leitmotiv for his theory, but he only mentions it in passing. In a
footnote, he cites Emmy Noether's paper \cite{Noether1918b} for a
``general proof'' \cite[6]{Hilbert1924}. In its place, he refers to a slightly more general version of the result formerly called Theorem III. Here it appears as Theorem 2, but instead of the expressions (\ref{eq:i_s}) and (\ref{eq:i_s^l}) Hilbert now writes:
\begin{equation}\label{Hilbert-Theorem 2}
i_s= \sum_{\mu\nu}( [ \sqrt g J]_{\mu\nu} g^{\mu\nu}_s + [ \sqrt g J]_{\mu}q_{\mu s})
\,;\, i_s^l= -2 \sum_{\mu} [ \sqrt g K]_{\mu s} g^{\mu l} +[ \sqrt g J]_{l}q_{s}.
\end{equation}
As before, (\ref{eq:Hilbert-ident-2}) still holds, and he now applies this theorem successively to $K$ and $L$, following Klein (and implicitly Noether). This leads in the first case to the four identities (\ref{Klein-identities-1}):
$$ \sum_{\mu\nu}[\sqrt g K]_{\mu\nu} g^{\mu\nu}_{s}+ 2\sum_{\mu m}\frac{\partial([\sqrt g K]_{\mu s}
g^{\mu m})}{\partial x_m}
=0, \,\, s = 1,\,2,\,3,\,4.$$
Hilbert also rewrites the field equations (\ref{eq:Hilbert-grav-eqns}) by introducing
$$T_{\mu\nu}= -\frac{1}{\sqrt g}\frac{\partial \sqrt g L}{\partial g^{\mu\nu}}.$$
Since
$[\sqrt g K]_{\mu\nu} = \sqrt g (K_{\mu\nu}-\frac{1}{2}g_{\mu\nu}K)$, the field equations now appear as:
$$K_{\mu\nu}-\frac{1}{2}g_{\mu\nu}K = T_{\mu\nu}.$$ Inserting $L$ in (\ref{eq:Hilbert-ident-2}) yields
\begin{equation}\label{eq:2-L}
\sum_{\mu\nu}-(\sqrt g T_{\mu\nu})g^{\mu\nu}_s+2\sum_m \frac{\partial(-\sqrt g T^m_{s})}{\partial x_m} +
\sum_{\mu}[\sqrt g L]_{\mu}q_{\mu s}-\sum_{\mu} \frac{\partial([\sqrt g L]_{\mu}q_s)}{\partial x_{\mu}}
=0.
\end{equation}
Invoking the field equations $[\sqrt g L]_{\mu}=0$, the last two terms vanish, leaving:
\begin{equation}
\sum_{\mu\nu} \sqrt g T_{\mu\nu}g^{\mu\nu}_s+2\sum_m \frac{\partial \sqrt g T^m_{s}}{\partial x_m}=0,
\end{equation}
which are the familiar equations (\ref{eq:AE-energy3}) for the matter tensor $T_{\mu\nu}$. As Einstein noted in \cite[325]{Ein-6}, these equations are the general relativistic analogue for the classical conservation laws of momentum and energy, where the second term represents the transfer of momentum-energy from the gravitational field to matter.\footnote{This passage is the only place in \cite{Ein-6} in which Einstein made direct reference to \cite{Hilbert1915}.} Hilbert remarks accordingly that these equations pass over to true conservation laws when the $g^{\mu\nu}$ are constant, in which case
$$\sum_m \frac{\partial T^m_{s}}{\partial x_m}=0.$$
Hilbert showed similarly that by invoking the gravitational field equations (\ref{eq:Hilbert-grav-eqns}) the first two terms above vanish, which leaves:
$$\sum_{\mu}[\sqrt g L]_{\mu}q_{\mu s}-\sum_{\mu} \frac{\partial([\sqrt g L]_{\mu}q_s)}{\partial x_{\mu}}
=0.$$
These are four differential equations connecting the electrodynamical Lagrangians with their first derivatives, as asserted by Noether's second theorem. Hilbert's derivation at this key point in \cite{Hilbert1924} follows the argument in \cite{Kl-1} almost to the letter. Thus many of the technical tricks he employed in \cite{Hilbert1915} have now disappeared, making this paper far easier to follow than the original.
It would seem unlikely that many readers noticed that \cite{Hilbert1924} was hardly what could be called
``essentially a reprint
of the earlier communications.'' Yet even later commentators accepted this characterization at face value, as pointed out in \cite[227]{Rowe1999}. In today's world, with our ready access to so many published sources, one might hope that historians would be held to a higher standard.
At the outset of his paper, Hilbert noted that only future research could decide whether a program like the one he first envisioned in 1915 might actually be realizable. Many physicists would continue to cling to this
dream of establishing a pure field theory that could account for microphysical phenomena, but a growing number had become skeptical. By 1924, Hilbert had begun to immerse himself in the foundational problems of quantum theory, and these would occupy a good part of his attention throughout the 1920s \cite[503--706]{Sauer/Majer2009}. Emmy Noether, on the other hand, would soon emerge to become the leader in G\"ottingen of an important research school, one whose followers promoted her special vision for abstract algebra. Her venture into mathematical physics, fruitful as it had been, was merely an episode in her early career. If physicists today think of her in connection with ``Noether's Theorem'' -- by which they mean the first and not the highly significant second theorem in \cite{Noether1918b} -- they typically overlook the role she played in the dramatic, but also highly complex story of how Einstein's theory of gravitation was received in Germany during the years of the First World War.
\newpage
\section{Introduction}\label{Sec_intro}
\noindent
Despite their simplicity, or perhaps because thereof, the first and the second moment method are the most widely used techniques in probabilistic combinatorics.
Erd\H{o}s\ famously employed the first moment method to lower-bound the Ramsey number as well as to establish the existence of graphs of high girth
and high chromatic number~\cite{ErdosRamsey,Erdos}.
Even a half-century on, deterministic constructions cannot hold a candle to these probabilistic results~\cite{BRSW,Nesetril}.
Moreover, the second moment method has been used
to count prime factors~\cite{Turan} and Hamilton cycles~\cite{RobinsonWormald}
as well as to determine the two possible values of the chromatic number of a sparse random graph~\cite{AchNaor}.
Yet there are quite a few problems for which the standard first and the second moment methods are too simplistic.
The {\em random $k$-SAT model} is a case in point.
There are $n$ Boolean variables $x_1,\ldots,x_n$ and $m$ clauses $a_1,\ldots,a_m$, where $m=\lceil\alpha n\rceil$ for some fixed $\alpha>0$.
Each clause binds $k$ variables, which are chosen independently and uniformly, and
discourages them from taking precisely one of the $2^k$ possible truth value combinations.
The forbidden combination is chosen uniformly and independently for each clause.
The random $k$-SAT instance $\PHI=\PHI_k(n,m)$ gives rise to a probability measure on the set $\{0,1\}^n$ of all Boolean assignments naturally.
Indeed, for a given parameter $\beta\geq0$ the {\em Gibbs measure} $\mu_{\PHI,\beta}$ is defined by letting
\begin{align}\label{eqkSAT1}
\mu_{\PHI,\beta}(\sigma)&=\frac1{Z_\beta(\PHI)}\prod_{i=1}^m\exp(-\beta\vec{1}\cbc{\sigma\mbox{ violates }a_i})
\qquad\mbox{for every assignment $\sigma\in\{0,1\}^n$, where}\\
Z_\beta(\PHI)&=\sum_{\sigma\in\cbc{0,1}^n}\prod_{i=1}^m\exp(-\beta\vec{1}\cbc{\sigma\mbox{ violates }a_i})\label{eqkSAT2}
\end{align}
is called the {\em partition function}.
Thus, the Gibbs measure weighs assignments according to the number of clauses that they violate.
In effect, by tuning $\beta$ we can interpolate between the uniform distribution on $\{0,1\}^n$ ($\beta=0$) and a measure
that strongly favours satisfying assignments ($\beta\to\infty$).
Hence, if we think of $\PHI$ as inducing a ``height function'' $\sigma\mapsto\#\{\mbox{clauses of $\PHI$ violated by }\sigma\}$ on the set of assignments,
then varying $\beta$ allows us to explore the resulting landscape.
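To make the definitions (\ref{eqkSAT1})--(\ref{eqkSAT2}) concrete, the following brute-force Python sketch (an illustration only; the function names are ours, not part of the model's analysis) draws a random $k$-SAT formula and evaluates $Z_\beta(\PHI)$ directly by summing over all $2^n$ assignments.

```python
import itertools
import math
import random

def random_ksat(n, m, k, rng):
    # Each clause: k variable indices drawn independently and uniformly
    # (with replacement, as in the model), together with a uniformly
    # random forbidden truth-value combination.
    return [([rng.randrange(n) for _ in range(k)],
             tuple(rng.randrange(2) for _ in range(k)))
            for _ in range(m)]

def violates(sigma, clause):
    # A clause is violated iff its variables take exactly the
    # forbidden combination.
    idx, forbidden = clause
    return all(sigma[i] == b for i, b in zip(idx, forbidden))

def partition_function(n, clauses, beta):
    # Brute-force evaluation of Z_beta: sum over all 2^n assignments
    # of exp(-beta * number of violated clauses).
    return sum(math.exp(-beta * sum(violates(s, c) for c in clauses))
               for s in itertools.product((0, 1), repeat=n))

rng = random.Random(42)
phi = random_ksat(n=10, m=25, k=3, rng=rng)
print(partition_function(10, phi, 0.0))  # exactly 2^10 = 1024.0
```

At $\beta=0$ every assignment carries weight one, so $Z_0=2^n$; as $\beta$ grows, $Z_\beta$ decreases monotonically towards the number of satisfying assignments, exhibiting the interpolation described above.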
Apart from its intrinsic combinatorial interest,
the shape of the height function, the so-called ``Hamiltonian'', governs the performance of algorithms such as the Metropolis process or Simulated Annealing.
To understand the Gibbs measure it is key to get a handle on the partition function $Z_\beta(\PHI)$.
Of course, the default approach to this kind of problem would be to apply the first and second moment methods.
However, upon closer inspection it emerges that
$Z_\beta(\PHI)<\exp(-\Omega(n))\Erw[Z_\beta(\PHI)]$ with high probability for {\em any} $\alpha,\beta>0$~\cite{maxsat}.
In other words, the first moment over-estimates the partition function of a typical random formula by an exponential factor.
The reason for this is a ``lottery effect'': a tiny minority of formulas render an exceptionally high contribution to $\Erw[Z_\beta(\PHI)]$.
Unsurprisingly, going to the second moment only exacerbates the problem and thus
for any $\alpha,\beta>0$ we find $\Erw[Z_\beta(\PHI)^2]\geq\exp(\Omega(n))\Erw[Z_\beta(\PHI)]^2$.
In other words, the second moment method fails rather spectacularly for all possible parameter combinations.
The first and the second moment method fall victim to similar large deviations effects in many alike ``random constraint satisfaction problems''.
These problems, ubiquitous in combinatorics, information theory, computer science and physics~\cite{nature,MM,Rudi},
can be described along the following lines.
A random {\em factor graph}, chosen either from a uniform distribution (like the random $k$-SAT model above) or from a suitable configuration model,
induces interactions between the variables and the constraints.
The variables range over a fixed finite domain $\Omega$ and each constraint binds a few variables.
The constraints come with ``weight functions'' that either encourage or discourage certain value combinations of the incident variables.
Multiplying the weight functions of all the constraints just as in (\ref{eqkSAT1})--(\ref{eqkSAT2}),
we obtain the Gibbs measure and the partition function.
With the standard first and second moment method drawing a blank, we seem to be at a loss as far as calculating the partition function is concerned.
However, physicists have put forward an ingenious albeit non-rigorous alternative called the {\em cavity method}~\cite{MM}.
This technique, which applies almost mechanically to any problem that can be described in the language of sparse random factor graphs,
yields an explicit conjecture as to the value of the partition function.
More specifically, the cavity method comes in several installments.
In this paper, we are concerned with the simplest, so-called ``replica symmetric'' version.
In one of their key papers~\cite{pnas} physicists hypothesized abstract conditions under which the replica symmetric cavity method
yields the correct value of the partition function.
The thrust of this paper is to prove corresponding rigorous results.
Specifically, according to \cite{pnas} the replica symmetric cavity method gives the correct answer if the Gibbs measure satisfies certain correlation decay properties.
For example, the {\em Gibbs uniqueness} condition requires that under the Gibbs measure the value assigned to a variable $x$ is asymptotically
independent of the values assigned to the variables at a large distance from $x$ in the factor graph.
In \Cor~\ref{Thm_smm} below we prove that this condition is indeed sufficient to guarantee the success of the cavity method.
Additionally, \Thm s~\ref{Thm_symUpperBound} and~\ref{Thm_nonReconstruction} provide rigorous sufficient conditions based on substantially weaker assumptions, namely
a symmetry property and the non-reconstruction property.
A key feature of the paper is that we establish these results not for specific examples but generically for a very wide class of factor graph models.
Of course, stating and proving general results requires a degree of abstraction.
In particular, we resort to the framework of local weak convergence of graph sequences~\cite[Part~4]{Lovasz}.
This framework suits the physics predictions well, which come in terms of the ``limiting tree'' that describes the local structure of a large random factor graph.
To be precise,
the replica symmetric prediction is given by a functional called the {\em Bethe free energy} applied to an (infinite) random tree.
The principal tool to prove these results is a theorem about the structure of probability measures on sets of
the form $\Omega^n$ for some fixed finite set $\Omega$ and a large integer $n$, \Thm~\ref{Thm_decomp} below.
We expect that this result, which is inspired by Szemer\'edi's regularity lemma~\cite{Szemeredi}, will be of independent interest.
To prove our results about random factor graphs, we combine \Thm~\ref{Thm_decomp} with the theory of local weak convergence to carry out completely
generically ``smart'' first and second moment arguments that avoid the lottery effects that the standard arguments fall victim to.
In \Sec~\ref{Sec_hom} we begin with the abstract results about probability measures on cubes.
Subsequently, in \Sec~\ref{Sec_factorGraphs} we set the stage by introducing the formalism of factor graphs and local weak convergence.
Further, in \Sec~\ref{Sec_Bethe} we state and prove the main results about Gibbs measures on random factor graphs.
Finally, \Sec~\ref{Sec_crazyConfigs} contains the proof of a technical result that enables us to control the local structure of random factor graphs.
\subsection*{Related work}
A detailed (non-rigorous) discussion of the cavity method can be found in~\cite{MM}.
It is known that the replica symmetric version of the cavity method does not always yield the correct value of the partition function.
For instance, in some factor graph models there occurs a ``condensation phase transition'' beyond which the replica symmetric prediction is off~\cite{Lenka,pnas}.
The more complex ``1-step replica symmetry breaking (1RSB)'' version of the cavity method~\cite{MPZ} is expected to
yield the correct value of the partition function some way beyond condensation.
However, another phase transition called ``full replica symmetry breaking'' spells doom for even the 1RSB cavity method~\cite{MM}.
The replica symmetric cavity method has been vindicated rigorously in various special cases.
For instance, Montanari and Shah~\cite{MontanariShah} proved that in the random $k$-SAT model the
replica symmetric prediction is correct up to the Gibbs uniqueness threshold.
A similar result was obtained by
Bandyopadhyay and Gamarnik~\cite{Bandyopadhyay} for graph colorings and independent sets.
Furthermore, Dembo, Montanari and Sun~\cite{DMS} proved the replica symmetric conjecture on a class of models with specific types of constraints.
A strength of~\cite{DMS} is that the result applies even to sequences of non-random factor graphs under a local weak convergence assumption.
But both~\cite{DMS,MontanariShah} are based on the ``interpolation method''~\cite{FL,Guerra,PT}, which entails substantial restrictions on the types of models that can be handled.
By contrast, the present proof method is based on a completely different approach
centered around the abstract classification of measures on cubes that we present in \Sec~\ref{Sec_hom}.
Since the ``vanilla'' second moment method fails on the random $k$-SAT model, more sophisticated variants have been proposed.
The basic idea is to apply the second moment method not to the partition function itself but to a tweaked random variable.
For instance, Achlioptas and Moore~\cite{nae} applied the second moment method to NAE-satisfying assignments, i.e., assignments such that
both the assignment itself and its binary inverse satisfy all clauses.
However, the number of NAE-satisfying assignments is exponentially smaller than the total number of satisfying assignments
and thus this type of argument cannot yield the typical value of the partition function.
The same is true of the more subtle random variable of Achlioptas and Peres~\cite{yuval}.
Furthermore, the work of Ding, Sly and Sun~\cite{DSS3}
that yields the precise $k$-SAT threshold for large $k$ is based on applying the second moment method
to a random variable whose construction is guided by the 1RSB cavity method.
Among other things, the random variable from~\cite{DSS3} incorporates conditioning on the local structure of the factor graph,
an idea that will be fundamental to our arguments as well.
\subsection*{Notation}
If $\cX$ is a finite set, then we denote by $\cP(\cX)$ the set of probability measures on $\cX$.
Moreover, $\TV\nix$ signifies the total variation norm.
If $\mu$ is a probability measure on a product space $\cX^V$ for finite sets $\cX$, $V$ and $S\subset V$, then $\mu_{\marg S}\in\cP(\cX^S)$ denotes the marginal
distribution of $\mu$ on $S$.
That is, if $(x_s)_{s\in S}\in\cX^S$, then
$$\mu_{\marg S}((x_s)_{s\in S})=\sum_{(x_s)_{s\in V\setminus S}\in\cX^{V\setminus S}}\mu((x_s)_{s\in V}).$$
If $S=\{v\}$ for some $v\in V$, then we briefly write $\mu_{\marg v}$ rather than $\mu_{\marg\{v\}}$.
The entropy of a probability measure $\mu\in\cP(\cX)$ is denoted by $H(\mu)$.
Thus, with the convention that $0\ln0=0$ we have $H(\mu)=-\sum_{x\in\cX}\mu(x)\ln\mu(x)$.
Further, agreeing that $0\ln\frac00=0$ as well, we recall that the Kullback-Leibler divergence of $\mu,\nu\in\cP(\cX)$ is
\begin{align*}
\KL\nu\mu&=\sum_{x\in\cX}\nu(x)\ln\frac{\nu(x)}{\mu(x)}\in[0,\infty].
\end{align*}
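For readers who prefer to experiment, the entropy and the Kullback-Leibler divergence are straightforward to implement. The following Python sketch (the helper names are ours and purely illustrative, not part of the paper's formalism) encodes the conventions $0\ln0=0$ and $0\ln\frac00=0$:

```python
import math

def entropy(mu):
    """H(mu) = -sum_x mu(x) ln mu(x), with the convention 0 ln 0 = 0."""
    return -sum(p * math.log(p) for p in mu.values() if p > 0)

def kl(nu, mu):
    """Kullback-Leibler divergence D(nu || mu), with 0 ln(0/0) = 0.
    Returns infinity if nu charges a point that mu does not."""
    d = 0.0
    for x, p in nu.items():
        if p == 0:
            continue  # convention: 0 ln(0/mu(x)) = 0
        if mu.get(x, 0) == 0:
            return math.inf
        d += p * math.log(p / mu[x])
    return d

u = {0: 0.5, 1: 0.5}
print(entropy(u))        # ln 2 (about 0.693)
print(kl(u, u))          # 0.0
print(kl({0: 1.0}, u))   # ln 2 (about 0.693)
```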
We are going to work extensively with probability measures on sets $\Omega^n$ for a (small) finite $\Omega$ and a large integer $n$.
If $\mu\in\cP(\Omega^n)$, then we write $\SIGMA_\mu,\TAU_\mu$ for two independent samples from $\mu$.
Where $\mu$ is obvious from the context we just write $\SIGMA,\TAU$.
Additionally, if $X(\SIGMA)$ is a random variable, then $\bck{X(\SIGMA)}_\mu=\sum_{\sigma\in\Omega^n}\mu(\sigma)X(\sigma)$ stands
for the expectation of $X$ with respect to $\mu$.
Further, if $\sigma\in\Omega^n$, $\emptyset\neq S\subset[n]$ and $\omega\in\Omega$, then we let
$$\sigma[\omega|S]={|\sigma^{-1}(\omega)\cap S|}/{|S|}.$$
Thus, $\sigma[\nix|S]$ is a probability distribution on $\Omega$, namely the distribution of $\sigma(\vec x)$ for a uniformly random $\vec x\in S$.
If $S=\{x\}$ for some $x\in[n]$, then we just write $\sigma[\omega|x]$ rather than $\sigma[\omega|\{x\}]$.
Clearly, $\sigma[\omega|x]=\vec{1}\{\sigma(x)=\omega\}$.
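For concreteness, the empirical distribution $\sigma[\nix|S]$ can be computed as follows (a minimal Python sketch; the name \texttt{empirical\_dist} is ours and merely illustrative):

```python
from collections import Counter

def empirical_dist(sigma, S, Omega):
    """Empirical distribution sigma[.|S]: the fraction of coordinates
    in S (0-indexed here) that sigma assigns each value omega in Omega."""
    counts = Counter(sigma[x] for x in S)
    return {omega: counts[omega] / len(S) for omega in Omega}

# For a single coordinate x, sigma[omega|x] is the indicator 1{sigma(x)=omega}.
sigma = (0, 1, 1, 0, 1)
d = empirical_dist(sigma, S=[0, 1, 2, 3], Omega=[0, 1])
print(d)  # {0: 0.5, 1: 0.5}
```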
We use the $\bck\nix_\mu$ notation for averages over $\mu\in\cP(\Omega^n)$ to avoid confusion with averages over other, additional random quantities,
for which we reserve the common symbols $\Erw[\nix]$, $\pr[\nix]$.
Furthermore, we frequently work with conditional expectations.
Hence, let us recall that for a probability space $(\cX,\cA,\pr)$, a random variable $X:\cX\to\RR$ and a $\sigma$-algebra $\cF\subset\cA$ the conditional expectation
$\Erw[X|\cF]$ is an $\cF$-measurable random variable $\cX\to\RR$ such that for every $\cF$-measurable event $F$ we have
$\Erw[\vec{1}\{F\}\Erw[X|\cF]]=\Erw[\vec{1}\{F\}X]$.
Moreover, recall that the conditional variance is defined as $\Var[X|\cF]=\Erw[X^2|\cF]-\Erw[X|\cF]^2$.
In line with the two previous paragraphs, if $Y:\Omega^n\to\RR$ is a random variable, $\mu\in\cP(\Omega^n)$ and $\cF$ is a $\sigma$-algebra on $\Omega^n$,
then we write $\bck{Y|\cF}_\mu$ for the conditional expectation, which is an $\cF$-measurable random variable
$\sigma\in\Omega^n\mapsto\bck{Y|\cF}_\mu(\sigma)$.
Accordingly, for an event $A\subset\Omega^n$ with $\mu(A)>0$ we write $\bck{Y|A}_\mu=\bck{Y\vec{1}\{A\}}_\mu/\mu(A)\in\RR$
for the expectation of $Y$ given
$A$.
\section{Probability measures on the cube}\label{Sec_hom}
\noindent
In this section we present a general ``regularity lemma'' for probability measures on sets $\Omega^n$ for some finite set $\Omega$ and a large integer $n$
(\Thm~\ref{Thm_decomp} below).
\subsection{Examples}
Needless to say, probability distributions on sets $\Omega^n$ for a small finite $\Omega$ and a large integer $n$ are ubiquitous.
To get an idea of what we might hope to prove about them in general, let us look at a few examples.
The simplest case certainly is a product measure $\mu=p^{\otimes n}$ with $p\in\cP(\Omega)$.
By the Chernoff bound, for any fixed $\eps>0$ there is $n_0=n_0(\eps,\Omega)>0$ such that for $n>n_0$ we have
\begin{align}\label{eqApprox1}
\bck{\TV{\SIGMA[\nix|S]-p}}_\mu&<\eps&\mbox{for every $S\subset[n]$ such that $|S|\geq\eps n$}.
\end{align}
In words, if we fix a large enough set $S$ of coordinates and then choose $\SIGMA$ randomly, then
with probability close to one the empirical distribution on $S$ will be close to $p$.
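A quick Monte Carlo sanity check of (\ref{eqApprox1}) for a product measure (an illustrative Python sketch under our own parameter choices, not part of any proof):

```python
import random

def tv(p, q):
    """Total variation distance between two distributions on a finite set."""
    return 0.5 * sum(abs(p[w] - q[w]) for w in p)

def empirical(sigma, S):
    """Empirical distribution of the coordinates of sigma indexed by S."""
    freq = {0: 0.0, 1: 0.0}
    for x in S:
        freq[sigma[x]] += 1.0 / len(S)
    return freq

# Product measure mu = p^{(x) n} with p = (0.3, 0.7) on Omega = {0,1}.
random.seed(0)
n, p = 10000, {0: 0.3, 1: 0.7}
S = range(n // 2)  # any fixed set with |S| >= eps * n
dists = []
for _ in range(20):
    sigma = [0 if random.random() < p[0] else 1 for _ in range(n)]
    dists.append(tv(empirical(sigma, S), p))
avg = sum(dists) / len(dists)
# Concentration as in (eqApprox1): the empirical distribution on S
# is very likely close to p.
print(avg < 0.05)  # True
```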
As a twist on the previous example, let $p\in\cP(\Omega)$, assume that $n$ is a square and define a measure $\mu$
by letting
\begin{align*}
\mu(\omega_1,\ldots,\omega_n)&=\prod_{i=0}^{\sqrt n-1}\brk{p(\omega_{1+i\sqrt n})
\vec{1}\{\forall j\in[\sqrt n]:\omega_{j+i\sqrt n}=\omega_{1+i\sqrt n}\}}.
\end{align*}
In words, the coordinates come in blocks of size $\sqrt n$.
While the values of all the coordinates in one block coincide and have distribution $p$, the coordinates in different blocks are independent.
Although $\mu$ is not a product distribution, (\ref{eqApprox1}) is satisfied for any fixed $\eps>0$ and large enough $n$.
Furthermore, if for a fixed $k>1$ we choose $\vec x_1,\ldots,\vec x_k\in[n]$ uniformly and independently, then
\begin{align}\label{eqApprox2}
\Erw\TV{\mu_{\marg\{\vec x_1,\ldots,\vec x_k\}}-
\mu_{\marg\vec x_1}\otimes\cdots\otimes\mu_{\marg\vec x_k}}&<\eps,
\end{align}
provided that $n>n_1(\eps,k,\Omega)$ is sufficiently large.
This is because for large enough $n$ it is unlikely that two of the randomly chosen $\vec x_1,\ldots,\vec x_k$ belong to the same block.
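The heuristic in the last sentence is elementary to quantify: two independent uniform coordinates collide in one of the $\sqrt n$ blocks with probability exactly $1/\sqrt n$. A trivial Python check (the function name is ours):

```python
import math

def same_block_prob(n):
    """P(two independent uniform coordinates of [n] land in the same block),
    when [n] splits into sqrt(n) blocks of size sqrt(n) each (n a square).
    The first coordinate is arbitrary; the second hits its block
    with probability sqrt(n)/n = 1/sqrt(n)."""
    m = math.isqrt(n)  # number of blocks = block size = sqrt(n)
    return m / n

for n in (100, 10000, 1000000):
    print(n, same_block_prob(n))  # 0.1, 0.01, 0.001
```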
As a third example, consider the set $\Omega=\{0,1\}$ and the measure $\mu$ defined by
\begin{align*}
\mu^{(0)}(\omega_1,\ldots,\omega_n)&=\bcfr13^{\sum_{i=1}^n\omega_i}\bcfr23^{n-\sum_{i=1}^n\omega_i},&
\mu^{(1)}(\omega_1,\ldots,\omega_n)&=\bcfr23^{\sum_{i=1}^n\omega_i}\bcfr13^{n-\sum_{i=1}^n\omega_i},&
\mu&=\frac12(\mu^{(0)}+\mu^{(1)}).
\end{align*}
All the marginals $\mu_{\marg i}$, $i\in[n]$, are equal to the uniform distribution on $\{0,1\}$.
But of course the uniform distribution on $\Omega^n$ is a horrible approximation to $\mu$.
Indeed, by the Chernoff bound with overwhelming probability a point $(\omega_1,\ldots,\omega_n)$ drawn from $\mu$
either satisfies $\frac1n\sum_{i=1}^n\omega_i\sim1/3$ or $\frac1n\sum_{i=1}^n\omega_i\sim2/3$.
However, the {\em conditional} distribution given, say, $\frac1n\sum_{i=1}^n\omega_i\leq1/2$, is close to a product measure.
Thus, $\mu$ induces a decomposition of $\Omega^n$ into two ``states''
$S_0=\{\frac1n\sum_{i=1}^n\omega_i\leq1/2\}$, $S_1=\{\frac1n\sum_{i=1}^n\omega_i>1/2\}$ such that
$\mu[\nix|S_0]$, $\mu[\nix|S_1]$ are close to product measures.
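This two-state picture is easy to observe empirically (a Python simulation under our own arbitrary choices of $n$, trial count and tolerance):

```python
import random

random.seed(1)
n, trials = 3000, 200
means = []
for _ in range(trials):
    # Draw from mu = (mu0 + mu1)/2: first pick the component, then iid bits.
    q = 1 / 3 if random.random() < 0.5 else 2 / 3
    sigma = [1 if random.random() < q else 0 for _ in range(n)]
    means.append(sum(sigma) / n)

# Every sample concentrates near 1/3 or near 2/3 and never near the
# overall mean 1/2; this is the two-"state" decomposition S0, S1.
print(all(abs(m - 1 / 3) < 0.05 or abs(m - 2 / 3) < 0.05 for m in means))
```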
As a final example, consider $\Omega=\{0,1\}$, assume that $n$ is even and define $\mu\in\cP(\Omega^n)$ by letting
\begin{align*}
\mu(\omega_1,\ldots,\omega_n)&=\bcfr12^{n/2}\bcfr{1}{3}^{\sum_{i>n/2}\omega_i}\bcfr{2}{3}^{n/2-\sum_{i>n/2}\omega_i}.
\end{align*}
In words, $\mu$ is a product measure with marginal distribution ${\rm Be}(1/2)$ on the first $n/2$ coordinates and ${\rm Be}(1/3)$ on the other coordinates.
Clearly, $\mu$ satisfies (\ref{eqApprox1}) with $p={\rm Be}(1/2)$ for sets $S\subset[n/2]$ and with $p={\rm Be}(1/3)$ for sets $S\subset[n]\setminus [n/2]$,
provided that $n$ is large.
In summary, the following picture emerges.
The conditions (\ref{eqApprox1}) and (\ref{eqApprox2}) are proxies for saying that a given measure $\mu$ resembles a product measure.
Furthermore, in order to obtain from a given $\mu$ measures that satisfy (\ref{eqApprox1}) or (\ref{eqApprox2})
it may be necessary to decompose the space $\Omega^n$ into ``states'' so that the conditional distributions have these properties.
In addition, because different coordinates may have different marginal distributions, for (\ref{eqApprox1}) to hold it may be necessary to partition
the set $[n]$ of coordinates.
\subsection{Homogeneity}
The main result of this section shows that by partitioning the space $\Omega^n$ and/or the set $[n]$ of coordinates it is always
possible to ``approximate'' a given measure $\mu$ by measures that satisfy (\ref{eqApprox1})
for some suitable $p$ as well as (\ref{eqApprox2}).
In fact, the number of parts that we have to partition $[n]$ and $\Omega^n$ into is bounded only in terms of the desired accuracy
but independently of $n$.
Let us introduce some terminology.
If $\vec V=(V_1,\ldots,V_k)$ is a partition of some set $V$, then we call $\#\vec V=k$ the \bemph{size} of $\vec V$.
Furthermore, a partition $\vec W=(W_1,\ldots,W_l)$ \bemph{refines} another partition $\vec V=(V_1,\ldots,V_k)$
if for each $i\in[l]$ there is $j\in[k]$ such that $W_i\subset V_j$.
For $\eps>0$ we say that $\mu\in\cP(\Omega^n)$ is \bemph{$\eps$-regular} on a set $U\subset[n]$ if
for every subset $S\subset U$ of size $|S|\geq\eps|U|$ we have
$$\bck{\TV{\SIGMA[\nix|S]-\SIGMA[\nix|U]}}_{\mu}<\eps.$$
Further, $\mu$ is \bemph{$\eps$-regular} with respect to a partition $\vec V$ if
there is a set $J\subset[\#\vV]$ such that $\sum_{i\in[\#\vV]\setminus J}|V_i|<\eps n$ and such that $\mu$ is $\eps$-regular on $V_i$ for all $i\in J$.
Additionally, if $\vec V$ is a partition of $[n]$ and $\vec S$ is a partition of $\Omega^n$, then
we say that $\mu$ is \bemph{$\eps$-homogeneous} with respect to $(\vec V,\vec S)$ if there is a subset $I\subset[\#\vec S]$ such that the following is true.
\begin{description}
\item[HM1] We have $\mu(S_i)>0$ for all $i\in I$ and $\sum_{i\in[\#\vS]\setminus I}\mu(S_i)<\eps$.
\item[HM2] For all $i\in[\#\vec S]$ and $j\in[\#\vec V]$ we have
$\max_{\sigma,\sigma'\in S_i}\TV{\sigma[\nix|V_j]-\sigma'[\nix|V_j]}<\eps.$
\item[HM3] For all $i\in I$ the measure $\mu[\nix|S_i]$ is $\eps$-regular with respect to $\vec V$.
\item[HM4] $\mu$ is $\eps$-regular with respect to $\vV$.
\end{description}
\begin{theorem}\label{Thm_decomp}
For any $\eps>0$ there exists $N=N(\eps,\Omega)>0$ such that for every $n>N$, any measure $\mu\in\cP(\Omega^n)$
and any partition $\vV_0$ of $[n]$ of size $\#\vV_0\leq1/\eps$ the following is true.
There exist a refinement $\vV$ of $\vV_0$ and a partition $\vS$ of $\Omega^n$ such that
$\#\vec V+\#\vec S\leq N$ and such that
$\mu$ is $\eps$-homogeneous with respect to $(\vec V,\vec S)$.
\end{theorem}
Informally speaking, \Thm~\ref{Thm_decomp} shows that any probability measure $\mu\in\cP(\Omega^n)$ admits a partition $(\vec V,\vec S)$
such that the following is true.
Almost the entire probability mass of $\mu$ belongs to parts $S_i$ such that the conditional measure $\mu[\nix|S_i]$ is $\eps$-regular w.r.t.\ $\vec V$.
This means that almost every coordinate $x\in[n]$ belongs to a class $V_j$ with the following property:
for every ``large'' subset $U\subset V_j$ and for $\SIGMA$ chosen from $\mu[\nix|S_i]$,
with high probability the empirical distribution $\SIGMA[\nix|U]$ is close to the
marginal distribution $\bck{\SIGMA[\nix|V_j]}_{\mu[\nix|S_i]}$ of the entire class.
\Thm~\ref{Thm_decomp} and its proof, which we defer to \Sec~\ref{Sec_decomp}, are inspired by Szemer\'edi's regularity lemma~\cite{Szemeredi}.
Let us proceed to state a few consequences of \Thm~\ref{Thm_decomp}.
A \bemph{$(\eps,k)$-state} of $\mu$ is a set $S\subset\Omega^n$ such that $\mu(S)>0$ and
\begin{align*}
\frac1{n^k}\sum_{x_1,\ldots,x_k\in[n]}\TV{\mu_{\downarrow\{x_1,\ldots,x_k\}}[\nix|S]-\mu_{\downarrow x_1}[\nix|S]\otimes
\cdots\otimes \mu_{\downarrow x_k}[\nix|S]}<\eps.
\end{align*}
In other words, if we choose $\vec x_1,\ldots,\vec x_k\in[n]$ independently and uniformly at random, then the expected total variation distance
between the joint distribution $\mu_{\downarrow\{\vec x_1,\ldots,\vec x_k\}}[\nix|S]$ of $\vec x_1,\ldots,\vec x_k$
and the product
$\mu_{\downarrow\vec x_1}[\nix|S]\otimes\cdots\otimes \mu_{\downarrow \vec x_k}[\nix|S]$
of the marginal distributions is small.
\begin{corollary}\label{Thm_states}
For any $\eps>0$, $k\geq2$ there exists $\eta=\eta(\eps,k,\Omega)>0$ such that for every $n>1/\eta$ any measure $\mu\in\cP(\Omega^n)$
has pairwise disjoint $(\eps,k)$-states $S_1,\ldots,S_N$ such that
$\mu(S_i)\geq\eta$ for all $i\in[N]$ and $\sum_{i=1}^N\mu(S_i)\geq1-\eps$.
\end{corollary}
\noindent
Thus, we can chop the space $\Omega^n$ into subsets $S_1,\ldots,S_N$, $N\leq1/\eta$, that capture almost the entire probability mass
such that $\mu[\nix|S_i]$ ``resembles a product measure'' for each $i\in[N]$.
We prove \Cor~\ref{Thm_states} in \Sec~\ref{Sec_states}.
Let us call $\mu$ \bemph{$(\eps,k)$-symmetric} if $S=\Omega^n$ itself is an $(\eps,k)$-state.
\begin{corollary}\label{Cor_states}
For any $\eps,k$ there exist $\delta,\eta>0$ such that for all $n>1/\eta$ and all $\mu\in\cP(\Omega^n)$ the following is true.
If for any two $(\delta,k)$-states $S_1,S_2$ with $\mu(S_1),\mu(S_2)\geq\eta$ we have
\begin{equation}\label{eqCor_statesAssumption}
\frac1n\sum_{x\in[n]}\TV{\mu_{\downarrow x}[\nix|S_1]-\mu_{\downarrow x}[\nix|S_2]}<\delta,
\end{equation}
then $\mu$ is $(\eps,k)$-symmetric.
\end{corollary}
\noindent
Thus, the entire measure $\mu$ ``resembles a product measure'' if extensive states have similar marginal distributions.
Conversely, we have the following.
\begin{corollary}\label{Cor_states2}
For any $\eps>0$ there is $\gamma >0$ such that for any $\eta>0$ there exists $\delta>0$ such that for all $n>1/\delta$ and all $\mu\in\cP(\Omega^n)$ the following is true.
If $\mu$ is $(\delta,2)$-symmetric, then for any $(\gamma,2)$-state $S$ with $\mu(S)\geq\eta$ we have
$$\frac1n\sum_{x\in[n]}\TV{\mu_{\downarrow x}[\nix|S]-\mu_{\downarrow x}}<\eps.$$
\end{corollary}
The proofs of Corollaries~\ref{Cor_states} and~\ref{Cor_states2} can be found in \Sec s~\ref{Sec_Cor_states} and~\ref{Sec_Cor_states2}, respectively.
Finally, in \Sec~\ref{Sec_Prop_tensorise} we prove the following fact that will be useful in \Sec~\ref{Sec_Bethe}.
\begin{proposition}\label{Prop_tensorise}
For any $\eps>0$ there exists $\delta>0$ such that for large enough $n$ the following is true.
If $\mu\in\cP(\Omega^n)$ is $(\delta,2)$-symmetric, then $\mu\otimes\mu\in\cP(\Omega^n\times\Omega^n)$ is $(\eps,2)$-symmetric.
\end{proposition}
\subsection{Proof of \Thm~\ref{Thm_decomp}}\label{Sec_decomp}
Throughout this section we assume that $n$ is sufficiently large.
To prove \Thm~\ref{Thm_decomp} and guided by~\cite{Szemeredi}, we define the \bemph{index} of $\mu$ with respect to a partition $\vec V$ of $[n]$ as
$$\ind_\mu(\vec V)=\frac{1}{|\Omega|n}\sum_{\omega\in\Omega}\sum_{j\in[\#\vec V]}\sum_{x\in V_j}
\bck{(\SIGMA[\omega|x]-\SIGMA[\omega|V_j])^2}_\mu.$$
The index can be viewed as a conditional variance (cf.\ \cite{Tao}).
Indeed, choose $\vec x\in[n]$ uniformly and independently of $\SIGMA$.
Furthermore, let $\cF_{\vV}$ be the $\sigma$-algebra generated by
the events $\{\vec x\in V_i\}$ for $i\in[\#\vV]$.
Writing $\Erw[\nix]$ and $\Var[\nix]$ for the expectation and variance with respect to the choice of $\vec x$ only, we see that
$$\ind_\mu(\vV)=\frac1{|\Omega|}\sum_{\omega\in\Omega}\Erw\bck{\Var[\SIGMA[\omega|\vec x]|\cF_{\vV}]}_\mu.$$
\begin{lemma}\label{Lemma_refinement}
For any partition $\vec V$ of $[n]$ we have $\ind_\mu(\vec V)\in[0,1]$.
If $\vec W$ is a refinement of $\vec V$, then $\ind_\mu(\vec W)\leq\ind_\mu(\vec V)$.
\end{lemma}
\begin{proof}
The fact that $\ind_\mu(\vec V)\in[0,1]$ is immediate from the definition.
Moreover, if $\vW$ refines $\vV$, then $\cF_{\vV}\subset\cF_{\vW}$.
Consequently,
$\Erw\bck{\Var[\SIGMA[\omega|\vec x]|\cF_{\vW}]}_\mu\leq\Erw\bck{\Var[\SIGMA[\omega|\vec x]|\cF_{\vV}]}_\mu$.
Averaging over $\omega\in\Omega$ yields $\ind_\mu(\vec W)\leq\ind_\mu(\vec V)$.
\end{proof}
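Both the definition of the index and the monotonicity statement just proved can be checked numerically. The following Python sketch (all names are ours) estimates $\ind_\mu(\vV)$ from samples and verifies that refining a partition never increases it; note that the within-block variance decomposition in fact holds sample by sample:

```python
import random

def index_estimate(samples, partition, Omega):
    """Monte Carlo estimate of ind_mu(V): the sample average of
    (1/(|Omega| n)) * sum_omega sum_j sum_{x in V_j}
        (1{sigma(x) = omega} - sigma[omega|V_j])**2."""
    n = len(samples[0])
    total = 0.0
    for sigma in samples:
        for block in partition:
            for omega in Omega:
                f = sum(1 for x in block if sigma[x] == omega) / len(block)
                total += sum((float(sigma[x] == omega) - f) ** 2
                             for x in block)
    return total / (len(samples) * len(Omega) * n)

random.seed(2)
n = 1000
samples = [[random.randint(0, 1) for _ in range(n)] for _ in range(50)]
coarse = index_estimate(samples, [list(range(n))], [0, 1])
fine = index_estimate(samples,
                      [list(range(n // 2)), list(range(n // 2, n))], [0, 1])
# For iid uniform bits the index is close to 1/4, and refining the
# partition can only decrease it.
print(round(coarse, 2), fine <= coarse + 1e-12)
```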
\begin{lemma}\label{Lemma_homreg}
If $\mu\in\cP(\Omega^n)$ fails to be $\eps$-regular with respect to $\vec V$, then
there is a refinement $\vec W$ of $\vec V$ such that $\#\vec W\leq2\#\vec V$ and
$\ind_\mu(\vec W)\leq\ind_\mu(\vec V)-\eps^4/(4|\Omega|^3).$
\end{lemma}
\begin{proof}
Let $\bar J$ be the set of all indices $j\in[\#\vec V]$ such that
there exists $S\subset V_j$ of size $|S|\geq\eps|V_j|$ such that
\begin{equation}\label{eqClaim_indinc1}
\bck{\TV{\SIGMA[\nix|S]-\SIGMA[\nix|V_j]}}_\mu\geq\eps.
\end{equation}
Since $\mu$ fails to be $\eps$-regular with respect to $\vV$ we have
\begin{equation}\label{eqClaim_indinc2}
\sum_{j\in \bar J}|V_j|\geq\eps n.
\end{equation}
For each $j\in\bar J$ pick a set $S_j\subset V_j$, $|S_j|\geq\eps|V_j|$ such that (\ref{eqClaim_indinc1}) is satisfied.
Then there exists $\omega_j\in\Omega$ such that
\begin{equation}\label{eqClaim_indinc99}
\bck{\abs{\SIGMA[\omega_j|S_j]-\SIGMA[\omega_j|V_j]}}_\mu\geq\eps/(2\abs\Omega).
\end{equation}
Let $\vW$ be the partition obtained from $\vV$ by splitting each class $V_j$, $j\in\bar J$, into the sub-classes $S_j,V_j\setminus S_j$.
Clearly, $\#\vW\leq2\#\vV$.
Furthermore,
\begin{align}\nonumber
\ind_\mu(\vV)&=\frac1{|\Omega|}\sum_{\omega\in\Omega}\Erw\bck{\Var[\SIGMA[\omega|\vec x]|\cF_{\vV}]}_\mu=
\frac1{|\Omega|}\sum_{\omega\in\Omega}\bc{\Erw\bck{\Var[\SIGMA[\omega|\vec x]|\cF_{\vW}]}_\mu
+\Erw\bck{\Var[\Erw[\SIGMA[\omega|\vec x]|\cF_{\vW}]|\cF_{\vV}]}_\mu}\\
&=\ind_{\mu}(\vW)+\frac1{|\Omega|}\sum_{\omega\in\Omega}\Erw\bck{\Var[\Erw[\SIGMA[\omega|\vec x]|\cF_{\vW}]|\cF_{\vV}]}_\mu.
\label{eqClaim_indinc666}
\end{align}
If $j\in\bar J$ then (\ref{eqClaim_indinc99}) implies that on $V_j$ we have
\begin{align} \label{eqClaim_indinc667}
\bck{\Var[\Erw[\SIGMA[\omega_j|\vec x]|\cF_{\vW}]|\cF_{\vV}]}_\mu&\geq
\frac{|S_j|}{|V_j|}\bck{(\SIGMA[\omega_j|S_j]-\SIGMA[\omega_j|V_j])^2}_\mu\geq\frac{\eps^3}{4|\Omega|^2}.
\end{align}
Hence, combining (\ref{eqClaim_indinc2}) and (\ref{eqClaim_indinc667}), we find
\begin{align} \label{eqClaim_indinc668}
\frac1{|\Omega|}\sum_{\omega\in\Omega}\Erw\bck{\Var[\Erw[\SIGMA[\omega|\vec x]|\cF_{\vW}]|\cF_{\vV}]}_\mu&\geq\frac{\eps^4}{4|\Omega|^3}.
\end{align}
Finally, the assertion follows from (\ref{eqClaim_indinc666}) and (\ref{eqClaim_indinc668}).
\end{proof}
\begin{proof}[Proof of \Thm~\ref{Thm_decomp}]
The set $\cP(\Omega)$ is compact.
Therefore, there exists a partition $\vec Q=(Q_1,\ldots,Q_K)$ of $\cP(\Omega)$ into pairwise disjoint sets
such that for all $i\in[K]$ and any two measures $\mu,\mu'\in Q_i$ we have $\TV{\mu-\mu'}<\eps$.
Given any partition $\vec W$ of $[n]$, we can construct a corresponding decomposition $\vec S(\vec W)$ of $\Omega^n$ as follows.
Call $\sigma,\sigma'\in\Omega^n$ $\vec W$-equivalent if for every $i\in[\#\vec W]$ there exists $j\in[\#\vec Q]$ such that
$\sigma[\nix|W_i],\sigma'[\nix|W_i]\in Q_j$.
Then $\vec S(\vec W)$ consists of the equivalence classes.
We construct the desired partition $\vec V$ of $[n]$ inductively, starting from any given partition $\vec V(0)$ of size at most $1/\eps$.
The construction stops once $\mu$ is $\eps$-homogeneous with respect to $(\vec V(t),\vec S(\vec V(t)))$.
Assuming that this is not the case, we obtain $\vec V({t+1})$ from $\vec V(t)$ as follows.
If $\mu$ fails to be $\eps$-regular with respect to $\vV(t)$, then we let $\vV({t+1})$ be the partition promised by \Lem~\ref{Lemma_homreg},
which guarantees that
\begin{align}\label{eqSimpleStep}
\#\vV(t+1)\leq2\#\vV(t)
\quad\mbox{and}\quad
\ind_\mu(\vV(t+1))\leq\ind_\mu(\vec V(t))-\eps^4/(4|\Omega|^3).
\end{align}
Otherwise let $\vec S(t)=\vec S(\vec V(t))$ and $s(t)=\#\vec S(t)$ for the sake of brevity.
Further, let $\mu_{i,t}=\mu[\nix|S_i(t)]$ for $i\in[s(t)]$ with $\mu[S_i(t)]>0$.
Moreover, let $\bar I(t)$ be the set of all $i\in[s(t)]$ such that $\mu[S_i(t)]>0$ and $\mu_{i,t}$ fails to be $\eps$-regular with respect to $\vec V(t)$.
If $\mu$ fails to be $\eps$-homogeneous with respect to $(\vec V(t),\vec S(t))$ but $\mu$ is $\eps$-regular w.r.t.\ $\vV(t)$, then
\begin{equation}\label{eqThm_decomp1}
\sum_{i\in\bar I(t)}\mu[S_i(t)]\geq\eps.
\end{equation}
\Lem~\ref{Lemma_homreg} shows that for any $i\in\bar I(t)$ there exists a refinement $\vec W(t,i)$ of $\vec V(t)$ such that
\begin{equation}\label{eqThm_decomp2}
\ind_{\mu_{i,t}}(\vec W(t,i))\leq\ind_{\mu_{i,t}}(\vec V(t))-\eps^4/(4|\Omega|^3).
\end{equation}
Let $\vec V(t+1)$ be the coarsest common refinement of all the partitions $(\vec W(t,i))_{i\in\bar I(t)}$.
Then
\begin{equation}\label{eqThm_decomp3a}
\#\vec V(t+1)\leq\#\vec V(t)\cdot2^{\#\vec Q^{\#\vec V(t)}}.
\end{equation}
In addition, (\ref{eqThm_decomp2}) and \Lem~\ref{Lemma_refinement} imply
\begin{equation}\label{eqThm_decomp3}
\ind_{\mu_{i,t}}(\vec V(t+1))\leq\ind_{\mu_{i,t}}(\vec V(t))-
\vec{1}\{i\in\bar I(t)\}\eps^4/(4|\Omega|^3).
\end{equation}
Therefore, by (\ref{eqThm_decomp1}), (\ref{eqThm_decomp3}) and Bayes' rule
\begin{align}
\ind_\mu(\vec V(t+1))&=\frac1{n|\Omega|}\sum_{\omega\in\Omega}\sum_{j\in[\#\vec V(t+1)]}\sum_{x\in V_j(t+1)}
\bck{(\SIGMA[\omega|x]-\SIGMA[\omega|V_j(t+1)])^2}_\mu\nonumber\\
&=\frac1{n|\Omega|}\sum_{\omega,j,x} \sum_{i\in[s(t)]:\mu[S_i(t)]>0}\mu[S_i(t)]
\bck{(\SIGMA[\omega|x]-\SIGMA[\omega|V_j(t+1)])^2}_{\mu_{i,t}}\nonumber\\
&=\sum_{i:\mu[S_i(t)]>0}\mu[S_i(t)]\ind_{\mu_{i,t}}(\vec V(t+1))\nonumber\\
&\leq-\eps^5/(4|\Omega|^3)+\sum_{i:\mu[S_i(t)]>0}\mu[S_i(t)]\ind_{\mu_{i,t}}(\vec V(t))=\ind_\mu(\vec V(t))-\eps^5/(4|\Omega|^3).
\label{eqThm_decomp4}
\end{align}
Combining (\ref{eqSimpleStep}), (\ref{eqThm_decomp4}) and \Lem~\ref{Lemma_refinement}, we conclude that $\mu$ is $\eps$-homogeneous
with respect to $(\vec V(T),\vec S(T))$ for some $T\leq4|\Omega|^3/\eps^5$.
Finally, (\ref{eqThm_decomp3a}) entails that $\#\vec V(T),\#\vec S(T)$ are bounded in terms of $\eps,\Omega$ only.
\end{proof}
\subsection{Proof of \Cor~\ref{Thm_states}}\label{Sec_states}
To derive \Cor~\ref{Thm_states} from \Thm~\ref{Thm_decomp} we use the following handy sufficient condition for $(\eps,k)$-symmetry.
\begin{lemma}\label{Lemma_regularSymmetric}
For any $k\geq2$, $\eps>0$ there is $\delta=\delta(\eps,k,\Omega)$ such that for large enough $n$ the following is true.
Assume that $\mu\in\cP(\Omega^n)$ is $\delta$-regular with respect to a partition $\vec V$ and
set $\bar\mu_i(\nix)=\bck{\SIGMA[\nix|V_i]}_\mu$ for $i\in[\#\vV]$.
If
\begin{align}\label{eqLemma_regularSymmetric0}
\sum_{i\in[\#\vec V]}\frac{|V_i|}{n}\bck{\TV{\SIGMA[\nix|V_i]-\bar\mu_i}}_\mu<\delta,
\end{align}
then $\mu$ is $(\eps,k)$-symmetric.
\end{lemma}
\begin{proof}
Choose a small $\xi=\xi(\eps,k,\Omega)>0$ and a smaller $\delta=\delta(\xi)>0$.
Then (\ref{eqLemma_regularSymmetric0}) implies that there is $J\subset[\#\vec V]$ satisfying
\begin{equation}\label{eqLemma_regularSymmetric1}
\sum_{j\in J}|V_j|\geq(1-\xi)n
\end{equation}
such that for all $j\in J$, $S\subset V_j$, $|S|\geq\xi|V_j|$ we have
\begin{equation}\label{eqLemma_regularSymmetric2}
\bck{\TV{\SIGMA[\nix|S]-\bar\mu_j}}_\mu\leq\xi.
\end{equation}
In particular, we claim that (\ref{eqLemma_regularSymmetric2}) implies the following (if $\xi$ is small enough):
\begin{equation}\label{eqMarkovAnwenden}
\forall \omega\in\Omega, j\in J,\Sigma\subset\Omega^n:\mu(\Sigma)\geq\xi^{1/4}\Rightarrow
\abs{\cbc{x\in V_j:\abs{\bck{\SIGMA[\omega|x]|\Sigma}_\mu-\bar\mu_j(\omega)}>\xi^{1/4}}}\leq\xi^{1/4}|V_j|.
\end{equation}
Indeed, assume that $\bck{\vec{1}\{\SIGMA\in\Sigma\}}_\mu\geq\xi^{1/4}$ and
$\abs{\cbc{x\in V_j:\abs{\bck{\SIGMA[\omega_0|x]|\Sigma}_\mu-\bar\mu_j(\omega_0)}>\xi^{1/4}}}>\xi^{1/4}|V_j|$ for some $\omega_0\in\Omega$.
Then because $\bck{\SIGMA[\nix|x]|\Sigma}_\mu$ is a probability measure on $\Omega$ for every $x$, there exists $\omega\in\Omega$ such that
the set $S=\cbc{x\in V_j:\bck{\SIGMA[\omega|x]|\Sigma}_\mu<\bar\mu_j(\omega)-\xi^{1/4}/|\Omega|}$ has size $|S|>\xi^{1/4}|V_j|/(2|\Omega|)$.
In particular,
$\bck{\SIGMA[\omega|S]|\Sigma}_\mu\leq \bar\mu_j(\omega)-\xi^{1/4}/|\Omega|.$
Therefore, by Markov's inequality
$$\bck{\vec{1}\{\SIGMA[\omega|S]\geq \bar\mu_j(\omega)-\xi^{1/3}\}|\Sigma}_\mu\leq\frac{\bar\mu_j(\omega)-\xi^{1/4}/|\Omega|}{\bar\mu_j(\omega)-\xi^{1/3}}
\leq\frac{1-\xi^{1/4}/|\Omega|}{1-\xi^{1/3}}\leq1-\xi^{1/4}/(2|\Omega|).$$
Consequently, we obtain
$$\bck{\TV{\SIGMA[\nix|S]-\bar\mu_j}}_\mu\geq\xi^{1/3+1/4}\bck{\vec{1}\{\SIGMA\in\Sigma\}}_\mu/(2|\Omega|)\geq\xi^{7/8}.$$
Since $|S|>\xi^{1/4}|V_j|/(2|\Omega|)>\xi|V_j|$, this is a contradiction to (\ref{eqLemma_regularSymmetric2}).
Now,
fix any $\omega_1,\ldots,\omega_k\in\Omega$ and
let $\vec x_1,\ldots,\vec x_k\in[n]$ be chosen independently and uniformly at random.
Let $\Sigma_h=\Sigma_h(\vec x_1,\ldots,\vec x_h)\subset\Omega^n$ be the event that $\SIGMA(\vec x_i)=\omega_i$ for all $i\leq h$.
We are going to show that for $0\leq h<k$,
\begin{equation}\label{eqLemma_regularSymmetric3}
\Erw\brk{\mu(\Sigma_h)\abs{\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]|\Sigma_h}_\mu
-\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]}_\mu}}<\xi^{1/5}.
\end{equation}
In the case $h=0$ there is nothing to show.
As for the inductive step, condition on $\vec x_1,\ldots,\vec x_{h}$.
\begin{description}
\item[Case 1: $\mu(\Sigma_h)\leq\xi^{1/4}$]
regardless of the choice of $\vec x_{h+1}$ we have
\begin{align*}
\mu(\Sigma_h)\abs{\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]|\Sigma_h}_\mu
-\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]}_\mu}\leq\xi^{1/4}.
\end{align*}
\item[Case 2: $\mu(\Sigma_h)>\xi^{1/4}$]
due to (\ref{eqLemma_regularSymmetric1}) with probability at least $1-2\xi$ we have
$\vec x_{h+1}\in V_j\setminus\{\vec x_1,\ldots,\vec x_h\}$ for some $j\in J$.
Hence, (\ref{eqMarkovAnwenden}) implies
$\Erw_{\vec x_{h+1}} \left[ \abs{\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]|\Sigma_h}_\mu-\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]}_\mu}\right]
\leq\xi^{1/4}.$
\end{description}
Hence, (\ref{eqLemma_regularSymmetric3}) follows.
To complete the proof, we are going to show by induction on $h\in[k]$ that
\begin{align}\label{eqNasty1}
\Erw\abs{\bck{\prod_{i=1}^h\SIGMA[\omega_i|\vec x_i]}_\mu-\prod_{i=1}^h\bck{\SIGMA[\omega_i| \vec x_i]}_\mu}&\leq h \xi^{1/5}.
\end{align}
For $h=1$ there is nothing to show.
To proceed from $h$ to $h+1$ we use the triangle inequality to write
\begin{align*}
\Erw\brk{
\abs{\bck{\prod_{i=1}^{h+1}\SIGMA[\omega_i|\vec x_i]}_\mu-\prod_{i=1}^{h+1}\bck{\SIGMA[\omega_i|\vec x_i]}_\mu}}
\leq & \Erw\brk{\mu(\Sigma_h)\abs{\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]|\Sigma_h}_\mu
-\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]}_\mu}}
\\ &+ \Erw\brk{
\bck{\SIGMA[\omega_{h+1}|\vec x_{h+1}]}_\mu \abs{ \bck{\prod_{i=1}^{h}\SIGMA[\omega_i|\vec x_i]}_\mu-\prod_{i=1}^{h}\bck{\SIGMA[\omega_i|\vec x_i]}_\mu}}.
\end{align*}
Invoking the induction hypothesis and (\ref{eqLemma_regularSymmetric3}) completes the proof.
\end{proof}
\begin{proof}[Proof of \Cor~\ref{Thm_states}]
For a small enough $\delta=\delta(\eps,k)>0$ let $(\vV,\vS)$
be a pair of partitions of size at most $N=N(\delta,\Omega)$ such that $\mu$ is $\delta/2$-homogeneous with respect to $(\vV,\vS)$ as guaranteed by \Thm~\ref{Thm_decomp}.
Let $\eta=\eps/(2N)$ and let $J$ be the set of all $j\in[\#\vS]$ such that $\mu(S_j)\geq\eta$ and such that $\mu[\nix|S_j]$ is $\delta$-regular with respect to $\vV$.
Then
\begin{align*}
\sum_{j\in[\#\vS]\setminus J}\mu(S_j)&\leq\delta+\eps/2<\eps.
\end{align*}
Furthermore, for every $j\in J$ the measure $\mu[\nix|S_j]$ satisfies (\ref{eqLemma_regularSymmetric0}) due to {\bf HM2}.
Therefore, \Lem~\ref{Lemma_regularSymmetric} implies that $\mu[\nix|S_j]$ is $(\eps,k)$-symmetric.
Consequently, the sets $(S_j)_{j\in J}$ are pairwise disjoint $(\eps,k)$-states with
$\mu(S_j)\geq\eta$ for all $j\in J$ and
$\sum_{j\in J}\mu(S_j)\geq1-\eps$.
\end{proof}
\subsection{Proof of \Cor~\ref{Cor_states}}\label{Sec_Cor_states}
Pick small enough $\delta=\delta(\eps,k,\Omega),\gamma=\gamma(\delta),\eta=\eta(\gamma)>0$.
Then by \Thm~\ref{Thm_decomp} $\mu$ is $\gamma$-homogeneous with respect to a pair of partitions $(\vV,\vS)$ that satisfy $\#\vV+\#\vS\leq N=N(\gamma)$.
Let $J\subset[\#\vS]$ contain all $j$ such that $\mu[\nix|S_j]$ is $\gamma$-regular with respect to $\vV$ and such that $\mu(S_j)\geq\eta$.
Let $\bar\mu_{i,j}=\bck{\SIGMA[\nix|V_i]}_{\mu[\nix|S_j]}$.
Then by {\bf HM2} for every $j\in J$ we have
\begin{align*}
\frac1{n}\sum_{i\in[\#\vec V]}|V_i|\bck{\TV{\SIGMA[\nix|V_i]-\bar\mu_{i,j}}}_{\mu[\nix|S_j]}<3\gamma.
\end{align*}
Therefore, \Lem~\ref{Lemma_regularSymmetric} implies that $S_j$ is a $(\delta,2)$-state.
Consequently, our assumption (\ref{eqCor_statesAssumption}) and the triangle inequality entail that for all $j,j'\in J$,
\begin{align}\label{eqInTheMiddle}
\sum_{i\in[\#\vV]}\frac{|V_i|}n\TV{\bck{\SIGMA[\nix|V_i]}_{\mu[\nix|S_j]}-\bck{\SIGMA[\nix|V_i]}_{\mu[\nix|S_{j'}]}}&<\delta.
\end{align}
Choosing $\eta$ small, we can ensure that $\sum_{j\not\in J}\mu(S_j)\leq\delta$.
Therefore, letting $\bar\mu_i=\bck{\SIGMA[\nix|V_i]}_\mu$, we obtain from (\ref{eqInTheMiddle})
\begin{align}
\sum_{i\in[\#\vV]}\frac{|V_i|}n\bck{\TV{\SIGMA[\nix|V_i]-\bar\mu_i}}_{\mu}&
\leq\delta+
\sum_{i\in[\#\vV]}\frac{|V_i|}n\sum_{j\in J}\mu(S_j)\bck{\TV{\SIGMA[\nix|V_i]-\bar\mu_i}}_{\mu[\nix|S_j]}\nonumber\\
&\leq2\delta+\sum_{i\in[\#\vV]}\frac{|V_i|}n\sum_{j\in J}\mu(S_j)\TV{\bck{\SIGMA[\nix|V_i]}_{\mu[\nix|S_j]}-\bar\mu_i}
\qquad\mbox{[by {\bf HM2}]}\nonumber\\
&\leq5\delta.\label{eqInTheEnd}
\end{align}
Since $\mu$ is $\gamma$-regular and thus $5\delta$-regular w.r.t.\ $\vV$ by {\bf HM4}, (\ref{eqInTheEnd}) and
\Lem~\ref{Lemma_regularSymmetric} imply that $\mu$ is $(\eps,k)$-symmetric.
\subsection{Proof of \Cor~\ref{Cor_states2}}\label{Sec_Cor_states2}
Choose a small $\gamma = \gamma(\eps,\Omega)$ and a smaller $\delta = \delta(\gamma,\eta)$.
Assume that $S$ is a $(\gamma,2)$-state with $\mu(S) \geq \eta$ and that $\mu$ is $(\delta,2)$-symmetric.
Assume for contradiction that
\begin{equation}\label{eqCor_states2_proof1}
\frac1n\sum_{x\in[n]}\TV{\mu_{\downarrow x}[\nix|S]-\mu_{\downarrow x}}>\eps.
\end{equation}
Let
\begin{align*}
W&= \left \{x\in[n]: \TV{\mu_{\downarrow x}[\nix|S]-\mu_{\downarrow x}} \geq \eps/2 \right\}&&\mbox{and}\\
W_{s}(\omega)&= \left \{x \in W: s\cdot\left( \mu_{\marg x}[\omega|S] - \mu_{\marg x}[\omega] \right) \geq {\eps}/(2|\Omega|) \right \}
&&\mbox{for $\omega\in\Omega$, $s\in\{\pm1\}$.}
\end{align*}
Then (\ref{eqCor_states2_proof1}) entails that $|W| \geq \eps n/2$.
Therefore, there is $\omega \in \Omega$ such that $|W_{s}(\omega)| \geq \eps n/(2|\Omega|)$ for either $s=+1$ or $s=-1$.
Let $W' = W_{s}(\omega)$ for the sake of brevity.
Of course, by the definition of $W'$,
\begin{equation}\label{eq_proof_cor_states2_v_0}
\bc{\bck{\SIGMA[\omega|W']}_{\mu[\nix|S]}-\bck{\SIGMA[\omega|W']}_{\mu}}^2\geq\frac{\eps^2}{4|\Omega|^2}.
\end{equation}
Moreover, because $S$ is a $(\gamma,2)$-state, the measure $\mu[\nix|S]$ is $(\gamma,2)$-symmetric.
Therefore,
\begin{align}\nonumber
\bck{ \bc{\SIGMA[\omega|W']-\bck{\TAU[\omega|W']}_{\mu[\nix|S]}}^2}_{\mu[\nix|S]} &=
\frac{1}{|W'|^2} \sum_{x,y\in W'}\brk{\bck{\SIGMA[\omega|x]\SIGMA[\omega|y]}_{\mu[\nix|S]}-
\bck{\TAU[\omega|x]}_{\mu[\nix|S]}\bck{\TAU[\omega|y]}_{\mu[\nix|S]}}\\
&\leq \frac{4 \gamma|\Omega|^2}{\eps^2}\qquad\mbox{[as $|W'|\geq\eps n/(2|\Omega|)$]}.
\label{eq_proof_cor_states2_v_2}
\end{align}
Similarly, since $\mu$ is $(\delta,2)$-symmetric,
\begin{align}
\bck{\bc{\SIGMA[\omega|W']-\bck{\TAU[\omega|W']}_\mu}^2}_{\mu} &=
\frac{1}{|W'|^2} \sum_{x,y\in W'}
\brk{\bck{\SIGMA[\omega|x]\SIGMA[\omega|y]}_{\mu}-
\bck{\TAU[\omega|x]}_{\mu}\bck{\TAU[\omega|y]}_{\mu}}
\leq \frac{4 \delta|\Omega|^2}{\eps^2}.
\label{eq_proof_cor_states2_v_1}
\end{align}
On the other hand we have
\begin{align} \nonumber
\bck{\bc{\SIGMA[\omega|W']-\bck{\TAU[\omega|W']}_\mu}^2}_{\mu}
&\geq \mu(S) \bck{\left. \bc{\SIGMA[\omega|W']-\bck{\TAU[\omega|W']}_\mu}^2 \right.}_{\mu[\nix|S]} \\
&\hspace{-3cm} \geq \mu(S) \bc{ \frac{1}{2} \bc{\bck{\TAU[\omega|W']}_{\mu[\nix|S]}-\bck{\TAU[\omega|W']}_{\mu}}^2
-\bck{\left. \bc{\SIGMA[\omega|W']-\bck{\TAU[\omega|W']}_{\mu[\nix|S]}}^2 \right. }_{\mu[\nix|S]} }.
\label{eq_proof_cor_states2_v_3}
\end{align}
Finally, plugging (\ref{eq_proof_cor_states2_v_0}), (\ref{eq_proof_cor_states2_v_1}) and (\ref{eq_proof_cor_states2_v_2}) into
(\ref{eq_proof_cor_states2_v_3}),
we find
\begin{align*}
\frac{4\delta|\Omega|^2}{\eps^2}&\geq\eta\brk{\frac{\eps^2}{8|\Omega|^2}-\frac{4\gamma|\Omega|^2}{\eps^2}},
\end{align*}
which is a contradiction if $\delta$ is chosen small enough.
\subsection{Proof of \Prop~\ref{Prop_tensorise}}\label{Sec_Prop_tensorise}
Choose small enough $\alpha=\alpha(\eps,\Omega)$, $\gamma=\gamma(\alpha)>0$, $\chi = \chi(\gamma)>0$ and an even smaller $\delta=\delta(\gamma,\chi)>0$ and assume that $\mu$ is
$(\delta,2)$-symmetric.
Suppose that $\mu$ is $\chi$-homogeneous with respect to a pair of partitions $(\vV,\vS)$ such that $\#\vV+\#\vS\leq N=N(\gamma)$
as promised by \Thm~\ref{Thm_decomp}.
Let $J$ be the set of all $j\in[\#\vS]$ such that $\mu(S_j)\geq\gamma^2/N$.
Moreover, let $I$ be the set of all $i\in[\#\vV]$ such that $\mu$ is $\chi$-regular on $V_i$ and $|V_i|\geq\gamma n/N$.
By \Cor~\ref{Cor_states2} we have
\begin{align*}
\frac1{|V_i|}\sum_{x\in V_i}\TV{\mu_{\marg x}[\nix|S_j]-\mu_{\marg x}[\nix]}&<\gamma&\mbox{for all }i\in I,j\in J,
\end{align*}
provided that $\delta$ is chosen small enough.
Therefore, letting $\bar\mu_i=\bck{\SIGMA[\nix|V_i]}_\mu$, for all $i\in I$ we have
\begin{align}\label{eqProp_tensorise1}
\bck{\TV{\SIGMA[\nix|V_i]-\bar\mu_i}}_\mu&<2\gamma.
\end{align}
Fix some $i\in I$.
We claim that $\mu\otimes\mu$ is $\alpha$-regular on $V_i$.
To verify this claim, let $U\subset V_i$ be a set of size $|U|\geq\alpha|V_i|$ and let
\begin{align*}
{\mathcal E}=\cbc{\TV{\SIGMA[\nix|U]-\bar\mu_i}\leq\gamma^{1/3}}.
\end{align*}
Then (\ref{eqProp_tensorise1}) implies that $\bck{\vec{1}\{\SIGMA\not\in{\mathcal E}\}}_\mu<\gamma^{1/3}$,
because $\mu$ is $\gamma$-regular on $V_i$.
Now, fix some $\sigma\in{\mathcal E}$.
For $\omega\in\Omega$ let $U(\sigma,\omega)=\{x\in U:\sigma(x)=\omega\}$.
Let
\begin{align*}
{\mathcal E}'(\sigma,\omega)=\cbc{\TV{\TAU[\nix|U(\sigma,\omega)]-\bar\mu_i}\leq\gamma^{1/3}}.
\end{align*}
If $|U(\sigma,\omega)|\geq\gamma^{1/2}|U|$, then (\ref{eqProp_tensorise1}) and $\gamma$-regularity yield, by the same token as before,
$\bck{\vec{1}\{\TAU\notin{\mathcal E}'(\sigma,\omega)\}}_\mu\leq \gamma^{1/3}$.
Consequently, the event ${\mathcal E}'(\sigma)$ that ${\mathcal E}'(\sigma,\omega)$ occurs
for all $\omega$ satisfying $|U(\sigma,\omega)|\geq\gamma^{1/2}|U|$ has probability at least $1-|\Omega|\gamma^{1/3}$.
Therefore, for any $\omega,\omega'\in\Omega$ we obtain
\begin{align*}
&\bck{\abs{\frac1{|U|}\sum_{x\in U}\vec{1}\{\SIGMA(x)=\omega\}\vec{1}\{\TAU(x)=\omega'\}-\bar\mu_i(\omega)\bar\mu_i(\omega')}}_\mu\\
&\qquad
\leq(|\Omega|+1)\gamma^{1/3}+
\bck{\abs{\frac1{|U|}\sum_{x\in U}\vec{1}\{\SIGMA(x)=\omega\}\vec{1}\{\TAU(x)=\omega'\}-\bar\mu_i(\omega)\bar\mu_i(\omega')}|
\SIGMA\in{\mathcal E},\TAU\in{\mathcal E}'(\SIGMA)}_\mu\\
&\qquad
\leq\gamma^{1/7}+
\bck{\max_{\omega:|U(\SIGMA,\omega)|\geq\gamma^{1/2}|U|}|\TAU[\omega'|U(\SIGMA,\omega)]-\bar\mu_i(\omega')|
\,\big|\,\SIGMA\in{\mathcal E},\TAU\in{\mathcal E}'(\SIGMA)}_\mu
\leq\gamma^{1/8}.
\end{align*}
Summing over all $\omega,\omega'$ and choosing $\gamma$ small enough, we conclude that $\mu\otimes\mu$ is $\alpha$-regular on $V_i$.
Finally, (\ref{eqProp_tensorise1}) implies that $\mu\otimes\mu$ satisfies
\begin{align*}
\bck{\TV{(\SIGMA \vec \otimes \TAU) [\nix|V_i]-\bar\mu_i\otimes\bar\mu_i}}_{\mu\otimes\mu}&<\alpha.
\end{align*}
Therefore,
picking $\alpha$ small enough, we can apply
\Lem~\ref{Lemma_regularSymmetric} to conclude that $\mu\otimes\mu$ is $(\eps,2)$-symmetric.
\section{Factor graphs}\label{Sec_factorGraphs}
\subsection{Examples}\label{Sec_FactorGraphExamples}
The aim in this section is to set up a comprehensive framework for the study of ``random factor graphs'' and their corresponding Gibbs measures.
To get started let us ponder a few concrete examples.
In the \emph{Ising model} on a graph $G=(V,E)$ the variables of the problem are just the vertices of the graph.
The values available for each variable are $\pm1$.
Thus, an assignment is simply a map $\sigma:V\to\{\pm1\}$.
Moreover, each edge of $G$ gives rise to a constraint.
Specifically, given a parameter $\beta>0$ we define a weight function $\psi_e$ corresponding to the edge $e=\{v,w\}$ by letting
$\psi_e(\sigma)=\exp(\beta\sigma(v)\sigma(w)).$
Thus, an edge $e=\{v,w\}$ gives larger weight to assignments $\sigma$ with $\sigma(v)=\sigma(w)$
than to those with $\sigma(v)\neq\sigma(w)$.
The corresponding partition function reads
\begin{align*}
Z_\beta(G)&=\sum_{\sigma:V\to\{\pm1\}}\prod_{e\in E}\psi_e(\sigma)=\sum_{\sigma:V\to\{\pm1\}}\exp\brk{\beta\sum_{\{v,w\}\in E}\sigma(v)\sigma(w)}.
\end{align*}
Further, the Gibbs distribution $\mu_{G,\beta}$ induced by $G$, $\beta$ is the probability measure on $\{\pm1\}^V$ defined by
\begin{align*}
\mu_{G,\beta}(\sigma)&=\frac1{Z_\beta(G)}\prod_{e\in E}\psi_e(\sigma)
=\frac1{Z_\beta(G)}\exp\brk{\beta\sum_{\{v,w\}\in E}\sigma(v)\sigma(w)}.
\end{align*}
Thus, $\mu_{G,\beta}$ weighs assignments according to the number of edges $e=\{v,w\}$ such that $\sigma(v)=\sigma(w)$.
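In concrete terms, $Z_\beta(G)$ can be evaluated by exhaustive enumeration on very small graphs. The following sketch is purely illustrative (the function name is ours, and the computation is exponential in the number of vertices): it sums $\exp(\beta\sum_{\{v,w\}\in E}\sigma(v)\sigma(w))$ over all assignments $\sigma:V\to\{\pm1\}$.

```python
import itertools
import math

def ising_partition_function(edges, n, beta):
    """Brute-force evaluation of Z_beta(G) for the Ising model:
    sum over all sigma: V -> {-1,+1} of
    exp(beta * sum_{{v,w} in E} sigma(v)*sigma(w)).
    Exponential in n; feasible for tiny graphs only."""
    Z = 0.0
    for sigma in itertools.product((-1, 1), repeat=n):
        Z += math.exp(beta * sum(sigma[v] * sigma[w] for v, w in edges))
    return Z
```

On the triangle, for instance, the two constant assignments contribute $\eul^{3\beta}$ each and the six remaining ones contribute $\eul^{-\beta}$ each, so the sketch returns $2e^{3\beta}+6e^{-\beta}$.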
The Ising model has been studied extensively in the mathematical physics literature on various classes of graphs, including and particularly random graphs.
For instance, if $\G(n,d)$ is a random regular graph of degree $d$ on $n$ vertices, then
$Z_\beta(\G(n,d))$ is known to ``converge'' to the value predicted by the cavity method~\cite{DM}.
Formally, the cavity method yields a certain number $F(\beta,d)$ such that
\begin{equation}\label{eqIsingConv}
\lim_{n\to\infty}\frac1n\Erw[\ln Z_\beta(\G(n,d))]=F(\beta,d).
\end{equation}
Because $Z_\beta(\G(n,d))$ is exponential in $n$ with high probability, the scaling applied in (\ref{eqIsingConv}) is the appropriate one to obtain a finite limit.
Furthermore, by Azuma's inequality $\ln Z_\beta(\G(n,d))$ is concentrated about its expectation.
Therefore, (\ref{eqIsingConv}) implies that $\frac1n\ln Z_\beta(\G(n,d))$ converges to $F(\beta,d)$ in probability.
The \emph{Potts antiferromagnet} on a graph $G=(V,E)$ can be viewed as a twist on the Ising model.
In this case we look at assignments $\sigma:V\to[k]$ for some number $k\geq3$.
The weight functions associated with the edges are defined by
$\psi_e(\sigma)=\exp(-\beta\vec{1}\{\sigma(v)=\sigma(w)\})$ for some $\beta>0$.
Thus, this time the edges prefer that the incident vertices receive {\em different} values.
The Gibbs measure and the partition function read
\begin{align*}
\mu_{G,\beta}(\sigma)&=\frac1{Z_\beta(G)}\exp\brk{-\beta\sum_{\{v,w\}\in E}\vec{1}\{\sigma(v)=\sigma(w)\}},&
Z_\beta(G)&=\sum_{\sigma:V\to[k]}\exp\brk{-\beta\sum_{\{v,w\}\in E}\vec{1}\{\sigma(v)=\sigma(w)\}}.
\end{align*}
While it is known that
$\lim_{n\to\infty}\frac1n\Erw[\ln Z_\beta(\G(n,d))]$
exists and that $\ln Z_\beta(\G(n,d))$ is concentrated about its expectation~\cite{bayati}, the precise value remains elusive for a wide range of $d,\beta$
(in contrast to the ferromagnetic version of the model~\cite{DMSS}).
However, it is not difficult to see that for sufficiently large values of $d,\beta$ we have~\cite{cond}
$$\lim_{n\to\infty}\frac1n\Erw[\ln Z_\beta(\G(n,d))]<\lim_{n\to\infty}\frac1n\ln \Erw[Z_\beta(\G(n,d))].$$
Hence, just like in the random $k$-SAT model the first moment overshoots the actual value of the partition function by an exponential factor.
The Potts model is closely related to the $k$-colorability problem.
Indeed, if we think of the $k$ possible values as colors, then for large $\beta$
the Gibbs measure concentrates on colorings with few monochromatic edges.
As a third example let us consider the following version of the random $k$-SAT model.
Let $k\geq3$, $\Delta>1$ be fixed integers, let $V_n=\{x_1,\ldots,x_n\}$ be a set of Boolean variables and let $d_n:V_n\times\{\pm1\}\to[\Delta]$ be a map
such that $$m=\sum_{x\in V_n}(d_n(x,1)+d_n(x,-1))/k$$ is an integer.
Then we let $\PHI(n,k,d_n)$ be a random $k$-CNF formula with $m$ clauses in which each variable $x\in V_n$ appears precisely $d_n(x,1)$ times
as a positive literal and precisely $d_n(x,-1)$ times as a negative literal.
As in \Sec~\ref{Sec_intro}, for a clause $a$ and a truth assignment $\sigma:V\to\{0,1\}$ we let
$\psi_a(\sigma)=\exp(-\beta\vec{1}\{\sigma\mbox{ violates }a\}).$
Then for a given parameter $\beta>0$ we obtain a Gibbs measure that weighs assignments by the number of clauses that they violate
and a corresponding partition function $Z_{\beta}(\PHI(n,k,d_n))$, cf.\ (\ref{eqkSAT1})--(\ref{eqkSAT2}).
Hence, for given $\beta>0$, $k\geq3$ and degree assignments $(d_n)_n$ the problem
of determining $\lim_{n\to\infty}\frac1n\Erw[\ln Z_\beta(\PHI(n,k,d_n))]$ arises.
This question is anything but straightforward even in the special case that $d_n(x,\pm1)=d_0$ is the same for all $x$.
In~\cite{clusters} we show how the results of the present paper can be put to work to tackle this case.
\subsection{Random factor graphs}
The following definition encompasses a variety of concrete models.
\begin{definition}\label{Def_model}
Let $\Delta>0$ be an integer, let $\Omega,\Theta$ be finite sets and let $\Psi=\cbc{\psi_1,\ldots,\psi_l}$ be a finite set of functions
$\psi_i:\Omega^{h_i}\to(0,\infty)$ of arity $h_i\in[\Delta]$.
A \bemph{$(\Delta,\Omega,\Psi,\Theta)$-model} $\cM=(V,F,d,t,(\psi_a)_{a\in F})$ consists of
\begin{description}
\item[M1] a countable set $V$ of \bemph{variable nodes},
\item[M2] a countable set $F$ of \bemph{constraint nodes},
\item[M3] a map $d:V\cup F\to\brk{\Delta}$ such that
\begin{equation}\label{eqDegSum}
\sum_{x\in V}d(x)=\sum_{a\in F}d(a),
\end{equation}
\item[M4] a map $t:C_V\cup C_F\to\Theta$, where we let
\begin{align*}
C_V&=\bigcup_{x\in V}\cbc{x}\times[d(x)],&
C_F&=\bigcup_{a\in F}\cbc{a}\times[d(a)],
\end{align*}
such that
\begin{equation}\label{eqTypeSum}
\abs{t^{-1}(\theta)\cap C_V}=\abs{t^{-1}(\theta)\cap C_F}\quad\mbox{ for each $\theta\in\Theta$,}
\end{equation}
\item[M5] a map $F\to\Psi$, $a\mapsto\psi_a$ such that $\psi_a:\Omega^{d(a)}\to(0,\infty)$ for all $a\in F$.
\end{description}
The \bemph{size} of the model is $\#\cM=|V|$.
Furthermore, a \bemph{$\cM$-factor graph} is a bijection $G:C_V\to C_F$, $(x,i)\mapsto G(x,i)$ such that $t(G(x,i))=t(x,i)$ for all $(x,i)\in C_V$.
\end{definition}
\noindent
Of course, (\ref{eqDegSum}) and (\ref{eqTypeSum})
require that either both quantities are infinite or both are finite.
The semantics is that $\Delta$ is the maximum degree of a factor graph.
Moreover, $\Omega$ is the set of possible values that the variables of the model range over, e.g., the set $\{\pm1\}$ in the Ising model.
Further, $\Theta$ is a set of ``types''.
For instance, in the random $k$-SAT model the types can be used to specify the signs of the literals.
Additionally, $\Psi$ is a set of possible weight functions.
A model $\cM$ comes with a set $V$ of variable nodes and a set $F$ of constraint nodes.
The degrees of these nodes are prescribed by the map $d$.
Just like in the ``configuration model'' of graphs with a given degree sequence
we create $d(v)$ ``clones'' of each node $v$.
The sets $C_V$, $C_F$ contain the clones of the variable and constraint nodes, respectively.
Further, the map $t$ assigns a type to each ``clone'' of either a constraint or variable node and
each constraint node $a$ comes with a weight function $\psi_a$.
A $\cM$-factor graph is a type-preserving matching $G$ of the variable and constraint clones.
Let $\cG(\cM)$ be the set of all $\cM$-factor graphs
and write $\G=\G(\cM)$ for a uniformly random sample from $\cG(\cM)$.
Contracting the clones of each node,
we obtain a bipartite (multi-)graph with variable nodes $V$ and constraint nodes $F$.
We often identify $\G$ with this multi-graph.
For instance, if we speak of the distance of two vertices in $\G$ we mean the length of a shortest path in this multi-graph.
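The uniform choice of $\G(\cM)$ amounts to a uniformly random type-preserving matching of the variable clones to the constraint clones. A minimal sketch (illustrative only; the data layout, with clones as pairs and types given by a dictionary, is ours) generates such a matching by shuffling the constraint clones within each type class.

```python
import random
from collections import defaultdict

def sample_factor_graph(var_clones, fac_clones, t, rng=random):
    """Sample a uniform type-preserving matching G: C_V -> C_F,
    i.e. a uniformly random factor graph of the model.  `t` maps each
    clone to its type; by the analogue of (eqTypeSum) the type classes
    of C_V and C_F must have equal sizes."""
    by_type = defaultdict(list)
    for clone in fac_clones:
        by_type[t[clone]].append(clone)
    for clones in by_type.values():
        # a uniform shuffle within each type class yields a uniform matching
        rng.shuffle(clones)
    return {clone: by_type[t[clone]].pop() for clone in var_clones}
```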
For a clone $(x,i)\in C_V$ we denote by $\partial(G,x,i)=G(x,i)$ the clone that $G$ matches $(x,i)$ to.
Similarly, for $(a,j)\in C_F$ we write $\partial(G,a,j)$ for the variable clone $(x,i)$ such that $\partial(G,x,i)=(a,j)$.
Moreover, for a variable $x$ we let $\partial(G,x)=\{\partial(G,x,i):i\in[d(x)]\}$ and analogously for $a\in F$ we set $\partial(G,a)=\{\partial(G,a,j):j\in[d(a)]\}$.
To economise notation we sometimes identify a clone $(x,i)$ with the underlying variable $x$.
For instance, if $\sigma:V\to\Omega$ is an assignment, then we take the liberty of writing $\sigma(x,i)=\sigma(x)$.
Additionally, where convenient we view $\partial(G,x)$ as the set of all constraint nodes $a\in F$ such that there exist $i\in[d(x)]$, $j\in[d(a)]$ such that $(a,j)=G(x,i)$.
The corresponding convention applies to $\partial(G,a)$.
A \bemph{$\cM$-assignment} is a map $\sigma:V\to\Omega$
and we define
\begin{align*}
\psi_{G,a}(\sigma)&=\psi_a\big(\sigma(\partial(G,a,1)),\ldots,\sigma(\partial(G,a,d(a)))\big)\qquad\mbox{for }a\in F,\quad\mbox{and}\quad&
\psi_G(\sigma)&=\prod_{a\in F}\psi_{G,a}(\sigma).
\end{align*}
Further, the {\bem Gibbs distribution} and the \bemph{partition function} of $G$ are
\begin{align}\label{eqZ}
\mu_G(\sigma)&=\psi_G(\sigma)/Z(G),\quad\mbox{where}&
Z(G)&=\sum_{\sigma:V\to\Omega}\psi_G(\sigma).
\end{align}
We denote expectations with respect to the Gibbs measure by $\bck{\nix}_G=\bck{\nix}_{\mu_G}$.
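On very small instances the definition (\ref{eqZ}) can be checked by exhaustive enumeration. The following sketch is illustrative only (the names and the data layout, with each constraint given by its weight function and its ordered tuple of adjacent variables, are ours).

```python
import itertools

def partition_function(weights, neighborhoods, omega, n):
    """Brute-force evaluation of Z(G): sum over all assignments
    sigma: [n] -> omega of the product over constraints a of
    psi_a evaluated at the neighbors of a.  Exponential in n."""
    Z = 0.0
    for sigma in itertools.product(omega, repeat=n):
        w = 1.0
        for psi, nb in zip(weights, neighborhoods):
            w *= psi(*(sigma[x] for x in nb))
        Z += w
    return Z
```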
The fundamental problem that arises is the study of the random variable $\ln Z(\G)$.
As mentioned in \Sec~\ref{Sec_intro},
this random variable holds the key to getting a handle on the Gibbs measure and thus the combinatorics of the problem.
The following proposition establishes concentration about the expectation.
For two factor graphs $G,G'\in\cG(\cM)$ let
\begin{align}\label{eqDist}
\dist(G,G')&=\abs{\cbc{(x,i)\in C_V:\partial(G,x,i)\neq\partial(G',x,i)}}.
\end{align}
\begin{proposition}\label{Lemma_conc}
For any $\Delta,\Omega,\Theta,\Psi$ there exists
$\eta=\eta(\Delta,\Omega,\Theta,\Psi)>0$ such that for any $(\Delta,\Omega,\Psi,\Theta)$-model $\cM$ of size $n=\#\cM\geq1/\eta$
and any $\eps>0$ we have
$\pr\brk{\abs{\ln Z(\G)-\Erw[\ln Z(\G)]}>\eps}\leq\exp(-\eta\eps^2 n)$.
\end{proposition}
\begin{proof}
There exists a number $\rho>0$ that depends on $\Delta,\Omega,\Psi,\Theta$ only such that
for any two factor graphs $G,G'\in\cG(\cM)$ we have $|\ln Z(G)-\ln Z(G')|\leq\rho\cdot\dist(G,G')$.
Therefore, the assertion follows from Azuma's inequality.
\end{proof}
Thus, \Prop~\ref{Lemma_conc} reduces our task to calculating the expectation $\Erw[\ln Z(\G)]$.
Generally, the standard first and second moment method do not suffice to tackle this problem
because the logarithm sits {\em inside} the expectation.
While, of course, Jensen's inequality guarantees that
\begin{align}\label{eqLemmaAnnealed}
\Erw[\ln Z(\G)]&\leq\ln\Erw[Z(\G)],
\end{align}
equality does not typically hold.
Indeed, we have already seen examples where $\ln\Erw[Z(\G)]-\Erw[\ln Z(\G)]$ is linear in the size $\#\cM$ of the model.
If so, then the Paley-Zygmund inequality entails that $\ln(\Erw[Z(\G)^2]/\Erw[Z(\G)]^2)$ is linear in $\#\cM$ as well,
dooming the second moment method.
Furthermore, even if $\Erw[\ln Z(\G)]\sim\ln\Erw[Z(\G)]$ the second moment method does not generally succeed~\cite{Lenka}.
Let us now revisit the examples from \Sec~\ref{Sec_FactorGraphExamples}.
\begin{example}[the Ising model on the random $d$-regular graph]\label{Ex_Ising}\upshape
Suppose that $d\geq2,\beta>0$.
Let $\Delta=d$, $\Omega=\cbc{\pm1}$, $\Psi=\{\psi\}$, where
$\psi:\{\pm1\}^2\to(0,\infty)$, $(\sigma_1,\sigma_2)\mapsto\exp(\beta\sigma_1\sigma_2)$, and set $\Theta=\{0\}$.
Further, given $n\geq1$ such that $dn$ is even we define a $(\Delta,\Omega,\Psi,\Theta)$-model $\cM(d,n)$
by letting $V=\{x_1,\ldots,x_n\}$, $F=\{a_1,\ldots,a_{dn/2}\}$, $d(x)=d$ for all $x\in V$, $d(a)=2$ for all $a\in F$,
$t(x,i)=t(f,j)=0$ for all $(x,i)\in C_V$, $(f,j)\in C_F$, and $\psi_a=\psi$ for all $a\in F$.
Thus, all clones have the same ``type'' and all constraint nodes have arity two and the same weight function.
Hence, the random graph $\G(\cM)$ is obtained by matching the $dn$ variable clones randomly to the $dn$ constraint clones.
If we simply replace the constraint nodes, which have degree two, by edges joining the two adjacent variable nodes,
then the resulting random multigraph is contiguous to the uniformly random $d$-regular graph on $n$ vertices.
In the model $\cM$, (\ref{eqLemmaAnnealed}) holds with (asymptotic) equality for all $d,\beta$~\cite{DM}.
\end{example}
\begin{example}[the Potts antiferromagnet on the random $d$-regular graph]\label{Ex_Potts}\upshape
The construction is similar to the previous example, except that $\Omega=[k]$ is the set of colors and
$\psi(\sigma_1,\sigma_2)=\exp(-\beta\vec{1}\{\sigma_1=\sigma_2\})$.
In this example (\ref{eqLemmaAnnealed}) holds with asymptotic equality if either $d\leq d_0(k)$ or
$d>d_0(k)$ and $\beta\leq\beta_0(d,k)$ for certain critical values $d_0(k)$, $\beta_0(d,k)$.
However, for sufficiently large $d,\beta$ there occurs a linear gap~\cite{cond,CDGS}.
\end{example}
\begin{example}[random $k$-SAT]\label{Ex_kSAT}\upshape
To capture the random $k$-SAT model we let $\Delta>0$ be a maximum degree and $\Omega=\Theta=\{\pm1\}$.
Further, each $s\in\{\pm1\}^k$ gives rise to a function
$$\psi_s:\{\pm1\}^k\to(0,\infty),\qquad \sigma\mapsto\exp(-\beta\vec{1}\{\sigma=-s\})$$
and we let $\Psi=\{\psi_s:s\in\{\pm1\}^k\}$.
The idea is that $s$ is the ``sign pattern'' of a $k$-clause, with $s_i=\pm1$ indicating that the $i$th literal is positive/negative.
Then a truth assignment $\sigma$ of the $k$ variables is satisfying unless $\sigma_i=-s_i$ for all $i$.
The corresponding model $\cM$ has a set $V=\{x_1,\ldots,x_n\}$ of Boolean variables and a set $F=\{a_1,\ldots,a_m\}$ of clauses.
Moreover, the map $d:V\to[\Delta]$ prescribes the degree of each variable, while of course each clause has degree $k$.
Additionally, the map $t:C_V\cup C_F\to\Theta=\{\pm1\}$ prescribes the positive/negative occurrences of the variables and the sign patterns of the clauses.
Thus, a variable $x$ occurs $|\{i\in[d(x)]:t(x,i)=\pm1\}|$ times positively/negatively and the $j$th literal of a clause $a$ is positive iff $t(a,j)=1$.
Finally, the weight function of clause $a$ is $\psi_{(t(a,1),\ldots,t(a,k))}$.
The bound (\ref{eqLemmaAnnealed}) does not generally hold with equality~\cite{maxsat,clusters}.
\end{example}
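The weight functions $\psi_s$ of the preceding example admit a direct implementation. The following sketch is illustrative only (the function name is ours; truth values are encoded as $\pm1$ as in the example): it evaluates $\psi_s(\sigma)=\exp(-\beta\vec{1}\{\sigma=-s\})$, i.e.\ the clause is violated iff $\sigma_i=-s_i$ for every $i$.

```python
import math

def clause_weight(s, sigma, beta):
    """psi_s(sigma) = exp(-beta * 1{sigma = -s}) for a k-clause with
    sign pattern s: the clause is violated iff sigma_i = -s_i for all i
    (truth values encoded as +1/-1)."""
    violated = all(x == -si for x, si in zip(sigma, s))
    return math.exp(-beta) if violated else 1.0
```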
While \Def~\ref{Def_model} encompasses many problems of interest, there are two restrictions.
First, because all weight functions $\psi\in\Psi$ take strictly positive values, \Def~\ref{Def_model} does not allow for ``hard'' constraints.
For instance, \Def~\ref{Def_model} does not accommodate the graph coloring problem, which imposes the strict requirement that no single edge be monochromatic.
However, hard constraints can be approximated by soft ones, e.g., by choosing a very large value of $\beta$ in the Potts antiferromagnet.
Moreover, many of the arguments in the following sections do extend to hard constraints with a bit of care.
However, the assumption that all $\psi$ are strictly positive saves us many case distinctions as
it ensures that $Z(\G)$ is strictly positive and that therefore the Gibbs measure is well-defined.
The second restriction is that we prescribe a fixed maximum degree $\Delta$.
Thus, if we consider a sequence $\underline\cM=(\cM_n)_n$ of $(\Delta,\Omega,\Psi,\Theta)$-models with $\#\cM_n=n$, then all factor graphs have a bounded degree.
By comparison, if we choose a $k$-SAT formula with $n$ variables and $m=\alpha n/k$ clauses uniformly at random for fixed $k\geq3,\alpha>0$,
then the maximum variable degree will be of order $\ln n/\ln\ln n$.
Yet this case can be approximated well by a sequence of models with a large enough maximum degree $\Delta$.
In fact, if we calculate $\Erw[\ln Z]$ for any fixed $\Delta$, then the $\Delta\to\infty$ limit is easily seen to yield the answer in the case of
uniformly random formulas.
Nevertheless, the bounded degree assumption is
technically convenient because it facilitates the use of local weak convergence, as we will discuss next.
\begin{remark}
For the sake of simplicity in (\ref{eqZ}) we defined the partition function as the sum over all $\sigma:V\to\Omega$.
However, the results stated in the following carry over to the case where $Z$ is defined as the sum over all configurations
in a subset $\emptyset\neq{\mathcal C}_\cM\subset\Omega^V$, e.g., all $\sigma$ that have Hamming distance at most $\alpha n$ from some reference assignment $\sigma_0$
for a fixed $\alpha>0$.
Of course, in this case the Gibbs measure is defined such that its support is equal to ${\mathcal C}_\cM$.
\end{remark}
\subsection{Local weak convergence}
Suppose that we fix $\Delta,\Omega,\Psi,\Theta$ as in \Def~\ref{Def_model} and that $\underline\cM=(\cM_n)_n$ is a sequence of $(\Delta,\Omega,\Psi,\Theta)$-models
such that $\cM_n=(V_n,F_n,d_n,t_n,(\psi_a)_{a\in F_n})$ has size $n$.
Let us write $\G=\G(\cM_n)$ for the sake of brevity.
According to the cavity method, $\lim_{n\to\infty}\frac1n\Erw[\ln Z(\G)]$ is determined by the ``limiting local structure'' of the random factor graph $\G$.
To formalise this concept, we adapt the concept of {\em local weak convergence} of graph sequences~\cite[Part~4]{Lovasz} to our current
setup, thereby generalising the approach taken in~\cite{DMS}.
\begin{definition}\label{Def_template_2}
A $(\Delta,\Omega,\Psi,\Theta)$-\bemph{template} consists of
a $(\Delta,\Omega,\Psi,\Theta)$-model $\cM$,
a connected factor graph $H\in\cG(\cM)$ and a \bemph{root} $r_H$, which is a variable or factor node.
Its \bemph{size} is $\#\cM$.
Moreover, two templates $H,H'$
with models $\cM=(V,F,d,t,(\psi_a))$,
$\cM'=(V',F',d',t',(\psi_a'))$
are \bemph{isomorphic} if there exists a bijection $\pi:V\cup F\to V'\cup F'$ such that
\begin{description}
\item[ISM1] $\pi(r_H)=r_{H'}$,
\item[ISM2] $\pi(V)=V'$ and $\pi(F)= F'$,
\item[ISM3] $d(v)=d'(\pi(v))$ for all $v\in V\cup F$,
\item[ISM4] $t(v,i)=t'(\pi(v),i)$ for all $(v,i)\in C_V\cup C_F$,
\item[ISM5] $\psi_a=\psi'_{\pi(a)}$ for all $a\in F$, and
\item[ISM6] if $(v,i)\in C_V,(a,j)\in C_F$ satisfy $\partial(H,v,i)=(a,j)$,
then $\partial(H',\pi(v),i)=(\pi(a),j)$.
\end{description}
\end{definition}
\noindent
Thus, a template is, basically, a finite or countably infinite connected factor graph with a distinguished root.
Moreover, an isomorphism preserves the root as well as degrees, types, weight functions and adjacencies.
Let us write $[H]$ for the isomorphism class of a template and let $\fG=\fG(\Delta,\Omega,\Theta,\Psi)$ be the set of all isomorphism classes
of $(\Delta,\Omega,\Psi,\Theta)$-templates.
For each $[H]\in\fG$ and $\ell\geq1$ let $\partial^\ell[H]$ be the isomorphism class of the template obtained
by removing all vertices at a distance greater than $\ell$ from the root.
We endow $\fG$ with the coarsest topology that makes all the functions
$$\Gamma\in\fG\mapsto\vec{1}\{\partial^\ell[\Gamma]=\partial^\ell[\Gamma_0]\}\in\{0,1\}\qquad\mbox{for $\ell\geq1,\Gamma_0\in\fG$}$$
continuous.
Moreover, the space $\cP(\fG)$ of probability measures on $\fG$ carries the weak topology.
So does the space $\cP^2(\fG)$ of probability measures on $\cP(\fG)$.
For $\Gamma\in\fG$ we write $\atom_\Gamma\in\cP(\fG)$ for the Dirac measure that puts mass one on the single point $\Gamma$.
Similarly, for $\lambda\in\cP(\fG)$ we let $\atom_\lambda\in\cP^2(\fG)$ be the Dirac measure on $\lambda$.
Our assumption that the maximum degree is bounded by a fixed number $\Delta$ ensures that $\fG$, $\cP(\fG)$, $\cP^2(\fG)$ are compact Polish spaces.
For a factor graph $G\in\cG(\cM_n)$ and a variable or constraint node $v$ we write $[G,v]$ for the isomorphism class of the
connected component of $v$ in $G$ rooted at $v$.
Then each factor graph $G\in\cG(\cM_n)$ gives rise to the empirical distribution
$$\lambda_{G}=\frac1{|V_n|+|F_n|}\sum_{v\in V_n\cup F_n}\atom_{[G,v]}\in\cP(\fG).$$
We say that $\underline\cM$ {\bem converges locally} to $\thet\in\cP(\fG)$ if
\begin{equation}\label{eqLocalWeakConvergence}
\lim_{n\to\infty}\Erw[\atom_{\lambda_{\G}}]=\atom_\thet.
\end{equation}
Denote a random isomorphism class chosen from the distribution $\thet$ by $\T=\T_{\thet}$.
Unravelling the definitions, we see that (\ref{eqLocalWeakConvergence}) holds iff for every integer $\ell>0$
and every $[H]\in\fG$ we have
\begin{align}\label{eqLocalWeakConvergence2}
\frac1{|V_n|+|F_n|}\sum_{v\in V_n\cup F_n}\vec{1}\{\partial^\ell[\G,v]=\partial^\ell[H]\}
&\ \stacksign{$n\to\infty$}\to\ \pr\brk{\partial^\ell\T_\thet=\partial^\ell[H]}
\quad\mbox{in probability}.
\end{align}
We are going to be interested in the case that $\underline\cM$ converges locally to a distribution $\thet$ on {\em acyclic} templates.
Thus, let $\fT$ be the set of all acyclic templates.
Further, we write $\cV$ for the set of all templates whose root is a variable node and $\cF$ for the set of all templates whose root is a constraint node.
Additionally, for a template $[H]$ we write $r_{[H]}$ for the root vertex, $d_{[H]}$ for its degree and $\psi_{[H]}$ for the weight function of the root vertex if $[H]\in\cF$.
Moreover, for $j\in[d_{[H]}]$ we write $[H]\reroot j$ for the template obtained from $[H]$ by re-rooting the template at the $j$th neighbor of $r_{[H]}$.
(This makes sense because condition {\bf ISM6} from \Def~\ref{Def_template_2} preserves the order of the neighbors.)
We will frequently condition on the depth-$\ell$ neighborhood of the random factor graph $\G$ for some finite $\ell$.
Hence, for $G,G'\in\cG(\cM_n)$ and $\ell\geq1$
we write $G\ism_\ell G'$ if $\partial^\ell[G,x]=\partial^\ell[G',x]$ for all variable nodes $x\in V_n$ and $\partial^{\ell+1}[G,a]=\partial^{\ell+1}[G',a]$
for all constraint nodes $a\in F_n$.
Let $\cT_\ell=\cT_{\ell,\cM_n}$ be the $\sigma$-algebra on $\cG(\cM_n)$ generated by the equivalence classes of the relation $\ism_\ell$.
Additionally, for $G\in\cG(\cM_n)$ and $\ell\geq0$ we let
$$\lambda_{G,\ell}=\frac1{|V_n|+|F_n|}\brk{\sum_{x\in V_n}\atom_{\partial^\ell[G,x]}+\sum_{a\in F_n}\atom_{\partial^{\ell+1}[G,a]}}$$
be the empirical distribution of the depth-$\ell$ neighborhood structure.
Furthermore, let
$$\fT_\ell=\cbc{\partial^\ell T:T\in\fT\cap\cV}\cup\cbc{\partial^{\ell+1} T:T\in\fT\cap\cF}.$$
Then for a probability measure $\thet\in\cP(\fT)$ we denote by $\thet_\ell$ the image of $\thet$ under
the map
$$\fT\to\fT_\ell,\qquad T\mapsto\begin{cases}
\partial^\ell T&\mbox{ if }T\in\fT\cap\cV,\\
\partial^{\ell+1} T&\mbox{ if }T\in\fT\cap\cF.
\end{cases}$$
Because all degrees are bounded by $\Delta$, the set $\fT_\ell$ is finite for every $\ell\geq1$.
Hence, (\ref{eqLocalWeakConvergence2}) entails that
$\underline\cM$ converges locally to $\thet\in\cP(\fT)$ iff
\begin{align}\label{eqLocalWeakConvergence3}
\lim_{n\to\infty}\Erw\TV{\lambda_{\G,\ell}-\thet_\ell}&=0\qquad\mbox{for every }\ell\geq1.
\end{align}
\subsection{The planted distribution}
While $\G$ is chosen uniformly at random (from the configuration model),
we need to consider another distribution that weighs factor graphs by their partition function.
Specifically, given $\ell\geq0$ let $\hat\G_\ell=\hat\G_{\ell,\cM_n}$ be a random graph chosen according to the distribution
\begin{align}\label{eqPlantedDistribution}
\pr\brk{\hat\G_\ell=G}&=Z(G)\cdot\Erw\brk{\frac{\vec{1}\{\G=G\}}{\Erw[Z|\cT_\ell]}}\qquad(G\in\cG(\cM_n)),
\end{align}
which we call the \bemph{planted distribution}.
The definition (\ref{eqPlantedDistribution}) ensures that the distribution of the ``depth-$\ell$ neighborhood structure'' of $\hat\G_\ell$
coincides with that of $\G$.
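Indeed, this is a routine check: for any $\cT_\ell$-measurable set $A\subset\cG(\cM_n)$ we have
\begin{align*}
\pr\brk{\hat\G_\ell\in A}&=\sum_{G\in A}Z(G)\Erw\brk{\frac{\vec{1}\{\G=G\}}{\Erw[Z|\cT_\ell]}}
=\Erw\brk{\frac{\vec{1}\{\G\in A\}Z(\G)}{\Erw[Z|\cT_\ell]}}
=\Erw\brk{\frac{\Erw[\vec{1}\{\G\in A\}Z|\cT_\ell]}{\Erw[Z|\cT_\ell]}}=\pr\brk{\G\in A},
\end{align*}
where the third step follows from the tower property because $\vec{1}\{\G\in A\}$ is $\cT_\ell$-measurable.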
Perhaps more intuitively, the planted distribution can be described by the following experiment.
In the first step, choose a random factor graph $\G$.
Then, given $\G$, choose the factor graph $\hat\G_\ell$ randomly such that
each graph $G\ism_\ell\G$ comes up with probability proportional to $Z(G)$.
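To verify that this two-step experiment reproduces (\ref{eqPlantedDistribution}), write $\cC(G)$ for the $\ism_\ell$-equivalence class of $G$ and recall that $\G$ is uniform on $\cG(\cM_n)$, so that $\Erw[Z|\cT_\ell](G)=|\cC(G)|^{-1}\sum_{G'\in\cC(G)}Z(G')$.
Hence, the experiment produces $G$ with probability
\begin{align*}
\sum_{G'\in\cC(G)}\pr\brk{\G=G'}\cdot\frac{Z(G)}{\sum_{G''\in\cC(G)}Z(G'')}
&=\frac{|\cC(G)|}{|\cG(\cM_n)|}\cdot\frac{Z(G)}{\sum_{G''\in\cC(G)}Z(G'')}
=Z(G)\cdot\frac{\pr\brk{\G=G}}{\Erw[Z|\cT_\ell](G)},
\end{align*}
which matches (\ref{eqPlantedDistribution}).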
Perhaps despite appearances, the planted distribution is reasonably easy to work with in many cases.
For instance, it has been employed successfully to study random $k$-SAT as well
as random graph or hypergraph coloring problems~\cite{Barriers,clusters,hyp2col,Lenka,DSS3}.
\subsection{Short cycles}
In most cases of interest the random factor graph is unlikely to contain many short cycles, and it will be convenient for us to exploit this fact.
Hence, let us call a factor graph $G$ {\bem $l$-acyclic} if it does not contain a cycle of length at most $l$.
We say that the sequence $\underline\cM$ of models has {\bem high girth} if for any $\ell,l>0$ we have
\begin{equation}\label{eqShortCycles}
\liminf_{n\to\infty}\,
\pr\brk{\mbox{$\G$ is $l$-acyclic}}>0,\qquad
\liminf_{n\to\infty}\,
\pr\brk{\mbox{$\hat\G_\ell$ is $l$-acyclic}}>0.
\end{equation}
Thus, there is a non-vanishing probability that the random factor graph $\G$ is $l$-acyclic.
Moreover, short cycles do not have too heavy an impact on the partition function
as the graph chosen from the planted distribution has a non-vanishing probability of being $l$-acyclic as well.
In the following, we are going to denote the event that a random factor graph is $l$-acyclic by $\cA_l$.
Let us highlight the following consequence of the high girth condition and the construction of the planted distribution.
\begin{proposition}\label{Prop_plantedModel}
Assume that $\underline\cM$ is a sequence of $(\Delta,\Omega,\Psi,\Theta)$-models of high girth.
Let $\ell\geq1$ be an integer and suppose that $\cB$ is an event such that
$\lim_{n\to\infty}\pr\brk{\hat\G_\ell\in\cB}=1$.
If $b$ is a real and $l\geq0$ is an integer such that
\begin{align}\label{eqProp_plantedModelAssm}
\lim_{n\to\infty}\pr\brk{\ln\Erw[Z(\G)|\cT_\ell]\geq b n|\cA_l}=1,
\end{align}
then
$\lim_{n\to\infty}\frac1n\ln\Erw\brk{\vec{1}\{\cB\cap\cA_l\}Z(\G)}\geq b$.
\end{proposition}
\begin{proof}
Since $\lim_{n\to\infty}\pr\brk{\hat\G_\ell\in\cB}=1$ the high girth condition (\ref{eqShortCycles}) implies that
$\lim_{n\to\infty}\pr\brk{\hat\G_\ell\in\cB|\cA_l}=1$
for every $l$.
Set $\cB_l=\cA_l\cap\cB$.
Then
by the definition~(\ref{eqPlantedDistribution}) of the planted distribution,
\begin{align*}
1-o(1)&=\pr\brk{\hat\G_\ell\in\cB|\cA_l}=\sum_{G\in\cB_l}Z(G)\Erw\brk{\frac{\vec{1}\{\G=G\}}{\Erw[Z|\cT_\ell]}\bigg|\cA_l}
=\Erw\brk{\frac{\vec{1}\{\G\in\cB_l\}Z(\G)}{\Erw[Z|\cT_\ell]}\bigg|\cA_l}
=\Erw\brk{\frac{\Erw[\vec{1}\{\G\in\cB_l\}Z|\cT_\ell]}{\Erw[Z|\cT_\ell]}\bigg|\cA_l}.
\end{align*}
Consequently, since the ratio $\Erw[\vec{1}\{\G\in\cB_l\}Z|\cT_\ell]/\Erw[Z|\cT_\ell]$ is bounded by one, Markov's inequality (applied to one minus the ratio) yields
$\pr\brk{\Erw[\vec{1}\{\G\in\cB_l\}Z|\cT_\ell]\geq\Erw[Z|\cT_\ell]/2|\cA_l}=1-o(1)$.
Hence, (\ref{eqProp_plantedModelAssm}) yields
$$\pr\brk{\ln\Erw[\vec{1}\{\G\in\cB_l\}Z|\cT_\ell]\geq bn-1|\cA_l}=1-o(1).$$
Therefore, the assertion follows from (\ref{eqShortCycles}).
\end{proof}
\begin{remark}
Strictly speaking, the first condition in (\ref{eqShortCycles}) is superfluous as it is implied by the second one.
\end{remark}
\medskip
\smallskip\noindent
{\em From here on out we assume that $\underline\cM$ is a sequence of $(\Delta,\Omega,\Psi,\Theta)$-models of high girth that converges locally to $\thet\in\cP(\fT)$ and
we fix $\Delta,\Omega,\Theta,\Psi$ for the rest of the paper.}
\section{The Bethe free energy}\label{Sec_Bethe}
\noindent
In this section we present the main results of the paper.
The thrust is that certain basic properties of the Gibbs measure entail an asymptotic formula for $\Erw[\ln Z(\G)]$.
The results are guided by the physics predictions from~\cite{pnas}.
\subsection{An educated guess}
The formula for $\Erw[\ln Z(\G)]$ that the cavity method predicts, the so-called ``replica symmetric solution'',
comes in terms of the distribution $\thet$ to which $\underline\cM$ converges locally.
Thus, the cavity method claims that in order to calculate $\Erw[\ln Z(\G)]$ it is not necessary to deal with the mind-boggling
complexity of the random factor graph with its expansion properties, long cycles etc.
Instead, it suffices to think about the random tree $\T=\T_\thet$, a dramatically simpler object.
The following definition will help us formalise this notion.
\begin{definition}\label{Def_margAssign}
A {\bem marginal assignment} is a measurable map $p:\fT\to\bigcup_{j=1}^\Delta\cP(\Omega^j)$, $T\mapsto p_T$ such that
\begin{description}
\item[MA1] $p_T\in\cP(\Omega)$ for all $T\in\cV$,
\item[MA2] $p_T\in\cP(\Omega^{d_T})$
and $p_{T\marg j}=p_{T\reroot j}$ for all $T\in\cF,j\in[d_T]$,
\item[MA3] For all $T\in\cF$ we have
\begin{align}\label{eqConstraintMargs}
H(p_T)+\bck{\ln\psi_T(\SIGMA)}_{p_T}&=\max\cbc{
H(\nu)+\bck{\ln\psi_T(\SIGMA)}_{\nu}:\nu\in\cP(\Omega^{d_T})\mbox{ s.t.\ }\nu_{\marg j}=p_{T\reroot j}\mbox{ for all }j\in[d_T]}.
\end{align}
\end{description}
Further, the {\bem Bethe free energy} of $p$ with respect to $\thet$ is
\begin{align}\label{eqBetheFreeEnergy}
\cB_\thet(p)&=
\Erw\brk{(1-d_{\T})H( p_{\T})|\cV}+\frac{\pr\brk{\T\in\cF}}{\pr\brk{\T\in\cV}}\Erw \brk{H(p_{\T})+\bck{\ln\psi_{\T}(\SIGMA)}_{p_{\T}}|\cF},
\end{align}
where, of course, $\Erw[\nix],\pr[\nix]$ refer to the choice of the random tree $\T=\T_\thet$.
\end{definition}
Thus, a marginal assignment provides a probability distribution $p_T$ on $\Omega$ for each tree whose root is a variable node.
Furthermore, for trees $T$ rooted at a constraint node $p_T$ is a distribution on $\Omega^{d_T}$, which we think of as the joint distribution
of the variables involved in the constraint.
The distributions assigned to $T$ rooted at a constraint node must satisfy a consistency condition:
the $j$th marginal of $p_T$ has to coincide with the distribution assigned to the tree $T\reroot j$ rooted at the $j$th child
of the root of $T$ for every $j\in[d_T]$;
of course, $T\reroot j$ is a tree rooted at a variable node.
In addition, {\bf MA3} requires that for $T\in\cF$ the distribution $p_T$ maximises the functional $H(\nu)+\bck{\ln\psi_T(\SIGMA)}_{\nu}$
amongst all distributions $\nu$ with the same marginal distributions as $p_T$.
Furthermore, the Bethe free energy is a functional that maps each marginal assignment $p$ to a real number.
For a detailed derivation of this formula based on physics intuition we refer to~\cite{MM}.
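Purely as a sanity check, consider the degenerate case that $\thet$ puts all its mass on the template consisting of a single isolated variable node.
Then $d_{\T}=0$ and $\pr\brk{\T\in\cF}=0$, and with $p_{\T}$ the uniform distribution on $\Omega$ (the Gibbs marginal in this case) the formula (\ref{eqBetheFreeEnergy}) reduces to
\begin{align*}
\cB_\thet(p)&=H(p_{\T})=\ln|\Omega|,
\end{align*}
which is indeed $\frac1n\ln Z(G)$ for a factor graph $G$ without constraint nodes, as then $Z(G)=|\Omega|^n$.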
Given a distribution $\thet$ on trees, the cavity method provides a plausible recipe for constructing marginal assignments.
Roughly speaking, the idea is to identify fixed points of an operator called Belief Propagation on the random infinite tree~\cite{MM}.
However, this procedure is difficult to formalise mathematically because generally there are several Belief Propagation fixed points
and model-dependent considerations are necessary to identify the ``correct'' one.
To keep matters as simple as possible we are therefore going to assume that a marginal assignment is given.
\begin{remark}
Because the entropy is concave, conditions {\bf MA2} and {\bf MA3} specify the distributions $p_T$ for $T\in\cF$ uniquely.
In other words, a marginal assignment is actually determined completely by the distributions $p_T$ for $T\in\cV$.
\end{remark}
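In fact, assuming $\psi_T>0$, a standard Lagrangian computation identifies the shape of this unique maximiser:
introducing multipliers $\lambda_1,\ldots,\lambda_{d_T}:\Omega\to{\mathbb R}$ for the marginal constraints in (\ref{eqConstraintMargs}) shows that
\begin{align*}
p_T(\sigma)&\propto\psi_T(\sigma)\prod_{j\in[d_T]}\exp(\lambda_j(\sigma_j))\qquad(\sigma\in\Omega^{d_T}),
\end{align*}
with the $\lambda_j$ chosen so that $p_{T\marg j}=p_{T\reroot j}$ for all $j\in[d_T]$.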
For a marginal assignment $p$, an integer $\ell$ and a tree $T\in\fT_\ell\cap\cV$ we define
$$p_{\ell,T}=\Erw[p_{\T}|\partial^\ell\T=T].$$
Thus, $p_{\ell}$ is the conditional expectation of $p$ given the first $\ell$ layers of the tree.
Finally, to avoid notational hazards we let $p_T,p_{\ell,T}$ be the uniform distribution on $\Omega$ for all $T\in\fG\setminus\fT$.
\begin{lemma}\label{Lemma_martingale}
For any $\eps>0$ there is $\ell_0>0$ such that for all $\ell>\ell_0$ we have
$\Erw[\TV{p_{\ell,\partial^\ell\T}-p_{\T}}|\T\in\cV]<\eps.$
\end{lemma}
\begin{proof}
Define an equivalence relation $\equiv_\ell$ on $\fT\cap\cV$ by letting
$T\equiv_\ell T'$ iff $\partial^\ell T=\partial^\ell T'$.
Then for any $\omega\in\Omega$ the sequence of random variables
$X_\ell(\T)=p_{\ell,\partial^\ell\T}(\omega)$
is a martingale with respect to the filtration generated by the equivalence classes of $\equiv_\ell$.
By the martingale convergence theorem~\cite[\Thm~5.7]{Durrett}, $(p_\ell)_\ell$ converges $\thet$-almost surely to $p$.
Since total variation distances are bounded by one, the assertion follows from the dominated convergence theorem.
\end{proof}
\subsection{Symmetry}
In the terminology of \Sec~\ref{Sec_hom}, the cavity method claims that $\frac1n\Erw[\ln Z(\G)]$ converges to
the Bethe free energy of a suitable marginal assignment iff
\begin{equation}\label{eqNonCondensation}
\lim_{n\to\infty}\pr\brk{\mu_{\G}\mbox{ is $(\eps,2)$-symmetric}}=1\qquad\mbox{for any }\eps>0\mbox{ (see \cite{pnas})}.
\end{equation}
This claim is, of course, based on bold non-rigorous deliberations.
Nonetheless, we aim to prove a rigorous statement that comes reasonably close.
To this end, let $p$ be a marginal assignment.
We say that $\underline\cM$ is {\bem $p$-symmetric} if for every $\eps>0$
there is $\ell_0>0$ such that for all $\ell>\ell_0$ we have
\begin{equation}\label{eqMyNonCondensation}
\lim_{n\to\infty}\ \pr\brk{\frac1{n^2}\sum_{x,y\in V_n}\TV{\mu_{\G\marg\{x,y\}}-p_{\ell,\partial^\ell[\G,x]}\otimes p_{\ell,\partial^\ell[\G,y]}}>\eps}=0.
\end{equation}
In other words, for any $\eps>0$ and sufficiently large $\ell$ the random factor graph $\G$ enjoys the following property with high probability.
If we pick two variable nodes $x,y$ of $\G$ uniformly and independently, then the joint distribution $\mu_{\G\marg\{x,y\}}$
is close to the product distribution $p_{\ell,\partial^\ell[\G,x]}\otimes p_{\ell,\partial^\ell[\G,y]}$ determined by the depth-$\ell$ neighborhoods of $x,y$.
Of course, as $\G$ has bounded maximum degree the distance between randomly chosen $x,y$ is going to be greater than, say, $\ln\ln n$ with high probability.
Thus, similar in spirit to (\ref{eqNonCondensation}),
(\ref{eqMyNonCondensation}) provides that far-apart variables typically decorrelate and that $p$ captures the Gibbs marginals.
In analogy to (\ref{eqMyNonCondensation}), we say that the {\bem planted distribution of $\underline\cM$ is $p$-symmetric} if
for every $\eps>0$ there is $\ell_0>0$ such that for all $\ell>\ell_0$ we have
$$\lim_{n\to\infty}\
\pr\brk{\frac1{n^2}\sum_{x,y\in V_n}\TV{\mu_{\hat\G_\ell\marg\{x,y\}}-p_{\ell,\partial^\ell[\hat\G_\ell,x]}\otimes p_{\ell,\partial^\ell[\hat\G_\ell,y]}}>\eps}=0.$$
The main result of this paper is
\begin{theorem}\label{Thm_symUpperBound}
If $\underline\cM$ is $p$-symmetric, then
$$\limsup_{n\to\infty}\frac 1n\Erw[\ln Z(\G)]\leq\cB_\thet(p).$$
If the planted distribution of $\underline\cM$ is $p$-symmetric as well, then
$$\lim_{n\to\infty}\frac 1n\Erw[\ln Z(\G)]=\cB_\thet(p).$$
\end{theorem}
Thus, the basic symmetry assumption (\ref{eqMyNonCondensation}) implies that $\cB_\thet(p)$ is an upper bound on $\frac1n\Erw[\ln Z(\G)]$.
If, additionally, the symmetry condition holds in the planted model, then this upper bound is tight.
In particular, in this case $\frac1n\Erw[\ln Z(\G)]$ is completely determined by the limiting local structure $\thet$ and $p$.
The proof of \Thm~\ref{Thm_symUpperBound}, which can be found in \Sec~\ref{Sec_symUpperBound}, is based on \Thm~\ref{Thm_decomp},
the decomposition theorem for probability measures on cubes.
More precisely, we combine \Thm~\ref{Thm_decomp} with a conditional first and a second moment argument given the local structure of the factor graph,
i.e., given $\cT_\ell$ for a large $\ell$.
The fact that it is necessary to condition on the local structure in order to cope with ``lottery effects'' has been noticed in prior work~\cite{yuval,kSAT,DM,DMS}.
Most prominently, such a conditioning was crucial in order to obtain the precise $k$-SAT threshold for large enough $k$~\cite{DSS3}.
But here the key insight is that \Thm~\ref{Thm_decomp} enables us to carry out conditional moment calculations in a fairly elegant and generic way.
The obvious question that arises from \Thm~\ref{Thm_symUpperBound} is whether there is a simple way to show that
$\underline\cM$ is $p$-symmetric (and that the same is true of the planted distribution).
In Sections~\ref{Sec_nonRe} and~\ref{Sec_GU} we provide two sufficient conditions called non-reconstruction and Gibbs uniqueness.
That these two conditions entail symmetry was predicted in~\cite{pnas}, and \Thm~\ref{Thm_decomp} enables us to prove it.
\subsection{Non-reconstruction}\label{Sec_nonRe}
Following~\cite{pnas} we define a correlation decay condition, the ``non-reconstruction'' condition, on factor graphs and show that it implies symmetry.
The basic idea is to formalise the following.
Given $\eps>0$ pick a large $\ell=\ell(\eps)>1$, choose a random factor graph $\G$ for some large $n$ and pick a variable node $x$ uniformly at random.
Further, sample an assignment $\SIGMA$ randomly from the Gibbs measure $\mu_{\G}$.
Now, sample a second assignment $\TAU$ from $\mu_{\G}$ subject to the condition that $\TAU(y)=\SIGMA(y)$
for all variable nodes $y$ at distance at least $\ell$ from $x$.
Then the non-reconstruction condition asks whether the distribution of $\TAU(x)$ is markedly different from the unconditional marginal $\mu_{\G\marg x}$.
More precisely, non-reconstruction occurs if for any $\eps$ there is $\ell(\eps)$ such that
with high probability $\G$ is such that the shift that a random ``boundary condition'' $\SIGMA$
induces does not exceed $\eps$ in total variation distance.
Of course, instead of conditioning on the values of {\em all} variables at distance at least $\ell$ from $x$,
we might as well just condition on the variables at distance either $\ell$ or $\ell+1$ from $x$, depending on the parity of $\ell$.
This is immediate from the definition (\ref{eqZ}) of the Gibbs measure.
As for the formal definition, suppose that $G\in\cG(\cM_n)$ is a factor graph, let $x\in V_n$ and let $\ell\geq1$.
Let $\nabla_\ell(G,x)$ signify the $\sigma$-algebra on $\Omega^n$ generated by the events
$\vec{1}\{\SIGMA(y)=\omega\}$ for $\omega\in\Omega$ and $y\in V_n$ at distance either $\ell$ or $\ell+1$ from $x$.
Thus, $\nabla_\ell(G,x)$ pins down $\SIGMA(y)$ for all variable nodes $y$ at distance $\ell$ from $x$ if $\ell$ is even, and at distance $\ell+1$ otherwise.
Then we say that $\underline\cM$ has \bemph{non-reconstruction} with respect to a marginal assignment $p$ if
for any $\eps>0$ there is $\ell>0$ such that
\begin{align*}
\lim_{n\to\infty}\pr\brk{\frac1n\sum_{x\in V_n}\bck{\TV{\bck{\TAU[\nix|x]\big|\nabla_\ell(\G,x)}_{\G}-p_{\ell,\partial^\ell[\G,x]}}}_{\G}>\eps}=0.
\end{align*}
To parse the above, the outer $\pr\brk\nix$ refers to the choice of $\G$.
The big $\bck{\nix}_{\G}$ refers to the random choice of the boundary condition called $\SIGMA$ above.
Finally, $\bck{\nix|\nabla_\ell(\G,x)}_{\G}$ is the conditional Gibbs average given that boundary condition.
Analogously, \bemph{the planted distribution of $\underline\cM$ has non-reconstruction} with respect to $p$ if for any $\eps>0$ there exists $\ell>0$ such that
\begin{align*}
\lim_{n\to\infty}\pr\brk{\frac1n\sum_{x\in V_n}\bck{\TV{\bck{\SIGMA[\nix|x]\big|\nabla_\ell(\hat\G_\ell,x)}_{\hat\G_\ell}-
p_{\ell,\partial^\ell[\hat\G_\ell,x]}}}_{\hat\G_\ell}>\eps}&=0.
\end{align*}
\begin{theorem}\label{Thm_nonReconstruction}
If $\underline\cM$ has non-reconstruction with respect to $p$, then $\underline\cM$ is $p$-symmetric.
If the planted distribution of $\underline\cM$ has non-reconstruction with respect to $p$, then it is $p$-symmetric.
\end{theorem}
In concrete applications the non-reconstruction condition is typically reasonably easy to verify.
For instance, in~\cite{clusters} we determine the precise location of the so-called ``condensation phase transition''
in the regular $k$-SAT model via \Thm s~\ref{Thm_symUpperBound} and~\ref{Thm_nonReconstruction}.
The proof of \Thm~\ref{Thm_nonReconstruction} can be found in \Sec~\ref{Sec_Thm_nonReconstruction}.
\subsection{Gibbs uniqueness}\label{Sec_GU}
Although the non-reconstruction condition is reasonably handy, to verify it we still need to ``touch'' the complex random graph $\G$.
Ideally, we might hope for a condition that can be stated solely in terms of the limiting distribution $\thet$ on trees,
which is conceptually far more accessible.
The ``Gibbs uniqueness'' condition as put forward in~\cite{pnas} fits this bill.
Specifically,
suppose that $T$ is a finite acyclic template
whose root $r_T$ is a variable node.
Then we say that $T$ is \bemph{$(\eps,\ell)$-unique} with respect to a marginal assignment $p$ if
\begin{equation}\label{eqGibbsUniquenessCondition}
\TV{\bck{\SIGMA[\nix|r_T]\big|\nabla_\ell T}_{T}-p_{T}}<\eps.
\end{equation}
To parse (\ref{eqGibbsUniquenessCondition}), we observe that $\bck{\SIGMA[\nix|r_T]\big|\nabla_\ell T}_{T}$ is a random variable,
namely the average of the value $\SIGMA[\nix|r_T]$ assigned to the root variable under the Gibbs measure $\mu_T$ given the values of the variables at
distance at least $\ell$ from $r_T$.
Hence, (\ref{eqGibbsUniquenessCondition}) requires that $\bck{\SIGMA[\nix|r_T]\big|\nabla_\ell T}_{T}$ is at total variation distance less than $\eps$ from $p_T$
for {\em every} possible assignment of the variables at distance at least $\ell$ from $r_T$, i.e., for every ``boundary condition''.
More generally, we say that $T\in\fT\cap\cV$ is $(\eps,\ell)$-unique with respect to $p$ if the finite template $\partial^{\ell+1}T$ has this property.
(That $\partial^{\ell+1}T$ is finite follows once more from the fact that all degrees are bounded by $\Delta$.)
Further, we call the measure
$\thet\in\cP(\fT)$ {\bem Gibbs-unique} with respect to $p$ if for any $\eps>0$ we have
$$\lim_{\ell\to\infty}\pr\brk{\T\mbox{ is $(\eps,\ell)$-unique w.r.t.\ }p}=1.$$
\begin{corollary}\label{Thm_smm}
If $\thet\in\cP(\fT)$ is Gibbs-unique with respect to $p$, then
$\lim_{n\to\infty}\frac1n\Erw[\ln Z(\G)]=\cB_\thet(p)$.
\end{corollary}
\begin{proof}
If $\thet$ is Gibbs-unique with respect to $p$, then (\ref{eqLocalWeakConvergence3}) guarantees that $\underline\cM$ has
non-reconstruction with respect to $p$. Indeed, given $\eps>0$, $\ell>0$ and a graph $G$ let $\mathcal{E}(G,\eps,\ell)$ denote the set of vertices $x \in V_n$ for which $\partial^\ell[G,x]$ is acyclic and $(\eps,\ell)$-unique. Then we have
\begin{align*} \frac1n\sum_{x\in V_n}\bck{\TV{\bck{\SIGMA[\nix|x]\big|\nabla_\ell(\G,x)}_{\G}-p_{\ell,\partial^\ell[\G,x]}}}_{\G} &\leq \frac1n\sum_{x\in V_n} \left \| \TV{\bck{\SIGMA[\nix|x]\big|\nabla_\ell(\G,x)}_{\G}-p_{\ell,\partial^\ell[\G,x]}} \right\|_\infty
\\ & \leq \eps + \left( 1 - \frac{|\mathcal{E}(\vec G,\eps,\ell)|}{n} \right) , \end{align*}
and by (\ref{eqLocalWeakConvergence3}) $\pr \brk{|\mathcal{E}(\vec G,\eps,\ell)| \leq (1-\eps)n}$ tends to $0$ as $n \to \infty$.
Similarly, because the distribution of the depth-$\ell$ neighborhood structure in the planted distribution $\hat\G_\ell$ coincides
with $\thet_\ell$, Gibbs-uniqueness implies that the planted model has non-reconstruction with respect to $p$ as well.
Therefore, the assertion follows from
\Thm s~\ref{Thm_symUpperBound} and~\ref{Thm_nonReconstruction}.
\end{proof}
In problems such as the random $k$-SAT model, the Ising model or the Potts antiferromagnet that come with an ``inverse temperature'' parameter $\beta\geq0$,
Gibbs uniqueness is always satisfied for sufficiently small values of $\beta$.
Consequently, \Cor~\ref{Thm_smm} shows that the cavity method always yields the correct value of $\lim_{n\to\infty}\frac1n\Erw[\ln Z(\G)]$
in the case of small $\beta$, the so-called ``high temperature'' case in physics jargon.
Furthermore, if the Gibbs uniqueness condition is satisfied then there is a canonical way of constructing the marginal assignment $p$
by means of the Belief Propagation algorithm~\cite[\Chap~14]{MM}.
Hence, \Cor~\ref{Thm_smm} provides a comprehensive answer in this case.
\subsection{Meet the expectation}\label{Sec_expectations}
We proceed to prove \Thm~\ref{Thm_symUpperBound}.
To this end, we need to get a handle on the conditional expectation of $Z$ given $\cT_\ell$
and for this purpose we need to study the possible empirical distributions of the values assigned to the variables
of a concrete factor graph $G\in\cG(\cM_n)$.
Specifically, by a {\bem $(G,\ell)$-marginal sequence} we mean a map $q:\fT_\ell\to\bigcup_{j=1}^\Delta\cP(\Omega^j)$, $T\mapsto q_T$ such that
\begin{description}
\item[MS1] $q_T\in\cP(\Omega)$ if $T\in\cV\cap\fT_\ell$,
\item[MS2] $q_T\in\cP(\Omega^{d_T})$ if $T\in\cF\cap\fT_\ell$,
\item[MS3] for all $T\in\fT_\ell\cap\cV$ we have
\begin{align}\label{eqMargsWorkOut}
\sum_{T'\in\fT_\ell\cap\cF}\sum_{j\in[d_{T'}]}\lambda_{G,\ell}(T')\vec{1}\{\partial^\ell[T'\reroot j]=T\}(q_{T'\marg j}-q_T)&=0.
\end{align}
\end{description}
Thus, $q$ assigns each tree $T\in\fT_\ell$ rooted at a variable node a distribution on $\Omega$ and each tree $T\in\fT_\ell$ rooted at a constraint node
a distribution on $\Omega^{d_T}$, just like in \Def~\ref{Def_margAssign}.
Furthermore, the consistency condition (\ref{eqMargsWorkOut}) provides that for a given $T$ rooted at a variable the average
marginal distribution over all $T',j$ such that $\partial^\ell[T'\reroot j]=T$ is equal to $q_T$.
However, in contrast to condition {\bf MA2} from \Def~\ref{Def_margAssign} {\bf MS3} does not require
this marginalisation to work out for every $T',j$ individually.
Suppose now that $U\subset F_n$ is a set of constraint nodes such that $d(a)=d_0$ for all $a\in U$.
Then for $\sigma:V_n\to\Omega$ we let
\begin{align*}
\sigma[(\omega_1,\ldots,\omega_{d_0})|U]&=\frac1{|U|}\sum_{a\in U}\prod_{j=1}^{d_0}\vec{1}\{\sigma(\partial(G,a,j))=\omega_j\}.
\end{align*}
Thus, $\sigma[\nix|U]\in\cP(\Omega^{d_0})$ is the empirical distribution of the sequences
$\{(\sigma(\partial(G,a,1)),\ldots,\sigma(\partial(G,a,d_0))):a\in U\}$.
A factor graph $G$ and $\sigma:V_n\to\Omega$ induce a $(G,\ell)$-marginal sequence $q_{G,\sigma,\ell}$ canonically, namely the empirical distributions
\begin{align*}
q_{G,\sigma,\ell,T}&={\sigma[\nix|\{x\in V_n:\partial^\ell[G,x]=T\}]}&&\mbox{for }T\in\fT_\ell\cap\cV,\\
q_{G,\sigma,\ell,T}&={\sigma[\nix|\{a\in F_n:\partial^{\ell+1}[G,a]=T\}]}&&\mbox{for }T\in\fT_\ell\cap\cF.
\end{align*}
Conversely, given a $(G,\ell)$-marginal sequence $q$ let $\Sigma(G,\ell,q,\delta)$ be the set of all $\sigma:V_n\to\Omega$ such that
for all $T\in\fT_\ell\cap\cV$, $T'\in\fT_\ell\cap\cF$ we have
\begin{align}\label{eqZellqdelta}
\TV{q_{G,\sigma,\ell,T}-q_T}&\leq \delta,&
\TV{q_{G,\sigma,\ell,T'}-q_{T'}}&\leq \delta.
\end{align}
Moreover, let
\begin{align*}
Z_{\ell,q,\delta}(G)&=Z(G)\bck{\vec{1}\{\SIGMA\in\Sigma(G,\ell,q,\delta)\}}_G.
\end{align*}
Finally, define
\begin{align*}
\cB_{G,\ell}(q)&=\sum_{T\in\fT_\ell\cap\cV}(1-d_T)H(q_T)\lambda_{G,\ell}(T|\cV)
+\frac{|F_n|}{|V_n|}\sum_{T\in\fT_\ell\cap\cF}
\brk{H(q_T)+\bck{\ln\psi_T(\SIGMA)}_{q_T}
-\KL{q_{T}}{\bigotimes_{j\in[d_T]}q_{\partial^\ell[T\reroot j]}}}\lambda_{G,\ell}(T|\cF).
\end{align*}
\noindent
In \Sec~\ref{Sec_crazyConfigs} we are going to prove
the following formula for the expectation of $Z_{\ell,q,\delta}(G)$.
\begin{proposition}\label{Lemma_fmCalc}
For any $\eps>0$, $\ell>0$ there is $\delta>0$ such that for large enough $n$ the following is true.
Assume that $G\in\cG(\cM_n)$ is $100\ell$-acyclic and let $q$ be a $(G,\ell)$-marginal sequence.
Then
\begin{align*}
\abs{n^{-1}\ln\Erw[\vec{1}\{\cA_{2\ell+5}\}Z_{\ell,q,\delta}(\G)|\G\ism_\ell G]-\cB_{G,\ell}(q)}&<\eps.
\end{align*}
\end{proposition}
We are going to be particularly interested in the expectation of $Z_{\ell,q,\delta}(\G)$ for $q$ ``close'' to a specific marginal assignment $p$.
Formally, a $(G,\ell)$-marginal sequence $q$ is {\bem $(\eps,\ell)$-judicious} with respect to $p$ if
\begin{align*}
\sum_{T\in\fT_\ell\cap\cV}\lambda_{G,\ell}[T|\cV]\TV{q_T-p_T}+
\sum_{T\in\fT_\ell\cap\cF}\sum_{j\in[d_T]}\lambda_{G,\ell}[T|\cF]\TV{q_{T\marg j}-p_{\ell,\partial^\ell[T\reroot j]}}&<\eps.
\end{align*}
We say that $(G,\sigma)$ is {\bem $(\eps,\ell)$-judicious} with respect to $p$ if the empirical distribution $q_{G,\sigma,\ell}$ is $(\eps,\ell)$-judicious w.r.t.\ $p$.
\begin{corollary}\label{Cor_fmCalc}
For any $\alpha>0$ there exist $\eps>0,\ell>0$ such that for all $0<\beta,\gamma<\eps$ and all $l\geq\ell$ the following is true.
Let $\cL(\gamma,l)$ be the event that $\TV{\lambda_{\G,l}-\thet_l}<\gamma$.
Then
\begin{align*}
\limsup_{n\to\infty}\frac1n\ln\Erw\brk{\vec{1}\{\G\in\cL(\gamma,l)\cap\cA_{100l}\}Z(\G)\bck{\vec{1}\{(\G,\SIGMA)\mbox{ is $(\beta,l)$-judicious w.r.t.\ $p$}\}}_{\G}}
&\leq\cB_\thet(p)+\alpha.
\end{align*}
\end{corollary}
\begin{proof}
Pick a small enough $\eps=\eps(\alpha)>0$.
By \Lem~\ref{Lemma_martingale} there exists $\ell$ such that
$\Erw[\TV{p_{l,\partial^l\T}-p_{\T}}|\cV]<\eps$ \mbox{for all $l\geq\ell$.}
Now, fix any $0<\beta,\gamma<\eps$, $l\geq\ell$, pick $\xi=\xi(\beta,l)$ small enough and assume that $n$ is big enough.
Let $Q(G)$ be the set of all $(G,l)$-marginal sequences that are $(\beta,l)$-judicious w.r.t.\ $p$.
Because $\fT_l$ is a finite set, there exists a number $N=N(\xi)$
such that for every factor graph $G$ there is a subset $Q_*(G)\subset Q(G)$ of size $|Q_*(G)|\leq N$ such that the following is true.
If $(G,\sigma)$ is $(\beta,l)$-judicious w.r.t.\ $p$, then $\sigma\in\bigcup_{q\in Q_*(G)}\Sigma(G,l,q,\xi)$.
Therefore, for all $G$ we have
\begin{align}\label{eqCor_fmCalc1}
Z(G)\bck{\vec{1}\{\mbox{$(G,\SIGMA)$ is $(\beta,l)$-judicious w.r.t.\ $p$}\}}_G&\leq
N\max_{q\in Q_*(G)}Z_{l,q,\xi}(G).
\end{align}
\Prop~\ref{Lemma_fmCalc} and (\ref{eqCor_fmCalc1}) imply that
for $\xi$ small enough and $n$ large enough for any factor graph $G\in\cA_{100l}$ there is $q^G\in Q_*(G)$ such that
\begin{align}\label{eqCor_fmCalc2}
\ln\Erw[\vec{1}\{\cA_{100l}\}Z(\G)\bck{\vec{1}\{\mbox{$(\G,\SIGMA)$ is $(\beta,l)$-judicious w.r.t.\ $p$}\}}_{\G}|\G\ism_l G]
&\leq\cB_{G,l}(q^G)+\alpha n/2.
\end{align}
To proceed, we recall that the Kullback-Leibler divergence is non-negative.
Hence, (\ref{eqCor_fmCalc2}) implies that for large $n$,
\begin{align}\nonumber
&\ln\Erw[\vec{1}\{\cA_{100l}\}Z(\G)\bck{\vec{1}\{\mbox{$(\G,\SIGMA)$ is $(\beta,l)$-judicious w.r.t.\ $p$}\}}_{\G}|\G\ism_l G]\\
&\qquad\leq\alpha n/2+
\sum_{T\in\fT_l\cap\cV}(1-d_T)H(q_T^G)\lambda_{G,l}(T|\cV)
+\frac{|F_n|}{|V_n|}\sum_{T\in\fT_l\cap\cF}
\brk{H(q_T^G)+\bck{\ln\psi_T(\SIGMA)}_{q_T^G}
}\lambda_{G,l}(T|\cF).
\label{eqCor_fmCalc3}
\end{align}
Further, for any $j\in[\Delta]$ the function $\nu\in\cP(\Omega^j)\mapsto H(\nu)$ is uniformly continuous
because $\cP(\Omega^j)$ is compact.
By the same token, $\nu\mapsto\bck{\ln\psi(\SIGMA)}_{\nu}$ is uniformly continuous for any $\psi\in\Psi$.
Consequently,
if $G\in\cL(\gamma,l)$ for some $\gamma<\eps$ and $\eps$ is chosen small enough, then (\ref{eqCor_fmCalc3}) entails
\begin{align}
\ln\Erw[\vec{1}\{\cA_{100l}\}Z(\G)\bck{\vec{1}\{\mbox{$(\G,\SIGMA)$ is $(\beta,l)$-judicious w.r.t.\ $p$}\}}_{\G}|\G\ism_l G]
\leq\cB_\thet(p)+\alpha n.
\label{eqCor_fmCalc4}
\end{align}
Finally, the assertion follows from (\ref{eqCor_fmCalc4}) and Bayes' rule.
\end{proof}
\begin{corollary}\label{Cor_fmCalc_lower}
For any $\alpha>0$ there exists $\ell>0$ such that for all $l\geq \ell$ we have
\begin{align*}
\lim_{n\to\infty}\pr\brk{
\frac1n\ln\Erw\brk{Z(\G)|\cT_l}\leq\cB_\thet(p)-\alpha\bigg|\cA_{100 l}}=0.
\end{align*}
\end{corollary}
\begin{proof}
Choose a small $\eps=\eps(\alpha)>0$.
By \Lem~\ref{Lemma_martingale} there exists $\ell$ such that $\Erw[\TV{p_{l,\partial^l\T}-p_{\T}}|\cV]<\eps$ for all $l\geq \ell$.
Hence, fix some $l\geq\ell$ and define
$q_T=p_{l,\partial^{l}T}\in\cP(\Omega)$ for $T\in\fT_l\cap\cV$.
Moreover, for $T\in\fT_l\cap\cF$ let $q_{T}\in\cP(\Omega^{d_T})$ be such that
$H(q_{T})+\bck{\ln\psi_T(\SIGMA)}_{q_{T}}$ is maximum subject to the condition that $q_{T\marg j}=q_{\partial^l[T\reroot j]}$ for all $j\in[d_T]$
(cf.\ (\ref{eqConstraintMargs})).
Further, pick $\delta=\delta(\eps,l)>0$ small enough.
Then \Prop~\ref{Lemma_fmCalc} implies that for large $n$ and any $G\in\cA_{100 l}$
\begin{align}\nonumber
&\ln\Erw[Z_{l,q,\delta}(\G)|\G\ism_l G]\geq\cB_{G,l}(q)-\alpha n/2\\
&\qquad=-\alpha n/2+\sum_{T\in\fT_l\cap\cV}(1-d_T)H(q_T)\lambda_{G,l}(T|\cV)
+\frac{|F_n|}{|V_n|}\sum_{T\in\fT_l\cap\cF}
\brk{H(q_T)+\bck{\ln\psi_T(\SIGMA)}_{q_T}}\lambda_{G,l}(T|\cF),
\label{eqCor_fmCalc_lower1}
\end{align}
because the definition of $q$ ensures that the Kullback-Leibler divergences vanish.
Since $\TV{\thet_l-\lambda_{\G,l}}<\eps$ with high probability by (\ref{eqLocalWeakConvergence3})
and $\Erw[\TV{p_{l,\partial^l\T}-p_{\T}}|\cV]<\eps$, the assertion follows from (\ref{eqCor_fmCalc_lower1}).
\end{proof}
\subsection{Proof of \Thm~\ref{Thm_symUpperBound}}\label{Sec_symUpperBound}
We begin by spelling out the following consequence of the symmetry assumption.
\begin{lemma}\label{Lemma_psymmetric}
If $\underline\cM$ is $p$-symmetric, then for any $\eps>0$ for all sufficiently large $\ell$ (\ref{eqLemma_psymmetric_1}) below holds;
if the planted distribution of $\underline\cM$ is $p$-symmetric as well, then so does (\ref{eqLemma_psymmetric_2}):
\begin{align}\label{eqLemma_psymmetric_1}
\lim_{n\to\infty}\pr\brk{\sum_{x\in V_n}\TV{\mu_{\G\marg x}-p_{\ell,\partial^\ell[\G,x]}}>\eps n}&
=\lim_{n\to\infty}\pr\brk{\mu_{\G}\mbox{ fails to be $(\eps,2)$-symmetric}}=0\quad\mbox{and}\\
\lim_{n\to\infty}\pr\brk{\sum_{x\in V_n}\TV{\mu_{\hat\G_\ell\marg x}-p_{\ell,\partial^\ell[\hat\G_\ell,x]}}>\eps n}
&=\lim_{n\to\infty}\pr\brk{\mu_{\hat\G_\ell}\mbox{ fails to be $(\eps,2)$-symmetric}}=0.\label{eqLemma_psymmetric_2}
\end{align}
\end{lemma}
\begin{proof}
Choose $\eta=\eta(\eps)>0$ small enough.
For an integer $\ell>0$ consider the event
$${\mathcal E}_\ell=\cbc{G:\sum_{x,y\in V_n}\TV{\mu_{G\marg\{x,y\}}-p_{\ell,\partial^\ell[G,x]}\otimes p_{\ell,\partial^\ell[G,y]}}<\eta^2 n^2}.$$
If $\underline\cM$ is $p$-symmetric, then $\lim_{n\to\infty}\pr\brk{\G\in{\mathcal E}_\ell}=1$ for sufficiently large $\ell$.
Similarly, if the planted distribution is $p$-symmetric, then
$\lim_{n\to\infty}\pr\brk{\hat\G_\ell\in{\mathcal E}_\ell}=1$ for large $\ell$.
Hence, assume that $G\in{\mathcal E}_\ell$.
Then by the triangle inequality, for any $\omega\in\Omega$,
\begin{align*}
\frac1n\sum_{x\in V_n}\abs{p_{\ell,\partial^\ell[G,x]}(\omega)-\mu_{G\marg x}(\omega)}
&=\frac1{n^2}\sum_{x\in V_n}\abs{\brk{\sum_{y\in V_n}\sum_{\omega'\in\Omega}p_{\ell,\partial^\ell[G,x]}(\omega)p_{\ell,\partial^\ell[G,y]}(\omega')}-
\brk{\sum_{y\in V_n}\sum_{\omega'\in\Omega}\mu_{G\marg\{x,y\}}(\omega,\omega')}}\leq\eta^2.
\end{align*}
Therefore,
\begin{align}\label{eqLemma_psymmetric2}
\frac1n\sum_{x\in V_n}\TV{p_{\ell,\partial^\ell[G,x]}-\mu_{G\marg x}}&\leq\eta^2|\Omega|<\eta.
\end{align}
Furthermore, by (\ref{eqLemma_psymmetric2}) and the triangle inequality,
\begin{align}\label{eqLemma_psymmetric3}
\frac1{n^2}\sum_{x,y\in V_n}\TV{\mu_{G\marg x}\otimes\mu_{G\marg y}-p_{\ell,\partial^\ell[G,x]}\otimes p_{\ell,\partial^\ell[G,y]}}\leq2 \eta.
\end{align}
Since $G\in{\mathcal E}_\ell$, (\ref{eqLemma_psymmetric3}) entails that
\begin{align*}
\frac1{n^2}\sum_{x,y\in V_n}\TV{\mu_{G\marg x}\otimes\mu_{G\marg y}-\mu_{G\marg\{x,y\}}}\leq 3 \eta<\eps,
\end{align*}
i.e., $G$ is $(\eps,2)$-symmetric.
\end{proof}
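The passage from (\ref{eqLemma_psymmetric2}) to (\ref{eqLemma_psymmetric3}) rests on the elementary inequality $\TV{\mu\otimes\mu'-p\otimes p'}\leq\TV{\mu-p}+\TV{\mu'-p'}$. The following minimal Python sketch checks this numerically on randomly generated toy distributions; the convention that total variation equals half the $\ell_1$ norm is an assumption of the sketch, which is an illustration only, not part of the proof.

```python
import random

# Numerical check of TV(mu (x) nu, p (x) q) <= TV(mu, p) + TV(nu, q),
# with total variation taken as half the l1 norm (assumed convention).
def tv(p, q):
    return 0.5 * sum(abs(p[w] - q[w]) for w in p)

def tensor(p, q):
    # product distribution on pairs
    return {(v, w): p[v] * q[w] for v in p for w in q}

def rand_dist(omega, rng):
    x = [rng.random() for _ in omega]
    s = sum(x)
    return dict(zip(omega, (v / s for v in x)))

rng = random.Random(0)
omega = ["a", "b", "c"]
for _ in range(1000):
    mu, nu, p, q = (rand_dist(omega, rng) for _ in range(4))
    assert tv(tensor(mu, nu), tensor(p, q)) <= tv(mu, p) + tv(nu, q) + 1e-12
```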
\begin{lemma}\label{Lemma_judicious}
There is a number $\eps_0=\eps_0(\Delta,\Omega,\Psi,\Theta)$ such that for all
$0<\eps<\eps_0$, $\ell>0$ there exists $\chi>0$ such that for large enough $n$ the following is true.
If $G\in\cG(\cM_n)$ is a $(2\ell+5)$-acyclic factor graph such that
\begin{align}\label{eqLemma_judicious1}
\sum_{x\in V_n}\TV{\mu_{G\marg x}-p_{\ell,\partial^\ell[G,x]}}&<\eps^3n
\end{align}
and $\mu_G$ is $(\chi,2)$-symmetric,
then $\bck{\vec{1}\{(G,\SIGMA)\mbox{ is $(\eps,\ell)$-judicious w.r.t.\ $p$}\}}_{G}\geq1/2$.
\end{lemma}
\begin{proof}
Pick $\delta=\delta(\ell,\eps)>0$ small, $\beta=\beta(\delta)$ and $\gamma = \gamma(\beta)$ smaller and
$\chi = \chi(\gamma)>0$ smaller still and assume that $n>n_0(\chi)$.
Let $\vec V_0$ be the partition of $V_n$ such that $x,y\in V_n$ belong to the same class iff $\partial^{\ell+2}[G,x]=\partial^{\ell+2}[G,y]$.
By \Thm~\ref{Thm_decomp} there exists a refinement $\vec V$ of $\vec V_0$ such that $\mu_G$ is $\gamma$-homogeneous
with respect to $(\vV,\vS)$ for some partition $\vS$ of $\Omega^n$ such that $\#\vV+\#\vS\leq N=N(\gamma)$.
We may index the classes of $\vec V$ as $V_{T,i}$ with $T=\partial^{\ell+2}[G,x]$ for all $x$ in the class and $i\in[N_T]$ for some integer $N_T$.
Let $J$ be the set of all $j\in[\#\vS]$ such that $\mu(S_j)\geq\delta^7/N$ and $\mu[\nix|S_j]$ is $\gamma$-regular.
Then by {\bf HM1}
\begin{equation}\label{eqNothingMuch}
\sum_{j\in J}\mu(S_j)\geq1-\delta^6.
\end{equation}
Further, \Lem~\ref{Lemma_regularSymmetric} shows that $S_j$ is a $(\beta,2)$-state if $j\in J$.
Therefore, choosing $\chi$ small enough, we obtain from \Cor~\ref{Cor_states2} that
\begin{align*}
\frac1n\sum_{x\in V_n}\TV{\mu_{G\marg x}[\nix|S_j]-\mu_{G\marg x}}&<\delta^7
\quad\mbox{
for all $j\in J$.}
\end{align*}
Therefore, by (\ref{eqLemma_judicious1}) and the triangle inequality, for $j\in J$
we get
\begin{align*}
\frac1n\sum_{x\in V_n}\TV{\mu_{G\marg x}[\nix|S_j]-p_{\ell,\partial^\ell[G,x]}}\leq
\eps^3+\frac1n\sum_{x\in V_n}\TV{\mu_{G\marg x}[\nix|S_j]-\mu_{G\marg x}}&<\eps^3+3\delta^7<2\eps^3.
\end{align*}
Consequently, by (\ref{eqNothingMuch}),
Bayes' rule and the triangle inequality,
summing over all $T\in\fT_{\ell+2}\cap\cV$ and $i\in[N_T]$ we get
\begin{align}
\frac1n \sum_{T,i}|V_{T,i}|\bck{\TV{\SIGMA[\nix|V_{T,i}]-p_{\ell,T}}}_G&=
\frac1n \sum_{T,i}\sum_{j\in[\#\vS]}|V_{T,i}|\mu_G(S_j)\bck{\TV{\SIGMA[\nix|V_{T,i}]-p_{\ell,T}}|S_j}_G\nonumber\\
&\leq\delta^7+\frac1n\sum_{T,i}\sum_{j\in[\#\vS]}|V_{T,i}|\mu_G(S_j)\TV{\bck{\SIGMA[\nix|V_{T,i}]|S_j}_G-p_{\ell,T}}
\qquad\mbox{[by {\bf HM2}]}\nonumber\\
&\leq\delta^7+\frac1n\sum_{T,i}\sum_{x\in V_{T,i}}\sum_{j\in[\#\vS]}
\mu_G(S_j)\TV{\mu_{G\marg x}[\nix|S_j]-p_{\ell,\partial^\ell[G,x]}}<3\eps^3.
\label{eqLemma_judicious2}
\end{align}
Applying the triangle inequality once more, we find
\begin{align}\label{eqLemma_judicious3}
\sum_{T\in\fT_\ell\cap\cV}\lambda_{G,\ell}[T|\cV]\bck{\TV{q_{G,\SIGMA,\ell,T}-p_{\ell,T}}}_G&\leq
\frac1n\sum_{T,i}|V_{T,i}|\bck{\TV{\SIGMA[\nix|V_{T,i}]-p_{\ell,T}}}_G<3\eps^3.
\end{align}
Further, consider $T\in\fT_\ell\cap\cF$ such that $\lambda_{G,\ell}[T|\cF]>0$ and let $j\in[d_T]$.
Because $G$ is $(2\ell+5)$-acyclic, there exists a set $\Gamma(T,j)\subset\fT_{\ell+2}\cap\cV$ with the following two properties.
First, for every constraint node $a$ with $\partial^{\ell+1}[G,a]=T$ the variable node $x=\partial(G,a,j)$ satisfies $\partial^{\ell+2}[G,x]\in\Gamma(T,j)$.
Second, for every variable node $x$ with $\partial^{\ell+2}[G,x]\in\Gamma(T,j)$ there is a constraint node $a$
with $\partial^{\ell+1}[G,a]=T$ such that $\partial(G,a,j)=x$.
For $R\in\Gamma(T,j)$ let $m_{R,T,j}$ be the number of constraint nodes $a$ with $\partial^{\ell+1}[G,a]=T$ such that
$x=\partial(G,a,j)$ satisfies $\partial^{\ell+2}[G,x]=R$.
Then by the triangle inequality,
\begin{align}
&\hspace{-2cm}\sum_{T\in\fT_\ell\cap\cF}\sum_{j\in[d_T]}\lambda_{G,\ell}[T|\cF]\bck{\TV{q_{G,\SIGMA,\ell,T\marg j}-p_{\ell,\partial^\ell[T\reroot j]}}}_G
\nonumber\\
&\leq\sum_{T\in\fT_\ell\cap\cF}\sum_{j\in[d_T]}\sum_{R\in\Gamma(T,j)}\frac{m_{R,T,j}}{|F_n|}
\bck{\TV{\SIGMA[\nix|V_{R,i}]-p_{\ell,R}}}_G\nonumber\\
&\leq\frac{\Delta^2}n\sum_{R\in\fT_{\ell+2}\cap\cV}\sum_{i\in[N_R]}\bck{\TV{\SIGMA[\nix|V_{R,i}]-p_{\ell,R}}}_G;
\label{eqLemma_judicious4}
\end{align}
the last inequality follows because all degrees are between one and $\Delta$.
Finally, the assertion follows from (\ref{eqLemma_judicious2}), (\ref{eqLemma_judicious3}) and (\ref{eqLemma_judicious4}).
\end{proof}
We proceed by proving the upper bound and the lower bound statement from \Thm~\ref{Thm_symUpperBound} separately.
Strictly speaking, the proof of the lower bound implies the upper bound as well.
But presenting the arguments separately makes them slightly easier to follow.
\begin{proof}[Proof of \Thm~\ref{Thm_symUpperBound}, upper bound]
For $\eps,l>0$ let
${\mathcal E}(\eps,l)=\{\sum_{x\in V_n}\TV{\mu_{\G\marg x}-p_{l,\partial^l[\G,x]}}<\eps n\}.$
Additionally,
let $\cS(\chi)$ be the event that $\mu_{\G}$ is $(\chi,2)$-symmetric
and let $\cL(\eps,l)$ be the event that $\TV{\lambda_{\G,l}-\thet_l}<\eps$.
We assume that $\underline\cM$ is $p$-symmetric.
Given $\alpha>0$ choose a small enough $\eps>0$ and a large enough $\ell>0$ as promised by \Cor~\ref{Cor_fmCalc}.
By \Lem~\ref{Lemma_martingale} there is $\ell_*>\ell$ such that
\begin{align}\label{eqmyMartingale1}
\Erw\brk{\TV{p_{l,\partial^l\T}-p_{\T}}|\cV}<\eps^4\qquad\mbox{ for all $l\geq\ell_*$}.
\end{align}
Let $\chi=\chi(\eps,\ell_*)$ be the number provided by \Lem~\ref{Lemma_judicious}.
Then \Lem~\ref{Lemma_psymmetric} implies that
$\lim_{n\to\infty}\pr\brk{\G\in\cS(\chi)}=1.$
Similarly, \Lem~\ref{Lemma_psymmetric} implies that for large enough $l$ we have
$\lim_{n\to\infty}\pr\brk{\G\in{\mathcal E}(\eps^4,l)}=1.$
Hence,
the local convergence assumption~(\ref{eqLocalWeakConvergence3}) implies that for all large enough $l$,
\begin{align}
\lim_{n\to\infty}\pr\brk{\G\in\cS(\chi)\cap\cL(\eps^4,l)\cap{\mathcal E}(\eps^4,l)}&=1\label{eqmyMartingale4}.
\end{align}
Further, we claim that $\cL(\eps^4,l)\cap{\mathcal E}(\eps^4,l)\subset\cL(\eps^4,\ell_*)\cap{\mathcal E}(\eps^3,\ell_*)$ for all $l\geq\ell_*$.
Indeed, if $l \geq \ell_*$, then $\cL(\eps^4,l)\subset\cL(\eps^4,\ell_*)$.
Moreover, if $G\in \cL(\eps^4,l)\cap{\mathcal E}(\eps^4,l)$, then with $\vec x\in V_n$ chosen uniformly at random we find
\begin{align*}
\Erw\TV{\mu_{G\marg\vec x}-p_{\ell_*,\partial^{\ell_*}[G,\vec x]}}&\leq
\Erw\TV{\mu_{G\marg\vec x}-p_{l,\partial^{l}[G,\vec x]}}+\Erw\TV{p_{\ell_*,\partial^{\ell_*}[G,\vec x]}-p_{l,\partial^{l}[G,\vec x]}}\\
&\leq\eps^4+\sum_{T\in\fT_l\cap\cV}\lambda_{G,l}(T)\TV{p_{l,T}-p_{\ell_*,\partial^{\ell_*}T}}\\
&\leq\eps^4+2\TV{\thet_l[\nix|\cV]-\lambda_{G,l}[\nix|\cV]}+\Erw\brk{\TV{p_{l,\partial^l\T}-p_{\ell_*,\partial^{\ell_*}\T}}|\cV}<\eps^3.
\end{align*}
Consequently, combining (\ref{eqmyMartingale1}) and (\ref{eqmyMartingale4}), we find that the event
$\cB(\alpha)=\cS(\chi)\cap\cL(\eps^4,\ell_*)\cap{\mathcal E}(\eps^3,\ell_*)$ satisfies
\begin{align}
\lim_{n\to\infty}\pr\brk{\G\in\cB(\alpha)
}&=1.\label{eqmyMartingale6}
\end{align}
Further, if
$G\in\cB(\alpha)\cap\cA_{100\ell_*}$,
then
$Z(G)\leq2Z(G)\bck{\vec{1}\{(G,\SIGMA)\mbox{ is $(\eps,\ell_*)$-judicious w.r.t.\ $p$}\}}_{G}$ by
\Lem~\ref{Lemma_judicious} and the choice of $\chi$.
Therefore,
\begin{align}\label{eqImjudicious_untensorised}
\Erw[\vec{1}\{\G\in\cB(\alpha)\cap\cA_{100\ell_*}\}Z(\G)]&\leq2
\Erw\brk{\vec{1}\{\G\in\cL(\eps^4,\ell_*)\cap\cA_{100\ell_*}\}
\bck{\vec{1}\{(\G,\SIGMA)\mbox{ is $(\eps,\ell_*)$-judicious w.r.t.\ $p$}\}}_{\G}Z(\G)}.
\end{align}
Since $\ell_*>\ell$, for large enough $n$ \Cor~\ref{Cor_fmCalc} and (\ref{eqImjudicious_untensorised}) yield
\begin{align}\label{eqProofThm_symUpperBound2}
\Erw[\vec{1}\{\G\in\cB(\alpha)\cap\cA_{100\ell_*}\}Z(\G)]&\leq2\exp(n(\cB_\thet(p)+\alpha)).
\end{align}
Further,
combining (\ref{eqmyMartingale6}) and (\ref{eqProofThm_symUpperBound2}) and using Markov's inequality,
we conclude that
$$\lim_{n\to\infty}\pr\brk{Z(\G)>\exp(n(\cB_\thet(p)+2\alpha))|\cB(\alpha)\cap\cA_{100\ell_*}}=0.$$
Therefore, (\ref{eqmyMartingale6}), the high girth assumption and \Prop~\ref{Lemma_conc} yield
\begin{equation}\label{eqProofThm_symUpperBound99}
\lim_{n\to\infty}\pr\brk{Z(\G)>\exp(n(\cB_\thet(p)+2\alpha))}=0.
\end{equation}
Finally, since $|n^{-1}\ln Z(\G)|$ is bounded by some number $C=C(\Delta,\Omega,\Psi,\Theta)>0$ by the definition (\ref{eqZ}) of $Z$,
(\ref{eqProofThm_symUpperBound99}) implies that
$\limsup_{n\to\infty}n^{-1}\Erw[\ln Z(\G)]\leq\cB_\thet(p)+3\alpha$.
Taking $\alpha\to0$ completes the proof.
\end{proof}
To establish the lower bound we introduce a construction reminiscent of those used in~\cite{DMSS,DSS1,Galanis,MWW,SS}.
Namely, starting from the sequence $\underline\cM$ of $(\Delta,\Omega,\Psi,\Theta)$-models, we define another
sequence $\underline\cM^\otimes=(\cM^\otimes_n)_n$ of models as follows.
Let $\Omega^\otimes=\Omega\times\Omega$ and let us denote pairs $(\omega,\omega')\in\Omega^\otimes$ by $\omega\otimes\omega'$.
Further, for any $\psi:\Omega^h\to(0,\infty)$ we define a function
$$\psi^\otimes:(\Omega^{\otimes})^{h}\to(0,\infty),\qquad
({\omega_1}\otimes{\omega_1'},\ldots,{\omega_{h}}\otimes{\omega_{h}'})\mapsto
\psi(\omega_1,\ldots,\omega_{h})\cdot\psi(\omega_1',\ldots,\omega_{h}').$$
Let $\Psi^\otimes=\{\psi^\otimes:\psi\in\Psi\}$.
Then the $(\Delta,\Omega,\Psi,\Theta)$-model $\cM_n=(V_n,F_n,d_n,t_n,(\psi_a)_{a\in F_n})$ gives rise to the $(\Delta,\Omega^\otimes,\Psi^\otimes,\Theta)$-model
$\cM_n^\otimes=(V_n,F_n,d,t,(\psi_a^\otimes)_{a\in F_n})$.
Clearly, there is a canonical bijection $\cG(\cM)\to\cG(\cM^\otimes)$, $G\mapsto G^\otimes$.
Moreover, the construction ensures that the Gibbs measure $\mu_{G^\otimes}\in\cP(\Omega^{\otimes\,n})$ equals $\mu_G\otimes\mu_G$.
Explicitly, for all $\omega_1,\omega_1',\ldots,\omega_n,\omega_n'\in\Omega$,
\begin{align}\label{eqTensorConstruction1}
\mu_{G^\otimes}(\omega_1\otimes\omega_1',\ldots,\omega_n\otimes\omega_n')=\mu_G(\omega_1,\ldots,\omega_n)\mu_G(\omega_1',\ldots,\omega_n').
\end{align}
In effect, we obtain
\begin{align}\label{eqTensorConstruction2}
Z(G^\otimes)&=Z(G)^2.
\end{align}
Further, writing $\fG^\otimes,\fT^\otimes$ for the $(\Delta,\Omega^\otimes,\Psi^\otimes,\Theta)$-templates and the acyclic
$(\Delta,\Omega^\otimes,\Psi^\otimes,\Theta)$-templates,
we can lift the marginal assignment $p$ from $\fT$ to $\fT^\otimes$ by letting $p^\otimes_{T^\otimes}=p_T\otimes p_T$ for all $T$.
Additionally, let $\thet^\otimes\in\cP(\fT^\otimes)$ be the image of $\thet$ under the map $T\in\fT\mapsto T^\otimes\in\fT^\otimes$ so that
\begin{align}\label{eqBetheSquare}
\cB_{\thet^\otimes}(p^\otimes)&=2\cB_{\thet}(p).
\end{align}
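The identity $Z(G^\otimes)=Z(G)^2$ from (\ref{eqTensorConstruction2}) can be verified directly by enumeration on a toy instance. In the following Python sketch the weight functions and the three-variable factor graph are made up purely for illustration; the lifting of $\psi$ to $\psi^\otimes$ follows the construction above.

```python
import itertools

# toy factor graph: 3 variables over Omega, two binary constraints
Omega = [0, 1]
psi1 = lambda w1, w2: 1.0 + 0.5 * (w1 == w2)      # hypothetical weights
psi2 = lambda w1, w2: 2.0 - (w1 != w2) * 0.7
constraints = [(psi1, (0, 1)), (psi2, (1, 2))]

def Z(constraints, Omega, n):
    # partition function by brute-force enumeration of all assignments
    total = 0.0
    for sigma in itertools.product(Omega, repeat=n):
        w = 1.0
        for psi, scope in constraints:
            w *= psi(*(sigma[x] for x in scope))
        total += w
    return total

# lifted model: Omega_tensor = Omega x Omega, psi_tensor = product of two copies
Omega2 = list(itertools.product(Omega, repeat=2))
def lift(psi):
    return lambda *pairs: (psi(*(p[0] for p in pairs))
                           * psi(*(p[1] for p in pairs)))
constraints2 = [(lift(psi), scope) for psi, scope in constraints]

z, z2 = Z(constraints, Omega, 3), Z(constraints2, Omega2, 3)
assert abs(z2 - z * z) < 1e-9 * z * z   # Z(G_tensor) = Z(G)^2
```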
\begin{proof}[Proof of \Thm~\ref{Thm_symUpperBound}, lower bound]
We assume that $\underline\cM$ is $p$-symmetric and that the same is true of the planted distribution.
For $\eps,l>0$ consider the event
\begin{align}\label{eqEevent}
{\mathcal E}^\otimes(\eps,l)&=\cbc{\frac1{n}\sum_{x\in V_n}\TV{\mu_{\G^\otimes\marg x}-p^\otimes_{l,\partial^l[\G^\otimes,x]}}<\eps}
\end{align}
and let $\cS^\otimes(\chi)$ be the event that $\mu_{\G^\otimes}$ is $(\chi,2)$-symmetric.
Moreover, as before we let $\cL(\eps,\ell)=\{\TV{\lambda_{\G,\ell}-\thet_\ell}<\eps\}$.
Basically, we are going to apply the same argument as in the proof of the upper bound to the random factor graph
$\G^\otimes$ and to $\hat\G_\ell$ for a large enough $\ell$.
Hence, let $\alpha>0$.
Then \Cor~\ref{Cor_fmCalc} applied to $\underline\cM^\otimes$
yields a small $\eps=\eps(\alpha)>0$ and a large $\ell=\ell(\alpha)>0$.
Moreover, \Cor~\ref{Cor_fmCalc_lower} provides a large $\ell'(\alpha)>0$.
Further, by \Lem~\ref{Lemma_martingale} and (\ref{eqTensorConstruction1}) there exists $\ell_*>\ell+\ell'$ such that
\begin{align}\label{eqmyMartingale1Tensor}
\Erw\brk{\TV{p_{l,\partial^l\T}-p_{\T}}|\cV}+
\Erw\brk{\TV{p^\otimes_{l,\partial^l\T^\otimes}-p^\otimes_{\T^\otimes}}|\cV}<\eps^4\qquad\mbox{ for all $l\geq\ell_*$}.
\end{align}
Applying \Lem~\ref{Lemma_judicious} to $\underline\cM^\otimes$, we obtain $\chi_*=\chi_*(\eps,\ell_*)>0$ and
\Prop~\ref{Prop_tensorise} and \Lem~\ref{Lemma_psymmetric} imply that
\begin{align}\label{eqmyMartingale2Tensor}
\lim_{n\to\infty}\pr\brk{\G\in\cS^\otimes(\chi_*)}&=1.
\end{align}
Further, \Lem~\ref{Lemma_psymmetric} shows that for large enough $l$ we have
\begin{align}\label{eqmyMartingale3Tensor}
\lim_{n\to\infty}\pr\brk{\G\in{\mathcal E}^\otimes(\eps^4,l)}&
=1.
\end{align}
In effect, just as before, (\ref{eqmyMartingale2Tensor}), (\ref{eqmyMartingale3Tensor}) and~(\ref{eqLocalWeakConvergence3}) show that for all large $l$,
\begin{align}
\lim_{n\to\infty}\pr\brk{\G\in\cS^\otimes(\chi_*)\cap\cL(\eps^4,l)\cap{\mathcal E}^\otimes(\eps^4,l)}&=1\label{eqmyMartingale4Tensor}.
\end{align}
As in the upper bound proof, for $l\geq\ell_*$ we have
$\cL(\eps^4,l)\cap{\mathcal E}^\otimes(\eps^4,l)\subset
\cL(\eps^4,\ell_*)\cap{\mathcal E}^\otimes(\eps^3,\ell_*)$.
Therefore, (\ref{eqmyMartingale1Tensor}) and (\ref{eqmyMartingale4Tensor})
show that the event
$\cB^\otimes(\alpha)=\cS^\otimes(\chi_*)\cap\cL(\eps^4,\ell_*)\cap{\mathcal E}^\otimes(\eps^3,\ell_*)$
satisfies
\begin{align}
\lim_{n\to\infty}\pr\brk{\G\in\cB^\otimes(\alpha)}&=1.\label{eqmyMartingale6Tensor}
\end{align}
Define
$\cZ_{\alpha}(\G)=\vec{1}\{\G\in\cB^\otimes(\alpha)\cap\cA_{100\ell_*}\}Z(\G)$.
If $G\in\cB^\otimes(\alpha)\cap\cA_{100\ell_*}$,
then by (\ref{eqTensorConstruction2}),
\Lem~\ref{Lemma_judicious} and the choice of $\chi_*$ we have
\begin{align*}
Z(G)^2&=Z(G^\otimes)\leq2Z(G^\otimes)
\bck{\vec{1}\{(G^\otimes,\SIGMA)\mbox{ is $(\eps,\ell_*)$-judicious w.r.t.\ $p^\otimes$}\}}_{G^\otimes}.
\end{align*}
Hence, we obtain an upper bound on the second moment of $\cZ_\alpha$, namely
\begin{align}\label{eqImjudicious}
\Erw[\cZ_{\alpha}(\G)^2]&\leq2
\Erw\brk{\vec{1}\{\G\in\cL(\eps^4,\ell_*)\cap\cA_{100\ell_*}\}
\bck{\vec{1}\{(\G^\otimes,\SIGMA)\mbox{ is $(\eps,\ell_*)$-judicious w.r.t.\ $p^\otimes$}\}}_{\G^\otimes}
Z(\G^\otimes)}.
\end{align}
Due to (\ref{eqBetheSquare}) and the choice of $\eps,\ell$ and because $\ell_*>\ell$,
\Cor~\ref{Cor_fmCalc} enables us to estimate the r.h.s.\ of (\ref{eqImjudicious}) explicitly, whence
\begin{align}\label{eqProofThm_symUpperBound2}
\Erw[\cZ_{\alpha}(\G)^2]&\leq\exp(n(2\cB_\thet(p)+\alpha)).
\end{align}
As a next step, we are going to
show that
\begin{align}\label{eqProofThm_symLowerBound1}
\Erw[\cZ_{\alpha}(\G)]\geq \exp(n(\cB_\thet(p)-2\alpha)).
\end{align}
Indeed, by \Prop~\ref{Prop_tensorise} and \Lem~\ref{Lemma_psymmetric} we have
\begin{align}\label{eqmyMartingale2_lower}
\lim_{n\to\infty}\pr\brk{\hat\G_l\in\cS^\otimes(\chi_*)}&=1
\end{align}
for large enough $l$.
Similarly, (\ref{eqTensorConstruction1}), the assumption that the planted distribution is $p$-symmetric and \Lem~\ref{Lemma_psymmetric} imply that for $l$ large enough
\begin{align}\label{eqmyMartingale3_lower}
\lim_{n\to\infty}\pr\brk{\hat\G_l\in{\mathcal E}^\otimes(\eps^4,l)}&=1.
\end{align}
Hence, (\ref{eqmyMartingale2_lower}), (\ref{eqmyMartingale3_lower}), the local convergence assumption~(\ref{eqLocalWeakConvergence3})
and the construction (\ref{eqPlantedDistribution}) of the planted distribution imply that for $l$ large enough
\begin{align}
\lim_{n\to\infty}\pr\brk{\hat\G_l\in\cS^\otimes(\chi_*)\cap\cL(\eps^4,l)\cap{\mathcal E}^\otimes(\eps^4,l)}&=1.\label{eqmyMartingale5_lower}
\end{align}
Combining (\ref{eqmyMartingale1Tensor}) and (\ref{eqmyMartingale5_lower}) and using the high girth assumption, we thus obtain for large $l$
\begin{align}
\lim_{n\to\infty}\pr\brk{\hat\G_l\in\cB^\otimes(\alpha)
}&=1.\label{eqmyMartingale7}
\end{align}
Further, \Cor~\ref{Cor_fmCalc_lower} shows that
\begin{align*}
\lim_{n\to\infty}\pr\brk{
\frac1n\ln\Erw\brk{Z(\G)|\cT_{l}}\geq\cB_\thet(p)-\alpha\bigg|\cA_{100l}}=1.
\end{align*}
Thus, (\ref{eqmyMartingale7}) and \Prop~\ref{Prop_plantedModel} yield (\ref{eqProofThm_symLowerBound1}).
Finally, combining (\ref{eqProofThm_symUpperBound2}) and (\ref{eqProofThm_symLowerBound1}) and applying the Paley-Zygmund inequality, we obtain
\begin{align}\label{eqProofThm_symLowerBound2}
\pr\brk{Z(\G)\geq\exp(n(\cB_\thet(p)-4\alpha))}&\geq\pr\brk{\cZ_\alpha(\G)\geq\exp(n(\cB_\thet(p)-4\alpha))}
\geq\frac{\Erw[\cZ_\alpha(\G)]^2}{2\Erw[\cZ_\alpha(\G)^2]}\geq\exp(-10\alpha n).
\end{align}
As this holds for any $\alpha>0$, the assertion follows from (\ref{eqProofThm_symLowerBound2}) and \Prop~\ref{Lemma_conc}.
\end{proof}
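The final step above uses the Paley-Zygmund (second moment) inequality: for a non-negative random variable $X$ and $\theta\in[0,1]$, $\pr\brk{X\geq\theta\Erw X}\geq(1-\theta)^2\Erw[X]^2/\Erw[X^2]$. The following empirical Python sketch checks this on a toy distribution (the choice of distribution and the sampling slack are illustrative assumptions):

```python
import random

# Empirical check of the Paley-Zygmund inequality
#   P(X >= theta * E[X]) >= (1 - theta)^2 * E[X]^2 / E[X^2]
# for a nonnegative toy random variable X (here: the square of an Exp(1)).
rng = random.Random(1)
xs = [rng.expovariate(1.0) ** 2 for _ in range(200000)]
m1 = sum(xs) / len(xs)                       # empirical E[X]
m2 = sum(x * x for x in xs) / len(xs)        # empirical E[X^2]
for theta in (0.0, 0.25, 0.5, 0.75):
    lhs = sum(1 for x in xs if x >= theta * m1) / len(xs)
    rhs = (1 - theta) ** 2 * m1 * m1 / m2
    assert lhs >= rhs - 0.01                 # 0.01 slack for sampling error
```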
\subsection{Proof of \Thm~\ref{Thm_nonReconstruction}}\label{Sec_Thm_nonReconstruction}
The key step of the proof is to establish the following statement.
\begin{lemma}\label{Lemma_nonRe}
For any $\eps>0$ there exists $\delta>0$ such that for any $\ell>0$ there exists $n_0$ such that for all $n>n_0$ the following is true.
Assume that $G\in\cG(\cM_n)$ satisfies
\begin{align}\label{eqThm_nonReconstruction1}
\frac1n\sum_{x\in V_n}\bck{\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_\mu-p_{\ell,\partial^\ell[G,x]}}}_\mu<\delta^9.
\end{align}
Then $G$ is $(\eps,2)$-symmetric and
$\sum_{x\in V_n}\TV{\mu_{G\marg x}-p_{\ell,\partial^\ell[G,x]}}<\eps n$.
\end{lemma}
\noindent
Before we prove \Lem~\ref{Lemma_nonRe} let us show how it implies \Thm~\ref{Thm_nonReconstruction}.
\begin{proof}[Proof of \Thm~\ref{Thm_nonReconstruction}]
If $G\in\cG(\cM_n)$ is $(\eps,2)$-symmetric and satisfies $\sum_{x\in V_n}\TV{\mu_{G\marg x}-p_{\ell,\partial^\ell[G,x]}}<\eps n$, then by the triangle inequality
\begin{align*}
\sum_{x,y\in V_n}\TV{\mu_{G\marg\{x,y\}}-p_{\ell,\partial^\ell[G,x]}\otimes p_{\ell,\partial^\ell[G,y]}}&
\leq\sum_{x,y\in V_n}\TV{\mu_{G\marg\{x,y\}}-\mu_{G\marg x}\otimes\mu_{G\marg y}}+
\TV{\mu_{G\marg x}\otimes\mu_{G\marg y}-p_{\ell,\partial^\ell[G,x]}\otimes p_{\ell,\partial^\ell[G,y]}}\\
&\leq4\eps n^2.
\end{align*}
Therefore, the theorem follows by applying \Lem~\ref{Lemma_nonRe} either to the random factor graph $\G$ or
to the random factor graph $\hat\G_\ell$ chosen from the planted model.
\end{proof}
\begin{proof}[Proof of \Lem~\ref{Lemma_nonRe}]
Let $\gamma=\gamma(\eps)>0$ be sufficiently small.
By \Thm~\ref{Thm_decomp} we can pick $\delta=\delta(\gamma)>0$ small enough so that there exists a partition
$(\vec V,\vec S)$ with $\#\vec V+\#\vec S<\delta^{-1}$ with respect to which $\mu_G$ is $\gamma^4$-homogeneous.
Suppose that $V_i$, $S_j$ are classes such that $|V_i|\geq\delta^{3/2}n$, $\mu_G(S_j)\geq\delta^{3/2}$
and such that $\mu[\nix|S_j]$ is $\gamma^4$-regular on $V_i$.
We claim that
\begin{align}\label{eqMyContradiction}
\frac1{|V_i|}\sum_{x\in V_i}\TV{\mu_{G\marg x}[\nix|S_j]-p_{\ell,\partial^\ell[G,x]}}&<3\gamma.
\end{align}
The assertion is immediate from this inequality.
Indeed, suppose that (\ref{eqMyContradiction}) is true for all $i,j$ such that $|V_i|\geq\delta^{3/2}n$, $\mu_G(S_j)\geq\delta^{3/2}$
and $\mu[\nix|S_j]$ is $\gamma^4$-regular on $V_i$.
Then, because $\#\vV+\#\vS\leq1/\delta$,
\begin{align}\label{eqMyContradiction666}
\sum_{x\in V_n}\TV{\mu_{G\marg x}[\nix|S_j]-p_{\ell,\partial^\ell[G,x]}}<4\gamma n.
\end{align}
Hence, by {\bf HM1} and Bayes' rule,
$\sum_{x\in V_n}\TV{\mu_{G\marg x}-p_{\ell,\partial^\ell[G,x]}}<5\gamma n<\eps n$.
Further, (\ref{eqMyContradiction666}) and \Lem~\ref{Lemma_regularSymmetric} imply that $\mu_G$ is $(\eps,2)$-symmetric (provided that
we pick $\gamma$ small enough).
Thus, we are left to prove (\ref{eqMyContradiction}).
Assume for contradiction that (\ref{eqMyContradiction}) is violated
for $V_i$, $S_j$ such that $|V_i|\geq\delta^{3/2}n$, $\mu_G(S_j)\geq\delta^{3/2}$.
Then by the triangle inequality there is a set $W\subset V_i$ of size at least $\gamma|V_i|$ such that for all $x\in W$ we have
\begin{align*}
\TV{\mu_{G\marg x}[\nix|S_j]-p_{\ell,\partial^\ell[G,x]}}&\geq\gamma.
\end{align*}
For $x\in W$ pick $\omega_x\in\Omega$ that maximises
$|\mu_{G\marg x}[\omega_x|S_j]-p_{\ell,\partial^\ell[G,x]}(\omega_x)|$.
Then by the pigeonhole principle there exist $\omega\in\Omega$ and $W'\subset W$, $|W'|\geq|W|/(2|\Omega|)$, such that either
\begin{align}\label{eqLeadAstray1_prime}
\forall x\in W':\mu_{G\marg x}[\omega|S_j]\geq p_{\ell,\partial^\ell[G,x]}(\omega)+\gamma
&\qquad\mbox{or}\\
\forall x\in W':\mu_{G\marg x}[\omega|S_j]\leq p_{\ell,\partial^\ell[G,x]}(\omega)-\gamma.
\label{eqLeadAstray2}
\end{align}
In particular, after replacing $\omega$ and shrinking $W'$ by at most a factor of $|\Omega|$ if necessary (in case (\ref{eqLeadAstray2}) occurs, both $\mu_{G\marg x}[\nix|S_j]$ and $p_{\ell,\partial^\ell[G,x]}$ sum to one, so the pigeonhole principle yields a common $\omega'$ with $\mu_{G\marg x}[\omega'|S_j]\geq p_{\ell,\partial^\ell[G,x]}(\omega')+\gamma/|\Omega|$), we may assume that
\begin{align}\label{eqLeadAstray1}
\forall x\in W':\mu_{G\marg x}[\omega|S_j]\geq p_{\ell,\partial^\ell[G,x]}(\omega)+\gamma/|\Omega|.
\end{align}
We claim that there is a set $L\subset W'$ of size $|L|=\lceil1/\delta\rceil$ with the following properties.
\begin{enumerate}[(i)]
\item the pairwise distance between any two $x,y\in L$ is at least $10(\ell+1)$.
\item for all $x\in L$ we have
\begin{align}\label{eqW''}
\bck{\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_{G}-p_{\ell,\partial^\ell[G,x]}}}_{\mu_G}<\delta^4.
\end{align}
\end{enumerate}
Indeed,
because $|V_i|\geq\delta^2n$ and $\mu(S_j)\geq\delta^2$ the assumption (\ref{eqThm_nonReconstruction1}) implies that
\begin{align}\label{eqThm_nonReconstruction2}
\sum_{x\in V_i}\bck{\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_{G}-p_{\ell,\partial^\ell[G,x]}}}_{\mu_G[\nix|S_j]}<\delta^5|V_i|.
\end{align}
Since $|W'|\geq\gamma|V_i|/(2|\Omega|^2)\geq\delta|V_i|$,
(\ref{eqThm_nonReconstruction2}) implies that there is a set $W''\subset W'$ of size $|W''|\geq|W'|/2$ such that (\ref{eqW''}) holds for all $x\in W''$.
Now, construct a sequence $W''=W_0''\supset W_1''\supset\cdots$ inductively as follows.
In step $i\geq1$ pick some $x_i\in W_{i-1}''$.
Then $W_i''$ contains $x_i$ and all $y\in W_{i-1}''\setminus\{x_i\}$ whose distance from $x_i$ is greater than $10(\ell+1)$.
Since for each $x_i$ the total number of variable nodes at distance at most $10(\ell+1)$ is bounded by $\Delta^{10(\ell+1)}$ and
$|W_0''|\geq\delta|V_i|/2\geq\delta^3n/2$, the set $\bigcap_{i\geq1}W_i''$ has size at least $\delta^3\Delta^{-10(\ell+1)}n/2>1/\delta$,
provided that $n$ is large enough.
Finally, simply pick any subset $L\subset\bigcap_{i\geq1}W_i''$ of size $|L|=\lceil1/\delta\rceil$.
Consider the event ${\mathcal E}=\cbc{\SIGMA[\omega|L]\geq|L|^{-1}\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega)+\gamma^3}.$
We claim that
\begin{align}\label{eqfrakL4}
\mu_G[{\mathcal E}|S_j]\leq2\delta^2.
\end{align}
Indeed, by (\ref{eqW''}) and the union bound we have
\begin{align}\nonumber
\bck{\vec{1}\cbc{\forall x\in L:\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_{G}-p_{\ell,\partial^\ell[G,x]}}\leq\delta}}_{\mu_G}&\geq
1-\sum_{x\in L}\bck{\vec{1}\cbc{\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_{G}-p_{\ell,\partial^\ell[G,x]}}>\delta}}_{\mu_G}\\
&\geq1-\delta^2.
\label{eqfrakL1}
\end{align}
Now, let $\mathfrak L$ be the coarsest $\sigma$-algebra such that $\mathfrak L\supset\nabla_\ell(G,x)$ for all $x\in L$.
Suppose that $\sigma\in S_j$ is such that
\begin{align}\label{eqfrakL}
\TV{\bck{\SIGMA[\nix|x]|\nabla_\ell(G,x)}_{G}(\sigma)-p_{\ell,\partial^\ell[G,x]}}\leq\delta\quad\mbox{ for all $x\in L$}.
\end{align}
We claim that (\ref{eqfrakL}) implies
\begin{align}\label{eqfrakL2}
\bck{\vec{1}\{\SIGMA\in{\mathcal E}\}|\mathfrak L}_{G}(\sigma)&<\delta^3.
\end{align}
Indeed, let $X=\sum_{x\in L}\vec{1}\{\SIGMA(x)=\omega\}$.
Then (\ref{eqfrakL}) implies that
\begin{align}\label{eqfrakL3}
\bck{X(\SIGMA)|\mathfrak L}(\sigma)\leq2\delta|L|+\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega).
\end{align}
Furthermore, the pairwise distance of the variables in $L$ is at least $2(\ell+1)$ and given $\mathfrak L$
the values of the variables at distance either $\ell$ or $\ell+1$ from each $x\in L$ are fixed.
Therefore, given $\mathfrak L$ the events $\{\SIGMA(x)=\omega\}$ are mutually independent.
In effect, $X$ is stochastically dominated by a sum of independent random variables.
Hence, recalling that $\delta$ is much smaller than $\gamma$, we see that (\ref{eqfrakL2}) follows from (\ref{eqfrakL3}) and the Chernoff bound.
Finally, combining (\ref{eqfrakL1}) and (\ref{eqfrakL2}) we obtain (\ref{eqfrakL4}).
But (\ref{eqfrakL4}) does not sit well with (\ref{eqLeadAstray1}).
In fact, (\ref{eqLeadAstray1}) entails that
$\mu_G[{\mathcal E}|S_j]\geq\gamma^2$;
for consider the random variable $Y=\sum_{x\in L}\vec{1}\{\SIGMA(x)\neq\omega\}$.
Then (\ref{eqLeadAstray1}) yields
$\bck{Y}_{\mu[\nix|S_j]}\leq\sum_{x\in L}(1-\mu_{G \marg x }[\omega|S_j]) \leq|L|(1-\gamma/|\Omega|)-\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega)$.
Hence, by Markov's inequality
$$1-\mu_G[{\mathcal E}|S_j]\leq\frac{\bck{Y}_{\mu[\nix|S_j]}}{|L|(1-\gamma^3)-\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega)}
\leq\frac{|L|(1-\gamma/|\Omega|)-\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega)}{|L|(1-\gamma^3)-\sum_{x\in L}p_{\ell,\partial^\ell[G,x]}(\omega)}
\leq\frac{1-\gamma/|\Omega|}{1-\gamma^3}\leq1-\gamma^2.$$
Combining this bound with (\ref{eqfrakL4}), we obtain
$\gamma^2\leq{\mu_G({\mathcal E})}/{\mu_G(S_j)}\leq2\delta^2/\mu_G(S_j)$.
Thus, choosing $\delta$ much smaller than $\gamma$, we conclude that $\mu_G(S_j)<\delta^{3/2}$, which is a contradiction.
Thus, we have established (\ref{eqMyContradiction}).
\end{proof}
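The Chernoff step in the proof of \Lem~\ref{Lemma_nonRe} boils down to a standard tail bound for sums of independent indicators; in the Hoeffding form, $\pr\brk{X\geq\Erw X+t}\leq\exp(-2t^2/n)$ for $X$ a sum of $n$ independent $[0,1]$-valued random variables. A quick numerical sketch in Python (the parameters are illustrative only):

```python
import math, random

# Empirical check of Hoeffding's bound P(X >= E[X] + t) <= exp(-2 t^2 / n)
# for X a sum of n independent Bernoulli(p_i) variables (made-up p_i).
rng = random.Random(2)
n = 200
ps = [rng.uniform(0.1, 0.9) for _ in range(n)]
mean = sum(ps)
trials = 10000
for t in (5.0, 10.0, 15.0):
    hits = sum(1 for _ in range(trials)
               if sum(rng.random() < p for p in ps) >= mean + t)
    freq = hits / trials
    bound = math.exp(-2 * t * t / n)
    assert freq <= bound + 0.01  # slack for sampling error
```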
\section{Conditioning on the local structure}\label{Sec_crazyConfigs}
\subsection{A generalised configuration model}
The aim in this section is to prove \Prop~\ref{Lemma_fmCalc}.
The obvious problem is the conditioning on the $\sigma$-algebra $\cT_\ell$ that fixes the depth-$\ell$ neighborhoods
of all variable nodes and the depth-$(\ell+1)$ neighborhoods of all constraint nodes.
Following~\cite{BordenaveCaputo}, we deal with this conditioning by setting up a generalised configuration model.
Recall that $\fT_\ell$ is the (finite) set of all isomorphism classes $\partial^\ell T$ for $T\in\fT\cap\cV$ and $\partial^{\ell+1} T$ for $T\in\fT\cap\cF$.
Let $\ell,n>0$ be integers and let $\cM=(V,F,d,t,(\psi_a)_{a\in F})$ be a $(\Delta,\Omega,\Psi,\Theta)$-model of size $n$.
Moreover, let $G\in\cG(\cM)$ be a $100\ell$-acyclic factor graph.
Then we define an enhanced $(\Delta,\Omega,\Psi,\Theta_\ell)$-model $\cM(G,\ell)$ with type set $\Theta_\ell=(\fT_\ell\cap\cV)\times[\Delta]$ as follows.
The set of variable nodes is $V$, the set of constraint nodes is $F$, the degrees are given by $d$ and the weight function
associated with each constraint $a$ is $\psi_a$ just as in $\cM$.
Moreover, the type of a variable clone $(x,i)$ is
$t_{G,\ell}(x,i)=(\partial^\ell[G,x],i)$.
Further, the type of a constraint clone $(a,j)$ such that $\partial(G,a,j)=(x,i)$ is $t_{G,\ell}(a,j)=(\partial^\ell[G,x],i)$.
Clearly, $\cG(\cM(G,\ell))\subset\cG(\cM)$.
The following lemma shows that the model $\cM(G,\ell)$ can be used to generate factor graphs whose local structure coincides with that of $G$.
\begin{lemma}\label{Lemma_beMyType}
Assume that $\ell\geq0$ and that $G'\in\cG(\cM(G,\ell))$ is $(2\ell+4)$-acyclic.
Then $G'$ viewed as a $\cM$-factor graph satisfies $G\ism_\ell G'$.
\end{lemma}
\begin{proof}
We are going to show inductively for $l\in[\ell]$ that $G\ism_l G'$.
The case $l=0$ is immediate from the construction.
Thus, assume that $l>0$, let $(x,i)\in C_V$ and let $B$ be the set of all clones that have distance precisely $l-1$ from $(x,i)$.
Since $G'$ is $(2\ell+2)$-acyclic, the pairwise distance of any two clones in $B$ is at least $2$.
Moreover, by induction we know that $t_{G,1}(w,j)=t_{G',1}(w,j)$ for all $(w,j)\in B$.
Therefore, $t_{G,l}(x,i)=t_{G',l}(x,i)$.
\end{proof}
In order to prove \Prop~\ref{Lemma_fmCalc} we need to enhance the model $\cM(G,\ell)$ further to accommodate an assignment
that provides a value from $\Omega$ for each clone.
Thus, let $\hat\sigma:C_V\cup C_F\to\Omega$ be a map.
We call $\hat\sigma$ {\bem valid} if $\hat\sigma(x,i)=\hat\sigma(x,j)$ for all $x\in V$, $i,j\in[d(x)]$
and if for all $\theta\in\Theta_\ell$ we have
$$\forall\omega\in\Omega:
\abs{\cbc{(x,i)\in C_V:\hat\sigma(x,i)=\omega,t_{G,\ell}(x,i)=\theta}}
=\abs{\cbc{(a,j)\in C_F:\hat\sigma(a,j)=\omega,t_{G,\ell}(a,j)=\theta}}.$$
Of course, a valid $\hat\sigma$ induces a map $V\to\Omega$, $x\mapsto\hat\sigma(x,1)$.
Given a valid $\hat\sigma$
we define a $(\Delta,\Omega,\Psi,\Theta_\ell\times\Omega)$-model $\cM(G,\hat\sigma,\ell)$ with
variable nodes $V$, constraint nodes $F$, degrees $d$ and weight functions $(\psi_a)_{a\in F}$ such that the type $t_{G,\hat\sigma,\ell}(x,i)$
of a variable clone $(x,i)$ is $(\partial^\ell[G,x],i,\hat\sigma(x,i))$ and such that the type
$t_{G,\hat\sigma,\ell}(a,j)$ of a constraint clone $(a,j)$ with $\partial(G,a,j)=(x,i)$
is $(\partial^\ell[G,x],i,\hat\sigma(a,j))$.
By construction,
$\cG(\cM(G,\hat\sigma,\ell))\subset\cG(\cM(G,\ell))\subset\cG(\cM)$.
Let us recall the definition of the distance from (\ref{eqDist}).
Further, for two maps $\hat\sigma,\hat\sigma':C_V\cup C_F\to\Omega$
let $\dist(\hat\sigma,\hat\sigma')=|\cbc{(v,i)\in C_V\cup C_F:\hat\sigma(v,i)\neq\hat\sigma'(v,i)}|$.
In \Sec~\ref{Sec_getBack} we are going to establish the following.
\begin{lemma}\label{Lemma_getBack}
For any $\eps,\ell>0$ there is $n_0=n_0(\eps,\ell,\Delta,\Omega,\Psi,\Theta)$ such that for $n>n_0$ the following holds.
If $\cM$ is a $(\Delta,\Omega,\Psi,\Theta)$-model of size $n$, $G\in\cG(\cM)$ is $100\ell$-acyclic
and $\hat\sigma$ is valid, then with probability at least $1-\eps$ the random factor graph $\G(\cM(G,\hat\sigma,\ell))$ has the following property.
There exist a valid $\hat\sigma'$ and a $4\ell$-acyclic $G'\in\cG(\cM(G,\hat\sigma',\ell))$ such that
$\dist(\hat\sigma,\hat\sigma')+
\dist(G',\G(\cM(G,\hat\sigma,\ell)))\leq n^{0.9}.$
\end{lemma}
To proceed, consider a $(G,\ell)$-marginal sequence $q$.
We call $\hat\sigma$ {\bem $q$-valid} if the following two conditions hold.
\begin{description}
\item[V1] For all $T\in\fT_\ell\cap\cV,\omega\in\Omega$ we have
\begin{align*}
\abs{\cbc{x\in V:\partial^\ell[G,x]=T,\hat\sigma(x)=\omega}}&=q_T(\omega)\abs{\cbc{x\in V:\partial^\ell[G,x]=T}}.
\end{align*}
\item[V2] For all $T\in\fT_\ell\cap\cF,\omega_1,\ldots,\omega_{d_T}\in\Omega$ we have
\begin{align*}
\abs{\cbc{a\in F:\partial^{\ell+1}[G,a]=T,\forall j\in[d_T]:\hat\sigma(a,j)=\omega_j}}&=q_T(\omega_1,\ldots,\omega_{d_T})
\abs{\cbc{a\in F:\partial^{\ell+1}[G,a]=T}}.
\end{align*}
\end{description}
\begin{lemma}\label{Lemma_countingConfigurations}
For any $\eps,\ell>0$ there is $n_0=n_0(\eps,\ell,\Delta,\Omega,\Psi,\Theta)$ such that for $n>n_0$ the following holds.
Assume that $\cM$ is a $(\Delta,\Omega,\Psi,\Theta)$-model of size $n$, $G\in\cG(\cM)$ is $100\ell$-acyclic and $q$ is a
$(G,\ell)$-marginal sequence such that there exists a $q$-valid $\hat\sigma$.
Then with the sum ranging over all $q$-valid $\hat\sigma$ we have
\begin{align*}
\exp\bc{n\cB_G(\ell,q)-\sqrt n}\leq\sum_{\hat\sigma}\frac{|\cG(\cM(G,\hat\sigma,\ell))|}{|\cG(\cM(G,\ell))|}
\leq\exp\bc{n\cB_G(\ell,q)+\sqrt n}.
\end{align*}
\end{lemma}
\noindent
We defer the proof of \Lem~\ref{Lemma_countingConfigurations} to \Sec~\ref{Sec_countingConfigurations}.
\begin{proof}[Proof of \Prop~\ref{Lemma_fmCalc}]
We claim that
\begin{equation}\label{eqProp_fmCalc1}
\abs{\cbc{G'\in\cG(\cM(n)):G'\ism_\ell G}}\geq|\cG(\cM(G,\ell))|\exp(-n^{0.91}).
\end{equation}
To see this, apply \Lem~\ref{Lemma_getBack} to the constant map $\hat\sigma:(v,j)\in C_V\cup C_F\mapsto\omega_0$ for some fixed $\omega_0\in\Omega$.
Then we conclude that with probability at least $1/2$ the random graph $\G(\cM(G,\ell))=\G(\cM(G,\hat\sigma,\ell))$ is at distance at most $n^{0.9}$ from a $4\ell$-acyclic
$G'\in\cG(\cM(G,\ell))\subset\cG(\cM)$.
Furthermore, by \Lem~\ref{Lemma_beMyType} this factor graph $G'$, viewed as an element of $\cG(\cM)$, satisfies $G\ism_\ell G'$.
Finally, since all degrees are bounded, the total number of factor graphs at distance at most $n^{0.9}$ from $G'$ is bounded by $\exp(n^{0.91})$,
and we obtain (\ref{eqProp_fmCalc1}).
Let $\delta>0$ be small enough.
If $\sigma\in\Sigma(G,\ell,q,\delta)$,
then by (\ref{eqZellqdelta}) there exists a $(G,\ell)$-marginal sequence $q'$ such that $\sigma\in\Sigma(G,\ell,q',0)$
and $\TV{q_T-q_T'}<\delta$ for all $T\in\fT_\ell$.
Because $\fT_\ell$ is finite and $\Sigma(G,\ell,q',0)\neq\emptyset$, the total number of such $q'$ is bounded by a polynomial in $n$.
Moreover, due to the continuity of $\cB_{G,\ell}(\nix)$ we can choose $\delta=\delta(\ell)$ small enough so that
$|\cB_{G,\ell}(q')-\cB_{G,\ell}(q)|<\eps/2$ for all such $q'$.
Hence,
summing over all $\hat\sigma$ corresponding to $\sigma\in\Sigma(G,\ell,q,\delta)$,
we obtain from (\ref{eqProp_fmCalc1}) and \Lem~\ref{Lemma_countingConfigurations} that
\begin{align*}
\Erw[Z_{\ell,q}(\G)|\G\ism_\ell G]&\leq\sum_{\hat\sigma}\frac{|\cG(\cM(G,\hat\sigma,\ell))|}{\abs{\cbc{G'\in\cG(\cM(n)):G'\ism_\ell G}}}
\leq\exp(n\cB_G(\ell,q)+\eps n).
\end{align*}
Conversely, by \Lem~\ref{Lemma_getBack} with probability at least $1/2$ the graph $\G(\cM(G,\hat\sigma,\ell))$ is within distance
at most $n^{0.9}$ of a $4\ell$-acyclic $G'$, which satisfies $G'\ism_\ell G$ by \Lem~\ref{Lemma_beMyType}.
As before, the total number of graphs at distance at most $n^{0.9}$ from $G'$ is bounded by $\exp(n^{0.91})$.
Similarly, the total number of $\hat\sigma'$ at distance at most $n^{0.9}$ from $\hat\sigma$ is bounded by $\exp(n^{0.91})$.
Therefore, by \Lem~\ref{Lemma_beMyType}
\begin{align*}
\Erw[\vec{1}\{\cA_{2\ell+1}\}Z_{\ell,q}(\G)|\G\ism_\ell G]&\geq
\frac{\exp(-2n^{0.98})}2\sum_{\hat\sigma}\frac{|\cG(\cM(G,\hat\sigma,\ell))|}
{\abs{\cG(\cM(G,\ell))}}
\geq\exp(n\cB_G(\ell,q)-\eps n),
\end{align*}
as desired.
\end{proof}
\subsection{Proof of \Lem~\ref{Lemma_getBack}}\label{Sec_getBack}
Let $\Theta_*=\cbc{t_{G,\hat\sigma,\ell}(x,i):(x,i)\in C_V}$ be the set of all possible types.
For each $\tau\in \Theta_*$ let $n_\tau$ be the number of clones $(x,i)\in C_V$ with $t_{G,\hat\sigma,\ell}(x,i)=\tau$.
Throughout this section we assume that $n>n_0(\eps,\ell,\Delta,\Omega,\Psi,\Theta)$ is sufficiently large.
\begin{lemma}\label{Lemma_getBack1}
There exists $\beta>0$ such that the following is true.
For any $G,\hat\sigma$ there exists $3/4<\gamma<7/8$ such that for every $\tau\in \Theta_*$ either
$n_\tau\leq n^\gamma$ or $n_\tau>n^{\gamma+\beta}$.
\end{lemma}
\begin{proof}
The number of possible types is bounded independently of $n$.
Hence, choosing $\beta$ small enough, we can ensure that there exists an integer $j>0$ with $3/4+(j+1)\beta<7/8$
such that $[n^{3/4+j\beta},n^{3/4+(j+1)\beta}]\cap\cbc{n_\tau:\tau\in\Theta_*}=\emptyset$.
\end{proof}
Fix $\beta,\gamma$ as in the previous lemma.
Call $\tau$ {\bem rare} if $n_\tau\leq n^\gamma$ and {\bem common} otherwise.
Let $Y$ be the number of variable clones that belong to cycles of length at most $10\ell$ in $\G(\cM(G,\hat\sigma,\ell))$.
\begin{lemma}\label{Lemma_getBack2}
For large enough $n$ we have $\Erw[Y]\leq n^\gamma\ln n$.
\end{lemma}
\begin{proof}
Let $R$ be the set of variable clones $(v,i)$ of a rare type and
let $U$ be the set of all variable clones whose distance from $R$ in $G$ does not exceed $100\ell$.
Since the maximum degree as well as the total number of types are bounded, we have $|U|\leq|R|\ln\ln n\leq n^{\gamma}\sqrt{\ln n}$,
provided that $n$ is big enough.
Thus, to get the desired bound on $\Erw[Y]$ we merely need to consider the set $W$ of common clones that are at distance more than $100\ell$ from $R$.
More specifically, let $(v,i)$ be a common clone.
We are going to bound the probability that $(v,i)\in W$ and that $(v,i)$ lies on a cycle of length at most $10\ell$.
To this end, we are going to explore the (random) factor graph from $(v,i)$ via the principle of deferred decisions.
Let $i_1=i,\ldots,i_l\in[\Delta]$ be a sequence of $l\leq10\ell$ indices.
If $(v,i)$ lies on a cycle of length at most $10\ell$, then there exists such a sequence $(i_1,\ldots,i_l)$ that corresponds to this cycle.
Namely, with $v_1=v$ the cycle consists of the clones $(v_1,i_1),\ldots,(v_{l},i_{l})$ such that $\partial(\G(\cM(G,\hat\sigma,\ell)),v_j,i_j)=(v_{j+1},i_{j+1})$.
In particular, $v_l=v_1$.
Clearly, the total number of sequences $(i_1,\ldots,i_l)$ is bounded.
Furthermore, given that $(v_l,i_l)$ is common, the probability that $v_l=v_1$ is bounded by $2n^{-\gamma}$.
Since $\gamma>3/4$, the linearity of expectation implies that $\Erw[Y]\leq|U|+2n^{1-\gamma}\ln n\leq n^\gamma\ln n$.
\end{proof}
\begin{lemma}\label{Lemma_getBack3}
Assume that $G''\in\cG(\cM(G,\hat\sigma,\ell))$ satisfies $Y(G'')\leq n^\gamma\ln^2 n$.
Then there is a $4\ell$-acyclic $G'\in\cG(\cM(G,\ell))$ such that $\dist(G',G'')\leq n^{0.9}$.
\end{lemma}
\begin{proof}
Let $R$ be the set of variable clones $(v,i)$ of a rare type and
let $U$ be the set of all variable clones whose distance from $R$ in $G$ does not exceed $10\ell$.
Moreover, let $G'''\in\cG(\cM(G,\ell))$ minimise $\dist(G'',G''')$ subject to the condition that $\partial(G''',v,i)=\partial(G,v,i)$ for all $(v,i)\in U$.
Then $\dist(G'',G''')\leq n^\gamma\ln n$ because the total number of types is bounded.
Therefore, the assumption $Y(G'')\leq n^\gamma\ln^2 n$ implies that $Y(G''')\leq n^\gamma\ln^3 n$, say.
In addition, because $G$ is $100\ell$-acyclic, none of the clones in $R$ lies on a cycle of length at most $4\ell$ in $G'''$.
Altering only a bounded number of edges in each step, we are now going to remove the short cycles of $G'''$ one by one.
Let $C$ be the set of common clones.
The construction of $G'''$ ensures that only common clones lie on cycles of length at most $4\ell$.
Consider one such clone $(v,i)$ and let $N$ be the set of all variable clones that can be reached from $(v,i)$ by traversing precisely two edges of $G'''$;
thus, $N$ contains all clones $(w,j)$ such that $w$ has distance two from $v$ and all clones $(v,j)$ that are incident to the same constraint node as $(v,i)$.
Once more by the construction of $G'''$ we have $N\subset C$.
Furthermore, $|N|\leq\Delta^2$.
We claim that there exists $N'\subset C$ and a bijection $\xi:N\to N'$ such that the following conditions are satisfied.
\begin{enumerate}[(i)]
\item $t_{G,\hat\sigma,\ell}(w,j)=t_{G,\hat\sigma,\ell}(\xi(w,j))$ for all $(w,j)\in N$.
\item the pairwise distance in $G'''$ between any two clones in $N'$ is at least $100\ell$.
\item the distance in $G'''$ between $N\cup \{(v,i)\}$ and $N'$ is at least $100\ell$.
\item the distance between $R$ and $N'$ is at least $100\ell$.
\item any $(w,j)\in N'$ is at distance at least $100\ell$ from any clone that belongs to a cycle of $G'''$ of length at most $4\ell$.
\end{enumerate}
Since the maximum degree of $G'''$ is bounded by $\Delta$, no more than $n^\gamma\ln^4 n$ clones violate conditions (iii), (iv) or (v).
By comparison, there are at least $n^{\gamma+\beta}$ clones of any common type.
Hence, the existence of $\xi$ follows.
Now, obtain $G''''$ from $G'''$ as follows.
\begin{itemize}
\item let $G''''(\xi(w,j))=G'''(w,j)$ and $G''''(w,j)=G'''(\xi(w,j))$ for all $(w,j)\in N$.
\item let $G''''(w,j)=G'''(w,j)$ for all $(w,j)\not\in N\cup N'$.
\end{itemize}
It is immediate from the construction that any clone on a cycle of length at most $4\ell$ in $G''''$ also lies on such a cycle of $G'''$.
Moreover, $(v,i)$ does not lie on a cycle of length at most $4\ell$ in $G''''$.
Hence, $Y(G'''')<Y(G''')$.
In addition, all clones on cycles of length at most $4\ell$ and their neighbours are common.
Hence, the construction can be repeated on $G''''$.
Since $Y(G''')\leq n^\gamma\ln^3n$, we ultimately obtain a $4\ell$-acyclic $G'$ with $\dist(G',G'')\leq n^{\gamma}\ln^4n<n^{0.9}$.
\end{proof}
\begin{proof}[Proof of \Lem~\ref{Lemma_getBack}]
The assertion is immediate from \Lem s~\ref{Lemma_getBack2} and~\ref{Lemma_getBack3} and Markov's inequality.
\end{proof}
\subsection{Proof of \Lem~\ref{Lemma_countingConfigurations}}\label{Sec_countingConfigurations}
Let $\cV_\ell=\fT_\ell\cap\cV$ and for $T\in\cV_\ell$ let $n_T$ be the number of variable nodes $x$ such that $\partial^\ell[G,x]=T$.
By Stirling's formula the number
$|\Sigma(G,\ell,q,0)|$ of assignments $\sigma:V_n\to\Omega$ with marginals as prescribed by $q$ satisfies
\begin{align}\label{eqLemma_countingConfigurations666}
\abs{\ln|\Sigma|-\sum_{T\in\cV_\ell}n_T H(q_T)}\leq\ln^2n.
\end{align}
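As a sanity check of (\ref{eqLemma_countingConfigurations666}), the following short Python sketch (not part of the formal argument; the size $n$ and the marginal $q$ are illustrative choices) compares the exact log-multinomial coefficient with the entropy approximation and confirms that the discrepancy stays well within the $\ln^2n$ slack:

```python
import math

def log_multinomial(n, counts):
    # ln( n! / prod_i counts_i! ) computed exactly via lgamma
    return math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)

def entropy(q):
    return -sum(p * math.log(p) for p in q if p > 0)

n = 10 ** 6                         # illustrative number of variable nodes of type T
q = [0.5, 0.3, 0.2]                 # illustrative marginal q_T over Omega
counts = [round(p * n) for p in q]  # q_T(omega) * n_T for each omega
err = abs(log_multinomial(n, counts) - n * entropy(q))
assert err <= math.log(n) ** 2      # within the ln^2 n slack of the estimate
```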
Further, for $T\in\cV_\ell$ and $i\in[d_T]$ let $C_V(T,i)$ be the set of all clones $(x,i)\in C_V$ such that $t_{G,\ell}(x,i)=(T,i)$.
Moreover, let $C_F(T,i)$ be the set of all clones $(a,j)\in C_F$ such that $t_{G,\ell}(a,j)=(T,i)$.
Additionally, let $\cF_\ell(T,i)$ be the set of all pairs $(T',j)$ with $T'\in\fT_\ell\cap\cF$, $j\in[d_{T'}]$ such that
$\partial^{\ell+1}[G,a]=T'$ for some $(a,j)\in C_F(T,i)$.
Of course, the total number of perfect matchings between $C_V(T,i)$ and $C_F(T,i)$ equals $n_T!$.
If we fix $\sigma\in\Sigma(G,\ell,q,0)$, then any such perfect matching induces an assignment $\hat\sigma:C_F(T,i)\to\Omega$ by
mapping a clone $(a,j)\in C_F(T,i)$ matched to $(x,i)$ to the value $\sigma(x)$.
Let $B_{T,i}$ be the event that in such a random matching for all $(T',j)\in\cF_\ell(T,i)$ and all $\omega$ we have
\begin{align*}
\abs{\cbc{(a,j)\in C_F:\partial^{\ell+1}[G,a]=T',\hat\sigma(a,j)=\omega}}
&=q_{T'\marg j}(\omega)
\abs{\cbc{(a,j)\in C_F:\partial^{\ell+1}[G,a]=T'}}
\end{align*}
Moreover, for $(T',j)\in\cF_\ell(T,i)$ let $m_{T'}$ be the number of $a\in F$ such that $\partial^{\ell+1}[G,a]=T'$.
Then
\begin{align*}
\pr\brk{B_{T,i}}&=\frac1{n_T!}\brk{\prod_{\omega\in\Omega}\bink{q_T(\omega)n_T}{(q_{T'\marg j}(\omega) m_{T'})_{(T',j)\in\cF_\ell(T,i)}}}
\brk{\prod_{(T',j)\in\cF_{\ell}(T,i)}\bink{m_{T'}}{(q_{T'\marg j}(\omega)m_{T'})_{\omega\in\Omega}}}
\prod_{(T',j)\in\cF_{\ell}(T,i),\omega\in\Omega}(q_{T'\marg j}(\omega)m_{T'})!\\
&=\bink{n_T}{(q_T(\omega)n_T)_{\omega\in\Omega}}^{-1}\prod_{(T',j)\in\cF_\ell(T,i)}\bink{m_{T'}}{(q_{T'\marg j}(\omega)m_{T'})_{\omega\in\Omega}}
=\exp\brk{O(\ln n)-\sum_{(T',j)\in\cF_\ell(T,i)}m_{T'}\KL{q_{T'\marg j}}{q_{T}}}.
\end{align*}
Let $\cF_\ell=\fT_\ell\cap\cF$.
Multiplying up over all $(T,i)$, we obtain for $B=\bigcap B_{T,i}$
\begin{align}\label{eqSillyExpectation1}
\pr\brk{B}&=\prod_{T\in\cV_\ell}\prod_{i\in[d_T]}\pr\brk{B_{T,i}}=\exp\brk{O(\ln n)-\sum_{T'\in\cF_\ell}\sum_{j\in[d_{T'}]}m_{T'}
\KL{q_{T'\marg j}}{q_{\partial^\ell[T'\reroot j]}}},
\end{align}
where the constant hidden in the $O(\nix)$ depends on $\Delta,\Omega,\Psi,\Theta,\ell$ only.
Further, for $T'\in\cF_\ell$ let $S_{T'}$ be the event that for every $(\omega_1,\ldots,\omega_{d_{T'}})\in\Omega^{d_{T'}}$
we have
$$\abs{\cbc{a\in F:\partial^{\ell+1}[G,a]=T',\forall j\in[d_{T'}]:\hat\sigma(a,j)=\omega_j}}
=q_{T'}(\omega_1,\ldots,\omega_{d_{T'}})\abs{\cbc{a\in F:\partial^{\ell+1}[G,a]=T'}}.$$
Then
\begin{align*}
\pr\brk{S_{T'}|B}&=\bink{m_{T'}}{m_{T'} q_{T'}}\prod_{j\in[d_{T'}]}\bink{m_{T'}}{m_{T'} q_{T'\marg j}}^{-1}
=\exp\brk{O(\ln n)-m_{T'}\KL{q_{T'}}{q_{T'\marg 1}\otimes\cdots\otimes q_{T'\marg d_{T'}}}}.
\end{align*}
Hence, letting $S=\bigcap S_{T'}$, we obtain
\begin{align}\label{eqSillyExpectation2}
\pr\brk{S|B}&
=\exp\brk{O(\ln n)-\sum_{T'\in\cF_\ell}m_{T'}\KL{q_{T'}}{q_{T'\marg 1}\otimes\cdots\otimes q_{T'\marg d_{T'}}}}.
\end{align}
Once more the constant hidden in the $O(\nix)$ depends on $\Delta,\Omega,\Psi,\Theta,\ell$ only.
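The Kullback--Leibler term in (\ref{eqSillyExpectation2}) rests on the identity $\KL{q}{q_1\otimes\cdots\otimes q_d}=\sum_{j\in[d]}H(q_j)-H(q)$, valid when the $q_j$ are the marginals of $q$. A short Python sketch (with an illustrative two-coordinate distribution; not part of the formal argument) confirms it numerically:

```python
import math
from itertools import product

def H(p):
    return -sum(x * math.log(x) for x in p.values() if x > 0)

def KL(p, q):
    return sum(x * math.log(x / q[k]) for k, x in p.items() if x > 0)

# illustrative joint distribution q_{T'} on a two-coordinate Omega = {0, 1}
q_joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
q1 = {0: 0.5, 1: 0.5}    # first marginal of q_joint
q2 = {0: 0.6, 1: 0.4}    # second marginal of q_joint
q_prod = {(a, b): q1[a] * q2[b] for a, b in product(q1, q2)}
lhs = KL(q_joint, q_prod)
rhs = H(q1) + H(q2) - H(q_joint)
assert abs(lhs - rhs) < 1e-9
```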
Further, given $S\cap B$ we have
\begin{align}\label{eqSillyExpectation3}
\prod_{a\in F}\psi_a(\sigma)&=\exp\brk{\sum_{T'\in\cF_\ell}m_{T'}\bck{\ln\psi_{T'}(\SIGMA)}_{q_{T'}}}.
\end{align}
Finally, the assertion follows from (\ref{eqLemma_countingConfigurations666})--(\ref{eqSillyExpectation3}).
\subsection*{Acknowledgment}
The second author thanks Dimitris Achlioptas for inspiring discussions.
\section{Introduction}
The {\it randomly perturbed graph model} is the following: for a fixed positive constant $\delta>0$, let $\mathcal{G}(n, \delta)$ be the set of graphs on vertex set $[n]$ with minimum degree at least $\delta n $. A graph $H=(V, E) $ is chosen arbitrarily from $\mathcal{G}(n,\delta)$ and a set $R$ of $m$ edges is chosen uniformly at random from $\binom{[n]}{2} \setminus E$ (i.e., those edges not in $H$) and added to $H$. The resulting {\it perturbed graph} is
\[
G_{H, m}=(V, E \cup R).
\]
This model was introduced by Bohman, Frieze, and Martin \cite{BFM}. As taking $\delta=0$ gives an ordinary random graph, this model can be viewed as a generalization of the classic Erd\H os-R\'enyi random graph model.
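The perturbation step itself is straightforward to realise computationally; the following minimal Python sketch (the graph $H$ below is an arbitrary illustration, not tied to any result in this note) builds $G_{H,m}$ from the edge set of $H$:

```python
import random
from itertools import combinations

def perturb(n, H_edges, m):
    """Edge set of G_{H,m}: the edges of H plus m uniformly random
    non-edges of H (H is given by its edge set on vertex set [n])."""
    non_edges = [e for e in combinations(range(n), 2) if e not in H_edges]
    return H_edges | set(random.sample(non_edges, m))

# toy H: two disjoint triangles on 6 vertices, so minimum degree 2 = n/3
H = {(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)}
G = perturb(6, H, m=3)
assert H <= G and len(G) == len(H) + 3
```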
Given a class of graphs $\mathcal{G}$ and a monotone increasing property $P$, a natural line of questioning is: for a graph $G\in \mathcal{G}$ can we perturb the graph, specifically by adding edges randomly so that the graph satisfies $P$ with high probability?
For example, in \cite{BFM} it is shown that for an $n$-vertex dense graph $H$, the perturbed graph $G_{H,m}$ is Hamiltonian with high probability if and only if $m$ is at least linear in $n$.
Various other properties of randomly perturbed graphs have been investigated including clique number, chromatic number, diameter and connectivity \cite{BFKM}, spanning trees \cite{BHKMPP, BMPP, JK, KKS, KRS}, Ramsey properties \cite{DT, DMT}, tilings \cite{BTW}, and fixed subgraphs \cite{MST}.
Anastos and Frieze \cite{AF} examined random edge-colorings of perturbed graphs. Let $G^r_{H, m}$ be the graph $G_{H,m}$ equipped with an $r$-edge-coloring where the edge-colors are independently and uniformly selected from $[r]$ (in this way the coloring need not be proper). In \cite{AF} it was shown that a linear number of edges $m$ and a number of colors $r \geq (120 + 20 \log \delta)n$ suffice for $G^r_{H,m}$ to have a rainbow Hamilton cycle. Later Aigner-Horev and Hefetz \cite{AH} improved the value of $r$ to the asymptotically best-possible $(1+o(1))n$.
An edge-colored graph is {\it rainbow connected} if there is a rainbow path (i.e., a path in which each edge has a distinct color) between every pair of vertices.
Rainbow connectivity has been studied in graphs in general as well as in the standard Erd\H os-R\'enyi random graph model. Typically, the question is to find the minimum $r$ such that a graph $G$ has an $r$-edge-coloring such that $G$ is rainbow connected. In this case we say that $G$ has {\it rainbow connection number} $r$. This parameter was introduced by Chartrand, Johns, McKeon and Zhang \cite{CJMZ}. For a survey of rainbow connectivity in graphs see Li and Sun \cite{LS}.
In random graphs the goal is to determine a threshold on the edge probability $p$ such that w.h.p.\ $G(n,p)$ has rainbow connection number $r$. Caro, Lev, Roditty, Tuza and Yuster \cite{CLRTY} showed that $p = \sqrt{\frac{\log n}{n}}$ is a sharp threshold for rainbow connection number $r=2$ and He and Liang \cite{HL} proved that $\frac{(\log n)^{1/r}}{n^{1-1/r}}$ is a sharp threshold for $r=3$. A further refinement of the threshold for $r\geq 3$ was given by Heckel and Riordan \cite{HR}.
In a perturbed graph $G_{H, m}^r$,
Anastos and Frieze \cite{AF} proved that if $r\geq 7 $ and $m=\omega (1)$ then with high probability $G_{H, m}^r$ is rainbow connected.
Moreover, they showed the following.
\begin{thm}[Anastos, Frieze \cite{AF}]\label{AF-main}
\begin{enumerate}
\item[(i)] If $r=3$ and $m \geq 60 \delta^{-2} \log n$, then w.h.p.\ $G_{H,m}^3$ is rainbow connected.
\item[(ii)] For $\delta \leq 0.1$ there exists $H \in \mathcal{G}(n,\delta)$ such that if $m \leq 0.5 \log n$, then w.h.p.\ $G_{H,m}^4$ is not rainbow connected.
\item[(iii)] If $r \geq 7$ and $m = \omega(1)$, then w.h.p.\ $G_{H,m}^r$ is rainbow connected.
\end{enumerate}
\end{thm}
For $r=3$ colors, (i) shows that if $m$ has order of magnitude $\log n$, then $G^3_{H,m}$ is rainbow connected w.h.p., which shows that the order of magnitude of $m$ in (ii) is best possible.
Anastos and Frieze \cite{AF} suggested that part (iii) likely also holds for $r=5$ or $6$. In this note we prove the result for $r\geq 5$. Note that part (ii) implies that this cannot be improved to $r \geq 4$.
\begin{thm}\label{main-thm}
If $r \geq 5$ and $m \geq 20 \delta^{-2}$, then w.h.p.\ $G_{H,m}^r$ is rainbow connected.
\end{thm}
We make no particular attempt to optimize the constant in the bound on $m$ in Theorem~\ref{main-thm}, but proving sharp bounds remains an interesting problem. A natural modification of this model would be to introduce some deterministic part to the edge-coloring, which may lead to different values of $m$ and $r$.
\section{Proof of Theorem~\ref{main-thm}}
Fix $r= 5$ and $\delta<1$ and let $m=m(\delta)$ be a large enough constant depending on $\delta$. Let $H$ be an arbitrary $n$-vertex graph with minimum degree at least $\delta n$. Randomly add $m$ edges from $\binom{[n]}{2}$ to $H$ (ignoring duplicate edges) and randomly $5$-color the edges of the resulting graph $G_{H,m}^5$. Note that this is a slightly weaker assumption than adding $m$ edges from $\binom{[n]}{2} \setminus E(H)$ to $H$, which will simplify arguments later.
Let $t = 100 \log n$.
Select $t$ sets $S_1,S_2,\dots, S_t$ each of $k=k(\delta)$ vertices uniformly at random with replacement from $V(H)$.
Put $S = \bigcup S_i$.
A set $S_i$ is {\it good} if for every pair $a,b \in S_i$ there exists an edge of $G_{H,m}^5$ between $N(a) \setminus S$ and $N(b) \setminus S$. Note that these two sets are not necessarily disjoint, so such an edge may be contained in their intersection.
\begin{claim}\label{good-claim}
$P(S_i \textrm{ is good}) > 0.99$ for $n$ large enough.
\end{claim}
\begin{proof}
As $|S| = |\cup S_i| \leq kt = 100 k \log n = o(n)$, we can choose $n$ large enough such
that $N(a)\setminus S$ and $N(b)\setminus S$ both have size at least $\frac{1}{2} \delta n$.
Therefore, the probability that there is no edge between $N(a)\setminus S$ and $N(b)\setminus S$ is at most
\[
\left(\frac{\binom{n}{2} - \binom{\frac{1}{2} \delta n}{2}}{\binom{n}{2}}\right)^m \leq \left(1 - \frac{1}{4}\delta^2 \right)^m < \exp\left(-\frac{1}{4} \delta^2m\right) < 0.01
\]
when $m \geq 20 \delta^{-2}$.
\end{proof}
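The chain of inequalities closing this proof can be verified mechanically; a small Python sketch (the value $\delta=0.1$ is an illustrative choice, not part of the argument):

```python
import math

delta = 0.1                       # illustrative minimum-degree constant
m = math.ceil(20 / delta ** 2)    # m >= 20 * delta^{-2} as in the claim
p = (1 - delta ** 2 / 4) ** m     # bound on P(no edge between the two neighborhoods)
# (1-x)^m <= exp(-x m) since 1-x <= exp(-x), and exp(-5) < 0.01
assert p <= math.exp(-delta ** 2 * m / 4) < 0.01
```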
\begin{claim}\label{index-claim}
W.h.p.\ for every pair of vertices $u,v \in V(G_{H,m}^5)$, there is an index set $I_{u,v} \subset [t]$ such that $|I_{u,v}| \geq 0.6t$ and for all $i \in I_{u,v}$ we have $u,v \in N(S_i)$.
\end{claim}
\begin{proof}
For distinct vertices $u,v$ the probability $P(u \not \in N(S_i) \textrm{ or } v \not \in N(S_i))$ is at most
\begin{align*}
& P(u \not \in N(S_i)) + P(v \not \in N(S_i))
\leq 2 P(u \not \in N(S_i))
= 2P(S_i \cap N(u) = \emptyset) \\
& \leq 2\frac{\binom{(1-\delta)n}{k}}{\binom{n}{k}}
\leq 2(1-\delta)^k < 2 \exp(-\delta k) < 0.01
\end{align*}
for $k > 6\delta^{-1}$.
Thus, $P(u,v\in N(S_i)) \geq 0.99$.
Let $X_i$ be the indicator random variable for the event that $u,v \in N(S_i)$. Now $X = X_1+ X_2 + \dots + X_t$ is the number of sets $S_i$ such that $u,v \in N(S_i)$.
Therefore, $E[X] \geq 0.99t$.
As each $S_i$ is selected uniformly at random with replacement, the random variables $X_i$ are independent.
Therefore, a simple Chernoff bound (see \cite[pg. 66]{textbook}) gives
\begin{align*}
P(X \leq (1-\alpha) E[X]) \leq \exp\left(-\frac{1}{2} \alpha^2 E[X]\right)
\end{align*}
for $0 < \alpha < 1$.
With $\alpha = 0.39$ this gives
\[
P(X \leq 0.6 t) \leq P(X \leq 0.61 \cdot 0.99 t) \leq \exp\left(-\frac{1}{2} (0.39)^2 \cdot 0.99 \cdot 100 \log n\right) < n^{-7} < n^{-2}.
\]
Therefore, by the union bound, the probability that some pair $u,v$ is not contained in at least $0.6t$ of the sets $S_i$ is at most $n^2 \cdot n^{-7} = o(1)$, which completes the proof.
\end{proof}
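Both numeric steps of this proof are elementary to re-check; a small Python sketch (again with the illustrative choice $\delta=0.1$):

```python
import math

delta = 0.1
k = math.floor(6 / delta) + 1              # any k > 6 / delta
assert 2 * math.exp(-delta * k) < 0.01     # bound on P(u or v not in N(S_i))

# Chernoff exponent with alpha = 0.39, E[X] >= 0.99 t and t = 100 log n:
c = 0.5 * 0.39 ** 2 * 0.99 * 100
assert c > 7                               # hence the n^{-7} failure bound
```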
Fix arbitrary vertices $u$ and $v$ in $G_{H,m}^5$. We estimate the probability that there is a rainbow $u$--$v$ path (i.e.\ a path with end-vertices $u$ and $v$) of length at most $5$. If $uv$ is an edge, then we have such a path with probability $1$, so assume that $uv$ is not an edge.
Let $I_{u,v}$ be the index set guaranteed by Claim~\ref{index-claim}. For $i \in I_{u,v}$, let us estimate the probability that there is a rainbow $u$--$v$ path of length at most $5$ using vertices in $S_i$. Let $a \in S_i$ be a neighbor of $u$ and let $b \in S_i$ be a neighbor of $v$. If $a=b$, then we have a $u$--$v$ path of length $2$ which is rainbow with probability $4/5$, so assume that $a \neq b$.
\begin{figure}[h]
\begin{center}
{\includegraphics{p5fig.pdf}}
\end{center}
\caption{A $u$--$v$ path of length $5$.}
\end{figure}
By Claim~\ref{good-claim}, with probability at least $0.99$, there is an edge $a'b'$ between $N(a)\setminus S$ and $N(b)\setminus S$.
Conditioning on the existence of $a'b'$, the probability that the path $uaa'b'bv$ is rainbow is $\frac{4!}{5^4}$.
Thus, the
probability that there is a rainbow $u$--$v$ path using vertices $u,v,a,b$ is at least
$0.99 \cdot \frac{4!}{5^4} > 0.035$.
Therefore, the probability that there is no rainbow $u$--$v$ path of length at most $5$ is at most
\begin{align*}
\left(1-0.99 \cdot\frac{4!}{5^4}\right)^{|I_{uv}|} & \leq \left(1-0.99 \cdot\frac{4!}{5^4}\right)^{0.6t} \leq \exp\left(-0.99 \cdot \frac{4!}{5^4} \cdot 0.6 t\right) \\
&= \exp\left(-0.99 \cdot \frac{4!}{5^4} \cdot 0.6 \cdot 100 \log n\right) < n^{-2.25}
= o(n^{-2}).
\end{align*}
Now, the union bound implies that the probability that there is a pair $u,v$ not connected by a rainbow path of length at most $5$ is $o(1)$, i.e., w.h.p.\ $G_{H,m}^5$ is rainbow connected. \hfill \qedsymbol
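The final exponent can also be re-checked mechanically; a small Python sketch confirming that it indeed exceeds $2.25$:

```python
import math

p_rainbow = 0.99 * math.factorial(4) / 5 ** 4   # per-index rainbow-path probability
exponent = p_rainbow * 0.6 * 100                # from |I_{u,v}| >= 0.6 t, t = 100 log n
assert exponent > 2.25                          # so the failure probability is o(n^{-2})
```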
\section{Conclusion}
\label{sec:conclusion}
Existing work in formal models of autonomous driving (AD) systems focuses either on modelling one component in particular as in~\cite{Ingrand-19}, or on verifying a given AD scenario as in~\cite{chen2021formal}, where a DSL is proposed to specify AD scenarios, which are subsequently translated into networks of stochastic hybrid automata and are verified using the UPPAAL-SMC model checker~\cite{Bulychev-et-al-12}.
In contrast, we presented two formal LNT models specifying the interaction of an AV with its environment, for the purpose of generating AD scenarios and testing the vehicle or some of its components in a deployment environment.
Our first model is more general, i.e., it provides a higher abstraction of both the AV and the environment and includes several components (perception, decision, action) of the AV, whereas our second model focuses on a particular component of the vehicle (the perception component) and considers a fragment of the geographical map, resulting in a more refined representation of both the component and the map.
Both models can be and have been used to test AVs: the first one to generate tests for the entire system, which does not require a refined abstraction of each AV component, and the second one to generate tests for validating a particular perception component, which requires a more precise representation of the map.
As future work, we plan to unify both models.
This could be beneficial for the generation of more refined test cases, i.e., including the control of the car on a precise fragment of the geographical map.
We then plan to transform these refined test cases into scenarios for an AD simulator such as CARLA, using the approach of~\cite{Horel-Laugier-Marsso-Mateescu-Muller-Paigwar-Renzaglia-Serwe-22}.
Finally, we plan to take advantage of both the refined test cases and the corresponding derived AD scenario to test the control component using online conformance testing techniques.
~
\begin{small}
\noindent
\textbf{Acknowledgments.} This work was partly supported by project ArchitectECA2030 that has been accepted for funding within the Electronic Components and Systems for European Leadership Joint Undertaking in collaboration with the European Union's H2020 Framework Programme (H2020/2014-2020) and National Authorities, under grant agreement No.~877539.
\end{small}
\section{Model focused on control}
\label{sec:graph}
\input{graph-architecture}
In the first model, the autonomous car itself consists of four components: a GPS, a radar, a decision (or trajectory) controller, and an action controller.
We chose these four components in order to represent the essential functionalities present in an autonomous car: perception (GPS and radar), decision, and action.
The GPS keeps the current position of the car updated.
The radar detects the presence of the obstacles close to the car and builds a perception grid summarizing information about perceived obstacles.
The decision controller computes an itinerary from the current position to the destination, avoiding streets containing obstacles.
The action controller commands the engine and direction to follow the itinerary computed by the decision controller, using the perception grid built by the radar to avoid collisions.
As shown in Figure~\ref{fig:graph-model}, these four components communicate in various ways: the GPS sends the current position to the decision controller upon request, the radar periodically sends the perception grid to the action controller, and the action controller requests a new itinerary from the decision controller.
This LNT model is a translation of the GRL model~\cite[Appendix~B]{Marsso-19} used to illustrate the combination of synchronous and asynchronous test generation tools~\cite{Marsso-Mateescu-Parissis-Serwe-19} to validate GALS (Globally Asynchronous, Locally Synchronous) systems.
\subsection{Elements and processes composing the model}
The car has an initial position and a destination.
Each obstacle has an initial position and a list of moves.
The model's behavior is defined such that inevitably either the car arrives, a collision occurs between the car and an obstacle, or all obstacles finish their list of moves.
The LNT model is generic and is instantiated for a particular scene by providing global constants for the map, the initial position and destination of the car, and the set of obstacles with their initial positions and lists of moves.
\paragraph{Car.}
Each component of the car is specified as a LNT process, and a process \lstinline+CAR+ defines the overall behavior as the parallel composition of these four processes (\lstinline+PERCEPTION_RADAR+, \lstinline+PERCEPTION_GPS+, \lstinline+DECISION+, and \lstinline+ACTION+) as shown in the following LNT fragment (the visible gates of a process are specified between square brackets ``\lstinline+[+...\lstinline+]+''; a process must synchronize on the gates before the arrow ``\lstinline+->+''):
\begin{lstlisting}
par
CURRENT_GRID ->
PERCEPTION_RADAR [UPDATE_GRID, CURRENT_GRID]
||
REQUEST_POSITION, CURRENT_POSITION ->
PERCEPTION_GPS [UPDATE_POSITION, REQUEST_POSITION, CURRENT_POSITION]
||
REQUEST_PATH, CURRENT_PATH, REQUEST_POSITION, CURRENT_POSITION ->
DECISION [REQUEST_PATH, CURRENT_PATH, REQUEST_POSITION,
CURRENT_POSITION, ARRIVAL] (map, destination)
||
CURRENT_PATH, CURRENT_GRID, REQUEST_PATH ->
ACTION [REQUEST_PATH, CURRENT_PATH, CURRENT_GRID,CAR_MOVE,COLLISION]
end par
\end{lstlisting}
\paragraph{Car components.}
The LNT process \lstinline+PERCEPTION_RADAR+ has two local variables to keep track of the current and previous perception grid.
A perception grid is represented by a list of edges corresponding to the streets occupied by the obstacles.
This grid is initially empty (i.e., it has the value \lstinline+Radar ({})+).
After each obstacle or car move, the radar receives the current grid as a message
from the environment, but informs the driving controller only in case of a change.
The LNT process \lstinline+PERCEPTION_GPS+ initializes the car position, receives updates of the car position (on gate \lstinline+UPDATE_POSITION+) and position requests from the decision controller (on gate \lstinline+REQUEST_POSITION+).
Upon request, it sends the car position to the decision controller (on gate \lstinline+CURRENT_POSITION+).
The LNT process \lstinline+DECISION+ has two value parameters: the initial \lstinline+map+ and the \lstinline+destination+ of the car. Process \lstinline+DECISION+ waits for receiving from process \lstinline+ACTION+ (on gate \lstinline+REQUEST_PATH+) a request to compute a path avoiding obstacles, then requests the current position from process \lstinline+PERCEPTION_GPS+ (on gate \lstinline+REQUEST_POSITION+).
When process \lstinline+DECISION+ receives the current position (on gate \lstinline+CURRENT_POSITION+), it checks if the car arrived at the destination, in which case it performs a rendezvous on gate \lstinline+ARRIVAL+ and stops; otherwise it computes an itinerary (using classical graph exploration algorithms) and sends it to process \lstinline+ACTION+ (on gate \lstinline+CURRENT_PATH+).
An itinerary is a list of controls, i.e., a turn in one of the crossroads (\lstinline+turned_n (N: Nat)+) or a brake (\lstinline+brakes+).
If there is no possible itinerary avoiding the obstacles, an empty itinerary is sent (if the obstacles have finished their moves, this leads to a deadlock).
The LNT process \lstinline+ACTION+ requests an itinerary avoiding the obstacles from process \lstinline+DECISION+ (on gate \lstinline+REQUEST_PATH+), and then checks whether the itinerary is feasible: if it is, it moves the car according to the first control in the itinerary (e.g., turn at the 5th crossroad); if not, it waits for obstacle moves. Finally, after each perception change received from process \lstinline+PERCEPTION_RADAR+, it requests a new itinerary with the updated list of obstacle positions (on gate \lstinline+REQUEST_PATH+).
\paragraph{Geographical map.}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{3em}}c}
\includegraphics[scale=0.09]{simpleMap.eps} &
\includegraphics[scale=0.6]{geographical-map.eps}
\\
(A) Map (with the car and a crossing pedestrian) &
(B) Map representation as directed graph
\end{tabular}
\caption{Example of a geographical map and its corresponding graph representation}
\label{fig:graph-map}
\end{figure}
The geographical map is represented as a directed graph as illustrated in Figure~\ref{fig:graph-map}.B, in which edges correspond to streets and nodes correspond to crossroads; for simplicity, we assume that an actor (car or an obstacle) occupies a street completely (a longer street can be represented by several edges in the graph).
A set of functions is defined to explore this graph, to compute itineraries, etc.
The example below shows a fragment of the LNT function~\lstinline+initial_map+ that returns the graph corresponding to the map illustrated on Figure~\ref{fig:graph-map}.
\begin{lstlisting}[language=LNT]
function initial_map : Graph is
var g: Graph, e: Edges, v: Vertices in
v := {0, 1, 2, 3, 4, 5, 6, 7, 8};
e := {Edge (0, Coronation_Street, 1),
Edge (0, Corporation_Street, 3),
Edge (1, Coronation_Street_bis, 0),
... };
g := Graph (v, e);
return g
end var
end function
\end{lstlisting}
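The itinerary computation performed by process \lstinline+DECISION+ relies on classical graph exploration over this representation. The following Python sketch is illustrative only: the function name, the breadth-first strategy, and the street names beyond those of Figure~\ref{fig:graph-map} are our assumptions, not part of the LNT model. It computes a shortest path avoiding obstacle-occupied streets:

```python
from collections import deque

def itinerary(edges, src, dst, blocked):
    """Breadth-first search on the directed street graph: edges maps a
    crossroad to its outgoing (street, crossroad) pairs and blocked is
    the set of streets currently occupied by obstacles.  Returns the
    street list of a shortest obstacle-free path, or None."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while parent[u] is not None:
                street, prev = parent[u]
                path.append(street)
                u = prev
            return path[::-1]
        for street, v in edges.get(u, []):
            if street not in blocked and v not in parent:
                parent[v] = (street, u)
                queue.append(v)
    return None   # no itinerary avoiding the obstacles

# tiny map in the spirit of Figure 2; High_Street and Station_Road are
# hypothetical street names added for the example
edges = {0: [("Coronation_Street", 1), ("Corporation_Street", 3)],
         1: [("Coronation_Street_bis", 0), ("High_Street", 2)],
         3: [("Station_Road", 2)]}
assert itinerary(edges, 0, 2, {"High_Street"}) == ["Corporation_Street", "Station_Road"]
```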
\paragraph{Obstacles.}
Like the car, obstacles move from a street-segment to an (adjacent) street-segment.
If an obstacle is on the same segment as the car, this corresponds to a collision.
To limit the complexity, each obstacle executes a fixed number of random or statically chosen moves. More precisely, an obstacle can (1) leave, (2) turn in one of the crossroads, or (3) perform a random move.
These moves are defined by the following LNT type \lstinline+Operation+.
\begin{lstlisting}[language=LNT]
type Operation is -- obstacle operations
random,
turned_n (N: Nat),
leave
end type
\end{lstlisting}
The behaviour of an obstacle is specified in the process \lstinline+OBSTACLE+.
It consists of executing the sequence of the obstacle's moves and then stopping.
The first move corresponds to the appearance of the obstacle at its initial position.
For \lstinline+random+, the next move is chosen randomly among those possible in the current map (e.g., leave or turn (0)) as shown in the following LNT fragment of the process \lstinline+OBSTACLE+.
\begin{lstlisting}[language=LNT]
if opi == random then
select
opi := leave
[] var n0, n1: Nat in
n1 := length (succ_l (map.G.E, o.position));
n0 := any Nat where n0 <= n1;
opi := turned_n (n0)
end var
end select
end if;
\end{lstlisting}
Note that the LNT operator \lstinline+select+ specifies non-deterministic choice.
Finally, when the next move is chosen, the obstacle only executes the move if the destination segment is free; otherwise, it waits.
Note that the process \lstinline+OBSTACLE+ receives from the process \lstinline+ENVIRONMENT+ updates of the positions of the car and obstacles.
The LNT process \lstinline+OBSTACLES_MANAGER+ groups several obstacles in a parallel composition, providing the initial list of moves to each obstacle.
\paragraph{Map management.}
The map is akin to a global variable modified at each move of the car or an obstacle.
The updates of the map are handled by the process \lstinline+MAP_MANAGEMENT+, which ensures that:
(1) the geographical map information, such as the position of the car (\lstinline+map.c+) and obstacles (\lstinline+grid+), is shared with the process \lstinline+RADAR+ (by sending this information to \lstinline+RADAR+ on gate \lstinline+POSITIONS+);
(2) the geographical map information is updated when the car or the obstacles move (by receiving these moves from the processes \lstinline+ACTION+ and \lstinline+RADAR+)%
; and
(3) the processes can only be executed as long as the car has neither arrived at its destination nor crashed, and at least one obstacle still has some moves to execute.
The environment is defined as the parallel composition of the process \lstinline+OBSTACLES_MANAGER+ and \lstinline+MAP_MANAGEMENT+ in the process \lstinline+ENVIRONMENT+.
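The three guarantees above can be pictured in ordinary code. The following Python fragment is purely illustrative; the dictionary-based state encoding and all names are assumptions, not part of the LNT model:

```python
# Illustrative sketch of one round of the MAP_MANAGEMENT loop; the state
# encoding and all names are hypothetical, not taken from the LNT model.

def map_management_step(state, moves):
    """Apply one round of actor moves to the shared ground-truth map."""
    # (3) run only while the car has neither arrived nor crashed and
    #     at least one obstacle still has moves to execute
    if state["arrived"] or state["collision"] or not state["moves_left"]:
        return False
    # (2) update the map with the moves received from ACTION and RADAR
    for actor, position in moves:
        state["positions"][actor] = position
    # (1) share the updated positions with the RADAR process
    state["sent_to_radar"] = dict(state["positions"])
    return True
```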
\subsection{Module hierarchy and scenario module}
The model is split into three parts.
First, a part with definitions independent of the AV, e.g., types related to graph definitions together with the classical graph functions.
Second, a generic part, defining types, functions, and processes common to all configurations.
Finally, a particular part, defining the constants characterizing the considered configuration: the geographical map, the number of obstacles, as well as the initial position and behaviour of these obstacles.
\paragraph{Stats.}
The resulting LNT specification has 881 lines dispatched in four modules, containing 14~types, 19~functions, six channels, and ten~processes.
Using CADP, for the map with 22 streets and 8 crossroads represented in Figure~\ref{fig:graph-map} and two obstacles, each with one random move, we generated (in about a minute on a standard laptop) the corresponding LTS (Labelled Transition System): 59,781 states and 179,884 transitions (13,305 states and 28,601 transitions after strong bisimulation minimization).
\subsection{Validation by model checking}
We validated our LNT model by checking several safety and liveness properties characterizing the correct behavior of the AV.
We expressed the properties in MCL~\cite{Mateescu-Thivolle-08}, the data-handling, action-based, branching-time temporal logic of the on-the-fly model checker of CADP.
We describe here two properties in natural language and in MCL---more were verified for the associated GRL model~\cite{Marsso-Mateescu-Parissis-Serwe-19}.
\paragraph{Property 1:}
\sloppypar
``\emph{The position of the car is correctly updated after each move of the car.}''
This safety property expresses that on each transition sequence, an update of the car position (``\lstinline+UPDATE_POSITION ?current_street+'', where \lstinline+current_street+ is the street on which the car currently is) followed by a car move (``\lstinline+CAR_MOVE ?control+'', where \lstinline+control+ is a move) cannot be followed by an update of the car position inconsistent with \lstinline+current_street+, \lstinline+control+, and the map.
This can be expressed in MCL using the necessity modality below, which forbids transition sequences containing inconsistent position updates:
\begin{lstlisting}[language=MCL]
[ true* .
{ UPDATE_POSITION ?current_street:String } .
(not ({ CAR_MOVE ... } or { UPDATE_POSITION ... }))* .
{ CAR_MOVE ?control:String } .
(not ({ CAR_MOVE ... } or { UPDATE_POSITION ... }))* .
{ UPDATE_POSITION ?new_street:String where
not (Consistent_Move (current_street, control, new_street)) }
] false
\end{lstlisting}
The values of the current position, the move, and the new position of the car occurring as offers for the gates \lstinline+UPDATE_POSITION+ and \lstinline+CAR_MOVE+ are captured in the variables \lstinline+current_street+, \lstinline+control+, and \lstinline+new_street+ of the corresponding action predicates (surrounded by curly braces) and reused in the \lstinline[language=MCL]+where+ clause of the last action predicate.
The Boolean function \lstinline[language=MCL]+Consistent_Move+ defines all valid combinations for \lstinline+current_street+, \lstinline+control+, and \lstinline+new_street+ allowed by the map.
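For illustration only, a Python analogue of such a consistency predicate could look as follows; the successor table (built from the street names of the \lstinline+initial_map+ example) and the textual move encoding are assumptions, not the actual LNT definitions:

```python
# Hypothetical Python analogue of the Consistent_Move predicate of
# Property 1; the successor table and the move encoding are assumptions.

SUCCESSORS = {  # street -> ordered list of streets reachable at its end
    "Coronation_Street": ["Corporation_Street", "Coronation_Street_bis"],
    "Corporation_Street": ["Coronation_Street"],
}

def consistent_move(current_street, control, new_street):
    """Check that a position update matches the move and the map."""
    successors = SUCCESSORS.get(current_street, [])
    if control.startswith("turn_"):
        index = int(control.split("_")[1])  # "turn_0" selects successor 0
        return index < len(successors) and successors[index] == new_street
    return False  # other controls are not modeled in this sketch
```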
\paragraph{Property 2:} ``\emph{Inevitably, the system should reach a state where either the car arrived (\lstinline[language=MCL]+ARRIVED+), or a collision occurred between the car and an obstacle (\lstinline[language=MCL]+COLLISION+), or all obstacles have finished their list of moves (\lstinline[language=MCL]+END_OBSTACLE+).}''
This property can be expressed in MCL using the formula below, which forbids infinite transition sequences not containing one of the three terminal actions (\lstinline[language=MCL]+TERMINATE+):
\begin{lstlisting}[language=MCL]
not <(not TERMINATE)>@
\end{lstlisting}
Here, \lstinline[language=MCL]+TERMINATE+ encompasses all three terminal actions \lstinline[language=MCL]+ARRIVED+, \lstinline[language=MCL]+COLLISION+, and \lstinline[language=MCL]+END_OBSTACLE+.
Note that this formula correctly expresses the property if and only if the model is free of deadlocks, which can be easily verified by another check.
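On a finite LTS, this formula holds exactly when the transition relation restricted to non-terminal actions contains no cycle, since every infinite run avoiding \lstinline[language=MCL]+TERMINATE+ would have to traverse such a cycle. A hypothetical Python encoding of that check, assuming the LTS is given as a list of (source, label, target) triples:

```python
# Sketch of what "not <(not TERMINATE)>@" checks on a finite LTS:
# there must be no cycle made only of non-terminal transitions.
# The triple-based LTS representation is an assumption.

def has_nonterminating_cycle(transitions, terminate_labels):
    # keep only transitions whose label is not a terminal action
    graph = {}
    for src, label, dst in transitions:
        if label not in terminate_labels:
            graph.setdefault(src, []).append(dst)
    # DFS-based cycle detection on the restricted graph
    WHITE, GREY, BLACK = 0, 1, 2
    color = {}
    def dfs(state):
        color[state] = GREY
        for succ in graph.get(state, []):
            c = color.get(succ, WHITE)
            if c == GREY:          # back edge: cycle found
                return True
            if c == WHITE and dfs(succ):
                return True
        color[state] = BLACK
        return False
    return any(dfs(s) for s in list(graph) if color.get(s, WHITE) == WHITE)
```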
Note that we also specified test purposes and generated test cases from this model using TESTOR \cite{Marsso-Mateescu-Serwe-18} as part of the evaluation of a test generation approach~\cite{Marsso-Mateescu-Serwe-20}.
\subsection{Discussion}
We specified an AV close to a deployed one, i.e., we considered an AV as consisting of several components for action, decision, and perception.
To naturally specify the search of an itinerary, we represented the geographical map as a graph, and formalized classical graph exploration algorithms.
This model is a translation of a GALS model~\cite{Marsso-Mateescu-Parissis-Serwe-19,Marsso-19} previously described in the formal GRL language~\cite{Jebali-Lang-Mateescu-16,Jebali-16}, where the various components of the car are represented as synchronous programs interacting globally in an asynchronous manner.
This first model can directly be used to test the control of an AV, since it computes the possible itineraries avoiding collisions.
However, the abstraction of each component can be refined according to different evaluation goals.
For instance, to test only the perception component (e.g., the radar), the current formal model is too general, and its abstraction of the map is not precise enough.
In particular, more precision is required for the geographical map (e.g., concrete angles and lengths of street segments) and (the trajectories of) obstacles (e.g., their size and speed).
On the other hand, some information might not be necessary to test a particular component (e.g., the control and action processes might be irrelevant to test the perception): thus, these aspects can be dropped from the model, consequently improving validation performance by reducing the size of the underlying LTS.
In the next section, we will present a second formal model for an AV.
While the overall structure of the second model is the same (in particular the module hierarchy and the processes \lstinline+OBSTACLES_MANAGER+ and \lstinline+MAP_MANAGEMENT+), it focuses on the perception component, which requires a more precise representation of the geographical map.
\section{Model focused on perception}
\label{sec:grid}
To validate the perception components, which are crucial for an AV, it is necessary to devise a model focused on perception aspects. We present below such a model in LNT, derived from the control-focused model given in Section~\ref{sec:graph}. This second model keeps the same management of actors and important events (arrival of the car at destination, end of obstacle trajectories, and collision), but represents the map using an array instead of a graph, and abstracts away most car components, except for the perception LiDAR. This perception-focused model was used to generate scenarios to be executed on an AD simulator.
\subsection{Elements and processes composing the model}
\input{grid-architecture}
\paragraph{Constants and Channels.}
The principal constants are the height and width of the array defining the map, and those of the array defining the perception grid of the \lstinline+LIDAR+ component. They are mostly used to verify whether an actor is present on the map. The default size of the map is ten$\times$ten cells, and it can be increased for a higher resolution. The various processes in the model communicate through gates defined with channels used to convey specific data values. We defined seven different channels in the model.
\paragraph{Obstacles.}
The obstacles are actors on the map that represent various objects and elements of the environment. Obstacles can have various sizes (minimum one cell) as they can be pedestrians, cars, or buildings. Each obstacle has a unique identifier defined as a value of an enumerated type. Obstacles can be static or mobile, the latter ones being able to move in any direction. Each move consists in traversing a number of cells determined by the obstacle speed. Some obstacles can hide the view (e.g., a car or a building) and some cannot (e.g., a pedestrian or a pole).
In a configuration, each obstacle has a list of moves defining its behavior. Its moves depend on its speed and direction. An obstacle may also choose not to move or may randomly choose between several possible directions, which adds nondeterminism to the model and enables the exploration of further scenarios. As in the first model, obstacle moves must not lead to collisions (i.e., end on occupied cells of the map) in order to yield relevant scenarios. To keep the model size tractable, we enable full random moves only for obstacles close enough to the car, the random moves of the farther obstacles being restricted to directions bringing them closer to the car.
Each obstacle is modeled by an instance of the \lstinline+OBSTACLE+ process, which has as parameters the obstacle's identifier, its list of moves, and a flag indicating whether the obstacle has a cyclic movement. Before attempting the next obstacle move (at the head of the list), process \lstinline+OBSTACLE+ obtains, via gate \lstinline+GRID_UPDATE+, the current map from the \lstinline+MAP_MANAGER+ process. Based on the map, on the current obstacle information (position and speed), and on the direction of the next move, the obstacle determines whether the move is valid, i.e., does not lead to a collision. Then, process \lstinline+OBSTACLE+ sends on gate \lstinline+OBSTACLE_POSITION+ the previous position, next position, and new direction of the obstacle (including the case when the obstacle does not move) to process \lstinline+MAP_MANAGER+, in charge of updating the map.
If the next direction of move is random, the possible moves of the obstacle are constrained by the \lstinline+RESTRAND+ process, described later in this section. When the list of moves is finished, process \lstinline+OBSTACLE+ performs an \lstinline+END_OBSTACLE+ action and stops moving, except when it has a cyclic behaviour, in which case it starts again using its list of moves given initially. The process \lstinline+OBSTACLES_MANAGER+ is the parallel composition of all \lstinline+OBSTACLE+ processes in the considered configuration.
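The validity check performed by an obstacle before each move can be illustrated as follows. This Python sketch is an assumption (the LNT model works on its own map and obstacle types), simplified to a single-cell obstacle:

```python
# Illustrative sketch (all names hypothetical) of the validity check an
# OBSTACLE performs before moving: every cell traversed, determined by
# the direction and speed, must exist on the map and be free.

DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def valid_move(grid, position, direction, speed):
    x, y = position
    dx, dy = DIRECTIONS[direction]
    for step in range(1, speed + 1):
        cx, cy = x + step * dx, y + step * dy
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            return False        # would leave the map
        if grid[cy][cx] != "free":
            return False        # would collide with another actor
    return True
```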
\begin{figure}
\centering
\includegraphics[scale=0.5]{map_grid_carla_cross.eps}
\caption{Representation of the map with three actors (car and two mobile obstacles)}
\label{fig:map_grid}
\end{figure}
\paragraph{Map.}
The map is essentially the representation of the ground truth, the world in which the different actors move. The map is represented as a 2-dimensional array composed of cells with different values: \lstinline+free+ when there is neither an obstacle nor the car on the cell, \lstinline+occupied (obstacle)+ when the cell is occupied by an obstacle (including all the obstacle information), and \lstinline+car_pos+ when the cell is occupied by the car (we consider only one car on the map). The map is defined and initialized in LNT as follows:
\begin{lstlisting}[language=LNT]
type Map_Info is free, car_pos, occupied (O:Obstacle) end type
type Map_Line is array [0..9] of Map_Info end type
type Map is array [0..9] of Map_Line end type
function initiate_map (out var m:Map) is
-- create an empty map
m := Map (Map_line (free));
-- add static obstacles into the ground truth array
add_obstacle (!?m, Obstacle (Scenery,
CellRectPosition (Position (0, 0)),
Obstacle_Speed (0), none, false));
...
-- add mobile obstacles into the ground truth array
add_obstacle (!?m, Obstacle (Other_Car,
CellRectPosition (Position (6, 7)),
Obstacle_Speed (1), up, false));
...
end function
\end{lstlisting}
\noindent
The \lstinline+MAP_MANAGER+ process is a central part of the model, in charge of maintaining the map (ground truth) and of communicating with the other processes to update the position of the actors. The map is initialized with the positions of static obstacles and the initial positions of mobile obstacles. It is sent on gate \lstinline+GRID_UPDATE+ to the \lstinline+OBSTACLE+ processes to determine their next moves, and is also sent on gate \lstinline+GRID_CAR+ (accompanied by the car position) to the process \lstinline+LIDAR_MANAGER+ to generate the perception grid. If at some moment the position of the car becomes the same as that of one of the obstacles, \lstinline+MAP_MANAGER+ performs an action \lstinline+COLLISION+ and stops, entailing the termination of the whole scenario. The \lstinline+ENVIRONMENT+ process is the parallel composition of \lstinline+MAP_MANAGER+ and \lstinline+OBSTACLES_MANAGER+.
\paragraph{Car.}
The \lstinline+CAR+ process is the parallel composition of the \lstinline+LIDAR_MANAGER+ and \lstinline+CAR_MOVE+ processes, the latter being in charge of managing the car moves. Process \lstinline+CAR_MOVE+ has as parameters the car's information (position, list of moves, speed, and a flag indicating if the car has a cyclic movement) and is similar to process \lstinline+OBSTACLE+. The car moves essentially in the same way as the obstacles, except that it can also move to an occupied cell, and thus trigger a \lstinline+COLLISION+ action from the \lstinline+MAP_MANAGER+ process. Upon each move of the car, its previous and current positions are transferred to the \lstinline+MAP_MANAGER+ process on gate \lstinline+CAR_POSITION+ to be updated on the map. If the car has finished its list of moves, it performs an action \lstinline+ARRIVAL+, which terminates the scenario.
\paragraph{LiDAR.}
The perception grid represents the perception of the car (as computed by the LiDAR) up to a certain distance. It is modeled as a 2-dimensional array centered on the car position and having a default size of 5$\times$5 cells. The cells of the perception grid have different values from those of the map: \lstinline+F+ for free cells, \lstinline+C+ for the car position, \lstinline+O+ for occupied cells, \lstinline+M+ for cells that were free on the last grid but became occupied, \lstinline+T+ for cells occupied by a transparent obstacle, \lstinline+N+ similar to \lstinline+M+ but for cells occupied by transparent obstacles, and \lstinline+U+ for unknown cells, i.e., those out of the map (if the grid exceeds the map boundaries) or those hidden from view (behind an opaque obstacle). The perception grid is defined as follows and initialized as an empty array:
\begin{lstlisting}[language=LNT]
type Perception_Info is C, F, O, M, T, N, U end type
type L is array [0..4] of Perception_Info end type
type P is array [0..4] of L end type
\end{lstlisting}
\noindent
There are several functions calculating the perception grid. The principal one takes the map, the previous perception grid and the current position of the car to calculate the new grid by translating the values of map cells into the values of the corresponding grid cells. Other functions calculate the hidden cells depending on the presence of opaque obstacles around the car, and update the value of grid cells from \lstinline+F+ to \lstinline+M+ or to \lstinline+N+ if these changed between two consecutive grids.
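As an illustration of the principal grid function, the following Python sketch extracts a 5$\times$5 window of the map centered on the car. It is a simplification under assumed encodings; occlusion and the \lstinline+M+/\lstinline+N+/\lstinline+T+ values are omitted:

```python
# Simplified, hypothetical sketch of the principal grid function:
# translate map cells into perception values around the car.
# 'C' car, 'F' free, 'O' occupied, 'U' unknown (outside the map).

def perception_grid(ground_truth, car, size=5):
    cx, cy = car
    half = size // 2
    grid = []
    for dy in range(-half, half + 1):
        row = []
        for dx in range(-half, half + 1):
            x, y = cx + dx, cy + dy
            if not (0 <= y < len(ground_truth) and 0 <= x < len(ground_truth[0])):
                row.append("U")   # outside the map boundaries
            elif (x, y) == (cx, cy):
                row.append("C")   # the car itself
            elif ground_truth[y][x] == "free":
                row.append("F")
            else:
                row.append("O")   # occupied by an obstacle
        grid.append(row)
    return grid
```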
The perception grid is maintained by the \lstinline+LIDAR_MANAGER+ process, which sends on gate \lstinline+LIDAR_MAP+ the new value of the grid and map to process \lstinline+CAR_MOVE+ to compute the next car moves. Process \lstinline+LIDAR_MANAGER+ also has a special parameter to indicate whether the contents of the grid must be kept available in the LTS for validation purposes.
\paragraph{Scheduler and Restrand.}
Two auxiliary processes optimize the model regarding both its scalability and its realism when connected to an AD simulator. The \lstinline+SCHEDULER+ process introduces additional synchrony in the model to bring it closer to its physical counterpart, by enforcing that between two consecutive time instants, every actor must perform one move (at its own speed). Process \lstinline+SCHEDULER+ monitors the moves of the car and the obstacles by synchronizing on gates \lstinline+CAR_POSITION+ and \lstinline+OBSTACLE_POSITION+, indicates consecutive time periods by emitting signals on a special gate \lstinline+TICK+, and forces all actors (in a fixed order) to perform one move between two \lstinline+TICK+ actions (note that an obstacle may choose not to move, which amounts to keeping its position unchanged). This improves the execution of transition sequences in the AD simulator, by allowing all actor moves between two \lstinline+TICK+ actions to be performed in parallel, yielding realistic movements, as opposed to jerky ones induced by equivalent, but less realistic interleavings in the absence of \lstinline+TICK+ actions. This also reduces the size of the LTS by pruning redundant interleavings of actions (equivalent orderings of obstacle moves between two \lstinline+TICK+ actions). Between two \lstinline+TICK+ actions, the car moves after all obstacles, because at the end of the scenario (car arrival or a collision), the obstacles must still be able to finish their moves in the simulator.
\begin{lstlisting}[language=LNT]
process SCHEDULER [OBSTACLE_POSITION:O, CAR_POSITION:C, TICK:none] is
loop
OBSTACLE_POSITION (?Obstacle (Pedestrian, any RectPosition,
any Obstacle_Speed, any Direction, true),
?any Obstacle, ?any Direction);
... -- all the other obstacles
CAR_POSITION (?any Position, ?any Position);
TICK
end loop
end process
\end{lstlisting}
\noindent
The \lstinline+RESTRAND+ (for ``restraining random'') process limits the random moves of the obstacles to keep them in a meaningful neighbourhood of the car. This is useful both for specifying scenarios with relevant obstacle trajectories (obstacles close enough to be perceived by the LiDAR) and for reducing the LTS. Process \lstinline+RESTRAND+ monitors the obstacle moves by synchronizing on gate \lstinline+OBSTACLE_POSITION+ and constrains the random moves (depending on the car position, the previous obstacle position, a direction and the minimal distance) to force obstacles to get close enough to the car to be perceived.
\begin{lstlisting}[language=LNT]
OBSTACLE_POSITION (?any Obstacle, ?any Obstacle, dir)
where dir != random
[] OBSTACLE_POSITION (?obst, ?obst_prev, random)
where moveAllowed (car, obst_prev, obst.dir, distMin)
\end{lstlisting}
\subsection{Module hierarchy and scenario module}
The LNT model is structured as a hierarchy of modules, depending on the type of data the functions or processes work on. All type definitions are grouped in the top module of the hierarchy, which is then separated into two branches, one for the modules concerning the environment (obstacles, map) and the other for the modules concerning the car. The main module inherits from both branches.
The modules are the following: \lstinline+types+ contains the data types, constants, and generic functions; \lstinline+lidar+ contains the functions and the process for the perception grid; \lstinline+car+ defines the behaviour of the car; \lstinline+map+ contains the functions for updating the map; \lstinline+obstacles+ contains the functions and processes to move obstacles; \lstinline+scenario+ contains the definitions specific to a configuration; \lstinline+map_manager+ contains the process monitoring the map; finally, the main module contains the parallel composition of the principal processes.
To easily build various configurations of the LNT model, the scenario module makes it possible to choose the map, the initial positions of (static and dynamic) obstacles, and the behaviour of the car.
The only parameter not defined in the scenario module is the size of the map, defined in the \lstinline+types+ module because of the hierarchy. The example \lstinline+SCENARIO+ given in Appendix~\ref{ap:grid-model} contains four static obstacles and two moving obstacles (another car and a pedestrian) with a simple trajectory. We decided to place the processes \lstinline+RESTRAND+ and \lstinline+SCHEDULER+ in the scenario module because these processes depend on the number of obstacles and, for \lstinline+RESTRAND+, on the distance from the car that is chosen for randomly moving obstacles.
\paragraph{Stats.}
The resulting LNT specification has 1059 lines (excluding the scenario module, the size of which depends on the configuration---the one given in Appendix~\ref{ap:grid-model} has 161 lines) dispatched in eight modules, containing 13~types, 38~functions, seven channels, and eleven processes.
Using CADP, for the map of size ten$\times$ten represented in Figure~\ref{fig:map_grid} and two obstacles, we generated (in less than a minute on a standard laptop) the corresponding LTS with 27,168 states and 50,719 transitions (14,595 states and 28,287 transitions after strong bisimulation minimization).
\subsection{Test case generation for simulation scenarios}
The model focused on perception was devised for the synthesis of AD simulation scenarios from conformance test cases generated using the TESTOR~\cite{Marsso-Mateescu-Serwe-18} tool, as reported in~\cite{Horel-Laugier-Marsso-Mateescu-Muller-Paigwar-Renzaglia-Serwe-22}. The model was first validated by checking several temporal logic properties expressed in MCL~\cite{Mateescu-Thivolle-08}.
The generation of test cases is guided by test purposes, which are high-level descriptions of the sequences of actions to be reached by testing. A test purpose is an LNT process specifying a sequence of actions terminated by a \lstinline+TESTOR_ACCEPT+ action characterizing the desired situation. An example of test purpose defining a sequence of (zero or more) actions leading to a \lstinline+COLLISION+ is given below.
\begin{lstlisting}[language=LNT]
process PURPOSE [TESTOR_ACCEPT: none, COLLISION: COLLISION_O] is
COLLISION (Pedestrian);
loop TESTOR_ACCEPT end loop
end process
\end{lstlisting}
\noindent
We tested the model with ten different configurations, on three different maps, with three different test purposes. The first configuration was the one represented in Figure~\ref{fig:map_grid}. At a crossroad, a car (moving obstacle) is going straight to the north, the car is following it, and a pedestrian is trying to cross and walk between them. The two different outcomes defined in test purposes are that a collision happens or not. Three other configurations were variants of this one on the same map. In two other configurations, the obstacles do not have defined trajectories, their moves being specified in the test purpose (reaching a collision or a particular perception grid). Another configuration involved a map representing a highway where three vehicles (including the car) move at different speeds in the same direction and try to change lanes. A further configuration involved a T-shaped crossroad, where another vehicle ignores the road signs, leading to a collision with the car. Finally, we specified two configurations containing an additional obstacle that may produce near misses, where the car would be just next to an obstacle but not collide with it.
\subsection{Discussion}
This second model focuses on a particular component (i.e., the perception), with a more precise representation of the geographical map.
The advantage of this focus is the possibility to refine the precision of the moves of the obstacles and the car (e.g., by increasing the resolution of the map and perception grid) and to fine-tune the model to cover a large number of relevant AD perception scenarios. For instance, this model enables random trajectories for the obstacles with different speeds around the car, within an area of parameterized size managed by the \lstinline+RESTRAND+ process.
Although the second model focuses on the perception component, (re)integrating a control component could enrich the car's trajectory, which also impacts the perception in AD scenarios.
However, a simple control component computing a random trajectory (or executing a precomputed one) for the car would be sufficient.
\section{Introduction}
Autonomous vehicles (AV) are complex safety critical systems, as undesired behaviours can lead to fatal accidents~\cite{boudette-21}.
Both the complete AV and its components need to be tested to ensure that they can handle critical scenarios.
Because these critical scenarios are unlikely to happen in real environments, a common practice in robotics~\cite{UNECE-21} is to reproduce these critical scenarios in an autonomous driving (AD) simulator, such as CARLA~\cite{Dosovitskiy17}.
The specifications of these critical scenarios are obtained manually, by random generation, or by derivation from a (formal) model~\cite{Fremont-Dreossi-Ghosh-et-al-19,Riedmaier-et-al-20,Horel-Laugier-Marsso-Mateescu-Muller-Paigwar-Renzaglia-Serwe-22}.
The main contributions of this paper are
(a) two \emph{formal} models of an AV and its environment, which can be used to generate relevant critical scenarios for testing AVs and/or their components, and
(b) a discussion comparing the models and motivating the existence of two different models by their intended principal uses.
Both models describe an autonomous ego vehicle, called car, moving around in a scene, called (geographical) map, towards a goal or destination position, and interacting with its environment, i.e., a given set of moving obstacles (pedestrians, cyclists, other cars, etc.) to avoid collisions.
The formal models are written in the LNT language~\cite{Garavel-Lang-Serwe-17,Champelovier-Clerc-Garavel-et-al-10-v7.0}, which is the most recent modeling language supported by the CADP verification toolbox~\cite{Garavel-Lang-Mateescu-Serwe-13} and a state-of-the-art replacement for the international standards LOTOS and E-LOTOS~\cite{Garavel-Lang-Serwe-17}.
We chose LNT rather than a scenario modeling language such as Scenic~\cite{Fremont-Dreossi-Ghosh-et-al-19} to illustrate a model-based, formal verification and testing approach~\cite{Marsso-Mateescu-Serwe-20,Horel-Laugier-Marsso-Mateescu-Muller-Paigwar-Renzaglia-Serwe-22}.
The first formal model specifies the control of the car (including route planning), considering an abstract representation of the geographical map as a graph.
The second model focuses on the perception components of the car, and therefore it does not need to include a specification of the vehicle's control, but requires a refined, more precise representation of the map.
Neither model has previously been fully disclosed.
We validated both models using different approaches supported by the CADP tools.
First, we checked several safety and liveness properties characterizing the correct behavior of an AV.
Concretely, we expressed the properties in the MCL~\cite{Mateescu-Thivolle-08} data-handling, action-based temporal property language, and verified them on the models using the on-the-fly model checker of CADP.
Second, we generated several relevant AD scenarios from the models.
More precisely, we used TESTOR~\cite{Marsso-Mateescu-Serwe-18}, a tool developed on top of CADP for on-the-fly conformance test case generation guided by test purposes, to generate abstract test cases, which were automatically translated into AD scenarios~\cite{Horel-Laugier-Marsso-Mateescu-Muller-Paigwar-Renzaglia-Serwe-22}.
The rest of this paper is organized as follows.
Section~\ref{sec:graph} presents the first formal model of an AV, including its control, and the validation of the model using safety and liveness properties.
Section~\ref{sec:grid} presents the second formal model of an AV, with a refined map, and generation of AD scenarios based on the model.
Section~\ref{sec:conclusion} compares both models and gives concluding remarks.
Appendices~\ref{ap:graph-model} and \ref{ap:grid-model} give the complete LNT source code of the first and second model, respectively.
\section{Full model focused on control}
\label{ap:graph-model}
This appendix presents the first LNT model of the autonomous vehicle (AV) and its interaction with the environment.
This model is fully self-contained and does not depend on any externally-defined library.
For readability, the specification has been split into eight parts, each part being devoted to a particular module, or to a collection of test purposes.
The first parts contain general definitions that could be independent of the autonomous vehicle; starting from Section~\ref{section:av-related}, the definitions become increasingly AV-specific.
\subsection{Definitions of generic graph functions and datatypes}
\label{section:graph_car}
The module \lstinline+graph_car+ defines the types related to graph definitions together with usual graph-exploration functions.
This module is independent of the autonomous vehicle: for use in another context, one only needs to replace the type \lstinline+Street+ by an appropriate type characterising edge labels, e.g., the generic type \lstinline+String+ or another type specific to a domain.
The module clause \lstinline+with+ requests the automatic definition of predefined functions for all types defined in the module.
\lstinputlisting{graph_car.lnt}
\subsection{Definitions of AV specific datatypes}
\label{section:av-related}
Using the types and functions defined in module \lstinline+graph_car+ (see Section~\ref{section:graph_car}), the module \lstinline+types+ defines the AV specific datatypes together with their functions.
It also defines the channels, i.e., the types of the offers exchanged during rendezvous.
\lstinputlisting{types.lnt}
\subsection{Definitions for the autonomous vehicle}
\label{section:car}
The four components of the autonomous car are defined as separate LNT processes \lstinline+PERCEPTION_RADAR+, \lstinline+PERCEPTION_GPS+, \lstinline+DECISION+, and \lstinline+ACTION+.
These four components are composed in parallel in the process \lstinline+CAR+ and synchronize on their shared actions using the parallel composition operator.
\lstinputlisting[firstline=3,lastline=153]{car.lnt}
\subsection{Definitions for the autonomous vehicles environment}
\label{section:environment}
The environment of the car is modeled by two LNT processes.
First, the process \lstinline+OBSTACLE+ specifies the behaviour of an obstacle.
Second, the process \lstinline+MAP_MANAGEMENT+ manages the ground truth, i.e., the positions of the car and obstacles on the map.
Because many obstacles can interact with the AV, we added a third process \lstinline+OBSTACLES_MANAGER+ instantiating the required number of obstacles; this process is presented in the next section.
The \lstinline+MAP_MANAGEMENT+ and \lstinline+OBSTACLES_MANAGER+ processes are composed in the process \lstinline+Environment+.
\lstinputlisting[firstline=152,lastline=295]{car.lnt}
The processes described in Sections~\ref{section:car} and \ref{section:environment} are grouped into a module \lstinline+car+, collecting the behavioral specification.
\subsection{Example of a simulation scenario (scene and actor behaviour)}
A scenario, or specific instance of the model, is characterized by the map, together with the initial position and destination of the car, and the set of obstacles with their initial position and behavior.
This information is grouped in the \lstinline+main+ module, providing the (constant) functions for the \lstinline+initial_map+, the process handling the parallel composition of the various obstacles, and the principal process \lstinline+MAIN+ instantiating everything.
\lstinputlisting{main.lnt}
\section{Full model focused on perception}
\label{ap:grid-model}
\input{modules-schema}
This appendix contains the full LNT code of the model with the refined array-based representation of the map required by the focus on perception.
There is a separate subsection for each module.
The order of subsections follows the module inclusion as shown in the architecture depicted in Figure~\ref{fig:model_architecture} (from the deepest inclusion to the top-level module).
\subsection{Definitions of datatypes}
Module \lstinline+modele_types+ defines the constants (including the size of the arrays representing the map and the perception grid), the data types encoding the different actors and their attributes, and the channels used for communication between processes, together with two extra functions used to factor the update of the position for the obstacle data type.
\lstinputlisting{modele_types.lnt}
\subsection{Definition of the LiDAR}
Module \lstinline+modele_lidar+ contains the functions to create and modify the perception grid, together with the process managing the grid.
\lstinputlisting{modele_lidar.lnt}
\subsection{Definitions of the car}
Module \lstinline+modele_car+ contains the functions to compute the moves of the car and the process managing the car's behaviour, according to the current configuration of the ground truth and the arguments given in the scenario module to define the car's behaviour.
\lstinputlisting{modele_car.lnt}
\subsection{Definitions of the geographical map}
Module \lstinline+modele_map+ contains all necessary functions to manage the array representing the ground truth.
These functions are used to update the array and check the value of cells when moving an actor.
The module also contains functions factoring some features, such as the update of an attribute or the access to an array cell.
\lstinputlisting{modele_map.lnt}
\subsection{Definitions of the obstacles}
Module \lstinline+modele_obstacles+ contains the function to correctly compute obstacle moves and the process managing the behaviour of the obstacles, depending on the arguments given in the scenario module (see Section~\ref{section:modele_scenario}) and the current version of the grid received via the gate with channel \lstinline+G+. It also contains the functions used to restrain the random moves of an obstacle.
\lstinputlisting{modele_obstacles.lnt}
\subsection{Example of a simulation scenario (scene and actor behaviour)}
\label{section:modele_scenario}
Module \lstinline+modele_scenario+ is an example of a scenario definition.
It illustrates how the model is configured for a particular scenario (scenery and actors with their behaviour).
Concretely, module \lstinline+modele_scenario+ defines a ten$\times$ten ground truth array scenery with four buildings, i.e., an X-shaped crossroad as shown in Figure~\ref{fig:map_grid} (x-coordinates increase from zero on the left to ten on the right, y-coordinates increase from zero at the top to ten at the bottom).
Moving actors are a pedestrian (moving to the right at speed one), a car (moving upwards at speed two), and the ego vehicle (moving upwards after waiting a bit).
Besides functions to initialize the scenario, module \lstinline+modele_scenario+ also contains a process composing in parallel an instance of process \lstinline+OBSTACLE+ for each moving obstacle, and a process instantiating the ego vehicle as a parallel composition of the motion control process \lstinline+MOVE_CAR+ and the perception process \lstinline+LIDAR_MANAGER+.
Last but not least, module \lstinline+modele_scenario+ also contains the processes \lstinline+SCHEDULER+ and \lstinline+RESTRAND+, which depend on the initial position of the car and the number of moving obstacles in the scenario.
\lstinputlisting{modele_scenario.lnt}
Different scenarios are obtained by simply modifying the contents of module \lstinline+modele_scenario+.
\subsection{Definitions for the environment of the autonomous vehicle}
Module \lstinline+modele_map_manager+ contains the process managing the ground truth map.
Observing the moves of the actors (car and obstacles), process \lstinline+MAP_MANAGER+ updates the map accordingly.
It also detects and signals a collision of the car with an obstacle.
Module \lstinline+modele_map_manager+ also contains the process \lstinline+ENVIRONMENT+, which models the environment of the autonomous vehicle, i.e., the parallel composition of the moving obstacles and the map manager.
\lstinputlisting{modele_map_manager.lnt}
\subsection{Definition of the principal process}
The main module contains the principal process \lstinline+MAIN+, mandatory to compile and generate the model. This process is the parallel composition of the two major processes \lstinline+ENVIRONMENT+ and \lstinline+CAR+, together with the processes \lstinline+SCHEDULER+ and \lstinline+RESTRAND+ adding the global constraints.
\lstinputlisting{main_modele.lnt}
\section{Introduction}
Recent cosmological observations
indicate late-time acceleration of the observable universe \cite{NL1, NL2}. Why the evolution of the universe is interposed between an early inflationary phase and the late-time acceleration is a yet-unresolved problem. Various theoretical attempts have been undertaken to confront this observational fact. Although the simplest way to explain this behavior is the consideration of a cosmological constant \cite{wein}, the known fine-tuning problem \cite{DE} led to the dark energy paradigm. Here one introduces an exotic dark energy component in the form of scalar fields such as quintessence \cite{quint1, quint2, quint3, quint4, quint5, quint6, quint7}, k-essence \cite{kessence1, kessence2, kessence3} etc. Quintessence is based on scalar field models using a canonical field with a slowly varying potential. On the other hand, the models grouped under k-essence are characterized by noncanonical kinetic terms.
A key feature of the k-essence models is that the cosmic acceleration is realized by the kinetic energy of the scalar field. The popular models under this category include the phantom model, the ghost condensate model etc \cite{DE}.
It is well-known that the late time cosmic acceleration requires an exotic equation of state $\omega_{DE} < -\frac{1}{3}$. Current observations allow
$\omega_{DE} < -1$ which can be explained by considering negative kinetic energy
with a field potential.
The resulting phantom model \cite{phantom1, phantom2, phantom3, phantom4, phantom5, phantom6} is extensively used to confront cosmological observation \cite{phantom_obs1, phantom_obs2, phantom_obs3, phantom_obs4, phantom_obs5, phantom_obs6}. This model is however ridden with various instabilities as its energy density is unbounded. This instability can be eliminated in the so-called ghost-condensate models \cite{GC} by including a term quadratic in the kinetic energy.
In this context let us note that to realize the late-time acceleration scenario some self-interaction must be present in the phantom model. In contrast, in the ghost-condensate models the inclusion of self-interaction of the scalar field is believed to be a matter of choice \cite{DE}.
This fact, though not unfamiliar, has not been emphasised much in the literature.
Since very little is known about the nature of dark energy it may appear that the presence or otherwise of an interaction term in the ghost-condensate model may not be ascertained from any fundamental premise. However, in this letter we show that this issue can be settled by demanding a consistent scalar field dynamics. We establish here that this consistency requirement imposes a non-trivial restriction on the choice of the self-interaction in the ghost-condensate model. Using this restriction we show that describing the general evolutionary scenario of the universe using a ghost-condensate without self-interaction may lead to too restrictive a situation. Specifically, in the bouncing universe scenario \cite{ekpyrotic1, ekpyrotic2, ekpyrotic3, bounce1, bounce2, bounce3, bounce4} where the universe bounces from a contracting to an expanding phase, absence of self-interaction of the ghost-condensate is not admissible at all. Further, the restriction obtained here is used to demonstrate that a real solution for the self-interaction potential is compatible with the ghost field dynamics. It may also be noted that in the appropriate limit the ghost-condensate model is known to go over to the phantom model. Reassuringly, the restriction we have derived reproduces the phantom potential \cite{gumjudpai} in the same limit.
At this point it will be appropriate to describe the organisation of this letter. In section 2 the ghost condensate model is introduced where we include an arbitrary self-interaction potential. The equations of motion for the scalar field and the scale factor are derived. These equations exhibit the coupling between the scalar field dynamics and gravity. Expressions for the energy density and pressure of the dark energy components are computed. These expressions are used in section 3 to demonstrate that the requirement of consistency between the Friedmann equations and the scalar field equations imposes a nontrivial restriction on the self-interaction potential in the form of a quadratic equation. The consequences of this are discussed. The concluding remarks are contained in section 4. We use mostly positive signature of the metric.
\section{The ghost condensate model with self-interaction of the scalar field}
\label{model}
In this section we consider the ghost condensate model with a self-interaction potential $V(\phi)$.
The action is given by
\begin{equation}
S=\int d^{4}x \sqrt{-g} \left[\frac{R}{2k^{2}}
+{\cal{L}}_{\phi}
+{\cal{L}}_{\rm{m}}\right], \label{ghost}
\end{equation}
where
\begin{eqnarray}
{\cal{L}}_{\phi} &=& -X + \frac{X^{2}}{M^{4}} - V\left( \phi \right) \label{l}\\
X &=& -\frac{1}{2}g^{\mu \nu}\partial_{\mu}\phi \partial_{\nu}\phi
\label{kinetic}
\end{eqnarray}
$M$ is a mass parameter, $R$ the Ricci scalar and $G = k^{2}/8\pi$ the gravitational constant. The term ${\cal{L}}_{\rm{m}}$ accounts for the total (dark plus baryonic) matter content of the
universe, which is assumed to be a barotropic fluid with energy density $\rho_m$, pressure $p_m$, and equation-of-state parameter $w_m=p_m/\rho_m$. We neglect the radiation sector for simplicity.
The action given by equation (\ref{ghost}) describes a scalar field interacting with gravity.
Invoking the cosmological principle one requires the metric to be
of the Robertson-Walker (RW) form
\begin{equation}
ds^2=-dt^2+a^2(t)\left[\frac{dr^2}{1-Kr^2}+r^2d\Omega_2^2\right],
\end{equation}
where $t$ is the cosmic time, $r$ is the spatial radial coordinate, $\Omega_2$ is the 2-dimensional unit sphere volume, $K$ characterizes the curvature of 3-dimensional space and
$a(t)$ is the scale factor.
The Einstein equations lead to the Friedmann equations
\begin{eqnarray}
H^{2}&=&\frac{k^{2}}{3}\Big(\rho_{m}+\rho_{\phi}\Big)-
\frac{K}{a^2} \label{FR1}\\
\dot{H}&=&-\frac{k^{2}}{2} \Big(\rho_{m}+p_m+\rho_{\phi}+p_{\phi}\Big)+\frac{K}{a^2}, \label{FR2}
\end{eqnarray}
In the above a dot denotes derivative with respect to $t$ and
$H\equiv\dot{a}/a$ is the Hubble parameter. In these expressions,
$\rho_{\phi}$ and $p_\phi$ are respectively the energy density
and pressure of the scalar field. The quantities
$\rho_{\phi}$ and $p_\phi$ are defined through the symmetric energy-momentum tensor
\begin{equation}
T^{(\phi)}_{\mu\nu} = \frac{-2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\,{\cal{L}}_{\phi}\right)}{\delta g^{\mu\nu}}
\end{equation}
A straightforward calculation gives
\begin{equation}
T^{(\phi)}_{\mu\nu} = g_{\mu\nu}{\cal{L}}_{\phi} + \left(-1 + \frac{2X}{M^4}\right)\partial _\mu\phi\partial _\nu\phi
\end{equation}
Assuming a perfect fluid model we identify
\begin{eqnarray}
\rho_{\phi} &=& -X + \frac{3X^{2}}{M^{4}} + V\left( \phi \right)
\label{density}\\
p_{\phi} &=& {\cal{L}}_{\phi} = -X + \frac{X^{2}}{M^{4}} - V\left( \phi \right)
\label{pressure}
\end{eqnarray}
The equation of motion for the scalar field $\phi$ can be derived from the action (\ref{ghost}). Due to the isotropy of the FLRW universe the scalar field is a function of time only. Consequently, its equation of motion reduces to
\begin{equation}
\left(1 - \frac{3 \dot{\phi}^{2}}{M^{4}}\right)\ddot{\phi} + 3H\left(1 - \frac{\dot{\phi}^{2}}{M^{4}}\right)\dot{\phi}- \frac{dV}{d \phi} = 0. \label{eqm}
\end{equation}
As is well known the same equation of motion follows from the conservation of $T_{\mu\nu}$. Indeed under isotropy the equations (\ref{density}) and (\ref{pressure}) reduce to
\begin{eqnarray}
\rho_{\phi} &=& - \frac{1}{2}\dot{\phi}^{2} + \frac{3\dot{\phi}^{4}}{4 M^{4}} + V\left( \phi \right)
\label{rhophi}\\
p_{\phi} &=& - \frac{1}{2}\dot{\phi}^{2} + \frac{\dot{\phi}^{4}}{4 M ^{4}} - V\left( \phi \right)
\label{pphi}
\end{eqnarray}
From the conservation condition $\nabla_\mu T^{(\phi)\mu\nu} = 0$ we get
\begin{equation}
\dot{\rho}_\phi+3H(\rho_\phi+p_\phi)=0, \label{rhodot}
\end{equation}
which, written equivalently in field terms, gives equation (\ref{eqm}).
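The equivalence between (\ref{rhodot}) and (\ref{eqm}) is in fact an algebraic identity: $\dot{\rho}_\phi+3H(\rho_\phi+p_\phi)=-\dot{\phi}\,E$, where $E$ denotes the left-hand side of (\ref{eqm}). The following Python sketch (ours, not part of the paper) checks this for arbitrary field data; \lstinline+M4+ stands for $M^4$.

```python
# Our algebraic spot-check (not part of the paper): equations (rhodot)
# and (eqm) carry the same content.  For arbitrary phi_dot, phi_ddot, H
# and dV/dphi one has  d(rho_phi)/dt + 3H(rho_phi + p_phi) = -phi_dot * E,
# where E is the left-hand side of (eqm), so both vanish together.

def conservation_minus_eom(pd, pdd, H, dV, M4):
    rho_dot = pd * pdd * (-1.0 + 3.0 * pd ** 2 / M4) + dV * pd  # d(rhophi)/dt
    flux = 3.0 * H * (-pd ** 2 + pd ** 4 / M4)                  # 3H(rho + p)
    E = ((1.0 - 3.0 * pd ** 2 / M4) * pdd
         + 3.0 * H * (1.0 - pd ** 2 / M4) * pd - dV)            # lhs of (eqm)
    return (rho_dot + flux) + pd * E        # identically zero

for args in [(0.4, -0.2, 0.6, 0.3, 1.0), (1.5, 0.9, -0.1, -2.0, 2.1)]:
    assert abs(conservation_minus_eom(*args)) < 1e-12
```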
To complete the set of differential equations (\ref{FR1}), (\ref{FR2}), (\ref{rhodot}) we include the equation for the evolution of matter density
\begin{eqnarray}
\dot{\rho}_m+3H(1+w_m)\rho_m=0, \label{rhomdot}
\end{eqnarray}
where $w_{m}=p_{m}/\rho_{m}$ is the matter equation of state parameter. The solution to equation (\ref{rhomdot}) can immediately be written down as
\begin{equation}
\frac{\rho_m}{\rho_{m0}} = \left[\frac{a\left(t_{0}\right)}{a\left(t\right)}\right]^{n}, \label{rhom}
\end{equation}
where $n = 3 (1 + w_m) $ and $\rho_{m0} \geq 0$ is the value of matter density at present time $t_0$. Now, the set of equations (\ref{FR1}), (\ref{FR2}), (\ref{rhodot}) and (\ref{rhomdot}) must give the dynamics of the scalar field under gravity in a self-consistent manner. In the next section we demonstrate that this consistency requirement constrains the self-interaction $V\left(\phi\right)$ in (\ref{ghost}).
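As a quick numerical illustration (our sketch, not from the paper), the solution (\ref{rhom}) can be recovered by integrating (\ref{rhomdot}) directly; the toy scale factor $a(t)=t^{2/3}$ used below is an arbitrary choice made only to supply an $H(t)$.

```python
# Our numerical sketch (not from the paper): the continuity equation
# (rhomdot), d(rho_m)/dt = -3 H (1 + w_m) rho_m, is solved by
# rho_m(a) = rho_m0 (a0/a)^n with n = 3 (1 + w_m).  The toy
# matter-dominated scale factor a(t) = t^(2/3) is used purely for
# illustration.

def integrate_rho(w_m, t0=1.0, t1=2.0, steps=10000):
    n = 3.0 * (1.0 + w_m)
    a = lambda t: t ** (2.0 / 3.0)
    H = lambda t: 2.0 / (3.0 * t)               # H = adot/a for a = t^(2/3)
    rhs = lambda t, r: -3.0 * H(t) * (1.0 + w_m) * r
    dt = (t1 - t0) / steps
    t, rho = t0, 1.0                            # normalise rho_m0 = 1
    for _ in range(steps):                      # classic RK4 step
        k1 = rhs(t, rho)
        k2 = rhs(t + dt / 2, rho + dt * k1 / 2)
        k3 = rhs(t + dt / 2, rho + dt * k2 / 2)
        k4 = rhs(t + dt, rho + dt * k3)
        rho += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return rho, (a(t0) / a(t1)) ** n            # numeric vs closed form

num, exact = integrate_rho(w_m=0.0)             # dust: n = 3, exact = 1/4
assert abs(num - exact) < 1e-9
```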
\section{Restriction on the self-interaction of the scalar field}
We start by constructing two independent combinations of the pressure and energy density of the dark energy sector in terms of the Hubble parameter $H$, matter energy density $\rho_{m}$, matter equation of state parameter $w_{m}$ and curvature parameter $K$ using (\ref{FR1}), (\ref{FR2}) and (\ref{rhomdot})
\begin{eqnarray}
\rho_{\phi} + p_{\phi} =A &=& -\frac{2 \dot{H}}{k^{2}}-\frac{n}{3} \rho_{m} + \frac{2K}{k^{2}a^{2}} \label{A} \\
\rho_{\phi} + 3p_{\phi} =B &=& -\frac{6 \ddot{a}}{k^{2}a} - \left(n-2\right) \rho_{m} \label{B}
\end{eqnarray}
Using equations (\ref{rhophi}) and (\ref{pphi}), we rewrite these combinations in terms of the ghost condensate field derivative $\dot{\phi}$ and potential $V\left( \phi \right)$:
\begin{eqnarray}
\rho_{\phi} + p_{\phi} =A &=& - \dot{\phi}^{2} + \frac{\dot{\phi}^{4}}{M ^{4}} \label{A1} \\
\rho_{\phi} + 3p_{\phi} =B &=& - 2 \dot{\phi}^{2} + \frac{3\dot{\phi}^{4}}{2M ^{4}} - 2 V\left( \phi \right) \label{B1}
\end{eqnarray}
Inverting these equations to write $\dot{\phi}^{2}$ and $\dot{\phi}^{4}$ in terms of $A$, $B$ and $V\left( \phi \right)$ and utilizing the algebraic identity $\left(\dot{\phi}^{2}\right)^{2} = \dot{\phi}^{4}$ we obtain the following quadratic equation
\begin{eqnarray}
V^{2}\left(\phi\right) && + \left(B - \frac{3A}{2} + \frac{M^{4}}{4}\right) V\left(\phi\right) \nonumber \\
&& + \frac{\left(3A - 2B\right)^{2} - 4 M^{4}\left(A - B/2\right)}{16} = 0
\label{quad}
\end{eqnarray}
This is the restriction on the choice of potential in the ghost-condensate model indicated earlier. Note that the mere fact that $V\left(\phi\right)$ has to satisfy a restriction of this form implies that care must be taken in asserting the absence of the self-interaction term. We discuss this and other issues related to equation (\ref{quad}) below. Meanwhile, observe that in the limit $M^{4} \to \infty$ equation (\ref{quad}) reduces to
\begin{eqnarray}
V\left(\phi\right) = A - B/2
\label{limit}
\end{eqnarray}
Substituting for $A$ and $B$ in (\ref{limit}) and simplifying, we get
\begin{eqnarray}
V\left(\phi\right) = \frac{1}{k^{2}}\left(3H^{2} + \dot{H} + \frac{2K}{a^{2}}\right) + \frac{n-6}{6} \rho_{m}
\label{limit1}
\end{eqnarray}
where equations (\ref{A}) and (\ref{B}) have been used. This reproduces the result for the potential in the phantom model \cite{gumjudpai}.
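Both the quadratic constraint (\ref{quad}) and its $M^{4}\to\infty$ limit can be spot-checked numerically. The following sketch is ours, not the paper's code; \lstinline+M4+ denotes $M^4$, and the sample values are arbitrary.

```python
import math

# Our numerical spot-check of the quadratic constraint (quad) and of its
# M^4 -> infinity limit V = A - B/2.  M4 denotes M^4 throughout.

def AB(phi_dot, V, M4):
    """A = rho_phi + p_phi and B = rho_phi + 3 p_phi from (A1), (B1)."""
    A = -phi_dot ** 2 + phi_dot ** 4 / M4
    B = -2 * phi_dot ** 2 + 1.5 * phi_dot ** 4 / M4 - 2 * V
    return A, B

def quad_residual(A, B, V, M4):
    return (V ** 2 + (B - 1.5 * A + M4 / 4) * V
            + ((3 * A - 2 * B) ** 2 - 4 * M4 * (A - B / 2)) / 16)

# 1. Any (phi_dot, V, M4) fed through (A1), (B1) satisfies (quad) exactly.
for phi_dot, V, M4 in [(0.3, 0.7, 1.2), (1.5, -0.4, 0.8), (2.0, 5.0, 5.1)]:
    A, B = AB(phi_dot, V, M4)
    assert abs(quad_residual(A, B, V, M4)) < 1e-9

# 2. For fixed A, B the "+" root of (quad) tends to A - B/2 as M^4 grows,
#    reproducing the phantom-limit potential (limit).
def v_plus(A, B, M4):
    return ((3 * A - 2 * B) / 4 - M4 / 8
            + math.sqrt(M4 / 16 * (M4 / 4 + A)))

A, B = 0.8, -1.3
for M4 in (1e4, 1e6):
    assert abs(v_plus(A, B, M4) - (A - B / 2)) < 50.0 / M4
```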
Coming back to equation (\ref{quad}) let us first investigate the possibility of a vanishing self-interaction. Substituting $V\left(\phi\right) = 0$ we get
\begin{eqnarray}
\frac{\left(3A - 2B\right)^{2}}{4 M^{4}} = \left(A - B/2\right)
\label{zero}
\end{eqnarray}
Since the left hand side is positive definite we immediately get the condition
\begin{eqnarray}
\left(A - \frac{B}{2}\right) = \frac{1}{k^{2}}\left(3 H^{2} + \dot{H} + \frac{2K}{a^{2}}\right) + \frac{n-6}{6} \rho_{m} \ge 0 \nonumber \\
\label{zero1}
\end{eqnarray}
Assuming matter in the form of dust ($n = 3 $) in a universe with flat geometry ($K = 0$), this can be further simplified to
\begin{eqnarray}
3 H^{2} + \dot{H} \ge \frac{k^{2}}{2} \rho_{m} > 0
\label{zero2}
\end{eqnarray}
Using $\dot{H} + H^{2} = \ddot{a}/a$ we reexpress this as
\begin{eqnarray}
H^{2} > - \frac{1}{2} \frac{\ddot{a}}{a}
\label{zero3}
\end{eqnarray}
This condition appears to be too restrictive; in fact, in the decelerating phase ($\ddot{a} < 0 $) it imposes a definite relation between $\dot{a}$ and $\ddot{a}$. Remembering that the Friedmann equations are of second order in time, there is no a priori reason that such a constraint holds. Moreover, the ekpyrotic \cite{ekpyrotic1, ekpyrotic2, ekpyrotic3} and other bouncing theories \cite{bounce1, bounce2, bounce3, bounce4} of the early universe require that spacetime ``bounce'' from a contracting to an expanding phase, perhaps even oscillating cyclically. Clearly, during the switch-over from the expanding to the contracting phase, $\dot{a} = 0$ but $\ddot{a} < 0$, and thus the condition (\ref{zero3}) is violated.
The analysis detailed above demonstrates that in order to apply the ghost condensate model to the general evolution of the universe a certain self-interaction should always be included. At this point one may wonder whether the constraining equation (\ref{quad}) on $ V\left(\phi\right) $ allows a real solution at all. Solving (\ref{quad}) we get
\begin{eqnarray}
V\left(\phi\right) = \left(\frac{3A - 2B}{4} -\frac{M^{4}}{8}\right) \pm \left\{\frac{M^{4}}{16}\left(\frac{M^{4}}{4} +A\right)\right\}^{\frac{1}{2}}
\label{V}
\end{eqnarray}
The reality condition is thus
\begin{eqnarray}
\left(\frac{M^{4}}{4} +A \right) \ge 0
\label{reality}
\end{eqnarray}
That this condition is satisfied in general can be established explicitly if we substitute for $A$ from equation (\ref{A1}) which gives
\begin{eqnarray}
\left(\frac{M^{4}}{4} +A \right) = \frac{1}{M^{4}}\left(\dot{\phi}^{2} - \frac{M^{4}}{2}\right)^{2} \ge 0
\label{reality_check}
\end{eqnarray}
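The identity behind (\ref{reality_check}) is easy to confirm numerically; the following sketch is ours, with \lstinline+M4+ again denoting $M^4$ and arbitrary sample values.

```python
# Our quick check of the identity behind (reality_check):
#   M^4/4 + A = (phi_dot^2 - M^4/2)^2 / M^4 >= 0,
# with A = -phi_dot^2 + phi_dot^4 / M^4 taken from (A1).

def lhs(phi_dot, M4):
    A = -phi_dot ** 2 + phi_dot ** 4 / M4
    return M4 / 4.0 + A

def rhs(phi_dot, M4):
    return (phi_dot ** 2 - M4 / 2.0) ** 2 / M4

for phi_dot, M4 in [(0.0, 1.0), (0.7, 2.9), (2.5, 0.41)]:
    assert abs(lhs(phi_dot, M4) - rhs(phi_dot, M4)) < 1e-9
    assert lhs(phi_dot, M4) >= 0.0
```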
This completes our argument in favour of including a self-interaction potential in ghost-condensate model.
\section{Conclusion}
We have considered the ghost-condensate model of dark energy with a self-interaction potential in a general FLRW universe with curvature $K$. The combined dynamics of dark energy and gravity leads to coupled differential equations involving the universal scale factor $a\left(t\right)$ and the scalar field $\phi$. The standard barotropic matter equation of state is assumed. Two independent combinations of the pressure and energy density of the dark energy are expressed in terms of the observable quantities from the normal matter and gravity sector. These combinations are then used to impose a consistency condition which leads to a quadratic equation for the self-interaction $V\left(\phi\right)$. This equation is shown to admit real roots.
Also, in the appropriate limit it leads to the phantom model potential \cite{gumjudpai}.
A very interesting consequence arises when we examine the plausibility of the choice of zero self-interaction. Using the quadratic equation satisfied by the self-interaction it has been demonstrated that this choice is too restrictive for the general evolution of the scale factor. In fact, the bouncing universe scenario disallows such a choice. Our analysis thus establishes that in the class of ghost-condensate models for general evolution of the universe a self-interaction of the dark energy must be included.
\section*{Acknowledgement} The authors would like to thank IUCAA, Pune, where part of the work was done.
\section{\label{sec:level1}Introduction}
The field of metaphotonics
\cite{shalaev_np07,veselago06,pendry96,pendry99,PhysRevB.62.10696,Gralak:00,luoprb02,casseapl07,gmachl07}, or the merging of
metamaterials \cite{veselago68} and photonics \cite{Joannopoulos95,yablonovitch87,sajeevjohn87}, has opened doors to a plethora
of unusual electromagnetic properties, such as negative refraction \cite{RAShelby04062001,parimi03}, cloaking
\cite{D.Schurig11102006} and optical data storage \cite{kosmas_nat_07}, that cannot be obtained with naturally occurring
materials. The holy grail of manufacturing these artificial photonic metamaterial structures is to manipulate light at the nanoscale for optical information processing and high-resolution imaging. In this paper we demonstrate how a binary-staircase optical element can be tailor-made to have an effective negative refractive index, thus providing an alternative approach to negative-index optical elements.
Here we consider a binary-staircase type of lens \cite{alda05}, which consists of a
sequence of zones configured as flat parallel steps each having an
annular shape. The binary-staircase lens is a plano-concave lens.
Focusing by plano-concave lenses \cite{enoch03,parazzoli04} was realized in 2D and 1D photonic crystals \cite{vodo:201108,vodo:084104}. Proof-of-concept experiments which
demonstrate negative refraction in a plano-concave grating lens have
been realized earlier by our group in the microwave regime
\cite{luopt07}. However the plano-concave lens used in the microwave
range consisted of an assembly of commercial alumina bars, placed in
a parallel-plate waveguide, which are not suitable for integration
in optoelectronic circuits.
Geometrical parameters of the binary-staircase lens were determined
by considering the transverse size of the lens, the focal length,
wavelength of the incoming radiation, the index of the material used
to fabricate the lens itself and mainly the surface periodicity. The
actual lens has been nanofabricated by a combination of electron
beam lithography and reactive ion etching in an InP/InGaAsP
heterostructure. Subsequently, the focusing properties of the device
were experimentally verified using a scanning probe optical
technique. Three-dimensional (3D) finite-difference time-domain
(FDTD) simulations have been used to further study the beam
propagation in the lens. The FDTD simulations were in excellent
agreement with the experimental results.
We use a surface modification scheme to alter the index of
refraction of the medium \cite{luopt07,huang:013824}.
An incident wave impinging on a smooth surface with incident angle
larger than the critical angle will be totally reflected.
However a proper surface grating will allow the wave to be transmitted.
This is equivalent to giving the incident wave a transverse momentum kick.
In the case that the grating period is much smaller than the
incident wavelength, an effective refractive index $n_{\mathrm{eff}}$ can be
used to describe the refraction at the modified
surface. For a binary-staircase lens with a plano-concave shape
as shown in Fig. \ref{fig:sketch}, this effective index
is related to the bulk refractive index of the medium $n_{\mathrm{med}}$ by
\cite{luopt07}
\begin{equation}\label{eq:fresnel_zones}
n_{\mathrm{eff}} = n_{\mathrm{med}} -\dfrac{\lambda}{a}
\end{equation}
where `$a$' is a fixed step length along the optical axis (or surface periodicity) and $\lambda$ is the free space wavelength
(with $a<\lambda$). The number of steps $N_\mathrm{steps}$ or zones is then $R/a$, where $2R$ is the transverse size of the
binary-staircase lens. The focal length $f$ is calculated by using the formula $f=R/(1-n_{\mathrm{eff}})$. To obtain a good
focus, $a\sim\lambda/n_{\mathrm{med}}$ (Abbe's diffraction limit). In the present case, $\lambda=1550$ nm and $a$ was chosen as
450 nm with $n_{\mathrm{med}}$ $=$ 3.231 for the transverse electric (TE) modes and $n_{\mathrm{med}}$
$=$ 3.216 for the transverse magnetic (TM) modes. $a$ has been chosen as an arbitrary value in
the vicinity of $\lambda/n_{\mathrm{med}}$. $N_\mathrm{steps}$ $=$ 11, so that $2R$ reads 10 $\mu$m. Thus $n_\mathrm{eff}$ is
$-$0.2134 and $-$0.2284 for TE and TM modes, respectively.
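The design numbers quoted above follow directly from Eq.~(\ref{eq:fresnel_zones}) and $f=R/(1-n_{\mathrm{eff}})$; the short Python sketch below (ours, not the authors' code) reproduces them from the stated parameters.

```python
# Back-of-the-envelope check (our sketch) of the grating-lens design
# formulas: n_eff = n_med - lambda/a and f = R / (1 - n_eff).

lam = 1550e-9           # free-space wavelength [m]
a = 450e-9              # surface periodicity (step length) [m]
R = 5e-6                # half the transverse size: 2R = 10 um

for pol, n_med in (("TE", 3.231), ("TM", 3.216)):
    n_eff = n_med - lam / a            # eq. (fresnel_zones)
    f = R / (1.0 - n_eff)              # focal length of the plano-concave lens
    print(f"{pol}: n_eff = {n_eff:.4f}, f = {f * 1e6:.2f} um")

# The effective index is negative, so the plano-concave profile focuses.
assert 3.231 - lam / a < 0
```

For the TE slab index this gives $n_{\mathrm{eff}}\approx-0.213$ and a focal length of about $4.1$~$\mu$m.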
\begin{figure}[htbp]
\centering{\includegraphics[width=5cm]{kino_lens_fig1.eps}}
\caption{(Color online) Sketch of a plano-concave grating lens.
The lens is made of a medium with $n_{\mathrm{med}}$.
The horizontal step size $a$ is smaller than the free space wavelength $\lambda$
with $a\sim \lambda/n_{\mathrm{med}}$.}
\label{fig:sketch}
\end{figure}
The fabrication platform consisted of a 400 nm InGaAsP core layer on
an InP substrate with a 200 nm InP top cladding layer. The waves are
trapped and propagate within the core layer plane with an effective
index of 3.231 (TE modes) and 3.216 (TM modes). The final
structure for optical measurements consisted of three sub units
(shown in Fig. \ref{fig:overallview}(a)): (i) a 0.5 mm long
waveguide, laterally tapered, having 5 $\mu$m wide trenches on
each side; The taper starts at a distance of 100 $\mu$m from the
edge of the waveguide, with a core width varying from 5 $\mu$m to
10 $\mu$m. (ii) Binary-staircase plano-concave lens with 10 zones
on the optical axis, having a step height of 450 nm and a transverse
size of 10 $\mu$m, located at a distance of 5 $\mu$m from the
tapered end of the waveguide (shown in Fig.
\ref{fig:overallview}(b)). (iii) And finally an open cavity
(semi-circle juxtaposed to a 20 $\mu$m $\times$ 20 $\mu$m
square) at the end of the binary-staircase lens.
\begin{figure}[htbp]
\centering{\includegraphics[width=7.5cm]{kino_lens_fig2.eps}}
\caption{(a) Bird's eye view of the tapered waveguide and the binary-staircase lens,
(b) close-up view of the binary-staircase lens. }
\label{fig:overallview}
\end{figure}
An analogous structure, having the same geometrical dimensions but bearing no steps (or zones), was fabricated. The purpose of this design was to show that the periodicity of the steps is the decisive structural element in realizing a negative-index prototype. The structures were written using electron
beam lithography on polymethylmethacrylate (PMMA) resist. Pattern
transfers to a silicon nitride working mask and subsequently to the
InP/InGaAsP layers were achieved with a reactive ion etching (RIE)
method.
In the characterization experiment, a continuous wave tunable
semiconductor laser (1550 nm--1580 nm) was used as the input light
source. The laser light was coupled into the cleaved end of the
input waveguides using a monomode lensed fiber (working distance
$\approx$ 14$\mu$m and FWHM $\approx$ 2.5 $\mu$m in air) mounted
on a five-axis positioning stage. An infrared (IR) camera (Hamamatsu
Model C2741) connected to a microscope port aids the initial
alignment to optimize the IR light coupling from the optical fiber
to the waveguide. In the FDTD simulation, a 10 $\mu$m wide plane-parallel Gaussian beam was chosen as the incident field for the grating lens. In the actual sample, the 5 $\mu$m wide input facet of the
waveguide was inversely tapered to 10 $\mu$m width (see Fig.
\ref{fig:overallview}(a)) so that the propagating Gaussian beam is
expanded sufficiently inside the guiding channel before reaching the
device end. The planar wavefront after emerging from the
binary-staircase lens is expected to focus in the air
cavity.
A tapered fiber probe (250 nm aperture diameter) metallized with a thin chromium and gold layer was raster scanned just above the
sample surface. The output end of the fiber probe was connected to a nitrogen-cooled germanium detector (North Coast Scientific Corp. Model $\#$ EO-817L). A lock-in amplifier was utilized to optimize the detection scheme. Scanning the
fiber tip at a constant height about 500 nm above the sample surface allowed us to probe the optical intensity distribution over
a grid of $256\times256$ points spanning $15\times15$ $\mu$m$^2$ area. The reconstructed image is shown in Fig.
\ref{fig:NSOM}(a).
\begin{figure}[htbp]
\centering{\includegraphics[width=8.3cm]{kino_lens_fig3.eps}}
\caption{(Color online) Optical images, from an optical scanning microscope,
obtained at $\lambda$ = 1550 nm around the focal point of (a) the binary-staircase lens, (b)
and the analogous structure with no zones/steps (semi-circle with smooth
walls). Note that focusing is observed only with the binary-staircase lens.}
\label{fig:NSOM}
\end{figure}
Intensity distribution near the cavity center clearly shows the
light focusing from the binary-staircase lens. Identical focusing
fingerprints were observed when the experiment was repeated over a
range of wavelengths varying from 1510 nm to 1580 nm. Another
controlled experiment was performed where the binary-staircase lens
was replaced by an analogous structure (having the same geometrical
features) with no steps. In the latter case, as shown in Fig.
\ref{fig:NSOM}(b), no beam focusing was observed. Nevertheless we
can distinguish a bright spot near the device's edge, which is
attributed to a sudden beam divergence as it propagates into open
space from initial confinement in the InGaAsP core waveguide layer
(diffraction).
The 3D FDTD simulations were performed using
perfectly matched layer boundary conditions that minimize
reflections at the edges. The chosen input field excitation for the
FDTD simulation was a TE polarized Gaussian beam which closely
resembles the beam shape of the fiber source in actual experiment.
The energy density of the propagating H-field was mapped at
different plane heights. Figures \ref{fig:plc3d}(a) and \ref{fig:plc3d}(b) show the simulated H-field density of the binary-staircase lens and of the analogous structure, respectively, at about 800 nm above the center of the core layer.
\begin{figure
\center{\includegraphics[width=8.7cm]{kino_lens_fig4.eps}} \caption{(Color online) 3D FDTD simulations of (a) the plano-concave
binary-staircase lens, and (b) the lens having the same geometrical dimensions as the binary-staircase one, but bearing no steps
(or zones).} \label{fig:plc3d}
\end{figure}
Traditional metamaterial structures are composed of arrays of split-ring resonators and metal wires. This type of metallic structure, which operates under resonance, becomes lossy at optical frequencies due to the inherent imaginary part of the
metal's permittivity. The purely dielectric system, such as the one mentioned in this letter, is free from these drawbacks and
thus has low intrinsic material loss, which is a clear-cut advantage for optical frequency operations. Extrinsic losses in the
binary-staircase dielectric structure itself arise solely from the imperfections in the fabrication (e.g. surface and sidewall
roughness).
We have designed a binary-staircase optical element
having an effective negative index of refraction, whereby the
surface periodicity of the structure acts as the tunable parameter
for controlling the sign change of the refractive index. The beam
propagation in the plano-concave lens was simulated using in-house
3D FDTD codes. Based on the design and
simulations, we have nano-engineered a prototype structure in an
InP/InGaAsP heterostructure tailored for the 1.55 $\mu$m
wavelength, where indium phosphide (InP) is a natural starting
fabrication platform for wholesale integration of passive and active
devices for a complete system-on-a-chip at this frequency.
Characterization of the prototype with a near-field scanning optical
microscope revealed that the plano-concave binary-staircase lens can
act as a convex lens and thereby focus plane waves. No focusing
is achieved if the zones are removed, reinforcing the fact that the
steps are the decisive structural elements. A notable aspect of our
work is the extension of electromagnetic properties (that are
theoretically available) of optical elements for possible
integration in optoelectronic circuits.
This work was supported by the Air Force Research Laboratories,
Hanscom through grant no. FA8718-06-C-0045 and the National Science
Foundation through grant no. PHY-0457002. This work was performed in
part at the Kostas Center for High-Rate Nanomanufacturing at
Northeastern University, and Center for Nanoscale Systems, a member
of the National Nanotechnology Infrastructure Network, under NSF
award no. ECS-0335765.
\section{Introduction}
Marine propellers are traditionally manufactured from Nickel Aluminium
Bronze (NAB) alloys or Manganese Nickel Aluminium Bronze (MAB) alloys.
However, recently the use of engineered materials, more specifically
laminated composites, to develop marine propellers has received considerable
attention among researchers and industry alike. This is driven by
the increasing demand for efficiency and for materials with high
strength-to-weight and stiffness-to-weight ratios. Some of the advantages
exhibited by composites over alloys are light weight, reduced corrosion,
reduced noise generation, lack of magnetic signature and shape adaptability.
Shape-adaptability is an interesting phenomenon from a mechanical
and optimization perspective.
Shape-adaptability refers to the capability of a composite propeller to deform,
without the involvement of external mechanisms, based on the flow conditions
and rotational speed, in order to achieve a higher efficiency than
rigid alloy propellers throughout its operating domain. Composite
propellers can potentially be custom tailored to enhance the performance
through shape-adaptability. This can be achieved by using the intrinsic
extension-shear, bend-twist and bend-extension coupling effects of
anisotropic composites. Ideal shape change at various flow conditions
is a result of composite layup optimizations such that the propeller
has an optimum bend-twist coupling performance.
Bend-twist coupling refers to the special characteristic of anisotropic
materials where out of plane bending moments can cause twisting strains.
With correct layup arrangements this effect can be optimized for a
certain application using layered composites. Various researchers
in the past have used flexibility and bend-twist coupling characteristics
of composites to design marine propellers that have the capability
of self-varying pitch (shape adaptable) based on out of plane bending
moments caused by the incoming flow \citep{LeeLin_2004,young2007,Liu2009,Mulcahy_Offshore,Motley_etal_2011,MulcahyRINA_2011}.
The approach taken by Lin and Lee \citep{LeeLin_2004,Linetal_2009,Linetal2004}
was to minimize the change of torque coefficient of the propeller
when moving from the design advance ratio to one other off-design
advance ratio. The rationale behind this strategy was to maintain the
torque, thrust and efficiency at their design values when moving
away from the design point. However, only one off-design point was
considered. The optimization processes used previously by Liu et al.
\citep{Liu2009}, Motley et al. \citep{Motley_etal_2011}, and Pluci\'{n}ski
et al. \citep{Plucinski2007} attempted to ensure that the ply configuration
was chosen such that the blade can achieve the maximum possible pitch
variation when moving from unloaded to loaded state. Essentially,
the optimization technique attempted to make the blade more flexible
while maintaining strain and shape limitations.
In the proposed Finite Element (FE) approach, the propeller blade
is assumed to be a plate and is idealized at the mid-plane of the
overall blade shape. Various structural theories proposed for evaluating the characteristics
of composite laminates under different loading situations were reviewed
by Noor and Burton (1989) \citep{noor1989}, Mallikarjuna and Kant
(1993) \citep{mallikarjunakant1993}, Kant and Swaminathan (2000)
\citep{kantswaminathan2000} and more recently by Khandan et al. \citep{khandannoroozi2012}.
A set of methods have emerged to address the shear locking in the
FEM. By incorporating the strain smoothing technique into the finite
element method (FEM), Liu et al. \citep{liudai2007} have formulated
a series of smoothed finite element methods (SFEM), named as cell-based
SFEM (CS-FEM) \citep{nguyenbordas2008,bordasnatarajan2010}, node-based
SFEM \citep{liunguyen2009}, edge-based SFEM \citep{liunguyen2009a},
face-based SFEM \citep{thoiliu2009} and \textgreek{a}-FEM \citep{liunguyen2008}.
Recently, an edge-based imbricate finite element method (EI-FEM),
which shares common features with the ES-FEM, was proposed in
\citep{Cazes2012}. As the SFEM can be recast within a Hellinger-Reissner
variational principle, suitable choices of the assumed strain/gradient
space provide stable solutions. Depending on the number and geometry
of the sub-cells used, a family of methods exhibiting a range
of properties is obtained. Further details can be found in other literature
\citep{nguyenbordas2008} and references therein. Nguyen-Xuan et al.
\citep{nguyenrabczuk2008} employed CS-FEM for Mindlin-Reissner plates.
The curvature at each point is obtained by a non-local approximation
via a smoothing function. From the numerical studies presented, it
was concluded that the CS-FEM technique is robust, computationally
inexpensive, free of locking and importantly insensitive to mesh distortions.
The SFEM was extended to various problems such as shells \citep{nguyenrabczuk2008},
heat transfer \citep{wuliu2010}, fracture mechanics \citep{nguyenliu2013}
and structural acoustics \citep{hecheng2011} among others. In \citep{bordasnatarajan2011},
CS-FEM has been combined with the extended FEM to address problems
involving discontinuities.
A framework to design laminated composite marine propellers with enhanced
performance by utilizing the bend-twist coupling characteristics is
proposed in this paper. The framework consists of the Cell-based Smoothed
Finite Element Method (CS-FEM) combined with a Genetic Algorithm (GA)
to optimize the layup configuration of laminated composites. The key
requirement for the optimization technique proposed here is to achieve
an efficiency curve for the composite propeller that is tangential
to all the efficiency curves in the vicinity of the design (cruise)
advance ratio of the vessel. In contrast to the approaches taken by
previous researchers \citep{Leeetal2005,Plucinski2007,Liu2009}, the
proposed method attempts to achieve accurate pitch angles derived
from propeller efficiency curves based on many off-design points.
It also gives the freedom to assign weightages to off-design points
based on the probability that the blade operates at each off-design
point. The approach was presented by means of a simple plate optimization
study by Herath and Prusty \citep{Herath_ACAM7}. It was further enhanced
to an Iso-Geometric (NURBS) FEM based optimization technique by Herath
et al. \citep{Herath2013}.
\section{Theoretical Development}
The shape-adaptive technique presented in this paper predominantly
relies on bend-twist coupling characteristics of laminated composites
to change the pitch of the blade based on bending caused by fluid
loadings at different flow speeds. Bend-twist coupling characteristics
can be demonstrated using the standard stiffness matrix system for
composite materials (eq. \ref{eq:ABBD}). Here, the $\left[A\right]$,
$\left[B\right]$ and $\left[D\right]$ matrices have the usual laminate stiffness
definitions.
\begin{equation}
\begin{array}{c}
\left[\begin{array}{c}
\mathbf{N}\\
\mathbf{M}
\end{array}\right]=\left[\begin{array}{cc}
\left[A\right] & \left[B\right]\\
\left[B\right] & \left[D\right]
\end{array}\right]\left[\begin{array}{c}
\boldsymbol{\epsilon}\\
\boldsymbol{\kappa}
\end{array}\right]\;\textup{where}\\
\mathbf{N}=\left[\begin{array}{ccc}
N_{x} & N_{y} & N_{xy}\end{array}\right]^{T},\:\mathbf{M}=\left[\begin{array}{ccc}
M_{x} & M_{y} & M_{xy}\end{array}\right]^{T}\\
\boldsymbol{\epsilon}=\left[\begin{array}{ccc}
\epsilon_{x} & \epsilon_{y} & \epsilon_{xy}\end{array}\right]^{T},\:\boldsymbol{\kappa}=\left[\begin{array}{ccc}
\kappa_{x} & \kappa_{y} & \kappa_{xy}\end{array}\right]^{T}
\end{array}\label{eq:ABBD}
\end{equation}
Bend-twist coupling characteristics are governed by the coupling
terms ($D_{xs}$ and $D_{ys}$) in the $\left[D\right]$ matrix (eq.
\ref{eq:Dmat}), where $D_{ij}=\frac{1}{3}\sum_{k=1}^{n}Q_{ij}^{k}\left(\theta\right)\left(z_{k}^{3}-z_{k-1}^{3}\right)$
with $Q_{ij}\left(\theta\right)$ representing the in-plane stiffness
of a composite layer in the $xy$ directions. With $D_{xs},\: D_{ys}\neq0$
bending moments ($M_{xx}$ and $M_{yy}$) can cause twisting strains
$\left(\frac{\partial^{2}w}{\partial x\partial y}\right)$. The purpose
of an optimization scheme is to obtain the optimum fiber angles to
achieve the required response (pressure vs required pitch). The matrix
system gives the relationship for a simple laminate element. For a
plate-like structure the stiffness coefficients have to be used in
combination with plate theories such as Kirchhoff-Love (thin plate)
and Mindlin-Reissner (moderately thick plate) with appropriate boundary
conditions. Thus, it is essential to have an FEM technique that can
accurately determine the response of a complex blade shape.
\begin{equation}
\begin{array}{c}
\left[\begin{array}{c}
M_{xx}\\
M_{yy}\\
M_{xy}
\end{array}\right]\end{array}=\left[\begin{array}{ccc}
D_{xx} & D_{xy} & D_{xs}\\
D_{xy} & D_{yy} & D_{ys}\\
D_{xs} & D_{ys} & D_{ss}
\end{array}\right]\left[\begin{array}{c}
\kappa_{xx}\\
\kappa_{yy}\\
\kappa_{xy}
\end{array}\right]=\left[\begin{array}{ccc}
D_{xx} & D_{xy} & D_{xs}\\
D_{xy} & D_{yy} & D_{ys}\\
D_{xs} & D_{ys} & D_{ss}
\end{array}\right]\left[\begin{array}{c}
-\frac{\partial^{2}w}{\partial x^{2}}\\
-\frac{\partial^{2}w}{\partial y^{2}}\\
-2\frac{\partial^{2}w}{\partial x\partial y}
\end{array}\right]\label{eq:Dmat}
\end{equation}
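To make the coupling terms concrete, the following Python sketch (illustrative only; the in-house solver of this work was written in Matlab) assembles the $\left[D\right]$ matrix from classical laminate theory using the AS4/3501-6 properties of Table \ref{tab:Material-Properties}, and shows that an angle-ply layup produces nonzero $D_{xs}$, $D_{ys}$ while a cross-ply layup does not. The 8-ply and 4-ply layups are arbitrary examples, not layups from this paper.

```python
import numpy as np

# AS4/3501-6 ply properties from Table 3 (SI units)
E1, E2, G12, nu12, t = 126e9, 11e9, 6.6e9, 0.28, 125e-6

def Qbar(theta_deg):
    """Transformed in-plane stiffness Q(theta) of one ply (plane stress)."""
    nu21 = nu12 * E2 / E1
    den = 1.0 - nu12 * nu21
    Q11, Q22, Q12, Q66 = E1 / den, E2 / den, nu12 * E2 / den, G12
    m, n = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Qb11 = Q11*m**4 + 2*(Q12 + 2*Q66)*m**2*n**2 + Q22*n**4
    Qb22 = Q11*n**4 + 2*(Q12 + 2*Q66)*m**2*n**2 + Q22*m**4
    Qb12 = (Q11 + Q22 - 4*Q66)*m**2*n**2 + Q12*(m**4 + n**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*m**3*n + (Q12 - Q22 + 2*Q66)*m*n**3
    Qb26 = (Q11 - Q12 - 2*Q66)*m*n**3 + (Q12 - Q22 + 2*Q66)*m**3*n
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*m**2*n**2 + Q66*(m**4 + n**4)
    return np.array([[Qb11, Qb12, Qb16],
                     [Qb12, Qb22, Qb26],
                     [Qb16, Qb26, Qb66]])

def D_matrix(layup):
    """Bending stiffness D_ij = (1/3) sum_k Qbar_ij(theta_k) (z_k^3 - z_{k-1}^3)."""
    z = np.linspace(-len(layup) * t / 2, len(layup) * t / 2, len(layup) + 1)
    return sum(Qbar(th) * (z[k + 1]**3 - z[k]**3)
               for k, th in enumerate(layup)) / 3.0

# Angle-ply laminate: nonzero D_xs, D_ys (bend-twist coupling);
# cross-ply laminate: coupling terms vanish.
D_angle = D_matrix([40.0] * 8)
D_cross = D_matrix([0.0, 90.0, 90.0, 0.0])
```

The coupling ratio $D_{xs}/D_{xx}$ of the angle-ply case is what the layup optimization implicitly tunes ply by ply.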
\subsection{Genetic Algorithm based optimization}
The key idea behind the proposed optimization scheme for a marine
propeller is to construct a \textquotedblleft{}difference-scheme\textquotedblright{}
relative to the operating point (cruise advance ratio) in terms of
pressure and twist. The optimum alloy propeller geometry must be chosen
for the application before it is further developed as a composite
propeller. The process can be summarized as:
\begin{enumerate}
\item Evaluate pressure maps on the propeller blade surface for various
speeds including and around the operating/cruise speed.
\item Construct pressure difference functions with respect to the operating
condition for every chosen point around the operating point.
\item Assess the pitch changes required relative to the operating point
for the chosen points to maintain an optimum efficiency. Pitch differences
can be assessed using standard propeller efficiency curves for a propeller
series, which the alloy propeller is based upon.
\item The objective function of optimization will attempt to minimize the
total difference (corresponding to the respective pressure difference)
between the optimum pitch that is required and the pitch that was
obtained by the chosen ply configuration (Eq. \ref{eq:ObjFunc}).
Weightages $\left(w_{i}\right)$ can be assigned to each off-design
point based on the likelihood of the propeller operating at that
point.
\begin{equation}
\begin{array}{c}
\underset{\boldsymbol{\theta}}{\min}\; f\left(\boldsymbol{\theta}\right)=\frac{\sum_{i=1}^{n}w_{i}\left|\Delta\phi_{Optimum}^{i}\left(\Delta P\right)-\Delta\phi_{GA}^{i}\left(\boldsymbol{\theta},\Delta P\right)\right|}{\sum_{i=1}^{n}w_{i}}\\
\textup{s.t. failure criteria and strain limitations}
\end{array}\label{eq:ObjFunc}
\end{equation}
\end{enumerate}
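The weighted objective of Eq. \ref{eq:ObjFunc} can be sketched as follows. The \texttt{toy\_twist} surrogate and its constants are hypothetical stand-ins for the CS-FEM tip-twist evaluation, used only to make the sketch self-contained; the off-design cases are taken from Table \ref{tab:Pressure-variations}.

```python
import numpy as np

def objective(theta, cases, weights, tip_twist):
    """Weighted mean absolute pitch error:
    f(theta) = sum_i w_i |dphi_opt_i - dphi_FE_i(theta, dP_i)| / sum_i w_i."""
    err = sum(w * abs(dphi_opt - tip_twist(theta, dP))
              for (dP, dphi_opt), w in zip(cases, weights))
    return err / sum(weights)

# Off-design points (dP in kPa, required tip pitch change in deg) from Table 2
cases = [(-70.0, -3.44), (-45.0, -1.70), (20.0, 1.66), (50.0, 3.30)]
weights = [1.0, 1.0, 1.0, 1.0]   # equal likelihood of each off-design point

def toy_twist(theta, dP):
    """Hypothetical surrogate: tip twist proportional to the load and to an
    average bend-twist coupling measure of the layup (NOT the real FE model)."""
    coupling = np.mean([np.sin(2.0 * np.radians(a)) for a in theta])
    return 0.07 * coupling * dP

# A coupled layup should track the required pitch better than a 0-deg layup.
f_coupled = objective([45.0] * 20, cases, weights, toy_twist)
f_uncoupled = objective([0.0] * 20, cases, weights, toy_twist)
```

In the actual framework, \texttt{toy\_twist} is replaced by a CS-DSG3 solve per pressure-difference case, and the GA minimizes this scalar subject to failure-criteria and strain constraints.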
The optimization technique must be capable of handling non-linear
objective functions, non-linear constraints and both discrete and
continuous variables. Thus, the Genetic Algorithm (GA) was chosen
as it can satisfy all these requirements. The GA has been used by
several authors \citep{Kameyamaetal2007,Leeetal2005,Linetal2004,Plucinski2007,SoremekunGurdal2001}
in composite ply optimization tasks proving its attractiveness and
credibility.
The process of GA involves applying mutations to the ply angle configuration
and evaluating whether the blade can achieve the required angle at
the tip. This gives rise to the requirement of having an accurate
means of calculating deflections and rotations of the blade structure
for an applied loading. Thus, an in-house FE code based on the first
order shear theory using Cell Based Smoothed FEM was developed for
propeller blade shapes and coupled with the GA. Figure \ref{fig:FEM-coupled-with}
shows a summary of the optimization process coupled with FEM. Both
the GA and the FE solver were coded in the commercial numerical processing
software Matlab\texttrademark{}. Although it is possible to couple
the GA with an existing commercial FEM solver as attempted by several
authors in similar research \citep{LeeLin_2004,Leeetal2005,Motley_etal_2011,MulcahyRINA_2011,Plucinski2007},
a fully in-house coupled solver is seen as a future-proof approach.
This is due to the inherent freedom the user has within such a solver
and the capability for improvement and further streamlining in the future.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{Figures/Fig1_FE_coupled_GA_Flow}
\par\end{centering}
\caption{\label{fig:FEM-coupled-with}FEM coupled with GA flow}
\end{figure}
Furthermore, a composite propeller optimized for pitch variation cannot
be manufactured at its optimum shape required at the cruise speed.
The blade has to be manufactured with a pre-deformation such that
it reaches the optimum shape at its cruise condition. Thus, the second
stage of the design process involves an iterative process to evaluate
the unloaded shape. The proposed methodology is iterative as explained
by an example in Section \ref{sub:Unloaded-shape-calculation}. A
similar methodology was also used by Mulcahy, et al. \citep{MulcahyRINA_2011}
and Pluci\'{n}ski, et al. \citep{Plucinski2007}.
\subsection{Cell based smoothed finite element method with discrete shear gap
technique}
\label{csdsg3}In this study, the propeller blade is approximated
by a hypothetical plate at the mid-plane of the blade. A three-noded
triangular element with five degrees of freedom (dofs) $\boldsymbol{\delta}=\left\{ u,v,w,\theta_{x},\theta_{y}\right\} $
is employed to discretise the plate domain. The displacement is approximated
by
\begin{equation}
\mathbf{u}^{h}=\sum_{I}N_{I}\boldsymbol{\delta}_{I}
\end{equation}
where $\boldsymbol{\delta}_{I}$ are the nodal dofs and $N_{I}$ are
the standard finite element shape functions given by
\begin{equation}
N=\left[1-\xi-\eta,\;\;\eta,\;\;\xi\right]
\end{equation}
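As a minimal sanity check on this interpolation, the shape functions form a partition of unity and recover nodal values at the corners of the reference triangle; a short sketch:

```python
import numpy as np

def shape_T3(xi, eta):
    """Linear shape functions of the three-noded triangle, in the order of N above."""
    return np.array([1.0 - xi - eta, eta, xi])

def interp(xi, eta, nodal_dofs):
    """u^h = sum_I N_I delta_I, with nodal dof vectors stacked row-wise."""
    return shape_T3(xi, eta) @ np.asarray(nodal_dofs)
```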
In the CS-DSG3~\cite{thoivan2012}, each triangular element is divided into three sub-triangles.
The displacement vector at the center node is assumed to be the simple
average of the three displacement vectors of the three field nodes.
In each sub-triangle, the stabilized DSG3 is used to compute the strains
and also to avoid the transverse shear locking. Then the strain smoothing
technique on the whole triangular element is used to smooth the strains
on the three sub-triangles.
\begin{center}
\begin{figure}[htpb]
\begin{centering}
\includegraphics[scale=0.5]{Figures/Fig2_Subtriangles}
\par\end{centering}
\caption{A triangular element is divided into three sub-triangles. $\Delta_{1},\Delta_{2}$
and $\Delta_{3}$ are the sub-triangles created by connecting the
central point $O$ with three field nodes.}
\label{fig:triEle}
\end{figure}
\par\end{center}
Consider a typical triangular element $\Omega_{e}$ as shown in Figure
\ref{fig:triEle}. This is first divided into three sub-triangles
$\Delta_{1},\Delta_{2}$ and $\Delta_{3}$ such that $\Omega_{e}=\bigcup\limits _{i=1}^{3}\Delta_{i}$.
The coordinates of the center point $\mathbf{x}_{o}=(x_{o},y_{o})$
are given by:
\begin{equation}
(x_{o},y_{o})=\frac{1}{3}\sum_{I=1}^{3}(x_{I},y_{I})
\end{equation}
The displacement vector of the center point is assumed to be a simple
average of the nodal displacements as
\begin{equation}
\boldsymbol{\delta}_{eO}=\frac{1}{3}\sum_{I=1}^{3}\boldsymbol{\delta}_{eI}\label{eqn:centerdefl}
\end{equation}
The constant membrane strains, the bending strains and the shear strains
for sub-triangle $\Delta_{1}$ are given by:
\begin{equation}
\begin{array}{c}
\boldsymbol{\epsilon}_{p}=\left[\begin{array}{ccc}
\mathbf{p}_{1}^{\Delta_{1}} & \mathbf{p}_{2}^{\Delta_{1}} & \mathbf{p}_{3}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{eO}\\
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}
\end{array}\right\} \\
\boldsymbol{\epsilon}_{b}=\left[\begin{array}{ccc}
\mathbf{b}_{1}^{\Delta_{1}} & \mathbf{b}_{2}^{\Delta_{1}} & \mathbf{b}_{3}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{eO}\\
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}
\end{array}\right\} \\
\boldsymbol{\epsilon}_{s}=\left[\begin{array}{ccc}
\mathbf{s}_{1}^{\Delta_{1}} & \mathbf{s}_{2}^{\Delta_{1}} & \mathbf{s}_{3}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{eO}\\
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}
\end{array}\right\}
\end{array}\label{eq:constrains}
\end{equation}
Upon substituting the expression for $\boldsymbol{\delta}_{eO}$ in
Eqs. \ref{eq:constrains}, we obtain:
\begin{equation}
\begin{array}{c}
\boldsymbol{\epsilon}_{p}^{\Delta_{1}}=\left[\begin{array}{ccc}
\frac{1}{3}\mathbf{p}_{1}^{\Delta_{1}}+\mathbf{p}_{2}^{\Delta_{1}} & \frac{1}{3}\mathbf{p}_{1}^{\Delta_{1}}+\mathbf{p}_{3}^{\Delta_{1}} & \frac{1}{3}\mathbf{p}_{1}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}\\
\boldsymbol{\delta}_{e3}
\end{array}\right\} =\mathbf{B}_{p}^{\Delta_{1}}\boldsymbol{\delta}_{e}\\
\boldsymbol{\epsilon}_{b}^{\Delta_{1}}=\left[\begin{array}{ccc}
\frac{1}{3}\mathbf{b}_{1}^{\Delta_{1}}+\mathbf{b}_{2}^{\Delta_{1}} & \frac{1}{3}\mathbf{b}_{1}^{\Delta_{1}}+\mathbf{b}_{3}^{\Delta_{1}} & \frac{1}{3}\mathbf{b}_{1}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}\\
\boldsymbol{\delta}_{e3}
\end{array}\right\} =\mathbf{B}_{b}^{\Delta_{1}}\boldsymbol{\delta}_{e}\\
\boldsymbol{\epsilon}_{s}^{\Delta_{1}}=\left[\begin{array}{ccc}
\frac{1}{3}\mathbf{s}_{1}^{\Delta_{1}}+\mathbf{s}_{2}^{\Delta_{1}} & \frac{1}{3}\mathbf{s}_{1}^{\Delta_{1}}+\mathbf{s}_{3}^{\Delta_{1}} & \frac{1}{3}\mathbf{s}_{1}^{\Delta_{1}}\end{array}\right]\left\{ \begin{array}{c}
\boldsymbol{\delta}_{e1}\\
\boldsymbol{\delta}_{e2}\\
\boldsymbol{\delta}_{e3}
\end{array}\right\} =\mathbf{B}_{s}^{\Delta_{1}}\boldsymbol{\delta}_{e}
\end{array}
\end{equation}
where $\mathbf{p}_{i},\,(i=1,2,3),\:\mathbf{b}_{i},\,(i=1,2,3)$ and
$\mathbf{s}_{i},\,(i=1,2,3)$ are given by:
\begin{equation}
\begin{array}{c}
\mathbf{B}_{p}=\frac{1}{2A_{e}}\left[\underbrace{\begin{array}{c}
b-c\\
0\\
d-a
\end{array}}_{\mathbf{p}_{1}}\underbrace{\begin{array}{c}
0\\
d-a\\
-d
\end{array}}_{\mathbf{p}_{2}}\underbrace{\begin{array}{c}
0\\
0\\
a
\end{array}}_{\mathbf{p}_{3}}\begin{array}{rrrrrrrrrrrr}
0 & 0 & c & 0 & 0 & 0 & 0 & -b & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -d & 0 & 0 & 0 & a & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]\\
\mathbf{B}_{b}=\frac{1}{2A_{e}}\left[\underbrace{\begin{array}{c}
0\\
0\\
0
\end{array}}_{\mathbf{b}_{1}}\underbrace{\begin{array}{c}
0\\
0\\
0
\end{array}}_{\mathbf{b}_{2}}\underbrace{\begin{array}{c}
0\\
0\\
0
\end{array}}_{\mathbf{b}_{3}}\begin{array}{rrrrrrrrrrrr}
b-c & 0 & 0 & 0 & 0 & c & 0 & 0 & 0 & 0 & -b & 0\\
0 & d-a & 0 & 0 & 0 & 0 & -d & 0 & 0 & 0 & 0 & a\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]\\
\mathbf{B}_{s}=\frac{1}{2A_{e}}\left[\underbrace{\begin{array}{c}
0\\
0
\end{array}}_{\mathbf{s}_{1}}\underbrace{\begin{array}{c}
0\\
0
\end{array}}_{\mathbf{s}_{2}}\underbrace{\begin{array}{c}
b-c\\
0
\end{array}}_{\mathbf{s}_{3}}\begin{array}{rrrrrrrrrrrr}
A_{e} & 0 & 0 & 0 & c & \frac{ac}{2} & \frac{bc}{2} & 0 & 0 & -b & \frac{-bd}{2} & \frac{-bc}{2}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\end{array}
\end{equation}
where $a=x_{2}-x_{1};b=y_{2}-y_{1};c=y_{3}-y_{1}$ and $d=x_{3}-x_{1}$
(see Figure \ref{fig:dsg3}), $A_{e}$ is the area of the triangular
element and $\mathbf{B}_{s}$ is the altered shear strain-displacement matrix obtained with the discrete shear gap technique \citep{bletzingerbischoff2000}.
The strain-displacement matrix for the other two triangles can be
obtained by cyclic permutation.
\begin{center}
\begin{figure}[htpb]
\begin{centering}
\includegraphics[scale=0.5]{Figures/Fig3_DSG3_Element}
\par\end{centering}
\caption{Three-noded triangular element and local coordinates in discrete shear
gap method.}
\label{fig:dsg3}
\end{figure}
\par\end{center}
Now applying the cell-based strain smoothing \citep{bordasnatarajan2010},
the constant membrane strains, the bending strains and the shear strains
are respectively employed to create a smoothed membrane strain $\boldsymbol{\bar{\epsilon}}_{p}$,
smoothed bending strain $\boldsymbol{\bar{\epsilon}}_{b}$ and smoothed
shear strain $\boldsymbol{\bar{\epsilon}}_{s}$ on the triangular
element $\Omega_{e}$ as:
\begin{equation}
\begin{array}{c}
\boldsymbol{\bar{\epsilon}}_{p}=\int\limits _{\Omega_{e}}\boldsymbol{\epsilon}_{p}\Phi_{e}\left(\mathbf{x}\right)d\Omega=\sum\limits _{i=1}^{3}\boldsymbol{\epsilon}_{p}^{\Delta_{i}}\int\limits _{\Delta_{i}}\Phi_{e}(\mathbf{x})d\Omega\\
\boldsymbol{\bar{\epsilon}}_{b}=\int\limits _{\Omega_{e}}\boldsymbol{\epsilon}_{b}\Phi_{e}\left(\mathbf{x}\right)d\Omega=\sum\limits _{i=1}^{3}\boldsymbol{\epsilon}_{b}^{\Delta_{i}}\int\limits _{\Delta_{i}}\Phi_{e}(\mathbf{x})d\Omega\\
\boldsymbol{\bar{\epsilon}}_{s}=\int\limits _{\Omega_{e}}\boldsymbol{\epsilon}_{s}\Phi_{e}\left(\mathbf{x}\right)d\Omega=\sum\limits _{i=1}^{3}\boldsymbol{\epsilon}_{s}^{\Delta_{i}}\int\limits _{\Delta_{i}}\Phi_{e}(\mathbf{x})d\Omega
\end{array}
\end{equation}
where $\Phi_{e}\left(\mathbf{x}\right)$ is a given smoothing function
that satisfies the normalization condition $\int_{\Omega_{c}}\Phi\left(\mathbf{x}\right)d\Omega=1$.
In this study, the following constant smoothing function is used:
\begin{equation}
\Phi\left(\mathbf{x}\right)=\left\{ \begin{array}{cc}
1/A_{c} & \mathbf{x}\in\Omega_{c}\\
0 & \mathbf{x}\notin\Omega_{c}
\end{array}\right.
\end{equation}
where $A_{c}$ is the area of the triangular element. The smoothed
membrane strain, the smoothed bending strain and the smoothed shear
strain are then given by
\begin{equation}
\left\{ \boldsymbol{\bar{\epsilon}}_{p},\boldsymbol{\bar{\epsilon}}_{b},\boldsymbol{\bar{\epsilon}}_{s}\right\} =\frac{\sum\limits _{i=1}^{3}A_{\Delta_{i}}\left\{ \boldsymbol{\epsilon}_{p}^{\Delta_{i}},\boldsymbol{\epsilon}_{b}^{\Delta_{i}},\boldsymbol{\epsilon}_{s}^{\Delta_{i}}\right\} }{A_{e}}
\end{equation}
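A minimal sketch of this area-weighted smoothing follows. Because the three sub-triangles formed by joining the centroid $O$ to the field nodes have equal area $A_{e}/3$, the smoothed strain reduces to the plain average of the three constant sub-triangle strains; the code checks exactly that.

```python
import numpy as np

def tri_area(p1, p2, p3):
    """Unsigned area of a triangle from its vertex coordinates."""
    return 0.5 * abs((p2[0]-p1[0])*(p3[1]-p1[1]) - (p3[0]-p1[0])*(p2[1]-p1[1]))

def smoothed_strain(nodes, sub_strains):
    """Cell-based smoothing over the three centroid sub-triangles:
    eps_bar = sum_i A_i eps_i / A_e."""
    nodes = np.asarray(nodes, dtype=float)
    O = nodes.mean(axis=0)                       # center point O
    subs = [(O, nodes[0], nodes[1]),
            (O, nodes[1], nodes[2]),
            (O, nodes[2], nodes[0])]
    areas = np.array([tri_area(*s) for s in subs])
    return areas @ np.asarray(sub_strains) / areas.sum()
```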
The smoothed elemental stiffness matrix is given by
\begin{align}
\mathbf{K} & =\int\limits _{\Omega_{e}}\overline{\mathbf{B}}_{p}\mathbf{A}\overline{\mathbf{B}}_{p}^{{\rm T}}+\mathbf{\overline{B}}_{p}\mathbf{B}\mathbf{\overline{B}}_{b}^{{\rm T}}+\mathbf{\overline{B}}_{b}\mathbf{B}\mathbf{\overline{B}}_{p}^{{\rm T}}+\mathbf{\overline{B}}_{b}\mathbf{D}\mathbf{\overline{B}}_{b}^{{\rm T}}+\mathbf{\overline{B}}_{s}\mathbf{E}\mathbf{\overline{B}}_{s}^{{\rm T}}d\Omega\nonumber \\
& =\left(\overline{\mathbf{B}}_{p}\mathbf{A}\overline{\mathbf{B}}_{p}^{{\rm T}}+\mathbf{\overline{B}}_{p}\mathbf{B}\mathbf{\overline{B}}_{b}^{{\rm T}}+\mathbf{\overline{B}}_{b}\mathbf{B}\mathbf{\overline{B}}_{p}^{{\rm T}}+\mathbf{\overline{B}}_{b}\mathbf{D}\mathbf{\overline{B}}_{b}^{{\rm T}}+\mathbf{\overline{B}}_{s}\mathbf{E}\mathbf{\overline{B}}_{s}^{{\rm T}}\right)A_{e}
\end{align}
where $\overline{\mathbf{B}}_{p},\overline{\mathbf{B}}_{b}$ and $\overline{\mathbf{B}}_{s}$
are the smoothed strain-displacement matrices.
\section{Numerical Results}
Numerical results were obtained using the GA-coupled FEM solver for Wageningen-B
series propellers. The convergence and stability of the FEM solver were
validated using a simple plate example. The solver was then successfully
used to optimize standard propeller blade shapes. To ensure that the
solver and the optimization technique are applicable to various blade
shapes, three blades with different Expanded Area Ratios (EAR) were
considered. The B5-45 blade was further analyzed to optimize for a
weighted off-design-point case and for integer ply optimization, and
to investigate the effect of layer thickness. The unloaded shape was
also calculated for the B5-45 blade.
\subsection{CS-FEM mesh convergence and stability}
A mesh convergence study was conducted to ensure that the cell-based
smoothed finite element technique is stable and provides accurate
results with the increase in the number of degrees of freedom. The
mesh was refined by increasing the node number of the test structure
(h-refinement). A simple rectangular plate with dimensions: 0.4 m
(L) x 0.2 m (W) x 3 mm (t) was considered for the convergence study.
It was assumed that the plate was made of unidirectional CFRP
(Table \ref{tab:Material-Properties}) with 24 plies, all having
a fiber orientation of 40\textdegree{} counter-clockwise from the x-axis
towards the y-axis. The plate was assumed to be clamped at the left edge,
and a uniform pressure loading (normal to the surface) of 100 Pa (upwards)
was applied on the top surface. Details of the meshes that were validated
and their results are given in Table \ref{tab:Mesh-convergence}. As an
independent verification, the maximum deflection obtained using Q8 elements
(8-noded shell 281 in the commercial software ANSYS\texttrademark{}) is
also presented. The results showed that the CS-FEM is accurate, with
good stability and convergence; thus, it can be used for complex shapes
in further applications.
\begin{table}
\begin{centering}
\begin{tabular}{ccc}
\toprule 
 & Node Array & Max. Deflection (mm)\tabularnewline
\midrule
Mesh 1 & 5\texttimes{}5 & 4.296\tabularnewline
Mesh 2 & 10\texttimes{}10 & 5.704\tabularnewline
Mesh 3 & 20\texttimes{}20 & 6.087\tabularnewline
Mesh 4 & 40\texttimes{}40 & 6.165\tabularnewline
Mesh 5 & 80\texttimes{}80 & 6.194\tabularnewline
\midrule
\multicolumn{2}{c}{ANSYS\texttrademark{} Q8 (9841 Nodes)} & 6.212\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Mesh-convergence}Mesh convergence of CS-FEM}
\end{table}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{Figures/Fig4_Plate_Convergence}
\par\end{centering}
\caption{\label{fig:Mesh-convergence-curve}CS-FEM mesh convergence curve}
\end{figure}
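As a quick numerical check on the Table \ref{tab:Mesh-convergence} data, the relative error with respect to the ANSYS\texttrademark{} reference decreases monotonically under h-refinement; a short sketch:

```python
# Maximum deflections (mm) per node array size, from Table 1,
# and the ANSYS Q8 reference value.
deflections = {5: 4.296, 10: 5.704, 20: 6.087, 40: 6.165, 80: 6.194}
reference = 6.212

# Relative errors in refinement order, and error-reduction factors per step.
errors = [abs(d - reference) / reference for _, d in sorted(deflections.items())]
rates = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```

The finest mesh sits within about 0.3\% of the reference value.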
\subsection{Preliminary layup optimization of three blade shapes}
The developed GA coupled FE solver can be used to optimize the Wageningen-B
Series propeller blades against various flow conditions at various
speeds. The proposed idea is to extract pressure distribution maps
at various speeds above and below the operating (cruise) advance ratio
and use the pressure maps as the basis of optimization. The method by
which the pressure maps are evaluated is at the discretion of the hydrodynamicist.
These pressure maps have to be used in the optimization scheme's finite
element routine as pressure differences $\left(\Delta P_{ij}\right)$
at each Gauss point calculated with respect to the pressure map at
cruise condition. However, this paper focuses on presenting the optimization
methodology rather than actual values; thus, pressure values are directly
used as the basis for optimization. The pressure variations used in this
paper were chosen as uniform arbitrary distributions with no obvious
relationship, in order to maintain generality. They were chosen
sensibly, based on the pressure distribution at the cruise condition obtained
using CFD (ANSYS CFX) analyses in preliminary studies (Table \ref{tab:Pressure-variations}).
The blades were chosen from the Wageningen-B five bladed series having
expanded area ratios (EAR) of 0.75, 0.6 and 0.45. The propellers were
chosen to have a diameter of 0.4 m with the hub (boss) having a diameter
of 0.08 m respecting the standards of Wageningen-B series. One special
characteristic of the Wageningen-B series is that all propellers, apart
from the 4-bladed ones, have a constant pitch distribution in the
radial direction \citep{Kuiper1992}, making the blades 2-dimensional
on the plane of the blade. Thus, 2-dimensional meshes were generated
for these three blade shapes using 3-noded triangular elements (Figure
\ref{fig:T3-meshes-generated}). The chosen pressure distributions
and required pitch to diameter ratios are given in Table \ref{tab:Pressure-variations}.
Further note that unlike a Controllable Pitch Propeller (CPP), a passive
shape adaptive propeller cannot change its pitch at the hub. Thus,
the pitch values presented in Table \ref{tab:Pressure-variations}
and throughout this paper are pitch values required at the tip of
the blade.
\begin{figure}
\begin{centering}
\subfloat[]{\centering{}\includegraphics[scale=0.5]{Figures/Fig5a_Mesh_B5_75}}\subfloat[]{\centering{}\includegraphics[scale=0.5]{Figures/Fig5b_Mesh_B5_60}}
\par\end{centering}
\centering{}\subfloat[]{\begin{centering}
\includegraphics[scale=0.5]{Figures/Fig5c_Mesh_B5_45}
\par\end{centering}
}\caption{\label{fig:T3-meshes-generated}T3 meshes generated for blade shapes;
(a) B5-75, (b) B5-60, (c) B5-45}
\end{figure}
\begin{table}
\begin{centering}
\begin{tabular}{ccccc}
\toprule 
P (kPa) & $\Delta$P (kPa) & P/D & $\phi$ (deg) & $\Delta\phi$ (deg)\tabularnewline
\midrule
180 & -70 & 0.7 & 12.56 & -3.44\tabularnewline
205 & -45 & 0.8 & 14.3 & -1.7\tabularnewline
250 (Cruise) & 0 & 0.9 & 16 & 0\tabularnewline
270 & +20 & 1.0 & 17.66 & +1.66\tabularnewline
300 & +50 & 1.1 & 19.3 & +3.3\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Pressure-variations}Pressure variations and required optimum
pitch values}
\end{table}
The optimization was carried out using prepreg AS4 carbon fiber reinforced
3501-6 epoxy with a nominal layer thickness of 125 \textgreek{m}m
(Table \ref{tab:Material-Properties} \citep{SodenProperties1998}).
It was considered that the blades were made of 40 layers. The layup
was taken to be symmetric (i.e., $\left[B\right]=\left[0\right]$)
to prevent warping during the manufacturing process. Thus, the optimization
algorithm was used to optimize 20 independent variables. It was initially
assumed that any fiber angle could be manufactured; thus,
optimization was performed assuming ply angles could be any real number
(continuous-variable ply optimization). Integer ply optimization was
also attempted and is demonstrated in a subsequent section.
Ply angle results obtained using the optimization process, required
to enable the optimum pitch variation is given in Table \ref{tab:Ply-angle-results}
(all ply angle results specified in this paper are measured counter-clockwise
from x-axis to y-axis with coordinate system specified in Figure \ref{fig:T3-meshes-generated}).
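The coupled GA-FEM evaluation described above can be illustrated with a minimal sketch. Here the FEM solve is replaced by a hypothetical linear surrogate (tip twist proportional to an assumed bend-twist coupling measure of the layup), and a simple random search stands in for the genetic algorithm; the surrogate model and all names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the coupled optimization loop. `achieved_tip_angles` is a
# hypothetical stand-in for the CS-FEM/DSG solve; the real code evaluates
# the blade deflection under each pressure load case.
import math
import random

PRESSURES = [180.0, 205.0, 250.0, 270.0, 300.0]   # kPa, Table 2
REQUIRED_PHI = [12.56, 14.3, 16.0, 17.66, 19.3]   # deg, Table 2

def achieved_tip_angles(plies_deg):
    """Assumed surrogate: cruise pitch plus a bend-twist coupling term
    that scales linearly with the pressure offset from cruise."""
    coupling = sum(math.sin(2 * math.radians(a)) for a in plies_deg) / len(plies_deg)
    return [16.0 + coupling * (p - 250.0) / 15.0 for p in PRESSURES]

def objective(plies_deg, weights=None):
    """Average |achieved - required| tip angle, in radians."""
    w = weights or [1.0] * len(PRESSURES)
    phi = achieved_tip_angles(plies_deg)
    err = sum(wi * abs(a - r) for wi, a, r in zip(w, phi, REQUIRED_PHI))
    return math.radians(err / sum(w))

# 20 independent ply angles (symmetric 40-layer laminate); random search
# stands in for the GA's selection/crossover/mutation loop.
random.seed(0)
best = min(([random.uniform(0.0, 180.0) for _ in range(20)] for _ in range(2000)),
           key=objective)
```

The objective mirrors the interpretation given below: the average absolute deviation, in radians, between achieved and required tip angles over the five load cases of Table \ref{tab:Pressure-variations}.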
\begin{table}
\begin{centering}
\begin{tabular}{cc}
\toprule
Property & Value\tabularnewline
\midrule
\midrule
E\textsubscript{1} (GPa) & 126\tabularnewline
\midrule
\midrule
E\textsubscript{2} (GPa) & 11\tabularnewline
\midrule
\midrule
G\textsubscript{12} (GPa) & 6.6\tabularnewline
\midrule
\midrule
$\nu_{12}$ & 0.28\tabularnewline
\midrule
\midrule
$\nu_{23}$ & 0.4\tabularnewline
\midrule
\midrule
Thickness ($\mu$m) & 125\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Material-Properties}AS4-3501-6 Material Properties \citep{SodenProperties1998}}
\end{table}
\begin{center}
\begin{landscape}
\begin{table}
\begin{centering}
\begin{tabular}{cl}
\toprule
Blade & \multirow{1}{*}{Ply angle configuration (deg) }\tabularnewline
\midrule
\midrule
B5-75 & \multirow{1}{*}{{[}43.4/10.7/40.1/43.2/15.1/22.9/23.0/30.3/42.7/21.7/11.2/12.7/21.1/11.3/13.7/47.3/21.3/38.2/25.3/41.8{]}\textsubscript{s}}\tabularnewline
\midrule
\midrule
B5-60 & {[}40.1/56.2/53.6/14.7/55.1/51.0/46.4/27.7/33.9/32.7/39.1/4.0/52.3/4.0/15.5/123.5/56.1/12.6/9.8/21.9{]}\textsubscript{s }\tabularnewline
\midrule
\midrule
B5-45 & {[}11.7/53.6/28.9/51.6/37.8/28.2/36.6/51.0/51.4/31.0/24.7/13.9/21.8/3.6/35.2/52.0/26.5/26.9/44.2/39.4{]}\textsubscript{s} \tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Ply-angle-results}Ply angle results}
\end{table}
\end{landscape}
\par\end{center}
For the above three blade shapes the genetic algorithm converged to
a minimum objective function value of 0.51 deg ($8.86\times10^{-3}$ rad).
Figure \ref{fig:GA-Conv} demonstrates the typical convergence of
the GA for one of the optimization tasks. The minimum objective function
value can be thought of as the average deviation between the required
and the achieved result for each pitch angle. Thus, the results were
highly satisfactory. Although the required tip angle varied non-linearly
with pressure, the best a passive pitch adapting blade
can achieve is a linear tip angle variation with uniform pressure,
as evident from Figure \ref{fig:Comparison-between-pitch}.
The above ply angle results were verified using commercial FE software
(ANSYS) and were found to provide bend-twist coupling characteristics
matching those predicted by the in-house GA-coupled FEM code. Thus, the
coupled GA-FEM code can be used to optimize ply angles based
on the required deformation. Figure \ref{fig:Comparison-between-pitch}
demonstrates the comparison between achieved pitch angle and required
pitch angle. All three blade shapes behaved similarly and were able
to achieve the same pitch angles; thus, for brevity, only one comparison
is presented in Figure \ref{fig:Comparison-between-pitch}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.3]{Figures/Fig6_B5_45_40L_PnonLin_NewObjf}
\par\end{centering}
\caption{\label{fig:GA-Conv}Typical Convergence of the Genetic Algorithm}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{Figures/Fig7_Req_vs_Achieved_Nnlin}
\par\end{centering}
\caption{\label{fig:Comparison-between-pitch}Comparison between pitch angles}
\end{figure}
\subsection{Further analysis of Wageningen B5-45}
The Wageningen B5-45 blade was further analyzed for a weighted off-design
point case, integer optimization and the effect of layer thickness.
The unloaded shape required to behave according to Table \ref{tab:Pressure-variations}
was also obtained. For the weighted off-design point case, it was assumed
that the $P=270\, kPa$ point has a relative weightage of 4, while
all other off-design points have a relative weightage of 1 (refer
to eq. \ref{eq:ObjFunc}). The value of 4 was chosen in this paper
purely for demonstration purposes; actual weightages have
to be evaluated for a particular propeller application based on its
probability of operating at each off-design point. Similar to the previous
cases, continuous ply optimization was performed. Table \ref{tab:Weighted-Ply-Angle-results}
summarizes the obtained ply angle results, and Figure \ref{fig:Wghted_Tip_Ang_Comp}
compares the resulting pitch angle variation with that of the equally weighted
optimization. Figure \ref{fig:Wghted_Tip_Ang_Comp} demonstrates that
assigning weightages moves the achieved pitch curve towards the
more critical off-design point at the expense of deviating further
from the other off-design points. This is further evidenced by the reduction
of the difference between the achieved and required tip angles at $270\, kPa$
(Figure \ref{fig:270_Error_Reduction}). Thus, weightages
have to be carefully chosen based on the operating conditions of the
propeller.
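Since the passive blade's achievable tip-angle response is (approximately) linear in pressure, the effect of weighting can be sketched with a one-parameter fit: the single free slope of the tip-angle curve is pulled toward the heavily weighted off-design point. The fit below is an illustration of this mechanism, not the authors' optimization.

```python
# Illustrative one-parameter model: phi(P) = 16 + s*(P - 250), with the
# slope s chosen to minimize the weighted mean absolute deviation from
# the required tip angles (grid search over s).
PRESSURES = [180.0, 205.0, 250.0, 270.0, 300.0]   # kPa
REQUIRED = [12.56, 14.3, 16.0, 17.66, 19.3]       # deg

def best_slope(weights):
    """Grid-search the slope minimizing the weighted mean |error|."""
    def err(s):
        return sum(w * abs(16.0 + s * (p - 250.0) - r)
                   for w, p, r in zip(weights, PRESSURES, REQUIRED)) / sum(weights)
    return min((s / 10000.0 for s in range(0, 2000)), key=err)

s_eq = best_slope([1, 1, 1, 1, 1])
s_w = best_slope([1, 1, 1, 4, 1])     # weight 4 on the 270 kPa point

# Error at 270 kPa shrinks when that point is weighted more heavily,
# at the cost of larger deviations elsewhere.
err270_eq = abs(16.0 + s_eq * 20.0 - 17.66)
err270_w = abs(16.0 + s_w * 20.0 - 17.66)
```

With these illustrative numbers the weighted fit reproduces the qualitative trend of Table \ref{tab:Weighted-Ply-Angle-results}: the 270 kPa error decreases while the low-pressure points deviate further.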
\begin{center}
\begin{landscape}
\begin{table}
\begin{centering}
\subfloat[]{\centering{}%
\begin{tabular}{|c|c|}
\hline
B5-45 (Weighted) & {[}51.7/58.5/40.9/57.8/55.8/38.5/45.1/40.6/28.8/76.1/32.2/20.3/54.8/58.3/56.1/32.9/63.0/57.9/32.8/106.4{]}\textsubscript{s}\tabularnewline
\hline
\end{tabular}}
\par\end{centering}
\centering{}\subfloat[]{\centering{}%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Pressure (kPa) & 180 & 205 & 250 & 270 & 300\tabularnewline
\hline
\hline
Required $\phi$ (deg) & 12.56 & 14.3 & 16 & 17.66 & 19.3\tabularnewline
\hline
Non-weighted $\phi$ (deg) & 12.56 & 13.79 & 16 & 16.98 & 18.46\tabularnewline
\hline
Weighted $\left(w_{4}=4\right)$$\phi$ (deg) & 11.38 & 13.03 & 16 & 17.32 & 19.3\tabularnewline
\hline
\end{tabular}}\caption{\label{tab:Weighted-Ply-Angle-results}Weighted ply angle results;
(a) Ply angles (degrees), (b) Resulting pitch angles}
\end{table}
\end{landscape}
\par\end{center}
\begin{center}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{Figures/Fig8_Weighted_Comp_Graph}
\par\end{centering}
\caption{\label{fig:Wghted_Tip_Ang_Comp}Tip angle results for weighted optimization
case}
\end{figure}
\par\end{center}
\begin{center}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.6]{Figures/Fig9_Weighted_Error_Comp}
\par\end{centering}
\caption{\label{fig:270_Error_Reduction}Difference between achieved and required
tip angles for non-weighted and weighted cases}
\end{figure}
\par\end{center}
Integer ply optimization was performed for the equal weighting case. Integer
ply optimization is preferable as it greatly simplifies the layup
and manufacturing process. It was observed that integer ply optimization
problems required considerably longer to converge than
continuous ply optimization problems. Integer ply optimization
was performed for four cases: only integer ply angles $\left(0\leq\theta<\pi\right)$
are allowed; only multiples of 5 degrees are allowed; only multiples
of 10 degrees are allowed; and only the angles $\left\{ 0\text{\textdegree},30\text{\textdegree},45\text{\textdegree},60\text{\textdegree},90\text{\textdegree},120\text{\textdegree},135\text{\textdegree},150\text{\textdegree}\right\} $
are allowed. Results of these optimization problems are given in Table
\ref{tab:Integer-ply-results}. It was observed that for the first
three cases, the objective function was able to achieve the same minimum
value obtained for continuous ply optimization. Thus, limiting the
variable domain was not detrimental to the final results. The last
case, with more aggressive variable domain limitations, was found to
increase the value of the objective function. Hence, the achieved
tip angles deviated further from the required tip angles.
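The four restricted variable domains can be sketched as follows, assuming each GA gene is snapped to the nearest allowed ply angle (an assumed encoding for illustration, not necessarily the authors' implementation).

```python
# Snapping a continuous gene value to a restricted ply-angle domain.
def snap(angle, allowed):
    """Map a continuous angle (deg) to the nearest allowed ply angle."""
    return min(allowed, key=lambda a: abs(a - angle))

# The four variable domains used in the integer optimization cases:
integers = list(range(0, 180))                  # all integer degrees
fives = list(range(0, 180, 5))                  # multiples of 5 deg
tens = list(range(0, 180, 10))                  # multiples of 10 deg
restricted = [0, 30, 45, 60, 90, 120, 135, 150] # aggressive restriction

print(snap(43.4, fives))        # -> 45
print(snap(43.4, restricted))   # -> 45
print(snap(12.7, restricted))   # -> 0
```

The first three domains are dense enough that the snapped layup can reach the continuous-optimization minimum; the last, much sparser set cannot, consistent with the larger $f_{min}$ in Table \ref{tab:Integer-ply-results}.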
\begin{landscape}
\begin{table}
\begin{centering}
\begin{tabular}{ccc}
\toprule
Case & Ply Angle Result (deg) & $f_{min}\:(rad)$\tabularnewline
\midrule
\midrule
All Integer & {[}92/66/107/45/34/31/59/27/71/35/38/34/68/89/108/22/5/79/119/29{]}\textsubscript{s} & $8.86\times10^{-3}$ \tabularnewline
\midrule
\midrule
Only 5n deg & {[}75/55/75/40/35/45/50/90/25/15/80/100/130/30/90/80/65/55/65/90{]}\textsubscript{s} & $8.86\times10^{-3}$ \tabularnewline
\midrule
\midrule
Only 10n deg & {[}30/90/90/60/40/40/50/40/20/30/40/10/80/160/30/40/70/120/30/90{]}\textsubscript{s} & $8.86\times10^{-3}$ \tabularnewline
\midrule
\midrule
Limited Ply Angles & {[}(60)\textsubscript{9}/90/60/90/150/60/135/(150)\textsubscript{5}{]}\textsubscript{s} & $4.41\times10^{-2}$ \tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Integer-ply-results}Integer ply optimization results}
\end{table}
\end{landscape}
All of the above optimization tasks were carried out as unconstrained optimization.
One major drawback witnessed was the high flexibility of the
resulting blade, which led to a substantial rake deformation
in the process of pitch variation. It was investigated whether the
flexibility could be reduced by increasing the number of layers on the
blade. Accordingly, an attempt was made to optimize the blade assuming
50 CFRP prepreg layers, all having a nominal thickness of
$125\,\mu m$ (Table \ref{tab:Material-Properties}). It was observed
that increasing the number of layers resulted in a stiffer blade,
which had relatively less bend-twist coupling performance than
the 40 layer blades attempted earlier (Table \ref{tab:different_thk_Results}).
However, the same twisting performance could be achieved if
the total laminate thickness of the 50 layer layup was kept the same as
that of the 40 layer layup; in other words, if the nominal layer thickness
of the CFRP layers was reduced to $100\,\mu m$. The resulting blade had
the same twisting performance as the 40 layer model, but with less
lateral deformation. This was enabled by its higher bend-twist coupling
coefficient in the bending compliance matrix ($\left[d\right]$).
In addition, the thinner layup with 40 layers of $100\,\mu m$ had the same
twisting performance, but with a higher lateral bend. It must be noted that layer
thickness is a practical limitation under current composite prepreg
lamina manufacturing technology that is beyond the composite designer's
control. However, it is clear that in order to achieve the best bend-twist
coupling performance one must attempt to reduce the layer thickness
while increasing the number of layers in the laminate. Table \ref{tab:different_thk_Results}
summarizes the results obtained for different layer numbers.
\begin{landscape}
\begin{table}
\begin{centering}
\begin{tabular}{c>{\raggedright}p{1.5cm}>{\centering}p{1.5cm}cc}
\toprule
Layers & Layer Thk. ($\mu m$) & Total Thk. (mm) & Optimum Layup (deg) & $f_{min}\:(rad)$\tabularnewline
\midrule
\midrule
\multirow{2}{*}{40} & \multirow{2}{1.5cm}{\centering{}125} & \multirow{2}{1.5cm}{\centering{}5} & {[}11.7/53.6/28.9/51.6/37.8/28.2/36.6/51.0/51.4/31.0/24.7/ & \multirow{2}{*}{0.00886}\tabularnewline
& & & 13.9/21.8/3.6/35.2/52.0/26.5/26.9/44.2/39.4{]}\textsubscript{s} & \tabularnewline
\midrule
\midrule
\multirow{2}{*}{50} & \multirow{2}{1.5cm}{\centering{}125} & \multirow{2}{1.5cm}{\centering{}6.25} & {[}41.9/41.9/78.8/41.6/43.5/42.3/45.1/79.4/42.8/41.6/78.8/44.8/44.4/ & \multirow{2}{*}{0.0134}\tabularnewline
& & & 41.5/44.1/41.5/79.2/45.1/79.0/41.0/80.4/43.0/45.7/41.7/81.2{]}\textsubscript{s} & \tabularnewline
\midrule
\midrule
\multirow{2}{*}{50} & \multirow{2}{1.5cm}{100} & \multirow{2}{1.5cm}{5} & {[}32.1/46.4/44.8/55.6/50.3/52.5/43.2/2.1/22.4/50.2/21.1/17.2/36.6/ & \multirow{2}{*}{0.00886}\tabularnewline
& & & 36.5/13.5/40.8/38.1/49.3/45.3/26.9/52.3/12.2/55.8/8.8/3.3{]}\textsubscript{s} & \tabularnewline
\midrule
\midrule
\multirow{2}{*}{40} & \multirow{2}{1.5cm}{100} & \multirow{2}{1.5cm}{4} & {[}4.1/30.3/11.3/7.6/24.3/109.4/60.4/0.41/42.9/15.7/15.2/ & \multirow{2}{*}{0.00886}\tabularnewline
& & & 49.6/39.1/33.2/110.7/113.9/83.7/46.7/64.4/97.5{]}\textsubscript{s} & \tabularnewline
\midrule
\midrule
\multirow{2}{*}{20} & \multirow{2}{1.5cm}{250} & \multirow{2}{1.5cm}{5} & \multirow{2}{*}{{[}15.7/19.7/30.3/49.7/37.2/44.3/22.1/34.9/47.0/3.4{]}\textsubscript{s}} & \multirow{2}{*}{0.00886}\tabularnewline
& & & & \tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:different_thk_Results}Results for different layer thicknesses}
\end{table}
\end{landscape}
Note further that the current unconstrained optimization scheme is
capable of satisfying the pitch angle variation requirement almost
perfectly, greatly enhancing the efficiency envelope. However, this
may not be practical due to the considerable increase in flexibility
and lateral deformations. Optimization constraints must be introduced
based on the application to limit such deformations, at the expense
of efficiency gains.
\subsection{\label{sub:Unloaded-shape-calculation}Unloaded shape calculation}
The second stage of the proposed design process is the calculation
of the unloaded blade shape. The process is iteration based, where
negative loads are applied to the shape at the design/cruise speed
of the blade. Once the first unloaded shape is obtained, the loads at
design speed are applied to the unloaded shape to check whether it
reaches the desired shape at the operating speed. The desired shape
is the shape of the chosen optimum alloy propeller. If the shape requirement
has not been met, the initial shape is adjusted by the difference
and further iterated until satisfactory shape convergence is achieved
(Figure \ref{fig:Unloaded-shape-iteration}).
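The iteration above can be sketched as a simple fixed-point loop. The FEM load step is replaced here by a hypothetical linear response (`load_step`) calibrated only loosely to the numbers of Table \ref{tab:Unloaded-shape-iteration}; it is an assumption for illustration, not the actual structural solve.

```python
# Unloaded-shape iteration: guess an unloaded tip angle, apply the
# design-speed loads (stand-in model), compare the loaded tip angle to
# the target, and adjust the initial shape by the difference.
def load_step(tip_unloaded_deg):
    """Assumed stand-in for the FEM load application at design speed."""
    return tip_unloaded_deg + 11.32 + 0.03 * (tip_unloaded_deg - 3.44)

target = 16.0            # required tip pitch angle at design speed (deg)
tip = 16.0 - 12.56       # first guess: reverse the design-point twist
for it in range(1, 10):
    loaded = load_step(tip)
    error = target - loaded
    if abs(error) < 0.005:        # satisfactory shape convergence
        break
    tip += error                  # adjust initial shape by the difference
```

With this surrogate the loop converges in a handful of iterations, mirroring the four-iteration convergence reported for the B5-45 blade.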
\begin{center}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.7]{Figures/Fig10_Unloaded_Flw}
\par\end{centering}
\caption{\label{fig:Unloaded-shape-iteration}Unloaded shape iteration process}
\end{figure}
\par\end{center}
The unloaded shape for the B5-45 blade was calculated for the layup
given in Table \ref{tab:Ply-angle-results}. As summarized in Figure
\ref{fig:Unloaded-shape-iteration}, the first step of the process
is to apply reverse strains or reverse loads; in this example, reverse
loads were applied. If reverse strains were applied instead, the iteration
path would be different, but would eventually converge to the same
unloaded shape. Based on the iterations it was concluded that the
blade has to be manufactured with initial pitch angles of 16\textsuperscript{o}
at the root and 4.53\textsuperscript{o} at the tip. Further
loading due to the movement of the ship is expected to increase the
pitch of the propeller. A summary of the iteration values is presented in
Table \ref{tab:Unloaded-shape-iteration}; these values are depicted
in Figure \ref{fig:Unloaded-shape-Summary} for clarity.
\begin{table}
\begin{centering}
\begin{tabular}{ccccc}
\toprule
Iteration & Ini. Tip & Loaded Tip & \%Error & Adjustment\tabularnewline
\midrule
\midrule
1 & 3.44\textsuperscript{o} & 14.76\textsuperscript{o} & 7.73 & 1.24\textsuperscript{o}\tabularnewline
\midrule
\midrule
2 & 4.68\textsuperscript{o} & 16.18\textsuperscript{o} & -1.11 & -0.18\textsuperscript{o}\tabularnewline
\midrule
\midrule
3 & 4.50\textsuperscript{o} & 15.97\textsuperscript{o} & 0.17 & -0.03\textsuperscript{o}\tabularnewline
\midrule
\midrule
4 & 4.53\textsuperscript{o} & 16.00\textsuperscript{o} & -0.03 & 0.00\textsuperscript{o}\tabularnewline
\bottomrule
\end{tabular}
\par\end{centering}
\caption{\label{tab:Unloaded-shape-iteration}Unloaded shape iteration steps}
\end{table}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{Figures/Fig11_Unloaded_Iterations}
\par\end{centering}
\caption{\label{fig:Unloaded-shape-Summary}Unloaded shape iteration summary}
\end{figure}
\section{Conclusion}
This paper presented a scheme for optimizing the ply
layup of composite marine propeller blades using a coupled Genetic
Algorithm and Finite Element approach. The genetic algorithm was utilized
to vary the ply angles based on their fitness values, while the Cell-Based
Smoothed FE code using the Discrete Shear Gap Method was used to evaluate
the deflection and pitch of the blade under given loadings. Pressure
distributions were chosen to be uniform and arbitrary, allowing
readers to substitute exact pressure distribution functions obtained using CFD methods.
Continuous ply optimization was performed for three blade shapes to
demonstrate the applicability of the design scheme for various shapes. Further,
B5-45 was analyzed for weighted off-design point optimization, integer
ply optimization and the effect of different layer thicknesses. The unloaded
shape for B5-45 was also calculated and was found to provide good
convergence. Results obtained using the coupled optimization algorithm
had high accuracy, and the process can be confidently used in future
research endeavors. The framework presented in the paper was based purely
on optimization for best efficiency using shape adaptability
achieved through optimization of composite layup angles. Many
steps and adjustments remain to be made based on strength testing, cavitation
performance testing, natural frequency and structural divergence analysis,
etc. Thus, the proposed framework can be seen as a foundation for
designing shape-adaptive composite marine propellers.
\section*{Acknowledgments}
The authors would like to thank the Defence Science and Technology Organization
of Australia (DSTO) for the financial support and expertise provided
for the project. In addition, a special acknowledgment to Dr Andrew
Philips of DSTO for his constant support and input. S Natarajan would
like to acknowledge the financial support of the School of Civil and
Environmental Engineering, The University of New South Wales for his
research fellowship since September 2012.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}
Cells live in a highly dynamic environment, which means that they
continually have to respond to a large number of different signals.
One possible strategy for signal transmission would be to use
distinct signal transduction pathways for the transmission of the respective
signals. However, it
is now clear that components are often shared between different
pathways. Prominent examples are the Mitogen-Activated Protein Kinase
(MAPK) signaling pathways in yeast, which share multiple components
\cite{Schwartz2004,Rensing2009a}. In fact, cells can even transmit
different signals through one and the same pathway, and yet respond
specifically and reliably to each of them. Arguably the best-known
example is the rat PC-12 system, in which the epidermal growth factor
(EGF) and nerve growth factor (NGF) stimuli are transmitted through
the same MAPK pathway, yet give rise to different cell fates, respectively
differentiation and proliferation \cite{Marshall1995,
Sasagawa2005}. Another example is the p53 system, in which the
signals representing double-stranded and single-stranded breaks in the
DNA are transmitted via the same pathway \cite{Batchelor2011}. These observations suggest
that cells can {\em multiplex} biochemical signals \cite{DeRonde2011},
{\em i.e.} transmit multiple signals through one and the same
signaling pathway, just as many telephone calls can be transmitted
simultaneously via a shared medium, a copper wire or the ether.
One of the key challenges in transmitting multiple signals via pathways that
share components is to avoid unwanted crosstalk between the
different signals. In recent years, several mechanisms for generating signaling
specificity have been proposed. One strategy is spatial insulation, in
which the shared components are recruited into distinct macromolecular
complexes on scaffold proteins \cite{Schwartz2004, Patterson2010}. This
mechanism effectively creates independent communication channels, one for each
signal to be transmitted. Another mechanism is kinetic insulation, in which the
common pathway is used at different times, and a temporal separation between the
respective signals is thus established \cite{Behar2007}. Another
solution is cross-pathway inhibition, in which one signal dominates
the response \cite{Bardwell2007, McClean2007, Hu2009,Thalhauser2009,Hu2011}.
In the latter two schemes, kinetic insulation and cross-pathway inhibition, the
signals are effectively transmitted via one signaling pathway, though in these
schemes multiple messages cannot be transmitted simultaneously.
We have recently demonstrated that cells can truly multiplex signals:
they can transmit at least two signals simultaneously through a common
pathway, and yet respond specifically and reliably to each of them
\cite{DeRonde2011}. In the multiplexing scheme that we proposed, the
input signals are encoded in the concentration levels of the signaling
proteins. The underlying principle is, however, much more generic,
since essentially any coding scheme can be used to multiplex
signals. This observation is important, because it is becoming
increasingly clear that cells employ a wide range of coding strategies
for transducing signals. One is to encode the signals in the duration
of the signal. This is the scheme used by the NGF-EGF system: while
EGF stimulation yields a transient response of ERK, NGF leads to a
sustained response of ERK \cite{Sasagawa2005,Marshall1995}. Another
strategy is to encode the message in the frequency or amplitude of
oscillatory signals. Indeed, a large number of systems have now been
identified that employ oscillatory signals to transduce
information. Arguably the best-known example is calcium oscillations
\cite{Berridge2000a}, but other examples are the p53
\cite{Batchelor2011}, NFAT \cite{Hoffmann2002, Nelson2004a}, nuclear
ERK oscillations \cite{Shankaran2009} and NF-$\kappa$B system
\cite{Hoffmann2002,Ashall2009b,Sung2009,Cheong2011}. In fact, cells use
oscillatory signals not only to transmit information intracellularly,
but also from one cell to the next---insulin \cite{Wu2011} and
Gonadotropin-Releasing Hormone (GnRH) \cite{Li1989} are prominent examples
of extracellular signals that oscillate in time. More examples of
systems that encode stimuli in the temporal dynamics of the signal are
provided in the recent review article by Behar and Hoffmann
\cite{Behar2010}.
In this manuscript we demonstrate that oscillatory signals can be used
to multiplex biochemical signals. We present a multiplexing system,
which is based on well-known network motifs, such as the
Goldbeter-Koshland push-pull network \cite{Goldbeter1981a} and the incoherent feedforward motif \cite{Alon:2007tz}. For the constant signal the
information is encoded in the concentration, while for the oscillatory
signal the message is encoded in the amplitude or the frequency of the
oscillations. These input signals are then multiplexed in the dynamics
of a common signaling pathway, which are finally decoded by downstream
networks.
Our results highlight that information transmission is a
mapping problem. For optimal information transmission, each input
signal needs to be mapped onto a unique output signal, allowing the
cell to infer from the output what the input was. It is now well
established that noise, arising from the inherent stochasticity of
biochemical reactions, can reduce information transmission
\cite{Detwiler2000,Ziv2007, Tkacik2008, Walczak2009,
Mehta2009,Tostevin2009, DeRonde2011, Cheong2011}, because a given output signal
may correspond to different input signals. Additionally, here we show that
crosstalk between the two different signals can also compromise
information transmission: a given state of a given input signal can
map onto different states of its corresponding output signal, because
the input-output mapping for that channel depends on the state of the
signal that is transmitted through the other channel. This crosstalk presents a
fundamental bound on the amount of information that can be
transmitted, because it limits information transmission even in the
deterministic, mean-field limit. We also show, however, that under
biologically relevant conditions more than $2$ bits of
information can be transmitted per channel. We end by comparing our results with
observations on experimental systems, in which oscillatory and
constant signals are transmitted through a common pathway.
\section{Results}
\subsection{The model}
\fref{Fig1} shows a cartoon of the setup. We consider two
input species $\molsf{S}_1$ and $\molsf{S}_2$, with two corresponding
output species, $\molsf{X}_1$ and $\molsf{X}_2$, respectively. The
concentration $S_1\br{t}$ of input $\molsf{S}_1$
oscillates in time, while the concentration of $\molsf{S}_2$ is
constant in time. An input signal can represent different messages;
that is, the input can be in different states.
For $\molsf{S}_1$ the different states
could either be encoded in the amplitude or in the period of
the oscillations. Here, unless stated otherwise, we will focus on the
former and comment on the latter in the Discussion.
The different states of $\molsf{S}_2$
correspond to different copy-number or, since we are working at
constant volume, concentration levels $S_2$. The
signals $S_1$ and $S_2$ drive
oscillations in the concentration $V(t)$ of an intermediate component
$\molsf{V}$, with a mean that is determined by
$S_2$ and an amplitude that is determined by $S_1$ (see {\em SI}). The
states of $\molsf{S}_1$ are thus encoded in the amplitude of $V(t)$ while the
states of $\molsf{S}_2$ are encoded in the mean level
of $V$. The output $X_2$ reads out the mean of $V(t)$ and hence the
state of its input $S_2$ by simply time-integrating the oscillations of
$V(t)$. The output $X_1$ reads out the amplitude of the oscillations
in $V(t)$ and hence the state of $S_1$ via an adaptive
network, indicated by the dashed circle. We now describe the coding and
decoding steps in more detail.
\subsubsection{Encoding}
In the encoding step of the motif, the two signals $S_1$, $S_2$ are combined
into the shared pathway. The signals are modeled as a
sinusoidal function
\begin{align}
\elabel{sig}
S\br{t}=\mu\br{1+A\sin\br{2\pi \frac{t}{T}}}.
\end{align}
Here $\mu$ is the signal mean, $A$ is the signal amplitude, and $T$ is the oscillation period. We
assume that the signals are deterministic and discuss the effects of noise
later. As discussed above, $S_1$ is an
oscillatory signal, with kinetic parameters $A_1, T_1$ and constant $\mu_1$.
$S_2$ is constant, $A_2=0$, and the concentration level $\mu_2$ carries the
information (i.e. sets the state) in the signal. In recent years it has been shown
that biochemical systems can tune separately the amplitude and frequency of an
oscillatory signal \cite{Tsai2008a, Stricker2008}.
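A minimal sketch of the two inputs of \eref{sig}, with illustrative parameter values (the numbers below are not tied to any specific biochemical system):

```python
# Two input signals: S1 oscillates in time, S2 is constant (A2 = 0).
import math

def signal(t, mu, A, T):
    """S(t) = mu * (1 + A * sin(2*pi*t/T)), cf. eq. (1)."""
    return mu * (1.0 + A * math.sin(2.0 * math.pi * t / T))

mu1, A1, T1 = 100.0, 0.5, 10.0   # oscillatory input S1
mu2 = 250.0                      # constant input S2

S1 = lambda t: signal(t, mu1, A1, T1)
S2 = lambda t: signal(t, mu2, 0.0, 1.0)   # A2 = 0, so T2 is irrelevant

# The time average of S1 over one period equals mu1, so the mean of the
# summed signal S1 + S2 depends only on mu1 + mu2, not on A1 or T1.
```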
The simplest shared pathway is a single component, $\molsf{V}$,
which could be a receptor on the cell or nuclear membrane, but could
also be an intracellular enzyme or a gene regulatory protein. We
imagine that each signal
is a kinase for
$\molsf{V}$, which can switch between an active (e.g. phosphorylated)
state ($\molsf{V}^{\molsf{P}}$) and an
inactive (e.g. unphosphorylated) state, such that
\begin{align}
\elabel{VPfull}
\diff{V^P}{t}=\frac{k_V\sqbr{\sum_i
S_i\br{t}}\br{V_T-V^P}}{K_V+\br{V_T-V^P}}-m_VE_T\frac{V^P}{M_V+V^P},
\end{align}
where we sum over the two signals $S_1\br{t}$ and $S_2\br{t}=\mu_2$. The dephosphorylation
is mediated by a phosphatase that has a constant copy number $E_T$. In
\eref{VPfull} we assume Michaelis-Menten dynamics for $\molsf{V}$ (see
{\em SI} for more details).
The Michaelis-Menten kinetics of the activation of $\molsf{V}$ could
distort the oscillatory signal of $S_1$. They could reduce the
amplitude of the oscillations or transform the sinusoidal signal into a
signal that effectively switches between two values. Such transformations potentially hamper
information transmission. We therefore require that the component $V$
accurately tracks the dynamics of the input signals.
It is well known that a linear transfer function between $S$ and $V$ does not
lead to a deformation of the dynamic behavior, but only to a rescaling of the
absolute levels (see {\em SI}). A linear transfer function can be
realized if the kinase acts in the saturated regime, while the
phosphatase is not saturated ($K_V\ll \br{V_T-V^P\br{t}}, M_V\gg
V^P\br{t}$), leading to
\begin{align}
\elabel{VPlin}
\diff{V^P}{t}=k_V\br{\sum_i S_i\br{t}}-m'_VV^P,
\end{align}
with $m'_V=m_VE_T/M_V$.
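The linear regime of \eref{VPlin} can be checked with a forward-Euler sketch (illustrative parameters): the steady-state mean of $V^P$ is $k_V(\mu_1+\mu_2)/m'_V$, independent of the amplitude and period of $S_1$.

```python
# Forward-Euler integration of dV^P/dt = k_V * (S1 + S2) - m'_V * V^P.
import math

k_V, m_V = 1.0, 2.0                       # k_V and m'_V (illustrative)
mu1, A1, T1, mu2 = 100.0, 0.5, 10.0, 250.0

def S_sum(t):
    """Summed input: oscillatory S1 plus constant S2."""
    return mu1 * (1.0 + A1 * math.sin(2.0 * math.pi * t / T1)) + mu2

VP, dt = 0.0, 1e-3
trace = []
for i in range(int(200.0 / dt)):          # integrate well past the transient
    t = i * dt
    VP += dt * (k_V * S_sum(t) - m_V * VP)
    if t > 150.0:
        trace.append(VP)

mean_VP = sum(trace) / len(trace)
# Expected steady-state mean: k_V * (mu1 + mu2) / m'_V = 175, regardless
# of A1 and T1 -- only the mean levels of the inputs set <V^P>.
```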
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c}
\figwithletter{}{0.6}{Fig1}
\end{tabular}
\end{center}
\capt{Fig1}{Schematic drawing of the multiplexing system. Two signals are
multiplexed. Signal $S_1$ oscillates in time while signal $S_2$ is constant. The
message of $S_1$ could either be encoded in the amplitude or the frequency of
the oscillations, but in this manuscript we focus on the former. The message of
$S_2$ is encoded in the concentration. The output or response of $S_1$ is
$X_1$ while the response of $S_2$ is $X_2$. Encircled is the
adaptive motif, used to readout the amplitude of the oscillations of $S_1$.
}
\end{figure*}
\subsubsection{Decoding $V^{P}$ to $X_1, X_2$}
The second part of the multiplexer is the decoding of the information
in $V^{\molsf{P}}$ into a functional output (see \fref{Fig1}). The signals that are
encoded in $V^{\molsf{P}}$ have to be decoded into two output
signals, the responses $X_1$ and $X_2$. We imagine that the cell should be able
to infer from
an instantaneous measurement of the output response
the state of the corresponding input signal.
Therefore, we take the outputs of the multiplexing motif
to be the concentration levels $X_1$ and $X_2$ of the output species
$\molsf{X}_1$ and $\molsf{X}_2$, respectively. Here $X_1$ is
the response of ${S}_1$, while ${X}_2$ is the response of
${S}_2$.
The response ${X}_2$ should be sensitive to the concentration of
${S}_2$, but be blind to any characteristics of
${S}_1$. In our simple model there is only one time-dependent
signal, namely $S_1$; $S_2$ is constant in time.
Since $\molsf{V}^{\molsf{P}}$ has a linear transfer function of the
signals (\eref{VPlin}), the average level of $V^{\molsf{P}}$,
$\avgin{V^P}$, is independent of both the amplitude $A_1$
and the period $T_1$ of the oscillations in $S_1$. $\avgin{V^P}$
does depend on the mean concentration level of the two signals, and since
$S_1$ has a constant mean, changes in $\avgin{V^P}$ reflect only a
change in the mean of $S_2$, $\mu_2$. As a result, a simple
linear time integration motif can be used as the final read out for
$S_2$. We therefore model $X_2$ as
\begin{align}
\elabel{x2}
\diff{X_2}{t}=k_{X_2}V^P\br{t}-m_{X_2} X_2\br{t}.
\end{align}
Since \eref{x2} is linear, $\avgin{X_2}$ is a function of
$\avgin{V^P}$ only. Moreover, if the response time of $\molsf{X}_2$,
$\tau_{X_2}\br{=m^{-1}_{X_2}}$, is much longer than the oscillation
period $T_1$ of $S_1$, the effect of the oscillations on the
instantaneous concentration $X_2$ is integrated out. This is important
to reduce the variability in $\avgin{X_2}$ due to dynamics in the
system \cite{Tostevin2012}.
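The time-scale separation that underlies this readout can be illustrated with a minimal numerical sketch of \eref{x2}; all parameter values below are hypothetical and chosen for illustration only:

```python
import numpy as np

# Forward-Euler sketch of dX2/dt = k*V_P(t) - m*X2 (the linear readout).
# V_P oscillates with period T1 (set by S1) around a mean set by S2.
# Because the response time 1/m is much longer than T1, the
# oscillations are integrated out and <X2> tracks only the mean of V_P.
k, m = 1.0, 0.01             # production and degradation rates, 1/m >> T1
T1 = 10.0                    # oscillation period of S1
mean_VP, amp = 50.0, 20.0    # mean (carries S2) and amplitude (carries S1)

dt, steps = 0.01, 200_000    # integrate to t = 2000 = 20/m
t, X2 = 0.0, 0.0
for _ in range(steps):
    VP = mean_VP + amp * np.sin(2.0 * np.pi * t / T1)
    X2 += dt * (k * VP - m * X2)
    t += dt

print(X2)  # close to k*mean_VP/m = 5000, nearly independent of amp
```

Doubling `amp` leaves the final $X_2$ essentially unchanged, whereas doubling `mean_VP` doubles it; this is exactly the separation the linear readout provides.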
\begin{figure*}[h]
\begin{center}
\figwithletter{a)}{0.9}{Fig2a}
\figwithletter{b)}{0.9}{Fig2b}
\end{center}
\capt{Fig2}{The gain $g^2_{W^P}\br{\omega}$ for channel 1 for different
parameter sets. The circles indicate the response times
$\tau_i$. \textbf{a)} The effect of changing the timescale
$k_R=m_R$. \textbf{b)} The effect of changing the signal $S_2$ in
the other channel; it is seen that the gain $g^2_{W^P}\br{\omega}$
of channel 1 depends on $S_2$, which may lead to
crosstalk. Parameters: Panel \textbf{a)}: $\mu_2=500$,
$k_R=m_R$. Panel \textbf{b)}: $\mu_2=200$ and $\mu_2=800$, $k_R=1$,
$m_R=1$. Panels \textbf{a,b)}: $\mu_1=0$, $k_V=0.1$, $K_V=10^{-4}V_T$,
$m_VE_T=600$, $M_V=5V_T$, $V_T=1000$, $k_W=1$, $K_W=M_W=W_T/4$,
$W_T=1000$; $m_W$ sets the timescale.
}
\end{figure*}
For $\molsf{X}_1$ a simple time-integration scheme does not work. The
information that has to be mapped onto the output concentration $X_1$
is the amplitude of $\molsf{S}_1$, which is propagated to $V^P$. The
output $X_1$ should therefore depend on the amplitude of the
oscillations of $V^P$, but not on its mean $\avgin{V^P}$, since the
mean represents the information in $\molsf{S}_2$. These requirements
mean that the frequency-dependent gain of the network from $V$ to
$X_1$ should have a band-pass structure. The frequency-dependent gain
shows how the amplification of the input signal depends on the
frequency of the signal \cite{DeRonde2010} (\fref{Fig2}). Due to the finite lifetime
of the molecules, the frequency-dependent gain of any biochemical
network inevitably reaches zero at high frequencies. Here we
require that at the other end of the frequency spectrum, in the
zero-frequency limit, the gain should also be small: Changes in the
constant level of $V^P$, which result from changes in $S_2$, should
not be amplified because $X_1$ should not respond to changes in
$S_2$. Indeed, only at intermediate frequencies should the gain be
large: Changes in the amplitude of the oscillations of $V^P$ at the
frequency of the input $S_1$ must be strongly amplified, because these
changes correspond to changes in $S_1$ to which $X_1$ must
respond. The network between $V$ and $X_1$ should thus have a
frequency-transmission band that matches the frequency of $S_1$. The
output $X_1$ will then strongly respond to $S_1$ but not to $S_2$.
A common biochemical motif with a frequency band-pass filter is an
adaptive motif \cite{Tostevin2010a}. An adaptive system does not
respond to very slowly varying signals, essentially because it then
already adapts to the changing signal before a response is
generated. Indeed, the key feature of an adaptive system is that the
steady-state output is independent of the magnitude of a constant
input, meaning that
\begin{align}
\avg{W^P}=f\br{\acco{\text{all parameters}}\setminus\acco{\avg{V^P}}}.
\end{align}
This appears to be precisely what is required, because it means
that when
$\molsf{S}_1$ is absent and $S_2$ is changed, the output $X_1$
remains constant, as it should --- only $X_2$ should change when
$S_2$, a signal constant in time, is changed. On the other hand,
while the steady-state output of an adaptive network is insensitive to
variations in constant inputs, it is sensitive to dynamical
inputs. This observation is well known; it is, e.g., the basis for
the chemotactic behavior of {\em E. coli}, where the system responds
to a change in the input concentration, and the strength of the
response depends on the magnitude of the change in input
concentration. This is another characteristic that is required,
because it allows the magnitude of the response $X_1$ to depend on the
amplitude $A_1$ of the oscillations in $S_1$, thus enabling a mapping
from $A_1$ to $X_1$. The important question that remains is
whether the magnitude of the response solely depends on the change
in the input concentration, which reflects $S_1$, or whether it also
depends on the absolute value of the input concentration, which
reflects $S_2$. In the following, we will show that it may depend on
both, which would introduce crosstalk between the two signals.
Two common ways to construct an adaptive motif are known
\cite{Ma2009a}, the negative feedback motif and the incoherent
feed-forward motif. In this multiplexing system we use the latter. In the incoherent feed-forward
motif the input signal, in our case $V^P$, stimulates two downstream components
$\molsf{R,W}$, see \fref{Fig1}. One of the downstream components, $\molsf{R}$, is
also a signal for the other downstream component, $\molsf{W}$.
Importantly, the regulatory effect of the direct pathway
($V^P\to W$) is opposite to the effect of the indirect
pathway ($V^P\to R\to W$). As a result, if
$\molsf{V}^P$ activates $\molsf{W}$, this activation is counteracted by
the regulation of $\molsf{W}$ through $\molsf{R}$. We thus obtain
\begin{align}
\diff{R}{t}&=k_R V^P-m_R R, \elabel{eqr}\\
\diff{W}{t}&=k_{W} \frac{V^P\br{W_T-W^P}}{K_{W}+\br{W_T-W^P}}-m_{W} \frac{RW^P}{M_{W}+W^P}\elabel{eqy}.
\end{align}
This motif is adaptive, which can be shown by setting the
time-derivatives in \eref{eqr} and \eref{eqy} to zero. Substituting
the steady state $\avg{R}=k_R\avg{V^P}/m_R$ into \eref{eqy} and
dividing out the common factor $\avg{V^P}$ yields
\begin{align}
\elabel{steady_state}
1&=\frac{\br{k_{W}\br{W_T-\avg{W^P}}}\br{m_R\br{\avg{W^P}+M_{W}}}}{\br{K_{W}+\br{W_T-\avg{W^P}}}\br{m_{W}k_R\avg{W^P}}}.
\end{align}
Although the full expression for $\avgin{W^P}$ is unwieldy to present,
\eref{steady_state} shows that it does not depend on the magnitude of a constant input $\avgin{V^P}$,
which means that the network is indeed adaptive.
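This adaptation can also be checked numerically. The sketch below integrates \eref{eqy} for two different constant inputs, with hypothetical parameters chosen so that the fixed point can be verified by hand (for $k_R=m_R=k_W=m_W=1$ and $K_W=M_W=W_T/4$, \eref{steady_state} gives $\avgin{W^P}=W_T/2$):

```python
# Forward-Euler sketch of the incoherent feed-forward motif above.
# For a constant input VP, R relaxes to k_R*VP/m_R; we start it there,
# so only W evolves. All parameter values are hypothetical.
k_R, m_R = 1.0, 1.0
k_W, m_W = 1.0, 1.0
W_T = 1000.0
K_W, M_W = W_T / 4.0, W_T / 4.0

def relax(VP, W0=300.0, t_end=2000.0, dt=0.01):
    R, W = k_R * VP / m_R, W0
    for _ in range(int(t_end / dt)):
        dW = (k_W * VP * (W_T - W) / (K_W + (W_T - W))
              - m_W * R * W / (M_W + W))
        W += dt * dW
    return W

W_low, W_high = relax(VP=10.0), relax(VP=100.0)
print(W_low, W_high)  # both close to W_T/2 = 500: the output adapts
```

A ten-fold change in the constant input leaves the steady-state output unchanged; only the transient differs.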
For a correct separation of the signals, the response $W$ should be insensitive
to the average level of $V^P$, $\avgin{V^P}$, since $\avgin{V^P}$ carries
information on $\molsf{S}_2$, and not $\molsf{S}_1$. Indeed, a
dependence of $\molsf{W}$ on $\avgin{V^P}$ and hence on $\molsf{S}_2$
necessarily leads to unwanted crosstalk between the two information channels.
While the adaptive property of the network ensures that $W^{{P}}$ is
insensitive to the mean of ${V}^{{P}}$ for a constant input (see
\eref{steady_state}), this is not necessarily the case for a dynamic
$V^P\br{t}$. Since \eref{eqy} is non-linear, the response ${W}$ is dependent on
the precise functional form of ${V}^{{P}}$, and, more importantly, will depend on
$\avgin{V^P}$. Crosstalk
may thus arise, which will be studied in more detail below.
The full expression of the frequency-dependent gain $g^2\br{\omega}$ is too
unwieldy to present here, but in simplified form we have
\begin{align}
\elabel{motif1_g2}
g^2_{W^P}\br{\omega}\propto \frac{\alpha
\omega^2}{\beta\br{\omega^2+\tau^{-2}_R}\br{\omega^2+\tau^{-2}_{W}}\br{
\omega^2+\tau^{-2}_V}},
\end{align}
where $\alpha$ and $\beta$ are proportionality constants and $\tau_i$ are the
response times of component $i$. For slowly varying signals
($\omega\to0$), the amplitude of the response is negligible due to the
$\omega^2$-term in the numerator of \eref{motif1_g2}, reflecting
the adaptive nature of the network. More generally, for $\omega\ll
\min\sqbr{\tau^{-1}_V, \tau^{-1}_R, \tau^{-1}_{W}}$ the gain scales with
$\omega^2$, while for very large $\omega$ it scales with $\omega^{-4}$. In the
intermediate regime for $\omega$, the scaling depends on the
precise response times. The response times follow from the diagonal
elements of the Jacobian of the linearized system
(\erefs{VPfull},\ref{eqn:eqr},\ref{eqn:eqy}),
\begin{align}
&\tau_{V}=\sqbr{\frac{m_VE_T
M_V}{\br{M_V+\avg{V^P}}^2}+\frac{k_VK_V\mu}{\br{V_T-\avg{V^P}+K_V}^2}}^{-1}
\elabel{tauv},\\
&\tau_{R}=m^{-1}_R\elabel{taur},\\
&\tau_{W}=\sqbr{\frac{m_{W}
M_{W}\avg{R}}{\br{M_{W}+\avg{W^P}}^2}+\frac{k_{W}K_{W}\avg{V^P}}{\br{W_T-\avg{
W^P}+K_{W}}^2}}^{-1}.\elabel{taux}
\end{align}
\eref{taur} gives the response time for a protein with a simple
birth-death reaction. The mathematical form of the response times,
$\tau_V$ and $\tau_W$, \eref{tauv} and \eref{taux}, resembles that of
a switching process with a forward and a backward step; their values
depend on the signal parameters. When the dynamics of $\molsf{V^P}$
operate in the linear regime (see \eref{VPlin}), $\tau_V$ simplifies
to $\tau_{V}\approx\br{m_VE_T/M_V}^{-1}$, which is just the inverse of the
linear decay rate of $\molsf{V}^{\molsf{P}}$. Importantly, while $\avgin{W^P}$ is
independent of the state $\mu_2$ of $S_2$, the response time $\tau_W$
and hence the gain $g^2\br{\omega}$
do depend on $\avgin{V^P}$ and thereby on $S_2$. This means that the
response of $X_1$ to $S_1$ will depend on $S_2$, generating crosstalk.
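The shape of the gain in \eref{motif1_g2} is easily probed numerically; in the sketch below the response times are hypothetical and well separated:

```python
import numpy as np

# Band-pass gain g^2(w) ~ w^2 / ((w^2+tau_R^-2)(w^2+tau_W^-2)(w^2+tau_V^-2)).
tau_V, tau_R, tau_W = 0.1, 10.0, 100.0   # V fastest, W slowest (hypothetical)

def gain2(w):
    return w**2 / ((w**2 + tau_R**-2)
                   * (w**2 + tau_W**-2)
                   * (w**2 + tau_V**-2))

w = np.logspace(-4.0, 3.0, 20_000)
w_peak = w[np.argmax(gain2(w))]
print(w_peak)  # peak lies between the two slowest rates 1/tau_W and 1/tau_R
```

At low frequencies the gain rises as $\omega^2$ (adaptation), at high frequencies it falls as $\omega^{-4}$, and the peak sits between the inverse of the two largest response times, as in \fref{Fig2}.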
The gain (\eref{motif1_g2}) is shown in \fref{Fig2}a,b for two
different parameter sets. The bandpass structure, with corresponding
resonance frequency (the peak in the gain) is observed. Further, with
circles, the response times $\tau_V$ (black open), $\tau_R$ (black
solid) and $\tau_W$ (gray open) are shown, which determine the
position of the peak in the gain; the peak occurs at a frequency in
between the two largest response times. In \fref{Fig2}a we observe
the influence of increasing $k_R,m_R$, which are taken to be
equal. For very slow changes in ${R}$, corresponding to $k_R, m_R$
being small, the network has a very large gain. Decreasing the
response time of ${R}$ considerably decreases the amplitude at the
resonance frequency. Faster tracking of ${V}^{{P}}$ by ${R}$ makes
the adaptation of the biochemical circuit very fast and as a result,
${W}^{{P}}$ hardly responds to changes in ${V}^{{P}}$. In
\fref{Fig2}b we observe the influence of changing the state $\mu_2$ of
$S_2$. The gain decreases for larger $\mu_2$, and the response time
$\tau_W$ increases. This may lead to crosstalk, since the mapping of
$A_1$ to $X_1$ will now depend on $S_2$.
Finally, we look at the last step in the motif, the conversion of the dynamic
response of the adaptive motif $W$ into ${X}_1$. The instantaneous concentration
$X_1$ should inform the system about the state of input
$\molsf{S}_1$. Simple time-integration of ${W}$, similar to the
response ${X}_2$ (\eref{x2}), is not sufficient. While time-integration by
itself is important to average over multiple oscillation cycles, it is not
sufficient because time-integration with a linear-transfer function does not
lead to a change in the response when the amplitude of the input is varied,
assuming that the oscillations are symmetric. Indeed, to respond to
different amplitudes, a non-linear transfer function is required:
\begin{align}
\elabel{eqx1}
\diff{X_1}{t}=k_{X_1}\frac{W^n}{W^n+K_{X_1}^n}-m_{X_1}X_1.
\end{align}
These Hill-type non-linear transfer functions are very common in biological
systems, for example in gene regulation by transcription factors, or protein
activation by multiple enzymes.
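A short numerical sketch of \eref{eqx1} illustrates why the Hill non-linearity is needed; the parameter values are hypothetical, with the mean of $W$ placed below the threshold $K_{X_1}$ so that the transfer function is locally convex:

```python
import numpy as np

# For tau_X1 >> T1, <X1> ~ (k/m) * <Hill(W(t))>, the Hill input averaged
# over one oscillation period. A linear transfer function would make this
# average independent of the amplitude; the convex Hill function does not.
k, m = 1.0, 0.01
K, n = 500.0, 4            # Hill threshold and coefficient (hypothetical)
mean_W = 300.0             # mean of W(t), below the threshold K

def mean_X1(amp, nsamp=100_000):
    phase = np.linspace(0.0, 2.0 * np.pi, nsamp, endpoint=False)
    W = mean_W + amp * np.sin(phase)
    hill = W**n / (W**n + K**n)
    return (k / m) * hill.mean()

x_small, x_large = mean_X1(amp=50.0), mean_X1(amp=250.0)
print(x_small, x_large)  # the mean output grows with the amplitude
```

The amplitude of the oscillations in $W$ is thus mapped onto the mean level of $X_1$, which is what the decoding of $S_1$ requires.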
\subsection{Multiplexing}
\label{sec:ch8:results}
Now that we have specified the model with its components, we
characterize its multiplexing capacity, using the formalism of
information theory \cite{Shannon1948}. We
define two measures: 1) $I_1\br{X_1;A_1}$, the mutual information
between the concentration $X_1$ and the amplitude $A_1$ of signal
$\molsf{S}_1$, and 2) $I_2\br{X_2;\mu_2}$, the mutual information between
the concentration $X_2$ and the concentration level $\mu_2$ of
$\molsf{S}_2$. The information capacity of the system is then defined
by the total information $I_{T}=I_1\br{X_1;A_1}+I_2\br{X_2;\mu_2}$
that is transmitted through the system. The mutual information in bits
shows how many different input states can be transmitted with
100\% fidelity. It does not necessarily reflect whether all
input signals are transmitted reliably. For example, increasing the number
of input states $N_A$ can increase the mutual information
$I_1\br{A_1;X_1}$ \cite{Shannon1948}, yet a specific output
concentration $X_1$ could be less informative about a specific input
amplitude $A_1$. To quantify the fidelity of signal transmission, we
normalize the mutual information by the information
entropies $H(A_1)$ and $H(\mu_2)$ of the respective inputs. We therefore define the
relative mutual information
\begin{align}
\elabel{I_R}
I_R\br{\br{A_1;X_1},\br{\mu_2;X_2}}&=\frac{I_1\br{X_1;A_1}}{H\br{A_1}}+\frac{
I_2\br{X_2;\mu_2}}{H\br{\mu_2}}\\
&=\frac{I_1\br{X_1;A_1}}{\log_2\sqbr{N_A}}+\frac{I_2\br{X_2;\mu_2}}{\log_2\sqbr{
N_\mu}}.
\end{align}
Note that $I_R\brin{\brin{A_1;X_1},\brin{\mu_2;X_2}}$ has a maximum
value of $2$, meaning that each channel $i=1,2$ transmits all its
messages $S_i\to X_i$ with $100\%$ fidelity.
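For discrete input states the relative mutual information of a single channel can be computed directly from the conditional output distribution; a minimal sketch with a uniform input distribution (the conditional probabilities below are hypothetical):

```python
import numpy as np

# p_xs[j, i] = P(output level j | input state i); inputs are uniform.
def relative_mi(p_xs):
    n_in = p_xs.shape[1]
    p_s = np.full(n_in, 1.0 / n_in)            # uniform P(s)
    p_joint = p_xs * p_s                       # P(x, s)
    p_x = p_joint.sum(axis=1, keepdims=True)   # marginal P(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p_joint > 0.0, p_joint / (p_x * p_s), 1.0)
    return np.sum(p_joint * np.log2(ratio)) / np.log2(n_in)  # I(S;X)/H(S)

# Noiseless channel: 4 inputs map onto 4 distinct output levels.
print(relative_mi(np.eye(4)))          # 1.0, i.e. 100% fidelity

# Crosstalk/noise merging two inputs onto one output level reduces I_R.
merged = np.array([[1.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
print(relative_mi(merged))             # 0.75, i.e. 1.5 of 2 bits
```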
The mutual information depends on the kinetic parameters of the
system, on the input distribution of the signal states, and on the
amount of noise in the system. In a previous study we have shown that
under biologically relevant conditions, a simple biochemical system
using only constant signals is capable of
simultaneously transmitting at least two bits of information
\cite{DeRonde2011}, meaning that at least two signals with two input
states can be transmitted with 100\% fidelity. Here we wondered
whether this information capacity can be increased. Therefore, we
study the system for increasing number of input states (increasing
$N_A$ for $S_1$ and $N_\mu$ for $S_2$), where we assume a uniform
distribution of the states for $\molsf{S}_1$, $A_1\in \sqbr{0:1}$, and
for $\molsf{S}_2$, $\mu_2 \in \sqbr{0:\mu_{\rm max}}$ (see
\eref{sig}). To obtain a lower bound on the information that can be
transmitted, we optimize the total mutual information
over a subset of the kinetic parameters,
where we constrain
the kinetic rates between $10^{-3}< k_i <10^3$, the
dissociation constants between $1<K_i<7.5\times 10^4$, the maximum
concentration level for $S_2$ between $10<\mu_{\rm max}<1000$, and the
oscillation period between $10<T<10000$.
We set the response times of $\molsf{X}_1, \molsf{X}_2$ to be much
longer than the oscillation period, so that the variability in $V$ and
$W$ due to the oscillations in $S_1$ is time-integrated; specifically,
$m_{X_1}=m_{X_2}=\br{NT_p}^{-1}{\rm s}^{-1}$, such that the output
averages over $N=10$ oscillations with period $T_p$. The noise
strength is calculated using the linear-noise approximation \cite{VanKampen2007}
while
assuming that the input signals are constant, of magnitude
$\mu_1,\mu_2$. The effects of the non-linear and oscillatory nature of
the network on the noise strength are thus not
taken into account. However, we do not expect that these two effects
qualitatively change the observations discussed below. To compute the noise
strength, we assume that the maximum copy numbers of $X_1$ and
$X_2$ are $1000$.
The optimization is performed using an evolutionary algorithm (see {\em SI}).
Before we discuss the information transmission capacity of our system,
we first show typical results for the time traces and input-output
relations as obtained by the evolutionary algorithm. \fref{Fig3}a
shows that the oscillations in $V_P$ are amplified by the adaptive
network to yield large amplitude oscillations in $W^P$. In contrast,
$X_1$ and $X_2$ only exhibit very weak oscillations due to their long
lifetime. \fref{Fig3}b shows that when $A_1$ is increased while
$\mu_2$ is kept constant, the average of $V^P$, which is set by
$\mu_2$, is indeed constant. As a result, $\avg{X_2}$ is constant, as
it should be (because $\mu_2$ is constant). In contrast, $X_1$
increases with $A_1$. This is because the amplitude of the
oscillations in $W^P$ increases with $A_1$, which is picked up by the
non-linear transfer function from $W^P$ to $X_1$. In addition, $X_1$
increases because the mean of $W^P$ itself increases, due to the
non-linearity of the network; this further helps to
increase $X_1$ with $A_1$. \fref{Fig3}c shows that when $\mu_2$ is
increased while $A_1$ is kept constant, $\avg{V^P}$ and hence $X_2$---the response of $S_2$---increases. Importantly, while the mean of the
buffer node $R$ of the adaptive network increases with $\avg{V^P}$,
the mean of the output of this network, $W^P$, is almost
constant. Consequently, $X_1$ is nearly constant, as it should be,
because $X_1$ should reflect the value of $A_1$, which is kept
constant. These two panels thus show that this system can multiplex
two signals: it can transmit multiple states of two signals through
one and the same signaling pathway, and yet each output responds very
specifically to changes in its corresponding input. This is the
central result of our manuscript.
Interestingly, \fref{Fig3}c shows a (very) weak dependence of $X_1$ on
$S_2=\mu_2$, which will introduce crosstalk in the system. It is important
to realize that this will
reduce information transmission, even in a deterministic noiseless
system. The mechanism by which crosstalk can reduce information transmission
is illustrated in \fref{Fig4}. Maximal information transmission
between $S_1$ and $X_1$ occurs if a given amplitude $A_1$ (independent of
$\mu_2$) uniquely
maps to a specific output $X_1$, and similarly for $S_2$ and $X_2$. In
a deterministic system, every combination of inputs $(S_1,S_2)$ maps
onto a unique combination of outputs $(X_1,X_2)$. We aim to multiplex
the signals such that $X_1$ should be a function of $S_1$ (i.e. $A_1$)
only, while $X_2$ should be a function of $S_2$ ($\mu_2$) only. That
is, the mapping from $S_i$ to $X_i$ should be independent of the state
of the other signal $S_{j\neq i}$. However, crosstalk causes the
mapping from $S_i$ to $X_i$ to depend on the state of the other
channel. This dependence reduces information transmission, because a
given concentration of $X_1$ can now correspond to multiple values of
$A_1$, as illustrated in \fref{Fig4}a. Crosstalk can thus reduce
information transmission even in a deterministic system without
biochemical noise.
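The counting argument of \fref{Fig4}a can be made concrete with a small sketch: each input state $A_1$ occupies a band of outputs $[X_{\min}(A_1), X_{\max}(A_1)]$ (one value of $X_1$ per state of $\mu_2$), and only states whose bands do not overlap are transmitted with certainty. The envelopes below are hypothetical:

```python
import numpy as np

# Hypothetical monotone envelopes of <X1>(A1): the band width models the
# spread induced by crosstalk from the range of mu_2 values.
A1 = np.linspace(0.0, 1.0, 9)        # 9 candidate input states
x_min = 100.0 * A1**2                # lower envelope (one extreme of mu_2)
x_max = x_min + 8.0                  # upper envelope (the other extreme)

# Greedy scan over the ordered states: accept a state only if its band
# starts above the band of the last accepted state.
count, last_max = 0, -np.inf
for lo, hi in zip(x_min, x_max):
    if lo > last_max:
        count += 1
        last_max = hi
print(count)  # number of states that remain uniquely decodable
```

Widening the band (stronger crosstalk, or added noise as in \fref{Fig4}b) lowers the count, while a steeper input-output relation raises it.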
\begin{figure*}[t]
\begin{center}
\begin{tabular}{ccc}
\figwithletter{a)}{0.5}{Fig3a}
&
\figwithletter{b)}{0.5}{Fig3b}
&
\figwithletter{c)}{0.5}{Fig3c}
\end{tabular}
\end{center}
\capt{Fig3}{Typical time traces and input-output curves as obtained by the
evolutionary algorithm. Shown are results for a system with
$N_A=N_{\mu}=16$. \textbf{a)} Time traces of $V^P$, $R$, $W^P$,
$X_1$ and $X_2$ for $A_1=0.5$ and $\mu_2=275$. \textbf{b)}
$\avgin{V^P}$, $\avgin{R}$, $\avgin{W^P}$, $\avgin{X_1}$, and
$\avgin{X_2}$ as a function of $A_1$, keeping $\mu_2=275$
constant. \textbf{c)} $\avgin{V^P}$, $\avgin{R}$, $\avgin{W^P}$,
$\avgin{X_1}$, and $\avgin{X_2}$ as a function of $\mu_2$, keeping
$A_1=0.5$. The figure shows that the system can multiplex: $X_1$ is
sensitive to $S_1=A_1$ (panel b) but not $S_2=\mu_2$ (panel c),
while $X_2$ is sensitive to $S_2=\mu_2$ (panel c) but not $S_1=A_1$ (panel
b). The time traces in panel a correspond to the points in panels b
and c that are indicated by the dashed lines. All panels correspond
to the point in \fref{Fig5}c.
}
\end{figure*}
It is of interest to quantify the amount of information that can be
transmitted in the presence of crosstalk in a deterministic, noiseless
system. Via the procedure described in the {\em SI}, we compute the
maximal mutual information for the two channels, assuming that we have
a uniform distribution of input states for each channel, with $A_1\in
\sqbr{0:1}$ and
$\mu_2 \in \sqbr{0:\mu_{\rm max}}$. We find that for channel $2$,
the mutual information is given by the entropy of the input
distribution, which means that the number of signals that can be
transmitted with 100\% fidelity through that channel is just the
total number of input signals for that channel. This is because signal
transmission through channel $2$ is hardly affected by crosstalk from
the other channel. Below we will see and explain that this observation
also holds in the presence of biochemical noise. For signal
transmission through channel $1$, however, the situation is markedly
different. The maximum amount of information that can be transmitted
through that channel is limited to about 4 bits. This means that up to
$2^4$ signals can be transmitted with 100\% fidelity; in this regime,
the input signal $S_1$ can be uniquely
inferred from the output signal $X_1$. Increasing the number of input
signals beyond $2^4$, however, does
not increase the amount of information that is transmitted through
that channel; more signals will be transmitted, but, due to the
crosstalk from the other channel, each signal will
be transmitted less reliably (see \fref{Fig4} and {\em SI}).
We will now quantify how many messages can be transmitted reliably in
the presence of not only crosstalk, but also biochemical noise.
The results of the optimization of the mutual
information using the evolutionary algorithm are shown in \fref{Fig5}. The left panel shows the relative mutual information
for channel $1$, the middle panel for channel $2$, and the right panel
shows the total relative mutual information (\eref{I_R}). Clearly, biochemical noise affects information
transmission through the two respective channels differently.
Firstly, we see that the fidelity of signal transmission through channel $2$ is
effectively independent of the number of states $N_A$ that are transmitted
through channel $1$, even in the presence of biochemical noise (\fref{Fig5}b).
This means that channel $2$ is essentially insensitive to crosstalk
from channel $1$. This is because $X_2$ time-integrates the
sinusoidal $V^P(t)$ via a linear transfer function---the output $X_2$ is thus
sensitive to the mean of $V^P$ (set by $S_2$), but not to the amplitude
of $V^P(t)$ (set by $S_1$). We also observe that even in the presence of noise, the relative
information stays close to 100\% when $N_{\mu}$ is below $3$
bits. Channel $2$ is thus fairly resilient to biochemical noise, which
can be understood by noting that a linear transfer function (from
$V^P$ to $X_2$) allows for an optimal separation of the $N_{\mu}$ input
states in phase space \cite{Tkacik2009,Tkacik2011,Tkacik2012}.
The left channel, $\molsf{S}_1 \to \molsf{X}_1$, is more susceptible
to noise (\fref{Fig5}a) and to crosstalk from the other channel,
$\molsf{S}_2$. The susceptibility to noise can be seen for $N_\mu=1
{\rm bit}=2$ states: the relative information decreases as $N_A$
increases. This sensitivity to noise becomes more pronounced as
$N_\mu$ increases, an effect that is due to the crosstalk from the
other channel. A larger $N_\mu$ reduces the accessible phase space for
channel $1$---it reduces the volume of state space that allows for a
unique mapping from $S_1$ to $X_1$. As a result, a
small noise source is more likely to cause a reduction in
$I_R(S_1;X_1)$. How crosstalk and noise together reduce information
transmission is further elucidated in \fref{Fig4}c. Remarkably, even in
the presence of noise, maximal relative information is obtained for
$N_A=N_\mu=4$ ($=2$ bits) (\fref{Fig5}c), showing that $4$ input
states can be transmitted for each channel simultaneously without loss
of information.
\begin{figure*}[t]
\begin{center}
\begin{tabular}{ccc}
\figwithletter{a)}{0.58}{Fig4a_cartoon_X1_crosstalk}&
\figwithletter{b)}{0.58}{Fig4b_cartoon_X1_noise}&
\figwithletter{c)}{0.58}{Fig4c_X1_noise}
\end{tabular}
\end{center}
\capt{Fig4}{The influence of noise and crosstalk on information
transmission in pathway $S_1 \to X_1$. \textbf{a)} Schematic: Crosstalk
reduces the amount of information that can be transmitted. For every
$A_1$ multiple values of $\avgin{X_1}$ are obtained, each
corresponding to a different value of $\mu_2$. The dark red
corresponds to the maximum value of $\avg{X_1}$ for each $A_1$,
while the light red line denotes the minimum value. The black line
in between the red lines visualizes the range for which a specific
$\avgin{X_1}$ uniquely maps to a single input amplitude
$A_1$. Crosstalk from the $\molsf{S}_2\to\molsf{X}_2$ channel thus
limits the number of states, and hence the amount of information,
that can be transmitted through channel $1$. \textbf{b)} Schematic:
Also noise
reduces the number of input states that can be resolved. Shown is
the mean response curve $\avgin{X_1}(A_1)$ together with the noise
in $X_1$. Dotted lines give the minimum and maximum values of $X_1$
for each amplitude. Since for each $A_1$ a larger range of $X_1$
values is obtained, fewer states $A_1$ can be uniquely encoded in the
phase space. This is reflected in the width of the boxes; indeed,
here only $5$ input states can be transmitted with absolute
reliability. \textbf{c)} Combined effect of noise and crosstalk on
information transmission for a system with $N_A=N_{\mu}=8$, as
obtained from the evolutionary optimization algorithm; the results
correspond to the black dot in \fref{Fig5}c. Both the noise and the
crosstalk reduce the number of possible input states that can be
transmitted. Solid lines give the deterministic dose-response curve,
while dashed lines correspond to a network with noise. Dark red
lines indicate the maximum of $\avgin{X_1}$ for a specific $A_1$
over the range of possible values of $\mu_2$, while red lines give
the minimum value. Because for each $A_1$ a range of $\avgin{X_1}$
values is obtained, the number of states $A_1$ that can be uniquely
encoded in the phase space is limited. This is reflected in the
increase in the width of the boxes; indeed, here only $7$ input
states can be transmitted with absolute reliability.
}
\end{figure*}
\subsection{Experimental observations}
Here we connect our work to two biological systems. The first system is the
p53 DNA damage response system. The p53 protein is a cellular signal for
DNA-damage. Different forms of DNA damage exist and they lead to different
temporal profiles of the p53 concentration. Double-stranded breaks cause
oscillations in the p53 concentration, while single-stranded damage leads to a
sustained p53 response \cite{Fei2003, Sengupta2005,Batchelor2011}. Compared to
our simple multiplexing motif, the encoding scheme in this system is more
involved. In our system two external signals activate the shared component
$\molsf{V}$. In the p53 system, p53 itself is $\molsf{V}$, but
interestingly, negative (indirect) autoregulation of p53 is required to obtain
sustained oscillations.
Although the encoding structure is different, the main result is that the
system is able to encode two different signals into different
temporal profiles simultaneously; depending on the type of damage, a
constant profile, an oscillatory profile, or both are present. These
two signals could therefore be transmitted simultaneously due to the
difference in their temporal profiles.
For the p53 system the input signals are binary, i.e., there either is DNA
damage or there is not, although some experiments suggest that the amount
of damage could also be transmitted \cite{Lahav2004}. The maximum information
that can be transmitted following our simplified model is much
larger than that required for two binary signals. A mathematical model, based
upon experimental observations, shows that the encoding step
creates a temporal profile for p53 that could be decoded by our suggested
decoding module (not shown).
Another system of interest is the MAPK (or RAF-MEK-ERK) signaling
cascade. The final output of this cascade is the protein ERK, which
shuttles between the cytoplasm and the nucleus. ERK is regulated by
many different incoming signals of which EGF, NGF and HRG are well
known \cite{Shaul2007}. The temporal profile of ERK depends on the
specific input that is present. NGF and HRG lead to a sustained ERK
level \cite{Sasagawa2005}, while EGF leads to a transient or even
oscillatory profile of the ERK level \cite{Kholodenko2000,
Sasagawa2005, Shankaran2009}. In comparison with our model ERK would
be the shared component $\molsf{V}$. Experiments show that
oscillations in the ERK concentration can arise due to intrinsic
dynamics of the system \cite{Shankaran2009}. However, these
oscillations could be amplified by, or even arise because of,
oscillations in the signal EGF, especially since, to our knowledge, it
is unclear what the temporal behavior of EGF is under physiological
conditions.
For both experimental systems, we have only described the encoding step. In
both cases, two signals are encoded in a shared component
$\molsf{V}$, where one signal leads to a constant response, while the other
signal creates oscillations. Both p53 and ERK are transcription
factors for many downstream genes \cite{VonKriegsheim2009, Wei2006a}. For the
decoding of the constant signal, only a simple birth-death process
driven by $\molsf{V}$ would be required. Many genes are regulated in
this way \cite{Alon:2007tz}.
The decoding of the oscillatory signal requires an adaptive motif.
Although adaptive motifs are common in biological processes \cite{Alon:2007tz}, it is unclear
whether downstream of either p53 or ERK an adaptive motif is present, which
would complete our suggested multiplexing motif. As such, our study
should be regarded as a proof-of-principle demonstration that
biochemical networks can multiplex oscillatory signals.
\begin{figure*}[!ht]
\begin{center}
\begin{tabular}{c|c|cl}
$I_R\br{A_1,X_1}$ & $I_R\br{\mu_2,X_2}$ &
$I_R\br{\br{A_1,X_1},\br{\mu_2,X_2}}$&\\
\hline
\figwithletter{a)}{0.5}
{Fig5a}
&
\figwithletter{b)}{0.5}
{Fig5b}
&
\figwithletter{c)}{0.5}
{Fig5c}&
\end{tabular}
\end{center}
\capt{Fig5}{The transmitted relative information $I_R$ (\eref{I_R}) as
function of the number of input states $N_{A}, N_{\mu}$, where $2$
bits correspond to $2^2=4$ input states. Results are shown for a
stochastic system with $X_T=1000$. In panels \textbf{a,b}
100\% corresponds to $I_R=1$, while in \textbf{c} 100\%
corresponds to $I_R=2$. \textbf{a)} the relative mutual information
$I_R\br{A_1,X_1}$ for the $\molsf{S}_1\to\molsf{X}_1$ channel; the total
mutual information is
obtained by multiplying $I_R$ with $\log_2\brin{N_A}$, the
horizontal axis. Both the decrease in $I_R\brin{A_1,X_1}$ as a
function of $N_A$ due to the presence of biochemical noise and the decrease in
$I_R\br{A_1,X_1}$ as a
function of $N_\mu$ due to the presence of
crosstalk is observed. \textbf{b)} the relative mutual
information $I_R\br{\mu_2,X_2}$ for the $\molsf{S}_2\to\molsf{X}_2$
channel. The total mutual
information is obtained by multiplying $I_R$ with
$\log_2\brin{N_\mu}$, the vertical axis. The effect of noise is relatively
small and crosstalk from $S_1$ is hardly
present. \textbf{c)} the
relative information of the total network
$I_R\br{\br{A_1,X_1},\br{\mu_2,X_2}}=I_R\br{A_1,X_1}+I_R\br{\mu_2,X_2}$. The
dot corresponds to the time traces in \fref{Fig3}.
All results are obtained
through numerical optimization (see {\em SI}).
}
\end{figure*}
\section{Discussion}
We have presented a scheme for multiplexing two biochemical
signals. The premise of the proposal is that the two signals have to
be transmitted, not integrated. Indeed, the central hypothesis is that
$X_1$ should only respond to $S_1$ and $X_2$ only to
$S_2$. Information transmission is then maximized when the crosstalk
between the different channels is minimized.
The model discussed here consists of elementary motifs, and can
simultaneously transmit two signals reliably. One of these signals is
constant in time,
and its corresponding information is encoded in its concentration level, while
the other signal is dynamic, and its information is encoded in the dynamical
properties, but not in its average concentration level. The decoding of the
constant signals is performed by a time-integration motif, while the decoding
of the oscillatory signal requires a frequency-sensitive motif, for example an
adaptive motif.
The main problem in multiplexing biochemical signals is crosstalk
between the two signals. In this system the signals are encoded based
upon their dynamical profile---${S}_1$ is oscillatory and ${S}_2$ is
constant in time. The decoding module for the oscillatory signal, an
adaptive motif, is non-linear. Therefore, this motif is
sensitive not only to the temporal properties like the amplitude, but also
to the mean or average of its input. This inevitably leads
to crosstalk between channel $1$ and channel $2$, reducing information
transmission.
Remarkably, the system is capable of transmitting over $3$ bits of
information through each channel with 100\% fidelity. In the presence
of noise the information transmission decreases, but even with
considerable noise levels in the biologically relevant regime, more
than $2$ bits of information can be transmitted through each channel
simultaneously; this information transmission capacity is comparable
to what has been measured recently in the context of NF-$\kappa$B signaling \cite{Cheong2011}. To transmit signals without errors it is preferable to
send most information using channel $2$ and a smaller number of states
through channel $1$. The reason for this is twofold. First, channel $2$ is
less noisy since the number of components is smaller; secondly,
channel $1$ is corrupted by crosstalk from channel $2$, leading to
overlaps in the state space of $X_1$ as a function of $A_1$ (see
\fref{Fig4}). Nonetheless, the two channels can reliably transmit 4
states in
the presence of noise. This is a considerable increase in
the information transmission compared to a system where both signals
are constant in time \cite{DeRonde2011}---this could transmit two
binary signals with absolute fidelity. This indicates that oscillatory
signals could significantly enhance the information transmission
capacity of biochemical systems. Importantly, while we have optimized
the parameters of our model system using an evolutionary algorithm, it
is conceivable that other architectures than those studied here allow
for larger information transmission. Indeed, the results presented
here provide a lower bound on information transmission.
In this system we have assumed that the amplitude of the oscillatory
signal is the information carrier of that signal. The same analysis
could be performed for an oscillatory signal at constant amplitude but
with different frequencies. Qualitatively, the results will be
similar. The dependence of the gain on the frequency
means that the amplitude of the output varies with the frequency of
the input (see
\fref{Fig2}). The amplitude of the output thus characterizes the
signal frequency. However, an intrinsic redundancy is present in using
the frequency as the information carrier, which can be understood from
the symmetry of the gain (see \fref{Fig2}). The response of the system
is equal for frequencies that are positioned symmetrically with
respect to the resonance frequency. As a result, for any given output,
there are always two possible input frequencies, and without
additional information, the cell cannot resolve which of the two
frequencies is present. Of course, one way to avoid this, would be to
use only a part of the gain, in which the gain increases monotonically
with frequency.
In this study we have assumed that the input signals are deterministic.
Results are obtained from deterministic simulations, with noise
added via a solution of the linear-noise approximation assuming
non-oscillatory inputs. The effect of noise is a reduction of
the information transmission. However, the effect of noise can always be
counteracted by increasing the copy number. At the cost of producing
and maintaining more proteins, similar results can therefore be obtained
\cite{DeRonde2011}. The effect of oscillations on the
variability of the output is small since the response times of $\molsf{X}_1$
and $\molsf{X}_2$ are much longer than the oscillation period. Slower
responding outputs would time-average the oscillation cycles even more, reducing
the variability in the response further.
Transmitting information via oscillatory signals has many advantages.
Oscillatory signals minimize the prolonged exposure to high levels of
the signal, which can be toxic for cells, as has been argued for
calcium oscillations \cite{Trump1995}. In systems with cooperativity
\cite{Gall2000b}, an oscillating signal effectively reduces the signal
threshold for response activation. Pulsed signals also provide a way
of controlling the relative expression of different genes
\cite{Cai2008}. Encoding of stimuli into oscillatory signals can
reduce the impact of noise in the input signal and during signal
propagation \cite{Rapp1981}. Frequency encoded signals can be decoded
more reliably than constant signals \cite{Tostevin2012}.
Here we show that information can be encoded in the amplitude or
frequency of oscillatory signals, which are then decoded using a
non-linear integration motif. We also discussed two biological
systems that may have implemented this multiplexing strategy. The idea
to use the temporal kinetics as the information carrier in a signal
has been studied in a slightly different context, where the dose
information is encoded in the duration of an intermediate component,
which in turn is time-integrated by a downstream component
\cite{Behar2008}. Here, we show that encoding
signals into the temporal dynamics of a signaling pathway allows for
multiplexing, making it possible to simultaneously transmit multiple
input signals through a common network with high fidelity. It is
intriguing that systems with a bow-tie structure, such as calcium and
NF-$\kappa$B \cite{Cheong2011}, tend to transmit information via
oscillatory signals.
\section{Materials and Methods}
The model is based on mean-field chemical rate equations or the
linear-noise approximation \cite{Elf2003}. For details see the
Supporting Information.
\begin{acknowledgments}
We thank Jos\'{e} Alvarado for a critical reading of the manuscript.
\end{acknowledgments}
\clearpage
\newpage
\begin{center}
{\bf \large Supplementary information: Multiplexing oscillatory biochemical
signals}\\[0.2cm]
Wiet de Ronde and Pieter Rein ten Wolde
\end{center}
\section{General definitions}
We use the following two general definitions for the mean and the maximum of a
specific component $Z$, where $T_p$ is the period of the input signal:
\begin{align}
\avg{Z}&=\frac{1}{T_p}\int_t^{t+T_p} Z\br{t'}dt',\\
Z_{\rm max}&=\sup_{T_p} Z\br{t}.
\end{align}
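For a signal sampled on a uniform time grid, these two definitions can be evaluated numerically; the following is a minimal sketch, using a hypothetical sinusoidal test signal (not a parameter set from this work):

```python
import numpy as np

# Numerical sketch of the two definitions above for a signal Z(t) sampled
# on a uniform time grid: the period average <Z> and the maximum Z_max.
# The sinusoidal test signal is a hypothetical example, not from the paper.

def period_mean(t, z, t0, T_p):
    """<Z> = (1/T_p) * integral over [t0, t0+T_p] of Z(t') dt'; on a
    uniform grid the time average reduces to the sample mean over one period."""
    mask = (t >= t0) & (t < t0 + T_p)
    return z[mask].mean()

def period_max(t, z, t0, T_p):
    """Z_max = sup of Z(t) over one period."""
    mask = (t >= t0) & (t < t0 + T_p)
    return z[mask].max()

T_p = 100.0
t = np.linspace(0.0, 200.0, 20000, endpoint=False)
z = 50.0 + 10.0 * np.sin(2.0 * np.pi * t / T_p)   # mean 50, amplitude 10

print(period_mean(t, z, 0.0, T_p))   # ~50
print(period_max(t, z, 0.0, T_p))    # ~60
```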
\subsection{Encoding}
\subsubsection{\slabel{MM_approx}MM-approximation}
In the derivation of Eq.~3 of the main text, we have assumed, as is
commonly done, Michaelis-Menten (MM) kinetics. However, the
MM-approximation may not hold for a dynamical system
\cite{Segel1988a,RamiTzafriri2007a}, since the MM-approach is a coarse
graining of the full mass-action kinetics of the system. Therefore we
numerically evaluate whether the MM-approximation is valid for our
network. The full set of reactions is (see Eq.~2 of the main text)
\begin{align}
\elabel{mass_act}
\molsf{S} + \molsf{V} &\rates{k_1}{k_{-1}} \molsf{SV}, \nonumber\\
\molsf{SV} &\torate{k_2} \molsf{S} + \molsf{V}^\molsf{P},\nonumber\\
\molsf{V}^\molsf{P} + \molsf{E} &\rates{m_1}{m_{-1}}
\molsf{V}^\molsf{P}\molsf{E},\nonumber\\
\molsf{V}^\molsf{P}\molsf{E} &\torate{m_2} \molsf{V}+\molsf{E}.
\end{align}
The quasi-steady-state approximation in MM-kinetics assumes that the
concentration of the complex of enzyme and substrate is in equilibrium and does
not change (or changes very slowly) with time, leading to
\begin{align}
\diff{SV}{t}=0,\,\diff{V^PE}{t}=0.
\end{align}
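As a minimal numerical illustration of this check, the full mass-action scheme of \eref{mass_act} can be integrated directly (here by forward Euler, for a constant, buffered signal $S$) and the residual time derivatives of the two complexes inspected at long times. All rate constants below are illustrative values, not the parameter set used in this work:

```python
# Sketch: integrate the full mass-action scheme of the SI (constant,
# buffered signal S) by forward Euler and test the quasi-steady-state
# assumption d[SV]/dt ~ 0 and d[V^P.E]/dt ~ 0 at long times.
# All rate constants are illustrative, not the paper's parameters.

k1, km1, k2 = 1.0, 10.0, 1.0      # S + V <-> SV -> S + V^P
m1, mm1, m2 = 1.0, 10.0, 1.0      # V^P + E <-> V^P.E -> V + E
S, V_T, E_T = 5.0, 100.0, 20.0

def rhs(y):
    SV, VP, VPE = y
    V = V_T - SV - VP - VPE       # conservation of V in all its forms
    E = E_T - VPE                 # conservation of the phosphatase E
    dSV = k1 * S * V - (km1 + k2) * SV
    dVP = k2 * SV - m1 * VP * E + mm1 * VPE
    dVPE = m1 * VP * E - (mm1 + m2) * VPE
    return (dSV, dVP, dVPE)

y = [0.0, 0.0, 0.0]
dt = 1e-3
for _ in range(int(200.0 / dt)):           # integrate to t = 200 s
    d = rhs(y)
    y = [yi + dt * di for yi, di in zip(y, d)]

dSV, dVP, dVPE = rhs(y)
print(dSV, dVPE)   # both ~0: the complexes are in quasi-steady state here
```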
We compare the results for the full system (described by
\eref{mass_act}), with the Michaelis-Menten approximation in the
linear regime (Eq.~2 of the main text), in which the phosphorylation
reaction is linear in the signal concentration $S$ (\fref{si:Fig1}a).
Eq. 2 of the main text is obtained for
$K_V\br{=\br{k_{-1}+k_2}/k_{1}}\ll V_T-V^P$, such that the
phosphorylation reaction of $\molsf{V}$ to $\molsf{V}^\molsf{P}$
simplifies to $k_V\sum_i S_i\br{t}$ (Eq. 3 of the main text), which is indeed
linear in $S_{i}$ and zero-order in $V$. If $V_T\gg S$, the
linearity condition will also be fulfilled for
all moments in time when the system is dynamical \cite{Segel1988a}. In
this case all $\molsf{S}$ directly binds to $\molsf{V}$ and the
complex $\molsf{SV}$ is very stable.
This very stable complex $\molsf{SV}$, however, is likely to influence
the dynamics of the signal oscillations, as in the following two
examples: if the oscillations in the signal $\molsf{S}$ are driven by
factors external to the cell (like hormone pulsing), or if the
oscillations depend on the (saturated) degradation of the signal
$\molsf{S}$ \cite{Mengel2010a} and this regulation does not act on the
signal in bound form. Due to the stability of the complex, in the
rising part of the oscillation $\molsf{S}$ directly forms the complex
$\molsf{SV}$, but, again because of the stability, during the falling
part of the oscillation $\molsf{SV}$ does not dissociate. Since the
complex is not regulated, the total signal level $S_T=S+SV$ increases,
since with every oscillation the concentration $\molsf{SV}$ increases,
until all $\molsf{V}$ is saturated. As a result, the oscillatory
dynamics of the signal is strongly influenced: the periodicity is
reduced and the mean level of signal present is increased
(\fref{si:Fig1}b).
The interaction between the signal $\molsf{S}$ and $\molsf{V}$ may corrupt the
information that is
encoded in the oscillations (\fref{si:Fig1}b), because
it influences the oscillation characteristics. To overcome this,
instead of assuming $S\ll V$, one could assume $V\ll S$. However, in
this regime it is unclear whether the dynamical behavior of the MM-approximation
accurately represents the dynamics of the full mass-action equations. Moreover,
if $S\gg V$, a small minimum concentration $S_{\rm min}$ directly saturates all
the $\molsf{V}$ molecules. As a result, the concentration of $\molsf{V}$ is
insensitive to any oscillation in the concentration $S$ when the minimum
concentration $S_{\rm min}$ is larger than the concentration $V$.
Therefore, an extended model is required, with two additional reactions
\eref{mass_act_2}. This extension is biologically inspired, since many
external signals are sensed by receptors $\molsf{Q}$, which in turn activate (or
phosphorylate) intracellular proteins. The crucial ingredient is that the
signal-bound receptor dissociates on a much faster timescale than the
oscillations. Due to this very fast receptor dissociation, the concentration
of the signal-bound state remains very small.
\begin{align}
\elabel{mass_act_2}
\molsf{S} + \molsf{Q} &\rates{q_1}{q_{-1}} \molsf{Q}^\molsf{A}, \nonumber\\
\molsf{Q}^\molsf{A} + V &\rates{k_1}{k_{-1}} \molsf{VQ}^\molsf{A}, \nonumber\\
\molsf{VQ}^\molsf{A} &\torate{k_2} \molsf{V}^\molsf{P} + \molsf{Q}^\molsf{A},
\nonumber\\
\molsf{V}^\molsf{P} + \molsf{E} &\rates{m_1}{m_{-1}}
\molsf{V}^\molsf{P}\molsf{E},\nonumber\\
\molsf{V}^\molsf{P}\molsf{E} &\torate{m_2} \molsf{V}+\molsf{E},
\end{align}
With this small, biologically inspired extension, the dose-response curve is
similar to the dose-response curve for the Michaelis-Menten approximation with
small $K_{V}$ (in the linear regime), also for oscillatory signals
(\fref{si:Fig1}c).
\begin{figure*}[!ht]
\begin{center}
\figwithletter{a)}{0.6}{Fig_SI_1a}
\figwithletter{b)}{0.6}{Fig_SI_1b}
\figwithletter{c)}{0.6}{Fig_SI_1c}
\end{center}
\capt{si:Fig1}{Comparison of the results of the Michaelis-Menten
approximation (red) with those of the full mass action equation
(\eref{mass_act}, black) using Gillespie simulations. \textbf{a)}
Dose-response curve for {\em constant} signal $S$. The Michaelis
Menten approximation describes the full system (\eref{mass_act})
very well (only for $K_V=5000$ the curves do not precisely overlap).
\textbf{b)} Dose-response for {\em sinusoidal} input $S$ with
$T=100\unit{s}$. Due to the strong complex formation of $\molsf{VS}$
all the signaling molecules $S$ bind $V$ until all $V$ is saturated
and, as a result, the signal does not oscillate independently of the
system. Accordingly, the effective concentration $S$ is much larger
than the mean of the oscillations. Parameters: $r_1\unit{s^{-1}},
E_T=150,V_T=2500,M_V=5000$, and for $K_V=5000, K_V=10, K_V=0.01$
respectively $\acco{k_{-1}, k_{2},m_{-1},m_{2}}=\acco{4997.5, 2.5,
4999,1}\unit{s^{-1}},\acco{9, 1,4998,2}\unit{s^{-1}},\acco{0,
1,4998, 2}\unit{s^{-1}}$, and $k_1\avg{S}=10,
10,1000\unit{s^{-1}}$, respectively. \textbf{c)} The dose-response
curve of the extended model described by \eref{mass_act_2} compared
to the Michaelis-Menten equations of Eq. 3 of the main text. It is
seen that the MM model of Eq. 3 of the main text accurately
describes the dynamics of the extended system of
Eq. \eref{mass_act_2}. Parameters: $q_1\avg{S}=2.5\unit{s^{-1}},
q_{-1}=4000\unit{s^{-1}},k_1V_T=5000\unit{s^{-1}}, k_{-1}=4000,
k_2=25\unit{s^{-1}}, m_1E_T=100\unit{s^{-1}},
m_{-1}=4998\unit{s^{-1}},m_2=1\unit{s^{-1}}, E_T=50, R_T=1000,
V_T=5000$. }
\end{figure*}
\subsection{\slabel{lin_approx}Linear Approximation}
In this section we show that the MM-approximation in the linear regime
(see Eq. 3 of the main text) does not change the mean of
$\molsf{V}^\molsf{P}$, $\avgin{V^P}$, irrespective of the signal
characteristics. We compare analytical results with numerical
simulations and for completeness we compare this linear regime (I)
with two other regimes which we describe in more detail below:
zero-order dynamics for $\molsf{V}$ (II) and non-linear phosphorylation
(III).
In regime II there are many more $\molsf{V_T}$ molecules than kinase
($\molsf{S}_i$) and phosphatase ($\molsf{E_T}$) molecules. Therefore,
the kinase and phosphatase enzymes are {\em both} saturated
$\br{V_T\to\infty}$. Since saturation of both kinase and
phosphatase molecules implies that $M_V, K_V\ll V_T$, the dynamics
of Eq. 2 of the main text can be simplified to
\begin{align}
\elabel{VP2sat}
\diff{V^P}{t}\approx k_V\br{\sum_i S_i\br{t}}-m_V.
\end{align}
This is the well-known regime of zero-order dynamics for $\molsf{V}$. In this
regime $\avgin{V^P}$ can be approximated by a binary function
$\avgin{V^P}=0$ or $\avgin{V^P}=V_T$, where the transition occurs at a
critical kinase concentration $\br{\sum_i S_i}_{\rm crit}$ (see \fref{si:Fig2},
open black symbols). $\molsf{V}$ thus acts as a switch. A
switch-like functional dependence of $\molsf{V}$ on $\molsf{S}$ does not lead to
perfect tracking of the signal $\molsf{S}$, and therefore not to accurate
propagation of the oscillations.
The third regime, regime III, is in a sense the opposite of the
previous. In this regime, $\molsf{V}$ is limiting (e.g. $M_V, K_V\gg
V_T$, see Eq. 2 of the main text). Eq. 2 of the main text then reduces
to
\begin{align}
\elabel{VP2lin}
\diff{V^P}{t}=k'_V\br{\sum_i S_i\br{t}}\br{V_T-V^P}-m'_VV^P,
\end{align}
where $k'_V=k_V/K_V$ and $m'_V=m_VE_T/M_V$. In this regime, the
phosphorylation and dephosphorylation reaction are first order in $V$
and $V^P$, respectively. A typical
dose-response curve is shown in \fref{si:Fig2} (closed gray symbols).
In regime I, corresponding to Eq. 3 of the main text, the two preceding regimes
are combined (\fref{si:Fig2}, closed red symbols). There is saturation of the
kinases in the production, but
saturation of $\molsf{V}^\molsf{P}$ in the dephosphorylation, leading to
\begin{align}
\elabel{SI:VPlin}
\diff{V^P}{t}=k_V\br{\sum_i S_i\br{t}}-m'_VV^P,
\end{align}
which is Eq. 3 of the main text.
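The steady-state dose-response curves implied by the three simplified rate equations can be sketched directly; all rate constants below are illustrative, hypothetical values:

```python
import numpy as np

# Steady-state dose-response of the three simplified regimes discussed
# above (all rate constants are hypothetical illustration values):
#   I   : dVP/dt = kV*S - mV'*VP            -> VP* = kV*S/mV'      (linear)
#   II  : dVP/dt = kV*S - mV                -> switch at S_crit = mV/kV
#   III : dVP/dt = kV'*S*(VT - VP) - mV'*VP -> VP* hyperbolic in S

VT = 1000.0
kV, mVp = 1.0, 0.1        # regime I
mV = 50.0                 # regime II: switch at S_crit = mV/kV = 50
kVp = 0.01                # regime III

S = np.linspace(0.0, 100.0, 201)
vp_I = np.clip(kV * S / mVp, 0.0, VT)                # linear, up to VT
vp_II = np.where(kV * S < mV, 0.0, VT)               # zero-order switch
vp_III = VT * kVp * S / (kVp * S + mVp)              # hyperbolic saturation

# Regime I tracks S linearly; regime II is all-or-none; regime III dampens
# large changes in S, consistent with the dose-response curves of the SI.
print(vp_I[100], vp_II[100], vp_III[-1])
```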
In \fref{si:Fig2}a the dose-response curve for $\molsf{V}^\molsf{P}$
as function of $\molsf{S}$ is shown; the focus is on the mean
$\avgin{V^P}$ as a function of a constant signal $\molsf{S}$ with
increasing mean $\mu$. Regime I of the main text (closed red symbols)
has an approximate linear relation between $S$ and
$\avgin{V^P}$. Regime II (open black symbols) shows
the switch-like response, while regime III (closed gray symbols) increases
hyperbolically to saturation.
A sinusoidal oscillation can only be propagated perfectly as a
sinusoidal signal if
the dose-response function is linear. For non-linear dose-response functions,
oscillations with small amplitude are propagated correctly, since for small
perturbations every function has linear characteristics. However, larger
amplitude oscillations are deformed by the non-linear transfer function. As a
result, the mean $\avgin{V^P}$ changes as a function of $A$ and/or $T$, the
oscillation parameters.
A stronger non-linear dose-response function decreases the amplitude-range of
oscillations that can be propagated without this deformation.
\fref{si:Fig2}b shows $\avgin{V^P}$ for signals with different
properties $A$ and $T$, but constant signal mean $\mu$. If the transfer function
allows for perfect tracking of the input signal, $\avgin{V^P}$ should
be constant because the mean $\mu$ is
constant. Indeed, in
both regime I and III $\avgin{V^P}$ does not depend on the oscillation
parameters $A, T$, while in regime II a strong dependence on these parameters
exists.
For completeness, in \fref{Fig3SI}a-c we show corresponding time traces. Again
the strong non-linear response for regime II (\fref{Fig3SI}b) is observed, while
regime I (\fref{Fig3SI}a) and III (\fref{Fig3SI}c) exhibit oscillations that
are
very similar to the sinusoidal oscillations of the input signal. Please also
note the
reduction in amplitude in regime III (\fref{Fig3SI}c), compared to
regime I of the main text (\fref{Fig3SI}a). This can be
explained
by the hyperbolic shape of the dose-response curve for regime III, which dampens
changes in $S$ (\fref{si:Fig2}a).
\begin{figure*}[!ht]
\begin{center}
\figwithletter{a)}{0.6}{Fig_SI_2a}
\figwithletter{b)}{0.6}{Fig_SI_2b}
\end{center}
\capt{si:Fig2}{ \textbf{a)} The dose-response curve is shown
for the regime in which both the production and degradation
are zero-order in $\molsf{V}^\molsf{P}$ (regime II,
\eref{VP2sat}), in which both production and degradation are
first order in $\molsf{V}^\molsf{P}$ (regime III,
\eref{VP2lin}), and, for the model of the main text, with zero-order
production but linear degradation of $\molsf{V}^\molsf{P}$
(regime I, \eref{SI:VPlin}). The curve that is linear over the
widest $S$-range is that for linear degradation of
$\molsf{V}^\molsf{P}$, but zero-order production, studied in
the main text (regime I). Parameters: Regime I:
$m_VE_T=500$, $K_V/V_T=1/25000$, $M_V/V_T=2$;
$k_V=1\unit{s^{-1}}$; Regime II: $m_VE_T=50$,
$K_V/V_T=1/25000$, $M_V/V_T=1/25000$; Regime III:
$m_VE_T=100$, $K_V/V_T=2$, $M_V/V_T=2$; \textbf{b)} The time
average over a single oscillation period, $\avgin{V^P}_T$,
is shown for four different simulations where the signal
characteristics are as indicated and $\mu_{S}=50$.}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\figwithletter{a)}{0.6}{Fig_SI_3a}
\figwithletter{b)}{0.6}{Fig_SI_3b}
\figwithletter{c)}{0.6}{Fig_SI_3c}
\end{center}
\capt{Fig3SI}{Time traces for regime I (panel \textbf{a}),
with first-order degradation of $V^P$ and zero-order
production of $V$ (Eq. 3 of the main text); regime II (panel
\textbf{b}), with zero-order production and degradation of
$V$ (\eref{VP2sat}); regime III (panel \textbf{c}), with
first-order production and degradation of $V$
(\eref{VP2lin}). In all cases, the amplitude $A$ and
frequency $T^{-1}$ are varied, while keeping $\mu=50$. The non-linear
response for the zero-order dynamics in regime II is clearly visible
in
\textbf{b}. The difference between panel \textbf{a} and
\textbf{c} is the amplitude of the response. The system of
the main text with zero-order production of $V^P$ (I,
panel \textbf{a}) has a much larger amplitude than that
with saturated production of $V^P$ (III, panel \textbf{c});
note the different scale of the y-axis.}
\end{figure*}
\subsection{\slabel{ch8:num_opt}Numerical optimization: a two-step procedure}
\subsubsection{General characteristics}
The numerical optimization used to produce Fig.~5 of the main text is based on
the Wright-Fisher model for population evolution. In each simulation a
population of $N$ independent systems is initialized. Each system consists of a
single multiplexing network.
In the initialization step, each network is assigned random parameters where
each parameter is within the ranges specified in \eref{si:ch8:constraint} and
\eref{si:ranges}.
In the next step the fitness of each network is calculated, which we detail in
the following subsection. Based on the fitness $I_{T}$ for each network, in the
selection step again $N$ new systems are chosen from the original $N$ systems.
The likelihood of selection (reproduction) of each system is proportional to its
fitness $I_T$. Each new network is then ``mutated'' by multiplying all
kinetic parameters by the factor $(1 + \delta)$, where $\delta$ is drawn
uniformly at random from the range $\sqbr{-\Delta:\Delta}$; we take
$\Delta = 0.3$. Then the cycle is repeated.
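The selection-mutation loop described above can be sketched as follows. The stand-in quadratic fitness function and its optimum \texttt{TARGET} are hypothetical placeholders for the multiplexing fitness $I_T$, which requires solving the full network:

```python
import numpy as np

# Sketch of the Wright-Fisher optimization loop described above, applied
# to a toy fitness landscape. The fitness function and TARGET below are
# hypothetical stand-ins for the multiplexing information I_T.

rng = np.random.default_rng(0)

N, DELTA, N_GEN = 50, 0.3, 200          # population size, mutation range
TARGET = np.array([2.0, 0.5, 7.0])      # hypothetical optimal parameters

def fitness(theta):
    # Peaked at TARGET; distance measured in log-parameter space.
    return 1.0 / (1.0 + np.sum((np.log(theta) - np.log(TARGET)) ** 2))

pop = rng.uniform(0.1, 10.0, size=(N, 3))          # random initialization
for _ in range(N_GEN):
    f = np.array([fitness(p) for p in pop])
    # selection: reproduction probability proportional to fitness
    parents = rng.choice(N, size=N, p=f / f.sum())
    pop = pop[parents]
    # mutation: multiply every parameter by (1 + delta), delta ~ U[-D, D]
    pop = pop * (1.0 + rng.uniform(-DELTA, DELTA, size=pop.shape))

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)   # drifts toward the neighborhood of TARGET
```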
\paragraph{Parameters}\quad\newline
We take the kinetic parameters of the encoding module to be fixed, to ensure
correct propagation of the oscillations to component $V$.
For a reliable transmission of oscillations, the
phosphorylation of $V$ is given by Eq. 3 of the main text, as
discussed in the previous section. We further take the mean of the
oscillatory signal to be constant, resulting in the following fixed parameters
\begin{align}
& k_V=1/10\unit{s^{-1}},\nonumber\\
& m_VE_T=5625\unit{s^{-1}},\nonumber\\
& M_V/V_T=30,\nonumber\\
& K_V/V_T=1/250,\nonumber\\
& V_T=2500,\nonumber\\
&\mu_1=25.\nonumber
\end{align}
Next, the following parameters are constrained based upon values of other
parameters
\begin{align}
\elabel{si:ch8:constraint}
&k_W \text{ to set } \avg{W^P}=W_T/2 \text{ for a constant signal},\nonumber\\
&m_{X_2}=m_{X_1}=\br{10T}^{-1},\nonumber\\
&k_{X_2}=5m_{X_2},\nonumber\\
&k_{X_1}=X_T m_{X_1}
\br{\frac{W_T^{n_{X_1}}}{W_T^{n_{X_1}}+K_{X_1}^{n_{X_1}}}}^{-1}
\end{align}
The parameters $X_T=W_T=1000$ are constant (unless explicitly mentioned).
This leads to the following set of parameters that are optimized for given
$N_A,N_\mu$, where between square brackets we give the minimum and maximum
value of each parameter.
\begin{align}
\elabel{si:ranges}
&K_W\,\sqbr{1:75000},\nonumber\\
&M_W\,\sqbr{1:75000},\nonumber\\
&m_W\,\sqbr{10^{-3}\,{\rm s^{-1}}:10^3\,{\rm s^{-1}}},\nonumber\\
&T\,\sqbr{10^1\,{\rm s}:10^4\,{\rm s}},\nonumber\\
&\mu_{2,\rm max}\,\sqbr{10:1000},\nonumber\\
&n_{X_1}\,\sqbr{1:5},\nonumber\\
&K_{X_1}\,\sqbr{1:75000}.
\end{align}
\subsubsection{Step 1: Contiguity}
\label{sec:contiguity}
A key point is that, while the precise mapping from $S$ to $X$ may not
be critical for the total amount of information transmitted {\em per
se}, this is likely to be important for whether or not this
information can be exploited \cite{DeRonde2011}. We therefore impose
the multiplexing requirement \cite{DeRonde2011} (see \fref{si:Fig4}).
\begin{figure*}[!ht]
\begin{center}
\figwithletter{}{2.0}{Fig_SI_4}
\end{center}
\capt{si:Fig4}{Schematic view of the contiguity requirement. For a system
with $3$ states of $\molsf{S}_1, \molsf{S}_2$, the corresponding output
states of $\molsf{X}_1$ should follow a contiguous (and thus monotonic) order.
In other words, the three states of $X_i$ that correspond to one given state of
$S_i$ are grouped into a set; the different sets $\{X_i\}$ that correspond to
the different states of the input $S_i$ increase monotonically with
$S_i$.}
\end{figure*}
To illustrate the multiplexing requirement,
imagine that each signal $S_i$ can take 3 levels: $S_i^0,S_i^1,S_i^2$
(\fref{si:Fig4}). This means that both
$X_1$ and $X_2$ each have $9$ states, corresponding to the $3\times 3$
possible combinations of input states; for each state of the input
signal $S_i$, {\it i.e.} $S_i^k$, we thus have $3$ output states of $X_i$,
corresponding to
the three different states of the other input signal. The multiplexing
requirement now imposes that the mapping from $S_1,S_2$ to $X_1,X_2$
is such that the output states $\{X_i\}$ corresponding to input
$S_i=j$ are grouped into sets that are contiguous and increase
monotonically with $j$, for each signal $i$. In other words, the three
states of $X_i$ that correspond to one given state of $S_i$ are
grouped into a set; the different sets $\{X_i\}$ that correspond to
the different states of the input $S_i$ increase monotonically with
$S_i$. This leads to a monotonic input-output relation between $S_i$
and $X_i$ for each $i$, which can be decoded by the network.
Mutual information does not enforce contiguity. Optimizing mutual
information only means minimizing the overlap of the conditional
probability distributions $p(X_i|S_i^k)$ corresponding to the
different states $k$ of the input $S_i$. Certainly in the absence of
noise, where each combination of inputs $S_1,S_2$ yields one and only
one combination of outputs $X_1,X_2$, a minimal overlap of the
conditional distribution does not impose a contiguous division; in
essence, the outputs $X_1$ and $X_2$ are $\delta$-peaks, which could in
principle be arranged in any order when the networks are
optimized for maximizing the mutual information. This, we argue,
hampers decoding. Therefore, the first step in the optimization
routine is to enforce a contiguous mapping.
For a {\it deterministic network} we obtain a contiguous split by the
following procedure: At each step of the evolutionary algorithm we
determine for all input states
$(S_1^k,S_2^{k^\prime})=(A_1^k,\mu_2^{k^\prime})$, the output
concentrations $\avgin{X_1}^{k,k^\prime}, \avgin{X_2}^{k,k^\prime}$. For a
specific state $k$ of
the input signal $S_1^k$, $A_1^k$, we determine the minimum and maximum
value of $\avgin{X_1}^{k,k^\prime}$, which (most likely) correspond to
$\avgin{X_1}$ for $\mu_{2, \rm min}$ and $\mu_{2, \rm max}$
respectively. We thus obtain for each state $k$ of $S_1$ a set or
``block'' of $\avg{X_1}$ values between $\acco{\avgin{X_{1, \rm
min}},\avgin{X_{1, \rm max}}}$ (see also Fig. 4 of the main
text). Similar blocks are obtained for $\avgin{X_2}$ for each state
$k^\prime$ of $S_2$, where the block boundaries depend on $S_1$, {\it
i.e.} $A_{1, \rm min}, A_{1, \rm max}$.
The fitness of each network is then determined by the amount of
overlap between the different blocks corresponding to the different
states of signal $S_i$, where an increase in overlap
reduces the fitness.
An overlap means that from an
output level $X_i$ the state of the input $S_i$ cannot uniquely be inferred.
Maximum fitness therefore corresponds to minimal overlap between the
blocks.
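A minimal sketch of this block-overlap fitness criterion; the output levels below are hypothetical numbers for illustration:

```python
# Sketch of the block-overlap criterion above: for each state k of S_1,
# the outputs <X_1> over all states of the other input form a block
# [min, max]; the fitness penalizes overlap between blocks.
# The output levels below are hypothetical illustration values.

def block(values):
    """Block of <X_1> values for one state of S_1: (min, max)."""
    return (min(values), max(values))

def total_overlap(blocks):
    """Sum of overlaps between consecutive blocks, sorted by lower edge."""
    blocks = sorted(blocks)
    overlap = 0.0
    for (lo1, hi1), (lo2, hi2) in zip(blocks, blocks[1:]):
        overlap += max(0.0, min(hi1, hi2) - lo2)
    return overlap

# Three input states of S_1; each row: <X_1> for the three states of S_2.
x1 = [[100.0, 120.0, 140.0],    # S_1 state 0
      [150.0, 170.0, 190.0],    # S_1 state 1
      [185.0, 210.0, 230.0]]    # S_1 state 2: overlaps state 1 by 5

blocks = [block(row) for row in x1]
print(total_overlap(blocks))   # 5.0: states 1 and 2 cannot be separated
```

Maximum fitness then corresponds to zero total overlap, so that each output level maps back to a unique input state.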
For a {\it stochastic network}
the optimization method is slightly different. Instead
of determining the boundaries of the block by the minimum and maximum output
concentration, we now include the noise. Using the linear-noise
approximation, we determine for each output concentration
$\avgin{X_i}$ the corresponding
variance $\sigma^2_{X_i}$. The block is then formed by
$\acco{\sqbr{\avgin{X_{i}}-\sigma_{X_i}}_{\rm min},
\sqbr{\avgin{X_{i}}+\sigma_{X_i}}_{\rm max}}$, where $\sigma_{X_i}$ is the
noise determined through the linear-noise approximation as described in the
main text. The evolutionary algorithm optimizes each network using a minimum
overlap as selection criterion, similar to the deterministic network.
\subsubsection{Step 2: Mutual information}
The procedure outlined above generates networks with optimal
contiguity \cite{DeRonde2011}. To quantify information transmission in these
networks we compute the mutual information. Specifically, the
performance measure is the product of
the relative mutual informations of the individual channels
\begin{align}
I_T=\frac{I\br{S_1,X_1}}{H\br{S_1}}\times\frac{I\br{S_2,X_2}}{H\br{S_2}},
\end{align}
where $H\br{S_1}$ is the entropy of the amplitude input distribution
$p\br{A_1}$; $H\br{S_2}$ is the entropy of the concentration input
distribution $p\br{\mu_2}$; and $I(S_i,X_i)$ is the mutual information
between $S_i$ and $X_i$ \cite{Shannon1948}.
To determine the entropy of the input distribution and the mutual
information we need to specify the form of the input distributions. We take the
input distributions to be uniform:
\begin{align}
p\br{A_1=a}&=\frac{1}{N_A}, \text{ with amplitude values }
\nonumber\\a_i&=\frac{i}{N_A}, i\in \sqbr{1:N_A},\\
p\br{\mu_2=\mu}&=\frac{1}{N_\mu} \text{ with concentration values
}\nonumber\\
\mu_j&=\frac{j}{N_\mu}\mu_{\rm max}, j\in \sqbr{1:N_\mu},
\end{align}
where $\mu_{\rm max}$ is an optimization parameter with maximum value $1000$.
To compute the mutual information, we have used a slightly different
approach for the stochastic and the deterministic networks.
\paragraph{Mutual information for a stochastic network}
\quad\newline
To calculate the mutual information $I(S_i;X_i)$ for a
stochastic network, we determine for all input
states $(S_1^k,S_2^{k^\prime})=(A_1^k,\mu_2^{k^\prime})$, the output
concentrations $\avgin{X_1}, \avgin{X_2}$ and corresponding variances
$\sigma^2_{X_1},\sigma^2_{X_2}$ via the linear-noise approximation.
With these quantities the mutual information of each of the two channels is
calculated via
\begin{align}
I(S_i,X_i) = H(X_i) - H(X_i|S_i),
\end{align}
where $H(X_i) = -\sum_l p(X^l_i) \ln p(X^l_i)$ and $H(X_i|S_i) = -\sum_k
p(S_i^k) \sum_l p(X^l_i|S_i^k) \ln p(X^l_i|S_i^k)$.
Here, $p(X^l_i|S_i^k) = \sum_{k^\prime} p(S_{j\neq i}^{k^\prime})
p(X^l_i|S_i^k,S_{j\neq i}^{k^\prime})$, where $p(X^l_i|S_i^k,S_{j\neq
i}^{k^\prime})$ is a
Gaussian distribution centered around the mean $\avgin{X_i}$ given by $S_i^k$
and $S_j^{k^\prime}$.
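This calculation can be sketched numerically for a single channel with two states per input; the means and variances below are hypothetical illustration values, not outputs of the linear-noise approximation:

```python
import numpy as np

# Sketch of the mutual-information calculation above for one channel:
# p(x|S_1^k) is a Gaussian mixture over the hidden states of the other
# signal; H(X) and H(X|S) are evaluated by numerical integration.
# Means and variances are hypothetical illustration values.

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

means = np.array([[100.0, 110.0],     # <X_1> for (S_1^0, S_2^kp)
                  [200.0, 210.0]])    # <X_1> for (S_1^1, S_2^kp)
var = 25.0
p_s1 = np.array([0.5, 0.5])           # uniform p(S_1^k)
p_s2 = np.array([0.5, 0.5])           # uniform p(S_2^kp)

x = np.linspace(0.0, 400.0, 8001)
dx = x[1] - x[0]

# p(x|S_1^k) = sum_kp p(S_2^kp) * N(x; mean[k, kp], var)
p_x_given_s = np.array([
    sum(p_s2[kp] * gauss(x, means[k, kp], var) for kp in range(2))
    for k in range(2)])
p_x = (p_s1[:, None] * p_x_given_s).sum(axis=0)

def entropy(p):
    q = np.where(p > 0, p, 1.0)       # avoid log(0); 0*log(0) -> 0
    return -np.sum(p * np.log2(q)) * dx

H_x = entropy(p_x)
H_x_given_s = sum(p_s1[k] * entropy(p_x_given_s[k]) for k in range(2))
I = H_x - H_x_given_s
print(I)   # ~1 bit: the two S_1 states are well separated in X_1
```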
\paragraph{Mutual information for a deterministic network}
\quad\newline
For a deterministic network without
noise in the mean field limit, each input $(S_i^k,S_j^{k^\prime})$ maps onto a
unique output $(X^{k,k^\prime}_i,X^{k,k^\prime}_j)$ which is a
delta peak.
One may therefore think that when the number of input
states for each signal goes to infinity, the mutual information also
goes to infinity; this would imply that an infinite number of states
for each signal $S_i$ could be transmitted with 100\% fidelity. However, this is
not true: the mutual information and hence the number of signals that can be
transmitted with 100\% fidelity, remains bounded because of the
crosstalk (and the finite copy number). As described in the text, crosstalk
means that the
input-output mapping is no longer unique; from the output $X_i$, the
input $S_i$ can no longer be inferred with 100\% fidelity.
To compute the maximum amount of information that can be transmitted
through each channel, we adopt the following procedure. We first take
the number of states $N_i$ that is transmitted through each channel
$i$ to be
finite. We thus discretize each signal $S_i$ with equally spaced values
$S_i^k$, with $k=0,\dots, N_i$. Signal $S_1$ is discretized between
$\sqbr{A_{\rm min}-1}$ and signal $S_2$ between $\sqbr{\mu_{\rm min}-\mu_{\rm
max}}$; only $A_{\rm
min}$ and $\mu_{\rm min}$ depend on the number of states; the maximum values
are constant. For each $S_i^k$, we compute $X_i^{k,k^\prime}$ for each state of
the other input signal $S_{j\neq i}^{k^\prime}$ by solving the mean-field
network in steady state. We then determine the minimum and maximum
values of $X_i^{k,k^\prime}$ for a given $k$,
$X^k_{i,\rm min}$ and $X^k_{i,\rm max}$. This is equivalent to the
``block''-procedure, described above.
Next, we calculate $H(X_i^k|S_i^k)=-\int_{X_{i,\rm
min}^k}^{X^k_{i,{\rm max}}} dX_i^k p(X_i^k|S_i^k)\ln p(X_i^k|S_i^k)$, with
$p(X_i^k|S_i^k)=1/(X_{i,{\rm max}}^k-X_{i,{\rm min}}^k)$ for each
state $k$ of signal $S_i$, $S_i^k$.
To compute $H(X_i|S_i)$, we now have to average $H(X_i^k|S_i^k)$
over all $S_i^k$: $H(X_i|S_i) = -\sum_k p(S_i^k)
H(X_i^k|S_i^k)$. The entropy of the output distribution is $H(X_i) =
-\sum_l p(X^l_i) \ln p(X^l_i)$, where $p(X_i) = \sum_k p(S_i^k) p(X_i^k|S_i^k)$.
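A minimal numerical sketch of this block-entropy calculation; the block boundaries below are hypothetical values (overlapping blocks reduce the mutual information below the input entropy):

```python
import numpy as np

# Sketch of the deterministic mutual-information calculation above:
# p(x|S^k) is uniform on the block [Xmin^k, Xmax^k]; H(X|S) averages the
# block entropies and H(X) is the entropy of the mixture of uniforms.
# Block edges are hypothetical illustration values.

blocks = [(0.0, 10.0), (8.0, 18.0), (20.0, 30.0)]   # states k = 0, 1, 2
p_s = np.full(len(blocks), 1.0 / len(blocks))        # uniform input

x = np.linspace(-1.0, 31.0, 32001)
dx = x[1] - x[0]

p_x_given_s = np.array([
    np.where((x >= lo) & (x < hi), 1.0 / (hi - lo), 0.0)
    for lo, hi in blocks])
p_x = (p_s[:, None] * p_x_given_s).sum(axis=0)       # mixture of uniforms

def entropy(p):
    q = np.where(p > 0, p, 1.0)       # avoid log(0); 0*log(0) -> 0
    return -np.sum(p * np.log2(q)) * dx

# H(X^k|S^k) = log2(block width) for a uniform conditional distribution
I = entropy(p_x) - sum(p_s[k] * np.log2(hi - lo)
                       for k, (lo, hi) in enumerate(blocks))
print(I)   # < log2(3): the overlap of blocks 0 and 1 costs information
```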
\fref{si:Fig5}a shows the mutual information $I(S_1;X_1)$ as
a function of the number of states $N_A$ in channel 1, for
$N_\mu=16$ states of channel 2. It is seen that initially the
mutual information increases linearly with $N_A$; moreover, the slope
is unity. In this regime, the number of states that can be transmitted
with 100\% fidelity through channel 1 is the total number of states of
that channel. In essence, the different blocks of states $X_1$
corresponding to the different states of $S_1$ do not overlap, which
means that from the output $X_1$, the input $S_1$ can be uniquely
inferred. However, as $N_A$ increases beyond 4 bits, the different
blocks overlap increasingly, and the number of signals
that can be transmitted with 100\% fidelity saturates; in the plateau
regime, increasing the
number of input signals further no longer increases the number of
signals that can be transmitted reliably. The plateau
value slightly depends on the number of signals $N_\mu$ that are
transmitted through channel 2, which is shown in panel b. It is seen that the
plateau value saturates to about 3.74 bits when $N_\mu$
is larger than $3.5$ bits. We thus conclude that the maximum
number of signals that can be transmitted with 100\% fidelity through
channel 1 is about 4 bits.
\fref{si:Fig5}c shows the mutual information $I(S_2;X_2)$ as a
function of $N_\mu$, for $N_A=16$. Clearly, the
mutual information is to an excellent approximation given by the
entropy of the input distribution over the full range of $N_\mu$,
which means that all signals can be transmitted with 100\% fidelity,
even when the number of signals goes beyond 4 bits. This is because
the effect of crosstalk from the other channel is negligible, as
explained in the main text.
\begin{figure*}[!ht]
\begin{center}
\figwithletter{a)}{0.6}{Fig_SI_5a}
\figwithletter{b)}{0.6}{Fig_SI_5b}
\figwithletter{c)}{0.6}{Fig_SI_5c}
\end{center}
\capt{si:Fig5}{The effect of crosstalk on information
transmission in a deterministic system. \textbf{a)} The
mutual information $I(S_1;X_1)$ as a function of the number of
states $N_A$, with $N_\mu=16$. The line is to guide the
eye. \textbf{b)} The plateau value of $I(S_1;X_1)$ as a
function of $N_A$ for a given $N_\mu$ (see panel a), plotted
against $N_\mu$. It is seen that beyond $N_\mu = 3.5$, the
plateau value is constant. \textbf{c)} The mutual
information $I(S_2;X_2)$ as a function of $N_\mu$ for
$N_A=16$. It is seen that the mutual information is equal to
the entropy of the input distribution. This is because the
effect of crosstalk is negligible. The results are obtained
for a network that has been optimized via the procedure
described in section \ref{sec:contiguity} with
$N_A=N_\mu=16$, and with $A_1$ in the range $\sqbr{0-1}$ and
$\mu_2$ in the range $\sqbr{0-23}$; the optimized value of
$\mu_2^{\rm max}=23$ as found by the procedure described
in \ref{sec:contiguity} is lower than the maximally
allowed value $\mu_2^{\rm max}=1000$, because that minimizes
the effect of crosstalk from channel 2 on channel
1. Parameters: $m_w=0.006~{\rm s^{-1}}$, $K_W/W_T=1.6\times
10^{-2}$, $M_W/W_T=7.9\times 10^{-2}$,
$K_{X_1}/X_T=1\times10^2$, $n_{X_1}=3$, $X_T=W_T=100$.}
\end{figure*}
\clearpage
\bibliographystyle{plos2009}
\section{Introduction}
In the era of large spectroscopic surveys such as APOGEE, Gaia-ESO Survey or RAVE, among others,
the contribution of smaller samples with high-resolution and high quality spectra is of great
importance to understand the Galactic Chemical Evolution (GCE). In this work we have derived abundances
for Cu, Zn, Sr, Y, Zr, Ba, Ce, Nd and Eu (\cite[Delgado Mena et al. 2017]{delgadomena17}) for 1111 stars
within the volume-limited HARPS-GTO planet search sample in order to complement our previous works for
light elements (\cite[Delgado Mena et al. 2014]{delgadomena14}, \cite[Delgado Mena et al. 2015]{delgadomena15},
\cite[Su\'arez Andr\'es et al. 2016]{suarezandres16}, \cite[Bertr\'an de Lis et al. 2015]{bertrandelis15}),
$\alpha$- elements and Fe-peak elements (\cite[Adibekyan et al. 2012]{adibekyan12}). The main purpose of
this work is to evaluate the GCE evolution of those heavier elements and the dependence on
stellar ages of different abundance ratios.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=13.5cm]{all_abun_combined_age_13p5_fit_error2Gyr_diff_gaia_1Gyr_small.eps}
\caption{General [X/Fe] ratios as a function of age for the reduced sample with reliable ages.
Thin disk stars, thick disk stars and \textit{h$\alpha$mr} stars are depicted with dots, triangles and
squares, respectively. The linear fit to all the stars serves only to guide the eye.}
\label{fig1}
\end{center}
\end{figure}
\section{Stellar ages}
We derive the masses, radii and ages with the PARAM v1.3 tool\footnote{http://stev.oapd.inaf.it/cgi-bin/param}
using the PARSEC isochrones (\cite[Bressan et al. 2012]{bressan12}) with our values for $T_{\rm eff}$
and [Fe/H], the V magnitudes from the main Hipparcos catalogue (\cite[Perryman et al. 1997]{perryman97})
and the parallaxes from the Hipparcos new reduction (\cite[van Leeuwen 2007]{vanleeuwen07})
or from the first release (DR1) of Gaia (\cite[Lindegren et al. 2016]{lindegren16}). We note that we added a
systematic error of 0.3 mas to the formal error of the Gaia DR1 parallaxes as recommended by
the Gaia collaboration. While Hipparcos provides parallaxes for 1051 out of the 1059
stars within our sample, only 923 stars have parallaxes in Gaia DR1. Moreover, there are significant
differences between the two catalogues in many cases, leading to non-negligible differences in age. In order to have a sample with
ages as reliable as possible we decided to select the Hipparcos ages that differ by less than 1 Gyr
from the ages derived with Gaia parallaxes and that have an error in age lower than 2 Gyr. This final sample
is composed of 377 stars belonging to the thin disk, thick disk and high-$\alpha$ metal-rich stars
(hereafter \textit{h$\alpha$mr}, a population with high $\alpha$ abundances at [Fe/H]\,$>$\,-0.2\,dex discovered
by \cite[Adibekyan et al. 2011]{adibekyan11}).
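The parallax error handling can be sketched as follows. Note an illustrative assumption here: the text says a 0.3 mas systematic error was added to the formal Gaia DR1 errors, and the snippet combines the two terms in quadrature, which is the common choice but is not stated explicitly above.

```python
import math

def gaia_dr1_parallax_error(formal_error_mas, sys_error_mas=0.3):
    """Total parallax uncertainty after folding in the ~0.3 mas systematic
    term recommended for Gaia DR1; quadrature combination is assumed here."""
    return math.sqrt(formal_error_mas**2 + sys_error_mas**2)

# Example: a formal error of 0.4 mas grows to 0.5 mas.
print(round(gaia_dr1_parallax_error(0.4), 3))  # -> 0.5
```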
\section{Abundance ratios vs age}
In Fig. \ref{fig1} we can see how several elements depend on age. By combining elements that increase and decrease
with age, respectively, it is possible to obtain steeper and more constrained trends. For example, [Mg/Fe] shows a tight
increasing trend with age, while [Eu/Fe], [Zn/Fe] and [Al/Fe] also show this dependence, though with more dispersion.
This trend is expected since these elements are mainly formed by massive stars which started to contribute to the
Galaxy chemical enrichment earlier than the lower mass stars responsible for Fe production.
On the other hand, the light-\textit{s} process elements Y and
Sr show the clearest decreasing trends with age. These elements are formed by low-mass AGB stars so we can expect
them to increase with time (for younger stars) due to the increasing and delayed contribution of low-mass
stars as the Galaxy evolves. In Figs. \ref{fig2} and \ref{fig3} we show different combinations of previously mentioned
elements at different metallicity regions. Previous works have explored and confirmed the tight correlation of these
abundance ratios with age (e.g. \cite{dasilva12}, \cite{nissen15}, \cite{spina16}) but only using solar twins
or solar analogues. However, \cite{feltzing17} noted that the [Y/Mg] clock is not valid at [Fe/H]\,$<$\,-0.5\,dex.
In our sample we find that, even at [Fe/H]\,$<$\,-0.5\,dex, the different abundance ratios still show a correlation
with age (steeper for [Sr/X] than for [Y/X]), but only for thin disk stars. We note however that our
sample of thick disk stars with reliable ages is quite small. It is also clear that the trends become flat
at ages $\gtrsim$\,8\,Gyr. On the other hand, at higher metallicities, in the bin -0.2\,$<$\,[Fe/H]\,$<$\,0.2\,dex,
the abundance ratios of Y and Sr (with respect to Mg, Zn and Al) present similar slopes. Nevertheless, we remark that
while [Sr/Fe] presents a constant correlation with age at different metallicities, [Y/Fe] becomes
flatter as [Fe/H] increases. Moreover, we can observe that thin disk stars present mostly no dependence on age
for ages $\gtrsim$\,8\,Gyr but \textit{h$\alpha$mr} stars show a continuous dependence over the full age range
for [Y/X] ratios. The improved parallaxes from Gaia DR2 will help to determine more precise ages for our
stars, increasing the sample size and allowing us to better understand the behaviour of the abundance-age trends
for different populations in the Galaxy.
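The linear trend lines shown in the figures can be produced with a weighted least-squares fit; the sketch below uses synthetic data with an assumed slope of $-0.04$ dex/Gyr, not the HARPS measurements.

```python
import numpy as np

# Illustrative weighted linear fit of an abundance ratio against age, of
# the kind used for the trend lines in the figures (data and slope below
# are synthetic, not the measured values).
rng = np.random.default_rng(0)
age = rng.uniform(1.0, 10.0, 50)                    # Gyr
sigma = rng.uniform(0.02, 0.08, 50)                 # per-star abundance error (dex)
y_mg = -0.04 * age + 0.2 + rng.normal(0.0, sigma)   # mock [Y/Mg]

# np.polyfit takes weights w = 1/sigma for Gaussian uncertainties
slope, intercept = np.polyfit(age, y_mg, 1, w=1.0 / sigma)
print(f"[Y/Mg] = {slope:.3f} * age + {intercept:.3f}")
```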
\begin{figure}[t!]
\begin{center}
\includegraphics[width=6.6cm]{YMg_age_error2Gyr_diff1Gyr_feh_m0p8_m0p2_weighted.eps}
\includegraphics[width=6.6cm]{SrMg_age_error2Gyr_diff1Gyr_feh_m0p8_m0p2_weighted.eps}
\caption{[Y/Mg],[Y/Zn],[Y/Al] and [Sr/Mg],[Sr/Zn],[Sr/Al] for -0.8$<$[Fe/H]$<$-0.2. Symbols as in Fig 1.}
\label{fig2}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=6.6cm]{YMg_age_error2Gyr_diff1Gyr_feh_m0p2_0p2_weighted.eps}
\includegraphics[width=6.6cm]{SrMg_age_error2Gyr_diff1Gyr_feh_m0p2_0p2_weighted.eps}
\caption{[Y/Mg],[Y/Zn],[Y/Al] and [Sr/Mg],[Sr/Zn],[Sr/Al] for -0.2$<$[Fe/H]$<$0.2. Symbols as in Fig 1.}
\label{fig3}
\end{center}
\end{figure}
\begin{acknowledgements}
\begin{scriptsize}
E.D.M., V.Zh.A., N.C.S. and S.G.S. acknowledge the support from Funda\c{c}\~ao para a Ci\^encia
e a Tecnologia (FCT) through national funds and from FEDER through COMPETE2020 by the following
grants UID/FIS/04434/2013 \& POCI-01-0145-FEDER-007672, PTDC/FIS-AST/7073/2014 \&
POCI-01-0145-FEDER-016880 and PTDC/FIS-AST/1526/2014 \& POCI-01-0145-FEDER-016886.
E.D.M., V.Zh.A., N.C.S. and S.G.S. also acknowledge the support from FCT through Investigador
FCT contracts IF/00849/2015, IF/00650/2015, IF/00169/2012/CP0150/CT0002 and IF/00028/2014/CP1215/CT0002
funded by FCT (Portugal) and POPH/FSE (EC). This research has made use of the SIMBAD database
operated at CDS, Strasbourg (France).
This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (https://www.cosmos.esa.int/gaia), processed by
the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for
the DPAC has been provided by national institutions, in particular the
institutions participating in the {\it Gaia} Multilateral Agreement.
\end{scriptsize}
\end{acknowledgements}
\section{Introduction}
In the standard model of elementary particles, the number of neutrino flavors is three.
However, some neutrino-oscillation-related experiments show anomalies which cannot be explained by the standard three-flavor neutrino oscillations.
The LSND (Liquid Scintillator Neutrino Detector) group reported an excess of 88 $\bar{\nu}_e$ events in a $\bar{\nu}_\mu$ beam in 1993$\sim$1998, where the $\bar{\nu}_\mu$ was produced in $\mu^+$ decay at rest (DAR)~\cite{LSND01}.
If the excess is caused by neutrino oscillation ($\bar{\nu}_\mu \to \bar{\nu}_e$), the mass-square difference is $\Delta m^2 > 10^{-2}~{\rm eV^2}$.
This large $\Delta m^2$ cannot be explained by the standard three-flavor neutrino oscillations, and the existence of a fourth neutrino, which does not feel the weak interaction and is thus called a {\it sterile neutrino}, has been suggested.
Later on,
the KARMEN (KArlsruhe Rutherford Medium Energy Neutrino experiment) group performed a similar measurement and obtained a null result~\cite{KARMEN98}.
However, some of the LSND-positive oscillation parameter regions have survived because of KARMEN's shorter baseline.
The MiniBooNE (Mini Booster Neutrino Experiment) group performed a measurement using neutrinos from decay in flight and
obtained positive oscillation results~\cite{MINIBOONE13}.
However, it has been pointed out that the observed $\bar{\nu}_e$ and $\nu_e$ signals may actually be caused by $\gamma$'s from neutral-current interactions, so the result is not conclusive.
The ICARUS (Imaging Cosmic And Rare Underground Signals) group obtained a negative result and showed $\sin^22\theta<10^{-2}$~\cite{ICARUS12}.
Fig.-\ref{fig:SterileParameters}(a) shows the oscillation parameters of the above appearance experiments.
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{SterileParameters.eps}
\caption{\small{Sterile neutrino positive parameter regions.
(a) Appearance mode~\cite{ICARUS12} (b) Disappearance mode~\cite{White12} }}
\label{fig:SterileParameters}
\end{figure}
On the other hand, the reactor neutrino flux has been reported to be 6\% smaller than expected and the $\nu_e$ flux from radioactive sources has been reported to be 14\% smaller than expected (see the summary in \cite{White12}).
Those results can be explained if there are sterile neutrinos with $m>1~$eV which mix with our regular neutrinos with mixing angle $\sin^22\theta \sim 0.2$ as shown in Fig.-\ref{fig:SterileParameters}(b).
In order to test those indications a number of experiments are being performed and planned.
These proceedings describe the properties of DAR neutrinos and the LSND experiment, which observed positive DAR $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillation, and then the new-generation DAR experiments JSNS$^2$, OscSNS and KPipe.
\section{Neutrinos from $\mu^+$, $\pi^+$ and K$^+$ decay at rest}
Currently there are two planned sites for DAR experiments: one is the J-PARC Material Life Science Facility (MLF) in Japan and the other is the Oak Ridge Spallation Neutron Source (SNS) in the United States.
In this section, the properties of neutrinos from DAR are described using the MLF beam as an example.
The energy spectrum of the neutrinos from the MLF target is shown in
Fig.-\ref{fig:DARnu}.
\begin{figure}[htbp]
\centering
\includegraphics[width=140mm]{DARnu.eps}
\caption{\small{Example of the neutrino energy spectra (J-PARC MLF)~\cite{JSNS2}. The neutrinos are produced in decay at rest (DAR), decay in flight (DIF) and nuclear interactions.
Four kinds of neutrinos from DAR to be used in the experiments described in this paper are indicated by arrows.}}
\label{fig:DARnu}
\end{figure}
Four kinds of neutrinos from DAR to be used in the sterile neutrino experiments are indicated by arrows.
The production scheme of those DAR neutrinos is shown below.
\begin{equation}
\begin{split}
&p + {\rm target} \to \pi^+/{\rm K}^+ + X \\
& ~~~~~~~~~~~~~~~~~~~~\usebox{\anglearrow }~\pi^+/{\rm K}^+ ({\rm stop})
\xrightarrow{\tau=26/12{\rm ns}} \mu^+ + \nu_\mu (E_\nu=30/236~{\rm MeV}) \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\usebox{\anglearrow }
~\mu^+({\rm stop}) \xrightarrow{\tau=2.2\mu{\rm s}} e^+ + \bar{\nu}_\mu + \nu_e~~
\end{split}
\label{eq:DAR_Process}
\end{equation}
High energy protons hit target material and produce $\pi$ and K mesons by the strong interactions.
The $\pi^+/{\rm K}^+$ stop in the target and decay to $\mu^+ + \nu_\mu$ with lifetimes $\tau=26/12$~ns.
Since the $\pi^+/{\rm K}^+$ decay at rest, the produced $\nu_\mu$'s have monochromatic energies of $E_\nu = 30/236~$MeV.
The produced $\mu^+$'s stop in the target and decay as
$\mu^+ \to e^+ + \nu_e + \bar{\nu}_\mu$ with lifetime 2.2~$\mu$s.
Since the momentum of the parent $\mu^+$ is zero, the energy spectra of the produced $\bar{\nu}_\mu$ and $\nu_e$ are well known.
By setting the timing window of the event selection to within a few tens of nanoseconds of the beam pulse, the monochromatic $\nu_\mu$'s from the $\pi^+$ and K$^+$ DAR can be selected, while by setting the timing window to later than a few hundred nanoseconds, the $\bar{\nu}_\mu$ and
$\nu_e$ from the $\mu^+$ DAR can be selected.
For $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillation experiments, there is an intrinsic background of $\bar{\nu}_e$ from the $\pi^-({\rm stop}) \to \mu^-({\rm stop}) \to \bar{\nu}_e$ decays.
However the magnitude of the $\bar{\nu}_e$ flux is suppressed to an order of $10^{-3}$ since $\pi^-$ and $\mu^-$ are absorbed by the target nuclei before they decay.
In addition, the small contribution of the background $\bar{\nu}_e$ can be measured from the energy spectrum.
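The quoted monochromatic energies follow directly from two-body decay-at-rest kinematics, $E_\nu = (m_P^2 - m_\mu^2)/(2 m_P)$; a quick numerical check with PDG masses:

```python
# Energy of the monochromatic neutrino in a two-body decay at rest,
# P -> mu + nu:  E_nu = (m_P^2 - m_mu^2) / (2 m_P).  Masses in MeV.
M_PI, M_K, M_MU = 139.57, 493.68, 105.66

def e_nu_dar(m_parent, m_mu=M_MU):
    return (m_parent**2 - m_mu**2) / (2.0 * m_parent)

print(f"pi+ DAR: E_nu = {e_nu_dar(M_PI):.1f} MeV")  # ~30 MeV
print(f"K+  DAR: E_nu = {e_nu_dar(M_K):.1f} MeV")   # ~236 MeV
```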
\section{Decay at rest experiments}
\subsection{LSND experiment}
The LSND experiment was performed in 1993$\sim$1998 at Los Alamos National Laboratory and showed positive
$\bar{\nu}_\mu \to \bar{\nu}_e$ oscillation at $\Delta m^2 > 10^{-2}~{\rm eV^2}$.
Fig.-\ref{fig:LSND_Detector} shows a side view of the LSND experiment.
\begin{figure}[htbp]
\centering
\includegraphics[width=100mm]{LSND_Detector.eps}
\caption{\small{LSND experiment~\cite{LSND01}}}
\label{fig:LSND_Detector}
\end{figure}
The LSND experiment used the LAMPF proton beam, whose energy and current were 800~MeV and 1~mA, respectively.
Beam pulses of 600~$\mu$s were delivered at a repetition rate of 120~Hz, resulting in a duty factor of 7.2~\%.
The beam target was water or high-Z material and the beam stopper was copper.
The baseline was 30~m.
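The quoted 7.2\% duty factor is simply the pulse width times the repetition rate. The sketch below also evaluates, for comparison, the raw duty factor of the MLF twin-pulse beam described in the JSNS$^2$ section; note this raw figure is not the same quantity as the analysis-window suppression factors quoted there.

```python
# Duty factor of a pulsed beam: the fraction of wall-clock time the beam is on.
def duty_factor(pulse_width_s, rep_rate_hz, pulses_per_cycle=1):
    return pulse_width_s * rep_rate_hz * pulses_per_cycle

# LSND: 600 us pulses at 120 Hz give the 7.2% quoted above.
print(f"LSND duty factor: {duty_factor(600e-6, 120):.3f}")
# MLF: two ~100 ns pulses every 40 ms (25 Hz).
print(f"MLF  duty factor: {duty_factor(100e-9, 25, pulses_per_cycle=2):.1e}")
```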
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{LSND_Result.eps}
\caption{\small{LSND result~\cite{LSND01}
(a) Excess of $\bar{\nu}_e$ events (b) Allowed oscillation parameter regions }}
\label{fig:LSND_Result}
\end{figure}
It used 167~tons of low-light-output liquid scintillator to detect both
\v{C}erenkov light and scintillation light.
The fast-neutron background can be reduced by requiring the \v{C}erenkov light.
$\bar{\nu}_e$ was detected by the inverse $\beta$ decay reaction on protons, followed by neutron capture on a proton.
$$\bar{\nu}_e + p \to e^+ + n~: ~n+ p \to d + \gamma ( 2.2~{\rm MeV})$$
The average time difference between the positron signal and neutron signal was $\sim200~{\rm \mu s}$.
They observed excess of 88 $\bar{\nu}_e$ events in 6~years operation as shown in Fig.-\ref{fig:LSND_Result}(a).
The allowed oscillation parameter regions are shown in Fig.-\ref{fig:LSND_Result}(b).
This positive oscillation result is inconsistent with other neutrino oscillation measurements within the three-neutrino-flavor scheme and has not been accepted as a conclusive result by the neutrino oscillation community; further experiments with better sensitivities are required to test the result.
\subsection{JSNS$^2$ experiment}
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{JSNS2exp.eps}
\caption{\small{JSNS$^2$ experiment~\cite{JSNS2} (a) experimental site (b) detector}}
\label{fig:JSNS2exp}
\end{figure}
JSNS$^2$(J-PARC Sterile Neutrino Search at J-PARC Spallation Neutrino Source) experiment~\cite{JSNS2} uses the DAR $\bar{\nu}_\mu$ from the J-PARC MLF beam line shown in Fig.-\ref{fig:JSNS2exp}(a).
The energy of the proton beam is 3~GeV and the power is expected to reach 1~MW by the time the experiment starts.
The MLF proton beam consists of two narrow ($\sim$100~ns) pulses which are
$\sim$600~ns apart.
The twin beams hit the target every 40~ms (25~Hz).
Fig.-\ref{fig:DAR_Nu_Spectra}(a) shows the timing of the neutrino production.
By setting the timing window $(1<t<10~{\rm \mu s})$ after the start of the first beam pulse, the beam-associated background and neutrinos from $\pi$/K decays can be eliminated and the beam-uncorrelated background can be suppressed to $1.1\times 10^{-4}$.
The $\bar{\nu}_e$ background from the $\pi^-({\rm stop}) \to \mu^-({\rm stop}) \to \bar{\nu}_e$ is suppressed to $1.7\times 10^{-3}$ because $\pi^-$ and $\mu^-$ are absorbed by high-Z (mercury) target nuclei.
JSNS$^2$ will use gadolinium-loaded liquid scintillator (Gd-LS) as the neutrino target.
Two neutrino detectors containing 25~tons Gd-LS each (Fig.-\ref{fig:JSNS2exp}(b)) will be used.
The detection method is inverse $\beta$ decay, as in LSND.
However, here the neutron is absorbed by Gd, emitting $\gamma$-rays with a total energy of 8~MeV.
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{DAR_Nu_Timing.eps}
\caption{\small{~ (a) Timing spectrum and (b) Expected Sensitivity of JSNS$^2$\cite{JSNS2} }}
\label{fig:DAR_Nu_Spectra}
\end{figure}
$$\bar{\nu}_e + p \to e^+ + n~:~n+ {\rm Gd} \to {\rm Gd'^*}~:~
{\rm Gd'^*} \to {\rm Gd'} + \gamma's (\Sigma E_\gamma = 8~{\rm MeV})
$$
Using Gd, the environmental $\gamma$-ray backgrounds can be eliminated and the coincidence window can be made narrower
($200~{\rm \mu s } \to 30~{\rm \mu s }$).
There are two options for the liquid scintillator (LS).
One is a high-light-output LS with enhanced pulse shape discrimination (PSD) capability for fast-neutron rejection, and the other is an LSND-type low-light-output LS.
The baseline is 24~m and
the number of events will be 100/year in the case $\sin^22\theta = 0.003$ and $\Delta m^2 > 1~{\rm eV}^2$.
The neutrino production and detection mechanisms are the same as those of LSND, so a direct test of the LSND result can be performed.
The sensitivity of JSNS$^2$ is shown in Fig.-\ref{fig:DAR_Nu_Spectra}(b).
The JSNS$^2$ group submitted its proposal in 2013, performed on-site background measurements using 500~kg of plastic scintillators, and obtained stage-1 approval from the J-PARC PAC in 2014.
The group is now requesting the budget for construction of the detector.
\subsection{OscSNS experiment}
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{OscSNSExp.eps}
\caption{\small{OscSNS experiment~\cite{OscSNS} (a) The detector (b) Oak Ridge SNS and a candidate detector location}}
\label{fig:OscSNSexp}
\end{figure}
The OscSNS group is proposing to perform a sterile neutrino experiment using
the Oak Ridge Spallation Neutron Source (SNS)~\cite{OscSNS}.
The neutrino detector and its expected location are shown in
Fig.-\ref{fig:OscSNSexp}.
The energy of the beam is 1~GeV and the power is 1.4~MW.
The width of the beam pulse is 500~ns and the frequency is 60~Hz.
Therefore, by setting the timing window $(1<t<10~{\rm \mu s} )$,
the beam-uncorrelated background can be suppressed to $2.6\times 10^{-4}$.
The baseline to the detector is 50~m and the target mass is 450~ton.
The liquid scintillator is of the LSND type, detecting both \v{C}erenkov and scintillation light.
\begin{figure}[htbp]
\centering
\includegraphics[width=120mm]{OscSNS_Sensitivity.eps}
\caption{\small{(a) Sensitivity for $\bar{\nu}_\mu \to \bar{\nu}_e$ appearance search
(b) Sensitivity for $\nu_\mu \to \nu_\mu$ disappearance search~\cite{OscSNS} }}
\label{fig:OscSNS_Sensitivity}
\end{figure}
Fig.-\ref{fig:OscSNS_Sensitivity}(a) shows the sensitivity of the OscSNS for $\bar{\nu}_\mu \to \bar{\nu}_e$ measurement.
Since the baseline is much longer than LSND's, the positive region of the LSND result can be completely covered.
Table \ref{tab:Comparison} compares the main features of the $\bar{\nu}_\mu \to \bar{\nu}_e$ appearance measurements of JSNS$^2$, OscSNS and LSND.
It shows that both JSNS$^2$ and OscSNS are expected to have much better sensitivities than the LSND experiment.
\begin{table}[htbp]
\small{
\begin{center}
\begin{tabular}{|l||c|c|c|}
\hline
& JSNS$^2$ & OscSNS & LSND \\
\hline \hline
Accelerator & J-PARC MLF & Oak Ridge SNS & LAMPF \\
\hline
Beam Energy (Power) & 3GeV (1MW) & 1GeV (1.4MW) & 0.8GeV(0.8MW) \\
\hline
BKG suppression & $1.1\times 10^{-4}$ & $2.6\times 10^{-4}$ & 0.072 \\
by pulse beam & & & \\
\hline
Liquid Scintillator & 50~t (Gd Loaded) & 450~t(LSND type) & 167~t \\
\hline
Baseline & 24~m & 50~m & 30~m \\
\hline
\# of $\bar{\nu}_e$(BKG) events & 500(300)/5yr & 600(200)/5yr & 88/6yr$^{a)}$ \\
\hline
Stopping $\mu^-/\mu^+$&$1.7\times 10^{-3}$&$\sim 10^{-3}$& $6.5\times 10^{-4}$ \\
\hline
Delayed Coin.: (E,$\Delta t$) & (8MeV,$30\mu$s) & (2.2MeV, $200\mu$s) & (2.2MeV, $200\mu$s) \\
\hline
Fast $n$ rejection & PSD$^{b)}$ or \v{C}erenkov$^{c)}$ & \v{C}erenkov & \v{C}erenkov \\
\hline
$\Delta E/ E$ & 3\% @ 35MeV$^{b)}$ & - & 7\% @ 45MeV \\
\hline
Cost & \$ & \$\$ & - \\
\hline
\end{tabular}
\caption{\small{Comparison of baseline designs of JSNS$^2$, OscSNS and LSND for $\bar{\nu}_\mu \to \bar{\nu}_e$ search. $\sin^22\theta=0.003$ is assumed for JSNS$^2$ and $P(\bar{\nu}_\mu \to \bar{\nu}_e) =0.26\%$ is assumed for OscSNS.
$^{a)}$~after very tight cuts. $^{b)}$~for high light output LS option, $^{c)}$ for low light output LS option.}}
\label{tab:Comparison}
\end{center}
}
\end{table}
In addition to the detection of $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillation, OscSNS plans to detect $\nu_\mu \to \nu_e$ oscillation with the following process.
\begin{equation}
\begin{split}
&\pi^+({\rm stop}) \to \nu_\mu (30~{\rm MeV}) + \mu^+ \\
& ~~~~~~~~~~~~~~~~~~\usebox{\anglearrow}~
\nu_\mu \xrightarrow{\rm oscillation} \nu_e (30~{\rm MeV}) \\
& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\usebox{\anglearrow}~
\nu_e + ~^{12}{\rm C} \to ~^{12}{\rm N_{gs}}({\rm 17.3~MeV}) + e^-(12.5~{\rm MeV}) \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\usebox{\anglearrow}~
^{12}{\rm N_{gs}} \xrightarrow{\rm \tau=11ms} ~^{12}{\rm C} + e^+ + \nu_e
\end{split}
\end{equation}
The $\nu_e$ can be identified by the delayed coincidence of the monochromatic 12.5~{\rm MeV} electron and $\beta^+$ (Q=17~MeV) signals.
Since the detector is long, the disappearance of the neutrinos due to oscillation can be measured through the $L$ dependence of the deficit.
$$ \nu_x + ~^{12}{\rm C} \to \nu_x + ~^{12}{\rm C^*}(15~{\rm MeV})$$
$$~~~~~~~~~~~~~~~~~\nu_e + ~^{12}{\rm C} \to e^- + ~^{12}{\rm N_{gs}} \xrightarrow{\rm 11~ms} ~^{12}{\rm C} + e^+ +\nu_e$$
%
Fig.-\ref{fig:OscSNS_Sensitivity}(b) shows the $\nu_\mu \to \nu_\mu$ sensitivity.
\subsection{KPipe experiment}
Because the beam energy is high (3~GeV), monochromatic (236~MeV) $\nu_\mu$ from K$^+$ DAR are abundant at J-PARC MLF as can be seen in Fig.-\ref{fig:DARnu}.
The energy of the $\nu_\mu$ is high enough to induce charged-current interactions,
\begin{equation}
\nu_\mu (236~{\rm MeV}) + ~^{12}{\rm C} \to \mu^- + {\rm X}.
\label{eq:NuMu_Charged Current}
\end{equation}
Therefore, the $\nu_\mu$ can be identified by the existence of the $\mu^-$ in the final state.
If the oscillation $\nu_\mu \to \nu_S$ takes place, it is identified as a deficit of the $\nu_\mu$ flux.
Since the neutrino energy is unique, a clear oscillation pattern will be observed in the $L$ dependence of the deficit.
The analysis is rather simple because it is not necessary to know the absolute neutrino flux or the detection efficiency.
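The expected $L$ dependence of the deficit can be sketched with the standard two-flavor survival probability $P = 1 - \sin^22\theta \, \sin^2(1.27\,\Delta m^2 L/E)$; the oscillation parameters below are illustrative only, not measured values.

```python
import numpy as np

# Two-flavor nu_mu survival probability, evaluated along the KPipe axis
# for the monochromatic 236 MeV K+ DAR neutrinos.
def p_survival(L_m, E_MeV, dm2_eV2, sin2_2theta):
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

L = np.linspace(32.0, 152.0, 5)          # the detector spans 32-152 m
print(p_survival(L, 236.0, dm2_eV2=1.0, sin2_2theta=0.1))
```

Because the neutrino energy is fixed, the deficit traces out the oscillation pattern purely as a function of $L$, which is why no absolute flux normalization is needed.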
The KPipe group is proposing to install a 120~m long cylindrical liquid scintillator detector near the J-PARC MLF beam line, as shown in Fig.-\ref{fig:KPipe}~\cite{KPipe}.
\begin{figure}[htbp]
\centering
\includegraphics[width=110mm]{KPipe.eps}
\caption{\small{(a) KPipe detector (b) KPipe site~\cite{KPipe}}}
\label{fig:KPipe}
\end{figure}
The diameter of the ``pipe'' is 3~m and it extends 32$\sim$152~m from the target.
The total mass of the liquid scintillator is 700~tons.
Fig.-\ref{fig:KPipeBeamSensitivity}(a) shows the $L/E$ dependence of the relative $\nu_\mu$ rate for various $\Delta m^2$ parameters and
Fig.-\ref{fig:KPipeBeamSensitivity}(b) shows the sensitivity of this experiment.
\begin{figure}[htbp]
\centering
\includegraphics[width=140mm]{KPipeBeamSensitivity.eps}
\caption{\small{(a) $L/E$ dependence of the deficit (b) KPipe Sensitivity~\cite{KPipe}}}
\label{fig:KPipeBeamSensitivity}
\end{figure}
\section{Summary}
Neutrinos from $\pi^+$, $\mu^+$ and K$^+$ decays at rest are a powerful tool to search for sterile neutrinos.
LSND showed positive results some time ago, and currently JSNS$^2$, OscSNS and KPipe are proposed
to search for sterile neutrinos with better sensitivities than the LSND experiment, all using DAR neutrinos.
\section{Introduction}
Vaccines are an effective public health measure used to prevent and control infectious diseases. Access to healthcare services is an ongoing challenge to vaccine coverage, but vaccine hesitancy and refusal can also increase the risk of outbreaks~\cite{19Larson:2010:IIR:1753126.1753129}, and threaten the ability to introduce new vaccines for conditions like human papillomavirus~\cite{zhang2020mining} or COVID-19~\cite{dodd2021concerns,lewandowsky2021covid}. Some of the beliefs or attitudes of people who refuse or are hesitant about vaccination include distrust of government or industry, fear of side effects, lack of effectiveness, or that the disease is not serious~\cite{luz2020heuristics}.
Social media analysis can identify emerging concerns and the spread of misinformation for use in guiding targeted communication interventions, education and policy~\cite{steffens2020using}. Previous social media research on public health topics includes infectious diseases and outbreaks~\cite{charles2015using}, illicit drug use~\cite{kazemi2017systematic}, health promotion~\cite{rao2020mgl,cong2018xa}, and pharmacovigilance support~\cite{golder2015systematic,kang2017semantic}. Twitter is commonly studied because it has a large user base and its data are generally more accessible than those of other platforms~\cite{love2013twitter}. Twitter-specific health research has included analysis of sentiment for health-related issues~\cite{salathe2011assessing,salathe2013dynamics}, including vaccination~\cite{zhang2020sentiment}, allergies~\cite{paul2011you}, and others~\cite{sinnenberg2017twitter,naseem2021covidsenti}.
Twitter has been used to characterize vaccine attitudes among social media users~\cite{dunn2015associations,zhou2015using}. A broad range of methods has been proposed to classify tweet-level vaccine sentiment based on the user's expressed vaccine attitudes, often being able to differentiate between vaccine critical tweets or users and neutral or vaccine-promoting tweets or users. Challenges relating to the Twitter format include the short length of the text and informal use of language, abbreviations and misspellings, and inclusion of URLs and emoticons (Fig.~\ref{tweet}).
\begin{figure}[!t]
\centering
\includegraphics[width=1\linewidth]{tt.png}
\caption{Examples of tweets where the meaning of the words [good], [bad], [happy], and [sad] changes according to the context of pro-vaccine or anti-vaccine tweets.}
\label{tweet}
\end{figure}
Deep learning (DL) has played an increasingly important role in natural language processing (NLP)~\cite{naseem2020comprehensive}. Transformer-based~\cite{vaswani2017attention} approaches use general language models (LMs) to represent semantic information, but these can be too general to capture the context of specific types of text or topics; in particular, they struggle to capture information that is specific to a domain. To improve results in specialized domains, several transformer-based LMs such as Biomedical BERT (BioBERT)~\cite{lee2019biobert}, Biomedical A Lite Bidirectional Encoder Representations from Transformers (BioALBERT)~\cite{naseem2020bioalbert}, Twitter BERT (BERTweet)~\cite{nguyen2020bertweet} and Covid-BERT (CT-BERT)~\cite{muller2020covid} were trained on domain-specific corpora using the same unsupervised training method used in general models. Given how domain-specific LMs have improved performance for specific NLP tasks, we expect that a domain-specific LM should improve the vaccine sentiment classification task.
Some vaccine sentiment classification methods use information about the social network or users' past tweets to inform the classification of a tweet~\cite{dunn2015associations,zhou2015using}, but this may be infeasible where the number of users posting about vaccines is too large, as we currently see with COVID-19 vaccine tweets. A limitation of the research in this area comes from the challenge of incorporating external knowledge (i.e., commonsense knowledge) into the models in ways that could contribute to the performance of the classification task. Commonsense knowledge is described as universally known and acknowledged information about the world~\cite{storks2019recent}.
In this paper, we present an end-to-end method for tweet-level vaccine sentiment classification that models: (i) domain-specific knowledge, (ii) commonsense knowledge, (iii) user metadata, and (iv) word-level sentiment. Our contributions are:
\begin{itemize}[leftmargin=*]
\item We introduce the first large-scale pre-trained language model for English vaccine-related tweets;
\item We expand the typical Gated Recurrent Unit (GRU) by introducing a new cell component that allows for combination with external knowledge and consolidates commonsense knowledge into our LM network;
\item We present an end-to-end method for vaccine sentiment classification that outperforms recent state-of-the-art (SOTA) methods for tweet-level vaccine sentiment classification.
\end{itemize}
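To make contribution (ii) concrete, the numpy sketch below shows one way a GRU cell update can be extended with a gated term driven by an external (commonsense) knowledge embedding $c$; this is a generic illustration of the idea, not our exact cell or trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def knowledge_gru_cell(x, h_prev, c, W, U, V, b):
    """One step of a GRU cell extended with a commonsense-knowledge input c.
    W, U, V hold per-gate weight matrices for x, h_prev and c; b holds biases.
    Illustrative sketch of an extra knowledge gate, not the proposed cell."""
    z = sigmoid(W['z'] @ x + U['z'] @ h_prev + b['z'])               # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h_prev + b['r'])               # reset gate
    k = sigmoid(W['k'] @ x + U['k'] @ h_prev + V['k'] @ c + b['k'])  # knowledge gate
    h_tilde = np.tanh(W['h'] @ x + U['h'] @ (r * h_prev)
                      + k * (V['h'] @ c) + b['h'])                   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde

# Tiny smoke test with random parameters.
rng = np.random.default_rng(0)
d = 4
W = {g: rng.normal(size=(d, d)) for g in 'zrkh'}
U = {g: rng.normal(size=(d, d)) for g in 'zrkh'}
V = {g: rng.normal(size=(d, d)) for g in 'kh'}
b = {g: np.zeros(d) for g in 'zrkh'}
h = knowledge_gru_cell(rng.normal(size=d), np.zeros(d), rng.normal(size=d), W, U, V, b)
print(h.shape)  # (4,)
```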
\section{Related Work} \label{rw}
Recent advances in NLP have demonstrated a substantial improvement in performance across a range of tasks~\cite{liu2019linguistic,peters2018dissecting}. Context-dependent representations were introduced to address the limitations of context-independent representations, which assign one vector to the same word regardless of context~\cite{Melamud2016context2vecLG,McCann2017LearnedIT,peters2018deep}. Further improvements for specific tasks were achieved by introducing sentiment information extracted from lexicons~\cite{naseem_dice}.
Commonsense knowledge is self-evident to humans and comprises universally acknowledged information about the world~\cite{storks2019recent}. With the advancement of knowledge engineering, there have been continuous attempts to collect and encode commonsense knowledge, and many commonsense knowledge bases (CKBs) such as ConceptNet, FreeBase, DBpedia, and SenticNet have been used to augment models with real-world knowledge for different NLP tasks. For example, Ahn et al.~\cite{ahn2016neural} developed an RNN-based LM that leverages knowledge bases. Ye et al.~\cite{ye2019align} proposed a discriminative pre-training method for including commonsense knowledge in the LM, in which the question is concatenated with different candidates to build a multi-choice question answering sample, and each choice is used to predict whether the candidate is the right answer. For sentiment analysis, Xu et al.~\cite{xu2017incorporating} modified the ordinary recall gate function in an RNN to leverage a CKB. For sentiment classification, Ma et al.~\cite{ma2018targeted} incorporated a CKB into the LSTM cell to improve aspect-level sentiment classification. Unlike their work, we leverage commonsense knowledge in a BiGRU with context-aware attention and apply it to information extraction.
Most tweet-level vaccine sentiment classification methods use traditional machine learning methods, including support vector machines (SVMs)~\cite{botsis2011text}, Naive Bayes ensembles and maximum entropy classifiers~\cite{salathe2013dynamics}, and hierarchical SVMs~\cite{du2017optimization}. Alternative approaches use information beyond individual tweets, including social network structure~\cite{dunn2015associations,zhou2015using}. As supervised machine learning methods, these approaches rely on manual labelling by experts. Zhang et al.~\cite{zhang2020sentiment} used three transfer learning methods (ELMo, GPT, and BERT) for tweet-level HPV vaccine sentiment classification and found that a fine-tuned BERT model produced the highest performance.
\begin{figure}[!t]
\centering
\includegraphics[width=.98\linewidth]{ijcnnsentiment.png}
\caption{ Overall architecture of the proposed approach to tweet-level vaccine sentiment classification.}
\label{arciiii}
\end{figure}
\section{Methodology}\label{111model}
First, we define our target problem, followed by the architecture of the proposed method and technical details.
\textbf{Problem Definition}: Formally, given a \textbf{tweet} \( T_i \) with a sequence of tokens \( (t_1,t_2,t_3,\ldots,t_k) \), where \textit{i} denotes the index of the \textbf{tweet} and \textit{k} the number of tokens in it, the objective is to predict whether the tweet's sentiment polarity is positive, negative or neutral from the corresponding set of labels \(Y = (y_1,y_2,\ldots,y_k)\).
\textbf{Overview of Architecture}:
The model comprises (a) a representation layer, which includes domain-specific contextual word representation and linguistic embedding; (b) a Bidirectional Gated Recurrent Unit with Commonsense knowledge (CK-BiGRU); (c) context-aware attention; (d) sentiment embedding; and (e) user behaviour features. The architecture of the proposed model is shown in Fig.~\ref{arciiii}. Below, we explain each of these components in depth.
\subsection{Representation Layer}
As mentioned previously, the first component of our model is a representation layer that includes word representations obtained using domain-specific pre-trained LM and linguistic embedding. Below we describe each of these.
\textbf{RoBERTa for vaccine-related tweets:} A Robustly Optimized BERT (RoBERTa)~\cite{robertaliu2019roberta} is an optimized version of Bidirectional Encoder Representations from Transformers (BERT)~\cite{devlin-etal-2019-bert}. During pre-training, BERT used two training objectives: (i) masked language modelling (MLM) and (ii) a next sentence prediction (NSP) task, whereas RoBERTa made the following changes to the BERT model: (i) longer training of the model with more data and bigger batches; (ii) eliminating the NSP objective; (iii) training on longer sequences; (iv) dynamically changing the masked positions during pre-training.
Our sub-domain contextual LM uses the same architecture as RoBERTa and is trained on a corpus of 64M vaccine-related English tweets crawled with the Twitter API between January 12, 2017 and December 3, 2019, using vaccine-related keywords.
We used the same pre-processing techniques before training as used in previous studies~\cite{muller2020covid,nguyen2020bertweet}. Retweet tags were removed from the raw corpus, and usernames and web-page URLs were replaced with the unique tokens @USER and HTTP-URL, respectively. Further, all emoticons were replaced with their associated meaning using the Python emoji library\footnote{https://pypi.org/project/emoji/}. Tweets were tokenized using the open-source HuggingFace Python library\footnote{https://huggingface.co/}. Every input sequence of the RoBERTa LM is transformed into tokens from a 50,265-word vocabulary. The length of Twitter messages is limited to 200 characters, and we kept a batch size of 8 during training and evaluation.
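The replacement of usernames and URLs described above can be sketched with a few regular expressions. This is a minimal illustration, not the paper's full pipeline (which also maps emojis to their textual meaning via the emoji library):

```python
import re

def preprocess_tweet(text: str) -> str:
    """Replace user mentions and URLs with unique tokens, drop retweet tags."""
    text = re.sub(r"@\w+", "@USER", text)             # usernames -> @USER
    text = re.sub(r"https?://\S+", "HTTP-URL", text)  # web-page URLs -> HTTP-URL
    text = re.sub(r"^RT\s+", "", text)                # remove retweet tag
    return text

print(preprocess_tweet("RT @bob Vaccines work! https://t.co/abc123"))
# @USER Vaccines work! HTTP-URL
```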
\textbf{Linguistic Embedding:} Each word in a sentence has several attributes which we can use for analysis. For instance, the part of speech (POS) of a token: nouns denote a person, location, or object; verbs denote acts or events; adjectives are terms that characterise nouns. However, words also stand in a variety of relations to one another. For instance, a noun may be the subject of a verb, as \textit{Trump} in \textit{`Trump laughed'}. A noun may also be the object of a verb, as \textit{Biden} in \textit{`Trump laughed at Biden'}. Representing words as discrete and distinct symbols is incomplete and often leads to poor generalisation. Thus, we aimed to produce a representation that preserves semantic and syntactic similarity between words using linguistic embedding.
Here, we used both \textit{POS} \textit{tags} to provide knowledge of a word and the various POS forms of words and \textit{dependency parsing} to explain these relations among words in a phrase to model linguistic features in a tweet context. Using POS tagging and dependency parsing has demonstrated positive results in previous studies~\cite{naseem2020transformer}. We used the Python library `SpaCy'\footnote{https://pypi.org/project/spacy/} for dependency and POS tagging. These tags were then transformed into a vector representation using one-hot encoding to capture syntactical information of words.
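The one-hot encoding step can be sketched as follows. The tag inventory below is a hypothetical toy set; the actual inventory comes from SpaCy's POS and dependency tag sets:

```python
# Hypothetical tag inventory standing in for SpaCy's POS tag set.
POS_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "OTHER"]

def one_hot(tag: str, inventory=POS_TAGS):
    """Map a tag to a one-hot vector; unknown tags fall back to OTHER."""
    vec = [0] * len(inventory)
    idx = inventory.index(tag) if tag in inventory else inventory.index("OTHER")
    vec[idx] = 1
    return vec

print(one_hot("NOUN"))  # [1, 0, 0, 0, 0, 0]
```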
\subsection{BiGRU with commonsense knowledge (CK-BiGRU)}
\textbf{GRU:} The GRU is a type of RNN introduced by \cite{184DBLP:journals/corr/ChungGCB14}. It is the simplest form of the LSTM architecture: it includes two gates and no internal memory cell, which distinguishes it from the LSTM, and a second non-linearity (tanh) is not applied. A GRU cell operates as follows:
\(z_{t}= \sigma(W_{iz}x_{t}+b_{iz}+W_{hz}h_{(t-1)}+b_{hz})\)
\( r_{t} = \sigma(W_{ir}x_{t}+b_{ir}+ W_{hr}h_{(t-1)}+b_{hr})\)
\(n_{t} = \tanh (W_{in}x_{t}+b_{in}+r_{t}*W_{hn}h_{(t-1)}+b_{hn})\)
\(h_{t} = (1- z_{t})*n_{t}+z_{t}*h_{(t-1)}\)
Where \(z_{t}\) is the update gate, \(r_{t}\) is the reset gate, \(n_{t}\) is the new gate, $\sigma$ is the sigmoid function, * is the Hadamard product, \(x_{t}\) and \(h_{t}\) are the input and the hidden state at time \textit{t}, and finally, \(h_{(t-1)}\) is the hidden state of the layer at time \textit{t-1}. Typically, a single GRU encodes a sequence in only one direction. However, two GRUs can be stacked to obtain a bi-directional encoder, referred to as a bi-directional GRU (BiGRU), which yields two hidden states at time step \textit{t}: a forward state \(\overrightarrow{h_t}\) and a backward state \(\overleftarrow{h_t}\). Their concatenation $h_t= [ \overrightarrow{h_t} \parallel \overleftarrow{h_t}] $ provides a full description of the input at time step \textit{t}.
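The cell equations above can be sketched in NumPy as a single forward step. The parameters here are randomly initialised toy values, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h = 4, 3  # toy input and hidden sizes

# Randomly initialised parameters (illustrative only)
W_iz, W_hz = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))
W_ir, W_hr = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))
W_in, W_hn = rng.normal(size=(d_h, d_x)), rng.normal(size=(d_h, d_h))
b_z = b_r = b_n = np.zeros(d_h)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev):
    z = sigmoid(W_iz @ x + W_hz @ h_prev + b_z)        # update gate z_t
    r = sigmoid(W_ir @ x + W_hr @ h_prev + b_r)        # reset gate r_t
    n = np.tanh(W_in @ x + r * (W_hn @ h_prev) + b_n)  # candidate state n_t
    return (1.0 - z) * n + z * h_prev                  # new hidden state h_t

h = gru_step(rng.normal(size=d_x), np.zeros(d_h))
print(h.shape)  # (3,)
```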
\begin{figure}[!t]
\centering
\includegraphics[width=.98\linewidth]{IJCNN_CK_GRU.png}
\caption{A Visualisation of BiGRU with commonsense knowledge (CK-BiGRU).}
\label{comm}
\end{figure}
\textbf{Commonsense Knowledge:} To improve performance, we incorporated commonsense knowledge into the sequence encoder. Typically, a CKB can be considered as a semantic network where concepts are nodes in the graph and relations are edges. Each \(<~concept1,~relation,~concept2~>\) triple is termed an assertion. AffectiveSpace~\cite{sentic5.Cambria2018SenticNet5D} was designed to map the concepts of the SenticNet~\cite{sentic5.Cambria2018SenticNet5D} CKB to continuous low-dimensional embeddings while maintaining the affective relationships of the original space. We used AffectiveSpace and extended a previous approach~\cite{ma2018targeted} to construct an affective extension of the BiGRU, called BiGRU with commonsense knowledge (CK-BiGRU). CK-BiGRU injects affective commonsense knowledge into GRU cells and provides affective information to the GRU memory cell. A set of \textit{C} concept candidates is extracted using a syntactic concept parser and mapped to the $d_{c}$-dimensional vectors \( [\alpha_{t,1}, \alpha_{t,2}, \ldots, \alpha_{t,C}] \) at time step \textit{t}. The candidate embedding at step \textit{t} is calculated as the average of these vectors using Eq.~(\ref{cc}):
\begin{equation}\label{cc}
\alpha_{t}= \frac {1}{ C} \sum_{i} \alpha_{t,i}
\end{equation}
The formulation of the BiGRU with commonsense knowledge is given below (additions to the standard GRU are shown in red):
\( r_{t} = \sigma(W_{r}[x_{t}, h_{t-1},\textcolor{red}{\alpha_{t}}]+b_{r})\)
\( z_{t} = \sigma(W_{z}[x_{t}, h_{t-1},\textcolor{red}{\alpha_{t}}]+b_{z})\)
\(\widehat{n_{t}} = \tanh (W_{n}x_{t}+b_{n}+r_{t}*(W_{m}x_{t}+b_{m}))\)
\(\textcolor{red}{\widehat{n_{t}^{c}} = \tanh (W_{cn}\alpha_{t}+b_{cn}+r_{t}*(W_{cm}x_{t}+b_{cm}))}\)
\(h_{t} = (1- z_{t})*\widehat{n_{t}}+z_{t}*h_{(t-1)}+\textcolor{red}{(1-z_{t})*\widehat{n_{t}^{c}}}\)
Knowledge concepts are added to the reset and update gates as a filtering cue; they are assumed to be meaningful to the sequence model in supervising the information flow at the token level. Moreover, an additional candidate activation vector \(\widehat{n_{t}^{c}}\), which models the relative contributions of the token and concept levels, is employed to extend the normal GRU cell and is added to the output vector, as shown in Fig.~\ref{comm}.
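A toy NumPy sketch of one CK-BiGRU forward step following the red-highlighted equations above. Parameters are randomly initialised, and the hypothetical concept embeddings stand in for AffectiveSpace vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_h, d_c = 4, 3, 5  # input, hidden and concept-embedding sizes

W_r = rng.normal(size=(d_h, d_x + d_h + d_c)); b_r = np.zeros(d_h)
W_z = rng.normal(size=(d_h, d_x + d_h + d_c)); b_z = np.zeros(d_h)
W_n = rng.normal(size=(d_h, d_x));  b_n  = np.zeros(d_h)
W_m = rng.normal(size=(d_h, d_x));  b_m  = np.zeros(d_h)
W_cn = rng.normal(size=(d_h, d_c)); b_cn = np.zeros(d_h)
W_cm = rng.normal(size=(d_h, d_x)); b_cm = np.zeros(d_h)

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def ck_gru_step(x, h_prev, concepts):
    alpha = concepts.mean(axis=0)              # Eq. (1): average concept embedding
    u = np.concatenate([x, h_prev, alpha])     # gates see token, state and concept
    r = sigmoid(W_r @ u + b_r)
    z = sigmoid(W_z @ u + b_z)
    n = np.tanh(W_n @ x + b_n + r * (W_m @ x + b_m))             # token candidate
    n_c = np.tanh(W_cn @ alpha + b_cn + r * (W_cm @ x + b_cm))   # concept candidate
    return (1.0 - z) * n + z * h_prev + (1.0 - z) * n_c          # extended output

h = ck_gru_step(rng.normal(size=d_x), np.zeros(d_h), rng.normal(size=(2, d_c)))
print(h.shape)  # (3,)
```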
\subsection{Context-Aware Attention}
Not all words play an equivalent role in interpreting the context of a sentence, and attention assigns a weight \textit{\(a_n\)} to each word through a softmax function. In this work, we used a global context-aware attention mechanism (GCM)~\cite{liu2017global} to emphasise the contribution of meaningful words instead of simply using the CK-BiGRU hidden states, since the state outputs provide relatively local contextual information from previous time steps.
The hidden states of all time steps \( [h_{1},h_{2},.....,h_{n}]\) from the first CK-BiGRU layer act as the key and value vectors, while the query vectors are from the GCM. The vectors list \( [h_{1},h_{2},.....,h_{n}]\) also initializes the memory \([m_{1}, m_{2},.....,m_{n}] \). The
attention outputs \([a_{1},a_{2},\ldots,a_{n}] \) are fed into the second CK-BiGRU layer, and the state outputs then refine the GCM. Multiple iterations of attention operations are conducted to improve the attention mechanism's ability to assess each token's informativeness. Liu et al.~\cite{liu2017global} use the memory for the final classification. However, in our model, the last forward and backward hidden outputs from the second CK-BiGRU layer serve as the output of the entire attention layer after iterative refinements. A visual representation of GCM attention is shown in Fig.~\ref{gcm}. We fed the representations of CK-BiGRU and context-aware attention into a linear layer. The linear layer reduces the dimensionality, to avoid weakening the influence of the small-dimensional sentiment scores on the final results.
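As a simplified illustration of how attention weights are formed via softmax. The full GCM performs iterative refinement against a memory, which is omitted in this sketch:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())   # shift for numerical stability
    return e / e.sum()

def attend(hidden_states, query):
    """One attention pass: score each hidden state against a query vector."""
    scores = hidden_states @ query     # (n,) alignment scores
    weights = softmax(scores)          # a_1..a_n, summing to 1
    context = weights @ hidden_states  # weighted combination of states
    return weights, context

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 3))            # 6 time steps, hidden size 3
w, c = attend(H, rng.normal(size=3))
print(round(w.sum(), 6))  # 1.0
```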
\begin{figure}[!t]
\centering
\includegraphics[width=.98\linewidth]{GCM.png}
\caption{ A Visualisation of Global Context-aware Attention.}
\label{gcm}
\end{figure}
\subsection{Sentiment Lexicon}
Extending our method, we incorporated a sentiment score for the tweet into our model. The sentiment embedding obtains sentiment scores from lexicons. Each lexicon includes a pair of words and their corresponding sentiment, in which each word has its sentiment value.
We used the Valence Arousal Dominance (VAD) sentiment lexicon~\cite{mohammad2018obtaining} to extract sentiment scores, which are transformed into a vector that captures words' sentiment. The VAD lexicon gives a vector, \(\overrightarrow {VAD_{t}} = [t_{V}, t_{A}, t_{D}]\), for every token, where \(t_{V}\) is the valence of the token, \(t_{A}\) is the arousal, and \(t_{D}\) is the dominance. These three concepts have been recognized as the most significant aspects of meaning by a number of linguistic scholars~\cite{mohammad2018obtaining}. Our VAD-lexicon-based sentiment module calculates the average score of each dimension (valence, arousal and dominance) for each part of a tweet separately and returns the three scores corresponding to each part. A value of zero is given to a word that is not present in the lexicon. The output is a 9-dimensional vector containing 3 scores for each part of a tweet.
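A minimal sketch of the per-part VAD averaging, using a two-word toy lexicon in place of the NRC VAD lexicon and splitting a tweet into three parts:

```python
# Toy lexicon; real (valence, arousal, dominance) scores come from the VAD lexicon.
VAD = {"good": (0.9, 0.6, 0.7), "bad": (0.1, 0.5, 0.3)}

def vad_features(tokens, n_parts=3):
    """Average (V, A, D) per tweet part -> 9-dimensional feature vector."""
    feats = []
    size = max(1, len(tokens) // n_parts)
    for p in range(n_parts):
        # last part takes any remainder tokens
        part = tokens[p * size:(p + 1) * size] if p < n_parts - 1 else tokens[(n_parts - 1) * size:]
        # out-of-lexicon words contribute zeros
        triples = [VAD.get(t, (0.0, 0.0, 0.0)) for t in part] or [(0.0, 0.0, 0.0)]
        feats.extend(sum(x) / len(triples) for x in zip(*triples))
    return feats

v = vad_features(["good", "news", "today", "not", "bad", "at", "all"])
print(len(v))  # 9
```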
\subsection{Incorporating User-Metadata}
Many recent social media sentiment analysis approaches evaluate tweet-level vaccine sentiment mainly on the basis of textual content and neglect other useful information (metadata such as clicks, follows, \# of friends and followers, etc.) easily available on these platforms. Incorporating metadata has demonstrated positive results in previous studies~\cite{alharbi2019twitter}. In our work, we tested combinations of different metadata (given in Table~\ref{metaa}) as features to train our model. Each feature is min-max normalised over the entire list, since different tweets contain quite different amounts of these elements. The best combination of metadata (F1-F8) was obtained by testing different combinations (see Table IV).
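Min-max normalisation of a metadata feature column can be sketched as:

```python
def min_max(column):
    """Scale a metadata feature column to [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:                       # constant feature: map to zeros
        return [0.0] * len(column)
    return [(v - lo) / (hi - lo) for v in column]

hashtags_per_tweet = [0, 2, 5, 1]      # e.g. feature F3 (# of hashtags)
print(min_max(hashtags_per_tweet))     # [0.0, 0.4, 1.0, 0.2]
```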
\begin{table}[!t]
\centering
\caption{List of features used to incorporate user behaviour metadata}
\label{metaa}
\begin{tabular}{cl}
\hline
Feature ID & \multicolumn{1}{c}{Feature Description} \\ \hline
F1 & Tweet posted Date \\ \hline
F2 & \# of Emoticons \\ \hline
F3 & \# of Hashtags \\ \hline
F4 & \# of Exclamation Marks \\ \hline
F5 & \# of Question Marks \\ \hline
F6 & \# of Mentions \\ \hline
F7 & \# of Positive Words in Bing Liu Lexicon \\ \hline
F8 & \# of Negative Words in Bing Liu Lexicon \\ \hline
F9 & \# of Favorites \\ \hline
F10 & \# of Retweets \\ \hline
F11 & \# of User's Favorites \\ \hline
F12 & \# of Followers \\ \hline
F13 & \# of Friends \\ \hline
F14 & \# of User Listed \\ \hline
F15 & User Statuses \\ \hline
F16 & Profile Verified (Yes/No) \\ \hline
F17 & Profile Image (Yes/No) \\ \hline
\end{tabular}
\end{table}
Finally, we combined the above vectors to produce a vector that can resolve the language ambiguities described above. The final concatenated vector is then fed to the linear output layer for final prediction. We used softmax to have the distribution of the likelihood class at the last layer of the classifier and used cross-entropy as a loss function in our experiments.
\begin{table}[!b]
\centering
\caption{Dataset Distribution}
\label{data11}
\begin{tabular}{ccc}
\hline
Dataset & Vaccine Sentiment (VS1) & Vaccine Sentiment (VS2) \\ \hline
Positive & 6,683 & 8,965 \\ \hline
Negative & 1,084 & 1,976 \\ \hline
Neutral & 1,445 & 7,562 \\ \hline
Total & 9,212 & 18,503 \\ \hline
\end{tabular}
\end{table}
\section{Experiments}\label{sec:results}
We evaluated the performance of our approach against several SOTA approaches as baselines. A 10-fold cross-validation technique was used to evaluate the classification results. Accuracy, F1-score, precision and recall are reported to evaluate the performance of our model.
\begin{table*}[!t]
\centering
\caption{Comparison of the proposed method vs.\ the baselines; accuracy, F1-score, precision and recall averaged over 10 folds.}
\label{results}
\begin{tabular}{ccccc|cccc}
\hline
\multirow{4}{*}{Model\textbackslash Dataset} & \multicolumn{4}{c|}{\multirow{2}{*}{VS1}} & \multicolumn{4}{c}{\multirow{2}{*}{VS2}} \\
& \multicolumn{4}{c|}{} & \multicolumn{4}{c}{} \\ \cline{2-9}
& \multirow{2}{*}{Accuracy} & \multirow{2}{*}{F1-Score} & \multirow{2}{*}{Precision} & \multirow{2}{*}{Recall} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{F1-Score} & \multirow{2}{*}{Precision} & \multirow{2}{*}{Recall} \\
& & & & & & & & \\ \hline
Word2vec & 0.718 & 0.624 & 0.611 & 0.718 & 0.639 & 0.610 & 0.615 & 0.639 \\ \hline
Glove & 0.730 & 0.652 & 0.672 & 0.720 & 0.676 & 0.643 & 0.674 & 0.696 \\ \hline
BERT & 0.746 & 0.682 & 0.706 & 0.746 & 0.751 & 0.738 & 0.746 & 0.751 \\ \hline
RoBERTa & 0.764 & 0.714 & 0.733 & 0.764 & 0.765 & 0.757 & 0.762 & 0.765 \\ \hline
ALBERT & 0.747 & 0.681 & 0.711 & 0.747 & 0.739 & 0.726 & 0.732 & 0.739 \\ \hline
BioBERT & 0.738 & 0.664 & 0.690 & 0.738 & 0.741 & 0.725 & 0.737 & 0.741 \\ \hline
BERTweet & 0.749 & 0.681 & 0.719 & 0.749 & 0.750 & 0.737 & 0.747 & 0.750 \\ \hline
CT-BERT & 0.789 & 0.752 & 0.760 & 0.789 & 0.783 & 0.779 & 0.782 & 0.783 \\ \hline
CT-BERT+ Dep Emb+ BiGRU + Context Attention & 0.793 & 0.778 & 0.771 & 0.793 & 0.797 & 0.796 & 0.798 & 0.797 \\ \hline
Word2vec+ Dep Emb+ BiGRU + Context Attention +VAD & 0.718 & 0.683 & 0.670 & 0.718 & 0.731 & 0.729 & 0.730 & 0.731 \\ \hline
Glove+ Dep Emb+ BiGRU + Context Attention +VAD & 0.731 & 0.706 & 0.697 & 0.731 & 0.741 & 0.740 & 0.742 & 0.731 \\ \hline
BERT+ Dep Emb+ BiGRU + Context Attention +VAD & 0.750 & 0.729 & 0.720 & 0.750 & 0.764 & 0.764 & 0.764 & 0.764 \\ \hline
RoBERTa+ Dep Emb+ BiGRU + Context Attention +VAD & 0.760 & 0.747 & 0.739 & 0.760 & 0.782 & 0.782 & 0.784 & 0.782 \\ \hline
ALBERT+ Dep Emb+ BiGRU + Context Attention +VAD & 0.745 & 0.724 & 0.715 & 0.745 & 0.764 & 0.764 & 0.764 & 0.764 \\ \hline
BioBERT+ Dep Emb+ BiGRU + Context Attention +VAD & 0.734 & 0.718 & 0.710 & 0.734 & 0.754 & 0.754 & 0.757 & 0.754 \\ \hline
BERTweet+ Dep Emb+ BiGRU + Context Attention +VAD & 0.761 & 0.742 & 0.734 & 0.761 & 0.773 & 0.774 & 0.775 & 0.773 \\ \hline
CT-BERT+ Dep Emb+ BiGRU + Context Attention +VAD & 0.792 & 0.778 & 0.771 & 0.792 & 0.797 & 0.796 & 0.798 & 0.797 \\ \hline \hline
Proposed & \textbf{0.820 } & \textbf{0.807 } & \textbf{0.801 } & \textbf{0.820 } & \textbf{0.818 } & \textbf{0.819 } & \textbf{0.823} & \textbf{0.818 } \\ \hline \hline
\end{tabular}
\end{table*}
\subsection{Datasets}
The experiments were performed using two datasets of vaccine-related tweets, with class labels positive, neutral, and negative (see Table~\ref{data11}).
\begin{itemize}
\item \textbf{Vaccine Sentiment \#1 (VS1)}: Our first dataset contains tweets related to the dissemination of vaccine information on Twitter, assessing awareness and interaction among regular users in the United States (US), collected between January 12, 2017 and December 3, 2019. This dataset was collected and labelled by~\cite{dunn2020limited}. It contains 9,212 tweets in three classes: promoting vaccination (positive, 6,683 tweets), vaccine-critical (negative, 1,084 tweets) and neutral (1,445 tweets).
\item \textbf{Vaccine Sentiment \#2 (VS2)}: This dataset\footnote{https://github.com/digitalepidemiologylab/crowdbreaks-paper} contains measles- and vaccination-related U.S.-geolocated tweets gathered between July 2018 and January 2019 via the Twitter Streaming API, presented by Müller et al.~\cite{muller2019crowdbreaks}. It contains 18,503 tweets in three classes: pro-vaccine (positive, 8,965 tweets), anti-vaccine (negative, 1,976 tweets) and neutral (7,562 tweets).
\end{itemize}
\subsection{Experimental Settings}
\subsubsection{Pre-processing}
We corrected spelling mistakes; sentiment-aware tokenization was used to replace emojis with their associated words; further, we replaced emoticons with their associated meaning words following previous studies~\cite{naseem2020survey}. For hashtags, the hashtag symbol (\#) was removed, and we performed word segmentation to split the words in the hashtag. Other corrections included expanding contractions and normalizing URLs, digits, emails, user mentions and elongated words. We used \textit{ekphrasis}\footnote{https://github.com/cbaziotis/ekphrasis} and \textit{emoji}\footnote{https://pypi.org/project/emoji/}, open-source Python libraries, to enhance the quality of tweets. We also removed punctuation and repeated words using a regular expression.
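One of these normalisation steps, collapsing elongated words, can be illustrated with a regular expression. ekphrasis provides its own implementation; this is a minimal stand-in:

```python
import re

def normalize_elongated(text: str) -> str:
    """Collapse runs of 3+ repeated characters to two, e.g. 'soooo' -> 'soo'."""
    return re.sub(r"(.)\1{2,}", r"\1\1", text)

print(normalize_elongated("This is sooooo daaaangerous!!!!"))
# This is soo daangerous!!
```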
\begin{table*}[!b]
\centering
\caption{Comparison of our approach with other variants, replacing CK-BiGRU with BiGRU and Sentic-LSTM and using different combinations of the metadata features (F1-F17) given in Table~\ref{metaa}.}
\label{Variants}
\begin{tabular}{ccccc|cccc}
\hline
\multirow{2}{*}{Model/Dataset} & \multicolumn{4}{c|}{VS1} & \multicolumn{4}{c}{VS2} \\ \cline{2-9}
& Accuracy & F1-Score & Precision & Recall & Accuracy & F1-Score & Precision & Recall \\ \hline \hline
Proposed Model (PM) & \textbf{0.820} & \textbf{0.807} & \textbf{0.801 } & \textbf{0.820 } & \textbf{0.818 } & \textbf{0.819} & \textbf{0.823 } & \textbf{0.818} \\ \hline \hline
PM-metadata-CKBiGRU+BiGRU & 0.788 & 0.779 & 0.773 & 0.788 & 0.795 & 0.796 & 0.799 & 0.795 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F1-F8) & 0.798 & 0.783 & 0.776 & 0.798 & 0.797 & 0.796 & 0.797 & 0.797 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F2-F8) & 0.786 & 0.780 & 0.775 & 0.786 & 0.796 & 0.797 & 0.800 & 0.796 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F1-F6) & 0.790 & 0.778 & 0.772 & 0.790 & 0.791 & 0.792 & 0.795 & 0.791 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F7-F8) & 0.792 & 0.778 & 0.771 & 0.792 & 0.797 & 0.796 & 0.798 & 0.797 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F1-F6) & 0.792 & 0.778 & 0.771 & 0.792 & 0.796 & 0.796 & 0.797 & 0.796 \\ \hline
PM-CKBiGRU+BiGRU+Metadata (F1-F17) & 0.790 & 0.780 & 0.774 & 0.790 & 0.796 & 0.796 & 0.798 & 0.796 \\ \hline
PM-CKGRU+SenticLSTM+Metadata (F1-F8) & 0.795 & 0.780 & 0.778 & 0.795 & 0.799 & 0.798 & 0.801 & 0.799 \\ \hline
\end{tabular}
\end{table*}
\subsubsection{Parameters tuning}
We used the grid search optimization method to tune the model. A dropout rate of $0.5$ is applied at network connections to switch off neurons randomly. To reduce over-fitting and make our approach robust, we used $L_{2}=0.005$ regularisation to penalise large weights. We used 2 layers, each with a hidden size of 50. The classifier was trained with a batch size of 128 using the Adam optimizer with a learning rate of $0.005$ for 40 epochs. Finally, we used the cross-entropy loss function.
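Grid search over such hyperparameters amounts to enumerating the Cartesian product of candidate values and evaluating each configuration; a sketch with a hypothetical grid (the candidate values below are illustrative, not the paper's actual search space):

```python
from itertools import product

# Hypothetical search grid; the paper settles on dropout=0.5, L2=0.005, lr=0.005.
grid = {
    "dropout": [0.3, 0.5],
    "l2": [0.001, 0.005],
    "lr": [0.001, 0.005],
}

# One dict per configuration; each would be trained and scored by cross-validation.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 8
```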
\subsection{Baselines}
We compared our results with several word representation methods and SOTA architectures that have been extensively used for Twitter sentiment analysis. We also evaluated a number of variants of our proposed approach to highlight our technical contribution. We used our own implementations of these baselines and grid-search cross-validation to derive the models' optimal settings.
\begin{table*}[!t]
\centering
\caption{Examples of predictions by the proposed method compared with the gold truth.}
\label{predictions}
\begin{tabular}{lcc}
\hline
\multicolumn{1}{c}{Tweet} & Gold Truth & Predicted Label \\ \hline \hline
The same people who mock anti vaxxers are the same ones who want open borders. \#elpaso & Neutral & Neutral \\
@username Getting my puppers vaccinated. Safe for another year. & Positive & Positive \\
Peep dont realize how dangerous these vaccinations are. https://t.co/VY2ht6ub5Y & Negative & Negative \\
The vaccines today are so dangerous and obviously worthless!! https://t.co/x8CZuX4c8n & Negative & Negative \\ \hline \hline
\end{tabular}
\end{table*}
We used representations from pre-trained Word2Vec~\cite{NIPS2013_5021}, GloVe~\cite{pennington2014glove}, BERT~\cite{devlin-etal-2019-bert}, RoBERTa~\cite{robertaliu2019roberta}, a distilled version of BERT (DistilBERT)~\cite{DBLP:journals/corr/abs-1910-01108}, A Lite BERT (ALBERT)~\cite{lan2019albert}, BioBERT~\cite{lee2019biobert}, BERTweet~\cite{nguyen2020bertweet} and CT-BERT~\cite{muller2020covid}, and fed these representations to a linear classifier. In addition, we also compared our results with neural-network-based classifiers with attention: we used the representations from the same word representation models and linguistic features with a BiGRU with context-aware attention, with and without sentiment embedding. Finally, we compared the results (Table V) of our approach with other variants obtained by replacing components such as CK-BiGRU with BiGRU and Sentic LSTM~\cite{ma2018targeted} and by using different combinations of metadata.
\subsection{Results and Discussion}
The proposed method achieved an F1-score of 0.807 on the VS1 dataset and 0.819 on the VS2 dataset, compared to the next highest performance of 0.778 and 0.798, respectively (Table~\ref{results}).
Replacing CK-BiGRU with BiGRU reduced the F1-score from 0.807 to 0.779 on the VS1 dataset and from 0.819 to 0.795 on the VS2 dataset, showing that CK-BiGRU contributed to the overall performance (Table~\ref{Variants}). Similarly, replacing CK-BiGRU with SenticLSTM reduced the performance from 0.807 to 0.780 on VS1 and from 0.819 to 0.798 on VS2 (Table~\ref{Variants}). Finally, replacing the (F1-F8) combination of metadata features with other combinations also reduced the performance, showing that the (F1-F8) metadata features contributed to the overall performance (Table~\ref{Variants}). Overall, the results show that the use of CK-BiGRU, the (F1-F8) metadata combination, and domain-specific contextual word representations together lead to an increase in performance.
\begin{figure}[!htpb]
\centering
\includegraphics[width=1\linewidth]{ablationV1.png}
\caption{An ablation analysis}
\label{ablation}
\end{figure}
\subsection{Ablation Analysis}
Similarly, the ablation testing showed that it is the combination of components that produces the increase in performance (Fig.~\ref{ablation}). The results drop marginally in both cases when we exclude sentiment and when we replace CK-BiGRU with BiGRU in our approach. Further, the empirical analysis reveals a noticeable drop in performance when we use our pre-trained domain-specific LM as a linear classifier and remove all other components from our model.
\begin{figure}[!hpbt]
\centering
\includegraphics[width=1\linewidth]{heat.png}
\caption{Visualisation of GCM Attention heat-map: The color intensity corresponds to the weight given to each word by the GCM-attention mechanism.}
\label{attheatmap}
\end{figure}
An attention-based heat-map visualization illustrates which words are most important to the prediction (Fig.~\ref{attheatmap}). Examples of the predictions made by the proposed approach also illustrate the performance (Table~\ref{predictions}).
\section{Conclusion}\label{con}
In this study, we presented an end-to-end approach to tweet-level vaccine sentiment classification and demonstrated that our approach outperformed comparative SOTA approaches. Ablation testing shows that each of the components in the proposed approach contributes to the increase in performance over previous approaches. We explicitly modelled the domain-specific contextual word representation with commonsense knowledge and sentiment information, generating a more accurate representation. A key contribution of the work is the first large-scale pre-trained LM for English-language vaccine-related tweets. In addition, the novel CK-BiGRU extension demonstrates how commonsense knowledge can be incorporated into the sequence encoder. The implications of this work include ways to improve public health surveillance related to vaccine hesitancy, and the approach may be extended to support other applications where public opinion has the potential to affect population health outcomes.
\section{Acknowledgment}
This research is supported by Australian Government Research Training Program (RTP).
\bibliographystyle{plain}
\section{Introduction}
\label{Intro}
The stochastic differential equations with Markovian switching (SDEwMS) are widely used in many real-life situations, see, for example, \cite{bao2016permanence}, \cite{mao2006stochastic}, \cite{mao2006hierarchical}, \cite{yin1994hybrid}, \cite{zhang1998nonlinear}, \cite{zhang2001stock} and the references therein.
The existence and uniqueness of the solution of SDEwMS are well-known in the literature, see \cite{dareiotis2016tamed} and \cite{mao2006stochastic}.
However, the explicit solutions of such equations are often unknown, which makes the numerical approximations of SDEwMS an important subject of investigation.
The numerical schemes of order $0.5$ for SDEwMS, namely the Euler scheme, tamed Euler scheme and implicit Euler scheme, have received significant attention in the literature, see \cite{mao2006stochastic}, \cite{nguyen2017pathwise}, \cite{dareiotis2016tamed} and \cite{nguyen2018tamed}.
Recently, order $1.0$ schemes such as (tamed) explicit Milstein-type scheme for SDEwMS and their strong rate of convergence have been studied in \cite{kumar2020tamedmilstein}, \cite{kumar2021milstein} and \cite{nguyen2017milstein}.
However, no attention is given to the investigation of the higher-order numerical schemes for SDEwMS, perhaps because of the non-availability of the It\^o-Taylor expansion for functions depending on a Markov chain.
The derivation of the It\^o-Taylor expansion for SDEwMS is not a straightforward extension of the corresponding results on SDEs (see \cite{kloeden1992numerical}), mainly because the coefficients of SDEwMS depend additionally on a Markov chain and there is no notion of derivative of a Markov chain dependent function with respect to the Markov chain.
In this article, we present novel strategies to address these challenges to derive and demonstrate the It\^o-Taylor expansion for SDEwMS.
More precisely, in the derivation of the It\^o-Taylor expansion for the traditional SDEs, as discussed in \cite{kloeden1992numerical}, authors deal with two types of integrals, namely, Riemann integral and It\^o's integral.
However, to derive the It\^o-Taylor expansion for SDEwMS, we have to deal with one more form of integral, which is the integral with respect to the optional process $\{[M_{i_0 k_0}](t):t\in[0,T]\}$ (see Subsection \ref{sub:Martingale with MS}) associated with the Markov chain.
Thus, the techniques developed for SDEs in \cite{kloeden1992numerical} cannot be extended in a straightforward manner to SDEwMS, which necessitates the development of novel strategies.
Indeed, multi-indices, multiple integrals, hierarchical and remainder sets are defined by taking into consideration the new integrals with respect to $\{[M_{i_0 k_0}](t):t\in[0,T]\}$ which brings additional complexity in the derivation and proofs of the main results on the It\^o-Taylor expansion and the explicit $\gamma\in\{n/2:n\in\mathbb{N}\}$-order scheme for SDEwMS.
Here, it is worth mentioning that designing and analysis become difficult because of the random switching of the Markov chain and entangling of the continuous dynamics of the state and discrete events arising due to switching of the Markov chain.
Further, if the state space of the Markov chain is a singleton set, then SDEwMS become SDEs. In such a case, our It\^o-Taylor expansion reduces to the classical It\^o-Taylor expansion for SDEs given in \cite{kloeden1992numerical}.
We provide an application of the It\^o-Taylor expansion in the derivation of higher-order numerical schemes for SDEwMS.
We show that the strong rate of convergence of the general numerical scheme for SDEwMS is $\gamma\in\{n/2: n \in \mathbb{N}\}$.
For a Markov chain with singleton state space, \textit{i.e.}, when SDEwMS reduce to SDEs, our $\gamma\in\{n/2: n \in \mathbb{N}\}$-order scheme is the same as the $\gamma\in\{n/2: n \in \mathbb{N}\}$-order scheme discussed in \cite{kloeden1992numerical}.
In this case, the regularity requirement on the coefficients is much weaker in our setting when compared with the corresponding results for SDEs given in \cite{kloeden1992numerical}.
The paper is organized as follows.
The formulation and preliminaries of the article are given in Section \ref{sec:formulation}.
In Section \ref{sec:notations}, we introduce notations and definitions that are used throughout the paper.
In Section \ref{Main Result}, the main results on the It\^o-Taylor expansion and its application in $\gamma\in\{n/2:n\in\mathbb{N}\}$-order numerical schemes for SDEwMS are stated.
The technical lemmas required to prove the It\^o-Taylor expansion for SDEwMS, including the proof of the It\^o-Taylor expansion, are given in Section \ref{sec:ito}.
In the last section, we explain the derivation of the $\gamma\in \{n/2:n\in\mathbb{N}\}$-order numerical scheme and establish its moment stability and the strong rate of convergence.
\section{Formulation and preliminaries}
\label{sec:formulation}
Let $(\Omega,\mathcal{F}, P)$ be a complete probability space.
Assume that $W:=\{(W^1(t),W^2(t), \ldots,W^m(t)): t\geq 0\}$ is an $\mathbb{R}^m$-valued standard Wiener process and denote the natural filtration of $W$ by $\mathbb{F}^W:=\{\mathcal{F}^W_t:t\geq 0\}$.
For a fixed $m_0\in \mathbb{N}$, define a set $\mathcal{S}:=\{1,\ldots,m_0 \}$ and an $m_0\times m_0$ matrix $Q:=\{q_{i_0k_0}: i_0, k_0 \in \mathcal{S}\}$ such that $q_{i_0i_0}=-\displaystyle \sum_{k_0\neq i_0 \in \mathcal{S}} q_{i_0k_0}$ for any $i_0\in \mathcal{S}$ where $q_{i_0k_0}\geq 0$ for $i_0\neq k_0$.
Now, consider a continuous-time Markov chain $\alpha:=\{\alpha(t):t\geq 0\}$ with state space $\mathcal{S}$ and generator $Q$.
Clearly, the transition probability of $\alpha$ is given by,
\begin{align}
P(\alpha(t+\delta)=k_0|\alpha(t)=i_0)=
\begin{cases}
q_{i_0k_0}\delta+o(\delta), &\text{if } i_0\neq k_0, \\
1+q_{i_0k_0}\delta+o(\delta), &\text{if } i_0= k_0,
\end{cases} \notag
\end{align}
for any $t \geq 0$, $i_0, k_0 \in \mathcal{S}$ where $\delta>0$.
Also, let $\mathbb{F}^\alpha:=\{\mathcal{F}^\alpha_t:t\geq 0\}$ be the natural filtration of $\alpha$.
We assume that $\alpha$ is independent of $W$.
Define $\mathcal{F}_t=\mathcal{F}_t^W \vee \mathcal{F}_t^\alpha$ for any $t\geq 0$ and equip the probability space $(\Omega,\mathcal{F}, P)$ with the filtration $\mathbb{F}:=\{\mathcal{F}_t:t\geq 0\}$.
For $T>0$, consider a $d$-dimensional stochastic differential equation with Markovian switching (SDEwMS) on $(\Omega,\mathcal{F}, \mathbb{F}, P)$, given by,
\begin{align} \label{eq:sdems}
X(t)=X_0+\int^t_0b(X(s),\alpha(s))ds+\int^t_0 \sigma(X(s),\alpha(s))dW(s)
\end{align}
almost surely for any $t\in[0,T]$ where initial value $X_0\in \mathbb{R}^d$ is an $\mathcal{F}_0$-measurable random variable and $b:\mathbb{R}^d\times \mathcal{S} \mapsto \mathbb{R}^d$ and $\sigma:\mathbb{R}^d\times \mathcal{S} \mapsto \mathbb{R}^{d \times m}$ are measurable functions.
We make the following assumptions for existence, uniqueness and moment stability of SDEwMS \eqref{eq:sdems}.
\begin{assumption}
\label{ass:initial data}
$E|X_0|^{2}, E|Y_0|^{2} < \infty$ and $E|X_0-Y_0|^2\leq L h^{2\gamma}$ where $L>0$ is a constant, $Y_0$ is the initial value of the numerical scheme \eqref{eq:gen.scheme}, $h$ its step-size and $\gamma$ its order (see Subsection \ref{sub:scheme}).
\end{assumption}
\begin{assumption}
\label{ass: b sigma lipschitz}
There exists a constant $L>0$ such that,
\begin{align*}
|b^k(x,i_0)-b^k(y,i_0)|+|\sigma^{(k,j)}(x,i_0)-\sigma^{(k,j)}(y,i_0)| \leq L|x-y|
\end{align*}
for all $i_0\in \mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x,y\in \mathbb{R}^d$.
\end{assumption}
The result on the existence and uniqueness of the strong solution for SDEwMS is stated in the following theorem; its proof can be found in \cite{mao2006stochastic} (see Theorem 3.1.3).
\begin{thm}\label{thm:true moment}
Let Assumptions \ref{ass:initial data} and \ref{ass: b sigma lipschitz} be satisfied. Then, there exists a unique continuous solution $\{X(t): t\in [0,T]\}$ of SDEwMS \eqref{eq:sdems}. Moreover, for any $p\geq 2$, $h>0$ and $s\in[0,T-h]$, the following hold,
\begin{align*}
E\Big(\sup_{t\in [0,T]}|X(t)|^{p}\Big|\mathcal{F}_T^{\alpha}\Big)& \leq C
\\
E\Big( \sup_{t\in [s,s+h]}|X(t)-X(s)|^p\Big|\mathcal{F}_T^{\alpha} \Big)& \leq Ch^{p/2}
\end{align*}
where the positive constant $C$ is independent of $h$.
\end{thm}
\subsection{Martingales associated with the Markov chain}\label{sub:Martingale with MS}
Following \cite{nguyen2017milstein}, we define
\begin{align}
[M_{i_0k_0}](t):=\sum_{s\in [0, t]}\mathbbm{1}{\{\alpha(s-)=i_0\}} &\mathbbm{1}{\{\alpha(s)=k_0\}},\quad \langle M_{i_0k_0}\rangle(t):=\int_0^t q_{i_0k_0}\mathbbm{1}{\{\alpha(s-)=i_0\}} ds\nonumber
\\
M_{i_0k_0}(t)&:= [M_{i_0 k_0}](t)-\langle M_{i_0k_0}\rangle(t) \nonumber
\end{align}
almost surely for any $t\in[0,T]$ and $i_0\neq k_0 \in \mathcal{S}$.
Clearly, $M_{i_0k_0}(0)=0$ (a.s.) and $\{M_{i_0k_0}(t): t\in[0,T]\}$ is a purely discontinuous square integrable martingale with respect to the filtration $\mathbb{F}^\alpha$.
Also, $\{[M_{i_0 k_0}](t):t\in[0,T]\}$ is an optional process and $\{\langle M_{i_0k_0}\rangle(t):t\in[0,T]\}$ is its predictable quadratic variation process.
Moreover, the following orthogonality relations (of quadratic covariations) hold,
\begin{align*}
[W^i, W^j]=0 \,(i\neq j), \, [M_{i_0k_0}, W^i]=0, \mbox{ and } [M_{i_0k_0}, M_{i_1k_1}]=0 \,\,((i_0k_0)\neq (i_1k_1))
\end{align*}
for any $i,j \in \{1,2,\ldots,m\}$ and $i_0, k_0, i_1, k_1 \in \mathcal{S}$.
For convenience, we take $M_{i_0i_0}(t)=0$ for any $i_0\in\mathcal{S}$ and $t\in[0,T]$.
The proof of the following lemma appears in \cite{nguyen2017milstein} (see Lemma 4.1).
\begin{lemma}\label{lem:rateMS}
Let $q:=\max \{-q_{i_0i_0}:i_0 \in \mathcal{S}\}$ and let $N^{(s,t]}$ denote the number of jumps of the Markov chain $\alpha$ in the interval $(s,t]$ for any $s<t\in[0,T]$. Then,
\begin{enumerate}[label=(\alph*)]
\item $P(N^{(s,t]}\geq N)\leq q^N(t-s)^N$ whenever $N\geq 1$, and
\item $EN^{(s,t]}\leq C (t-s)$ whenever $t-s<1/(2q)$ where $C>0$ is a constant independent of $t-s$.
\end{enumerate}
\end{lemma}
\section{Notations and definitions}\label{Notations and definitions}
\label{sec:notations}
For sets $A$ and $B$, define $A\setminus B:=\{x:x\in A \mbox{ and } x\notin B\}$, $A\cap B:=\{x:x\in A,x\in B\}$ and $A\cup B:=\{x:x\in A \: \text{ or }\: x\in B\}$.
$A^{(i,j)}$ and $A^{(j)}$ stand for the $(i,j)$-th element and the $j$-th column of a matrix $A$ respectively and the $k$-th element of a vector $v$ is denoted by $v^k$.
$xy$ denotes the inner product of vectors $x$ and $y$.
The indicator function of a set $A$ is denoted by $\mathbbm{1}A$.
$\varphi$ stands for the empty set and, by convention, the sum over an empty set is taken to be zero.
$N^{(s,t]}$ is the number of jumps and $\tau_1,\ldots,\tau_{N^{(s,t]}}$ are the jump times of the Markov chain $\alpha$ in the interval $(s,t]$ for any $s<t\in [0,T]$.
A generic constant is denoted by $C$; it can vary from place to place and does not depend on the step-size.
\subsection{Multi-indices} \label{sub:Multi-indices}
We call a row vector $\beta=(j_1,\ldots,j_l)$ a \textit{multi-index} of length $l(\beta)=l \in\mathbb{N}$ if $ j_i\in\{N_1, \ldots,N_{\mu},\bar{N}_{\mu },0,1,\ldots,m\}$ for all $i\in\{1,\ldots,l\}$ and the following condition holds,
\begin{itemize}
\item[(a)] if $j_{i}\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$, then $j_{i-1}\notin\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$ for all $ i\in \{2,\ldots,l\}$
\end{itemize}
where $\mu\in\mathbb{N}$ is fixed.
Here, $N_i$ signifies the number $i$ and $\bar{N}_i$, the number $i+1$ for any $i\in\mathbb{N}$.
A \textit{multi-index $\beta$ of length zero} is denoted by $\nu$, \textit{i.e.}, $l(\nu)=0$.
Here, each component of the multi-index $\beta$ represents a type of integral in multiple integrals, which becomes clearer when we define multiple integrals in Subsection \ref{sub:multiple integral}.
Furthermore, the positive integer $\mu$ is related to the strong rate $\gamma\in \{n/2:n\in\mathbb{N}\}$ of convergence of the numerical schemes considered in this paper (see Subsection \ref{sub:scheme}).
Also, let $\mathcal{M}:=\{\beta: l(\beta)\in\mathbb{N}\cup\{0\}\}$ denote the set of all multi-indices of arbitrary length.
We divide the set $\mathcal{M}$ of all multi-indices into disjoint sets $\mathcal{M}_1$, $\mathcal{M}_2$ and $\mathcal{M}_3$ as follows,
\begin{itemize}
\item[] $\mathcal{M}_1:=\{(j_1,\ldots,j_l)\in\mathcal{M}:j_i\in\{0,1,\ldots,m\}\forall\, i\in\{1,\ldots,l\}\}\cup \{\nu\}$,
\item[] $\mathcal{M}_2:=\{(j_1,\ldots,j_l)\in\mathcal{M}\setminus\mathcal{M}_1:j_1\in\{0,1,\ldots,m\}\}$,
\item[] $\mathcal{M}_3:=\{(j_1,\ldots,j_l)\in\mathcal{M}\setminus\mathcal{M}_1:j_1\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}\}$.
\end{itemize}
For any $\beta=(j_1,\ldots,j_l)\in\mathcal{M}\setminus\{\nu\}$, the deletion of its first component is represented by $-\beta=(j_2,\ldots,j_l)$ and the deletion of its last component is represented by $\beta-=(j_1,\ldots,j_{l-1})$.
Notice that $-\beta=\beta-=\nu$ for any $\beta\in\mathcal{M}$ of length $l(\beta)=1$.
For $\beta=(j_1,\ldots,j_k), \bar{\beta}=(\bar{j_1},\ldots,\bar{j_l}) \in \mathcal{M}$, the \textit{concatenation operation} $\star$ is defined by
\begin{itemize}
\item[] $\beta\star\bar{\beta}:=(j_1,\ldots,j_k,\bar{j_1},\ldots,\bar{j_l})\in \mathcal{M}$ with $\nu\star\bar{\beta}=\bar{\beta}$ and $\beta\star\nu=\beta$.
\end{itemize}
Also, for $\beta=(j_1,\ldots,j_l) \in\mathcal{M}$,
\begin{itemize}
\item[] $n(\beta)$ : the number of components of $\beta$ belonging to $\{1,\ldots,m\}$,
\item[] $\bar{n}(\beta)$ : the number of components of $\beta$ equal to $0$,
\item[] $[n](\beta)$ : the number of components of $\beta$ belonging to $\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}$,
\item[]$\eta(\beta):=n(\beta)+2\bar{n}(\beta)+\mu_{\max}(\beta)$ where $\mu_{\max}(\beta)$ is the largest number signified by those components $j_i$ of $\beta$ which belong to $\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}$, with $\mu_{\max}(\beta):=0$ if $\beta$ has no such component. Clearly, $\eta(\nu)=0$.
\end{itemize}
Notice that $\eta(\beta)$ may coincide for two different multi-indices $\beta\in\mathcal{M}$ of different lengths. For example, $\eta((1,N_1))=\eta((N_1,1,N_1))=2$.
Further, $\{\beta\in\mathcal{M}:\eta(\beta)<0\}$ is empty.
\begin{example}
Let $m=4$ and consider a multi-index $\beta=(0,N_2,2,1,N_3,0)$ of length $6$.
Clearly, $\beta$ satisfies condition (a) mentioned above.
Then, $-\beta=(N_2,2,1,N_3,0)$ and $\beta-=(0,N_2,2,1,N_3)$.
Also, $n(\beta)=2$, $\bar{n}(\beta)=2$ and $\mu_{\max}(\beta)=3$ (the larger of the numbers signified by $N_2$ and $N_3$), which yields $\eta(\beta)=9$.
Further, for $\bar{\beta}=(4,0,0,N_3,1,\bar{N}_3)$, $$\beta\star\bar{\beta}=(0,N_2,2,1,N_3,0,4,0,0,N_3,1,\bar{N}_3) \in \mathcal{M}.$$
Clearly, $\beta\star\bar{\beta}$ satisfies condition (a) and is a multi-index of length $12$.
\end{example}
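The bookkeeping of this subsection is elementary to mechanize, which is convenient when checking the examples. The following Python sketch is an illustration only: the encoding of the symbols $N_r$ and $\bar N_\mu$ as tagged tuples is our own choice. It implements condition (a), the deletion and concatenation operations and the quantities $n(\beta)$, $\bar n(\beta)$, $\mu_{\max}(\beta)$ and $\eta(\beta)$ (taking $\mu=3$), and reproduces the values computed in the example above.

```python
MU = 3                                     # fixed mu, as in the example

def is_chain(j):                           # is j in {N_1,...,N_mu, Nbar_mu}?
    return isinstance(j, tuple)

def signified(j):                          # N_r signifies r; Nbar_mu signifies mu + 1
    tag, r = j
    return r if tag == 'N' else r + 1

def is_multi_index(beta):                  # condition (a): no two consecutive chain-type components
    return all(not (is_chain(beta[i]) and is_chain(beta[i - 1]))
               for i in range(1, len(beta)))

def n(beta):                               # components in {1,...,m}
    return sum(1 for j in beta if not is_chain(j) and j != 0)

def nbar(beta):                            # components equal to 0
    return sum(1 for j in beta if j == 0)

def mu_max(beta):                          # largest signified number, 0 if none
    vals = [signified(j) for j in beta if is_chain(j)]
    return max(vals) if vals else 0

def eta(beta):
    return n(beta) + 2 * nbar(beta) + mu_max(beta)

N = lambda r: ('N', r)                     # N_r
Nbar = ('Nbar', MU)                        # Nbar_mu

beta = (0, N(2), 2, 1, N(3), 0)            # the multi-index of the example (m = 4)
barbeta = (4, 0, 0, N(3), 1, Nbar)
concat = beta + barbeta                    # concatenation is tuple concatenation
```

Deletion of the first and last components corresponds to the tuple slices `beta[1:]` and `beta[:-1]`, respectively.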
\subsection{Operators}
\label{sub:Operators}
The following operators are used throughout this article,
\begin{align*}
L^0_{i_0} & :=\sum^d_{k=1}b^k(\cdot,i_0)\frac{\partial }{\partial x^k}+\frac{1}{2}\sum^d_{k,l=1}\sum^m_{j=1}\sigma^{(k,j)}(\cdot,i_0)\sigma^{(l,j)}(\cdot,i_0)\frac{\partial^2 }{\partial x^k\partial x^l},
\\
L^j_{i_0}& :=\sum^d_{k=1}\sigma^{(k,j)}(\cdot,i_0)\frac{\partial }{\partial x^k}, \,\, j\in\{1,2,\ldots,m\}
\end{align*}
for any $i_0\in \mathcal{S}$.
Further, for $\beta\in\{(j_1,\ldots,j_l)\in\mathcal{M}:j_i\in\{0,1,\ldots,m\}\,\forall\, i\in\{1,\ldots,l\}\}$, we define
\begin{align*}
J^{\beta}_{i_0}&:=L^{j_1}_{i_0}\cdots L^{j_l}_{i_0}
\\
J^{\beta}_{i_0k_0}&:=J^{\beta}_{k_0}-J^{\beta}_{i_0}
\end{align*}
for any $i_0,k_0\in \mathcal{S}$.
If $\beta=\nu$, then $J_{i_0}^{\beta}$ is taken as the identity operator.
\subsection{Multiple integrals}\label{sub:multiple integral}
Let $\beta=(j_1,\ldots,j_l)\in\mathcal{M}$ and let $f:\mathbb{R}^d\times\mathcal{S}\mapsto\mathbb{R}$ be an $(n(\beta)+2\bar{n}(\beta))$-times continuously differentiable function in its first argument.
The multiple integral $I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}$ can be defined recursively as,
\begin{align}
&I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}\notag
\\
&:=
\begin{cases}
f(X(t),\alpha(t))& \mbox{if } l(\beta)=0, \mbox{\textit{i.e.}, } \beta=\nu
\\
\displaystyle \int^t_s I_{\beta-}[L^{j_l}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,s_1}dW^{j_l}(s_1) & \mbox{if } j_l\in\{0,1,\ldots,m\}
\\
\displaystyle \sum_{i_0\neq k_0} \int^t_s\mathbbm{1}\{N^{(s,t]}=r \} I_{\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,s_1}d[M_{i_0k_0}](s_1) &\mbox{if } j_l=N_r,\, r\in\{1,\ldots,\mu\}
\\
\displaystyle \sum_{i_0\neq k_0} \int^t_s\mathbbm{1}\{N^{(s,t]}>\mu \} I_{\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,s_1}d[M_{i_0k_0}](s_1) & \mbox{if } j_l=\bar{N}_{\mu }
\end{cases} \notag
\end{align}
almost surely for all $s<t\in[0,T]$ where $dW^{0}(s_1):=ds_1$.
Notice that if the state of the Markov chain $\alpha$ is fixed in $f$, say $i_0\in\mathcal{S}$, then
\begin{align}
I_{\beta}[f(X(\cdot),i_0)]_{s,t}:=&
\begin{cases}
f(X(t),i_0)& \mbox{if } l(\beta)=0
\\
\displaystyle \int^t_s I_{\beta-}[L^{j_l}_{\alpha(\cdot)}f(X(\cdot),i_0)]_{s,s_1}dW^{j_l}(s_1) & \mbox{if } j_l\in\{ 0,1,\ldots,m\}
\end{cases} \notag
\end{align}
almost surely for any $s<t\in[0,T]$.
Furthermore, we define,
\begin{align*}
&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}\notag
\\
&:=
\begin{cases}
f(X(s),\alpha(s))& \mbox{if } l(\beta)=0, \mbox{\textit{i.e.}, } \beta=\nu
\\
\displaystyle \int^t_s I_{\beta-}[L^{j_l}_{\alpha(\cdot)}f(X(s),\alpha(\cdot))]_{s,s_1}dW^{j_l}(s_1) & \mbox{if } j_l\in\{0,1,\ldots,m\}
\\
\displaystyle \sum_{i_0\neq k_0} \int^t_s\mathbbm{1}\{N^{(s,t]}=r \} I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,s_1}d[M_{i_0k_0}](s_1) &\mbox{if } j_l=N_r,\, r\in\{1,\ldots,\mu\}
\\
\displaystyle \sum_{i_0\neq k_0} \int^t_s\mathbbm{1}\{N^{(s,t]}>\mu \} I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,s_1}d[M_{i_0k_0}](s_1) & \mbox{if } j_l=\bar{N}_{\mu }
\end{cases} \notag
\end{align*}
almost surely for all $s<t\in[0,T]$.
As before, if the state of $\alpha$ is fixed in $f$, say $i_0\in\mathcal{S}$, then
\begin{align}
I_{\beta}[f(X(s),i_0)]_{s,t}:=&
\begin{cases}
f(X(s),i_0)& \mbox{if } l(\beta)=0
\\
\displaystyle \int^t_s I_{\beta-}[L^{j_l}_{\alpha(\cdot)}f(X(s),i_0)]_{s,s_1}dW^{j_l}(s_1) & \mbox{if } j_l\in\{ 0,1,\ldots,m\}
\end{cases} \notag
\end{align}
almost surely for any $s<t\in[0,T]$.
Clearly, for $\beta\in\{(j_1,\ldots,j_l)\in\mathcal{M}:j_i\notin\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}\forall\, i\in\{1,\ldots,l\}\}$,
\begin{align}
I_{\beta}[f(X(s),\alpha(s))]_{s,t}:=&
\begin{cases}
f(X(s),\alpha(s))& \mbox{if } l(\beta)=0
\\
\displaystyle \int^t_s I_{\beta-}[L^{j_l}_{\alpha(s)}f(X(s),\alpha(s))]_{s,s_1}dW^{j_l}(s_1) & \mbox{if } j_l\in\{ 0,1,\ldots,m\}
\end{cases} \notag
\end{align}
almost surely for any $s<t\in[0,T]$.
We assume that all of the above multiple integrals are well defined.
\begin{remark}
When the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then the SDEwMS \eqref{eq:sdems} reduces to an SDE and the integrals corresponding to the components $\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}$ in the multiple integrals vanish.
In such a situation, the multi-indices and multiple integrals defined above reduce to the case as discussed in Section $5.2$ of \cite{kloeden1992numerical}.
The components $\{N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}$ in multi-indices and the corresponding multiple integrals appear due to the switching of the Markov chain.
\end{remark}
\begin{example}
Let $\beta=(N_1,k,N_1)\in\mathcal{M}$ for a $k\in\{1,\ldots,m\}$.
Then, using the definitions of multiple integrals, one has
\begin{align*}
&\hspace{-0.32cm} I_{(N_1,k,N_1)}[f(X(\cdot),\alpha(\cdot))]_{s,t}=\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \} I_{(N_1,k)} [f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,s_1} d[ M_{i_0 k_0}](s_1)
\\
&=\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1} I_{(N_1)} [L^k_{\alpha(\cdot)}f(X(\cdot),k_0)-L^k_{\alpha(\cdot)}f(X(\cdot),i_0)]_{s,s_2} dW^k(s_2)d[ M_{i_0 k_0}](s_1)
\\
&=\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1}\sum_{i_1\neq k_1}\int_{s}^{s_2}\mathbbm{1}\{N^{(s,s_2]}=1 \}
\\
&\qquad I_{\nu}[(L^k_{k_1}f(X(\cdot),k_0)-L^k_{i_1}f(X(\cdot),k_0))-(L^k_{k_1}f(X(\cdot),i_0)-L^k_{i_1}f(X(\cdot),i_0))]_{s,s_3}
\\
&\qquad d[ M_{i_1 k_1}](s_3) dW^k(s_2)d[ M_{i_0 k_0}](s_1)
\\
&=\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1}\sum_{i_1\neq k_1}\int_{s}^{s_2}\mathbbm{1}\{N^{(s,s_2]}=1 \}
\\
&\qquad ((L^k_{k_1}f(X(s_3),k_0)-L^k_{i_1}f(X(s_3),k_0))-(L^k_{k_1}f(X(s_3),i_0)-L^k_{i_1}f(X(s_3),i_0)))
\\
&\qquad d[ M_{i_1 k_1}](s_3) dW^k(s_2)d[ M_{i_0 k_0}](s_1)
\end{align*}
almost surely for any $s<t\in[0,T]$.
\end{example}
\begin{example}
Let $\beta=(0,N_1)\in\mathcal{M}$.
Then, the definition of multiple integrals yields,
\begin{align*}
I_{(0,N_1)}[f(&X(s),\alpha(\cdot))]_{s,t}=\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \} I_{(0)} [f(X(s),k_0)-f(X(s),i_0)]_{s,s_1} d[ M_{i_0 k_0}](s_1)
\\
=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1}I_{\nu} [L^0_{\alpha(\cdot)}f(X(s),k_0)-L^0_{\alpha(\cdot)}f(X(s),i_0)]_{s,s_2}ds_2 d[ M_{i_0 k_0}](s_1)
\\
=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1}(L^0_{\alpha(s)}f(X(s),k_0)-L^0_{\alpha(s)}f(X(s),i_0))ds_2 d[ M_{i_0 k_0}](s_1)
\end{align*}
almost surely for any $s<t\in[0,T]$.
\end{example}
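For a concrete check of the last identity, note that on the event of exactly one jump in $(s,t]$, say from $i_0$ to $k_0$ at time $\tau_1$, the inner integrand is constant in $s_2$ and the atom of $d[M_{i_0k_0}]$ at $\tau_1$ gives $I_{(0,N_1)}[f(X(s),\alpha(\cdot))]_{s,t}=(\tau_1-s)\big(L^0_{\alpha(s)}f(X(s),k_0)-L^0_{\alpha(s)}f(X(s),i_0)\big)$. The following Python sketch confirms this value for an assumed scalar model with $d=m=1$, $b(x,i)=i$, $\sigma(x,i)=1$ and test function $f(x,i)=ix^2$ (all our own illustrative choices), evaluating the operator $L^0_{i_0}$ by central finite differences.

```python
def b(x, i):      # assumed drift, our choice
    return float(i)

def sigma(x, i):  # assumed diffusion, our choice
    return 1.0

def f(x, i):      # assumed test function, our choice
    return i * x * x

def L0(g, j, i, x, h=1e-5):
    """Operator L^0_j applied to g(., i), via central finite differences."""
    gx = (g(x + h, i) - g(x - h, i)) / (2 * h)
    gxx = (g(x + h, i) - 2 * g(x, i) + g(x - h, i)) / (h * h)
    return b(x, j) * gx + 0.5 * sigma(x, j) ** 2 * gxx

# one jump from i0 = 1 to k0 = 2 at time tau1 in (s, t]
s, t, tau1, x0, i0, k0 = 0.0, 1.0, 0.4, 0.5, 1, 2

# on {N^{(s,t]} = 1}, the atom of d[M_{i0 k0}] at tau1 picks up
# the inner time integral of length (tau1 - s)
I = (tau1 - s) * (L0(f, i0, k0, x0) - L0(f, i0, i0, x0))
```

With these choices $L^0_j f(x,i)=2ijx+i$, so the difference equals $2x_0+1=2$ and $I=(\tau_1-s)\cdot 2=0.8$.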
\subsection{Hierarchical and remainder sets}\label{sub:hierarchical and reminder}
A subset $\mathcal{A}$ of $\mathcal{M}$ is called a \textit{hierarchical set} if $\displaystyle \sup_{\beta\in\mathcal{A}}l(\beta)<\infty$ and $-\beta\in\mathcal{A}$ for every $\beta\in\mathcal{A}\setminus\{\nu\}$.
The \textit{remainder set} $\mathcal{B}(\mathcal{A})$ corresponding to a hierarchical set $\mathcal{A}$ is defined as,
\begin{itemize}
\item[] $\mathcal{B}(\mathcal{A}):=\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\mathcal{A}\}$.
\end{itemize}
If $\mathcal{A}=\varphi$, then $\mathcal{B}(\mathcal{A})=\{\nu\}$.
\begin{example}\label{ex:heirarchical}
Let $\mathcal{A}:=\{\beta\in\mathcal{M}: \eta(\beta)\leq 2 \}$ with $\mu=3$.
Then
\begin{itemize}
\item[] $\mathcal{A}=\{\nu,(N_1),(N_2),(0),(k),(k,N_1),(N_1,k),(k_1,k),(N_1,k,N_1);k,k_1\in\{1,\ldots,m\}\}$
\end{itemize}
is a hierarchical set and the corresponding remainder set is
\begin{itemize}
\item[] \hspace{-1cm} $\mathcal{B}(\mathcal{A})=\{(N_3),(\bar{N}_3), (0,N_1), (0,N_2), (k,N_2), (N_1,0),(N_2,0),(N_3,0),(\bar{N}_3,0),(0,0),(k,0),(N_2,k),$\newline
$(N_3,k),(\bar{N}_3,k), (0,k),(N_2,k,N_1),(N_3,k,N_1),(\bar{N}_3,k,N_1),(0,k,N_1),(k_1,k,N_1),(0,N_1,k),$\newline
$(k_1,N_1,k), (N_1,k_1,k),(N_2,k_1,k),(N_3,k_1,k),(\bar{N}_3,k_1,k),(0,k_1,k),(k_2,k_1,k),(0,N_1,k,N_1),$\newline
$(k_1,N_1,k,N_1);k,k_1,k_2\in\{1,\ldots,m\}\}$.
\end{itemize}
\end{example}
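Example \ref{ex:heirarchical} can be verified by brute-force enumeration. The following Python sketch is illustrative only: the chain symbols are encoded as tagged tuples (our own choice) and we take $m=1$ so that $k=k_1=k_2=1$. It generates all multi-indices of length at most four, forms $\mathcal{A}=\{\beta\in\mathcal{M}:\eta(\beta)\leq 2\}$ with $\mu=3$ and the remainder set $\mathcal{B}(\mathcal{A})=\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\mathcal{A}\}$, and recovers the $9$ and $30$ elements listed above.

```python
from itertools import product

MU, M = 3, 1                                        # mu = 3 and m = 1 (so k = k1 = k2 = 1)
CHAIN = [('N', r) for r in range(1, MU + 1)] + [('Nbar', MU)]
SYMBOLS = list(range(M + 1)) + CHAIN                # {0, 1, ..., m} plus chain symbols

def is_chain(j):
    return isinstance(j, tuple)

def is_multi_index(beta):                           # condition (a)
    return all(not (is_chain(beta[i]) and is_chain(beta[i - 1]))
               for i in range(1, len(beta)))

def eta(beta):
    n = sum(1 for j in beta if not is_chain(j) and j != 0)
    nbar = sum(1 for j in beta if j == 0)
    vals = [(r if tag == 'N' else r + 1)
            for tag, r in (j for j in beta if is_chain(j))]
    return n + 2 * nbar + (max(vals) if vals else 0)

# all multi-indices of length <= 4 (long enough to capture B(A) here,
# since the longest element of A has length 3)
all_beta = [beta for l in range(5)
            for beta in product(SYMBOLS, repeat=l) if is_multi_index(beta)]

A = [beta for beta in all_beta if eta(beta) <= 2]   # hierarchical set
B = [beta for beta in all_beta
     if beta not in A and beta[1:] in A]            # remainder set: -beta in A
```

For $m=1$ the listed hierarchical set has $9$ elements and the listed remainder set has $30$, which the enumeration reproduces.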
\section{Main results}\label{Main Result}
In this section, we introduce two key findings of this paper, namely, the It\^o-Taylor expansion for functions additionally depending on the Markov chain $\alpha$ (see Subsection \ref{sub:ItoTaylorexpansion}) and the explicit numerical scheme of arbitrary order $\gamma\in\{n/2:n\in\mathbb{N}\}$ for SDEwMS \eqref{eq:sdems} (see Subsection \ref{sub:scheme}).
\subsection{It\^o-Taylor expansion}\label{sub:ItoTaylorexpansion}
In the It\^{o}-Taylor expansion, the regularity of the drift coefficient $b$ of SDEwMS \eqref{eq:sdems} is identified with the help of the following notations.
\begin{itemize}
\item[] $\mathcal{D}_b:=\{\beta\in\mathcal{M}:\beta\star(0)\star\bar{\beta}\in\mathcal{A}\cup\mathcal{B}(\mathcal{A}),$ the components of $\bar{\beta}\in\mathcal{M}$ are not equal to $0\}$,
\item[] $\mathcal{K}_b:=\displaystyle\max_{\beta\in\mathcal{D}_b}\{n(\beta)+2\bar{n}(\beta)\}$,
\end{itemize}
and for the regularity of the diffusion coefficient $\sigma$, we define,
\begin{itemize}
\item[] $\mathcal{D}_{\sigma}:=\{\beta\in\mathcal{M}:\beta\star(j)\star\bar{\beta}\in\mathcal{A}\cup\mathcal{B}(\mathcal{A}),j\in\{0,1,\ldots,m\},$ the components of $\bar{\beta}\in\mathcal{M}$ do not belong to $\{0,1,\ldots,m\}\}$,
\item[] $\mathcal{K}_{\sigma}:=\displaystyle \max_{\beta\in\mathcal{D}_{\sigma}}\{n(\beta)+2\bar{n}(\beta)\}$.
\end{itemize}
Similarly, for the regularity of the function $f$, we define,
\begin{itemize}
\item[] $\mathcal{K}_f:=\displaystyle \max_{\beta\in\mathcal{A}\cup\mathcal{B}(\mathcal{A})}\{n(\beta)+2\bar{n}(\beta)\}$.
\end{itemize}
Moreover, we write the hierarchical set $\mathcal{A}$ as $\mathcal{A}=\tilde{\mathcal{A}}\cup(\mathcal{A}\setminus\tilde{\mathcal{A}} )$ where
\begin{itemize}
\item[] $\tilde{\mathcal{A}}:=\{(j_1,\ldots,j_l)\in\mathcal{A}: j_i\in\{N_1,\ldots,N_\mu,\bar{N}_\mu\} \mbox{ for any } i\in\{1,\ldots,l\} \}$.
\end{itemize}
We make the following assumption on the coefficients of SDEwMS \eqref{eq:sdems}.
\begin{assumption} \label{as:ito}
For a hierarchical set $\mathcal{A}$ and for any $i_0 \in \mathcal{S}$, $k \in\{1, \ldots, d\}$ and $j\in\{1,\ldots,m\}$, let $b^k(\cdot,i_0)$ and $\sigma^{(k,j)}(\cdot,i_0)$ be $\mathcal{K}_{b}-$ and $\mathcal{K}_{\sigma} -$ times continuously differentiable functions respectively.
\end{assumption}
The following theorem is the first main result of this article which is proved in Section \ref{sec:ito}.
\begin{thm}[\bf{It\^o-Taylor expansion}] \label{thm:Ito-Taylor}
For a hierarchical set $\mathcal{A}$, let Assumption $\ref{as:ito}$ be satisfied and assume that $f:\mathbb{R}^d\times\mathcal{S}\mapsto \mathbb{R}$ is a $\mathcal{K}_f-$times continuously differentiable function.
Then, for all $ s<t\in [0, T]$,
\begin{align}
f(X(t),\alpha(t))=&\sum_{\beta\in\mathcal{A}\setminus\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{\beta\in\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\mathcal{B}(\mathcal{A})}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} \notag
\end{align}
almost surely, provided all the multiple integrals appearing above exist.
\end{thm}
The above expansion is characterized by the specific selection of the hierarchical set $\mathcal{A}$.
Later in Subsection \ref{sec:Derivation of scheme}, it is used to construct the numerical scheme of arbitrary order $\gamma\in\{n/2:n\in\mathbb{N}\}$, see equation \eqref{eq:gen.scheme}.
\begin{remark}
The aforementioned expansion can be considered as a generalization of the It\^o-Taylor expansion for SDEs discussed in Theorem 5.5.1 of \cite{kloeden1992numerical}, in the sense that the coefficients of the SDE additionally depend on a Markov chain.
Indeed, if the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then SDEwMS \eqref{eq:sdems} reduces to an SDE and $\tilde{\mathcal{A}}=\varphi$ which leads to the It\^o-Taylor expansion for SDE.
The terms corresponding to $\tilde{\mathcal{A}}$ in the above expansion appear due to the presence of the Markov chain in the coefficients of the SDE.
\end{remark}
The following two examples illustrate the It\^{o}-Taylor expansion (Theorem \ref{thm:Ito-Taylor}) for some specific hierarchical sets $\mathcal{A}$.
\begin{example}
Let $\mathcal{A}=\{\nu\}$.
Then, $\tilde{\mathcal{A}}=\varphi$, $\mathcal{A}\setminus\tilde{\mathcal{A}}=\{\nu\}$ and $ \mathcal{B}(\mathcal{A})=\{(N_1),\ldots, (N_\mu), (\bar{N}_\mu),(0),$ $(1), \ldots, (m)\}$.
Further, $\mathcal{D}_b=\{\nu\}$ which gives $\mathcal{K}_b=0$.
Similarly, $\mathcal{K}_\sigma=0$ and $\mathcal{K}_f=2$.
Thus, by Theorem~$\ref{thm:Ito-Taylor}$,
\begin{align*}
f(X(t),\alpha(t))=&I_{\nu}[f(X(s),\alpha(s))]_{s,t}+\sum_{r=1}^{\mu}I_{(N_r)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(\bar{N}_\mu)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+I_{(0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{j=1}^mI_{(j)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
which on using the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields,
\begin{align*}
f(X(t),&\alpha(t))=f(X(s),\alpha(s))+\sum_{r=1}^\mu \sum_{i_0\neq k_0}\int_s^t \mathbbm{1}\{N^{(s,t]}=r \} (f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
&+ \sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}>\mu \} (f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
&+\int_s^t L^{0}_{\alpha(s_1)} f(X(s_1),\alpha(s_1))ds_1+\sum_{j=1}^m\int_s^t L^{j}_{\alpha(s_1)} f(X(s_1),\alpha(s_1))dW^j(s_1)
\end{align*}
almost surely for all $ s<t\in [0, T]$.
\end{example}
\begin{example}
Let $\mathcal{A}:=\{(j_1,\ldots,j_l)\in\mathcal{M}: \eta((j_1,\ldots,j_l))\leq 2$, if $j_i\notin\{ N_1,\ldots,N_{\mu},1,\ldots,m\}, \forall \, i\in\{1,\ldots,l\}\}\cup\{(j_1,\ldots,j_l)\in\mathcal{M}: \eta((j_1,\ldots,j_l))\leq 1$ if $j_i\in\{ N_1,\ldots,N_{\mu},1,\ldots,m\}$ for any $ i\in\{1,\ldots,l\} \}$ with $\mu=3$.
Clearly, $\mathcal{A}=\{\nu,(0),(N_1),(k); k \in \{1, \ldots, m\}\}$, $\tilde{\mathcal{A}}=\{(N_1)\}$, $\mathcal{A}\setminus\tilde{\mathcal{A}}=\{\nu,(0),(k); k \in \{1, \ldots, m\}\}$ and $\mathcal{B}(\mathcal{A})$ $=$ $\{(N_2),$ $(N_3)$, $(\bar{N}_3),$ $(N_1,0),$ $ (N_2,0),$ $(N_3,0),$ $(\bar{N}_3,0),$ $(0,0),$ $(k,0),$ $(0,N_1),$ $(k,N_1),$ $(N_1,k),$ $(N_2,k),$ $(N_3,k),$ $(\bar{N}_3,k),$ $(0,k),$ $(k_1,k);k,k_1\in\{1,\ldots,m\}\}$.
Further, $\mathcal{D}_b=\mathcal{D}_\sigma=\{\nu,(N_1),$ $(N_2),(N_3),(\bar{N}_3),(0),(k);k\in\{1,\ldots,m\}\}$ which gives $\mathcal{K}_b=2$, $\mathcal{K}_\sigma=2$ and $\mathcal{K}_f=4$.
Moreover, by Theorem \ref{thm:Ito-Taylor}, we obtain
\begin{align*}
f(&X(t),\alpha(t))=I_{\nu}[f(X(s),\alpha(s))]_{s,t}+I_{(0)}[f(X(s),\alpha(s))]_{s,t}+\sum_{k=1}^mI_{(k)}[f(X(s),\alpha(s))]_{s,t}
\\
&+I_{(N_1)}[f(X(s),\alpha(\cdot))]_{s,t}+I_{(N_2)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(N_3)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(\bar{N}_3)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{i=1}^3I_{(N_i,0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(\bar{N}_3,0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(0,0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{k=1}^mI_{(k,0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(0,N_1)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{k=1}^mI_{(k,N_1)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{i=1}^3\sum_{k=1}^mI_{(N_i,k)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{k=1}^mI_{(\bar{N}_3,k)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{k=1}^mI_{(0,k)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{k,k_1=1}^mI_{(k_1,k)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
which on using the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields,
\begin{align*}
f(&X(t),\alpha(t))=f(X(s),\alpha(s))+\int_s^t L^0_{\alpha(s)}f(X(s),\alpha(s))ds_1+\sum_{k=1}^m\int_s^t L^k_{\alpha(s)}f(X(s),\alpha(s))dW^k(s_1)
\\
&+\sum_{i_0\neq k_0}\int_s^t \mathbbm{1}\{N^{(s,t]}=1 \} (f(X(s),k_0)-f(X(s),i_0))d[M_{i_0k_0}](s_1)
\\
&+\sum_{i_0\neq k_0}\int_s^t \mathbbm{1}\{N^{(s,t]}=2 \} (f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
&+\sum_{i_0\neq k_0}\int_s^t \mathbbm{1}\{N^{(s,t]}=3 \} (f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
&+\sum_{i_0\neq k_0}\int_s^t \mathbbm{1}\{N^{(s,t]}>3 \} (f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
&+\sum_{i=1}^3\int_s^t \sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}=i \} (L^{0}_{k_0} f(X(s_2),k_0)-L^{0}_{i_0} f(X(s_2),i_0))d[ M_{i_0k_0}](s_2)ds_1
\\
&+\int_s^t \sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}>3 \} (L^{0}_{k_0} f(X(s_2),k_0)-L^{0}_{i_0} f(X(s_2),i_0))d[ M_{i_0k_0}](s_2)ds_1
\\
&+\int_s^t\int_s^{s_1}L^{0}_{\alpha(s_2)} L^{0}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))ds_2ds_1
\\
&+\sum_{k=1}^m\int_s^t\int_s^{s_1}L^{k}_{\alpha(s_2)} L^{0}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))dW^k(s_2)ds_1
\\
&+\sum_{i_0\neq k_0}\int_s^{t}\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1} (L^0_{\alpha(s_2)}f(X(s_2),k_0)-L^0_{\alpha(s_2)}f(X(s_2),i_0))ds_2d[M_{i_0k_0}](s_1)
\\
&+\sum_{k=1}^m\sum_{i_0\neq k_0}\int_s^{t}\mathbbm{1}\{N^{(s,t]}=1 \}\int_s^{s_1} (L^k_{\alpha(s_2)}f(X(s_2),k_0)-L^k_{\alpha(s_2)}f(X(s_2),i_0))dW^k(s_2)d[M_{i_0k_0}](s_1)
\\
&+\sum_{i=1}^3\sum_{k=1}^m\int_s^t \sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}=i \} (L^{k}_{k_0} f(X(s_2),k_0)-L^{k}_{i_0} f(X(s_2),i_0))d[ M_{i_0k_0}](s_2)dW^k(s_1)
\\
&+\sum_{k=1}^m\int_s^t \sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}>3 \} (L^{k}_{k_0} f(X(s_2),k_0)-L^{k}_{i_0} f(X(s_2),i_0))d[ M_{i_0k_0}](s_2)dW^k(s_1)
\\
&+\sum_{k=1}^m\int_s^t\int_s^{s_1}L^{0}_{\alpha(s_2)} L^{k}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))ds_2dW^k(s_1)
\\
&+\sum_{k,k_1=1}^m\int_s^t\int_s^{s_1}L^{k_1}_{\alpha(s_2)} L^{k}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))dW^{k_1}(s_2)dW^k(s_1)
\end{align*}
almost surely for all $ s<t\in [0, T]$.
\end{example}
\subsection{Explicit $\mathbf{\gamma}$-order numerical scheme}\label{sub:scheme}
For the explicit numerical scheme of $\gamma\in\{n/2:n\in\mathbb{N}\}$-order (see equation \eqref{eq:gen.scheme}), we fix $\mu=2\gamma$ and define
\begin{itemize}
\item[] $\mathcal{A}_{\gamma}^b:=\{\nu\} \cup \{(j_1,\ldots,j_l)\in\mathcal{M}: \eta((j_1,\ldots,j_l))\leq 2\gamma-1, \mbox{ if } j_i= 0\, \forall\: i\in\{1,\ldots,l\}\}\cup\{(j_1,\ldots,j_l)\in\mathcal{M}: \eta((j_1,\ldots,j_l))\leq 2\gamma-2$ if $j_i\in\{ N_1,\ldots,N_{\mu},1,\ldots,m\}$ for any $ i\in\{1,\ldots,l\} \}$.
\item[] $\mathcal{A}_{\gamma}^\sigma:=\{\beta\in\mathcal{M}: \eta(\beta)\leq 2\gamma-1 \}$.
\end{itemize}
The sets $\mathcal{A}_{\gamma}^b$ and $\mathcal{A}_{\gamma}^\sigma$ are hierarchical sets and their remainder sets are denoted by $\mathcal{B}(\mathcal{A}_\gamma^b):=\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^b:-\beta\in\mathcal{A}_{\gamma}^b\}$ and $\mathcal{B}(\mathcal{A}_\gamma^\sigma):=\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma:-\beta\in\mathcal{A}_{\gamma}^\sigma\}$, respectively.
Further, we write $\mathcal{A}_{\gamma}^b=\tilde{\mathcal{A}}_{\gamma}^b\cup(\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b)$ and $\mathcal{A}_{\gamma}^\sigma=\tilde{\mathcal{A}}_{\gamma}^\sigma\cup(\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma)$ where
\begin{itemize}
\item[] $\tilde{\mathcal{A}}_{\gamma}^b:=\{(j_1,\ldots,j_l)\in\mathcal{A}_{\gamma}^b: j_i\in\{N_1,\ldots,N_\mu\} \mbox{ for any } i\in\{1,\ldots,l \}\}$,
\item[] $\tilde{\mathcal{A}}_{\gamma}^\sigma:=\{(j_1,\ldots,j_l)\in\mathcal{A}_{\gamma}^\sigma: j_i\in\{N_1,\ldots,N_\mu\} \mbox{ for any } i\in\{1,\ldots,l \}\}.
$
\end{itemize}
Also, we assume that $b^k(\cdot,i_0)$ and $\sigma^{(k,j)}(\cdot,i_0)$ are sufficiently smooth functions for all $i_0 \in \mathcal{S}$, $k \in\{1, \ldots, d\}$ and $j\in\{1,\ldots,m\}$.
We partition the interval $[0,T]$ into $n_T\in\mathbb{N}$ sub-intervals of equal length $h=T/n_T>0$, \textit{i.e.}, $t_n=nh$ for any $n\in\{0,1,\ldots, n_T\}$.
The explicit $\gamma\in\{n/2:n\in\mathbb{N}\}$-order scheme for SDEwMS \eqref{eq:sdems} at grid point $t_{n+1}$ is given by,
\begin{align}
Y^k&(t_{n+1})=Y^k(t_{n})+\int_{t_{n}}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_n),\alpha(t_n)) ]_{t_n,s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_{n}}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n)) ]_{t_n,s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}\Big)dW^j(s)\label{eq:gen.scheme}
\end{align}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$ where the initial value $Y_0:=Y(0)$ is an $\mathcal{F}_0$-measurable random variable in $\mathbb{R}^d$.
\begin{remark}
If the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then the SDEwMS \eqref{eq:sdems} reduces to an SDE and the integrals appearing in the terms corresponding to the sets $\tilde{\mathcal{A}}_{\gamma}^b$ and $\tilde{\mathcal{A}}_{\gamma}^\sigma$ in equation \eqref{eq:gen.scheme} vanish.
In this case, \eqref{eq:gen.scheme} is the $\gamma$-order explicit numerical scheme for SDEs, see Section 10.6 in \cite{kloeden1992numerical}.
Clearly, terms corresponding to the sets $\tilde{\mathcal{A}}_{\gamma}^b$ and $\tilde{\mathcal{A}}_{\gamma}^\sigma$ in \eqref{eq:gen.scheme} appear due to the presence of the Markov chain.
\end{remark}
We now give some examples of the explicit numerical scheme \eqref{eq:gen.scheme} for $\gamma\in\{0.5,1.0,1.5\}$.
\begin{example}[\textbf{Euler scheme for SDEwMS}]
If $\gamma=0.5$, then $\mathcal{A}_{0.5}^b=\mathcal{A}_{0.5}^\sigma=\{\nu\}$ and $\tilde{\mathcal{A}}_{0.5}^b=\tilde{\mathcal{A}}_{0.5}^\sigma=\varphi$.
By using \eqref{eq:gen.scheme} for $\gamma=0.5$ and the definition of multiple integrals from Subsection \ref{sub:multiple integral}, the Euler scheme for SDEwMS \eqref{eq:sdems} is given by
\begin{align}
Y^k(t_{n+1})=&Y^k(t_{n})+\int_{t_{n}}^{t_{n+1}}I_{\nu}[b^k(Y(t_n),\alpha(t_n))]_{t_n,s}ds+\sum_{j=1}^m \int_{t_{n}}^{t_{n+1}}I_{\nu}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}dW^j(s)\notag
\\
=&Y^k(t_n)
+\int_{t_n}^{t_{n+1}}b^k(Y(t_n),\alpha(t_n))ds+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^j(s)\notag
\end{align}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$, see also \cite{yuan2004convergence}.
If the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then the SDEwMS \eqref{eq:sdems} and the above scheme reduce to the classical SDE and its Euler scheme, respectively, as detailed in Section 10.2 of \cite{kloeden1992numerical}.
\end{example}
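For concreteness, the Euler scheme above can be sketched in a few lines of Python for a scalar SDEwMS ($d=m=1$), given the chain states $\alpha(t_n)$ on the grid; the function name and the coefficient interfaces below are illustrative assumptions, not part of the scheme.

```python
import numpy as np

def euler_sdewms(b, sigma, y0, alpha_grid, T, rng):
    """One path of the Euler scheme for a scalar SDEwMS:
       Y_{n+1} = Y_n + b(Y_n, alpha_n) h + sigma(Y_n, alpha_n) dW_n,
    where alpha_grid holds the chain states at the grid points t_n = n*h."""
    n_T = len(alpha_grid) - 1
    h = T / n_T
    Y = np.empty(n_T + 1)
    Y[0] = y0
    for n in range(n_T):
        dW = rng.normal(0.0, np.sqrt(h))  # Brownian increment W(t_{n+1}) - W(t_n)
        Y[n + 1] = Y[n] + b(Y[n], alpha_grid[n]) * h + sigma(Y[n], alpha_grid[n]) * dW
    return Y
```

For instance, a regime-switching geometric Brownian motion corresponds to $b(x,i)=\mu_i x$ and $\sigma(x,i)=s_i x$ with state-dependent constants $\mu_i,s_i$.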
\begin{example}[\textbf{Milstein-type scheme for SDEwMS}]
Let $\gamma=1.0$. Then, $\mathcal{A}_{1.0}^b=\{\nu\}$, $\mathcal{A}_{1.0}^\sigma=\{\nu,(N_1),(1),\ldots,(m)\}$, $\tilde{\mathcal{A}}_{1.0}^b=\varphi$ and $\tilde{\mathcal{A}}_{1.0}^\sigma=\{(N_1)\}$, which on using \eqref{eq:gen.scheme} and the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields
\begin{align}
Y^k(&t_{n+1})=Y^k(t_{n})+\int_{t_{n}}^{t_{n+1}}I_{\nu}[b^k(Y(t_n),\alpha(t_n))]_{t_n,s}ds+\sum_{j=1}^m \int_{t_{n}}^{t_{n+1}}\Big(I_{\nu}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}\notag
\\
&+\sum_{j_1=1}^m I_{(j_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}+I_{(N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}\Big)dW^j(s)\notag
\\
=&Y^k(t_n)
+\int_{t_n}^{t_{n+1}}b^k(Y(t_n),\alpha(t_n))ds+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^j(s)\notag
\\
&+\sum_{j,j_1=1}^m \int_{t_n}^{t_{n+1}}\int_{t_n}^{s}L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^{j_1}(s_1)dW^j(s)\notag
\\
&+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^{s}\mathbbm{1}\{N^{(t_n,s]}=1\}(\sigma^{(k,j)}(Y(t_n),k_0)-\sigma^{(k,j)}(Y(t_n),i_0))d[M_{i_0k_0}](s_1)dW^j(s)\notag
\end{align}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$ which is the Milstein-type scheme for SDEwMS \eqref{eq:sdems}, see also \cite{kumar2021milstein} and \cite{nguyen2017milstein}.
If the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then the SDEwMS \eqref{eq:sdems} is an SDE, and the last term of the above equation vanishes, which leads to the Milstein scheme for SDEs, as detailed in Section 10.3 of \cite{kloeden1992numerical}.
Further, if the following commutative condition,
\begin{align*}
L^{j_1}_{i_0}\sigma^{(k,j)}(x,i_0)=L^{j}_{i_0}\sigma^{(k,j_1)}(x,i_0)
\end{align*}
holds for all $x\in\mathbb{R}^d$, $i_0\in\mathcal{S}$, $j,j_1\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,d\}$, then we write
\begin{align*}
Y^k&(t_{n+1})=Y^k(t_{n})+b^k(Y(t_n),\alpha(t_n))h+\sum_{j=1}^m \sigma^{(k,j)}(Y(t_n),\alpha(t_n))(W^j(t_{n+1})-W^j(t_n))
\\
+&\frac{1}{2}\sum_{j,j_1=1}^mL^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))((W^{j_1}(t_{n+1})-W^{j_1}(t_n))(W^j(t_{n+1})-W^j(t_n))-\mathbbm{1}\{j=j_1\}h)
\\
+&\sum_{j=1}^m \mathbbm{1}\{N^{(t_n,t_{n+1})}=1\}(\sigma^{(k,j)}(Y(t_n),\alpha(\tau_1))-\sigma^{(k,j)}(Y(t_n),\alpha(t_n)))(W^j(t_{n+1})-W^j(\tau_1))
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$ where $\tau_1$ is the first jump time of the Markov chain $\alpha$ in the interval $(t_n,t_{n+1})$.
\end{example}
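Under the commutative condition, the Milstein-type scheme above admits a similarly short sketch in the scalar case $d=m=1$, where $L^{1}_{i_0}\sigma(x,i_0)=\sigma(x,i_0)\,\partial_x\sigma(x,i_0)$. The handling of at most one jump per sub-interval and the Brownian-bridge split of the increment at $\tau_1$ are our own illustrative implementation choices, not prescribed by the scheme.

```python
import numpy as np

def milstein_sdewms_1d(b, sigma, dsigma, y0, alpha_fn, jump_times, T, n_T, rng):
    """One path of the scalar (d = m = 1) Milstein-type scheme: on top of the
    Euler terms it adds 0.5*sigma*sigma'*(dW^2 - h) and, on sub-intervals with
    exactly one chain jump at tau, the switching correction
    (sigma(Y_n, alpha(tau)) - sigma(Y_n, alpha(t_n))) * (W(t_{n+1}) - W(tau))."""
    h = T / n_T
    Y = np.empty(n_T + 1)
    Y[0] = y0
    for n in range(n_T):
        t0, t1 = n * h, (n + 1) * h
        a0 = alpha_fn(t0)
        dW = rng.normal(0.0, np.sqrt(h))
        Y[n + 1] = (Y[n] + b(Y[n], a0) * h + sigma(Y[n], a0) * dW
                    + 0.5 * sigma(Y[n], a0) * dsigma(Y[n], a0) * (dW * dW - h))
        taus = [t for t in jump_times if t0 < t < t1]
        if len(taus) == 1:
            tau = taus[0]
            # split dW into increments over (t_n, tau] and (tau, t_{n+1}] by a
            # Brownian-bridge draw, so W(t_{n+1}) - W(tau) is consistent with dW
            w_tau = rng.normal(dW * (tau - t0) / h,
                               np.sqrt((tau - t0) * (t1 - tau) / h))
            Y[n + 1] += (sigma(Y[n], alpha_fn(tau)) - sigma(Y[n], a0)) * (dW - w_tau)
    return Y
```

With $\partial_x\sigma\equiv 0$ and no jumps, the sketch reduces to the Euler scheme, mirroring how the correction terms vanish in that case.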
\begin{example}[\textbf{$\mathbf{1.5}$-order scheme for SDEwMS}]
If $\gamma=1.5$, then
\begin{itemize}
\item[] $\mathcal{A}_{1.5}^b=\{\nu,(N_1),(0),(j_1);j_1\in\{1,\ldots,m\}\}$,
\item[] $\mathcal{A}_{1.5}^\sigma=\{\nu,(N_1),(N_2),(0),(j_1),(j_1,N_1),(N_1,j_1),(j_1,j_2),(N_1,j_1,N_1);j_1,j_2\in\{1,\ldots,m\}\}$,
\item[] $\tilde{\mathcal{A}}_{1.5}^b=\{(N_1)\}$,
\item[] $\tilde{\mathcal{A}}_{1.5}^\sigma=\{(N_1),(N_2),(j_1,N_1),(N_1,j_1),(N_1,j_1,N_1);j_1\in\{1,\ldots,m\}\}$.
\end{itemize}
Further, by employing \eqref{eq:gen.scheme} for $\gamma=1.5$, we have
\begin{align*}
&Y^k(t_{n+1})=Y^k(t_{n})+\int_{t_{n}}^{t_{n+1}}\Big(I_{\nu}[b^k(Y(t_n),\alpha(t_n))]_{t_n,s}+I_{(0)}[b^k(Y(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(j_1)}[b^k(Y(t_n),\alpha(t_n))]_{t_n,s}+I_{(N_1)}[b^k(Y(t_n),\alpha(\cdot))]_{t_n,s}\Big)ds
\\
&+\sum_{j=1}^m \int_{t_{n}}^{t_{n+1}}\Big(I_{\nu}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}+I_{(0)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{j_1=1}^m I_{(j_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}+\sum_{j_1,j_2=1}^mI_{(j_1,j_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n))]_{t_n,s}
\\
&+I_{(N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}+I_{(N_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}+\sum_{j_1=1}^m I_{(j_1,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(N_1,j_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}+\sum_{j_1=1}^m I_{(N_1,j_1,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot))]_{t_n,s}\Big)dW^j(s)
\end{align*}
which by the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields
\begin{align}
Y^k(&t_{n+1})=Y^k(t_n)
+\int_{t_n}^{t_{n+1}}b^k(Y(t_n),\alpha(t_n))ds+\int_{t_n}^{t_{n+1}}\int_{t_n}^sL^{0}_{\alpha(t_n)}b^k(Y(t_n),\alpha(t_n))ds_1ds\notag
\\
&+\sum_{j_1=1}^m\int_{t_n}^{t_{n+1}}\int_{t_n}^sL^{j_1}_{\alpha(t_n)}b^k(Y(t_n),\alpha(t_n))dW^{j_1}(s_1)ds\notag
\\
&+\int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^s\mathbbm{1}\{N^{(t_n,s]}=1\}(b^k(Y(t_n),k_0)-b^k(Y(t_n),i_0))d[M_{i_0k_0}](s_1)ds\notag
\\
&+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^j(s)\notag
\\
&+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\int_{t_n}^{s}L^{0}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))ds_1dW^j(s)\notag
\\
&+\sum_{j,j_1=1}^m \int_{t_n}^{t_{n+1}}\int_{t_n}^{s}L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^{j_1}(s_1)dW^j(s)\notag
\\
&+\sum_{j,j_1,j_2=1}^m \int_{t_n}^{t_{n+1}}\int_{t_n}^s\int_{t_n}^{s_1}L^{j_2}_{\alpha(t_n)}L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))dW^{j_2}(s_2)dW^{j_1}(s_1)dW^j(s)\notag
\\
&+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^{s}\mathbbm{1}\{N^{(t_n,s]}=1\}(\sigma^{(k,j)}(Y(t_n),k_0)-\sigma^{(k,j)}(Y(t_n),i_0))d[M_{i_0k_0}](s_1)dW^j(s)\notag
\\
&+\sum_{j=1}^m \int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^{s}\mathbbm{1}\{N^{(t_n,s]}=2\}(\sigma^{(k,j)}(Y(t_n),k_0)-\sigma^{(k,j)}(Y(t_n),i_0))d[M_{i_0k_0}](s_1)dW^j(s)\notag
\\
&+\sum_{j,j_1=1}^m \int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^s\mathbbm{1}\{N^{(t_n,s]}=1\}\int_{t_n}^{s_1}(L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),k_0)\notag
\\
&\qquad-L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),i_0))dW^{j_1}(s_2)d[M_{i_0k_0}](s_1)dW^j(s)\notag
\\
&+\sum_{j,j_1=1}^m \int_{t_n}^{t_{n+1}}\int_{t_n}^s\sum_{i_0\neq k_0}\int_{t_n}^{s_1}\mathbbm{1}\{N^{(t_n,s_1]}=1\}(L^{j_1}_{k_0}\sigma^{(k,j)}(Y(t_n),k_0)\notag
\\
&\qquad-L^{j_1}_{i_0}\sigma^{(k,j)}(Y(t_n),i_0))d[M_{i_0k_0}](s_2)dW^{j_1}(s_1)dW^j(s)\notag
\\
&+\sum_{j,j_1=1}^m \int_{t_n}^{t_{n+1}}\sum_{i_0\neq k_0}\int_{t_n}^s\mathbbm{1}\{N^{(t_n,s]}=1\}\int_{t_n}^{s_1}\sum_{i_1\neq k_1}\int_{t_n}^{s_2}\mathbbm{1}\{N^{(t_n,s_2]}=1\}\notag
\\
&\qquad (L^{j_1}_{k_1}\sigma^{(k,j)}(Y(t_n),k_0)-L^{j_1}_{i_1}\sigma^{(k,j)}(Y(t_n),k_0))-(L^{j_1}_{k_1}\sigma^{(k,j)}(Y(t_n),i_0)-L^{j_1}_{i_1}\sigma^{(k,j)}(Y(t_n),i_0)) \notag
\\
& \qquad \qquad d[M_{i_1k_1}](s_3)dW^{j_1}(s_2)d[M_{i_0k_0}](s_1)dW^j(s)\notag
\end{align}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$.
The above equation is an order 1.5 strong Taylor scheme for SDEwMS \eqref{eq:sdems}.
Notice that the last term on the right side of the above equation is zero.
If the state space $\mathcal{S}$ of the Markov chain $\alpha$ is a singleton set, then the SDEwMS \eqref{eq:sdems} reduces to an SDE and the 5th and the 10th to 14th terms on the right side of the above equation, which appear due to the switching of the Markov chain $\alpha$, vanish.
This gives an order 1.5 strong Taylor scheme for SDE, see Section 10.4 in \cite{kloeden1992numerical}.
Now, if the following commutative conditions,
\begin{align*}
L^{j_1}_{i_0}\sigma^{(k,j)}(x,i_0)=L^{j}_{i_0}\sigma^{(k,j_1)}(x,i_0),\qquad L^{j_2}_{i_0}L^{j_1}_{i_0}\sigma^{(k,j)}(x,i_0)=L^{j_1}_{i_0}L^{j_2}_{i_0}\sigma^{(k,j)}(x,i_0)
\end{align*}
hold for all $x\in\mathbb{R}^d$, $i_0\in\mathcal{S}$, $j,j_1,j_2\in\{1,\ldots,m\}$ and $k\in\{1,\ldots,d\}$, then we have
\begin{align*}
Y^k(&t_{n+1})=Y^k(t_n)+b^k(Y(t_n),\alpha(t_n))h+\frac{1}{2}L^{0}_{\alpha(t_n)}b^k(Y(t_n),\alpha(t_n))h^2
\\
&+\sum_{j_1=1}^mL^{j_1}_{\alpha(t_n)}b^k(Y(t_n),\alpha(t_n))\Delta_n Z^{j_1}
\\
&+\mathbbm{1}\{N^{(t_n,t_{n+1})}=1\}(b^k(Y(t_n),\alpha(\tau_1))-b^k(Y(t_n),\alpha(t_n)))(t_{n+1}-\tau_1)
\\
&+\sum_{j=1}^m \sigma^{(k,j)}(Y(t_n),\alpha(t_n))\Delta_nW^j+\sum_{j=1}^mL^{0}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))(h\Delta_nW^j-\Delta_n Z^{j})
\\
&+\frac{1}{2}\sum_{j,j_1=1}^mL^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))(\Delta_nW^{j_1}\Delta_nW^j-\mathbbm{1}\{j=j_1\}h)
\\
&+\frac{1}{6}\sum_{j,j_1,j_2=1}^m L^{j_2}_{\alpha(t_n)}L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n))\big(\Delta_n W^{j}\Delta_n W^{j_1}\Delta_n W^{j_2}
\\
&\qquad-\mathbbm{1}\{j\neq j_1,j\neq j_2,j_1=j_2\}h\Delta_n W^{j}-3\mathbbm{1}\{j=j_1=j_2\}h\Delta_n W^{j}\big)
\\
&+\sum_{j=1}^m \mathbbm{1}\{N^{(t_n,t_{n+1})}=1\}(\sigma^{(k,j)}(Y(t_n),\alpha(\tau_1))-\sigma^{(k,j)}(Y(t_n),\alpha(t_n)))(W^j(t_{n+1})-W^j(\tau_1))
\\
&+\sum_{j=1}^m \mathbbm{1}\{N^{(t_n,t_{n+1})}=2\}(\sigma^{(k,j)}(Y(t_n),\alpha(\tau_2))-\sigma^{(k,j)}(Y(t_n),\alpha(t_n)))(W^j(t_{n+1})-W^j(\tau_2))
\\
&+\sum_{j,j_1=1}^m \mathbbm{1}\{N^{(t_n,t_{n+1})}=1\}(L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(\tau_1))-L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n)))
\\
&\qquad\times(W^{j_1}(\tau_1)-W^{j_1}(t_n))(W^j(t_{n+1})-W^j(\tau_1))
\\
&+\frac{1}{2}\sum_{j,j_1=1}^m\mathbbm{1}\{N^{(t_n,t_{n+1})}=1\}(L^{j_1}_{\alpha(\tau_1)}\sigma^{(k,j)}(Y(t_n),\alpha(\tau_1))-L^{j_1}_{\alpha(t_n)}\sigma^{(k,j)}(Y(t_n),\alpha(t_n)))
\\
&\qquad\times\big((W^{j_1}(t_{n+1})-W^{j_1}(\tau_1))(W^{j}(t_{n+1})-W^{j}(\tau_1))-\mathbbm{1}\{j=j_1\}(t_{n+1}-\tau_1)\big)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$ where $\Delta_nZ^j:=\int_{t_n}^{t_{n+1}}\int_{t_n}^sdW^{j}(s_1)ds$ is normally distributed with mean zero and variance $E(\Delta_nZ^j)^2=h^3/3$, $\Delta_nW^j:=W^j(t_{n+1})-W^j(t_n)$ for all $j\in\{1,\ldots,m\}$ and $\tau_1,\tau_2$ are the first and second jump times of the Markov chain $\alpha$ in the interval $(t_n,t_{n+1})$, respectively.
\end{example}
\begin{example}[\textbf{$\mathbf{2.0}$-order scheme for SDEwMS}]
Let $\gamma=2.0$.
Then,
\begin{itemize}
\item[] $\mathcal{A}_{2.0}^b=\mathcal{A}_{1.5}^b\cup\{(N_2),(N_1,j_1),(j_1,N_1),(j_1,j_2),(N_1,j_1,N_1);j_1,j_2\in\{1,\ldots,m\}\}$
\item[] $\tilde{\mathcal{A}}_{2.0}^b=\tilde{\mathcal{A}}_{1.5}^b\cup\{(N_2),(N_1,j_1),(j_1,N_1),(N_1,j_1,N_1);j_1\in\{1,\ldots,m\}\}$,
\item[] $\mathcal{A}_{2.0}^\sigma=\mathcal{A}_{1.5}^\sigma\cup\{(N_3),(0,N_1),(j_1,N_2),(N_1,0),(j_1,0),(N_2,j_1),(0,j_1),(N_1,0,N_1),$
$(N_2,j_1,N_1),$ $(j_1,j_2,N_1),$ $(N_1,j_1,N_2),$ $(N_2,j_1,N_2),$ $(j_1,N_1,j_2),$ $(N_1,j_1,j_2),(j_1,j_2,j_3),$
$(j_1,N_1,j_2,N_1),$ $(N_1,j_1,j_2,N_1),$ $(N_1,j_1,N_1,j_2),$ $(N_1,j_1,N_1,j_2,N_1)$ $;j_1,j_2,j_3\in\{1,\ldots,m\}$\}
\item[] $\tilde{\mathcal{A}}_{2.0}^\sigma=\tilde{\mathcal{A}}_{1.5}^\sigma\cup\{(N_3),(0,N_1),(j_1,N_2),(N_1,0),(N_2,j_1),(N_1,0,N_1),(N_2,j_1,N_1),(j_1,j_2,N_1),$ $(N_1,j_1,N_2),(N_2,j_1,N_2),(j_1,N_1,j_2),(N_1,j_1,j_2),(j_1,N_1,j_2,N_1),(N_1,j_1,j_2,N_1),
(N_1,j_1,N_1,j_2),$ $(N_1,j_1,N_1,j_2,N_1) ;j_1,j_2\in\{1,\ldots,m\}\}. $
\end{itemize}
By the application of \eqref{eq:gen.scheme}, the 2.0-order scheme for SDEwMS \eqref{eq:sdems} is written in the form of multiple integrals as
\begin{align*}
Y^k&(t_{n+1})=Y^k(t_{n})+\int_{t_{n}}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{1.5}^b\setminus\tilde{\mathcal{A}}_{1.5}^b}I_{\beta}[b^k(Y(t_n),\alpha(t_n)) ]_{t_n,s}+\sum_{j_1,j_2=1}^m I_{(j_1,j_2)}[b^k(Y(t_n),\alpha(t_n)) ]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{1.5}^b}I_{\beta}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+I_{(N_2)}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1=1}^mI_{(N_1,j_1)}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(j_1,N_1)}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1=1}^mI_{(N_1,j_1,N_1)}[b^k(Y(t_n),\alpha(\cdot)) ]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_{n}}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{1.5}^\sigma\setminus\tilde{\mathcal{A}}_{1.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n)) ]_{t_n,s}+\sum_{j_1=1}^mI_{(j_1,0)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(0,j_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n)) ]_{t_n,s}+\sum_{j_1,j_2,j_3=1}^mI_{(j_1,j_2,j_3)}[\sigma^{(k,j)}(Y(t_n),\alpha(t_n)) ]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{1.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+I_{(N_3)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+I_{(0,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(j_1,N_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+I_{(N_1,0)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(N_2,j_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+I_{(N_1,0,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(N_2,j_1,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1,j_2=1}^mI_{(j_1,j_2,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1=1}^mI_{(N_1,j_1,N_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1=1}^mI_{(N_2,j_1,N_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1,j_2=1}^mI_{(j_1,N_1,j_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1,j_2=1}^mI_{(N_1,j_1,j_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1,j_2=1}^mI_{(j_1,N_1,j_2,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1,j_2=1}^mI_{(N_1,j_1,j_2,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}
\\
&+\sum_{j_1,j_2=1}^mI_{(N_1,j_1,N_1,j_2)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}+\sum_{j_1,j_2=1}^mI_{(N_1,j_1,N_1,j_2,N_1)}[\sigma^{(k,j)}(Y(t_n),\alpha(\cdot)) ]_{t_n,s}\Big)dW^j(s)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$.
If the Markov chain $\alpha$ has a singleton state space $\mathcal{S}$, then the SDEwMS \eqref{eq:sdems} reduces to an SDE and the fourth to eighth and the thirteenth to twenty-ninth terms on the right side of the above equation become zero, which leads to the 2.0-order strong Taylor scheme for SDEs; one can refer to Section 10.5 in \cite{kloeden1992numerical}.
\end{example}
In order to investigate the strong rate of convergence of the explicit numerical scheme \eqref{eq:gen.scheme} for SDEwMS \eqref{eq:sdems}, we make the following assumptions.
\begin{assumption}
\label{ass:A b sigma ds dw lipschitz}
There exists a constant $L>0$ such that
\begin{align*}
|J^\beta_{i_0}b^k(x,i_0)-J^\beta_{i_0}b^k(y,i_0)| + |J^{\bar{\beta}}_{i_0}\sigma^{(k,j)}(x,i_0)-J^{\bar{\beta}}_{i_0}\sigma^{(k,j)}(y,i_0)|\leq L|x-y|
\end{align*}
for all $\beta\in (\mathcal{A}^b_\gamma\setminus\tilde{\mathcal{A}}_\gamma^b)\setminus\{\nu\}$ and $\bar{\beta}\in (\mathcal{A}^\sigma_\gamma\setminus\tilde{\mathcal{A}}_\gamma^\sigma)\setminus\{\nu\}$, $i_0\in \mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x,y\in \mathbb{R}^d$.
\end{assumption}
\begin{assumption}
\label{ass:A b N lipschitz}
There exists a constant $L>0$ such that, for all $\beta\in\tilde{\mathcal{A}}_\gamma^b$, $i_1,\ldots,i_{r+1}\in\mathcal{S}$, $k\in\{1,\ldots,d\}$ and $x,y\in \mathbb{R}^d$,
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}b^k(x,i_1)-J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}b^k(y,i_1)|\leq L |x-y|
\end{align*}
where $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$, $ \mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma-2\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots,2\gamma-2\}$.
\end{assumption}
\begin{assumption} \label{ass:A sigma N lipschitz}
There exists a constant $L>0$ such that, for all $\beta\in\tilde{\mathcal{A}}_\gamma^\sigma$, $i_1,\ldots,i_{r+1}\in\mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x,y\in \mathbb{R}^d$,
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}\sigma^{(k,j)}(x,i_1)-J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}\sigma^{(k,j)}(y,i_1)|\leq L |x-y|
\end{align*}
where $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$, $ \mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma-1\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots,2\gamma-2\}$.
\end{assumption}
To state the forthcoming assumptions, we define the following sets:
\begin{itemize}
\item[] $\tilde{\mathcal{B}}_1^b:=\{(j)\star\beta:j\in\{N_1,\ldots,N_\mu,\bar{N}_\mu,0,1,\ldots,m\},\beta\in\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)\cap\mathcal{A}_{\gamma}^b:j_i\notin\{N_1,\ldots,N_{\mu}\}\forall\, i\in\{1,\ldots,l\}\}\cup \{\nu\} \}$ and
\item[] $\tilde{\mathcal{B}}_1^\sigma:=\{(j)\star\beta:j\in\{N_1,\ldots,N_\mu,\bar{N}_\mu,0,1,\ldots,m\},\beta\in\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)\cap\mathcal{A}_{\gamma}^\sigma:j_i\notin\{N_1,\ldots,N_{\mu}\}\forall\, i\in\{1,\ldots,l\}\}\cup \{\nu\}\}.$
\end{itemize}
\begin{assumption}
\label{ass:reminder b sigma ds dw growth}
For all $i_0\in \mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$, $x\in \mathbb{R}^d$, $\beta\in(\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}_1^b) \cap \mathcal{M}_1$ and $\bar{\beta}\in(\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}_1^\sigma) \cap \mathcal{M}_1$, there exists a constant $L>0$ such that
\begin{align*}
|J^\beta_{i_0}b^k(x,i_0)|+|J^{\bar{\beta}}_{i_0}\sigma^{(k,j)}(x,i_0)|\leq L(1+|x|).
\end{align*}
\end{assumption}
\begin{assumption}\label{ass:reminder b N growth}
For all $\beta\in(\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}_1^b) \cap (\mathcal{M}_2\cup\mathcal{M}_3)$, $i_1,\ldots,i_{r+1}\in\mathcal{S}$, $r\in\{1,\ldots,2\gamma-2\}$, $k\in\{1,\ldots,d\}$ and $x\in \mathbb{R}^d$, there exists a constant $L>0$ such that
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}b^k(x,i_1)|\leq L (1+|x|)
\end{align*}
where $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$, $\mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma\}$ and $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$.
\end{assumption}
\begin{assumption}\label{ass:reminder sigma N growth}
For all $\beta\in(\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}_1^\sigma) \cap (\mathcal{M}_2 \cup \mathcal{M}_3)$, $i_1,\ldots,i_{r+1}\in\mathcal{S}$, $r\in\{1,\ldots,2\gamma-1\}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x\in \mathbb{R}^d$, there exists a constant $L>0$ such that
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_r}\cdots J^{\beta_2}_{i_2}J^{\beta_1}_{i_1}\sigma^{(k,j)}(x,i_1)|\leq L (1+|x|),
\end{align*}
where $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$, $\mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma\}$ and $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$.
\end{assumption}
The following theorem is the second main result of this article.
\begin{thm} \label{thm:main}
Let Assumptions $\ref{ass:initial data}$, \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz} to $\ref{ass:reminder sigma N growth}$ be satisfied. Then, the numerical scheme \eqref{eq:gen.scheme} converges in $\mathcal{L}^2$-sense to the true solution of SDEwMS \eqref{eq:sdems} with a rate of convergence equal to $\gamma\in\{n/2:n\in\mathbb{N}\}$, \textit{i.e.}, there exists a constant $C>0$, independent of $h$, such that
$$
E\Big(\sup_{n\in\{0,1,\ldots,n_T\}}|X(t_n)-Y(t_n)|^2 \Big)\leq C h^{2\gamma}
$$
where $0<h<1/(2q)$ with $q:=\max\{-q_{i_0i_0}:i_0\in\mathcal{S}\}$.
\end{thm}
We conclude this section with the following remarks.
\begin{remark}
If the Markov chain $\alpha$ has a singleton state space $\mathcal{S}$, then the SDEwMS \eqref{eq:sdems} reduces to an SDE and Assumptions \ref{ass:A b N lipschitz}, \ref{ass:A sigma N lipschitz}, \ref{ass:reminder b N growth} and \ref{ass:reminder sigma N growth} do not appear.
\end{remark}
\begin{remark}
\label{rem:A b sigma ds dw growth}
Due to Assumptions \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz},
\begin{align*}
|J^{\beta}_{i_0}b^k(x,i_0)|+ |J^{\bar{\beta}}_{i_0}\sigma^{(k,j)}(x,i_0)| \leq C(1+|x|)
\end{align*}
for all $\beta\in \mathcal{A}^b_\gamma\setminus\tilde{\mathcal{A}}_\gamma^b$, $\bar{\beta}\in \mathcal{A}^\sigma_\gamma\setminus\tilde{\mathcal{A}}_\gamma^\sigma$, $i_0\in\mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x\in \mathbb{R}^d$.
\end{remark}
We obtain the following remarks due to Assumptions \ref{ass:A b N lipschitz} and \ref{ass:A sigma N lipschitz}.
\begin{remark}\label{rem:A b sigma N lipschitz}
Notice that, for all $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1 \in\tilde{\mathcal{A}}_\gamma^b$, the expansion of $J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(x,k_1)-J^{\beta_1}_{i_1}b^k(x,i_1))-J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(y,k_1)-J^{\beta_1}_{i_1}b^k(y,i_1))$ consists of $2^{r}$ terms of the type $J^{\beta_{r+1}}_{i_{r+1}}$ $J^{\beta_r}_{\rho_r}\cdots J^{\beta_2}_{\rho_2}J^{\beta_1}_{\rho_1}b^k(x,\rho_1)-J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{\rho_r}\cdots J^{\beta_2}_{\rho_2}J^{\beta_1}_{\rho_1}b^k(y,\rho_1)$ where $\rho_j\in\{i_j,k_j\}$, for all $j\in\{1,\ldots,r\}$, $k\in\{1,\ldots,d\}$, $i_1, \ldots, i_{r+1}, k_1,\ldots,k_r\in\mathcal{S}$, $ \mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma-2\}$ and $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$.
Hence, Assumption \ref{ass:A b N lipschitz} gives
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(x,k_1)&-J^{\beta_1}_{i_1}b^k(x,i_1))-J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(y,k_1)-J^{\beta_1}_{i_1}b^k(y,i_1))|
\\
\leq&C2^{r}|x-y|\leq C2^{2\gamma-2}|x-y| \leq C|x-y|
\end{align*}
for all $x,y\in \mathbb{R}^d$ and $r\in\{1,\ldots,2\gamma-2\}$.
Similarly,
for all $\bar{\beta}=\bar{\beta}_{\bar{r}+1}\star(N_{\bar{\mu}_{\bar{r}}})\star \bar{\beta}_{\bar{r}}\star \cdots\star(N_{\bar{\mu}_2})\star \bar{\beta}_2\star(N_{\bar{\mu}_1})\star \bar{\beta}_1\in\tilde{\mathcal{A}}_\gamma^\sigma$, Assumption \ref{ass:A sigma N lipschitz} yields,
\begin{align*}
|J^{\bar{\beta}_{\bar{r}+1}}_{\bar{i}_{\bar{r}+1}}&J^{\bar{\beta}_{\bar{r}}}_{\bar{i}_{\bar{r}}\bar{k}_{\bar{r}}}\cdots J^{\bar{\beta}_2}_{\bar{i}_2\bar{k}_2}(J^{\bar{\beta}_1}_{\bar{k}_1}\sigma^{(k,j)}(x,\bar{k}_1)-J^{\bar{\beta}_1}_{\bar{i}_1}\sigma^{(k,j)}(x,\bar{i}_1))
\\
-&J^{\bar{\beta}_{\bar{r}+1}}_{\bar{i}_{\bar{r}+1}}J^{\bar{\beta}_{\bar{r}}}_{\bar{i}_{\bar{r}}\bar{k}_{\bar{r}}}\cdots J^{\bar{\beta}_2}_{\bar{i}_2\bar{k}_2}(J^{\bar{\beta}_1}_{\bar{k}_1}\sigma^{(k,j)}(y,\bar{k}_1)-J^{\bar{\beta}_1}_{\bar{i}_1}\sigma^{(k,j)}(y,\bar{i}_1))|\leq C|x-y|
\end{align*}
where $\bar{i}_1,\ldots,\bar{i}_{\bar{r}+1}, \bar{k}_1, \ldots, \bar{k}_{\bar{r}} \in\mathcal{S}$, $ \bar{\mu}_1,\ldots,\bar{\mu}_{\bar{r}}\in\{1,\ldots,2\gamma-1\}$, $\bar{\beta}_1,\ldots,\bar{\beta}_{\bar{r}+1}\in\mathcal{M}_1$, $\bar{r}\leq 2\gamma-1$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x,y\in \mathbb{R}^d$.
\end{remark}
\begin{remark}\label{rem:A b N growth}
Due to Remark \ref{rem:A b sigma N lipschitz}, for all $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1\in\tilde{\mathcal{A}}_\gamma^b$, $i_1,\ldots,i_{r+1}, k_1, \ldots, k_r \in\mathcal{S}$, $k\in\{1,\ldots,d\}$ and $x\in \mathbb{R}^d$,
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(x,k_1)-J^{\beta_1}_{i_1}b^k(x,i_1))|\leq C(1+|x|)
\end{align*}
where $ \mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma-2\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots,2\gamma-2\}$.
As before, for all $\bar{\beta}=\bar{\beta}_{\bar{r}+1}\star(N_{\bar{\mu}_{\bar{r}}})\star \bar{\beta}_{\bar{r}}\star \cdots\star(N_{\bar{\mu}_2})\star \bar{\beta}_2\star(N_{\bar{\mu}_1})\star \bar{\beta}_1\in\tilde{\mathcal{A}}_\gamma^\sigma$,
\begin{align*}
|J^{\bar{\beta}_{\bar{r}+1}}_{\bar{i}_{\bar{r}+1}}&J^{\bar{\beta}_{\bar{r}}}_{\bar{i}_{\bar{r}}\bar{k}_{\bar{r}}}\cdots J^{\bar{\beta}_2}_{\bar{i}_2\bar{k}_2}(J^{\bar{\beta}_1}_{\bar{k}_1}\sigma^{(k,j)}(x,\bar{k}_1)-J^{\bar{\beta}_1}_{\bar{i}_1}\sigma^{(k,j)}(x,\bar{i}_1))|\leq C(1+|x|)
\end{align*}
where $\bar{i}_1,\ldots,\bar{i}_{\bar{r}+1}, \bar{k}_1, \ldots, \bar{k}_{\bar{r}} \in\mathcal{S}$, $ \bar{\mu}_1,\ldots,\bar{\mu}_{\bar{r}}\in\{1,\ldots,2\gamma-1\}$, $\bar{\beta}_1,\ldots,\bar{\beta}_{\bar{r}+1}\in\mathcal{M}_1$, $\bar{r}\leq 2\gamma-1$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x\in \mathbb{R}^d$.
\end{remark}
As a result of the Assumptions \ref{ass:reminder b N growth} and \ref{ass:reminder sigma N growth}, we get the following remarks.
\begin{remark}\label{rem:reminder b N growth}
Notice that, for all $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1\in(\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}_1^b)\cap (\mathcal{M}_2 \cup\mathcal{M}_3)$, the expansion of $J^{\beta_{r+1}}_{i_{r+1}}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(x,k_1)-J^{\beta_1}_{i_1}b^k(x,i_1))$ consists of $2^r$ terms of the type $J^{\beta_{r+1}}_{i_{r+1}}$ $J^{\beta_r}_{\rho_r}\cdots J^{\beta_2}_{\rho_2}J^{\beta_1}_{\rho_1}b^k(x,\rho_1)$ where $\rho_j\in\{i_j,k_j\}$ for all $j\in\{1,\ldots,r\}$, $k\in\{1,\ldots,d\}$, $i_1, \ldots, i_{r+1}, k_1,\ldots,k_r\in\mathcal{S}$, $ \mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots,2\gamma-2\}$.
Thus, Assumption \ref{ass:reminder b N growth} yields,
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}&J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(x,k_1)-J^{\beta_1}_{i_1}b^k(x,i_1))|\leq C2^r(1+|x|)\leq C2^{2\gamma-2}(1+|x|)\leq C(1+|x|)
\end{align*}
for all $x\in \mathbb{R}^d$.
\end{remark}
\begin{remark}\label{rem:reminder sigma N growth}
By using Assumption \ref{ass:reminder sigma N growth} and adopting similar arguments as in Remark \ref{rem:reminder b N growth}, for all $\beta\in(\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}_1^\sigma)\cap (\mathcal{M}_2 \cup\mathcal{M}_3)$, $i_1,\ldots,i_{r+1}\in\mathcal{S}$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $x\in \mathbb{R}^d$, we have
\begin{align*}
|J^{\beta_{r+1}}_{i_{r+1}}&J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}\sigma^{(k,j)}(x,k_1)-J^{\beta_1}_{i_1}\sigma^{(k,j)}(x,i_1))|\leq C(1+|x|)
\end{align*}
where $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$, $\mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots,2\gamma-1\}$.
\end{remark}
\section{It\^o-Taylor expansion}
\label{sec:ito}
In this section, we prove our first main result on the It\^o-Taylor expansion, \textit{i.e.}, Theorem \ref{thm:Ito-Taylor}, but first we establish several lemmas and corollaries required in the proof.
The proof of the following lemma can be found in \cite{nguyen2017milstein} (see Lemma 2.2).
\begin{lemma}\label{lem:ItowMS}
Let $f:\mathbb{R}^d\times\mathcal{S}\mapsto\mathbb{R}$ be a twice continuously differentiable function in the first argument.
Then,
\begin{align*}
f(X(t),\alpha(t))=&f(X(s),\alpha(s))+\int^t_sL^{0}_{\alpha(u)}f(X(u),\alpha(u))du+\sum_{j=1}^m\int^t_sL^j_{\alpha(u)}f(X(u),\alpha(u))dW^{j}(u)
\\
&\quad+\sum_{i_0\neq k_0}\int^t_s(f(X(u),k_0)-f(X(u),i_0))d[M_{i_0k_0}](u)
\\
=& I_{\nu}[f(X(s),\alpha(s))]_{s,t}+I_{(0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{j=1}^mI_{(j)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{r=1}^{\mu}I_{(N_r)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(\bar{N}_{\mu})}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for all $s<t\in[0,T]$.
\end{lemma}
When the state of the Markov chain $\alpha$ is fixed, we obtain the following corollary.
\begin{corollary}\label{cor:Ito}
For every $i_0 \in \mathcal{S}$, let $f(\cdot,i_0):\mathbb{R}^d\mapsto\mathbb{R}$ be a twice continuously differentiable function. Then,
\begin{align*}
f(X(t),i_0)=&f(X(s),i_0)+\int^t_sL^0_{\alpha(u)}f(X(u),i_0)du+\sum^m_{j=1}\int^t_sL^j_{\alpha(u)}f(X(u),i_0)dW^{j}(u)
\\
=&I_{\nu}[f(X(s),i_0)]_{s,t}+I_{(0)}[f(X(\cdot),i_0)]_{s,t}+\sum_{j=1}^mI_{(j)}[f(X(\cdot),i_0)]_{s,t}
\end{align*}
almost surely for all $s<t\in[0,T]$.
\end{corollary}
For the following lemma, we define
\begin{align*}
\mathbbm{I}^{j}_{(s,t]}=\begin{cases}
\mathbbm{1}\{N^{(s,t]}=r\} & \mbox{ if } j= N_r, r\in\{1,\ldots,\mu\}
\\
\mathbbm{1}\{N^{(s,t]}>\mu\} & \mbox{ if } j= \bar{N}_\mu
\end{cases}
\end{align*}
for any $s<t\in[0,T]$ where $j\in\{N_1, \ldots, N_\mu, \bar{N}_\mu\}$.
\begin{lemma}\label{lem:Ito on beta}
Let $\beta\in\mathcal{M}$ be such that $l(\beta) \in \{2, 3,\ldots\}$. Then, for a sufficiently smooth function $f:\mathbb{R}^d\times\mathcal{S}\mapsto\mathbb{R}$, we have
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))&]_{s,t}
=\begin{cases}
\displaystyle I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} &\mbox{if }\beta\in\mathcal{M}_1
\\
\displaystyle I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} &\mbox{if }\beta\in\mathcal{M}_2
\\
\displaystyle I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j=0}^m I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} &\mbox{if }\beta\in\mathcal{M}_3
\end{cases}
\end{align*}
almost surely for any $s<t\in[0,T]$, provided all the multiple integrals appearing above exist.
If $l(\beta)=0$, then the cases $\beta \in \mathcal{M}_2$ and $\beta \in \mathcal{M}_3$ on the right side of the above expression are dropped, and when $l(\beta)=1$, the case $\beta\in \mathcal{M}_2$ does not appear.
\end{lemma}
\begin{proof}
First, we establish the result for $l(\beta)\in \{0,1\}$.
Let $l(\beta)=0$, \textit{i.e.}, $\beta=\nu\in\mathcal{M}_1$.
By using the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Lemma \ref{lem:ItowMS}, we have,
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&f(X(t),\alpha(t))
\\
=&I_{\nu}[f(X(s),\alpha(s))]_{s,t}+I_{(0)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{j=1}^mI_{(j)}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
&+\sum_{r=1}^{\mu}I_{(N_r)}[f(X(\cdot),\alpha(\cdot))]_{s,t}+I_{(\bar{N}_{\mu})}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
=&I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
Now, let $l(\beta)=1$ and $\beta=(j_1)\in \mathcal{M}_1$ for $j_1 \in \{0,1,\ldots,m\}$.
As before, we use the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Lemma \ref{lem:ItowMS} to obtain,
\begin{align*}
I_{\beta}[f(X(\cdot),&\alpha(\cdot))]_{s,t}=\int_s^tL^{j_1}_{\alpha(s_1)} f(X(s_1),\alpha(s_1))dW^{j_1}(s_1)
\\
=&\int_s^t\Big(L^{j_1}_{\alpha(s)} f(X(s),\alpha(s))+\sum_{j=0}^m\int_{s}^{s_1}L^j_{\alpha(s_2)}L^{j_1}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))dW^j(s_2)
\\
&+\sum_{r=1}^\mu\sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}=r\}(L^{j_1}_{k_0} f(X(s_2),k_0)-L^{j_1}_{i_0} f(X(s_2),i_0))d[M_{i_0k_0}](s_2)
\\
&+\sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{1}\{N^{(s,s_1]}>\mu\}(L^{j_1}_{k_0} f(X(s_2),k_0)-L^{j_1}_{i_0} f(X(s_2),i_0))d[M_{i_0k_0}](s_2)\Big)dW^{j_1}(s_1)
\\
=&I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
Similarly, let $l(\beta)=1$ and $\beta=(j_1)\in\mathcal{M}_3$ for $j_1\in\{N_1, \ldots, N_\mu, \bar{N}_\mu\}$. Then, on using the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Corollary \ref{cor:Ito}, one has
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}^{j_1}_{(s,t]}(f(X(s_1),k_0)-f(X(s_1),i_0))d[M_{i_0k_0}](s_1)
\\
=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}^{j_1}_{(s,t]}\Big((f(X(s),k_0)-f(X(s),i_0))
\\
&+\sum_{j=0}^m\int_s^{s_1}(L^j_{\alpha(s_2)}f(X(s_2),k_0)-L^j_{\alpha(s_2)}f(X(s_2),i_0))dW^j(s_2)\Big)d[M_{i_0k_0}](s_1)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j=0}^m I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
We use mathematical induction to prove the case $l(\beta) \in \{2,3,\ldots\}$.
First, we establish the base case, \textit{i.e.}, $l(\beta)=2$.
Let $\beta=(j_1,j_2)\in \mathcal{M}_1$.
Due to the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Lemma \ref{lem:ItowMS}, we obtain
\begin{align*}
I_{\beta}[f(X(\cdot),&\alpha(\cdot))]_{s,t}=\int_s^t\int_s^{s_1} L^{j_1}_{\alpha(s_2)}L^{j_2}_{\alpha(s_2)} f(X(s_2),\alpha(s_2))dW^{j_1}(s_2)dW^{j_2}(s_1)
\\
=&\int_s^t\int_s^{s_1}\Big( L^{j_1}_{\alpha(s)}L^{j_2}_{\alpha(s)} f(X(s),\alpha(s))+\sum_{j=0}^m\int_{s}^{s_2}L^j_{\alpha(s_3)}L^{j_1}_{\alpha(s_3)}L^{j_2}_{\alpha(s_3)} f(X(s_3),\alpha(s_3))dW^j(s_3)
\\
&+\sum_{r=1}^\mu\sum_{i_0\neq k_0}\int_s^{s_2}\mathbbm{1}\{N^{(s,s_2]}=r\}(L^{j_1}_{k_0}L^{j_2}_{k_0} f(X(s_3),k_0)-L^{j_1}_{i_0}L^{j_2}_{i_0}f(X(s_3),i_0))d[M_{i_0k_0}](s_3)
\\
&+\sum_{i_0\neq k_0}\int_s^{s_2}\mathbbm{1}\{N^{(s,s_2]}>\mu\}(L^{j_1}_{k_0}L^{j_2}_{k_0} f(X(s_3),k_0)
\\
&\qquad-L^{j_1}_{i_0}L^{j_2}_{i_0}f(X(s_3),i_0))d[M_{i_0k_0}](s_3)\Big)dW^{j_1}(s_2)dW^{j_2}(s_1)
\\
=&I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
Now, let $\beta=(j_1,j_2)\in \mathcal{M}_2$.
As before, the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Lemma \ref{lem:ItowMS} yield
\begin{align*}
I_{\beta}[f(X(\cdot),&\alpha(\cdot))]_{s,t}
\\
=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}^{j_2}_{(s,t]}\int_s^{s_1}(L^{j_1}_{\alpha(s_2)}f(X(s_2),k_0)-L^{j_1}_{\alpha(s_2)}f(X(s_2),i_0))dW^{j_1}(s_2)d[M_{i_0k_0}](s_1)
\\
=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}^{j_2}_{(s,t]}\int_s^{s_1}\Big((L^{j_1}_{\alpha(s)}f(X(s),k_0)-L^{j_1}_{\alpha(s)}f(X(s),i_0))
\\
&+\sum_{j=0}^m\int_{s}^{s_2}L^j_{\alpha(s_3)}(L^{j_1}_{\alpha(s_3)}f(X(s_3),k_0)-L^{j_1}_{\alpha(s_3)}f(X(s_3),i_0))dW^j(s_3)
\\
&+\sum_{r=1}^\mu\sum_{i_1\neq k_1}\int_s^{s_2}\mathbbm{1}\{N^{(s,s_2]}=r\}\big((L^{j_1}_{k_1}f(X(s_3),k_0)-L^{j_1}_{i_1}f(X(s_3),k_0))
\\
&-(L^{j_1}_{k_1}f(X(s_3),i_0)-L^{j_1}_{i_1}f(X(s_3),i_0))\big)d[M_{i_1k_1}](s_3)
\\
&+\sum_{i_1\neq k_1}\int_s^{s_2}\mathbbm{1}\{N^{(s,s_2]}>\mu\}\big((L^{j_1}_{k_1}f(X(s_3),k_0)-L^{j_1}_{i_1}f(X(s_3),k_0))
\\
&-(L^{j_1}_{k_1}f(X(s_3),i_0)-L^{j_1}_{i_1}f(X(s_3),i_0))\big)d[M_{i_1k_1}](s_3)\Big)dW^{j_1}(s_2)d[M_{i_0k_0}](s_1)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
Further, let $\beta=(j_1,j_2)\in \mathcal{M}_3$.
By the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Corollary \ref{cor:Ito}, we have
\begin{align*}
I_{\beta}&[f(X(\cdot),\alpha(\cdot))]_{s,t}=\int_s^t\sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{I}_{(s,s_1]}^{j_1} (L^{j_2}_{k_0} f(X(s_2),k_0)-L^{j_2}_{i_0} f(X(s_2),i_0))d[M_{i_0k_0}](s_2)dW^{j_2}(s_1)
\\
=&\int_s^t\sum_{i_0\neq k_0}\int_s^{s_1}\mathbbm{I}_{(s,s_1]}^{j_1}\Big( (L^{j_2}_{k_0} f(X(s),k_0)-L^{j_2}_{i_0} f(X(s),i_0))
\\
&+\sum_{j=0}^m\int_s^{s_2}(L^j_{\alpha(s_3)}L^{j_2}_{k_0} f(X(s_3),k_0)-L^j_{\alpha(s_3)}L^{j_2}_{i_0} f(X(s_3),i_0))dW^j(s_3)\Big)d[M_{i_0k_0}](s_2)dW^{j_2}(s_1)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j=0}^m I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
For the inductive arguments, we assume that the result holds for $l(\beta)=k-1\geq 2$.
Now, consider $\beta=(j_1,\ldots,j_k)\in\mathcal{M}$ and $k\geq 3$.
\newline
\textbf{Case 1.} Let $\beta\in\mathcal{M}_1 $.
Then, the definition of multiple integrals from Subsection \ref{sub:multiple integral} gives
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\int_s^tI_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}dW^{j_k}(u)
\end{align*}
almost surely for any $s<t\in[0,T]$.
Notice that $\beta-\in\mathcal{M}_1$ and the inductive hypothesis is
\begin{align}
I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}=&I_{\beta-}[L^{j_k}_{\alpha(s)}f(X(s),\alpha(s))]_{s,u}\notag
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}\notag
\end{align}
which by the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields,
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} =&\int_s^t\Big(I_{\beta-}[L^{j_k}_{\alpha(s)}f(X(s),\alpha(s))]_{s,u}
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}\Big)dW^{j_k}(u)
\\
=& I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
\newline
\textbf{Case 2.}
Let $\beta\in\mathcal{M}_2$ and $j_k\notin\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$.
Due to the definition of multiple integrals from Subsection \ref{sub:multiple integral}, we obtain
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\int_s^tI_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}dW^{j_k}(u)
\end{align*}
almost surely for any $s<t\in[0,T]$.
As before, notice that $\beta-\in\mathcal{M}_2$ and the inductive hypothesis is
\begin{align}
I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}=&I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(s),\alpha(\cdot))]_{s,u}\notag
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u} \notag
\end{align}
almost surely for any $s<u\in[0,T]$.
Thus, by applying the above equation and the definition of multiple integrals from Subsection \ref{sub:multiple integral}, one has
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\int_s^t\Big(I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(s),\alpha(\cdot))]_{s,u}
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}\Big)dW^{j_k}(u)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
\newline
\textbf{Case 3.} Let $\beta\in\mathcal{M}_2 $ and $j_k\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$.
By using the definition of multiple integrals from Subsection \ref{sub:multiple integral}, we can write
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}_{(s,t]}^{j_k}I_{\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}d[M_{i_0k_0}](u)
\end{align*}
almost surely for any $s<t\in[0,T]$.
Notice that $\beta-\in\mathcal{M}_1\cup\mathcal{M}_2$ and the inductive hypothesis is
\begin{align}
I_{\beta-}[f(X(\cdot),k_0)&-f(X(\cdot),i_0)]_{s,u}=I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,u}\notag
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}\notag
\end{align}
almost surely for any $s<u\in[0,T]$.
Further, the above equation and the definition of multiple integrals from Subsection \ref{sub:multiple integral} yield
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}_{(s,t]}^{j_k}\Big(I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,u}
\\
&+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}\Big)d[M_{i_0k_0}](u)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
\newline
\textbf{Case 4.} Let $\beta\in\mathcal{M}_3 $ and $j_k\notin\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$.
By the definition of multiple integrals from Subsection \ref{sub:multiple integral}, we have
\begin{align*}
I_{\beta}[f(X(\cdot)&,\alpha(\cdot))]_{s,t}=\int_s^tI_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}dW^{j_k}(u)
\end{align*}
almost surely for any $s<t\in[0,T]$.
In this case, $\beta-\in\mathcal{M}_3$ and the inductive hypothesis is
\begin{align*}
I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}=&I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(s),\alpha(\cdot))]_{s,u}+\sum_{j=0}^{m} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}
\end{align*}
almost surely for any $s<u\in[0,T]$.
By the definition of multiple integrals from Subsection \ref{sub:multiple integral} and inductive hypothesis, we have
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} =&\int_s^t\Big(I_{\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(s),\alpha(\cdot))]_{s,u}
\\
&+\sum_{j=0}^{m} I_{(j)\star\beta-}[L^{j_k}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}\Big)dW^{j_k}(u)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j=0}^{m} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$.
\newline
\textbf{Case 5.} Let $\beta\in\mathcal{M}_3 $ and $j_k\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu }\}$.
The definition of multiple integrals from Subsection \ref{sub:multiple integral} gives
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}_{(s,t]}^{j_k}I_{\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}d[M_{i_0k_0}](u)
\end{align*}
almost surely for any $s<t\in[0,T]$.
Notice that $\beta-\in\mathcal{M}_3$; the inductive hypothesis is then written as
\begin{align}
I_{\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}=&I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,u}\notag
\\
&+\sum_{j=0}^{m} I_{(j)\star\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}\notag
\end{align}
almost surely for any $s<u\in[0,T]$.
Hence, the above equation and the definition of multiple integrals from Subsection \ref{sub:multiple integral} give
\begin{align*}
I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{i_0\neq k_0}\int_s^t\mathbbm{I}_{(s,t]}^{j_k}\Big(I_{\beta-}[f(X(s),k_0)-f(X(s),i_0)]_{s,u}
\\
&+\sum_{j=0}^{m} I_{(j)\star\beta-}[f(X(\cdot),k_0)-f(X(\cdot),i_0)]_{s,u}\Big)d[M_{i_0k_0}](u)
\\
=&I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{j=0}^{m} I_{(j)\star\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for any $s<t\in[0,T]$. This completes the proof.
\end{proof}
Now we prove one of the main results of this article, \textit{i.e.}, Theorem \ref{thm:Ito-Taylor}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:Ito-Taylor}}]
If $\mathcal{A}=\varphi$, then the proof is trivial.
We shall prove Theorem \ref{thm:Ito-Taylor} for $\mathcal{A}\neq \varphi$ by induction on
\begin{equation}
\eta(\mathcal{A})=\sup_{\beta\in\mathcal{A}}\eta(\beta) \nonumber
\end{equation}
where $\eta(\beta)$ is defined in Subsection \ref{sub:Multi-indices}.
Let $\eta(\mathcal{A})=0$.
Then, $\mathcal{A}=\{\nu\}$ with the remainder set
\begin{itemize}
\item[] $\mathcal{B}(\mathcal{A})=\big\{(N_1),\ldots,(N_{\mu}),(\bar{N}_{\mu}),(0),(1),\ldots,(m)\big\}$ and $\tilde{\mathcal{A}}=\varphi$.
\end{itemize}
Thus, on using Lemma \ref{lem:ItowMS} and the definition of multiple integrals from Subsection \ref{sub:multiple integral}, one has
\begin{align*}
f(X(t),\alpha(t))&=f(X(s),\alpha(s))+\int^t_sL^{0}_{\alpha(u)}f(X(u),\alpha(u))du+\sum_{j=1}^m\int^t_sL^j_{\alpha(u)}f(X(u),\alpha(u))dW^{j}(u)
\\
&+\sum_{r=1}^{\mu}\sum_{i_0\neq k_0}\int^t_s\mathbbm{1}\{N^{(s,t]}=r\} (f(X(u),k_0)-f(X(u),i_0))d[M_{i_0k_0}](u)
\\
&+\sum_{i_0\neq k_0}\int^t_s\mathbbm{1}\{N^{(s,t]}>\mu\} (f(X(u),k_0)-f(X(u),i_0))d[M_{i_0k_0}](u)
\\
=&\sum_{\beta\in\mathcal{A}\setminus\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{\beta\in\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\mathcal{B}(\mathcal{A})}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} \nonumber
\end{align*}
almost surely for all $s<t\in[0,T]$.
Thus, the result holds for $\eta(\mathcal{A})=0$.
Now, let $\eta(\mathcal{A})\geq 1$ and $\zeta:=\{\beta\in\mathcal{A}:\:\eta(\beta)\leq \eta(\mathcal{A})-1\}$.
Notice that $\zeta$ is a hierarchical set.
Indeed, $\nu \in \zeta$ because $\nu\in \mathcal{A}$ and $\eta(\nu)=0\leq \eta(\mathcal{A})-1$.
Also, for any $\beta \in \zeta\setminus \{\nu\}$, $\beta\in \mathcal{A}$ and $\eta(\beta)\leq \eta(\mathcal{A})-1$ which implies $-\beta\in \mathcal{A}$ and $\eta(-\beta)\leq \eta(\beta)\leq \eta(\mathcal{A})-1$.
Further, the inductive hypothesis is
\begin{align}
f(X(t),\alpha(t))=&\sum_{\beta\in \zeta\setminus\tilde{\zeta}}I_{\beta}\big[f(X(s),\alpha(s))\big]_{s,t}+\sum_{\beta\in\tilde{\zeta}}I_{\beta}\big[f(X(s),\alpha(\cdot))\big]_{s,t}\nonumber
\\
&+\sum_{\beta\in\mathcal{B}(\zeta)}I_{\beta}\big[f(X(\cdot),\alpha(\cdot))\big]_{s,t} \label{eq:ind.Ito Taylor Proof}
\end{align}
almost surely for all $s<t\in[0,T]$ where $\tilde{\zeta}:=\{(j_1,\ldots,j_l)\in\zeta: j_i\in\{ N_1,\ldots,N_{\mu},\bar{N}_{\mu}\}\text{ for some }i\in\{1,\ldots,l\} \}$.
We write $\mathcal{B}(\zeta)$ as $\mathcal{B}(\zeta)=\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3\cup(\mathcal{B}(\zeta)\setminus(\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3))$ where
\begin{itemize}
\item[] $\bar{\mathcal{B}}_1:=\{(j_1,\ldots,j_l)\in\mathcal{B}(\zeta) \cap \mathcal{A}: j_i\in\{0,1,\ldots,m\}\ \forall\, i\in\{1,\ldots,l\}\},$
\item[] $\bar{\mathcal{B}}_2:=\{(j_1,\ldots,j_l)\in \mathcal{B}(\zeta) \cap \mathcal{A}: j_1\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu}\}\}$,
\item[] $\bar{\mathcal{B}}_3:=\{(j_1,\ldots,j_l)\in(\mathcal{B}(\zeta)\cap \mathcal{A})\setminus\bar{\mathcal{B}}_1: j_1\in\{0,1,\ldots,m\}\}$.
\end{itemize}
Notice that $\bar{\mathcal{B}}_1\subset\mathcal{M}_1$, $\bar{\mathcal{B}}_2\subset\mathcal{M}_3$ and $\bar{\mathcal{B}}_3\subset\mathcal{M}_2$.
By using Lemma \ref{lem:Ito on beta} for $\beta\in\bar{\mathcal{B}}_1$, we have
\begin{align}
\sum_{\beta\in\bar{\mathcal{B}}_1}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{\beta\in\bar{\mathcal{B}}_1}I_{\beta}[f(X(s),\alpha(s))]_{s,t}\nonumber
\\
&+\sum_{\beta\in\bar{\mathcal{B}}_1}\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star \beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}\nonumber
\\
=&\sum_{\beta\in\bar{\mathcal{B}}_1}I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_1}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} \label{eq:beta1 in ito}
\end{align}
almost surely for all $s<t\in[0,T]$ where
\begin{itemize}
\item[] $\tilde{\bar{\mathcal{B}}}_1:=\{(j)\star \beta:\beta\in\bar{\mathcal{B}}_1,j\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu },0,1,\ldots,m\}\}$
\end{itemize}
and $\eta(\beta)\geq\eta(\mathcal{A})+1$ for all $\beta\in\tilde{\bar{\mathcal{B}}}_1$.
Further, for $\beta\in\bar{\mathcal{B}}_2$, Lemma \ref{lem:Ito on beta} yields
\begin{align}
\sum_{\beta\in\bar{\mathcal{B}}_2}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{\beta\in\bar{\mathcal{B}}_2}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\bar{\mathcal{B}}_2}\sum_{j=0}^m I_{(j)\star \beta}[f(X(\cdot),\alpha(\cdot))]_{s,t} \nonumber
\\
=&\sum_{\beta\in\bar{\mathcal{B}}_2}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_2}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}\label{eq:beta2 in ito}
\end{align}
almost surely for all $s<t\in[0,T]$ where
\begin{itemize}
\item[] $\tilde{\bar{\mathcal{B}}}_2:=\{(j)\star \beta:\beta\in \bar{\mathcal{B}}_2,j\in\{0,1,2,\ldots,m\}\}$
\end{itemize}
and $\eta(\beta)\geq\eta(\mathcal{A})+1$ for all $\beta\in\tilde{\bar{\mathcal{B}}}_2$.
Similarly, on using Lemma \ref{lem:Ito on beta} for $\beta\in\bar{\mathcal{B}}_3$, we obtain
\begin{align*}
\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}\nonumber
\\
&+\sum_{\beta\in\bar{\mathcal{B}}_3}\sum_{j\in\{N_1,\ldots,N_{\mu },\bar{N}_{\mu },0,1,\ldots,m\}} I_{(j)\star \beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\\
=&\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_3}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for all $s<t\in[0,T]$ where
\begin{itemize}
\item[] $\tilde{\bar{\mathcal{B}}}_3:=\{(j)\star \beta:\beta\in\bar{\mathcal{B}}_3,j\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu},0,1,\ldots,m\}\}$.
\end{itemize}
Now notice that there may be some $\beta\in\tilde{\bar{\mathcal{B}}}_3$ such that $\eta(\beta)=\eta(\mathcal{A})$ and $\beta\in\mathcal{A}$. Thus, we define
\begin{itemize}
\item[] $\bar{\mathcal{B}}_4:=\tilde{\bar{\mathcal{B}}}_3\cap \mathcal{A}$.
\end{itemize}
Clearly, $\bar{\mathcal{B}}_4\subset\mathcal{M}_3$.
By the application of Lemma \ref{lem:Ito on beta} for $\beta\in \bar{\mathcal{B}}_4$, the above equation can be written as
\begin{align}
\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}=&\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\bar{\mathcal{B}}_4}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t} \nonumber
\\
& +\sum_{\beta\in\bar{\mathcal{B}}_4}\sum_{j=0}^m I_{(j)\star \beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_3\setminus\bar{\mathcal{B}}_4}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}\nonumber
\\
=&\sum_{\beta\in\bar{\mathcal{B}}_3}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\bar{\mathcal{B}}_4}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}\nonumber
\\
&+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_4}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\tilde{\bar{\mathcal{B}}}_3\setminus\bar{\mathcal{B}}_4}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}\label{eq:beta3 in ito}
\end{align}
almost surely for all $s<t\in[0,T]$ where $\tilde{\bar{\mathcal{B}}}_4:=\big\{(j)\star \beta: \beta\in \bar{\mathcal{B}}_4,\:j\in\{0,1,2,\ldots,m\}\big\}$ and $\eta(\beta)\geq\eta(\mathcal{A})+1$ for all $\beta\in\tilde{\bar{\mathcal{B}}}_4$.
Notice that, $\zeta\cup\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3\cup\bar{\mathcal{B}}_4\subset\mathcal{A}$.
Clearly $\nu\in\zeta$.
Let us consider $\beta\in\mathcal{A}\setminus\{\nu\}$ to prove $\mathcal{A}\subset\zeta\cup\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3\cup\bar{\mathcal{B}}_4$.
If $\eta(\beta)\leq \eta(\mathcal{A})-1$, then $\beta\in\zeta$.
Now, let $\eta(\beta)=\eta(\mathcal{A})$ which implies either $\eta(-\beta)<\eta(\beta)$ or $\eta(-\beta)=\eta(\beta)$.
If $\eta(-\beta)<\eta(\beta)$, then the definition of hierarchical and remainder sets from Subsection \ref{sub:hierarchical and reminder} yields $-\beta\in\zeta$ and $\beta\in\mathcal{B}(\zeta)$, which gives $\beta\in\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3$.
Further, if $\eta(-\beta)=\eta(\beta)$, then $-\beta\in\bar{\mathcal{B}}_3$ and $\beta\in\bar{\mathcal{B}}_4$.
Hence
\begin{itemize}
\item[] $\mathcal{A}=\zeta\cup\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3\cup\bar{\mathcal{B}}_4$ and $\tilde{\mathcal{A}}=\zeta\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3\cup\bar{\mathcal{B}}_4$.
\end{itemize}
Moreover,
\begin{itemize}
\item[] $(\mathcal{B}(\zeta)\setminus (\bar{\mathcal{B}}_1\cup\bar{\mathcal{B}}_2\cup\bar{\mathcal{B}}_3))\cup\tilde{\bar{\mathcal{B}}}_1\cup\tilde{\bar{\mathcal{B}}}_2\cup(\tilde{\bar{\mathcal{B}}}_3\setminus\bar{\mathcal{B}}_4)\cup\tilde{\bar{\mathcal{B}}}_4=\{\beta\in\mathcal{M}\setminus\mathcal{A} :-\beta\in\zeta\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\bar{\mathcal{B}}_{1}\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\bar{\mathcal{B}}_{2}\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\bar{\mathcal{B}}_{3}\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}:-\beta\in\bar{\mathcal{B}}_4\}=\mathcal{B}(\mathcal{A})$.
\end{itemize}
Thus, by using \eqref{eq:beta1 in ito}, \eqref{eq:beta2 in ito} and \eqref{eq:beta3 in ito} in \eqref{eq:ind.Ito Taylor Proof}, we obtain
\begin{align*}
f(X(t),\alpha(t))=&\sum_{\beta\in\mathcal{A}\setminus\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(s))]_{s,t}+\sum_{\beta\in\tilde{\mathcal{A}}}I_{\beta}[f(X(s),\alpha(\cdot))]_{s,t}+\sum_{\beta\in\mathcal{B}(\mathcal{A})}I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t}
\end{align*}
almost surely for all $s<t\in[0,T]$. This completes the proof.
\end{proof}
\begin{remark}
If there are no jumps, then the sets $\tilde{\zeta},\bar{\mathcal{B}}_2,\bar{\mathcal{B}}_3,\tilde{\bar{\mathcal{B}}}_2,\tilde{\bar{\mathcal{B}}}_3,\bar{\mathcal{B}}_4,\tilde{\bar{\mathcal{B}}}_4$ in the preceding proof are empty and SDEwMS \eqref{eq:sdems} reduces to an SDE.
As a result, by repeating the preceding steps, we recover Theorem 5.5.1 of \cite{kloeden1992numerical}, \textit{i.e.}, the It\^o-Taylor expansion for SDEs.
\end{remark}
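As an illustration of the preceding remark, consider the jump-free setting with $m=1$ and the hierarchical set $\mathcal{A}=\{\nu,(0),(1)\}$, suppressing the (now constant) Markov chain state from the notation. Then $\tilde{\mathcal{A}}=\varphi$, $\mathcal{B}(\mathcal{A})=\{(0,0),(1,0),(0,1),(1,1)\}$, and Theorem \ref{thm:Ito-Taylor} reduces to the classical expansion
\begin{align*}
f(X(t))=f(X(s))+\int_s^t L^{0}f(X(s))\,du+\int_s^t L^{1}f(X(s))\,dW^{1}(u)+\sum_{\beta\in\mathcal{B}(\mathcal{A})}I_{\beta}[f(X(\cdot))]_{s,t}
\end{align*}
almost surely for all $s<t\in[0,T]$, which is the expansion underlying the Euler--Maruyama scheme.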
\section{Explicit $\gamma$-order scheme for SDEwMS}
\label{sec:scheme der., moment, convergence}
This section is devoted to the derivation of the $\gamma$-order numerical scheme, $\gamma\in \{n/2:n\in\mathbb{N}\}$, for SDEwMS \eqref{eq:sdems} and to the proof of its moment stability and strong rate of convergence.
\subsection{Derivation}
\label{sec:Derivation of scheme}
In this subsection, we present the derivation of the explicit $\gamma\in \{n/2:n\in\mathbb{N}\}$-order scheme for SDEwMS \eqref{eq:sdems}.
For this, we recall the definitions of the hierarchical sets $\mathcal{A}_{\gamma-0.5}^b$ and $\mathcal{A}_{\gamma-0.5}^\sigma$ from Subsection \ref{sub:scheme} and set $\mathcal{A}_{0}^b=\mathcal{A}_{0}^\sigma=\varphi$.
Then, we use Theorem \ref{thm:Ito-Taylor} to obtain,
\begin{align*}
b^k(X(s),\alpha(s))=&\sum_{\beta\in\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}
+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
\sigma^{(k,j)}(X(s),\alpha(s))=&\sum_{\beta\in\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$, $j\in\{1,\ldots,m\}$, $k\in\{1,\ldots,d\}$ and $s\in [t_n, t_{n+1}]$.
Thus, from \eqref{eq:sdems}, we have,
\begin{align*}
X^k(t_{n+1}&)=X^k(t_{n})+\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}\Big)dW^j(s)
\end{align*}
almost surely for any $k \in \{1,\ldots,d\}$ and $n \in \{0, \ldots, n_T-1\}$.
Let $\mu=2\gamma$ be fixed and notice that the remainder sets $\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)$ and $\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)$ contain elements of $\mathcal{A}_\gamma^b$ and $\mathcal{A}_\gamma^\sigma$, which are identified with the following sets,
\begin{itemize}
\item[] $\mathcal{B}_1^b:=\{\nu\}\cup\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)\cap \mathcal{A}_{\gamma}^b:j_i\notin\{N_1,\ldots,N_{\mu}\}\ \forall\, i\in\{1,\ldots,l\}\},$
\item[] $\mathcal{B}_1^\sigma:=\{\nu\}\cup\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)\cap\mathcal{A}_{\gamma}^\sigma:j_i\notin\{N_1,\ldots,N_{\mu}\}\ \forall\, i\in\{1,\ldots,l\}\},$
\item[] $\mathcal{B}_2^b:=\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)\cap \mathcal{A}_{\gamma}^b:j_1\in\{N_1,\ldots,N_{\mu}\}\},$
\item[] $\mathcal{B}_2^\sigma:=\{(j_1,\ldots,j_l)\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma) \cap \mathcal{A}_{\gamma}^\sigma: j_1\in\{N_1,\ldots,N_{\mu}\}\},$
\item[] $\mathcal{B}_3^b:=\{(j_1,\ldots,j_l)\in(\mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)\cap \mathcal{A}_{\gamma}^b ) \setminus\mathcal{B}_1^b:j_1\in\{0,1,\ldots,m\}\},$
\item[] $\mathcal{B}_3^\sigma:=\{(j_1,\ldots,j_l)\in( \mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)\cap \mathcal{A}_{\gamma}^\sigma ) \setminus\mathcal{B}_1^\sigma:j_1\in\{0,1,\ldots,m\}\}$.
\end{itemize}
Clearly, $\mathcal{B}_2^b, \mathcal{B}_2^\sigma \subset \mathcal{M}_3$ and $\mathcal{B}_3^b, \mathcal{B}_3^\sigma \subset \mathcal{M}_2$.
Similarly, some elements of $\mathcal{B}(\mathcal{A}_\gamma^b)$ and $\mathcal{B}(\mathcal{A}_\gamma^\sigma)$ are identified by the following sets,
\begin{itemize}
\item[] $\mathcal{B}_4^b:= \mathcal{B}(\mathcal{A}_{\gamma-0.5}^b)\setminus(\mathcal{B}_1^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b),$
\item[] $\mathcal{B}_4^\sigma:= \mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)\setminus(\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma)$.
\end{itemize}
Thus, by using the sets defined above, we write,
\begin{align*}
X^k(t_{n+1})&=X^k(t_{n})+\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^b\cup(\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b)}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_2^b\cup\mathcal{B}_3^b\cup\mathcal{B}_4^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_n),\alpha(t_n))]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^\sigma\cup(\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_4^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}\Big)dW^j(s)
\end{align*}
which on the application of Lemma \ref{lem:Ito on beta} for $\beta\in\mathcal{B}_2^b\cup\mathcal{B}_3^b\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma$ yields
\begin{align*}
X^k(t_{n+1})&=X^k(t_{n})+\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^b\cup(\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b)}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}+\sum_{\beta\in\mathcal{B}_2^b}\sum_{k_1=0}^mI_{(k_1)\star\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_3^b}\sum_{k_1\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu},0,1,\ldots,m\}}I_{(k_1)\star\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}+\sum_{\beta\in\mathcal{B}_4^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_n),\alpha(t_n))]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^\sigma\cup(\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_2^\sigma}\sum_{k_1=0}^mI_{(k_1)\star\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_3^\sigma}\sum_{k_1\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu},0,1,\ldots,m\}}I_{(k_1)\star\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}+\sum_{\beta\in\mathcal{B}_4^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}\Big)dW^j(s)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$.
Now, to shorten the notation in the fourth, fifth, tenth and eleventh terms on the right side of the above equation, we define
\begin{itemize}
\item[] $\tilde{\mathcal{B}}_2^b:=\{(k_1)\star \beta:\beta\in\mathcal{B}_2^b,k_1\in\{0,1,\ldots,m\}\},$
\item[] $\tilde{\mathcal{B}}_2^\sigma:=\{(k_1)\star \beta:\beta\in\mathcal{B}_2^\sigma,k_1\in\{0,1,\ldots,m\}\},$
\item[] $\tilde{\mathcal{B}}_3^b:=\{(k_1)\star \beta:\beta\in\mathcal{B}_3^b,k_1\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu},0,1,\ldots,m\}\},$
\item[] $\tilde{\mathcal{B}}_3^\sigma:=\{(k_1)\star \beta:\beta\in\mathcal{B}_3^\sigma,k_1\in\{N_1,\ldots,N_{\mu},\bar{N}_{\mu},0,1,\ldots,m\}\}$.
\end{itemize}
Also notice that some elements of $\mathcal{A}_{\gamma}^b$ and $\mathcal{A}_{\gamma}^\sigma$ are present in $\tilde{\mathcal{B}}_3^b$ and $\tilde{\mathcal{B}}_3^\sigma$, respectively, and we identify them with the help of the following notations,
\begin{itemize}
\item[] $\mathcal{B}_5^b:=\tilde{\mathcal{B}}_3^b\cap \mathcal{A}_{\gamma}^b,$
\item[] $\mathcal{B}_5^\sigma:=\tilde{\mathcal{B}}_3^\sigma\cap \mathcal{A}_{\gamma}^\sigma$.
\end{itemize}
Clearly, $\mathcal{B}_5^b$, $\mathcal{B}_5^\sigma\subset\mathcal{M}_3$.
Thus, by using the above notations, we write,
\begin{align*}
X^k(t_{n+1})&=X^k(t_{n})+\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^b\cup(\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b)}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_5^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_4^b\cup\tilde{\mathcal{B}}_2^b\cup(\tilde{\mathcal{B}}_3^b\setminus\mathcal{B}_5^b)}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_n),\alpha(t_n))]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in\mathcal{B}_1^\sigma\cup(\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_5^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_4^\sigma\cup\tilde{\mathcal{B}}_2^\sigma\cup(\tilde{\mathcal{B}}_3^\sigma\setminus\mathcal{B}_5^\sigma)}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}
\\
&+\sum_{\beta\in\mathcal{B}_1^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}\Big)dW^j(s)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$.
Further, on using Lemma \ref{lem:Ito on beta} for $\beta\in\mathcal{B}_5^b\cup\mathcal{B}_5^\sigma$, we have
\begin{align*}
X^k(t_{n+1})&=X^k(t_{n})+\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in(\mathcal{B}_1^b\cup(\mathcal{A}_{\gamma-0.5}^b\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^b))}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}\notag
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b\cup\mathcal{B}_5^b}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}
+\sum_{\beta\in\mathcal{B}_4^b\cup\tilde{\mathcal{B}}_2^b\cup(\tilde{\mathcal{B}}_3^b\setminus\mathcal{B}_5^b)\cup\tilde{\mathcal{B}}_5^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}\notag
\\
&+\sum_{\beta\in\mathcal{B}_1^b}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_n),\alpha(t_n))]_{t_n,s}\Big)ds\notag
\\
&+\sum_{j=1}^m\int_{t_n}^{t_{n+1}}\Big(\sum_{\beta\in(\mathcal{B}_1^\sigma\cup(\mathcal{A}_{\gamma-0.5}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma))}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}\notag
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}\notag
\\
&+\sum_{\beta\in\mathcal{B}_4^\sigma\cup\tilde{\mathcal{B}}_2^\sigma\cup(\tilde{\mathcal{B}}_3^\sigma\setminus\mathcal{B}_5^\sigma)\cup\tilde{\mathcal{B}}_5^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}\notag
\\
&+\sum_{\beta\in\mathcal{B}_1^\sigma}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}\Big)dW^j(s)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$ where
\begin{itemize}
\item[] $\tilde{\mathcal{B}}_5^b:=\{(k_1)\star \beta:\beta\in\mathcal{B}_5^b,k_1\in\{0,1,\ldots,m\}\},$
\item[] $\tilde{\mathcal{B}}_5^\sigma:=\{(k_1)\star \beta:\beta\in\mathcal{B}_5^\sigma,k_1\in\{0,1,\ldots,m\}\}$.
\end{itemize}
Clearly, $\mathcal{A}_{\gamma-0.5}^\sigma\cup\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma\subset\mathcal{A}_\gamma^\sigma$.
Notice that $\nu\in\mathcal{A}_{\gamma-0.5}^\sigma\cup\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma$.
Now, we prove $\mathcal{A}_\gamma^\sigma\subset\mathcal{A}_{\gamma-0.5}^\sigma\cup\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma$.
For this, let $\beta\in\mathcal{A}_\gamma^\sigma\setminus\{\nu\}$ which implies $\eta(\beta)\leq 2\gamma-1$.
If $\eta(\beta)\leq 2\gamma-2$, then $\beta\in\mathcal{A}_{\gamma-0.5}^\sigma$.
Further, consider $\eta(\beta)=2\gamma-1$ which gives rise to two cases, either $\eta(-\beta)<\eta(\beta)$ or $\eta(-\beta)=\eta(\beta)$.
If $\eta(-\beta)<\eta(\beta)$, then by the definition of hierarchical and remainder sets from Subsection \ref{sub:hierarchical and reminder}, we have $-\beta\in\mathcal{A}_{\gamma-0.5}^\sigma$ and $\beta\in\mathcal{B}(\mathcal{A}_{\gamma-0.5}^\sigma)$ which gives $\beta\in\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma$.
Moreover, if $\eta(-\beta)=\eta(\beta)$, then $-\beta\in\mathcal{B}_3^\sigma$ and $\beta\in\mathcal{B}_5^\sigma$.
Hence
\begin{itemize}
\item[] $\mathcal{A}_\gamma^\sigma=\mathcal{A}_{\gamma-0.5}^\sigma\cup\mathcal{B}_1^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma$ and
\item[] $\tilde{\mathcal{A}}_{\gamma}^\sigma=\tilde{\mathcal{A}}_{\gamma-0.5}^\sigma\cup\mathcal{B}_2^\sigma\cup\mathcal{B}_3^\sigma\cup\mathcal{B}_5^\sigma$.
\end{itemize}
Moreover,
\begin{itemize}
\item[] $\mathcal{B}_4^\sigma\cup\tilde{\mathcal{B}}_1^\sigma\cup\tilde{\mathcal{B}}_2^\sigma\cup(\tilde{\mathcal{B}}_3^\sigma\setminus\mathcal{B}_5^\sigma)\cup\tilde{\mathcal{B}}_5^\sigma=\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma :-\beta\in\mathcal{A}_{\gamma-0.5}^\sigma\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma:-\beta\in\mathcal{B}_{1}^\sigma\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma:-\beta\in\mathcal{B}_{2}^\sigma\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma:-\beta\in\mathcal{B}_{3}^\sigma\}\cup\{\beta\in\mathcal{M}\setminus\mathcal{A}_{\gamma}^\sigma:-\beta\in\mathcal{B}_5^\sigma\}=\mathcal{B}(\mathcal{A}_{\gamma}^\sigma)$.
\end{itemize}
Similarly, we obtain
\begin{itemize}
\item[] $\mathcal{A}_\gamma^b=\mathcal{A}_{\gamma-0.5}^b\cup\mathcal{B}_1^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b\cup\mathcal{B}_5^b$,
\item[] $\tilde{\mathcal{A}}_{\gamma}^b=\tilde{\mathcal{A}}_{\gamma-0.5}^b\cup\mathcal{B}_2^b\cup\mathcal{B}_3^b\cup\mathcal{B}_5^b$,
\item[] $\mathcal{B}(\mathcal{A}_\gamma^b)\setminus \tilde{\mathcal{B}}_1^b=\mathcal{B}_4^b\cup\tilde{\mathcal{B}}_2^b\cup(\tilde{\mathcal{B}}_3^b\setminus\mathcal{B}_5^b)\cup\tilde{\mathcal{B}}_5^b$.
\end{itemize}
Thus, the preceding equation can be written as
\begin{align}\label{eq:gen. scheme derivation}
&X^k(t_{n+1})=X^k(t_{n})+\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_n}^{t_{n+1}}I_{\beta}[b^k(X(t_n),\alpha(t_n))]_{t_n,s}ds\notag
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_n}^{t_{n+1}}I_{\beta}[b^k(X(t_n),\alpha(\cdot))]_{t_n,s}ds+\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^b)\setminus \tilde{\mathcal{B}}_1^b}\int_{t_n}^{t_{n+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_n,s}ds\notag
\\
&+\sum_{\beta\in\mathcal{B}_1^b}\int_{t_n}^{t_{n+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_n),\alpha(t_n))]_{t_n,s}ds\notag
\\
&+\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_n}^{t_{n+1}}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}dW^j(s)\notag
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_n}^{t_{n+1}}I_{\beta}[\sigma^{(k,j)}(X(t_n),\alpha(\cdot))]_{t_n,s}dW^j(s)\notag
\\
&+\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus \tilde{\mathcal{B}}_1^\sigma}\sum_{j=1}^m\int_{t_n}^{t_{n+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_n,s}dW^j(s)\notag
\\
&+\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{j=1}^m\int_{t_n}^{t_{n+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_n),\alpha(t_n))]_{t_n,s}dW^j(s)
\end{align}
almost surely for all $n\in\{0,1,\ldots,n_T-1\}$ and $k\in\{1,\ldots,d\}$.
Hence, the explicit numerical scheme \eqref{eq:gen.scheme} of $\gamma\in \{n/2:n\in\mathbb{N}\}$-order for SDEwMS \eqref{eq:sdems} is obtained by ignoring the fourth, fifth, eighth and ninth terms on the right side of \eqref{eq:gen. scheme derivation}.
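To make the resulting family of schemes concrete, its lowest-order member freezes both $X$ and $\alpha$ at $t_n$ and keeps only the first-order integrals, which amounts to an Euler--Maruyama step with regime-dependent coefficients between grid points. The following is a minimal simulation sketch of that special case; the generator `Q`, the coefficients `b` and `sigma`, and the one-step chain update are purely illustrative stand-ins, not the paper's data.

```python
import math
import random

# Hypothetical 2-state generator Q (rows sum to 0) and regime-dependent
# coefficients; these are illustrative stand-ins only.
Q = [[-1.0, 1.0], [2.0, -2.0]]

def b(x, i):
    return -x if i == 0 else 0.5 * x      # drift b(x, i0)

def sigma(x, i):
    return 0.2 if i == 0 else 0.4         # diffusion sigma(x, i0)

def euler_sdems(x0, i0, T, n_T, rng):
    """One-path Euler-Maruyama discretisation of an SDEwMS: both X and
    alpha are frozen at t_n over each step, as in the first two sums of
    the general scheme at lowest order."""
    h = T / n_T
    x, i = x0, i0
    path = [x]
    for _ in range(n_T):
        dW = rng.gauss(0.0, math.sqrt(h))
        x = x + b(x, i) * h + sigma(x, i) * dW
        # Crude first-order chain update: jump with probability -Q[i][i]*h.
        if rng.random() < -Q[i][i] * h:
            i = 1 - i                     # two states: jump target is forced
        path.append(x)
    return path

rng = random.Random(0)
path = euler_sdems(x0=1.0, i0=0, T=1.0, n_T=200, rng=rng)
```

For a two-state chain the jump target after a switch is forced; with more states one would sample the target from the corresponding row of $Q$, and a higher-order member of the family would add the corresponding iterated integrals.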
\subsection{Moment bound}
In this subsection, we shall show the moment stability of the numerical scheme \eqref{eq:gen.scheme} of $\gamma\in \{n/2:n\in\mathbb{N}\}$-order for SDEwMS \eqref{eq:sdems}.
The result on the moment bound of the scheme \eqref{eq:gen.scheme} is stated in Lemma \ref{lem:schemeMoment}.
We first establish some preliminary lemmas which are required for proving Lemma~\ref{lem:schemeMoment}.
For the following lemmas, let us recall the definitions of $\eta(\beta)$ and $l(\beta)$ from Subsection \ref{sub:Multi-indices} and $J^{\beta}_{i_0}$ from Subsection \ref{sub:Operators}.
It is assumed that all integrals appearing in Lemmas \ref{lem:multiple estimate 0,1} and \ref{lem:multiple estimate N_mu} are well-defined. Also, the proofs require standard stopping time arguments, which we omit for the sake of notational simplicity.
\begin{lemma}
\label{lem:multiple estimate 0,1}
Let $f:\mathbb{R}^d\times\mathcal{S}\mapsto\mathbb{R}$ and $\beta\in\{(j_1,\ldots,j_l)\in\mathcal{M}:j_i\in\{0,1,\ldots,m\}\,\forall\, i\in\{1,\ldots,l\}\}$.
Then,
\begin{align}
E\Big(&\sup_{u\in[s,t]}| I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big) \notag
\\
& \leq C (t-s)^{\eta(\beta)-l(\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(\beta)-1}}E\Big(|J^{\beta}_{\alpha(t_{l(\beta)})} f(X(t_{l(\beta)}),\alpha(t_{l(\beta)}))|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l(\beta)}\cdots dt_2dt_1\notag
\end{align}
almost surely for all $s<t\in[0,T]$.
In addition, if $|J^{\beta}_{i_0}f(x,i_0)|\leq L (1+|x|)$ for all $x\in\mathbb{R}^d$ and $i_0\in\mathcal{S}$ where $L>0$ is a constant, then
\begin{align*}
E\Big(\sup_{u\in[s,t]}|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2&\Big|\mathcal{F}_T^{\alpha} \Big)\leq C (t-s)^{\eta(\beta)} E\big(\sup_{u\in[s,t]}(1+|X(u)|)^2\big|\mathcal{F}_T^{\alpha} \big).
\end{align*}
Moreover, for $\bar{\beta}\in\mathcal{M}$,
\begin{align*}
E\Big(& \sup_{u\in[s,t]}|I_{\bar{\beta}*\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big) \notag
\\
&\leq C (t-s)^{\eta(\beta)-l(\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(\beta)-1}} E\Big( |I_{\bar{\beta}}[J^\beta_{\alpha(\cdot)} f(X(\cdot),\alpha(\cdot))]_{s,t_{l(\beta
)}}|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l(\beta)}\cdots dt_2dt_1\notag
\end{align*}
almost surely for all $s<t\in[0,T]$.
\end{lemma}
\begin{proof}
The first inequality of the lemma is proved by using induction on $l(\beta)$.
Let $l(\beta)=1$ and $\beta=(0)$, which gives $\eta(\beta)=2$.
On the application of H\"older's inequality, we have
\begin{align*}
E\Big(\sup_{u\in[s,t]}|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2&\Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(\sup_{u\in[s,t]}\Big|\int^u_{s}L^0_{\alpha(t_1)}f(X(t_1),\alpha(t_1))dt_1 \Big|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq & C (t-s)\int_{s}^{t}E\Big(|L^0_{\alpha(t_1)}f(X(t_1),\alpha(t_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\\
=&C (t-s)^{\eta(\beta)-l(\beta)}\int_{s}^{t}E\Big(|J^{\beta}_{\alpha(t_1)}f(X(t_1),\alpha(t_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\end{align*}
almost surely for all $s<t\in[0,T]$ where $J^{\beta}_{i_0}=L_{i_0}^0$, $i_0\in \mathcal{S}$.
Further, let $l(\beta)=1$ and $\beta=(j)$ for some $j\in\{1,2,\ldots,m\}$, which yields $\eta(\beta)=1$.
Then, the Burkholder-Davis-Gundy inequality gives,
\begin{align*}
E\Big(\sup_{u\in[s,t]}|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2& \Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(\sup_{u\in[s,t]}\Big|\int^u_{s}L^j_{\alpha(t_1)}f(X(t_1),\alpha(t_1))dW^j(t_1) \Big|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq & C \int_{s}^{t}E\Big(|L^j_{\alpha(t_1)}f(X(t_1),\alpha(t_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\\
=&C (t-s)^{\eta(\beta)-l(\beta)}\int_{s}^{t}E\Big(|J^{\beta}_{\alpha(t_1)}f(X(t_1),\alpha(t_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\end{align*}
almost surely for all $s<t\in[0,T]$ where $J^{\beta}_{i_0}=L_{i_0}^j$, $i_0\in \mathcal{S}$ and $j\in \{1,\ldots,m\}$.
Thus, the first inequality of the lemma holds for $l(\beta)=1$.
For inductive arguments, we assume that it holds for $l(\beta)=k$ for a fixed $k\in \mathbb{N}$.
Now, consider $\beta=(j_1,\ldots,j_{k+1})$ and $j_{k+1}=0$.
On using H\"older's inequality and the inductive hypothesis, we have,
\begin{align*}
E\Big(&\sup_{u\in[s,t]}|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(\sup_{u\in[s,t]}|\int^{u}_{s}I_{\beta-}[L^{j_{k+1}}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,t_1}dt_1|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq & C(t-s) \int^{t}_{s}E\Big(\sup_{u\in[s,t_1]}|I_{\beta-}[L^{j_{k+1}}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\\
\leq& C (t-s)^{\eta(\beta)-l(\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(\beta)-1}} E\Big(|J^{\beta}_{\alpha(t_{l(\beta)})} f(X(t_{l(\beta)}),\alpha(t_{l(\beta)}))|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l(\beta)}\cdots dt_2dt_1
\end{align*}
almost surely for all $s<t\in[0,T]$.
Moreover, if $j_{k+1}\in\{1,2,\ldots,m\}$, then by using the Burkholder-Davis-Gundy inequality and the inductive hypothesis, we obtain
\begin{align*}
E\Big(&\sup_{u\in[s,t]}|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(\sup_{u\in[s,t]}|\int^{u}_{s}I_{\beta-}[L^{j_{k+1}}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,t_1}dW^{j_{k+1}}(t_1)|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq & C \int^{t}_{s}E\Big(\sup_{u\in[s,t_1]}|I_{\beta-}[L^{j_{k+1}}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,u}|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_1
\\
\leq& C (t-s)^{\eta(\beta)-l(\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(\beta)-1}} E\Big(|J^{\beta}_{\alpha(t_{l(\beta)})} f(X(t_{l(\beta)}),\alpha(t_{l(\beta)}))|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l(\beta)}\cdots dt_2dt_1
\end{align*}
almost surely for all $s<t\in[0,T]$.
This completes the proof of the first inequality of the lemma.
The second inequality of the lemma follows immediately from the first inequality on using $|J^{\beta}_{i_0}f(x,i_0)|\leq L (1+|x|)$ for all $x\in\mathbb{R}^d$ and $i_0\in\mathcal{S}$.
By adapting similar arguments, the third inequality of the lemma also follows.
\end{proof}
For the following lemma, let us define $l_{k}:=\displaystyle\sum_{i=1}^{k}l(\beta_i)$ for $\beta_i\in\mathcal{M}$, $i\in\{1,\ldots,k\}$ and $k\in\mathbb{N}$, and recall the definitions of the operators $J^{\beta}_{i_0}$, $J^{\beta}_{i_0k_0}$ from Subsection \ref{sub:Operators}.
\begin{lemma}
\label{lem:multiple estimate N_mu}
Let $f:\mathbb{R}^d\times\mathcal{S}\mapsto\mathbb{R}$. Then, for all $ \beta\in \mathcal{M}$ such that $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$ where $ \mu_1, \ldots,\mu_r\in\{1,\ldots,\mu\}$, $\beta_1,\ldots,\beta_{r+1} \in \mathcal{M}_1$ and $r=[n](\beta)\in \mathbb{N}$,
\begin{align*}
E\Big(|I_{\beta}[f(X(\cdot),&\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)\leq C\prod_{i=1}^r\mu_i (t_0-s)^{\eta(\beta_1)+\cdots +\eta(\beta_{r+1})-l_{r+1}}
\\
&\int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}
\\
&\int_s^{t_{l_1+1}}\int_{s}^{t_{l_1+2}}\cdots\int_{s}^{t_{l_2}}\sum_{i_2\neq k_2}\int_{s}^{t_{l_2+1}}\mathbbm{1}\{N^{(s,t_{l_2+1}]}=\mu_2\} \cdots
\\
&\int_s^{t_{l_{r-1}+r-1}}\int_{s}^{t_{l_{r-1}+r}}\cdots\int_{s}^{t_{l_r+r-2}}\sum_{i_r\neq k_r}\int_{s}^{t_{l_r+r-1}}\mathbbm{1}\{N^{(s,t_{l_r+r-1}]}=\mu_r\}
\\
&\int_s^{t_{l_{r}+r}}\int_{s}^{t_{l_{r}+r+1}}\cdots\int_{s}^{t_{l_{r+1}+r-1}}
\\
&E\Big(|J^{\beta_{r+1}}_{\alpha(t_{l_{r+1}+r})}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}f(X(t_{l_{r+1}+r}),k_1)-J^{\beta_1}_{i_1}f(X(t_{l_{r+1}+r}),i_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
&dt_{l_{r+1}+r}\cdots dt_{l_{r}+r+2}dt_{l_{r}+r+1}d[M_{i_rk_r}](t_{l_r+r})dt_{l_r+r-1}\cdots dt_{l_{r-1}+r+1}dt_{l_{r-1}+r} \cdots
\\
&d[M_{i_2k_2}](t_{l_2+2})dt_{l_2+1}\cdots dt_{l_1+3}dt_{l_1+2}d[M_{i_1k_1}](t_{l_1+1})dt_{l_1}\cdots dt_2dt_1
\end{align*}
almost surely for all $s<t_0\in[0,T]$ provided $\mu_1\geq \mu_2 \geq \cdots\geq \mu_r$; otherwise, $I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}=0$.
\end{lemma}
\begin{proof}
We shall prove the lemma by induction on $r=[n](\beta)$.
Let $[n](\beta)=1$.
Then, $\beta=\beta_2\star(N_{\mu_1})\star\beta_1$.
If $\beta_1\neq\nu$, on using Lemma \ref{lem:multiple estimate 0,1}, we obtain
\begin{align*}
E\Big(& |I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(|I_{\beta_2\star(N_{\mu_1})\star\beta_1}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
&\leq C(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}E\Big(|I_{\beta_2\star(N_{\mu_1})}[J^{\beta_1}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,t_{l_1}}|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l_1}\cdots dt_2dt_1
\end{align*}
almost surely for any $s<t_0\in [0,T]$.
Notice that if $\beta_1=\nu$, then $J^{\beta_1}_{i_0}$ is the identity operator for all $i_0 \in \mathcal{S}$, $\eta(\beta_1)=l_1=0$ and the iterated integrals $\int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}$ on the right side of the above expression and in the subsequent calculations disappear.
Also, the above inequality becomes an equality.
Further, the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields
\begin{align}
E\Big(&|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)\leq C(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}\notag
\\
& E\Big(\big|\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}I_{\beta_2}[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}d[M_{i_1k_1}](t_{l_1+1})\big|^2\Big|\mathcal{F}_T^{\alpha} \Big) \notag
\\
&dt_{l_1}\cdots dt_2dt_1\notag
\\
\leq& C\mu_1(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}\notag
\\
& E\Big(\big|I_{\beta_2}[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}\big|^2\Big|\mathcal{F}_T^{\alpha} \Big)d[M_{i_1k_1}](t_{l_1+1}) dt_{l_1}\cdots dt_2dt_1\notag
\end{align}
almost surely for all $s<t_0\in[0,T]$ where the last inequality is obtained by using,
\begin{align}
\Big|\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}I_{\beta_2}&[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}d[M_{i_1k_1}](t_{l_1+1})\Big|^2 \notag
\\
=&\Big|\sum_{i=1}^{\displaystyle N^{(s,t_{l_1}]}}I_{\beta_2}[J^{\beta_1}_{\alpha(\tau_i)}f(X(\cdot),\alpha(\tau_i))-J^{\beta_1}_{\alpha(\tau_{i-1})}f(X(\cdot),\alpha(\tau_{i-1}))]_{s,\tau_i} \Big|^2 \notag
\\
\leq & N^{(s,t_{l_1}]} \sum_{i=1}^{\displaystyle N^{(s,t_{l_1}]}}\Big|I_{\beta_2}[J^{\beta_1}_{\alpha(\tau_i)}f(X(\cdot),\alpha(\tau_i))-J^{\beta_1}_{\alpha(\tau_{i-1})}f(X(\cdot),\alpha(\tau_{i-1}))]_{s,\tau_i} \Big|^2, \label{eq:[M]estimate}
\end{align}
where $\tau_1,\ldots, \tau_{\displaystyle N^{(s,t_{l_1}]}}$ are the jump times of the Markov chain $\alpha$ in the interval $(s,t_{l_1}]$.
Moreover, if $\beta_2\neq\nu$, then the application of Lemma \ref{lem:multiple estimate 0,1} gives
\begin{align*}
E\Big(&|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq& C\mu_1(t_0-s)^{\eta(\beta_1)+\eta(\beta_2)-l_2} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}\int_s^{t_{l_1+1}}\int_{s}^{t_{l_1+2}}\cdots\int_{s}^{t_{l_2}}\notag
\\
& E\Big(\big|J^{\beta_2}_{\alpha(t_{l_2+1})}(J_{k_1}^{\beta_1}f(X(t_{l_2+1}),k_1)-J_{i_1}^{\beta_1}f(X(t_{l_2+1}),i_1))\big|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
& dt_{l_2+1}\cdots dt_{l_1+3}dt_{l_1+2}d[M_{i_1k_1}](t_{l_1+1}) dt_{l_1}\cdots dt_2dt_1
\end{align*}
almost surely for all $s<t_0\in[0,T]$.
As before, if $\beta_2=\nu$, then $J^{\beta_2}_{i_0}$ is the identity operator for $i_0 \in \mathcal{S}$, $\eta(\beta_2)=l(\beta_2)=0$, $l_2=l_1$ and the iterated integrals $\int_s^{t_{l_1+1}}\int_{s}^{t_{l_1+2}}\cdots\int_{s}^{t_{l_2}}$ on the right side of the above inequality do not appear.
Hence, the lemma holds for $[n](\beta)=1$.
We make the inductive hypothesis that the lemma holds for $r-1=[n](\beta)-1\in \mathbb{N}$.
Now, let us consider $r=[n](\beta)\in \{2,3,\ldots\}$ and recall $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1=\bar{\beta}\star(N_{\mu_1})\star \beta_1$ where $\bar{\beta}=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2$.
Clearly, $[n](\bar{\beta})=r-1$.
For $\beta_{1}\neq\nu$, Lemma \ref{lem:multiple estimate 0,1} yields,
\begin{align*}
E\Big(&|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)=E\Big(|I_{\bar{\beta} \star(N_{\mu_1})\star \beta_1}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
\leq&C(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}E\Big(|I_{\bar{\beta}\star(N_{\mu_1})}[J^{\beta_1}_{\alpha(\cdot)}f(X(\cdot),\alpha(\cdot))]_{s,t_{l_1}}|^2\Big|\mathcal{F}_T^{\alpha} \Big) dt_{l_1}\cdots dt_2dt_1
\end{align*}
almost surely for any $s<t_0\in[0,T]$.
As before, if $\beta_1=\nu$, then $J^{\beta_1}_{i_0}$ is the identity operator for $i_0 \in \mathcal{S}$, $\eta(\beta_1)=l_1=0$ and the iterated integrals $ \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}$ do not appear on the right side of the above expression and in the corresponding estimates that follow.
Also, the above inequality becomes an equality.
Further, the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields,
\begin{align}
E\Big(&|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)\leq C(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}} E\Big(\big|\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\notag
\\
&\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\} I_{\bar{\beta}}[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}d[M_{i_1k_1}](t_{l_1+1})\big|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_{l_1}\cdots dt_2dt_1 \notag
\end{align}
which on using an estimate similar to the one obtained in \eqref{eq:[M]estimate} gives
\begin{align}
E\Big(&|I_{\beta}[f(X(\cdot),\alpha(\cdot))]_{s,t_0}|^2\Big|\mathcal{F}_T^{\alpha} \Big)\leq C\mu_1(t_0-s)^{\eta(\beta_1)-l_1} \int_s^{t_0}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}} \sum_{i_1\neq k_1}\int_{s}^{t_{l_1}} \mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}\notag
\\
& E\Big(\big|I_{\bar{\beta}}[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}\big|^2\Big|\mathcal{F}_T^{\alpha} \Big)d[M_{i_1k_1}](t_{l_1+1})dt_{l_1}\cdots dt_2dt_1 \notag
\end{align}
almost surely for any $s<t_0\in [0,T]$.
The proof is completed by using the following inductive hypothesis,
\begin{align*}
E&\Big(\big|I_{\bar{\beta}}[J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1)]_{s,t_{l_1+1}}\big|^2\Big|\mathcal{F}_T^{\alpha} \Big)\leq C\prod_{i=2}^r\mu_i (t_0-s)^{\eta(\beta_2)+\cdots +\eta(\beta_{r+1})-(l_{r+1}-l_1)}
\\
& \int_s^{t_{l_1+1}}\int_{s}^{t_{l_1+2}}\cdots\int_{s}^{t_{l_2}}\sum_{i_2\neq k_2}\int_{s}^{t_{l_2+1}}\mathbbm{1}\{N^{(s,t_{l_2+1}]}=\mu_2\}
\\
&\cdots \int_s^{t_{l_{r-1}+r-1}}\int_{s}^{t_{l_{r-1}+r}}\cdots\int_{s}^{t_{l_r+r-2}}\sum_{i_r\neq k_r}\int_{s}^{t_{l_r+r-1}}\mathbbm{1}\{N^{(s,t_{l_r+r-1}]}=\mu_r\}
\\
&\int_s^{t_{l_{r}+r}}\int_{s}^{t_{l_{r}+r+1}}\cdots\int_{s}^{t_{l_{r+1}+r-1}}E\Big(|J^{\beta_{r+1}}_{\alpha(t_{l_{r+1}+r})}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_3}_{i_3k_3}(J^{\beta_2}_{k_2}(J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1))
\\
&-J^{\beta_2}_{i_2}(J^{\beta_1}_{k_1}f(X(\cdot),k_1)-J^{\beta_1}_{i_1}f(X(\cdot),i_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)dt_{l_{r+1}+r}\cdots dt_{l_{r}+r+2}dt_{l_{r}+r+1}
\\
&d[M_{i_rk_r}](t_{l_r+r})dt_{l_r+r-1}\cdots dt_{l_{r-1}+r+1}dt_{l_{r-1}+r} \cdots d[M_{i_1k_1}](t_{l_2+2})dt_{l_2+1}\cdots dt_{l_1+3}dt_{l_1+2}
\end{align*}
almost surely for all $s<t_0\in[0,T]$.
\end{proof}
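The indicators $\mathbbm{1}\{N^{(s,\cdot]}=\mu_i\}$ above confine each term to sample paths with a prescribed number of chain jumps, and the estimate $P\{N^{(s,t]}=N\}\leq(q(t-s))^N$ obtained via Lemma \ref{lem:rateMS} later in this section is what makes such terms small. As a sanity check, one can dominate the chain's jump clock by a Poisson process whose rate $q$ is the maximal jump intensity; the following seeded Monte Carlo sketch uses illustrative values of $q$ and $t-s$ (all numbers below are hypothetical stand-ins).

```python
import random

def poisson_count(rate, length, rng):
    """Number of arrivals of a rate-`rate` Poisson clock on (0, length]."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > length:
            return n
        n += 1

rng = random.Random(42)
q, dt, trials = 1.0, 0.2, 20000            # illustrative q and t - s
counts = [poisson_count(q, dt, rng) for _ in range(trials)]
p2_hat = sum(1 for c in counts if c == 2) / trials  # estimate of P{N = 2}
bound = (q * dt) ** 2                                # (q(t-s))^2
```

The empirical frequency of exactly two jumps should sit well below the bound $(q(t-s))^2$, consistent with the role the indicator terms play in the error analysis.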
The first inequality in the following corollary follows from Lemma \ref{lem:multiple estimate 0,1} and Remark \ref{rem:A b sigma ds dw growth} and the second inequality from Lemma \ref{lem:multiple estimate N_mu} and Remark \ref{rem:A b N growth}.
\begin{corollary}
\label{cor:A_gamma moment}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz} to \ref{ass:A sigma N lipschitz} hold. Then,
\begin{align*}
\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}E(|I_{\beta}[b^k(X(s),\alpha(s))]_{s,t}|^2) + \sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}E(|I_{\beta}[\sigma^{(k,j)}(X(s),\alpha(s))]_{s,t}|^2) \leq& C E(1+|X(s)|)^2,
\\
\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}E(|I_{\beta}[b^k(X(s),\alpha(\cdot))]_{s,t}|^2)+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}E(|I_{\beta}[\sigma^{(k,j)}(X(s),\alpha(\cdot))]_{s,t}|^2) \leq& C E(1+|X(s)|)^2,
\end{align*}
for all $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $s<t\in[0,T]$.
\end{corollary}
\begin{lemma}
\label{lem:schemeMoment}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz} to \ref{ass:A sigma N lipschitz} be satisfied. Then, the $\gamma$-order explicit numerical scheme \eqref{eq:gen.scheme} of SDEwMS \eqref{eq:sdems} satisfies the following
\begin{align*}
E\Big(\sup_{n\in \{0,1,\ldots, n_T\}}|Y(t_n)|^2\Big) \leq C
\end{align*}
where the constant $C>0$ does not depend on $h$.
\end{lemma}
\begin{proof}
From \eqref{eq:gen.scheme}, we have,
\begin{align*}
Y^k&(t_{n})=Y^k_0+\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\Big)ds\notag
\\
&+\sum_{j=1}^m\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\Big)dW^j(s)
\end{align*}
almost surely for all $n\in\{0,1,\ldots,n_T\}$ and $k\in\{1,\ldots,d\}$.
The H\"older's inequality and Burkholder-Davis-Gundy inequality yield,
\begin{align*}
E&\Big( \sup_{n\in \{0,1,\ldots, n'\}}|Y^k(t_n)|^2\Big)\leq CE|Y^k_0|^2
\\
+CE&\Big(\sup_{n\in \{0,1,\ldots, n'\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}I_{\beta}[b^k(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\Big)ds\big|^2\Big)
\\
+CE&\Big(\sup_{n\in \{0,1,\ldots, n'\}}\big|\sum_{j=1}^m\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}
\\
&+\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\Big)dW^j(s)\big|^2\Big)
\\
\leq C&E|Y^k_0|^2+C\sum^{n'-1}_{i=0}\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_{i}}^{t_{i+1}}E\big|I_{\beta}[b^k(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}\big|^2ds
\\
&+C\sum^{n'-1}_{i=0}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_{i}}^{t_{i+1}}E\big|I_{\beta}[b^k(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\big|^2ds
\\
&+C\sum_{j=1}^m\sum^{n'-1}_{i=0}\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}\int_{t_{i}}^{t_{i+1}}E\big|I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(t_{i})) ]_{t_{i},s}\big|^2ds
\\
&+C\sum_{j=1}^m\sum^{n'-1}_{i=0}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}\int_{t_{i}}^{t_{i+1}}E\big|I_{\beta}[\sigma^{(k,j)}(Y(t_{i}),\alpha(\cdot)) ]_{t_{i},s}\big|^2ds
\end{align*}
for any $n'\in\{1,\ldots,n_T\}$ and $k\in\{1,\ldots,d\}$.
Further, on using Corollary \ref{cor:A_gamma moment}, we obtain
\begin{align*}
E\Big( \sup_{n\in \{0,1,\ldots, n'\}}|Y^k(t_n)|^2\Big)\leq CE|Y^k_0|^2+C+Ch\sum^{n'-1}_{i=0}E\Big(\sup_{n\in\{0,1,\ldots,i\}}|Y^k(t_n)|^2\Big)
\end{align*}
for any $n'\in\{1,\ldots,n_T\}$ and $k\in\{1,\ldots,d\}$.
The application of Gronwall's lemma completes the proof.
\end{proof}
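The final step has the form $a_{n'} \leq A + Ch\sum_{i=0}^{n'-1} a_i$ with $a_i := E\big(\sup_{n\in\{0,1,\ldots,i\}}|Y^k(t_n)|^2\big)$, and the discrete Gronwall lemma turns this into the bound $a_{n'}\leq A(1+Ch)^{n'}\leq Ae^{CT}$, uniform in $h$. A small numerical sketch of that implication, with illustrative constants $A$, $C$, $T$:

```python
import math

def saturating_sequence(A, C, h, n_T):
    """Worst case of the Gronwall recursion, a_n = A + C*h*sum_{i<n} a_i,
    which telescopes to a_n = A*(1 + C*h)**n."""
    a = [A]
    for _ in range(n_T):
        a.append(A + C * h * sum(a))
    return a

A, C, T = 1.0, 2.0, 1.0
bounds = {}
for n_T in (10, 100, 1000):
    h = T / n_T
    bounds[n_T] = saturating_sequence(A, C, h, n_T)[-1]
```

Refining the grid increases the worst-case constant only up to $Ae^{CT}$, which is exactly why the moment bound of Lemma \ref{lem:schemeMoment} does not depend on $h$.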
\subsection{Strong rate of convergence}
\label{Rate of scheme}
In this subsection, we provide the proof of Theorem \ref{thm:main}. For this, we first establish the following lemmas.
\begin{lemma}
\label{lem:multiple estimate remainder bar_N}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz} be satisfied.
Then, for all $\beta\in\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1$, $\bar{\beta}\in\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1$ such that their first component is $\bar{N}_{2\gamma}$, the following holds,
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq& C (t-s)^{\eta(\beta)}
\\
E\Big(|I_{\bar{\beta}}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq& C (t-s)^{\eta(\bar{\beta})}
\end{align*}
for all $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$, $s<t\in[0,T]$ provided $0<(t-s)<1/(2q)$.
\end{lemma}
\begin{proof}
Let $l(\beta)=1$, \textit{i.e.}, $\beta=(\bar{N}_{2\gamma})$.
On using the definition of multiple integrals from Subsection \ref{sub:multiple integral} and Remark \ref{rem:A b sigma ds dw growth}, we obtain
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 &\Big)=E\Big(|I_{(\bar{N}_{2\gamma})}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)
\\
=&E\Big(\big|\sum_{i_0\neq k_0}\int_s^t\mathbbm{1}\{N^{(s,t]}>{2\gamma}\}(b^k(X(u),k_0)-b^k(X(u),i_0))d[M_{i_0k_0}](u)\big|^2\Big)
\\
\leq& CE\Big(\mathbbm{1}\{N^{(s,t]}>{2\gamma}\}(N^{(s,t]})^2E\Big(\sup_{u\in[s,t]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)
\end{align*}
which further implies due to Theorem \ref{thm:true moment} and Lemma \ref{lem:rateMS},
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq&C \sum_{N={2\gamma}+1}^{\infty} N^2P\{N^{(s,t]}=N\}\leq C \sum_{N={2\gamma}+1}^{\infty}N^2((t-s)q)^N
\\
\leq& C (t-s)^{{2\gamma}+1}\sum_{N=0}^{\infty}(N+{2\gamma}+1)^2((t-s)q)^N \leq C (t-s)^{\eta(\beta)}
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $s<t\in[0,T]$ where the series $\displaystyle\sum_{N=0}^{\infty}(N+{2\gamma}+1)^2(q(t-s))^N$ is convergent for $0<(t-s)<1/(2q)$.
Notice that for $l(\beta)>1$, if $j_2,\ldots,j_l\in\{N_1,\ldots,N_{2\gamma}\}$, then $I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}=0$ and the first inequality holds trivially.
Thus, Lemma \ref{lem:multiple estimate 0,1} gives
\begin{align*}
E\Big(&|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)=E\Big(|I_{(\bar{N}_{2\gamma})\star-\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)
\\
\leq& C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots \int_{s}^{t_{l(-\beta)-1}} E\big( \big|I_{(\bar{N}_{2\gamma})}[J^{-\beta}_{\alpha(\cdot)}b^k(X(\cdot),\alpha(\cdot))]_{s,t_{l(-\beta
)}}\big|^2\big) dt_{l(-\beta)}\cdots dt_2dt_1
\end{align*}
which by the definition of multiple integrals from Subsection \ref{sub:multiple integral} yields,
\begin{align*}
E\Big(&|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big) \leq C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots \int_{s}^{t_{l(-\beta)-1}}
\\
& E\Big(\big|\sum_{i_0\neq k_0}\int_{s}^{t_{l(-\beta)}}\mathbbm{1}\{N^{(s,t_{l(-\beta)}]}>{2\gamma} \} (J^{-\beta}_{k_0}b^k(X(u),k_0)-J^{-\beta}_{i_0}b^k(X(u),i_0))d[M_{i_0k_0}](u)\big|^2\Big)
\\
&dt_{l(-\beta)}\cdots dt_2dt_1
\end{align*}
for all $s<t\in[0,T]$ and $k\in\{1,\ldots,d\}$.
Notice that, $-\beta\in\mathcal{A}^b_\gamma\setminus\tilde{\mathcal{A}}^b_\gamma$.
Thus, by the application of Remark \ref{rem:A b sigma ds dw growth} and Theorem \ref{thm:true moment}, we have
\begin{align*}
E\Big(|I_{\beta}[&b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots \int_{s}^{t_{l(-\beta)-1}}
\\
&E\Big(\big(N^{(s,t_{l(-\beta)}]}\big)^2\mathbbm{1}\{N^{(s,t_{l(-\beta)}]}>{2\gamma} \} E\Big(\sup_{u\in[s,t]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha} \Big)\Big)dt_{l(-\beta)}\cdots dt_2dt_1
\\
\leq & C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(-\beta)-1}}\sum_{N={2\gamma}+1}^{\infty} N^2P\{N^{(s,t_{l(-\beta)}]}=N\}dt_{l(-\beta)}\cdots dt_2dt_1
\end{align*}
and due to Lemma \ref{lem:rateMS},
\begin{align*}
E\Big(|I_{\beta}&[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(-\beta)-1}}\sum_{N={2\gamma}+1}^{\infty} N^2((t_{l(-\beta)}-s)q)^N
\\
&\qquad dt_{l(-\beta)}\cdots dt_2dt_1
\\
\leq& C(t-s)^{\eta(-\beta)-l(-\beta)} \int_{s}^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l(-\beta)-1}}(t_{l(-\beta)}-s)^{{2\gamma}+1}\sum_{N=0}^{\infty}(N+{2\gamma}+1)^2(q(t_{l(-\beta)}-s))^N
\\
&\qquad dt_{l(-\beta)}\cdots dt_2dt_1\leq C(t-s)^{\eta(\beta)}
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $s<t\in[0,T]$ where the last inequality is obtained by using the finiteness of the series $\displaystyle\sum_{N=0}^{\infty}(N+{2\gamma}+1)^2(q(t_{l(-\beta)}-s))^N$
for $0<(t-s)<1/(2q)$.
This completes the proof of the first inequality of the lemma.
One can prove the second inequality of the lemma by adapting similar arguments.
\end{proof}
\begin{lemma}\label{lem:multiple estimate remainder N_mu}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz}, \ref{ass:reminder b N growth} and \ref{ass:reminder sigma N growth} be satisfied.
Then, for all $\beta\in\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1$, $\bar{\beta}\in\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1$ such that none of the components of $\beta,\bar{\beta}$ is equal to $\bar{N}_{2\gamma}$ and at least one of their components is equal to one of $N_1,\ldots,N_{2\gamma}$, we have,
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq& C (t-s)^{\eta(\beta)}
\\
E\Big(|I_{\bar{\beta}}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq& C (t-s)^{\eta(\bar{\beta})}
\end{align*}
for all $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $s<t\in[0,T]$ provided $0<(t-s)<1/(2q)$.
\end{lemma}
\begin{proof}
Clearly, we can write $\beta=\beta_{r+1}\star(N_{\mu_{r}})\star \beta_{r}\star \cdots\star(N_{\mu_2})\star \beta_2\star(N_{\mu_1})\star \beta_1$ where $\mu_1,\ldots,\mu_r\in\{1,\ldots,2\gamma\}$, $\beta_1,\ldots,\beta_{r+1}\in\mathcal{M}_1$ and $r\in\{1,\ldots, 2\gamma-2\}$.
Further, if $\mu_1> \mu_2>\cdots>\mu_r$ is not satisfied, then $I_\beta[b^k(X(\cdot),\alpha(\cdot))]_{s,t}=0$.
On using Lemma \ref{lem:multiple estimate N_mu} and Remark \ref{rem:reminder b N growth}, we obtain
\begin{align*}
E\Big(|I_{\beta}&[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq CE\Big(\prod_{i=1}^r\mu_i (t-s)^{\eta(\beta_1)+\cdots +\eta(\beta_{r+1})-l_{r+1}}
\\
&\int_s^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}\sum_{i_1\neq k_1}\int_{s}^{t_{l_1}}\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}
\\
&\int_s^{t_{l_1+1}}\int_{s}^{t_{l_1+2}}\cdots\int_{s}^{t_{l_2}}\sum_{i_2\neq k_2}\int_{s}^{t_{l_2+1}}\mathbbm{1}\{N^{(s,t_{l_2+1}]}=\mu_2\} \cdots
\\
&\int_s^{t_{l_{r-1}+r-1}}\int_{s}^{t_{l_{r-1}+r}}\cdots\int_{s}^{t_{l_r+r-2}}\sum_{i_r\neq k_r}\int_{s}^{t_{l_r+r-1}}\mathbbm{1}\{N^{(s,t_{l_r+r-1}]}=\mu_r\}
\\
&\int_s^{t_{l_{r}+r}}\int_{s}^{t_{l_{r}+r+1}}\cdots\int_{s}^{t_{l_{r+1}+r-1}}
\\
&E\Big(|J^{\beta_{r+1}}_{\alpha(t_{l_{r+1}+r})}J^{\beta_r}_{i_rk_r}\cdots J^{\beta_2}_{i_2k_2}(J^{\beta_1}_{k_1}b^k(X(t_{l_{r+1}+r}),k_1)-J^{\beta_1}_{i_1}b^k(X(t_{l_{r+1}+r}),i_1))|^2\Big|\mathcal{F}_T^{\alpha} \Big)
\\
&dt_{l_{r+1}+r}\cdots dt_{l_{r}+r+2}dt_{l_{r}+r+1}d[M_{i_rk_r}](t_{l_r+r})dt_{l_r+r-1}\cdots dt_{l_{r-1}+r+1}dt_{l_{r-1}+r}
\\
&d[M_{i_2k_2}](t_{l_2+2})dt_{l_2+1}\cdots dt_{l_1+3}dt_{l_1+2}d[M_{i_1k_1}](t_{l_1+1})dt_{l_1}\cdots dt_2dt_1\Big)
\\
\leq& C(t-s)^{n(\beta)+2\bar{n}(\beta)-l(\beta_1)}\int_s^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}E\Big(\mathbbm{1}\{N^{(s,t_{l_1}]}=\mu_1\}E\Big(\sup_{u\in[s,t]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha} \Big)\Big)
\\
&dt_{l_1}\cdots dt_2dt_1
\end{align*}
which, by the application of Theorem \ref{thm:true moment} and Lemma \ref{lem:rateMS}, yields
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)
\leq& C(t-s)^{n(\beta)+2\bar{n}(\beta)-l(\beta_1)}\int_s^{t}\int_{s}^{t_{1}}\cdots\int_{s}^{t_{l_1-1}}
\\
&\qquad P\{N^{(s,t_{l_1}]}=\mu_1\}dt_{l_1}\cdots dt_2dt_1
\leq C(t-s)^{\eta(\beta)}
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $s<t\in[0,T]$ where $0<(t-s)<1/(2q)$.
By adapting similar arguments, one obtains the second inequality of the lemma.
\end{proof}
Using Lemma \ref{lem:multiple estimate 0,1}, Assumption \ref{ass:reminder b sigma ds dw growth} and Theorem \ref{thm:true moment}, the following corollary is obtained.
\begin{corollary}\label{cor:multiple estimate remainder dsdW}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} and \ref{ass:reminder b sigma ds dw growth} hold. Then, for all $\beta\in(\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1)\cap\mathcal{M}_1$, $\bar{\beta}\in(\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1)\cap\mathcal{M}_1$, $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $s<t\in[0,T]$,
\begin{align*}
E\Big(|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)+E\Big(|I_{\bar{\beta}}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{s,t}|^2 \Big)\leq& C (t-s)^{\eta(\beta)}.
\end{align*}
\end{corollary}
\begin{lemma}\label{lem:B_A_b_N_mu}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} hold and Assumption \ref{ass:reminder b N growth} be satisfied for all $\beta\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1$ such that $\eta(\beta)=2\gamma-1$, none of the components of $\beta$ is equal to $1,\ldots,m$ and at least one of its components is equal to one of $N_1,\ldots,N_{2\gamma-1}$.
Then, for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$,
\begin{align*}
E\Big(&\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch^{2\gamma}.
\end{align*}
\end{lemma}
\begin{proof}
Clearly, we can write $\beta=\tilde{\beta}\star(N_{\mu_1})\star\bar{\beta}$ where $\tilde{\beta}\in\mathcal{M}$, $\bar{\beta}\in\mathcal{M}_1$ and $\mu_1\in\{1,\ldots,2\gamma-1\}$.
Let $\bar{\beta}\neq\nu$.
Then, on using the definition of multiple integrals from Subsection \ref{sub:multiple integral} and noticing that $d[M_{i_0k_0}](u)$ is a positive measure along with $dM_{i_0k_0}(u)=d[M_{i_0k_0}](u)-d\langle M_{i_0k_0}\rangle(u)$, one obtains,
\begin{align*}
E\Big(&\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
=&E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}d[M_{i_0k_0}](u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1 ds \big|^2\Big)
\\
\leq&CE\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}dM_{i_0k_0}(u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1ds \big|^2\Big)
\\
&+CE\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}d\langle M_{i_0k_0}\rangle(u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1ds \big|^2\Big)
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
If $\bar{\beta}=\nu$, then $J^{\bar{\beta}}_{i_0}$ is the identity operator for $i_0 \in \mathcal{S}$, $s_0=s$ and the iterated integrals $ \int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}$ do not appear on the right side of the above equation or in the forthcoming estimates.
Notice that the first term on the right side of the above inequality is a martingale.
On using Doob's martingale inequality, the definition of multiple integrals from Subsection \ref{sub:multiple integral} and H\"older's inequality, we obtain
\begin{align}
E&\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\notag
\\
\leq&C \sum^{n_T-1}_{i=0}E\big|\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}\notag
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}dM_{i_0k_0}(u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1 ds \big|^2\notag
\\
+&Cn_T \sum^{n_T-1}_{i=0}E\big|\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}\notag
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}d\langle M_{i_0k_0}\rangle(u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1ds \big|^2\notag
\\
\leq&C \sum^{n_T-1}_{i=0}E\big|\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}\notag
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}d[M_{i_0k_0}](u)ds_{l(\bar{\beta})}\cdots ds_2 ds_1 ds \big|^2\notag
\\
+&Cn_T \sum^{n_T-1}_{i=0}E\big|\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}q_{i_0k_0}\mathbbm{1}\{\alpha(u-)=i_0\}\notag
\\
&\qquad I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}duds_{l(\bar{\beta})}\cdots ds_2 ds_1ds \big|^2\notag
\\
\leq&Ch\sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds\notag
\\
&+Ch^{l(\bar{\beta})+1} \sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\sum_{i_0\neq k_0}\int_{t_{i}}^{s_{l(\bar{\beta})}}E\Big(\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}\notag
\\
&\qquad E\Big(|I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)duds_{l(\bar{\beta})}\cdots ds_2 ds_1ds\label{eq:angle M}
\end{align}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
Now, for different possibilities of $\tilde{\beta}\in\mathcal{M}$, we estimate $E\Big(|I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}|^2\Big|\mathcal{F}_T^{\alpha}\Big)$ for all $u\in[t_i,t_{i+1}]$, $i\in\{0,\ldots,n_T-1\}$, $n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
If $\tilde{\beta}=\nu$, then by Remark \ref{rem:reminder b N growth} and Theorem \ref{thm:true moment}, we have
\begin{align*}
E\Big(|I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\leq& CE\Big((1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\leq C
\end{align*}
for all $u\in[t_i,t_{i+1}]$, $i\in\{0,\ldots,n_T-1\}$, and $k\in\{1,\ldots,d\}$.
Now, if $\tilde{\beta}\in\mathcal{M}_1\setminus\{\nu\}$, then Remark \ref{rem:reminder b N growth}, Lemma \ref{lem:multiple estimate 0,1} and Theorem \ref{thm:true moment} yield,
\begin{align*}
E\Big(|I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)-&J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}|^2\Big|\mathcal{F}_T^{\alpha}\Big)
\\
\leq& C(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}E\Big(\sup_{v\in[0,T]}(1+|X(v)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)
\leq C(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}
\end{align*}
for all $u\in[t_i,t_{i+1}]$, $i\in\{0,\ldots,n_T-1\}$, $n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
Further, if $\tilde{\beta}\in\mathcal{M}_2\cup\mathcal{M}_3$, then on using Remark \ref{rem:reminder b N growth}, Lemma \ref{lem:multiple estimate N_mu} and Theorem \ref{thm:true moment}, we obtain
\begin{align*}
E\Big(|I_{\tilde{\beta}}[J^{\bar{\beta}}_{k_0}b^k(X(\cdot),k_0)&-J^{\bar{\beta}}_{i_0}b^k(X(\cdot),i_0)]_{t_{i},u}|^2\Big|\mathcal{F}_T^{\alpha}\Big)
\\
\leq& C(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}E\Big(\sup_{v\in[0,T]}(1+|X(v)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\leq C(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}
\end{align*}
for all $u\in[t_i,t_{i+1}]$, $i\in\{0,\ldots,n_T-1\}$, $n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
Hence, \eqref{eq:angle M} becomes
\begin{align*}
E&\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch\sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds\notag
\\
&+Ch^{l(\bar{\beta})+1} \sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\int_{t_{i}}^{s_{l(\bar{\beta})}}E(\mathbbm{1}\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\})(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}
\\
&\qquad duds_{l(\bar{\beta})}\cdots ds_2 ds_1ds
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
By the application of Lemmas \ref{lem:multiple estimate remainder N_mu} and \ref{lem:rateMS}, we have
\begin{align*}
E&\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch^{2\gamma}
\\
&+Ch^{l(\bar{\beta})+1} \sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\int_{t_{i}}^s\int_{t_{i}}^{s_1}\cdots\int_{t_{i}}^{s_{l(\bar{\beta})-1}}\int_{t_{i}}^{s_{l(\bar{\beta})}}P\{N^{(t_{i},s_{l(\bar{\beta})}]}=\mu_1\}(u-t_i)^{n(\tilde{\beta})+2\bar{n}(\tilde{\beta})}
\\
&\qquad duds_{l(\bar{\beta})}\cdots ds_2 ds_1ds
\\
\leq& Ch^{2\gamma}
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
\end{proof}
\begin{lemma}\label{lem:B_A_b estimate}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz}, \ref{ass:A b sigma ds dw lipschitz}, \ref{ass:reminder b sigma ds dw growth} and \ref{ass:reminder b N growth} hold.
Then,
\begin{align*}
\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1 }E\Big(&\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch^{2\gamma},
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
\end{lemma}
\begin{proof}
We write $\mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1=\mathcal{C}_1\cup\mathcal{C}_{21}\cup\mathcal{C}_{22}\cup\mathcal{C}_{31}\cup\mathcal{C}_{32}\cup\mathcal{C}_{33}$ where
\begin{itemize}
\item[] $\mathcal{C}_1:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)=2\gamma-1,j_i\notin\{1,\ldots,m\}\ \forall i\in\{1,\ldots,l\},j_k\in\{N_1,\ldots,N_{2\gamma}\}\mbox{ for some }k\in\{1,\ldots,l\}\}$,
\item[] $\mathcal{C}_{21}:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)=2\gamma-1,j_i\in\{1,\ldots,m\}\mbox{ for some }i\in\{1,\ldots,l\},j_i\notin\{N_1,\ldots,N_{2\gamma}\}\ \forall i\in\{1,\ldots,l\}\}$,
\item[] $\mathcal{C}_{22}:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)=2\gamma-1,j_i\in\{1,\ldots,m\}\mbox{ for some }i\in\{1,\ldots,l\},j_k\in\{N_1,\ldots,N_{2\gamma}\} \mbox{ for some } k\in\{1,\ldots,l\}\}$,
\item[] $\mathcal{C}_{31}:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)\geq 2\gamma,j_1=\bar{N}_{2\gamma}\}$,
\item[] $\mathcal{C}_{32}:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)\geq 2\gamma,j_i\in\{0,1,\ldots,m\}\ \forall i\in\{1,\ldots,l\}\}$,
\item[] $\mathcal{C}_{33}:=\{\beta=(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1:\eta(\beta)\geq 2\gamma,j_1\neq\bar{N}_{2\gamma},j_i\in\{N_1,\ldots,N_{2\gamma}\} \mbox{ for some } i\in\{1,\ldots,l\}\}$.
\end{itemize}
We can write
\begin{align*}
\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1 }E\Big(&\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
=&\sum_{\beta\in\mathcal{C}_{1} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
&+\sum_{\beta\in\mathcal{C}_{21} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
&+\sum_{\beta\in\mathcal{C}_{22} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
&+ \sum_{\beta\in\mathcal{C}_{31} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
&+\sum_{\beta\in\mathcal{C}_{32} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\\
&+ \sum_{\beta\in\mathcal{C}_{33} }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
Notice that $I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}$ is a martingale for every $\beta\in\mathcal{C}_{21}\cup\mathcal{C}_{22}$.
Then, Doob's martingale inequality, H\"older's inequality and Lemma \ref{lem:B_A_b_N_mu} yield,
\begin{align*}
&\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1 }E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch^{2\gamma}
\\
&+C h\sum^{n_T-1}_{i=0}\Big(\sum_{\beta\in\mathcal{C}_{21} }\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds+\sum_{\beta\in\mathcal{C}_{22} }\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds\Big)
\\
&+C \sum^{n_T-1}_{i=0}\Big(\sum_{\beta\in\mathcal{C}_{31} }\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds+ \sum_{\beta\in\mathcal{C}_{32} }\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds\Big)
\\
&+C \sum_{\beta\in\mathcal{C}_{33} }\sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
Hence, on using Lemmas \ref{lem:multiple estimate remainder bar_N}, \ref{lem:multiple estimate remainder N_mu} and Corollary \ref{cor:multiple estimate remainder dsdW}, we obtain
\begin{align*}
\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^b)\setminus\tilde{\mathcal{B}}^b_1 }&E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}ds \big|^2\Big)\leq Ch^{2\gamma}
\\
&+Ch\sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{C}_{21} }(s-t_{i})^{\eta(\beta)}ds+\sum_{\beta\in\mathcal{C}_{22} }(s-t_{i})^{\eta(\beta)}ds\Big)
\\
&+C\sum^{n_T-1}_{i=0}\int_{t_{i}}^{t_{i+1}}\Big(\sum_{\beta\in\mathcal{C}_{31} }(s-t_{i})^{\eta(\beta)}ds+\sum_{\beta\in\mathcal{C}_{32} }(s-t_{i})^{\eta(\beta)}ds+\sum_{\beta\in\mathcal{C}_{33} }(s-t_{i})^{\eta(\beta)}ds\Big)
\\
\leq& Ch^{2\gamma}
\end{align*}
for all $ n_T\in\mathbb{N}$ and $k\in\{1,\ldots,d\}$.
\end{proof}
\begin{lemma}\label{lem:B_A_sigma estimate}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz}, \ref{ass:A b sigma ds dw lipschitz}, \ref{ass:reminder b sigma ds dw growth} and \ref{ass:reminder sigma N growth} be satisfied. Then
\begin{align*}
\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1}E\Big(&\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}dW^j(s) \big|^2\Big)\leq Ch^{2\gamma}
\end{align*}
for all $ n_T\in\mathbb{N}$, $k\in\{1,\ldots,d\}$ and $j\in\{1,\ldots,m\}$.
\end{lemma}
\begin{proof}
We write $\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1=\mathcal{D}_{1}\cup\mathcal{D}_{2}\cup\mathcal{D}_{3}$ where
\begin{itemize}
\item[] $\mathcal{D}_{1}:=\{(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1:j_1=\bar{N}_{2\gamma}\}$,
\item[] $\mathcal{D}_{2}:=\{(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1:j_i\in\{0,1,\ldots,m\}\ \forall i\in\{1,\ldots,l\}\}$,
\item[] $\mathcal{D}_{3}:=\{(j_1,\ldots,j_l)\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1:j_1\neq\bar{N}_{2\gamma},j_i\in\{N_1,\ldots,N_{2\gamma}\} \mbox{ for some } i\in\{1,\ldots,l\}\}$.
\end{itemize}
On using Doob's martingale inequality and the Burkholder--Davis--Gundy inequality, we have
\begin{align*}
&\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1}E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}dW^j(s) \big|^2\Big)
\\
\leq& C\sum^{n_T-1}_{i=0}\Big(\sum_{\beta\in\mathcal{D}_{1}}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds+\sum_{\beta\in\mathcal{D}_{2}}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds
\\
&+\sum_{\beta\in\mathcal{D}_{3}}\int_{t_{i}}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}|^2ds\Big)
\end{align*}
for all $ n_T\in\mathbb{N}$, $k\in\{1,\ldots,d\}$ and $j\in\{1,\ldots,m\}$.
The application of Lemmas \ref{lem:multiple estimate remainder bar_N}, \ref{lem:multiple estimate remainder N_mu} and Corollary \ref{cor:multiple estimate remainder dsdW} yields,
\begin{align*}
\sum_{\beta\in \mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus\tilde{\mathcal{B}}^\sigma_1}&E\Big(\sup_{n\in\{0,1,\ldots, n_T\}}\big|\sum^{n-1}_{i=0}\int_{t_{i}}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}dW^j(s) \big|^2\Big)
\\
\leq C&\sum^{n_T-1}_{i=0}\Big(\sum_{\beta\in\mathcal{D}_{1}}\int_{t_{i}}^{t_{i+1}}(s-t_{i})^{\eta(\beta)}ds+ \sum_{\beta\in\mathcal{D}_{2}}\int_{t_{i}}^{t_{i+1}}(s-t_{i})^{\eta(\beta)}ds+ \sum_{\beta\in\mathcal{D}_{3}}\int_{t_{i}}^{t_{i+1}}(s-t_{i})^{\eta(\beta)}ds\Big)
\\
\leq C&h^{2\gamma}
\end{align*}
for all $n_T\in\mathbb{N}$, $k\in\{1,\ldots,d\}$ and $j\in\{1,\ldots,m\}$.
\end{proof}
\begin{lemma}\label{lem:convergence B_1 estimate}
Let Assumptions \ref{ass:initial data}, \ref{ass: b sigma lipschitz} and \ref{ass:A b sigma ds dw lipschitz} hold. Then,
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^b}E\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)\leq&Ch^{2\gamma},
\\
\sum_{\beta\in\mathcal{B}_1^\sigma}E\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)\leq& Ch^{2\gamma}
\end{align*}
for all $k\in\{1,\ldots,d\}$, $j\in\{1,\ldots,m\}$ and $n_T\in\mathbb{N}$.
\end{lemma}
\begin{proof}
We write $\mathcal{B}_1^b=\mathcal{B}_{11}^b\cup\mathcal{B}_{12}^b$ where
\begin{itemize}
\item[] $\mathcal{B}_{11}^b:=\{\beta\in\mathcal{B}_1^b:$ none of the components of $\beta$ is equal to $1,\ldots,m\} $ and
\item[] $\mathcal{B}_{12}^b:=\{\beta\in\mathcal{B}_1^b:$ at least one of the components of $\beta$ is equal to one of $1,\ldots,m\} $.
\end{itemize}
Notice that $I_{\beta}[b^k(X(\cdot),\alpha(\cdot)) ]_{t_{i},s}$ is a martingale for every $\beta\in\mathcal{B}_{12}^b$.
Thus, Doob's martingale inequality and H\"older's inequality yield,
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^b}E\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|&\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
=&\sum_{\beta\in\mathcal{B}_{11}^b}E\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
&+\sum_{\beta\in\mathcal{B}_{12}^b}E\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
\leq&C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2ds
\\
&+Ch\sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2ds
\end{align*}
which further implies,
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^b}E\Big(&\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
\leq&C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}|I_{\beta}[b^k(X(t_i),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big)ds
\\
&+hC \sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+hC \sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}|I_{\beta}[b^k(X(t_i),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big)ds
\\
\leq&C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\\
&+C \sum_{\beta\in\mathcal{B}_{11}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\\
&+Ch \sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+Ch \sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[b^k(X(t_i),\alpha(\cdot))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\\
&+Ch \sum_{\beta\in\mathcal{B}_{12}^b}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[b^k(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $n_T\in\mathbb{N}$.
Moreover, Assumptions \ref{ass: b sigma lipschitz}, \ref{ass:A b sigma ds dw lipschitz}, Remark \ref{rem:A b sigma ds dw growth} and Lemma \ref{lem:multiple estimate 0,1} yield,
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^b}E\Big(&\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
\leq& C\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{11}^b}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-1}E\Big(\sup_{u\in[t_i,s]}|X(u)-X(t_i)|^2\Big)ds
\\
&+C\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{11}^b}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-1}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(\sup_{u\in[t_i,s]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\\
&+Ch\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{12}^b}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-2}E\Big(\sup_{u\in[t_i,s]}|X(u)-X(t_i)|^2\Big)ds
\\
&+Ch\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{12}^b}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-2}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(\sup_{u\in[t_i,s]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $n_T\in\mathbb{N}$.
On using Theorem \ref{thm:true moment} and Lemma \ref{lem:rateMS}, we obtain
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^b}E\Big(&\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)
\\
\leq& Ch^{2\gamma}+Ch^{2\gamma-1}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}P\{N^{(t_i,s]}\geq 1\}ds\leq Ch^{2\gamma}
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $n_T\in\mathbb{N}$.
Now, we prove the second inequality of the lemma.
Due to Doob's martingale inequality and It\^o's isometry, we have
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^\sigma}E&\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)
\\
\leq&C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}|^2ds
\\
\leq&C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}|I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big)ds
\\
\leq&C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(\cdot))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\\
&+C\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{i=0}^{n_T-1}\int_{t_i}^{t_{i+1}}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(|I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}|^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\end{align*}
which, by using Assumptions \ref{ass: b sigma lipschitz}, \ref{ass:A b sigma ds dw lipschitz}, Remark \ref{rem:A b sigma ds dw growth} and Lemma \ref{lem:multiple estimate 0,1}, yields
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^\sigma}E&\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)
\\
\leq&C\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{1}^\sigma}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-1}E\Big(\sup_{u\in[t_i,s]}|X(u)-X(t_i)|^2\Big)ds
\\
&+C\sum_{i=0}^{n_T-1}\sum_{\beta\in\mathcal{B}_{1}^\sigma}\int_{t_i}^{t_{i+1}}(s-t_i)^{2\gamma-1}E\Big(\mathbbm{1}\{N^{(t_i,s]}\geq 1\}E\Big(\sup_{u\in[t_i,s]}(1+|X(u)|)^2\Big|\mathcal{F}_T^{\alpha}\Big)\Big)ds
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $j\in\{1,\ldots,m\}$ and $n_T\in\mathbb{N}$.
Hence, Theorem \ref{thm:true moment} and Lemma \ref{lem:rateMS}
give
\begin{align*}
\sum_{\beta\in\mathcal{B}_1^\sigma}E&\Big(\sup_{n\in\{1,\ldots,n_T\}}\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)\leq Ch^{2\gamma}
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $j\in\{1,\ldots,m\}$ and $n_T\in\mathbb{N}$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:main}}]
Due to \eqref{eq:gen. scheme derivation} and \eqref{eq:gen.scheme},
\begin{align*}
&X^k(t_{n})-Y^k(t_{n})=X^k_0-Y^k_0+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(t_i),\alpha(t_i))-b^k(Y(t_i),\alpha(t_i))]_{t_i,s}ds\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(t_i),\alpha(\cdot))-b^k(Y(t_i),\alpha(\cdot))]_{t_i,s}ds \notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(t_i))-\sigma^{(k,j)}(Y(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(\cdot))-\sigma^{(k,j)}(Y(t_i),\alpha(\cdot))]_{t_i,s}dW^j(s)\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^b)\setminus \tilde{\mathcal{B}}_1^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_i,s}ds\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}_1^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus \tilde{\mathcal{B}}_1^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_i,s}dW^j(s)\notag
\\
&+\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)
\end{align*}
which implies
\begin{align}
E&\Big(\sup_{n\in\{1,\ldots,n'\}}|X^k(t_n)-Y^k(t_n)|^2\Big)\leq CE|X^k_0-Y^k_0|^2\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(t_i),\alpha(t_i))-b^k(Y(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(t_i),\alpha(\cdot))-b^k(Y(t_i),\alpha(\cdot))]_{t_i,s}ds\Big|^2\Big)\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(t_i))-\sigma^{(k,j)}(Y(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)\nonumber
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(\cdot))-\sigma^{(k,j)}(Y(t_i),\alpha(\cdot))]_{t_i,s}dW^j(s)\Big|^2\Big) \notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big| \sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^b)\setminus \tilde{\mathcal{B}}_1^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))]_{t_i,s}ds \Big|^2\Big)\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}_1^b}\int_{t_i}^{t_{i+1}}I_{\beta}[b^k(X(\cdot),\alpha(\cdot))-b^k(X(t_i),\alpha(t_i))]_{t_i,s}ds\Big|^2\Big)\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big| \sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}(\mathcal{A}_\gamma^\sigma)\setminus \tilde{\mathcal{B}}_1^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))]_{t_i,s}dW^j(s) \Big|^2\Big)\notag
\\
+&CE\Big(\sup_{n\in\{1,\ldots,n'\}}\Big|\sum_{i=0}^{n-1}\sum_{\beta\in\mathcal{B}_1^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}I_{\beta}[\sigma^{(k,j)}(X(\cdot),\alpha(\cdot))-\sigma^{(k,j)}(X(t_i),\alpha(t_i))]_{t_i,s}dW^j(s)\Big|^2\Big)\notag
\end{align}
for all $k\in\{1,\ldots,d\}$ and $n'\in\{1,\ldots,n_T\}$.
On using Doob's martingale inequality, the Burkholder--Davis--Gundy inequality, H\"older's inequality, Lemmas \ref{lem:B_A_b estimate}, \ref{lem:B_A_sigma estimate}, \ref{lem:convergence B_1 estimate} and Assumption \ref{ass:initial data}, we obtain
\begin{align*}
E\Big(\sup_{n\in\{1,\ldots,n'\}}&|X^k(t_n)-Y^k(t_n)|^2\Big)\leq Ch^{2\gamma}
\\
&+C\sum_{i=0}^{n'-1}\sum_{\beta\in\mathcal{A}_{\gamma}^b\setminus\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(t_i),\alpha(t_i))-b^k(Y(t_i),\alpha(t_i))]_{t_i,s}|^2ds
\\
&+C\sum_{i=0}^{n'-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^b}\int_{t_i}^{t_{i+1}}E|I_{\beta}[b^k(X(t_i),\alpha(\cdot))-b^k(Y(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\\
&+C\sum_{i=0}^{n'-1}\sum_{\beta\in\mathcal{A}_{\gamma}^\sigma\setminus\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(t_i))-\sigma^{(k,j)}(Y(t_i),\alpha(t_i))]_{t_i,s}|^2ds
\\
&+C\sum_{i=0}^{n'-1}\sum_{\beta\in\tilde{\mathcal{A}}_{\gamma}^\sigma}\sum_{j=1}^m\int_{t_i}^{t_{i+1}}E|I_{\beta}[\sigma^{(k,j)}(X(t_i),\alpha(\cdot))-\sigma^{(k,j)}(Y(t_i),\alpha(\cdot))]_{t_i,s}|^2ds
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $n'\in\{1,\ldots,n_T\}$.
By the application of Assumptions \ref{ass: b sigma lipschitz}, \ref{ass:A b sigma ds dw lipschitz}, Remark \ref{rem:A b sigma N lipschitz} and Lemmas \ref{lem:multiple estimate 0,1}, \ref{lem:multiple estimate N_mu}, we have
\begin{align*}
E\Big(\sup_{n\in\{1,\ldots,n'\}}|X^k(t_n)&-Y^k(t_n)|^2\Big)\leq Ch^{2\gamma}
\\
&+ C\sum_{i=0}^{n'-1}\sum_{\beta\in\mathcal{A}_{\gamma}^b\cup\mathcal{A}_{\gamma}^\sigma}\int_{t_i}^{t_{i+1}}(s-t_i)^{\eta(\beta)}E\Big(\sup_{n\in\{0,1,\ldots,i\}}|X^k(t_n)-Y^k(t_n)|^2\Big)ds
\\
\leq&Ch^{2\gamma}+ Ch\sum_{i=0}^{n'-1}E\Big(\sup_{n\in\{0,1,\ldots,i\}}|X^k(t_n)-Y^k(t_n)|^2\Big)
\end{align*}
for all $k\in\{1,\ldots,d\}$ and $n'\in\{1,\ldots,n_T\}$.
An application of the discrete Gronwall inequality completes the proof.
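For completeness, the concluding Gronwall step can be spelled out. Setting $e_{j} := E\big(\sup_{n\in\{0,1,\ldots,j\}}|X^k(t_n)-Y^k(t_n)|^2\big)$, the estimate above gives, after enlarging $C$ to absorb the initial term (which is of order $h^{2\gamma}$ by Assumption \ref{ass:initial data}),
\begin{align*}
e_{n'} \leq Ch^{2\gamma} + Ch\sum_{i=0}^{n'-1} e_i
\end{align*}
for all $n'\in\{1,\ldots,n_T\}$, and the discrete Gronwall inequality yields $e_{n'} \leq Ch^{2\gamma}(1+Ch)^{n'} \leq Ch^{2\gamma}e^{CT}$, since $n'h\leq T$.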
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
Humans have the ability to perform various behaviors. However, training an intelligent agent to perform multiple behaviors is still a challenging task. In recent years, reinforcement learning (RL) \cite{sutton1998reinforcement} has shown promising results in many applications by optimizing a predefined reward function. As a result, the optimal solution is a single behavior that performs best with respect to this predefined reward function. To extend from a single behavior to various behaviors, we need to explicitly define a suitable reward function for each behavior. However, manually defining various reward functions is not intuitive and hence impractical in many environments.
Imitation learning \cite{argall2009survey,billard2008robot} is an efficient way to learn a policy to perform a task. It alleviates the difficulty of defining an appropriate reward function by learning a single behavior directly from expert demonstrations. However, standard approaches to imitation learning cannot learn different behaviors from demonstrations with mixed behaviors. Simply applying imitation learning to such demonstrations yields a policy that tries to imitate all behaviors but is unlikely to reproduce any of them accurately. A straightforward solution is to add a behavior label to each demonstration \cite{rahmatizadeh2017vision}. However, this incurs additional labeling cost and requires the behaviors to be defined in advance. Towards addressing these issues, some previous works propose to automatically recover specific reward functions for different behaviors in the data, which is referred to as multi-task inverse reinforcement learning \cite{babes2011apprenticeship,dimitrakakis2011bayesian,hausman2017multi,li2017inferring}. These approaches, however, typically rely on numerous environment interactions, which is not practical for many realistic applications (e.g. robotics).
Recently, a few approaches \cite{morton2017simultaneous,wang2017robust} apply variational autoencoders (VAEs) \cite{kingma2013auto} to this problem. These methods do not require additional rollouts. VAEs allow the policy to perform different behaviors according to the latent vector representations of demonstrations. These methods, like most VAE-based approaches, employ continuous latent variables with a standard Gaussian distribution as the prior and show some promising results. However, to perform a specific behavior, these works need to encode the corresponding demonstration, since directly sampling from the Gaussian prior makes it difficult to specify a behavior and may result in sub-optimal performance.
In this work, we propose an approach based on the variational autoencoder with a categorical latent variable that jointly learns an encoder and a decoder. The encoder infers discrete latent factors corresponding to different behaviors from demonstrations. The decoder, as a policy, performs different behaviors according to the latent factors. We propose to use the categorical latent variable to learn the multi-modal policy for two reasons. First, demonstrations with mixed behaviors are inherently discrete in many cases since typically there exist salient differences between behaviors. For example, imagine a robotic arm trying to reach 4 different targets. The demonstrations can be naturally split into 4 categories where each category focuses on one specific target. Thus, using a categorical latent variable is appropriate to represent such behaviors. Second, using the categorical latent variable makes the learned policy more controllable. The categorical latent variable can discover the salient variations in the data and result in simple representations of different behaviors, namely the categories. As a result, each category corresponds to a specific behavior and the learned policy can be controlled to reproduce a behavior by simply conditioning on a categorical vector.
We evaluate our method on three different tasks including a robotic arm trying to reach different targets, a bipedal robot with different moving behaviors, and a challenging car driving task containing multiple driving behaviors with high-dimensional visual inputs. To the best of our knowledge, this is the first VAE-based method for this problem that can scale to tasks with high-dimensional inputs. We also demonstrate that our method can still learn distinct behaviors without the prior knowledge of the number of behaviors in the data.
Our contributions are summarized as the following:
\begin{itemize}
\itemsep=0pt
\item We propose an approach based on the variational autoencoder (VAE) with a categorical latent variable to imitate multiple behaviors from mixed demonstrations without behavior labels.
\item We demonstrate our method is applicable to several tasks, including a challenging task with high-dimensional visual inputs.
\item We show the categorical latent variable can discover distinct behaviors without the prior knowledge of the number of behaviors in the data.
\end{itemize}
\section{Related work}
Imitation learning considers the problem of learning skills from demonstrations. Two main approaches to imitation learning are behavior cloning \cite{pomerleau1991efficient} and inverse reinforcement learning \cite{abbeel2004apprenticeship,ho2016generative,levine2011nonlinear,Ng:2000:AIR:645529.657801,ziebart2008maximum}. Behavior cloning directly mimics expert demonstrations by supervised learning from states to actions without interacting with the environment, providing an efficient way to learn how to perform a task. In contrast, inverse reinforcement learning aims to recover a reward function that can explain the behavior shown in demonstrations. However, these methods typically assume the demonstrations consist of a single behavior.
Multi-task inverse reinforcement learning \cite{dimitrakakis2011bayesian,babes2011apprenticeship} aims to learn from demonstrations with multiple behaviors by inverse reinforcement learning. In \cite{dimitrakakis2011bayesian}, the authors propose a Bayesian approach for inferring the intention of an agent. In \cite{babes2011apprenticeship}, the authors propose an approach based on the EM algorithm that clusters observed trajectories by using inverse reinforcement learning methods to infer the intention for each cluster. In contrast to these methods, recent works on multi-task inverse reinforcement learning adopt the generative adversarial imitation learning (GAIL) algorithm \cite{li2017inferring,hausman2017multi}. Both propose similar frameworks that extend GAIL with a component maximizing mutual information between latent variables and state-action pairs.
Multi-task inverse reinforcement learning typically requires an inefficient procedure of taking additional rollouts, which is impractical for many realistic applications. Recently, a stochastic neural network framework \cite{tamar2018imitation} was proposed to learn a multi-modal policy without additional rollouts by selecting the sampled intention with the lowest error for updating network parameters. Similarly, our approach does not rely on any additional rollouts and learns multiple behaviors directly from demonstrations.
While these methods achieve some promising results, other works explore an approach based on a latent-variable generative model, the variational autoencoder (VAE) \cite{morton2017simultaneous,wang2017robust}. To perform a specific behavior, these methods need to condition on its corresponding demonstration. Our approach, in comparison, learns categorical representations of distinct behaviors, resulting in a policy that can perform a specific behavior by conditioning on a categorical vector.
\section{Preliminaries}
\label{pre}
In this section, we formally define the notations as well as describe some background concepts about imitation learning and variational autoencoders.
\subsection{Imitation learning}
Let $S$ denote the state space and $A$ the action space. In the imitation learning setting, we assume we are provided a set of demonstrations consisting of $N$ trajectories $\{\zeta_i\}_{i=1}^N$. Each trajectory $\zeta_i$ is composed of a sequence of state-action pairs $\zeta_i = \{s_1^i, a_1^i, ..., s_T^i, a_T^i\}$ where $s_t^i \in S, a_t^i \in A$ denote the state and action respectively at time $t$. For the remainder of this paper, we drop the index $i$ and abbreviate the state sequence $s_1, ..., s_T$ as $s_{1:T}$ and the action sequence $a_1, ..., a_T$ as $a_{1:T}$ for simplicity. Let $\pi_\theta$ denote a policy that defines the distribution over actions given states. The goal of imitation learning is to learn a policy from demonstrations such that it can reliably perform a task. To learn the policy, we can maximize the log-likelihood:
\begin{equation}
\mathcal{L}(\theta;\zeta) = \;\sum_{i=1}^{N}\log\pi_\theta(a_{1:T}|s_{1:T}) = \sum_{i=1}^{N}\sum_{t=1}^{T}\log\pi_\theta(a_t|s_t).
\end{equation}
The second equality is a consequence of a standard assumption that the policy is memory-less, which implies $\pi_\theta(a_{1:T}|s_{1:T}) = \prod_{t=1}^{T}\pi_\theta(a_t|s_t)$.
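As a toy illustration of the factorized objective (hypothetical tabular policy and numbers, not from the paper), the log-likelihood of a trajectory under a memoryless policy is just a sum of per-step log-probabilities:

```python
import math

def bc_log_likelihood(trajectory, policy):
    # Sum of log pi(a_t | s_t), using the memoryless factorization
    # pi(a_{1:T} | s_{1:T}) = prod_t pi(a_t | s_t).
    return sum(math.log(policy(s)[a]) for s, a in trajectory)

# Hypothetical tabular policy over 2 states and 2 actions:
# policy(s) returns the action distribution [pi(0|s), pi(1|s)].
policy = lambda s: {0: [0.9, 0.1], 1: [0.2, 0.8]}[s]

# Toy trajectory of (state, action) pairs.
traj = [(0, 0), (1, 1), (0, 0)]
ll = bc_log_likelihood(traj, policy)  # log 0.9 + log 0.8 + log 0.9
```

In practice the policy is a neural network and this sum is maximized by gradient ascent over minibatches of state-action pairs.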
\subsection{Variational autoencoder}
Variational autoencoders (VAEs) \cite{kingma2013auto} are a class of generative models with latent variables. The generative process is composed of two steps: first a latent variable $z$ is sampled from some prior distribution $p(z)$, and then the data $x$ is generated from some conditional distribution $p_\theta(x|z)$. Often the likelihood of the data is intractable. Therefore, VAEs introduce an encoder $q_\phi(z|x)$ to approximate the true posterior $p(z|x)$ and optimize the variational lower bound of the log-likelihood:
\begin{equation}
\log p(x) \geq \mathcal{L}(\theta,\phi;x,z) = \mathrm{E}_{z \sim q_\phi(z|x)} \left[\log p_\theta(x|z)\right] -D_{KL}(q_\phi(z|x) || p(z)).
\end{equation}
We can combine additional information $y$ to generate the data $x$, and optimize the variational lower bound of the conditional log-likelihood:
\begin{equation}
\log p(x|y) \geq \mathcal{L}(\theta,\phi;x,y,z) = \;\mathrm{E}_{z \sim q_\phi(z|x,y)} \left[\log p_\theta(x|y,z)\right] -D_{KL}(q_\phi(z|x,y) || p(z|y)).
\end{equation}
This class of models is referred to as conditional variational autoencoders (CVAEs) \cite{NIPS2015_5775}.
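For the common Gaussian case (standard normal prior, diagonal-Gaussian posterior), the KL term of the bound above has a well-known closed form, $\frac{1}{2}\sum_i(\mu_i^2+\sigma_i^2-\log\sigma_i^2-1)$; a minimal sketch with toy numbers:

```python
import math

def kl_diag_gaussian_to_standard(mu, sigma):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), the KL term
    # of the Gaussian-prior VAE objective.
    return 0.5 * sum(m * m + s * s - math.log(s * s) - 1.0
                     for m, s in zip(mu, sigma))

kl_zero = kl_diag_gaussian_to_standard([0.0, 0.0], [1.0, 1.0])  # posterior equals prior
kl_pos = kl_diag_gaussian_to_standard([1.0, -1.0], [0.5, 2.0])  # strictly positive
```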
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{schematic.png}
\caption{Illustration of our method. During training, each trajectory is encoded by a bidirectional LSTM with an attention mechanism. The approximated categorical posterior $q(z|s_{1:T}, a_{1:T})$ is parameterized by feeding the representation of the whole trajectory through a fully connected layer with a softmax function. The policy takes a latent variable $z$ which is sampled from the posterior and a state to generate an action at each time step. At test time, the policy can perform different behaviors by directly sampling $z$ from a categorical prior distribution where each category corresponds to a specific behavior.}
\centering
\label{fig:schematic}
\end{figure*}
\section{Our method}
In this section, we first construct our method based on conditional variational autoencoders (CVAEs) for imitation learning. Next, we describe our architecture with an attention mechanism. Finally, we present a categorical reparameterization trick which allows us to train the variational autoencoder with categorical latent variables. The overall architecture of our method is shown in Figure \ref{fig:schematic}.
\subsection{Variational autoencoder for behavior cloning}
In order to learn a multi-modal policy from demonstrations with mixed behaviors, we formulate a probabilistic graphical model using CVAEs. Given a sequence of $N$ trajectories, we assume each trajectory $\zeta$ is generated from an unobserved latent variable $z$ and $z$ is the same throughout the whole trajectory. Hence, the conditional generative process is as follows: given states $s_{1:T}$, $z$ is sampled from the prior distribution $p(z|s_{1:T})$, and actions $a_{1:T}$ are generated from the distribution $\pi_\theta(a_{1:T}|s_{1:T}, z)$. We can factorize it to $\prod_{t=1}^{T}\pi_\theta(a_t|s_t, z)$ with a memory-less policy, which means at each time $t \in \{1, ..., T\}$, the action $a_t$ is generated from only the state $s_t$ and $z$. Note that the prior distribution is modulated by $s_{1:T}$, which is infeasible at test time since we are unable to infer $z$ using future states $s_{t+1:T}$ at time $t$. This constraint, however, can be relaxed by making an assumption that the latent variables are statistically independent of the states \cite{doersch2016tutorial,walker2016uncertain}. That is, we assume $p(z|s_{1:T}) = p(z)$ where $p(z)$ is some assumed prior distribution, and thus we can directly sample $z \sim p(z)$ to perform a task instead of sampling from $p(z|s_{1:T})$ at test time.
We train our CVAE by maximizing the conditional log-likelihood $\pi_\theta(a_{1:T}|s_{1:T})$. We optimize its variational lower bound by introducing an encoder $q_\phi(z|s_{1:T}, a_{1:T})$ that approximates the true posterior distribution $p(z|s_{1:T}, a_{1:T})$. To be specific, we maximize the following objective function:
\begin{equation}
\mathcal{L}(\theta,\phi;\zeta,z) = \mathrm{E}_{z \sim q_\phi(z|s_{1:T}, a_{1:T})} \left[\sum_{t=1}^{T}\log\pi_\theta(a_t|s_t,z)\right] -D_{KL}(q_\phi(z|s_{1:T}, a_{1:T}) || p(z)).
\end{equation}
Typically $p(z)$ is assumed to be the standard Gaussian distribution $\mathcal{N}(\mathbf{0}, \mathbf{I})$. We propose, however, to employ a categorical distribution to learn behavior-level representations.
Intuitively, since the different behaviors typically have salient differences in terms of trajectories, we can learn categorical representations that correspond to distinct classes of behaviors from their trajectories. In our formulation, the latent variable $z$ contains high-level information of the whole trajectory. Therefore, employing the categorical prior distribution enables our model to discover the variation between trajectories and learn categorical representations for different behaviors. Consequently, the policy trained by the categorical latent variable can perform different behaviors by simply changing the categories, where each category corresponds to a specific behavior.
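Under a uniform categorical prior (an assumption made for this sketch; the paper only specifies a categorical prior $p(z)$), the KL term in the objective above can be computed directly:

```python
import math

def categorical_kl(q, p):
    # KL(q || p) for two categorical distributions given as probability lists.
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

k = 4
uniform = [1.0 / k] * k
q = [0.7, 0.1, 0.1, 0.1]          # posterior concentrating on one behavior
kl = categorical_kl(q, uniform)    # penalty for deviating from the prior
```

The KL to a uniform prior over $k$ classes is bounded by $\log k$, attained when the posterior collapses to a single class.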
\subsection{Model architecture}
Our recognition model $q_\phi(z|s_{1:T}, a_{1:T})$ uses a bidirectional LSTM \cite{schuster1997bidirectional}, which encodes the whole trajectory to obtain its context representation. The bidirectional LSTM (Figure \ref{fig:schematic}(b)) combines a forward $\overrightarrow{LSTM}$ which processes the trajectory from $t = 1$ to $T$ and a backward $\overleftarrow{LSTM}$ which processes the trajectory from $t = T$ to $1$:
\[\overrightarrow{h_t} = \overrightarrow{LSTM}(s_t, a_t, \overrightarrow{h_{t-1}}), \qquad
\overleftarrow{h_t} = \overleftarrow{LSTM}(s_t, a_t, \overleftarrow{h_{t+1}}),\]
and we obtain a hidden state $h_t$ by using element-wise sum to combine the forward state $\overrightarrow{h_t}$ and the backward state $\overleftarrow{h_t}$.
To produce the final representation of the whole trajectory, we employ an attention mechanism \cite{bahdanau2014neural,yang2016hierarchical,duan2017one} to extract the pairs that are important to the meaning of the trajectory. Specifically, the attention mechanism calculates a vector of weights used for a linear combination of all hidden states:
\begin{equation}
u_t = \tanh(W_wh_t + b_w),
\end{equation}
\begin{equation}
\alpha_t = \frac{\exp(u_t^Tu_w)}{\sum_{t'=1}^{T}\exp(u_{t'}^Tu_w)}.
\end{equation}
A simple schematic of the attention mechanism is shown in Figure \ref{fig:schematic}(a). We first feed $h_t$ through a one-layer MLP; then a high-level query representation $u_w$ is introduced to measure the importance of each pair, and the importance weights $\alpha_t$ are calculated by a softmax function. The query vector $u_w$ is randomly initialized and learned jointly. The representation of the trajectory is then calculated by a simple linear combination:
\begin{equation}
r = \sum_{t=1}^{T} \alpha_t h_t.
\end{equation}
This attention mechanism aims to emphasize the informative pairs that can better represent the trajectory, which is similar to that used in document classification \cite{yang2016hierarchical}. The approximated posterior distribution is obtained by feeding $r$ through a one-layer MLP with a softmax function. Finally, we concatenate $z$, which is sampled using a reparameterization trick described in the following section, and a state as the input of a standard MLP policy (Figure \ref{fig:schematic}(c)) to generate an action.
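The attention computation above can be sketched in a few lines of plain Python (toy dimensions, an identity weight matrix, and a fixed query, all chosen only for illustration):

```python
import math

def attention_pool(hidden_states, W, b, u_w):
    # u_t = tanh(W h_t + b); alpha = softmax(u_t . u_w); r = sum_t alpha_t h_t
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    scores = []
    for h in hidden_states:
        u_t = [math.tanh(x + bi) for x, bi in zip(matvec(W, h), b)]
        scores.append(sum(ui * uw for ui, uw in zip(u_t, u_w)))
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    dim = len(hidden_states[0])
    r = [sum(a * h[i] for a, h in zip(alphas, hidden_states)) for i in range(dim)]
    return alphas, r

# Toy 2-d hidden states over T = 3 steps.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
u_w = [1.0, 1.0]
alphas, r = attention_pool(H, W, b, u_w)  # third step gets the largest weight
```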
\subsection{Reparametrization of discrete latent variables}
To train our conditional variational autoencoder with categorical latent variables, we employ a categorical reparameterization trick with the Gumbel-Softmax distribution \cite{jang2016categorical,maddison2016concrete}. The Gumbel-Softmax distribution is a continuous distribution that can approximate sampling from a categorical distribution. Let the approximated posterior $q_\phi(z|s_{1:T}, a_{1:T})$ represent a categorical distribution over $k$ classes with class probabilities $\lambda_1, ..., \lambda_k$, and let $z \sim q_\phi(z|s_{1:T}, a_{1:T})$ be represented by a $k$-dimensional one-hot vector. Sampling $z$ according to this distribution can be replaced with the Gumbel-Max trick, which draws samples $z$ according to:
\begin{equation}
z = \mathrm{one\_hot} \left(\mathop{\mathrm{argmax}}\limits_{i} g_i + \log\lambda_i\right),
\end{equation}
where $g_i$ are i.i.d samples drawn from a standard Gumbel distribution.
Because argmax is a non-differentiable operation, we use the softmax function to approximate it and relax $z$ to a continuous random variable $z'$ which can be expressed as:
\begin{equation}
z'_i = \frac{\exp{((g_i + \log\lambda_i)/\tau)}}{\sum_{j=1}^{k}\exp((g_j + \log\lambda_j)/\tau)} \quad \mathrm{for} \;i = 1, ...,\,k,
\end{equation}
where $\tau$ is the temperature parameter. As $\tau \rightarrow 0$, Gumbel-Softmax distribution approaches a categorical distribution while as $\tau \rightarrow \infty$, it becomes a uniform distribution.
We employ the straight-through variation of the Gumbel-Softmax distribution \cite{jang2016categorical} to sample the output of the approximated posterior distribution. That is, we use $z$ as the input of our policy but back-propagate gradients according to its continuous relaxation $z'$. Using the straight-through variation during training is more consistent with test time, since at test time we simply input different one-hot vectors to our policy to perform different behaviors.
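A minimal sketch of this sampling step (illustrative only; in the actual model the relaxed sample $z'$ carries gradients inside a deep network):

```python
import math
import random

def gumbel_softmax_sample(log_probs, tau, rng=random):
    # Draw a relaxed sample z' from the Gumbel-Softmax distribution, plus the
    # hard one-hot z used in the straight-through forward pass.
    gumbels = [-math.log(-math.log(rng.random())) for _ in log_probs]
    logits = [(g + lp) / tau for g, lp in zip(gumbels, log_probs)]
    m = max(logits)                      # stabilize the softmax
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    z_soft = [e / s for e in exps]       # continuous relaxation (for gradients)
    hard_idx = max(range(len(z_soft)), key=z_soft.__getitem__)
    z_hard = [1.0 if i == hard_idx else 0.0 for i in range(len(z_soft))]
    return z_hard, z_soft

random.seed(0)
lam = [0.7, 0.2, 0.1]                    # class probabilities of the posterior
log_probs = [math.log(p) for p in lam]
z, z_relaxed = gumbel_softmax_sample(log_probs, tau=0.5)
```

Lowering `tau` makes `z_relaxed` approach the one-hot `z`, at the cost of higher-variance gradients.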
\section{Experiments}
The primary goal of our experiments is to verify whether our method can automatically discover discrete latent factors of variation in demonstrations with multiple behaviors, and learn a multi-modal policy that can perform different behaviors by conditioning on a categorical vector.
We evaluate our method on three different tasks (Figure \ref{fig:envs}), including simulated robotic tasks and a challenging car driving task with visual inputs:
\begin{itemize}
\item \textbf{FetchReach}: The Fetch environments from OpenAI gym \cite{1606.01540} are based on the 7-DoF robotic arm. The goal of this task is to reach a target position. We place 4 targets randomly in 4 quadrants of the table. The demonstrations include 4 kinds of behaviors, where each behavior reaches the target in its corresponding quadrant.
\item \textbf{Walker-2D}: The Walker-2D environment from Deepmind Control Suite \cite{tassa2018deepmind} is based on a 6-DoF bipedal robot. We use a set of demonstrations which consist of 8 different moving behaviors separated by speeds.
\item \textbf{TORCS}: TORCS is a car racing simulator \cite{TORCS} which provides high-dimensional visual inputs. In this environment, our policy receives only raw visual inputs as the states and generates two continuous actions, \textit{steering} and \textit{acceleration}. We consider two experimental setups. One is keeping the car to the left or right, which we refer to as \textbf{\textit{direction}}; the other is driving the car at certain speeds, which we refer to as \textbf{\textit{speed}}.
\begin{figure}[t]
\includegraphics[clip=True, width=\linewidth]{envs_new.png}
\caption{Our experiment environments. \textbf{Left}: Examples of different behaviors in Walker-2D. \textbf{Middle}: FetchReach with 4 random targets. \textbf{Right}: Different driving behaviors in TORCS \textbf{\textit{direction}}.}
\label{fig:envs}
\end{figure}
\end{itemize}
\subsection{Reaching multiple targets}
In the FetchReach experiment, we place the targets randomly in the 4 quadrants of the table. The goal of each policy is to reach the target in its corresponding quadrant. This experimental setting provides a concrete example of demonstrations that are inherently discrete with multiple distinct behaviors. This experiment aims to verify whether our method can automatically learn multiple behaviors from mixed demonstrations. We compare our approach to the following methods:
\begin{itemize}
\item \textbf{BC without labels}: We train a behavior cloning policy on all demonstrations as a baseline.
\item \textbf{BC with labels}: The architecture is the same as the decoder of our model, while the input is a state concatenated with a one-hot vector of the given behavior label. This model is designed to provide an upper bound on performance since it has knowledge of the behavior labels.
\item \textbf{Gaussian VAE}: We use the same architecture as our method but change the prior distribution to a unit Gaussian. We set the dimension of the latent variable $z$ to 4, the same as in our method. Since Gaussian latent vectors cannot be directly designated as in our method, we evaluate the performance of the Gaussian VAE with 2 kinds of latent vectors: one randomly sampled from the prior distribution, and the other encoded from trajectories that successfully reach the different targets.
\end{itemize}
\begin{table*}
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\centering
\caption{FetchReach experiment results for our approach and baselines.}
\begin{tabular}{l c c c c c c}
\\
\hlineB{2}
Approach & Success Rate & z & target 0 & target 1 & target 2 & target 3 \\ \hline
BC w/o labels & 33.25\% & N/A & 98 & 18 & 6 & 11 \\ \hline
\multirow{6}{*}{Gaussian VAE} & \multirow{2}{*}{49.5\%} & \multirow{2}{2cm}{Sampled from the prior} & \multirow{2}{*}{54} & \multirow{2}{*}{58} & \multirow{2}{*}{42} & \multirow{2}{*}{44} \\ &&&&&& \\ \cline{2-7}
& \multirow{4}{*}{77.25\%} & \multirow{4}{2cm}{Encoded from the trajectory} & 87 & 0 & 0 & 0 \\
&&& 0 & 84 & 0 & 0 \\
&&& 0 & 0 & 79 & 0 \\
&&& 0 & 0 & 0 & 59 \\ \hline
\multirow{4}{*}{Ours} & \multirow{4}{*}{ \textbf{98\%}} & [1,0,0,0] & 0 & 0 & 96 & 0 \\
&& [0,1,0,0] & 0 & 100 & 0 & 0 \\
&& [0,0,1,0] & 0 & 0 & 0 & 98 \\
&& [0,0,0,1] & 98 & 0 & 0 & 0 \\ \hline
\multirow{4}{*}{BC w/ labels} & \multirow{4}{*}{100\%} & [1,0,0,0] & 100 & 0 & 0 & 0 \\
&& [0,1,0,0] & 0 & 100 & 0 & 0 \\
&& [0,0,1,0] & 0 & 0 & 100 & 0 \\
&& [0,0,0,1] & 0 & 0 & 0 & 100 \\ \hlineB{2}
\end{tabular}\hspace{10pt}
\label{table:fetchreach_result}
\end{table*}
For training data, we train expert policies with HER \cite{andrychowicz2017hindsight} to reach the different targets and collect 600 trajectories for each target.
For evaluation, we randomly choose 100 configurations for each behavior, 400 configurations in total. We evaluate the overall success rate from the total targets reached by the gripper and the number of successful reaches for the different latent vectors. The results are shown in Table~\ref{table:fetchreach_result}. Not surprisingly, BC without labels is unable to perform the four behaviors. It nearly collapses to a single mode and results in a low success rate. We observe that the Gaussian VAE with latent vectors sampled from the prior performs better than BC without labels. However, the success rate is still not satisfactory. We consider that this is because a randomly sampled latent vector may lead the gripper to a position on the table that does not match the target position in the configuration. For the Gaussian VAE with latent vectors encoded from trajectories, it is noteworthy that the success rate is lower than expected, even though we provide successful trajectories to the model. We surmise that the reason for the poor performance is that the model has never seen the test configurations and trajectories before, and therefore may not map the given trajectories to appropriate latent vectors. Finally, our method outperforms the Gaussian VAE, and its performance is comparable to BC with labels. It can not only distinguish between different behaviors but also reach targets precisely by conditioning only on a one-hot vector that corresponds to a fixed behavior.
\begin{figure}[t]
\includegraphics[clip=True, width=\textwidth]{mujoco_torcs_exp.jpg}
\caption{Results of TORCS and Walker-2D. (a) For the TORCS experiment, we plot the trajectories generated from our policy. \textbf{Left}: Results of \textbf{\textit{direction}}. The x-axis represents the distance between the car and the track axis, and the y-axis represents the distance from the start. \textbf{Right}: Results of \textbf{\textit{speed}}. the x-axis represents the distance from the start, and the y-axis represents the speed. (b) For Walker-2D, we evaluate our policy conditioning on different latent vectors during training. The dashed lines represent the average speeds of demonstrations of different expert policies.}
\label{fig:walker_reward}
\end{figure}
\subsection{Moving behaviors}
In the Walker-2D experiment, the goal is to examine whether our approach can learn to separate and imitate different moving behaviors. We use PPO \cite{schulman2017proximal} to train expert policies that reach 8 different target speeds and collect 100 trajectories from each policy. Since the policies are trained to reach different speeds without any constraint on the robot's movement, different behaviors may consist of similar state-action pairs. However, the behaviors are still distinct in terms of trajectories, and we expect our approach to work well. We consider this experiment more challenging than FetchReach.
Since the behaviors are separated by speed, we present speeds over training iterations to show how the different latent vectors develop. The result is shown in Figure \ref{fig:walker_reward} (b). Our model can gradually separate the 8 different behaviors from the mixed data and successfully imitate each behavior. The policy reaches the same speeds as the expert policies by simply conditioning on different one-hot vectors. This result indicates that our method performs well even when learning from complex demonstrations.
\iffalse
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{./picture/walker_similar.jpg}
\caption{Two behaviors in Walker-2D experiment. \textbf{Top}: The behavior of average speed 3.5m/s. \textbf{Bottom}: Behavior of average speed 5m/s. Even there are many similar movements in 2 styles, our method can distinguish between 2 behaviors and imitate each behavior well.}
\centering
\label{fig:walker_similar}
\end{figure*}
\fi
\subsection{Driving behaviors}
In the TORCS experiment, our goal is to evaluate whether our approach can generalize to tasks with high-dimensional inputs. We design a heuristic agent to collect demonstrations. For \textbf{\textit{direction}}, there are two behaviors: keeping to the left or to the right. For \textbf{\textit{speed}}, there are three behaviors: driving at speeds of 40, 60, and 80. We collect 30 trajectories for each behavior.
Directly learning from raw visual inputs is challenging, as the model is required to extract meaningful visual features and identify the meaning of different trajectories simultaneously. Moreover, since we extract information from an entire trajectory during training, the approach is limited by memory size and becomes computationally inefficient, especially when dealing with long trajectories. Hence, we adopt a transfer learning approach.
In particular, we extract image features of dimension $1 \times 1024$ from the average pooling layer before the final classifier of GoogLeNet \cite{szegedy2015going} and use them as the states of the trajectory. We do not feed any additional information, such as the current speed, into our model, forcing it to learn to distinguish different behaviors from image features alone.
We visualize our results in Figure \ref{fig:walker_reward} (a). We can observe that our method can clearly separate different behaviors for both \textbf{\textit{direction}} and \textbf{\textit{speed}} even using high-dimensional visual inputs, demonstrating the ability to generalize to tasks with only visual inputs.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{without_prior.jpg}
\caption{(a)(b) Results of \textbf{\textit{direction}} with the number of categories greater than the number of behaviors. (c)(d) Results of \textbf{\textit{speed}} with the number of categories less (c) and greater (d) than the number of behaviors. (e)(f) Results of FetchReach experiment with the number of categories greater than the number of behaviors. Circles of different colors are the final positions of the gripper with 5 categories (e) and 6 categories (f). Crosses are the positions of targets.}
\centering
\label{fig:without_prior}
\end{figure*}
\subsection{Learning without prior knowledge}
We demonstrate that our model can successfully learn multiple behaviors without prior knowledge of the number of behaviors in the data. Learning without this prior knowledge is important for many practical applications, since in many cases we cannot know the variations in the data in advance. We conduct this experiment in the TORCS and FetchReach environments. We set the number of categories to differ from the number of behaviors in the demonstrations and test whether our model can still separate the behaviors. We show the results in Figure \ref{fig:without_prior}. (a)(b) are the results of the \textbf{\textit{direction}} experiment in TORCS with 3 and 4 categories. (c)(d) are the results of the \textbf{\textit{speed}} experiment in TORCS with 2 and 4 categories. (e)(f) are the results of the FetchReach experiment with 5 and 6 categories. As expected, our model groups similar behaviors into the same category if we employ fewer categories (Figure \ref{fig:without_prior} (c)). However, if the number of categories is greater than the number of behaviors in the data (Figure \ref{fig:without_prior} (a)(b)(d)(e)(f)), our model can still find all behaviors in the demonstrations and imitate each of them. Since the model separates the same behavior into different groups in this case, it appears to discover the inter-class variation first and then seek out intra-class variation. These results suggest that our method can be used to learn a multi-modal policy from demonstrations that contain an unknown number of distinct behaviors.
\section{Conclusion}
We present a method to learn a multi-modal policy from demonstrations with mixed behaviors. The approach is based on a variational autoencoder with a categorical latent variable, which learns representations corresponding to the different behaviors in the demonstrations. Our experimental results show that our method works on a variety of tasks, including a challenging task with high-dimensional visual inputs. We also show that using the categorical variable allows all distinct behaviors to be discovered automatically, without prior knowledge of the number of behaviors in the demonstrations. In the future, we plan to scale up our method to more realistic demonstrations that contain a larger number of complex behaviors.
\bibliographystyle{abbrv}
\section{Introduction}
Non-linear effects in electronic systems
are an interesting playground to look for novel
transport properties that cannot occur in equilibrium.
A famous example is the Franz-Keldysh effect in semiconductors,
where the band edge exhibits red-shifts
in static or AC electric fields.
This can be considered as an effect of the
deformation of the electron wavefunction, i.e., leakage to the gap region,
in strong electric fields,
which leads to the change in response properties.
Recently, a novel, optically controlled non-linear phenomenon
was proposed in ref. \cite{Okaprb},
in which the present authors
revealed that a new type of Hall effect --- the photovoltaic Hall effect ---
can emerge in honeycomb lattices such as graphene subject to intense, circularly-polarized light,
despite the absence of static magnetic fields.
If we denote the gauge field representing
the circularly-polarized light as $\Vect{A}_{\rm ac}(t)=
\frac{F}{\Omega}(\cos\Omega t,\sin\Omega t)$,
where $F=eE$ is the field strength and $\Omega $ the
frequency of the light, the
tight-binding Hamiltonian for electrons reads
\begin{eqnarray}
H(t)=-\sum_{ij}t_{ij}e^{-i\hat{e}_{ij}
\cdot{\bf A}_{\rm ac}(t)}c_{i}^\dagger c_j ,
\end{eqnarray}
where $t_{ij}=w$ is the nearest-neighbor
hopping for the honeycomb lattice, $\hat{e}_{ij}$ the
unit vector connecting the bond between $i$ and $j$, and
spins are ignored.
The Hamiltonian is time periodic, and, in momentum space the $k$-points
are driven in the Brillouin zone by the
field as $\Vect{k}(t)=\Vect{k}-\Vect{A}_{\rm ac}(t)$.
The orbit is a circle centered at $\Vect{k}$
with a radius $F/\Omega$, and the time period is
$T=2\pi/\Omega$ (Fig.~\ref{fig1}).
During the motion, the electron wave function
acquires a nontrivial geometric phase,
known as the Aharonov-Anandan phase\cite{Aharonov1987},
which is the
non-adiabatic extension of the Berry phase\cite{Berry1984}
and reduces to the Berry phase in the
adiabatic limit $\Omega\to 0$ with a fixed $F/\Omega$.
An important feature in the electron wavefunctions
in time-periodic systems,
which is a temporal analogue of Bloch's theorem for
spatially periodic systems,
is that the solution
$|\Psi_\alpha(t)\rangle $ of the
time-dependent Schr\"odinger
equation can be
described by the time-periodic
Floquet states $|\Phi_\alpha(t)\rangle $
as $|\Psi_\alpha(t)\rangle =e^{-i\varepsilon_\alpha t}|\Phi_\alpha(t)\rangle $,
where $\varepsilon_\alpha$ is the Floquet quasi-energy
and $\alpha$ an index that labels the Floquet states.
Note that $\alpha$ becomes a
composite index $\alpha=(i,m)$ when the
original system has an internal degree of freedom $i$, e.g.
the band index for multiband systems such as the
Dirac cone with electron and hole branches, while
$m=0,\pm 1,\pm 2,\ldots$ denotes the number of
absorbed photons.
\begin{figure}[t]
\centering
\includegraphics[width=6cm]{fig1.eps}
\caption{
A trajectory of $\Vect{k}+\Vect{A}_{\rm ac}(t)$ around
a Dirac point in a circularly-polarized light field.
}
\label{fig1}
\end{figure}
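The quasi-energy spectrum and the light-induced gap discussed in the next section can be checked numerically by propagating the driven Hamiltonian over one period and diagonalizing $U(T)$. The following minimal sketch is our own illustration, not code from ref.~\cite{Okaprb}; the parameter values $v=1$, $A=0.2$, $\Omega=2$ are arbitrary. It works at the Dirac point $\Vect{k}=0$, where $H(t)=v\Vect{A}_{\rm ac}(t)\cdot\Vect{\sigma}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def quasienergy_gap(A, Omega, circular=True, v=1.0, steps=4000):
    """Floquet quasi-energy gap at the Dirac point (k = 0).

    Builds the one-period evolution operator U(T) by midpoint Trotter
    steps and reads quasi-energies off its eigenphases, eps = -arg(l)/T.
    """
    T = 2 * np.pi / Omega
    dt = T / steps
    U = I2.copy()
    for n in range(steps):
        t = (n + 0.5) * dt                     # midpoint rule
        Ax = A * np.cos(Omega * t)
        Ay = A * np.sin(Omega * t) if circular else 0.0
        H = v * (Ax * sx + Ay * sy)            # traceless Hermitian 2x2
        h = v * np.hypot(Ax, Ay)               # |h| for H = h . sigma
        if h > 1e-15:
            # exp(-i H dt) = cos(|h| dt) I - i sin(|h| dt) H/|h|
            U = (np.cos(h * dt) * I2 - 1j * np.sin(h * dt) * (H / h)) @ U
        # h == 0: the step is the identity
    eps = -np.angle(np.linalg.eigvals(U)) / T  # folded into (-Omega/2, Omega/2]
    d = abs(eps[0] - eps[1])
    return min(d, Omega - d)                   # gap within the Floquet zone
```

For circular polarization this reproduces the rotating-frame result $\Delta=2[\sqrt{(\Omega/2)^2+(vA)^2}-\Omega/2]\approx 2(vA)^2/\Omega$, while for linear polarization the gap at the Dirac point vanishes, consistent with the absence of the photovoltaic Hall effect for linearly polarized light.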
The applied intense laser induces nontrivial photovoltaic transports as
follows:
Note first that we have two external fields: (i) the intense laser irradiation
and (ii) a weak dc electric field for measuring the Hall effect.
If we define the inner product averaged over a period of
the ac laser field by
$\bra\bra\alpha|\beta\ket\ket\equiv \frac{1}{T}\int_0^T dt\langle \alpha(t)|
\beta(t)\rangle $,
the Floquet states span a complete orthogonal
basis in the presence of a strong laser field, and
one can perform a perturbation expansion in the
small dc field to probe the (Hall) conductivity
to obtain a Kubo formula extended to the case of
strong ac irradiation \cite{Okaprb},
\begin{eqnarray}
&&\sigma_{ab}(\Vect{A}_{ac})=i\int \frac{d\Vect{k}}{(2\pi)^d}
\sum_{\alpha,\beta\ne\alpha}
\frac{[f_\beta(\Vect{k})-f_\alpha(\Vect{k})]}{\varepsilon_\beta(\Vect{k})-\varepsilon_\alpha(\Vect{k})}
\frac{
\bra\bra\Phi_\alpha(\Vect{k})|J_b|\Phi_{\beta}(\Vect{k})\ket\ket
\bra\bra\Phi_\beta(\Vect{k})|J_a|\Phi_{\alpha}(\Vect{k})\ket\ket
}{\varepsilon_\beta(\Vect{k})-\varepsilon_\alpha(\Vect{k})+i\eta},
\end{eqnarray}
where $f_\alpha(\Vect{k})$ is the non-equilibrium distribution
(occupation fraction) of the $\alpha$-th Floquet state,
$\Vect{J}$ the current operator, and $\eta$ a positive infinitesimal.
The difference from the conventional
Kubo formula in the absence of AC fields is that
the energy is replaced with the Floquet quasi-energy, and
the inner product with a time average.
The photovoltaic Hall conductivity can be further simplified
to \cite{Okaprb},
\begin{eqnarray}
\sigma_{xy}(\Vect{A}_{\rm ac})=e^2\int \frac{d\Vect{k}}{(2\pi)^d}\sum_\alpha
f_\alpha(\Vect{k})\left[\nabla_{\Vect{k}}\times\Vect{\mathcal{A}}_\alpha(\Vect{k})\right]_z ,
\label{eq:TKNN}
\end{eqnarray}
in terms of a gauge field
$\Vect{\mathcal{A}}_\alpha(\Vect{k}) \equiv
-i\bra\bra\Phi_\alpha(\Vect{k})|\nabla_{\Vect{k}}|\Phi_\alpha(\Vect{k})\ket\ket$.
This reduces to the TKNN formula \cite{TKNN}
in the adiabatic limit.
Note that the photovoltaic Hall effect does not take
place in a linearly polarized light \cite{syzranov:045407}
which does not break the time-reversal symmetry.
\begin{figure}[tbh]
\centering
\includegraphics[width=12.5cm]{fig2.eps}
\caption{
(a) The Floquet quasi-energy $\varepsilon$ against the wave number, and
(b) the photovoltaic Berry curvature $
\left[\nabla_k\times\Vect{\mathcal{A}}_\alpha(\Vect{k})\right]_z$
for the upper band for $F=0.1w,\;\Omega=1.0w$ in the honeycomb lattice.
}
\label{fig2}
\end{figure}
\section{Photovoltaic Berry curvature in the honeycomb lattice}
Let us have a closer look at the Floquet states
and the photovoltaic curvature $\left[\nabla_{\Vect{k}}\times\Vect{\mathcal{A}}_\alpha(\Vect{k})\right]_z $
in the honeycomb lattice.
We begin by noting that there exists an infinite array of photo-dressed states
in the Floquet spectrum, whose quasi-energies are
shifted by $m\Omega$ ($m=0,\pm 1,\ldots $)
from the plotted ones.
In Fig.~\ref{fig2}~(a), we plot the Floquet quasi-energy
against wave number for the Floquet states having the
largest weight of the $m=0$ Floquet state
(a descendant of the original $F=0$ dispersion).
An important feature of the plot is the band-gap-like
structure around $\varepsilon\sim \pm \Omega/2$,
where the gap opens as an outcome of the photo-induced hybridization
of levels, a first-order effect in $F$.
More importantly, a small
gap opens \cite{Okaprb} at the Dirac points
around $\varepsilon\sim 0$, which is a consequence of
the circularly-polarized light.
The photovoltaic Berry curvature is plotted in
Fig.~\ref{fig2}~(b) for the same parameters.
Around each K-point we find
a peak surrounded by concentric circles
and lines. The peak is due to the
gap opening at the Dirac points;
we can see that the sign is the
same for both the $K$ and $K'$ points.
They contribute equally to the photovoltaic Hall coefficient
in the momentum integral
in eqn.~(\ref{eq:TKNN}) and {\it no cancellation
between $K$ and $K'$ points occurs}.
On the other hand, the concentric circles
and lines are due to the photo-induced hybridization,
e.g., the former corresponds to the band
structure around
$\varepsilon\sim \pm \Omega/2$ in Fig.~\ref{fig2}~(a).
\section{Experimental feasibility}
The two major candidates for the observation of the
photovoltaic Hall effect are
\begin{itemize}
\item Graphene, including multi-layer systems,
\item Surface states in topological insulators,
\end{itemize}
in which Dirac-like dispersions are realized.
As for the experimental feasibility,
a typical laser intensity conceived here,
$F\sim 0.001w$, corresponds to
$E\sim 10^7~\mbox{V/m}$
for a photon energy $\Omega\sim 1$~eV, with $w=2.7$~eV and $a=2.6$~\AA.
The size of the photovoltaic Dirac gap $\Delta=2\kappa$
scales as
\begin{eqnarray}
\Delta \propto F^2/\Omega^3
\end{eqnarray}
for small field strengths, which was derived in ref.~\cite{Okaprb}.
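This scaling can be made explicit with a rotating-frame estimate for the continuum Dirac cone (a standard argument that we sketch here for orientation; the full derivation is in ref.~\cite{Okaprb}). At the Dirac point the driven Hamiltonian is $H(t)=v\,\Vect{A}_{\rm ac}(t)\cdot\Vect{\sigma}$ with amplitude $vA=vF/\Omega$, and the transformation $e^{-i\Omega t\sigma_z/2}$ renders it time independent,
\begin{equation}
H_{\rm rot}=vA\,\sigma_x-\frac{\Omega}{2}\sigma_z,\qquad
\Delta=2\left[\sqrt{\left(\frac{\Omega}{2}\right)^2+(vA)^2}-\frac{\Omega}{2}\right]
\simeq\frac{2(vA)^2}{\Omega}=\frac{2v^2F^2}{\Omega^3},
\end{equation}
reproducing the $F^2/\Omega^3$ scaling in the weak-field limit.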
This formula implies that with a smaller laser energy $\Omega$
we can realize the photovoltaic Hall effect with a
weaker laser strength $F$.
Hence it should be interesting to study the
$\Omega$ dependence of the photovoltaic Hall
effect. Another important point is that
in graphene, the Dirac dispersion is
realized near zero energy, but the dispersion
begins to reflect the lattice structure for higher energies,
hence for higher values of $\Omega$.
So we may ask: Will the Hall effect survive when $\Omega$
becomes large, say
comparable with the band width?
In order to clarify this point, we have
calculated the photovoltaic Berry curvature for
several values of $\Omega$ in Fig.~\ref{fig3}.
For a smaller frequency in Fig.~\ref{fig3}~(a),
the central peak becomes smaller and broader.
The surrounding concentric circles
become closely packed and increase in number
because we now have contributions from
photo-induced hybridization
around $\varepsilon\sim \pm \Omega$
in addition to $\varepsilon\sim\pm\Omega/2$.
For a larger frequency in Fig.~\ref{fig3}~(b), on the other hand,
the peaks survive
even though $\Omega$ is greater than the
band width.
However, the peak becomes smaller (and
narrower), so it will become more difficult to observe the effect.
In Fig.~\ref{fig3}~(c), we plot the
photovoltaic Hall coefficient (eqn.~(\ref{eq:TKNN}))
against the strength of the circularly polarized light.
Here, we assumed a
sudden switch-on of the electric fields, which implies
$f_\alpha=\sum_{E_n<E_F}|\bra\bra \Phi_\alpha|\psi_n\ket\ket|^2$,
where $|\psi_n\rangle $ is the zero-field eigen-wavefunction and $E_n$
its energy. This
assumption is expected to break down in the presence of
dissipation; however, the basic trend
does not change, as confirmed by calculations
based on more elaborate techniques \cite{Okaprb}.
As expected from the scaling relation
of the photovoltaic Dirac gap $\Delta$ (eqn.~(4)),
the Hall coefficient first increases as $
\mbox{Re}~\sigma_{xy}\propto \Delta\propto F^2/\Omega^3$
in the weak field limit. At this field strength,
the physics can be understood by the central peaks
of the Berry curvature at the K and K' points.
However, after saturating at a
certain value, the Hall coefficient starts to decrease and eventually becomes
negative. When this happens, the electrons
are deep in the ac Wannier-Stark-ladder regime
and are localized
by the strong circularly polarized light.
The Berry curvature peaks around the K and K' points
then become highly complex, and the sign
alternates across the concentric circles,
as shown in Fig.~\ref{fig3}~(a). This leads to negative
Hall coefficients after integrating over the Brillouin zone.
\begin{figure}[t]
\centering
\includegraphics[width=16.cm]{fig3.eps}
\caption{
Photovoltaic Berry curvature
for (a) $\Omega=0.5w$
and (b) $\Omega=5.0w$
with the same field strength as Fig.~\ref{fig2}.
(c) The dependence of the
photovoltaic Hall coefficient $\mbox{Re}~\sigma_{xy}$ on the
strength of the ac electric field $F$ in the honeycomb lattice.
}
\label{fig3}
\end{figure}
\section{Conclusion}
We have studied the photovoltaic Hall
effect in a honeycomb lattice. In particular, we have elaborated on
the dependence of the photovoltaic Berry curvature
on the frequency $\Omega$ of the applied circularly-polarized light.
We have found that the photovoltaic Berry curvature,
and thus the Hall coefficient,
gradually increases as $\Omega$ becomes smaller,
reflecting the relation $
\mbox{Re}~\sigma_{xy}\propto \Delta\propto F^2/\Omega^3$.
However, even when $\Omega$ is large and the
Dirac-band approximation breaks down,
we found evidence that the photovoltaic Hall
effect survives. This indicates that
the effect is not limited to Dirac bands
but is more universal, and can
be realized in a wide range of electron systems in
circularly polarized light.
TO was supported by a Grant-in-Aid for Young Scientists (B).
\section*{References}
\section{Introduction}\label{sec:intro}
Fluid dynamics is an effective description of near-equilibrium physics.
It captures the dynamics of locally equilibrated systems in which the
parameters of equilibrium vary slowly compared to the relaxation
length scale. When, for instance, the microscopic dynamics is well described
by kinetic theory, the Boltzmann equation reduces to the equations of fluid
dynamics on length scales that are large compared to the molecular
mean free path.
The variables of fluid mechanics are the local values of the parameters
that characterize fluid equilibrium; in the simplest context these are
the fluid temperature, chemical potentials and velocity. The
equations of fluid dynamics are simply the conservation of stress-tensor and
all other charged currents, once the stress tensor and currents are
expressed in terms of equilibriation parameters. The formulas that
express the stress tensor and charge currents as functions of fluid
variables are known as constitutive relations.
As fluid dynamics is a long wavelength effective description, it is
meaningful only to present constitutive relations in an expansion
in derivatives of the fluid variables.
For any given fluid, a microscopic
computation of constitutive relations starting from a microscopic description of
the system is often an impossibly difficult task. In the usual spirit
of effective field theory, this task is, moreover, extraneous to the
study of fluid dynamics. An autonomous `theory' of fluid dynamics
addresses the following question:
what is the most general form of the constitutive relations
that could possibly arise in the fluid description of any consistent system
(to any given order in the derivative expansion)?
For example, the stress tensor is a tensor. At any given order in the
derivative expansion, there exists only a finite number of (onshell inequivalent)
tensor structures one can build out of fluid variables and their derivatives.
The most general expression for the stress tensor is clearly given by
a linear combination of these inequivalent tensors, where the coefficients
in this expansion are arbitrary functions of the scalar fluid variables
(temperatures and chemical potentials in the simplest situations). While
the requirements of symmetry are certainly necessary, they are not sufficient.
There is atleast one additional constraint on allowed constitutive
relations: that they are consistent with a local form of the second law
of thermodynamics \cite{landau}. In other words any given constitutive relation must
be accompanied by an entropy current, also constructed out of fluid
variables. The entropy current must have the property that its divergence
is positive for every conceivable fluid flow allowed by the fluid equations
with the specified constitutive relations \footnote{However, the existence of
a local entropy current with positive divergence is really a heuristic,
with as yet no solid basis
in thermodynamics or QFT}.
It is a quite remarkable fact that the requirement of the existence of
a positive divergence entropy current constrains the allowed constitutive
relations of fluid dynamics in a quite dramatic manner; as we will see
in some detail below, this requirement reduces the number of free parameters
(or more precisely free functions) allowed in constitutive relations.
In this note we work out the restrictions imposed by this requirement
on the constitutive relations of an uncharged relativistic fluid in $3+1$
spacetime dimensions at second
order in the derivative expansion. In this system the variables of fluid
dynamics are simply the temperature $T$ and the fluid four velocity $u^\mu$.
As we will see in some detail below, symmetry considerations allow a
15-parameter family of constitutive relations for the stress tensor, where
every parameter is an arbitrary function of the temperature. It was already
noted by Romatschke \cite{Romatschke:2009kr} that the requirement of
entropy increase
imposes at least two relations among these 15 functions. In this note
we generalize Romatschke's analysis and demonstrate that a complete study
of the requirements of positivity of the entropy current imposes 5 relations
on the 15 coefficients described above. The set of all second-order constitutive
relations consistent with positivity of entropy production
is parameterized by ten functions of the temperature. Unlike the first-order
case, however, we did not find any inequalities among the second-order transport coefficients.
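For orientation, recall the standard first-order version of this argument (sketched here for an uncharged fluid with $\epsilon+P=sT$ and $d\epsilon=T\,ds$). With the canonical entropy current $J^\mu=su^\mu$ and the first-order constitutive relation $\Pi_{\mu\nu}=-\eta\sigma_{\mu\nu}-\zeta P_{\mu\nu}\Theta$, the conservation equations give
\begin{equation}
\nabla_\mu\left(su^\mu\right)=-\frac{1}{T}\,\Pi^{\mu\nu}\nabla_\mu u_\nu
=\frac{\eta}{T}\,\sigma_{\mu\nu}\sigma^{\mu\nu}+\frac{\zeta}{T}\,\Theta^2,
\end{equation}
where $\sigma_{\mu\nu}$ is the shear tensor and $\Theta=\nabla\cdot u$ the expansion, so positivity for arbitrary flows yields the inequalities $\eta\geq 0$ and $\zeta\geq 0$. It is the second-order analogue of this computation that produces equalities rather than inequalities.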
We now give a detailed presentation of our final results, i.e.
a precise characterization of the 10 parameter set of allowed constitutive
relations at second order in the derivative expansion for uncharged
relativistic fluids. Let the fluid stress tensor be given by
$$T_{\mu\nu}= T^{perf}_{\mu\nu} + \Pi^{\mu\nu}$$
We work in the so called Landau frame which imposes the transversality
condition
$$ u^\mu \Pi_{\mu\nu}=0 $$
In this frame the most general allowed form for $\Pi_{\mu\nu}$ up to
second order in the derivative expansion is given by\footnote{Our convention for
Riemann tensor is the following
$${R^\rho}_{\alpha\beta\nu} = \partial_\beta \Gamma^\rho_{\alpha\nu} -
\partial_\nu \Gamma^\rho_{\alpha\beta}
+ \Gamma^\lambda_{\alpha\nu}\Gamma^\rho_{\lambda\beta}
-\Gamma^\lambda_{\alpha\beta}\Gamma^\rho_{\lambda\nu}$$}
\begin{equation}\label{stresderi1}
\begin{split}
\Pi_{\mu\nu} =~&-\eta\sigma_{\mu\nu} - \zeta P_{\mu\nu} \Theta\\
~&+T\bigg[ \tau ~(u.\nabla)\sigma_{\langle\mu\nu\rangle} + \kappa_1 R_{\langle \mu\nu\rangle} + \kappa_2 F_{\langle \mu\nu\rangle} +\lambda_0~ \Theta\sigma_{\mu\nu}\\
&+ \lambda_1~ {\sigma_{\langle \mu}}^a\sigma_{a\nu\rangle}+ \lambda_2~ {\sigma_{\langle \mu}}^a\omega_{a\nu\rangle}+ \lambda_3~ {\omega_{\langle \mu}}^a\omega_{a\nu\rangle} + \lambda_4~{\mathfrak a}_{\langle\mu}{\mathfrak a}_{\nu\rangle}\bigg]\\
&+TP_{\mu\nu}\bigg[\zeta_1(u.\nabla)\Theta + \zeta_2 R + \zeta_3R_{00}
+ \xi_1 \Theta^2 + \xi_2 \sigma^2+ \xi_3 \omega^2
+\xi_4 {\mathfrak a}^2 \bigg]
\end{split}
\end{equation}
where
\begin{equation}\label{notation}
\begin{split}
&u^\mu =\text{The normalised four velocity of the fluid}\\
&P^{\mu\nu} = g^{\mu\nu} + u^\mu u^\nu =\text{Projector perpendicular to $u^\mu$}\\
&\Theta = \nabla. u = \text{Expansion},~~
{\mathfrak a}_\mu = (u.\nabla) u_\mu = \text{Acceleration}\\
&\sigma^{\mu\nu} =
P^{\mu\alpha} P^{\nu\beta}\left(\frac{\nabla_\alpha u_\beta + \nabla_\beta u_\alpha}{2}
- \frac{\Theta}{3}g_{\alpha\beta}\right) = \text{Shear tensor}\\
&\omega^{\mu\nu} =
P^{\mu\alpha} P^{\nu\beta}\left(\frac{\nabla_\alpha u_\beta
- \nabla_\beta u_\alpha}{2}\right)=\text{Vorticity}\\
&F^{\mu\nu} = R^{\mu a \nu b}u_a u_b,~~R^{\mu\nu}
= R^{a\mu b\nu}g_{ab}~~(R^{abcd} = \text{Riemann tensor})\\
&\sigma^2 = \sigma_{\mu\nu}\sigma^{\mu\nu},~~~~\omega^2 = \omega_{\mu\nu}\omega^{\nu\mu}
\end{split}
\end{equation}
and
$$
A_{\langle\mu\nu\rangle} \equiv P_\mu^\alpha P_\nu^\beta\left(\frac{A_{\alpha\beta} + A_{\beta\alpha}}{2} - \left[\frac{A_{ab}P^{ab}}{3}\right]g_{\alpha\beta}\right)~~\text{For any tensor $A_{\mu\nu}$}
$$
It turns out that `entropy-positivity' does not impose any constraint
on $\tau,~\lambda_0,~\lambda_1,~\lambda_2,~\zeta_1,~\xi_1$ and $\xi_2$.
The remaining eight second-order transport coefficients satisfy the following
5 relations.
\begin{equation}\label{relationsintro}
\begin{split}
\kappa_2 =&~ \kappa_1 + T\frac{d\kappa_1}{dT}\\
\zeta_2 =&~ \frac{1}{2}\left[s\frac{d\kappa_1}{ds} - \frac{\kappa_1}{3}\right]\\
\zeta_3 = &\left(s\frac{d\kappa_1}{ds} + \frac{\kappa_1}{3}\right) + \left(s\frac{d\kappa_2}{ds} - \frac{2\kappa_2}{3}\right)+\frac{s}{T}\left(\frac{dT}{ds}\right)\lambda_4\\
\\
\xi_3=&~\frac{3}{4}\left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\left(T\frac{d\kappa_2}{dT} + 2\kappa_2\right) -\frac{3\kappa_2}{4} +\left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\lambda_4 \\
&+\frac{1}{4}\left[s\frac{d\lambda_3}{ds} + \frac{\lambda_3}{3} -2 \left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\lambda_3\right]\\
\xi_4 =&~-\frac{\lambda_4}{6} - \frac{s}{T}\left(\frac{dT}{ds}\right)\left(\lambda_4 + \frac{T}{2}\frac{d\lambda_4}{dT}\right)
-T\left(\frac{d\kappa_2}{dT}\right)\left(\frac{3s}{2T}\frac{dT}{ds} - \frac{1}{2}\right) \\
&- \frac{Ts}{2} \left(\frac{dT}{ds}\right)\left(\frac{d^2\kappa_2}{dT^2}\right)
\end{split}
\end{equation}
So finally there are 10 independent transport coefficients
at second order for an uncharged fluid.
As we have explained above, unless the equations \eqref{relationsintro} are
satisfied, the fluid dynamics equations do not have a positive divergence
entropy current. When the equations \eqref{relationsintro} are satisfied
the fluid equations are compatible with the existence of a positive divergence
entropy current, but this current is not unique. Consider a fluid with
a particular constitutive relation that obeys the equations
\eqref{relationsintro}. It turns out that any such fluid admits a 7-parameter
set (7 arbitrary functions of temperature)
of positive-divergence entropy currents.
\begin{equation}\label{duientintro}
\begin{split}
& J^\mu|_{\text{up to 2nd order}}\\
=&~su^\mu + \nabla_\nu\left[A_1(u^\mu\nabla^\nu T - u^\nu \nabla^\mu T)\right] + \nabla_\nu \left( A_2 T \omega^{\mu\nu}\right)\\
& + A_3 \left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right) u_\nu
+\left(\frac{A_3}{T} + \frac{dA_3}{dT}\right)\left[\Theta \nabla^\mu T - P^{ab}(\nabla_b u^\mu)( \nabla_a T )\right]\\ &+(B_1\omega^2 + B_2\Theta^2 + B_3 \sigma^2)u^\mu + B_4\left[(\nabla s)^2
u^\mu + 2 s \Theta \nabla^\mu s\right]\\
\end{split}
\end{equation}
Our results above apply to an arbitrary theory, i.e. a theory with an
arbitrary equation of state. The specialization of our results
to the special case of conformal fluids turns out to be trivial,
as we now explain. As was explained in \cite{Baier:2007ix,Loganayagam:2008is},
the requirement
of Weyl invariance forces 10 linear combinations of the 15 symmetry allowed
coefficients
to vanish; specifically
\begin{equation}\label{zeros}
\begin{split}
& \kappa_1 = 2\kappa_2\equiv \kappa, ~~\tau = 3\lambda_0,~~\lambda_4 =0\\
&\zeta_1= \zeta_2 = \xi_1 = \xi_2= \xi_3 =0
\end{split}
\end{equation}
Moreover, the temperature dependence of the
remaining five coefficients (those that are allowed to take arbitrary values
consistent with Weyl invariance) is determined by dimensional analysis:
all of them are linearly proportional to the temperature.
It turns out that this linear dependence on temperature,
the conformal equation of state,
and \eqref{zeros} reduce all of the equations
\eqref{relationsintro} to trivial identities of the form $0=0$. In
other words, the requirement of positivity of the divergence of the entropy
current does not impose any equations on the five transport
coefficients (allowed by conformal symmetry).
The coefficients for a conformally covariant entropy
current are given by the following expressions.
\begin{equation}\label{conformalintro}
\begin{split}
A_1(T) = a_1 ,~~&~~A_2(T) = a_2 ,~~~~A_3(T) = \frac{a_1}{2} T\\
B_1(T) =b_1 T, ~~&~~B_2(T)=\frac{2a_1}{9} T,~~~~B_3(T) =b_3 T\\
&~~B_4(T)= -\left(\frac{a_1}{18}\right)T^{-5}
\end{split}
\end{equation}
where all $a_i$ and $b_i$ are constants.
\newline
Therefore the conformally covariant entropy current has
four independent coefficients ($a_1, ~a_2,~b_1$
and $b_3$) when expanded up to second order in derivatives.
Let us end this introduction with a description of our motivations in
undertaking the computations described in this note. Our first motivation
is practical. Theoretical reconstructions of the RHIC and LHC heavy
ion experiments often model the expansion of the hot dense deconfined
plasma by the equations of fluid dynamics including second order
corrections. Given that
we have not been able, from first principles, to compute the fluid
description of QCD it seems of interest to parameterize the most
general set of allowed equations, as we have done (to 2nd order) in this
note.
However our main motivation in undertaking the computations described
in this note are structural. At zero temperature, the equations of motion
of physical systems are strongly constrained by the requirement that
they follow from the extremization of an action. On the other hand
the traditional formulation of the equations of fluid dynamics is at the
level of the equations of motion. It seems likely that the equations
of fluid dynamics inherit constraints that are the analogue of the
zero temperature requirement of being obtained from an action. One possible
source of such constraints, as described in this note, stems from the
requirement that our system admit an entropy current of positive divergence.
There are other potential sources of constraints, for instance the
requirement that correlation functions computed from fluid dynamics
have certain symmetry properties that can be derived, on general grounds,
in quantum field theories (see \cite{Jensen:2011xb}). It appears to us to be of interest to find
a complete `theory' of fluid dynamics; a formalism that enumerates all
consistency conditions on the equations of fluid dynamics. Such a formalism
would take the place of the zero temperature requirement that the equations
of motion follow from an action. The computations presented in this
note may be thought of as a small first step towards this larger goal.
\section{Brief Summary of our Procedure}\label{sec:summary}
In the rest of this note we proceed to determine the most general
second order constitutive relations and second order
entropy current consistent with positivity of the divergence of the current.
In order to do this we first list out the most general symmetry-allowed
entropy current up to third order in the derivative expansion (on-shell
equivalent currents are not treated as distinct). We then perform a
brute-force computation of the divergence of this entropy current, keeping
all relevant terms (see below for an explanation of which terms are
relevant) to fourth order in the derivative expansion. We then use
the equations of motion (including constitutive terms up to second order in
derivatives) to rewrite our final answer entirely as a function of
derivatives of velocity and temperature and the background metric
that are independent of each other (i.e. are not related
to each other by the equations of motion). We then work out the
conditions on the entropy current and constitutive relations that
ensure the positivity of this divergence for arbitrary values of the
independent fluid derivatives, finally obtaining \eqref{relationsintro}
and \eqref{duientintro}.
In order to implement the programme outlined in the paragraph above, in
this note we proceed in the following order. In section \ref{sec:class} below
we classify and enumerate the on-shell independent derivatives of fluid
fields (up to fourth order in derivatives). We also enumerate
all the products of these derivative fields with net derivative number
$\leq 4$, organizing our enumeration in representations of the local
$SO(3)$ that leaves the fluid velocity fixed.
In section \ref{sec:enropycurrent} below we then proceed to enumerate the most general
entropy current up to third order in the derivative expansion. We then
compute the divergence of this entropy current and determine several
constraints on the entropy current that follow from the requirement that
its divergence is positive definite.
In section \ref{sec:finalconstraint} below we then enumerate the constraints on
constitutive
relations, up to second order, that follow from the requirement of positivity
of divergence of the entropy current.
We end this brief section by listing our conventions. Throughout this
note we work in the Landau gauge. In this gauge the velocity $u^\mu$ at any
point is defined as the unique time-like eigenvector of the stress tensor,
normalised so that $u_\mu u^\mu = -1$. In other words, by definition
\begin{equation}\label{udef}
T_\mu^\nu u_\nu =-\epsilon u_\mu
\end{equation}
The quantity $\epsilon$ is, by definition, the energy density of our
fluid. All other fluid thermodynamical quantities (like the temperature or
pressure) are obtained from $\epsilon$ using thermodynamics and the equation of state.
The equation of state expresses the energy density $\epsilon$ as
a function of a thermodynamic parameter like the entropy density; it
can vary from system to system, and in this note we shall keep it arbitrary.
Once $\epsilon(s)$ is known, the temperature $T$ and the pressure $P$ can be determined
in the following
way.
$$T(s) = \frac{d\epsilon(s)}{ds},~~~P(s) = s\frac{d\epsilon(s)}{ds} - \epsilon(s)$$
Both of the above relations directly follow from equilibrium thermodynamics.
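For instance, for the conformal equation of state $\epsilon(s)=\alpha\, s^{4/3}$ with $\alpha$ a constant (a choice made here purely for illustration; it is not used elsewhere in this note), these relations give
$$T(s) = \frac{d\epsilon}{ds} = \frac{4\alpha}{3}\, s^{1/3},~~~
P(s) = s\,\frac{4\alpha}{3}\, s^{1/3} - \alpha\, s^{4/3} = \frac{\alpha}{3}\, s^{4/3} = \frac{\epsilon}{3}$$
so that $Ts = \frac{4}{3}\epsilon = \epsilon + P$, consistent with the Euler relation of equilibrium thermodynamics.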
\section{Classification of fluid data}\footnote{This section has been
worked out in collaboration with Shiraz Minwalla and Tarun Sharma.}\label{sec:class}
In this section we present a partial listing of the on-shell independent
derivatives of fluid fields ($T$ and $u^\mu$), at any given point $x$,
up to fourth order in the derivative expansion. We organize these
derivatives (which we will often refer to below as independent data)
according to their transformation properties under the $SO(3)$ rotational
group that leaves $u^\mu(x)$ invariant.
In order to explain what we mean let us consider a listing of independent
data at first order in the derivative expansion. Before accounting for
on-shell equivalences we have 16 independent pieces of first derivative data:
the four derivatives of the temperature and the four derivatives of each of the
three independent velocity components. These 16 pieces of data transform, under
the local $SO(3)$, as two scalars, two vectors, a pseudo vector and
a traceless symmetric tensor (i.e. the {\bf 5}) of $SO(3)$
(see the second column of Table \ref{table:1storder} for details). However these 16
pieces of data are not all independent. The four {\it perfect} fluid
equations of motion may be used to solve for four of these fluid derivatives
in terms of the other 12. As the four fluid equations can be decomposed into
a vector and a scalar of $SO(3)$ (see the third column of Table
\ref{table:1storder})
it follows that the independent data consists of one vector, one scalar,
a pseudo vector and a traceless symmetric tensor (see the fourth column of
Table \ref{table:1storder}). The choice of the independent scalar and vector piece
of data
is arbitrary; we could take our independent data to be either of the vectors
and either of the scalars listed in the second column of Table
\ref{table:1storder}. In the
fourth column of Table \ref{table:1storder} we have made one particular
choice of the
independent data that we will employ in much of our note. Occasionally
we will find it more convenient to use another choice of independent data; we will
explicitly point this out when this is the case.
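As a consistency check on the first order counting above, the $SO(3)$ dimensions add up as
$$\underbrace{2(1) + 2(3) + 3 + 5}_{16~\text{raw data}} - \underbrace{1+3}_{4~\text{eoms}}
= \underbrace{1 + 3 + 3 + 5}_{12~\text{independent}}$$
where, in the independent data, the vector is ${\mathfrak a}^\mu$, the pseudo-vector is $l^\mu$ and the ${\bf 5}$ is the shear tensor $\sigma_{\mu\nu}$.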
In this note we will require that the production of entropy is positive for
an arbitrary fluid flow on an arbitrary curved
manifold. As explained in \cite{Bhattacharya:2011tra} this requirement yields constraints
on the constitutive relations of fluids even in flat space. In order
to implement the constraint described above, we will find it necessary to
list the data associated with local background metric curvatures in addition
to the data associated with fluid flows. All curvature invariants formed
from a background metric are given in terms of (contractions of) the Riemann
tensor and its derivatives. It is important to recall that, in addition to
certain symmetry properties, the Riemann tensor also obeys a Bianchi type
identity. The independent derivatives of the Riemann tensor should be counted
modulo the Bianchi identity and its derivatives. In analogy with the counting
problem for fluid data listed above, we will regard the set of all derivatives
of the Riemann tensor (with all symmetries imposed) as raw data, and
the Bianchi identity and its derivatives as `equations of motion'.
We will then list the independent pieces of data in curvature and derivatives
by subtracting equations of motion from raw data, just as described
in the previous paragraph.
\subsection{Independent Data}
Without further ado we simply proceed to list the (relevant parts of)
fluid and curvature data at various orders in the
derivative expansion.
At first order in derivatives we have only fluid data. They are listed below in Table \ref{table:1storder}.
\begin{table}[ht]
\caption{Data at 1st order in derivatives}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| c| c|}
\hline\hline
&Before imposing eom & Eoms & Independent data \\ [1ex]
\hline
\hline
Scalars (1) & $ (u.\nabla)T$,~ $(\nabla.u)$ & $u_\nu\nabla_\mu T^{\mu\nu}=0$ & $\Theta \equiv(\nabla.u)$ \\ [0.5ex]
\hline
Vectors (1) &$(u.\nabla)u^\mu$,~$P^{\mu\nu}\nabla_\nu T$ & $P^\mu_a\nabla_\nu T^{\nu a}=0$
& ${\mathfrak a}^\mu \equiv(u.\nabla)u^\mu$ \\[0.5ex]
\hline
Pseudo-vectors (1) & $u_\nu\epsilon^{\nu\mu\lambda\sigma}\nabla_\lambda u_\sigma$ &
&$ l^\mu \equiv u_\nu\epsilon^{\nu\mu\lambda\sigma}\nabla_\lambda u_\sigma$ \\[0.5ex]
\hline
Tensors (1)& $\nabla_{\langle\mu}u_{\nu\rangle}$
& & $\sigma_{\mu\nu}\equiv \nabla_{\langle\mu}u_{\nu\rangle}$ \\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:1storder}
\end{table}
Here for any tensor $A_{\mu\nu}$, the symbol $A_{\langle\mu\nu\rangle}$ means the symmetric
traceless part of
it, projected in the direction perpendicular to $u^\mu$.
$$A_{\langle\mu\nu\rangle}\equiv P_\mu^ a P_\nu ^b\left[\left(\frac{A_{ab} +A_{ba}}{2}\right)
- g_{ab}\left(\frac{P^{\alpha\beta}A_{\alpha\beta}}{3}\right)\right]$$
For example, if we expand this notation, the shear tensor
$\sigma_{\mu\nu}$ has the following definition
$$\sigma_{\mu\nu}\equiv P_\mu^ a P_\nu ^b\left[\frac{\nabla_a u_b + \nabla_b u_a}{2}
- g_{ab}\frac{\Theta}{3}\right]$$
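As a quick check, $\sigma_{\mu\nu}$ so defined is both transverse and traceless: transversality follows from $u^\mu P_\mu^{~a}=0$, while for the trace, using $u^a u^b\nabla_a u_b = \frac{1}{2}(u.\nabla)(u_bu^b) = 0$ and $P^{ab}g_{ab}=3$,
$$g^{\mu\nu}\sigma_{\mu\nu} = P^{ab}\nabla_a u_b - \left(P^{ab}g_{ab}\right)\frac{\Theta}{3}
= \left(\Theta + u^a u^b \nabla_a u_b\right) - 3\cdot\frac{\Theta}{3} = 0$$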
At second order we have curvature data along with fluid data. The curvature
data is given by the 20 independent components of the Riemann curvature
subject to the identities
\begin{equation*}
\begin{split}
&R_{\mu\nu\alpha\beta} = -R_{\nu\mu\alpha\beta} = - R_{\mu\nu\beta\alpha}\\
&R_{\mu\nu\alpha\beta} = R_{\alpha\beta\mu\nu}\\
&R_{\mu[\nu\alpha\beta]}=0
\end{split}
\end{equation*}
The 20 independent components may be decomposed into $SO(3)$ representations as
in Table \ref{table:2ndcurv}
\\
\begin{table}[ht]
\caption{$I_2$ type curvature data}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| }
\hline
& $R\equiv {R^{\mu\nu}}_{\mu\nu},$\\
Scalars (2)&$R_{00}\equiv u^\mu u^\nu R_{\mu\nu}$\\
&$~~~~~~~\equiv u^\mu u^\nu {R^\alpha}_{\mu\alpha\nu}$\\ [0.5ex]
\hline
Vectors(1) &$ P^{\mu a}R_{ab} u^b$\\[0.5ex]
\hline
& $R_{\langle \mu\nu\rangle}$\\
Tensors(2) &$F_{\langle\mu\nu\rangle}$\\
& where
$F_{\mu\nu} \equiv u^\alpha u^\beta R_{\mu\alpha\nu\beta}$\\[0.5ex]
\hline
Pseudo-tensors (1)&$u_b{ R_{\langle \mu }}^{bcd} ~\epsilon_{\nu\rangle c d q}u^q$\\
\hline
\hline
\end{tabular}
\label{table:2ndcurv}
\end{table}
Second order fluid data is tabulated in Table \ref{table:2ndorder}
\begin{table}[ht]
\caption{$I_2$ type fluid data}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| c| c|}
\hline\hline
&Before imposing eom & Eoms &Independent data \\ [1ex]
\hline
\hline
Scalars (1) & $(u.\nabla)\Theta,~
\nabla^2 T$& $u_\nu (u.\nabla)\nabla_\mu T^{\mu\nu}=0,$ &
$(u.\nabla)\Theta$ \\
&$~u^\mu u^\nu \nabla_\mu\nabla_\nu T$&$\nabla_\mu\nabla_\nu T^{\mu\nu}=0$&\\ [0.5ex]
\hline
Vectors (2) &$P^{\mu\nu}(u.\nabla){\mathfrak a}_\nu,~P^{\mu\nu}\nabla^2 u_\nu,$
& $P^\mu_a(u.\nabla)\nabla_\nu T^{\nu a}=0,$
& $P^{\mu a}\nabla_a \Theta,$ \\
&$P^{\mu\nu}\nabla_\nu \Theta,~P^{\mu\nu}\nabla_\nu (u.\nabla)T$
&$u_aP^{\mu b}\nabla_b\nabla_\nu T^{\nu a}=0$&$P^\mu_a\nabla_b\sigma^{ab}$\\[0.5ex]
\hline
Pseudo-vectors (0)& $(u.\nabla)l^\mu$
&$u_\nu\epsilon^{\mu\nu\lambda\sigma}\nabla_\lambda\nabla_aT^a_\sigma =0$
& \\[0.5ex]
\hline
Tensors (1)& $P^{\mu a}P^{\nu b}(u.\nabla)\sigma_{ab},
~\nabla_{\langle\mu}\nabla_{\nu\rangle}T$
&$\nabla_{\langle\mu}\nabla_aT^a_{\nu\rangle}=0$
& $P^{\mu a}P^{\nu b}(u.\nabla)\sigma_{ab}$ \\[0.5ex]
\hline
Pseudo-tensors (1)&$\nabla_{\langle\mu}l_{\nu\rangle}$&&$\nabla_{\langle\mu}l_{\nu\rangle}$\\[0.5ex]
\hline
Spin-3 (1)&$\nabla_{\langle\mu}\nabla_\nu u_{\alpha\rangle}$&
&$\nabla_{\langle\mu}\nabla_\nu u_{\alpha\rangle}$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:2ndorder}
\end{table}
Third order fluid data that transforms in the scalar, vector and pseudo-vector
representations is tabulated in Table \ref{table:3rdfluid} (we will never
need 3rd order data in the ${\bf 5}$, ${\bf 7}$ and ${\bf 9}$
representations, and so do not tabulate these below).
\begin{table}[ht]
\caption{$I_3$ type fluid data}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| c| c|}
\hline\hline
&Before imposing eom & Eoms & Independent data \\ [1ex]
\hline
\hline
Scalars (1) & $(u.\nabla)^2\Theta,~\nabla^2\Theta,$&
$u_\nu (u.\nabla)^2 \nabla_\mu T^{\mu\nu}=0$,
& $(u.\nabla)^2\Theta$ \\
& $(u.\nabla)^3 T,~(u.\nabla)\nabla^2 T$&$(u.\nabla)\nabla_\mu\nabla_\nu T^{\mu\nu}=0$,&
\\
&&$u_\nu \nabla^2\nabla_\mu T^{\mu\nu}=0$&\\[0.5ex]
\hline
Vectors (1) & $ P^{\mu a} (u.\nabla)^3 u_a$
& $P^{\mu a}u^b\nabla_a\nabla_\nu T^\nu_b =0$
& $P^{\mu a}(u.\nabla) \nabla_a\Theta$ \\
& $P^{\mu a}(u.\nabla) \nabla_a\Theta$
& $P^\mu_a \nabla^2\nabla_\nu T^{\nu a}=0$
&\\
& $ P^{\mu a}\nabla^2\nabla_a T$&$P^{\mu a}\nabla_a \nabla_\alpha\nabla_\beta T^{\alpha\beta}=0$
&\\
&$ P^{\mu a}(u.\nabla)^2\nabla_a T$& $P^\mu_a (u.\nabla)^2\nabla_\nu T^{\nu a}=0$&\\
&$P^{\mu a} (u.\nabla)\nabla^2 u_a$&&\\[0.5ex]
\hline
Pseudo-vectors (1) & $P^{\mu a}(u.\nabla)^2 l_a$ &
$u_\nu \epsilon^{\mu\nu\alpha\beta} (u.\nabla)\nabla_\alpha\nabla_a T^a_\beta=0$
&$P^{\mu a}\nabla^2 l_a$ \\
&$P^{\mu a}\nabla^2 l_a$&&\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:3rdfluid}
\end{table}
The third order curvature data consists of derivatives of the Riemann curvature
constrained by the Bianchi identity
$$\epsilon^{abcd}\nabla_b R_{\alpha\beta c d}=0$$
In Table \ref{table:3rdcurv} we list the independent curvature
data that transforms in the scalar,
vector and pseudo-vector representations (again we will not need, and so do
not list, the remaining representations).
\begin{table}[ht]
\caption{$I_3$ type curvature data}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| c| c|}
\hline\hline
&Before imposing eom & Eoms & Independent data \\ [1ex]
\hline
\hline
Scalars (2) & $(u.\nabla)R$
&
$u^\mu\epsilon_{\mu a\alpha\beta}\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}=0$
& $(u.\nabla)R_{00}$
\\
& $ (u.\nabla)R_{00}$ && $(u.\nabla) R$
\\
&$u_a\nabla_\mu R^{a\mu}$&&\\[0.5ex]
\hline
Vectors (3) & $ P^{\mu a}\nabla_a R_{00}$
&$u_a u^\nu\epsilon_{\mu \nu\alpha\beta}\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}=0 $
&$ P^{\mu a}\nabla_a R_{00}$
\\
& $P^{\mu a}\nabla_a R$
& $u_\alpha u^\nu\epsilon_{\mu \nu a\beta}\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}=0 $
&$P^\mu _a \nabla_\nu R^{\nu a}$\\
& $P^\mu _a\nabla_\nu F^{\nu a}$&&$P^{\mu a}u^b (u.\nabla)R_{ab}$\\
&$P^\mu _a \nabla_\nu R^{\nu a}$& &\\
&$P^{\mu a}u^b (u.\nabla)R_{ab}$&&\\[0.5ex]
\hline
Pseudo-vectors (1) & $u^p u_a \epsilon^{abcd}\nabla_b {R^\mu }_{p cd}$ &
$u_\alpha u_a\epsilon^{abcd}\nabla_b{ R^{\alpha\mu}}_{ c d}=0$
&$u^p u_a \epsilon^{\mu abc}\nabla_b R_{p c}$ \\
&$u^p u_a \epsilon^{\mu abc}\nabla_b R_{p c}$&&\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:3rdcurv}
\end{table}
Finally, fourth order scalar data (all we will need at fourth order), both
fluid as well as curvature, is tabulated in Table \ref{table:4th}.
\begin{table}[ht]
\caption{$I_4$ type scalars}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c| c| c|}
\hline\hline
&Before imposing eom & Eoms & Independent data \\ [1ex]
\hline
\hline
Fluid data (1) & $(u.\nabla)^3\Theta$
&
$(u.\nabla)^3\left(u_\nu\nabla_\mu T^{\mu\nu}\right)=0$
& $(u.\nabla)^3\Theta$
\\
& $ (u.\nabla)\nabla^2\Theta$ &$(u.\nabla)\nabla^2\left(u_\nu\nabla_\mu T^{\mu\nu}\right)=0$
&
\\
&$(u.\nabla)^4 T$&$(u.\nabla)^2\nabla_\mu\nabla_\nu T^{\mu\nu}=0$&
\\
&$(u.\nabla)^2\nabla^2 T$&$\nabla^2\nabla_\mu\nabla_\nu T^{\mu\nu}=0$&
\\
&$\nabla^2 (\nabla^2 T)$&&\\[0.5ex]
\hline
Curvature data (4) & $(u.\nabla)^2R_{00}$
&$u_a u^\nu\epsilon_{\mu \nu\alpha\beta}\nabla^\mu\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}
=0 $
&$(u.\nabla)^2R_{00}$
\\
& $(u.\nabla)^2 R$
& $u_\alpha u^\nu\epsilon_{\mu \nu a\beta}\nabla^\mu\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}
=0 $
&$(u.\nabla)^2R$
\\
& $\nabla^2 R$&$u^\mu\epsilon_{\mu a\alpha\beta}(u.\nabla)
\epsilon^{abcd}\nabla_b{ R^{\alpha\beta}}_{ c d}=0$
&$\nabla^2 R$
\\
&$\nabla^2 R_{00}$& &$\nabla^2 R_{00}$
\\
&$u_a (u.\nabla) \nabla_b R^{ab}$&&\\
&$\nabla_a\nabla_b R^{ab}$&&\\
&$\nabla_a\nabla_b F^{ab}$&&\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:4th}
\end{table}
\subsection{Composite Expressions}
In the sequel we will sometimes need to list for example,
all 3rd order vectors. In addition to expressions constructed
out of the independent 3rd order data, listed in the previous subsection,
the set of all 3rd order vectors includes expressions cubic in first order
data, and expressions formed out of the product of one first order and one
second order piece of data. We will refer to expressions constituted out
of products of independent data as composite expressions.
Composite expressions formed out of independent
data are easily enumerated and decomposed into $SO(3)$ representations
using Clebsch-Gordan decompositions (taking care to account for
symmetry properties when we multiply two or three copies of the same data).
In order to ease the process of reference to composite expressions in the
rest of the note we now adopt the following terminology. Independent
data at $m^{th}$ order in the derivative expansion is referred to as data of
type $I_{m}$. A composite expression that consists of a product of
three first order pieces of data is referred to as an expression of type
$C_{1,1,1}$. Composite expressions that consist of the product of a first order
and a 3rd order piece of data are called expressions of the form $C_{1,3}$.
The generalization of our notation to other forms of composite data is
obvious.
In the sequel we will need to list only those composite expressions that
transform in the vector (in order to list the most general entropy current)
or the scalar (in order to list the most general terms in its divergence).
In the rest of this subsection we present a listing of those vector and scalar
composite expressions that will be needed below.
\begin{table}[ht]
\caption{$C_{1,1}$ type expressions}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars (4) &$\Theta^2,~~{\mathfrak a}^2,~~\omega^2,~~\sigma^2$
\\[0.5ex]
\hline
Vectors (3) &${\mathfrak a}^\mu \Theta,~~{\mathfrak a}_\nu \omega^{\mu\nu},
~~{\mathfrak a}_\nu \sigma^{\mu\nu}$\\[0.5ex]
\hline
Tensors (5) &$\Theta \sigma_{\mu\nu}$,
~ $\sigma_{\langle\mu}^a\sigma_{a\nu\rangle}$,~$\omega_{\langle\mu}^a\sigma_{a\nu\rangle}$,~
$\omega_{\langle\mu}^a\omega_{a\nu\rangle}$,
~${\mathfrak a}_{\langle \mu}{\mathfrak a}_{\nu\rangle}$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:c11}
\end{table}
Here $\omega_{\mu\nu} \equiv P_\mu^{~a} P_\nu^{~b}\left[\frac{\nabla_a u_b - \nabla_b u_a}{2} \right]$ is the vorticity tensor.
\begin{table}[ht]
\caption{$C_{1,2}$ type expressions independent of the curvature}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars (4) &$\Theta (u.\nabla)\Theta,~~(\nabla_\mu T)\nabla^2 u^\mu,
~~(\nabla_\mu T)(u.\nabla)(\nabla^\mu T),~~\sigma_{\mu\nu}\nabla^\mu\nabla^\nu T$
\\[0.5ex]
\hline
\hline
Vectors (11) &$P^\alpha_\mu\Theta \nabla^2 u^\mu,~~P^\alpha_\mu\Theta (u.\nabla)\nabla^\mu T,
~~P^\alpha_\mu(\nabla^2 T)\nabla^\mu T,~~\omega_{\mu\nu}\nabla^2 u^\mu$\\
&$\omega_{\mu\nu} (u.\nabla)\nabla^\nu T,~~\sigma_{\mu\nu}\nabla^2 u^\mu,~~
\sigma_{\mu\nu} (u.\nabla)\nabla^\nu T,~~P^\alpha_\mu(\nabla_a T)(\nabla^\mu\nabla^a T)$\\
&$(\nabla _a \omega_{b\mu})\sigma^{ab},~~ P^\alpha_\mu\sigma_{ab}\nabla^a\nabla^b u^\mu,~~
P^\alpha_\mu\omega ^{ab}\nabla^\mu \omega_{ab}$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:c12}
\end{table}
\begin{table}[ht]
\caption{$C_{1,2}$ type expressions involving a curvature}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars (5) &$F_{ab}\sigma^{ab},~~R_{ab}\sigma^{ab},
~~u_a{\mathfrak a}_bR^{ab},~~\Theta R,~~\Theta R_{00}$
\\[0.5ex]
\hline
Vectors (9) &$P^\alpha_\mu{\mathfrak a}_\nu F^{\mu\nu},~~P^\alpha_\mu{\mathfrak a}_\nu R^{\mu\nu},
~~{\mathfrak a}^\mu R_{00},~{\mathfrak a}^\mu R,~~P^\alpha_\mu u_a\Theta R^{a\mu}$\\
&$u^a R_{ab}\sigma^{b\mu},~~u_a R^{a\mu bc}\omega_{bc},~~
P^\alpha_\mu u_a R^{ab \mu c}\sigma_{bc},~~u^a R_{ab}\omega^{b\mu}$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:c12curv}
\end{table}
\begin{table}[ht]
\caption{$C_{1,1,1}$ type expressions}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars (7) &$\Theta^3,~~\sigma^2\Theta,
~~\omega^2 \Theta,~~{\mathfrak a}^2\Theta$
\\
& ${\mathfrak a}_\mu {\mathfrak a }_\nu \sigma^{\mu\nu},~~\sigma_{\mu a}\sigma^a_b\sigma^{b\mu},
~~\omega_{\mu a}\sigma^a_b\omega^{b\mu}$\\[0.5ex]
\hline
Vectors (10) &$\sigma^2 {\mathfrak a}^\mu,~~\omega^2 {\mathfrak a}^\mu,
~~\Theta^2 {\mathfrak a}^\mu,~~{\mathfrak a}^2 {\mathfrak a}^\mu,
~~\Theta \sigma^{\mu\nu}{\mathfrak a}_\nu$\\
&$\Theta \omega^{\mu\nu}{\mathfrak a}_\nu,~~\sigma^{\mu a}\sigma_{ab}{\mathfrak a}^b,~~
~\omega^{\mu a}\omega_{ab}{\mathfrak a}^b,~~\omega^{\mu a}\sigma_{ab}{\mathfrak a}^b
,~~\sigma^{\mu a}\omega_{ab}{\mathfrak a}^b$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:c111}
\end{table}
\section{Entropy current}\label{sec:enropycurrent}
In this section we will derive constraints on constitutive relations, at
second order in the derivative expansion, from the requirement of positivity of
divergence of {\it any} entropy current that reduces to $s u^\mu$ (where
$s$ is the entropy density) in equilibrium.
We will first explain in very broad terms how we proceed.
The entropy current takes the form
\begin{equation}\label{mpec}
J^\mu = J^\mu_{eq} + \tilde J^\mu
\end{equation}
where
$$J^\mu_{eq} = s u^\mu$$
In general ${\tilde J}^\mu$ has terms of all orders in the derivative
expansion, but for the purposes of this note we will find it sufficient
to truncate ${\tilde J}^\mu$ to terms of third order or lower in
derivatives. To start with we allow ${\tilde J}$ to be given by the most
general possible form consistent with symmetries. We then compute the divergence
of $J^\mu$ and reexpress
the final result entirely in terms of independent data of fourth or
lower order in derivatives. The last step (reexpressing the divergence
of $J$ in terms of independent data) uses the equations of fluid dynamics,
and so the constitutive relations. Our final expression is a polynomial
in the (finite number of) pieces of data of fourth or lower order in
derivatives. We then demand that the resultant polynomial is positive
definite (or can be made so by the addition of terms higher than fourth
order in the derivative expansion) as a function of its arguments.
This rather stringent requirement turns out to yield several constraints
on the form of both the entropy current at second (and third) order
as well as constitutive relations at second order in the derivative expansion.
\subsection{Entropy current in equilibrium and 1st order correction}
In order to set the stage for our discussion we first recall how the
requirement of positivity of the entropy current constrains constitutive
relations at first order in the derivative expansion \cite{Bhattacharya:2011tra}.
We first recall that thermodynamics and the fluid dynamical equations
may be used to demonstrate that
\begin{equation}\label{divperf}
\nabla_\mu J^\mu_{eq} = -\frac{1}{T}\left(\Pi^{\mu\nu}\sigma_{\mu\nu}
+\frac{ \Theta\Pi^\mu_\mu }{3}\right)
\end{equation}
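For completeness we sketch how \eqref{divperf} arises. Writing $T^{\mu\nu}=(\epsilon+P)u^\mu u^\nu + P g^{\mu\nu}+\Pi^{\mu\nu}$, the projection $u_\nu\nabla_\mu T^{\mu\nu}=0$ together with the thermodynamic relations $\epsilon+P=Ts$ and $d\epsilon=T\,ds$ gives
\begin{equation*}
\begin{split}
T\,\nabla_\mu (s u^\mu) &= u_\nu\nabla_\mu\Pi^{\mu\nu}
= -\Pi^{\mu\nu}\nabla_\mu u_\nu \\
&= -\Pi^{\mu\nu}\left(\sigma_{\mu\nu} + \omega_{\mu\nu}
+ \frac{\Theta}{3}P_{\mu\nu} - u_\mu {\mathfrak a}_\nu\right)
= -\left(\Pi^{\mu\nu}\sigma_{\mu\nu} + \frac{\Theta\,\Pi^\mu_\mu}{3}\right)
\end{split}
\end{equation*}
where the second equality uses $u_\nu\Pi^{\mu\nu}=0$ (the Landau gauge condition), and the last follows because the symmetry and transversality of $\Pi^{\mu\nu}$ kill the $\omega_{\mu\nu}$ and $u_\mu{\mathfrak a}_\nu$ terms.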
It follows in particular from \eqref{divperf} that entropy is conserved in
perfect fluid dynamics (i.e. when $\Pi^{\mu\nu}$ vanishes). It also follows
that the divergence of the most general entropy current \eqref{mpec}
only contains terms of second or higher order in the derivative expansion.
Let us now examine the constraints from the requirement of positivity of
these second order pieces. For this purpose we need to study the most
general entropy current at first order in derivatives. Imposing the
requirement of invariance under parity, the most general (on-shell inequivalent)
family of first order entropy currents is given by
\begin{equation} \label{gfec}
J^\mu=J^\mu_{eq} + \alpha \Theta u^\mu +\beta {\mathfrak a}^\mu
\end{equation}
(we have used here that at first order in the derivative expansion we
have one piece of scalar data, which may be chosen as $\Theta$, and one
piece of vector data, which may be chosen as ${\mathfrak a}^\mu$).
We now proceed to compute the divergence of \eqref{gfec} and
use the perfect fluid equations to reexpress the result in terms of
independent data. The resultant expression is the sum of a quadratic
form in first order data and a linear form in 2nd order scalar data.
As derived in \cite{Bhattacharya:2011tra}, the final expression for this divergence
is given as
\begin{equation}\label{diveq1}
\begin{split}
&\nabla_\mu J^\mu|_{\text{up to 2nd order}}\\
=& -\frac{1}{T}\left(\Pi^{\mu\nu}\sigma_{\mu\nu} +\frac{ \Pi \Theta}{3}\right) \\
&+ \Theta (u.\nabla)\alpha + ({\mathfrak a}.\nabla)\beta
+ \left(\alpha + \frac{\beta}{3}\right)\frac{\Theta^2}{3} + \beta\left(\sigma^2 + \omega^2\right)\\
&+ (\alpha + \beta) (u.\nabla)\Theta + \beta R_{00}
\end{split}
\end{equation}
Here both the first and the second line contain terms quadratic in 1st order data. The
last line contains the terms
which are linear in second order data. There are three independent 2nd order scalars
($(u.\nabla)\Theta,~R,~R_{00}$), as given in the classification in section \ref{sec:class}.
Only two of these three scalars appear in \eqref{diveq1}. Since these two terms are
linear in the fluid variables,
they can have either sign, and to ensure positivity of the divergence their coefficients
(both $\alpha$ and $\beta$) must be set to zero.
This implies that at 1st order no correction consistent with the positivity requirement
can be added to the entropy current.
Then in the RHS of \eqref{diveq1} only the first line will give
a non-zero contribution.
To evaluate the first line we need the first order corrections to the constitutive
relation.
At first order the most general correction to the constitutive relation (stress tensor
in the Landau gauge)
will involve the single on-shell
independent 1st order scalar, which we have chosen to be $\Theta$, and the single on-shell
independent tensor
$\sigma_{\mu\nu}$:
$$\Pi_{\mu\nu}|_\text{up to 1st order} = -\eta\sigma_{\mu\nu} -\zeta \Theta P_{\mu\nu}$$
where $\eta$ and $\zeta$ are the shear and bulk viscosities respectively.
Therefore finally
\begin{equation}\label{diveq2}
\begin{split}
&\nabla_\mu J^\mu|_{\text{up to 2nd order}}
=\frac{1}{T}\left(\eta\sigma^2 +\zeta\Theta^2\right) \\
\end{split}
\end{equation}
Hence to have a positive definite divergence one requires that
$$\eta\geq0,~~~\zeta\geq0$$
The main point to note about equation \eqref{diveq2} is that it involves only
two of the four pieces of first order
on-shell independent data listed in section \ref{sec:class}. The squares of the
independent vector
${\mathfrak a}^\mu$ and the pseudo-vector $l^\mu$ do not appear in equation
\eqref{diveq2}.
Because of this fact any term in the divergence which is
of the form (${\mathfrak a}_\mu \times \text{$I_2$ or $I_3$ type vector}$) or
($l_\mu \times \text{$I_2$ or $I_3$ type pseudo-vector}$) can never be made
positive-definite.
\subsection{General constraints on the second and third order corrections}
In general $\tilde J^\mu$ can be written as
$$\tilde J^\mu = \left(\sum_i {\mathfrak S}_i\right)u^\mu + \sum_i{\mathfrak V}_i^\mu$$
where ${\mathfrak S}_i$ is an arbitrary combination of $i$th order on-shell independent
scalars and ${\mathfrak V}^\mu_i$ is a combination of $i$th order vectors.
In the previous subsection we have seen that to constrain the first order transport
coefficients $\eta$ and $\zeta$ we need to determine only the first order correction
to the entropy current (i.e. only ${\mathfrak S}_1$ and ${\mathfrak V}^\mu_1$, both of
which finally turn out to be zero). But to constrain the second order transport
coefficients we need to go up to the third order corrections to the entropy current. The
reason is the following.
Suppose the divergence of the most general entropy current has two terms of the form
$$\nabla_\mu J^\mu_s = A x^2 + B x y= Ax^2\left(1 + \frac{By}{Ax}\right) $$
where $x$ and $y$ are two on-shell independent fluid data and $A$ and $B$ are some
functions of temperature, which in general will depend on the coefficients appearing
in the entropy current or transport coefficients.
In this schematic expression of divergence since $x$ and $y$ are two independent fluid
data, locally the ratio $\frac{By}{Ax}$ can take any negative value, larger or smaller
than 1 in magnitude and the positivity constraint will depend on whether $y^2$ term is
present or not in the final expression of the divergence. In the absence of a $y^2$
piece, the coefficient $B$ has to be set to zero and the coefficient $A$ to some
non-negative number.
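Concretely, suppose a $y^2$ term with some coefficient $C$ were also present ($C$, like $A$ and $B$, is a schematic coefficient). Completing the square gives
$$A x^2 + B x y + C y^2
= A\left(x + \frac{B}{2A}\,y\right)^2 + \left(C - \frac{B^2}{4A}\right) y^2$$
so positivity requires $A\geq 0$ and $4AC\geq B^2$; in the absence of a $y^2$ piece ($C=0$) this forces $B=0$, as stated above.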
But this argument does not require $x$ and $y$ to be of the same order in the derivative
expansion. Even when $x$ is of first order in derivatives and $y$ is of second order,
the ratio $\frac{By}{Ax}$ can be of order 1 for some particular fluid configuration
where $x$ is accidentally small enough to be comparable to $y$ at a given point.
In such cases, to see whether a $y^2$ term is present or not, we need to compute the
divergence up to fourth order. This is why we have to compute the divergence up to
fourth order even if we want to constrain just the second order transport coefficients.
In fact the constraints on transport coefficients will involve situations where
$x$ and $y$ are necessarily of different orders. For example, $B$ will contain some
second order transport coefficients only when $x$ is of first order (as we will see
below, $x$ has to be equal to $\sigma_{\mu\nu}$ or $\Theta$) and $y$ is of second order in
derivatives. It will turn out that most of the equalities among the coefficients will
follow from this sort of argument.
Below we schematically list all the constraints we need to impose on the third and
fourth order pieces of the divergence in order to ensure its positivity.
\begin{itemize}
\item The coefficient of any term (appearing in the third or fourth order piece of
the divergence) which contains more than one factor of $\sigma_{\mu\nu}$ or $\Theta$,
or at least one factor of $\Theta\sigma_{\mu\nu}$, will not have any constraint from
positivity as long as $\eta$ and $\zeta$ are non-zero and of order one. This is
because whenever such third or fourth order terms are non-zero, the second order piece
of the divergence is also non-zero and positive-definite, and will always dominate these
terms within the derivative expansion. These terms can never make the divergence negative.
Therefore while calculating the third and fourth order divergence we shall ignore
all these terms.
\item One needs to do a sixth order analysis to constrain the coefficients of any term
(appearing in the fourth order divergence of the entropy current) which is of the form
($\sigma_{\mu\nu} \times \text{some third order tensor}$)
or ($\Theta \times \text{some third order scalar}$). Such
terms generically will have contributions from third order
transport coefficients. Since we are interested only in transport coefficients
up to second order, we shall ignore all such terms while
calculating the fourth order divergence.
\item The coefficients of all the terms which contain a single
$I_2$, $I_3$ or $I_4$ type scalar (at second, third and fourth order respectively)
have to be set to zero. This is because locally all these terms are linear in the fluid
variables and therefore can have either sign.
\item In the third order and fourth order piece of the divergence, the
coefficients of all the terms which are of the form
(${\mathfrak a}_\mu \times \text{$I_2$ or $I_3$ type vector}$) or
($l_\mu \times \text{ $I_3$ type pseudo-vector}$) have to be set to zero.
Since there is no on-shell independent $I_2$ type pseudo-vector, there
can be no term of the form ($l_\mu \times \text{ $I_2$ type pseudo-vector}$).
\item At this stage the terms appearing in the third order piece of the
divergence will be of the following form.
\begin{enumerate}
\item $\sigma_{\mu\nu}\times(\text{$I_2$ or $C_{1,1}$ type tensors})$
\item $\Theta\times(\text{$I_2$ or $C_{1,1}$ type scalars})$
\end{enumerate}
All these terms will involve the second order transport coefficients.
The relevant terms appearing at the fourth order (where all the terms
involving $\sigma_{\mu\nu}$ and $\Theta$ are ignored) will be of the
following form.
\begin{enumerate}
\item A quadratic form involving independent $I_2$ type data
\item A quartic form involving ${\mathfrak a}_\mu$ and $\omega_{\mu\nu}$
\item Terms linear in $I_2$ type data and quadratic in ${\mathfrak a}_\mu$
and/or $\omega_{\mu\nu}$
\end{enumerate}
Therefore when $\eta\neq0$ and $\zeta\neq0$ the relevant part of the
divergence calculated upto fourth order is schematically given by
\begin{equation}\label{schmdiv}
\begin{split}
\text{Divergence}= &\frac{\eta~\sigma^2 + \zeta~\Theta^2}{T}\\
&+ \sigma_{\mu\nu}\times(\text{$I_2$ or $C_{1,1}$ type tensors})
+\Theta\times(\text{$I_2$ or $C_{1,1}$ type scalars})\\
&+\text{A quadratic form involving independent $I_2$ type data} \\
&+\text{Terms linear in $I_2$ type data and quadratic in ${\mathfrak a}_\mu$
and/or $\omega_{\mu\nu}$}\\
& +\text{A quartic form involving ${\mathfrak a}_\mu$ and $\omega_{\mu\nu}$}
\end{split}
\end{equation}
where in the second line all the $C_{1,1}$ type tensors involving $\sigma_{\mu\nu}$
and all the $C_{1,1}$ type scalars involving $\Theta$ are ignored.
\item Now we can shift $\sigma_{\mu\nu}$ by a combination of $I_2$ or $C_{1,1}$ type
tensors such that the term linear in $\sigma_{\mu\nu}$ appearing in
the second line gets absorbed.
This shift will generate fourth order terms structurally similar to those appearing
in the third, fourth and fifth lines of the above equation. One can see that all
these newly generated terms together will necessarily be negative definite. A similar
shift has to be done to absorb the terms linear in $\Theta$ into the first line of
\eqref{schmdiv}.
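The absorption is just a completion of squares. As an illustration (assuming $\eta>0$,
and writing $X^{\mu\nu}$ purely as shorthand for the combination of $I_2$ and $C_{1,1}$
type tensors multiplying $\sigma_{\mu\nu}$ in the second line),
$$\frac{\eta}{T}\,\sigma_{\mu\nu}\sigma^{\mu\nu} + \sigma_{\mu\nu}X^{\mu\nu}
=\frac{\eta}{T}\left(\sigma_{\mu\nu}+\frac{T}{2\eta}X_{\mu\nu}\right)
\left(\sigma^{\mu\nu}+\frac{T}{2\eta}X^{\mu\nu}\right)
-\frac{T}{4\eta}\,X_{\mu\nu}X^{\mu\nu}$$
The shifted square is manifestly non-negative, while the left-over piece
$-\frac{T}{4\eta}X_{\mu\nu}X^{\mu\nu}$ is fourth order in derivatives and negative
definite; these are precisely the newly generated terms referred to above.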
\item One can do similar shifts in the $I_2$ type data to absorb the terms appearing
in the fourth line of equation \eqref{schmdiv} into the terms appearing in the third
and fifth lines, with the $I_2$ data replaced by the shifted ones.
At this stage the schematic expression of the divergence will take the following form.
\begin{equation}\label{schmdivp}
\begin{split}
\text{Divergence}= &\frac{\eta~(\text{shifted}~\sigma)^2 + \zeta~(\text{shifted}~
\Theta)^2}{T}\\
&+\text{A quadratic form involving shifted $I_2$ type data} \\
& +\text{A quartic form involving ${\mathfrak a}_\mu$ and $\omega_{\mu\nu}$}
\end{split}
\end{equation}
\item Finally, the positive definiteness of the divergence will imply the
positivity of the quadratic and the quartic forms appearing in the second and
the third lines of \eqref{schmdivp}.
\end{itemize}
Such conditions will generically give some inequalities among the coefficients.
However, suppose that by explicit computation one finds that, for some particular
negative definite term generated by the shift, there is no term present in the third
or fifth line of equation \eqref{schmdiv} to compensate it. Then the coefficient of
the corresponding linear term (the source which generates this particular
negative-definite term through the shift) in the second or fourth line has to be set
to zero.
This will give strict equalities among the coefficients.
It will turn out that all of the constraints on the second order transport
coefficients arise from this sort of argument.
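The origin of such strict equalities is the elementary observation that a cross term
without a compensating square cannot be sign definite. Schematically, with $a>0$ and
$X$, $Y$ standing for independent data,
$$a X^2 + c\, X Y = a\left(X+\frac{c}{2a}Y\right)^2 - \frac{c^2}{4a}Y^2,$$
so if the divergence contains no $Y^2$ term to compensate the negative remainder,
positivity for arbitrary $Y$ forces $c=0$.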
In the explicit calculation we will see that the quadratic form involving the $I_2$
type data does not contain any term proportional to $R_{00}^2$,
$R^2$, $F_{\mu\nu}F^{\mu\nu}$ or $R_{\mu\nu}R_{ab}P^{\mu a}P^{\nu b}$.
This will imply that the coefficients of all the terms linear in $R_{00}$,
$R$, $F_{\mu\nu}$ and $R_{\mu\nu}P^{\mu a}P^{\nu b}$ have
to be zero. It turns out that once we set these linear terms to zero,
the quartic form mentioned in
the last line of equation \eqref{schmdiv} also vanishes.
The vanishing of these terms at fourth order gives the final constraint
on the transport coefficients. In the explicit computation we will see that there
are eight
terms ($\Theta {\mathfrak a}^2$, $\Theta~ l^2$,
$\sigma_{\mu\nu}{\mathfrak a}^\mu{\mathfrak a}^\nu$, $\sigma_{\mu\nu}l^\mu l^\nu$,
$\sigma_{\mu\nu}R^{\mu\nu}$, $\sigma_{\mu\nu}F^{\mu\nu}$, $R\Theta$ and $R_{00}\Theta$)
in the third order divergence which are linear in the set of fluid and curvature
data mentioned above and also involve eight independent transport coefficients.
So setting the coefficient of these linear terms to zero, we can
express the eight transport coefficients in terms of
the coefficients appearing in the second order entropy current. It will turn out that
only three of the entropy current coefficients appear in these expressions.
Eliminating these three coefficients, we get the final five relations among the 15
transport coefficients as presented in \eqref{relationsintro}.
Once all these relations are imposed on the divergence, one is left with a quadratic
form involving only $I_2$ type data. To ensure that this quadratic form is
positive-definite the coefficients appearing in the second and third order entropy
current as well as the transport coefficients have to satisfy some inequalities.
But in this case, at least up to this order, the entropy current coefficients
cannot be eliminated from the relations. Therefore, unlike the first order
transport coefficients, the second order ones do not satisfy any inequalities
among themselves.
\subsection{Implementing the general rules at second order}
At second order we have to determine ${\mathfrak S}_2$ and ${\mathfrak V}_2^\mu$
such that the divergence calculated up to third order in the derivative expansion is
non-negative. Here we shall follow the general procedure described in the previous
subsection.
We shall express ${\mathfrak S}_2$ and ${\mathfrak V}_2^\mu$ in terms of the
on-shell independent second order scalars and vectors respectively.
${\mathfrak S}_2$ will have 7 coefficients: three multiplying the three
independent $I_2$ type scalars and the remaining four multiplying the four $C_{1,1}$
type scalars. ${\mathfrak V}_2^\mu$ will have 6 coefficients: three multiplying
the three $I_2$ type vectors and the rest multiplying the three $C_{1,1}$ type
vectors.
So, before imposing any constraint, the entropy current at second order contains a
total of 13 coefficients, each of which is an arbitrary function of temperature.
We shall write this most general 13 parameter entropy current in the following form.
\begin{equation}\label{duient1}
\begin{split}
\tilde J^\mu|_{\text{second order}} =& \nabla_\nu\left[A_1(u^\mu\nabla^\nu T - u^\nu \nabla^\mu T)\right] + \nabla_\nu \left( A_2 T \omega^{\mu\nu}\right)\\
& + A_3 \left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right) u_\nu
+\left[ A_4 (u.\nabla)\Theta + A_5 R + A_6 R_{00}\right] u^\mu\\ &+(B_1\omega^2 + B_2\Theta^2 + B_3 \sigma^2)u^\mu + B_4\left[(\nabla s)^2
u^\mu + 2 s \Theta \nabla^\mu s\right]\\
&+\left[\Theta \nabla^\mu B_5 - P^{ab}(\nabla_b u^\mu)( \nabla_a B_5 )\right]+ B_6 \Theta {\mathfrak a^\mu} + B_7 {\mathfrak a}_\nu \sigma^{\mu\nu}
\end{split}
\end{equation}
Here $s$ is the entropy density and all the coefficients $A_i$ and $B_i$
are arbitrary functions of temperature.
Now we shall argue that \eqref{duient1} is actually the most general 13 parameter
entropy
current.
By the equations of motion one can show that the only $I_2$ type vector appearing in
the first term is $P^{\mu a}\nabla_a \Theta$, the second term contains a linear
combination of all three independent $I_2$ type vectors, and in the third term
the only $I_2$ type vector that appears is $P^{\mu a} R_{ab}u^b$. Therefore the
first three terms together account for all three $I_2$ type vectors.
The terms multiplying $A_4$, $A_5$ and $A_6$ are the three $I_2$ type scalars.
By the equations of motion the $B_4$ term is equal to a linear combination of
$\Theta^2 u^\mu$, ${\mathfrak a}^2 u^\mu$ and $\Theta {\mathfrak a}^\mu$. Similarly
the $B_5$ term is a particular linear combination of $\Theta^2 u^\mu$,
${\mathfrak a}^2 u^\mu$, $\Theta {\mathfrak a}^\mu$,
${\mathfrak a}_\nu \sigma^{\mu\nu}$ and ${\mathfrak a}_\nu \omega^{\mu\nu}$.
Therefore all
the $C_{1,1}$ type scalars and vectors appear in \eqref{duient1} with distinct
coefficients.
Next we shall compute the divergence of this 13 parameter entropy current constructed in
\eqref{duient1}. We have to set the coefficients of all the $I_3$ type on-shell
independent terms to zero.
Since there are in total 3 independent $I_3$ type scalars, this can impose at most
three relations among the coefficients appearing in the second order entropy current.
Next we have to isolate all the $C_{1,2}$ type terms which are of the form
${\mathfrak a}_\mu$ times an $I_2$ type vector and set their coefficients to zero.
Since there are in total three second order $I_2$ type vectors, this condition can
also impose at most three constraints.
\begin{itemize}
\item The divergences of the first two terms in \eqref{duient1} vanish identically.
\item The divergence of the third term (the term with coefficient $A_3$) does not
produce any $I_3$ type scalar.
The divergence of this term is explicitly computed in \eqref{a3term}.
\item The three independent $I_3$ type scalars
($u^a u^b \nabla_a \nabla_b \Theta,~~u.\nabla R,~~u.\nabla R_{00}$) are produced
from the three terms multiplying coefficients $A_4$, $A_5$ and $A_6$ respectively.
Therefore to maintain
positivity $A_4$, $A_5$ and $A_6$ have to be set to zero.
\item The divergences of the terms multiplying $B_1$, $B_2$, $B_3$ and $B_4$ do not
produce any term of the form ${\mathfrak a}^\mu$ times an $I_2$ type vector.
The divergences of these terms are explicitly computed in \eqref{termb1},
\eqref{b2b3term} and \eqref{b4term} respectively.
\item The divergences of the terms with coefficients $B_6$ and $B_7$ produce the
two terms ${\mathfrak a}_\mu\nabla^\mu\Theta$ and
${\mathfrak a}_\nu\nabla_\mu \sigma^{\mu\nu}$
respectively, whose coefficients should be zero to ensure the positivity of the
divergence.
Since these are the only places where these terms are produced, $B_6$ and $B_7$ are
set to zero.
\item Both the terms multiplying $B_5$ and $A_3$ produce the third possible term of
the form ${\mathfrak a}^\mu$ times an $I_2$ type vector, namely
${\mathfrak a}_\mu R^{\mu\nu}u_\nu$
(see \eqref{a3term} and \eqref{b5term}). The net coefficient is
$\left(\frac{A_3}{T} + \frac{dA_3}{dT}-\frac{d B_5}{dT}\right)$.
Therefore positivity implies
$$\frac{d B_5}{dT}=\frac{A_3}{T} + \frac{dA_3}{dT}$$
\end{itemize}
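As an aside, note that the last condition is a simple first order equation in the
temperature. Since
$\frac{A_3}{T}+\frac{dA_3}{dT}=\frac{1}{T}\frac{d(T A_3)}{dT}$, it determines $B_5$
up to an integration constant,
$$B_5(T) = \int^T \frac{dT'}{T'}\,\frac{d\left(T' A_3(T')\right)}{dT'}
+ \text{constant},$$
though only the derivative $\frac{dB_5}{dT}$ enters the entropy current below.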
After imposing all these constraints the final form of the second order
entropy current is given as
\begin{equation}\label{duient2}
\begin{split}
&\tilde J^\mu|_{\text{second order}}\\
=& \nabla_\nu\left[A_1(u^\mu\nabla^\nu T - u^\nu \nabla^\mu T)\right] +
\nabla_\nu \left( A_2 T \omega^{\mu\nu}\right)\\
& + A_3 \left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right) u_\nu
+\left(\frac{A_3}{T} + \frac{dA_3}{dT}\right)\left[\Theta \nabla^\mu T -
P^{ab}(\nabla_b u^\mu)( \nabla_a T )\right]\\ &+(B_1\omega^2 +
B_2\Theta^2 + B_3 \sigma^2)u^\mu + B_4\left[(\nabla s)^2
u^\mu + 2 s \Theta \nabla^\mu s\right]\\
\end{split}
\end{equation}
\subsection{Implementing the general rules at third order}
\begin{itemize}
\item First we have to write ${\mathfrak S}_3$ and ${\mathfrak V}_3^\mu$
in terms of the on-shell independent data.
\item The coefficients of all the $I_4$ type data appearing
in the fourth order divergence have to be set to
zero. An $I_4$ type term in the fourth order divergence can occur only when
a derivative acts on the $I_3$ type terms of the third order entropy current.
Therefore this condition will constrain the coefficients of the $I_3$ type terms.
Now there are 3 independent $I_3$ type third order scalars and 4 independent $I_3$
type third order vectors, while there are in total 5 $I_4$ type fourth order scalars.
This means one can have at least $7-5=2$ distinct coefficients multiplying $I_3$ type
terms in a third order entropy current with positive definite divergence.
It turns out that after imposing this constraint exactly two coefficients are left,
and the terms multiplying them can be chosen in such a way that their divergences
vanish identically.
So these two terms will not contribute any further constraint.
\item The number of free coefficients that can multiply the $C_{1,2}$ type data in
the third order entropy current is quite large: there can be in total 9 free
coefficients in ${\mathfrak S}_3$ and 20 in ${\mathfrak V}_3^\mu$.
These coefficients will be constrained by the fact that any term of the form
${\mathfrak a}_\mu$ times an $I_3$ type vector, or $l_\mu$ times an $I_3$ type
pseudo-vector, has to be set to zero.
Since there are in total 4 $I_3$ type vectors and 2 $I_3$ type pseudo-vectors, this
can produce at most 6 constraints, reducing the number of free coefficients to 23.
\item Since there is no other general constraint to simplify the form of the entropy
current at this stage, we have to calculate the divergence.
But we will not attempt to calculate the full divergence. Instead we shall calculate
only those terms which can affect the constraints on the second order transport
coefficients.
For this purpose we can ignore all the terms in the divergence which are multiplied
by $\Theta$ or $\sigma_{\mu\nu}$.
Also, to simplify, we shall try to write the terms in the form $\nabla_\mu A^{\mu\nu}$,
where $A^{\mu\nu}$ is an anti-symmetric tensor, so that their divergences vanish
identically.
We could not do this for all the independent terms, but we apply this trick to as
many terms as possible.
\end{itemize}
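The anti-symmetry trick works because, for any anti-symmetric tensor $A^{\mu\nu}$,
the double divergence vanishes identically even on a curved background:
$$\nabla_\nu\nabla_\mu A^{\mu\nu} = \tfrac{1}{2}\,[\nabla_\nu,\nabla_\mu]A^{\mu\nu},$$
and the commutator produces only Ricci contractions of the form
$R_{\sigma\nu}A^{\sigma\nu}$, which vanish because the Ricci tensor is symmetric
while $A^{\mu\nu}$ is anti-symmetric.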
Now we shall explicitly construct the required part of the third order entropy current
piece by piece.
The part multiplying the $I_3$ type terms can be written as
\begin{equation}\label{tinent1}
\begin{split}
&\tilde J^\mu|_{\text{3rd order/$I_3 $type}}\\
=& \nabla_\nu \left[P_1(u^\nu (u.\nabla) \nabla^\mu T -
u^\mu (u.\nabla)\nabla^\nu T)\right] +
\nabla_\nu \left[P_2(u^\nu R^\mu _\theta u^\theta -u^\mu R^\nu_\theta u^\theta )\right]\\
&+ \left[P_3(u.\nabla)^3 T + P_4(u.\nabla) R +P_5 (u.\nabla)R_{00}\right] u^\mu\\
&+P^{\mu a}\left[ P_6\nabla_a R_{00} + P_7 \nabla_a R\right]
\end{split}
\end{equation}
In the first and the third terms the two independent $I_3$ type fluid data are chosen
to be $(u.\nabla)^2\nabla^\mu T$ and $(u.\nabla)^3 T$.
The independent $I_3$ type curvature data are chosen from the list given in section
\ref{sec:class}.
The second term (with the coefficient $P_2$) contains the independent
vector $P^{\mu a}u^b(u.\nabla) R_{ab}$.
Here all the terms in the third and the fourth lines produce independent $I_4$
type scalars at fourth order, and therefore they are all set to zero. The divergences
of the first two terms vanish identically.
So finally this part of the entropy current has only two terms, both of which have
vanishing divergence.
\begin{equation}\label{tinent2}
\begin{split}
&\tilde J^\mu|_{\text{3rd order/$I_3$ type}}\\
=& \nabla_\nu \left[P_1(u^\nu (u.\nabla) \nabla^\mu T
- u^\mu (u.\nabla)\nabla^\nu T)\right]
+ \nabla_\nu \left[P_2(u^\nu R^\mu _\theta u^\theta
-u^\mu R^\nu_\theta u^\theta )\right]\\
\end{split}
\end{equation}
The part multiplying the $C_{1,2}$ type terms has in total
29 coefficients to begin with. We shall try to write them in a way that makes the
computation simpler.
In tables (\ref{table:scalcombination}) and (\ref{table:veccombination})
we have listed each of the independent $C_{1,2}$ type fluid data and the independent
combination through which it enters the entropy current. In table
(\ref{table:C111type}) we have listed the relevant $C_{1,1,1}$ type scalars and
vectors, together with their coefficients in the entropy current. In all these cases,
to begin with, the coefficients are unspecified functions of temperature.
\begin{table}[ht]
\caption{$C_{1,2}$ type Scalars (Fluid data)}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars as listed before&Combination that enters entropy current\\[0.5ex]
\hline
$\Theta (u.\nabla)\Theta$ &$Q_1\left[\Theta (u.\nabla)\Theta\right] u^\mu$
\\[0.5ex]
\hline
$\sigma_{\mu\nu}(u.\nabla)\sigma^{\mu\nu}$ &$Q_2 \left[\sigma_{ab}(u.\nabla)\sigma^{ab}\right]u^\mu$\\[0.5ex]
\hline
${\mathfrak a}_\mu\nabla_a\sigma^{a\mu}$ &$\nabla_\mu\left[Q_3\left(u^\mu\sigma^{a\nu}-u^\nu\sigma^{a\mu}\right){\mathfrak a}_a\right]$
\\[0.5ex]
\hline
${\mathfrak a}_\mu\nabla_a\omega^{a\mu}$&$\nabla_\mu\left[Q_4\left(u^\mu\omega^{a\nu}-u^\nu\omega^{a\mu}\right){\mathfrak a}_a\right]$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:scalcombination}
\end{table}
\begin{table}[ht]
\caption{$C_{1,2}$ type Vectors (Fluid data)}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Vectors as listed before&Combination that enters entropy current\\[0.5ex]
\hline
$\Theta \nabla^\mu\Theta $&$\nabla_\nu \left[Q_5~\Theta(u^\mu g^{a\nu} - u^\nu g^{a\mu}){\mathfrak a}_a\right]$
\\[0.5ex]
\hline
$\Theta \nabla_\nu \omega^{\mu\nu}$ &$\nabla_\nu\left[Q_6~\Theta\omega^{\mu\nu}\right]$\\[0.5ex]
\hline
$\omega^{\mu\nu}\nabla_a\sigma^a_\nu$ &$\nabla_\nu
\left[Q_7\left(\omega^{\mu\theta}\sigma_\theta^\nu
-\omega^{\nu\theta}\sigma_\theta^\mu\right)\right]$
\\[0.5ex]
\hline
${\mathfrak a}_a(\nabla^\mu\nabla^a T)$&$Q_{8} ~{\mathfrak a}_a(\nabla^\mu\nabla^a T)$\\[0.5ex]
\hline
$\omega ^{ab}\nabla^\mu \omega_{ab}$&$Q_{9}~\omega ^{ab}\nabla^\mu \omega_{ab}$\\[0.5ex]
\hline
${\mathfrak a}^\mu (u.\nabla)\Theta$&$Q_{10}\left[{\mathfrak a}^\mu (u.\nabla)\Theta -u^\mu ({\mathfrak a}.\nabla)\Theta \right]$\\[0.5ex]
\hline
$\omega_{\mu\nu} \nabla^\nu \Theta$&$\omega^{\mu\nu}\nabla_\nu\left(Q_{11}\Theta \right)$\\[0.5ex]
\hline
$\sigma^\mu_\nu\nabla^\nu\Theta$&$Q_{12}~\sigma^\mu_\nu\nabla^\nu\Theta$\\[0.5ex]
\hline
$\sigma^\mu_\nu\nabla_\theta\sigma^{\theta\nu}$&$ Q_{13}~\sigma^\mu_\nu\nabla_\theta\sigma^{\theta\nu}$\\[0.5ex]
\hline
$ P^\mu_c\sigma_{ab}\nabla^{\langle c}\sigma^{ab\rangle}$&$Q_{14}~P^\mu_c\sigma_{ab}\nabla^{\langle c}\sigma^{ab\rangle}$\\[0.5ex]
\hline
$P^{\mu c}\sigma^{ab}\left(\nabla _a \omega_{b c}-\frac{P_{ab}}{3}\nabla^k\omega_{kc}\right)$&$Q_{15}~P^{\mu c}\sigma^{ab}\left(\nabla _a \omega_{b c}-\frac{P_{ab}}{3}\nabla^k\omega_{kc}\right)$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:veccombination}
\end{table}
We shall start our analysis by computing the divergence of the
terms appearing in table (\ref{table:scalcombination}) and (\ref{table:veccombination}).
\begin{itemize}
\item The divergences of the terms with coefficients $Q_i$, $i=3,\cdots,7$,
vanish identically.
\item It turns out that the term with coefficient $Q_8$ is the only term
which produces
${\mathfrak a}_\mu$ times a third order $I_3$ type vector. Therefore $Q_8$ has
to be set to zero.
\item Similarly, if we analyse only the third order entropy current, the term
with coefficient $Q_9$ is the only term that produces $l_\mu$ times a third order
$I_3$ type pseudo-vector in the fourth order divergence. However, a similar term is
also produced when the divergence of the $B_1$ term in the second order entropy
current is computed up to fourth order.
\begin{equation}\label{divB1}
\begin{split}
\nabla_\mu\left[B_1 \omega^2 u^\mu \right] &= B_1 \omega^2\Theta
+ [(u.\nabla)B_1]\omega^2 + 2 B_1\omega^{ab}(u.\nabla)\omega_{ba}\\
\end{split}
\end{equation}
Since there is no second order on-shell pseudo-vector,
$\omega^{ab}(u.\nabla)\omega_{ba}$
must contain a third order pseudo-vector times $l_\mu$.
Then in the final fourth order divergence
the total coefficient of such a term (i.e.\ the term proportional to a
third order pseudo-vector times $l_\mu$) will
be a linear combination of $B_1$ and $Q_9$, which should be set to zero.
But to simplify the calculation we shall instead introduce a third order
shift in the `$B_1$ term' of the
second order entropy current and consider the following term:
$\left[B_1 \omega^2 u^\mu -\frac{2B_1}{Ts} \omega^{\mu b}\nabla_a\Pi^a_b\right]$.
The divergence of the shifted `$B_1$ term' no longer contains terms
proportional to a third order pseudo-vector times $l_\mu$.
(The relevant part of the divergence of the shifted
`$B_1$' term is computed in \eqref{termb1}.)
Once this shift is done, $Q_9$ also has to be set to zero, since
now this is the only term which
produces $l_\mu$ times a third order $I_3$ type pseudo-vector.
\end{itemize}
For the remaining 8 terms we have to compute the divergence explicitly.
However, we are interested only in those terms in the fourth order divergence which
do not have any explicit factor of
$\Theta$ or $\sigma_{\mu\nu}$. This simplifies the calculation.
For example, in the divergence of the term with coefficient $Q_1$, the only
contribution which will
be relevant for our purpose is $\bigg(Q_1~[(u.\nabla)\Theta]^2\bigg)$.
\begin{itemize}
\item The relevant parts of the divergences of the last two terms, with coefficients
$Q_{14}$ and $Q_{15}$, are the following.
\begin{equation}\label{q14q15}
\begin{split}
\nabla_\mu\left[ Q_{14}~P^\mu_ c\sigma_{ab}\nabla^{\langle c}\sigma^{ab\rangle}\right]&\Rightarrow~~ Q_{14}~P^\mu_\nu\left[\nabla_{\langle\mu}\sigma_{ab\rangle}\right]\left
[\nabla^{\langle\nu}\sigma^{ab\rangle}\right]\\
\\
\nabla_\mu\left[~Q_{15}~P^{\mu c}\sigma^{ab}\left(\nabla _a \omega_{b c}-\frac{P_{ab}}{3}\nabla^k\omega_{kc}\right)\right]&\Rightarrow~~ Q_{15}~P^{\nu c}\left[\nabla_\nu \sigma^a_b\right]\left[\nabla _a \omega_{b c}-\frac{P_{ab}}{3}\nabla^k\omega_{kc}\right]
\end{split}
\end{equation}
Here in the first line we get a term proportional to $\text{(spin-3)}^2$ and in
the second line we get a term proportional to $\text{(pseudo-tensor)}^2$. It will
turn out that such terms cannot occur in any other place. Positivity of the divergence
will be satisfied if both $Q_{14}$ and $Q_{15}$ are positive. Therefore these terms
will not produce any constraint on the second order transport coefficients.
\item The other four terms whose relevant parts are easy to calculate are the
following.
\begin{equation}\label{q12111213}
\begin{split}
\nabla_\mu\left[Q_1~u^\mu\Theta (u.\nabla)\Theta \right]&\Rightarrow~~ Q_1[(u.\nabla)\Theta]^2\\
\nabla_\mu\left[Q_2 ~u^\mu \sigma_{ab}(u.\nabla)\sigma^{ab}\right]&\Rightarrow~~ Q_2[(u.\nabla)\sigma_{ab}][(u.\nabla)\sigma^{ab}]\\
\\
\nabla_\mu\left[Q_{12}~\sigma^\mu_\nu\nabla^\nu\Theta\right]
&\Rightarrow~~ Q_{12}[\nabla_\mu\sigma^\mu_\nu][\nabla^\nu\Theta]\\
\nabla_\mu\left[Q_{13}~\sigma^\mu_\nu\nabla_\theta\sigma^{\theta\nu}\right]&\Rightarrow~~ Q_{13}\left[\nabla_\mu\sigma^\mu_\nu\right]
\left[\nabla_\theta\sigma^{\theta\nu}\right]\\
\end{split}
\end{equation}
\item The relevant parts of the divergences of the terms with coefficients
$Q_{10}$ and $Q_{11}$
are more complicated.
The divergence of the `$Q_{11}$-term' is given by the following expression.
\begin{equation}\label{q11}
\begin{split}
&\nabla_\mu\left[\omega^{\mu\nu}\nabla_\nu\left(Q_{11}\Theta \right)\right]\\
= &~Q_{11}\left[\nabla_\mu\omega^{\mu\nu}\right]
\left[\nabla_\nu\Theta\right] + \left[\nabla_\mu\omega^{\mu\nu}\right]
\left[\nabla_\nu Q_{11}\right]\Theta\\
\Rightarrow&-Q_{11}\left[\nabla^\mu\Theta\right]
\bigg[- P_{\nu\mu}\nabla_a\sigma^{a \nu} + \frac{2}{3}P_{\nu\mu}\nabla^\nu\Theta
+ P_{\mu\nu} u_a R^{a \nu} + {\mathfrak a}_b \omega^{b\mu}\bigg]\\
&-Q_{11}\omega^2(u.\nabla)\Theta\\
\end{split}
\end{equation}
where in the last line we have used the identity \eqref{id4}
and ignored the terms proportional to $\Theta$ and $\sigma_{\mu\nu}$.
The divergence of the `$Q_{10}$-term' is given by
\begin{equation}\label{q10}
\begin{split}
& \nabla_\mu\bigg(Q_{10}\left[{\mathfrak a}^\mu (u.\nabla)\Theta -u^\mu ({\mathfrak a}.\nabla)\Theta \right]\bigg)\\
=&-T\left(\frac{dQ_{10}}{dT} \right){\mathfrak a}^2 (u.\nabla)\Theta + s\left(\frac{d Q_{10}}{ds}\right) \Theta ({\mathfrak a}.\nabla)\Theta\\
&+Q_{10}\left[(\nabla.{\mathfrak a})(u.\nabla)\Theta + {\mathfrak a}^\mu(\nabla_\mu u^a)(\nabla_a \Theta) -(\nabla_b\Theta) (u.\nabla) {\mathfrak a}^b -\Theta({\mathfrak a}.\nabla)\Theta\right]\\
\Rightarrow&~-\left(T\frac{dQ_{10}}{dT} + Q_{10}\right){\mathfrak a}^2 (u.\nabla)\Theta \\
&+Q_{10}\bigg[\omega^2(u.\nabla)\Theta +[(u.\nabla)\Theta]^2 + R_{00} (u.\nabla)\Theta
-\left(\frac{s}{T}\frac{dT}{ds}\right)P_{ab}(\nabla^a\Theta)(\nabla^b\Theta)\bigg]
\end{split}
\end{equation}
To express the divergence in the chosen basis of independent data we have used the
identities \eqref{id1}, \eqref{id2} and \eqref{id3}.
Here also in the final expression we have ignored the terms proportional to $\Theta$
and $\sigma_{\mu\nu}$.
\end{itemize}
\begin{table}[ht]
\caption{$C_{1,2}$ type Scalars (Curvature data)}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars as listed before&Combination that enters entropy current\\[0.5ex]
\hline
$F_{ab}\sigma^{ab}$ &$P_1 \left(F_{ab}\sigma^{ab}\right)u^\mu$
\\[0.5ex]
\hline
$R_{ab}\sigma^{ab}$ &$P_2 \left(R_{ab}\sigma^{ab}\right)u^\mu$\\[0.5ex]
\hline
$\Theta R$ &$P_3~u^\mu \Theta R$
\\[0.5ex]
\hline
$\Theta R_{00}$&$P_4~u^\mu \Theta R_{00}$\\[0.5ex]
\hline
$u_a{\mathfrak a}_bR^{ab}$&$P_5~u^\mu \left(u_a{\mathfrak a}_bR^{ab}\right)$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:scalcombcurv}
\end{table}
\begin{table}[ht]
\caption{$C_{1,2}$ type Vectors (Curvature data)}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Vectors as listed before&Combination that enters entropy current\\[0.5ex]
\hline
${\mathfrak a}^\mu R $&$P_6 ~{\mathfrak a}^\mu R$
\\[0.5ex]
\hline
${\mathfrak a}^\mu R_{00}$ &$P_7~{\mathfrak a}^\mu R_{00}$\\[0.5ex]
\hline
$u^a R_{ab}\omega^{b\mu}$ &$P_8~u^a R_{ab}\omega^{b\mu}$
\\[0.5ex]
\hline
${\mathfrak a}_\nu R^{\mu\nu}$&$P_{9} \left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right){\mathfrak a}_\nu$\\[0.5ex]
\hline
${\mathfrak a}_\nu F^{\mu\nu}$&$P_{10}\left(F^{\mu\nu}{\mathfrak a}_\nu -R_{00}{\mathfrak a}^\mu +u^\mu u_a {\mathfrak a}_b R^{ab}\right)$\\[0.5ex]
\hline
$u_a R^{a\mu bc}\omega_{bc}$&$P_{11}\left(u_a R^{a\mu\alpha\beta}\omega_{\alpha\beta} + 2\omega^{\mu\alpha} u^a R_{a\alpha}\right)$\\[0.5ex]
\hline
$u_a\Theta R^{a\mu}$&$P_{12} ~u_a\Theta R^{a\mu}$\\[0.5ex]
\hline
$u^a R_{ab}\sigma^{b\mu}$&$P_{13}~u^a R_{ab}\sigma^{b\mu}$\\[0.5ex]
\hline
$u_a R^{ab \mu c}\sigma_{bc}$&$P_{14}~u_a R^{ab \mu c}\sigma_{bc}$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:veccombcurv}
\end{table}
Next we shall compute the divergence of the curvature type data appearing in
table (\ref{table:scalcombcurv}) and (\ref{table:veccombcurv}).
\begin{itemize}
\item It turns out that the divergences of the terms multiplying $P_5$, $P_6$
and $P_7$ are the only ones which produce terms of the form ${\mathfrak a}_\mu$ times
an independent third order $I_3$ type curvature vector. Therefore we have to set
$P_5$, $P_6$ and $P_7$ to zero.
\item Similarly the divergence of the term multiplying $P_8$ is the only place where
$l_\mu$ times a
third order $I_3$ type curvature pseudo-vector is produced.
Therefore $P_8$ should also be set to zero.
\end{itemize}
The divergences of the remaining 10 terms have to be computed. Here also we shall
ignore any term that is multiplied by an explicit factor of $\Theta$ or
$\sigma_{\mu\nu}$.
\begin{itemize}
\item First we shall determine the relevant parts of the divergences of the terms
with coefficients $P_i$, $i= 1,2,3,4~\text{and}~12,13,14$.
These are easy to calculate.
\begin{equation}\label{p1234}
\begin{split}
\nabla_\mu\left[P_1 \left(F_{ab}\sigma^{ab}\right)u^\mu\right]&\Rightarrow~~P_1~ F^{ab}(u.\nabla)\sigma_{ab}\\
\nabla_\mu\left[P_2 \left(R_{ab}\sigma^{ab}\right)u^\mu\right]&\Rightarrow~~P_2~ R^{ab}(u.\nabla)\sigma_{ab}\\
\nabla_\mu\left[ P_3~u^\mu \Theta R\right]&\Rightarrow~~P_3~ R (u.\nabla)\Theta\\
\nabla_\mu\left[ P_4~u^\mu \Theta R_{00}\right]&\Rightarrow~~P_4 ~R_{00} (u.\nabla)\Theta\\
\\
\nabla_\mu\left[ P_{12} ~P^{\mu}_{ b}u_a\Theta R^{ab}\right]&\Rightarrow~~P_{12}~ u_aR^{ab}P^{\mu}_{ b}\nabla_\mu\Theta\\
\nabla_\mu\left[P_{13}~u^a R_{ab}\sigma^{b\mu}\right]&\Rightarrow~~P_{13}~u^aR_{ab}P_\nu^b
\nabla_\mu\sigma^{\mu \nu}\\
\nabla_\mu\left[P_{14}~u_a P^\mu_\nu R^{ab \nu c}\sigma_{bc}\right]&\Rightarrow~~P_{14}~u_a P^\mu_\nu R^{ab\nu c}\nabla_\mu\sigma_{bc}
\end{split}
\end{equation}
\end{itemize}
Now we shall compute the divergences of the more difficult terms, those multiplying
the coefficients $P_9$, $P_{10}$ and $P_{11}$.
We first analyse the situation where, in a given basis, all the fluid data are locally
zero up to the required order and only the curvature data are turned on.
For such configurations it will turn out that only these three terms can produce a
non-zero divergence. They are given by the following expressions\footnote{In this computation,
apart from the explicit curvature there is one more source of curvature terms.
These arise because we want to write the final answer for the fluid data in a given basis, as
chosen in section \ref{sec:class}. For example, while computing the left hand side of equation
\eqref{p9curv}, we shall get a term like $R(\nabla.{\mathfrak a})$. However, our basis of
independent second order fluid data contains a single scalar $(u.\nabla)\Theta$. Therefore
we have to express $(\nabla.{\mathfrak a})$ in terms of
$(u.\nabla)\Theta$ before setting the fluid data to zero. In this process we shall generate a
curvature term $R_{00}$, as calculated in equation \eqref{id2}. Similar techniques have been
used to compute the divergences of the other two terms multiplying $P_{10}$ and $P_{11}$.}.
\begin{equation}\label{p9curv}
\begin{split}
&\nabla_\mu\left[P_9~\left(R^{\mu b} - \frac{1}{2}g^{\mu b} R\right){\mathfrak a}_b\right]
=P_9\left[-\frac{R.R_{00}}{2} + R^{ab}F_{ab} \right]
\end{split}
\end{equation}
\begin{equation}\label{p10curv}
\begin{split}
&\nabla_\mu\left[P_{10}\left(F^{\mu\nu}{\mathfrak a}_\nu -R_{00}{\mathfrak a}^\mu + u^\mu u_a {\mathfrak a}_b R^{ab}\right)\right]
=P_{10}\left[F^{ab}F_{ab} - R_{00}^2\right]
\end{split}
\end{equation}
\begin{equation}\label{p11curv}
\begin{split}
&\nabla_\mu\left[P_{11}\left(u_a R^{a\mu\alpha\beta}\omega_{\alpha\beta} + 2\omega^{\mu\alpha} u^a R_{a\alpha}\right)\right]\\
=~&P_{11}\left[2u^a u^b R_{ac}P^{cd}R_{db}
+A^{\mu\nu\lambda}\left(\frac{1}{2} A_{\mu\nu\lambda}+ A_{\lambda\nu\mu}\right)\right]
\end{split}
\end{equation}
where $A^{\mu\nu\lambda} =u_\rho R^{\rho a b c}P^\mu_a P^\nu _b P^\lambda_c$.
Now from these three equations \eqref{p9curv}, \eqref{p10curv} and \eqref{p11curv}
we can conclude the following.
\begin{itemize}
\item The last term on the RHS of \eqref{p11curv} contains only a
$(\text{pseudo-tensor})^2$ and a $(\text{vector})^2$, and cannot produce any term
proportional to $R_{00}^2$ or $F_{ab}F^{ab}$.
Since the combination $F^{ab}F_{ab} - R_{00}^2$ in \eqref{p10curv} can take either
sign, positivity of the divergence for all values of $F^2$ and $R_{00}^2$ forces us
to set $P_{10}$ to zero.
\item Once $P_{10}$ is set to zero, there is no term in the final expression of the
divergence that contains $\left(R_{\langle\mu\nu\rangle}\right)^2$,
$\left(F_{\langle\mu\nu\rangle}\right)^2$, $R^2$ or
$R_{00}^2$.
Therefore in the full divergence the coefficients of all the terms linear in these
four independent data must be zero.
To satisfy this condition we have to set the following coefficients to zero.
\begin{enumerate}
\item $P_1 = 0$ as it is the total coefficient of
the term $F^{ab}(u.\nabla)\sigma_{ab}$.
\item $P_2 = 0$ as it is the total coefficient of
the term $R^{ab}(u.\nabla)\sigma_{ab}$.
\item $P_3 = 0$ as it is the total coefficient of
the term $R(u.\nabla)\Theta$.
\item $Q_{10} + P_4 = 0$ as it is the total coefficient
of the term $R_{00}(u.\nabla)\Theta$.
\item $P_9 = 0$ as it is the total coefficient of the term $R.R_{00}$.
\end{enumerate}
{\bf {$C_{1,1,1}$ type data}}\\
In the fourth order divergence we are not interested in any term that is multiplied
by $\Theta$ or $\sigma_{\mu\nu}$. Therefore in the third order entropy current we do
not need to consider the $C_{1,1,1}$ type terms which contain more than one factor
of $\Theta$, $\sigma_{\mu\nu}$ or both.
Here we list only those terms which we shall require for our analysis.
\begin{table}[ht]
\caption{$C_{1,1,1}$ type data (Only the relevant ones)}
\vspace{0.5cm}
\centering
\begin{tabular}{|c| c|}
\hline\hline
\hline
Scalars&Vectors\\[0.5ex]
\hline
$K_1~u^\mu\omega^2\Theta $ &$K_5~\omega^{\mu a}\sigma_{ab}{\mathfrak a}^b$
\\[0.5ex]
\hline
$K_2~u^\mu{\mathfrak a}^2\Theta$ &$K_6~\sigma^{\mu a}\omega_{ab}{\mathfrak a}^b$\\[0.5ex]
\hline
$K_3~u^\mu\left({\mathfrak a}_b{\mathfrak a}_c\sigma^{bc}\right)$ &$K_7~\omega^{\mu a}{\mathfrak a}_a\Theta$
\\[0.5ex]
\hline
$K_4~u^\mu\left(\omega_{ab}\sigma^b_c\omega^{ca}\right)$&$K_8~\omega^2{\mathfrak a}^\mu$\\[0.5ex]
\hline
&$K_9~{\mathfrak a}^2{\mathfrak a}^\mu$\\[0.5ex]
\hline
&$K_{10} ~\omega^{\mu a}\omega_{ab}{\mathfrak a}^b$\\[0.5ex]
\hline
\hline
\end{tabular}
\label{table:C111type}
\end{table}
Now we shall compute the relevant part of the divergence of each of the relevant
terms. It will turn out that, by analysing the explicit expression of the divergence,
we can set some further non-zero coefficients to zero.
\\
%
\begin{itemize}
\item Here also the relevant parts are easy to calculate for the terms with
the coefficients $K_i$, $i= 1,2,3,4,5,6~\text{and}~7$.
\begin{equation}\label{keasy}
\begin{split}
\nabla_\mu \left[K_1~u^\mu\omega^2\Theta\right]&\Rightarrow ~~K_1~\omega^2(u.\nabla)\Theta\\
\nabla_\mu \left[K_2~u^\mu{\mathfrak a}^2\Theta\right]&\Rightarrow ~~K_2~{\mathfrak a}^2(u.\nabla)\Theta\\
\nabla_\mu\left[K_3~u^\mu\left({\mathfrak a}_b{\mathfrak a}_c\sigma^{bc}\right)\right]&\Rightarrow~~K_3 ~{\mathfrak a}_b{\mathfrak a}_c(u.\nabla)\sigma^{bc}\\
\nabla_\mu\left[K_4~u^\mu\left(\omega_{ab}\sigma^b_c\omega^{ca}\right)\right]
&\Rightarrow~~K_4~\omega_{ab}\left[(u.\nabla)\sigma^b_c\right]\omega^{ca}\\
\\
\nabla_\mu\left[K_5~\omega^{\mu a}\sigma_{ab}{\mathfrak a}^b\right]&\Rightarrow~~K_5~\omega^{\mu a}\left[\nabla_\mu\sigma_{ab}\right]{\mathfrak a}^b\\
\nabla_\mu\left[K_6~\sigma^{\mu a}\omega_{ab}{\mathfrak a}^b\right]&\Rightarrow~~K_6~\left[\nabla_\mu\sigma^{\mu a}\right]\omega_{ab}{\mathfrak a}^b\\
\nabla_\mu\left[K_7~\omega^{\mu b}{\mathfrak a}_b\Theta\right]&\Rightarrow~~K_7~\left[\omega^{\mu b}{\mathfrak a}_b\nabla_\mu\Theta\right]\\
\end{split}
\end{equation}
\item The divergence of the terms with coefficients $K_8$, $K_9$ and $K_{10}$ are
complicated.
These are given by the following expressions.
{\it `$K_8$-term'}
\begin{equation}\label{k8}
\begin{split}
& \nabla_\mu\left[K_8 ~\omega^2{\mathfrak a}^\mu\right]\\
=&~\left[({\mathfrak a}.\nabla)K_8\right]\omega^2 +2 K_8~\omega^{\mu\nu}({\mathfrak a}.\nabla)\omega_{\nu\mu} +K_8 ~\omega^2 (\nabla.{\mathfrak a})\\
\Rightarrow &~-T\left(\frac{dK_8}{dT}\right){\mathfrak a}^2\omega^2 +2 K_8~\omega^{\mu\nu}({\mathfrak a}.\nabla)\omega_{\nu\mu}\\
&~+ K_8~\omega^2\left[\omega^2 + (u.\nabla)\Theta + R_{00}\right]
\end{split}
\end{equation}
In the last step we have kept only the relevant terms and used \eqref{id1} and
\eqref{id2}
for simplification.
{\it $ K_9 $-term}
\begin{equation}\label{k9}
\begin{split}
&\nabla_\mu\left[K_9~{\mathfrak a}^2{\mathfrak a}^\mu\right]\\
=&~{\mathfrak a}^2({\mathfrak a}.\nabla)K_9 + K_9{\mathfrak a}^2(\nabla.{\mathfrak a}) +2 K_9 ~{\mathfrak a}^\mu{\mathfrak a}^\nu\nabla_\mu{\mathfrak a}_\nu\\
\Rightarrow &~-\left(T\frac{dK_9}{dT}+K_9\right){\mathfrak a}^4
+ K_9~{\mathfrak a}^2\left[\omega^2 + \frac{5}{3}(u.\nabla)\Theta + R_{00}\right]\\
~ &~+2K_9~{\mathfrak a}^\mu{\mathfrak a}^\nu \left[(u.\nabla)\sigma_{\mu\nu}
+ F_{\mu\nu}+\omega_{\mu a}{\omega^a}_\nu\right]
\end{split}
\end{equation}
In the last step we have used equations \eqref{id1}, \eqref{id2} and \eqref{id6}.
{\it $ K_{10} $-term}
\begin{equation}\label{k10}
\begin{split}
&\nabla_\mu\left[K_{10}~\omega^{\mu a}\omega_{a b}{\mathfrak a}^b\right]\\
= &~ K_{10}~\bigg[(\nabla_\mu\omega^{\mu a})\omega_{ab}{\mathfrak a}^b + \omega^{\mu a}{\mathfrak a}^b(\nabla_\mu\omega_{ab})
+ \omega^{\mu a}\omega_{ab}(\nabla_\mu {\mathfrak a}^b)\bigg]\\
&~+(\nabla_\mu K_{10})\omega^{\mu a}\omega_{a b}{\mathfrak a}^b \\
\Rightarrow&~-\left(T\frac{dK_{10}}{dT} + K_{10}\right)
\left({\mathfrak a}_\mu\omega^{\mu a}\omega_{a b}{\mathfrak a}^b\right)
+ K_{10}~\omega^{\mu a}{\mathfrak a}^b(\nabla_\mu\omega_{ab}) \\
&~+K_{10}~{\omega^\mu}_a\omega^{a \nu}\left[(u.\nabla)\sigma_{\mu\nu}
+ F_{\mu\nu}+ \omega_{\mu b}{\omega^b}_\nu\right]\\
&~-K_{10}~\omega_{\mu b}{\mathfrak a}^b\bigg[- \nabla_a\sigma^{a \mu} + \frac{2}{3}\nabla^\mu\Theta
+ u_a R^{a \mu} + {\mathfrak a}_b \omega^{b\mu}\bigg]\\
\end{split}
\end{equation}
In the last step we have used relevant part of equations \eqref{id1}, \eqref{id5}
and \eqref{id7}.
\end{itemize}
Now as explained before, all the terms that are linear in $R_{00}$, $R$, $R_{ij}$ and
$F_{ij}$ should be set to zero. This will imply the following for the $C_{1,1,1}$
part of the entropy current.
\begin{enumerate}
\item $K_8=0$ as it is the total coefficient of the term $\omega^2 R_{00}$.
\item $K_9=0$ as it is the total coefficient of the term ${\mathfrak a}. F .{\mathfrak a}$.
\item $K_{10}=0$ as it is the total coefficient of the term $Tr[\omega. F .\omega]$.
\end{enumerate}
\item Once $K_8$, $K_9$ and $K_{10}$ are zero there are no terms in the fourth order
divergence which
are proportional to $\left[\omega^2\right]^2$,
$\left[{\mathfrak a}^2\right]^2$,
${\mathfrak a}^2\omega^2$ or $\left[{\mathfrak a}.\omega\right]^2$.
This will imply that in the divergence the net coefficients of all the terms,
linear in ${\mathfrak a}^2$, $\omega^2$, $\omega_{\mu a}\omega^{a\nu}$
or ${\mathfrak a}_\mu\omega^{\mu\nu}$ should be zero.
To satisfy this condition we have to set all the $K_i$ from $i=1,\cdots,5$ to zero.
This will also set $Q_{11}$ to zero.
$K_6$ and $K_7$ get related to $B_1$ and $B_5$ in the following way
(see \eqref{rele4} and \eqref{termb1}).
$$\frac{K_6}{\eta} = \frac{K_7}{\zeta} =\frac{1}{s}\left(\frac{dB_5}{dT}
+ \frac{2B_1}{T} + 2 \frac{dB_1}{dT}\right) $$
Absence of these four fourth order terms mentioned above will also impose some
constraints on the transport coefficients of second
order stress-tensor by requiring that the coefficients of the four terms
$\Theta {\mathfrak a}^2$, $ \Theta \omega^2$, $\omega_{ab}\sigma^b_c\omega^{ca}$
and ${\mathfrak a}_a \sigma^{ab}{\mathfrak a}_b$ in the third order divergence should
vanish.
\end{itemize}
\section{ Constraints on 2nd order transport coefficients}\label{sec:finalconstraint}
In this section we shall finally analyse how this condition of local entropy production
constrains the second order transport coefficients. In the first part of this section
we shall derive these constraints. These include the set of five relations among the 15
transport coefficients (as mentioned in section \ref{sec:intro}) and also two
inequalities
involving both the first and second order transport coefficients as well as
some coefficients
appearing in the entropy current.
Then in the next subsection we shall compare our final result
with the answer presented in
\cite{Kanitscheider:2009as} and \cite{Romatschke:2009kr}.
\subsection{Derivation of the constraints}
At second order, just from symmetry analysis, the stress tensor will have 15 transport
coefficients.
\begin{equation}\label{stress2}
\begin{split}
\Pi_{\mu\nu} =~&T\bigg[ \tau ~(u.\nabla)\sigma_{\langle\mu\nu\rangle} + \kappa_1 R_{\langle \mu\nu\rangle} + \kappa_2 F_{\langle \mu\nu\rangle} +\lambda_0~ \Theta\sigma_{\mu\nu}\\
&+ \lambda_1~ {\sigma_{\langle \mu}}^a\sigma_{a\nu\rangle}+ \lambda_2~ {\sigma_{\langle \mu}}^a\omega_{a\nu\rangle}+ \lambda_3~ {\omega_{\langle \mu}}^a\omega_{a\nu\rangle} + \lambda_4~{\mathfrak a}_{\langle\mu}{\mathfrak a}_{\nu\rangle}\bigg]\\
&+TP_{\mu\nu}\bigg[\zeta_1(u.\nabla)\Theta + \zeta_2 R + \zeta_3R_{00}
+ \xi_1 \Theta^2 + \xi_2 \sigma^2+ \xi_3 \omega^2
+\xi_4 {\mathfrak a}^2 \bigg]
\end{split}
\end{equation}
As explained before, in the expression of the divergence of the entropy current,
$\Pi^{\mu\nu}$
will always appear
contracted with $\sigma_{\mu\nu}$ and $\Pi$ with $\Theta$. Therefore, in $\Pi^{ab}$
all the terms,
which have either $\sigma_{ab}$ or $\Theta$ as factors, will finally generate a
set of quadratic and higher order
terms in $\sigma_{ab}$ and $\Theta$. Such
terms are always
suppressed in the derivative expansion relative to the second order piece of the divergence,
provided the shear and the bulk viscosities are non-zero.
Therefore
the coefficients multiplying these terms can never be constrained from the condition
of positivity.
Among the 15 transport coefficients, five ($\lambda_0$, $\lambda_1$, $\lambda_2$,
$\xi_1$
and $\xi_2$) are of such type and therefore are completely unconstrained.
It turns out that to maintain the positivity of the divergence, the coefficients $\tau$
and $\zeta_1$ have to satisfy some inequalities. This is because
at fourth order, the divergence of the entropy current will contain terms
proportional to $\left[(u.\nabla)\sigma\right]^2$ and $\left[(u.\nabla)\Theta\right]^2$
whose coefficients are $Q_2$ and $Q_1$ respectively (see \eqref{q12111213}). These two terms, along with
four other terms ($\sigma^2$, $\Theta^2 $, $\sigma^{\mu\nu} (u.\nabla)\sigma_{\mu\nu}$
and $\Theta (u.\nabla)\Theta$, appearing in the second and third order pieces
of the divergence) together can be made positive definite provided
the transport
coefficients $\tau$ and $\zeta_1$ satisfy the following inequalities.
\begin{equation}\label{ineq}
\begin{split}
( \zeta_1 - C_\Theta)^2&\leq 4\zeta Q_1\\
( \tau - C_\sigma)^2&\leq 4\eta Q_2\\
\end{split}
\end{equation}
where $C_\Theta$ and $C_\sigma$ are the coefficients
of the terms $\Theta (u.\nabla)\Theta$ and
$\sigma^{ab}(u.\nabla)\sigma_{ab}$ respectively in the divergence of the
third order entropy current.
\begin{equation}\label{thetasigma}
\begin{split}
C_\Theta &= 2 s \frac{dB_5}{ds} - \frac{2}{3}T\frac{dB_5}{dT}
+ 2 B_2 + 2 B_4 s \left(s - T\frac{ds}{dT}\right)\\
C_\sigma &= T\frac{dB_5}{dT} + 2 B_3\\
\end{split}
\end{equation}
But unlike the inequalities for the first order transport
coefficients ($\eta\geq0$
and $\zeta\geq0$), \eqref{ineq}
involves several free coefficients appearing in the entropy
current and hence does not give any relation among the
transport coefficients themselves.
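The inequalities \eqref{ineq} are the standard discriminant conditions for a quadratic form to be non-negative. Schematically (a sketch, suppressing overall normalisations; the relative sign of the cross term is immaterial, since it enters squared), the relevant part of the divergence in the scalar sector is
\begin{equation*}
\begin{split}
&\zeta\,\Theta^2 + (\zeta_1 - C_\Theta)\,\Theta\,(u.\nabla)\Theta + Q_1\left[(u.\nabla)\Theta\right]^2\\
&= \zeta\left[\Theta + \frac{\zeta_1 - C_\Theta}{2\zeta}\,(u.\nabla)\Theta\right]^2
+ \frac{4\zeta Q_1 - (\zeta_1 - C_\Theta)^2}{4\zeta}\left[(u.\nabla)\Theta\right]^2,
\end{split}
\end{equation*}
which is non-negative for arbitrary values of $\Theta$ and $(u.\nabla)\Theta$ precisely when $\zeta\geq 0$, $Q_1\geq 0$ and $(\zeta_1 - C_\Theta)^2\leq 4\zeta Q_1$. The second inequality of \eqref{ineq} arises in the same way from the shear sector, with $\zeta\to\eta$, $\zeta_1\to\tau$, $C_\Theta\to C_\sigma$ and $Q_1\to Q_2$.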
Now we shall come to those relations which will give some equalities among the remaining
eight transport coefficients.
By computing the divergence of the entropy current up to fourth order we can see
that there
are no terms proportional to
$R^2,~R_{00}^2,~R_{ab}R^{ab},~F_{ab} F^{ab}, {\mathfrak a}^4,~\omega^4$ and
$ ({\mathfrak a}.\omega)^2$. This implies that the coefficients of the following
eight terms in
the divergence of the entropy current have to be zero.
\begin{enumerate}
\item $C_F\equiv$ Coefficient of the term $\sigma_{ab}F^{ab}$
\item $C_R\equiv$ Coefficient of the term $\sigma_{ab}R^{ab}$
\item $C_{\mathfrak a}\equiv$ Coefficient of the term
${\mathfrak a}^a{\mathfrak a}^b\sigma_{ab}$
\item $C_{\omega}\equiv$ Coefficient of the term ${\omega}^{ap}{\omega_p}^b\sigma_{ab}$
\item $Q_F\equiv$ Coefficient of the term $\Theta R_{00}$
\item $Q_R\equiv$ Coefficient of the term $\Theta R$
\item $Q_{\mathfrak a}\equiv$ Coefficient of the term $\Theta{\mathfrak a}^2$
\item $Q_{\omega}\equiv$ Coefficient of the term $\Theta{\omega}^2$
\end{enumerate}
Solving each of the above eight conditions we can express the
remaining eight
transport coefficients in terms of the coefficients
appearing in the divergence of the second order entropy current.
These are given by the following expressions.
\begin{equation}\label{fconst}
\begin{split}
C_R=0&\Rightarrow~\kappa_1= A_3,~~~~~
C_F=0\Rightarrow~\kappa_2= T\frac{dB_5}{dT}\\
C_\omega =0&\Rightarrow \lambda_3 = T \frac{dB_5}{dT} - 4 B_1\\
C_{\mathfrak a}=0&\Rightarrow \lambda_4=-\left[T^2\frac{d^2B_5}{dT^2}
+ T\frac{dB_5}{dT} + 2B_4 T^2\left(\frac{ds}{dT}\right)^2\right]\\
Q_R=0 &\Rightarrow \zeta_2 =\frac{1}{2}\left[s\frac{dA_3}{ds} - \frac{A_3}{3}\right]\\
Q_F=0&\Rightarrow \zeta_3 = s\frac{dA_3}{ds} + \frac{A_3}{3}
-\frac{2T}{3}\frac{dB_5}{dT} - 2 B_4 Ts\frac{ds}{dT}\\
Q_\omega = 0&\Rightarrow \xi_3 =-2 B_4 Ts \frac{ds}{dT}
+ T \frac{dB_5}{dT}\left[\frac{s}{T}\frac{dT}{ds} -\frac{2}{3}\right]
-s \frac{dB_1}{ds}\\
&~~~~~~~~~~~ + B_1\left[\frac{2s}{T}\frac{dT}{ds} -\frac{1}{3}\right]\\
Q_{\mathfrak a}=0&\Rightarrow \xi_4 = T^2s\frac{ds}{dT}\frac{dB_4}{dT}
+ B_4\left[\frac{T^2}{3}\left(\frac{ds}{dT}\right)^2 + 4 T s \frac{ds}{dT}
+2 T^2s\frac{d^2s}{dT^2}\right]\\
&~~~~~~~~~~~ + \frac{2}{3}\left(T\frac{dB_5}{dT} + T^2\frac{d^2B_5}{dT^2}\right)\\
&\text{where}~~~ \frac{dB_5}{dT} = \frac{A_3}{T} + \frac{dA_3}{dT}
\end{split}
\end{equation}
From \eqref{fconst} we can see that all these eight coefficients can be
determined in terms of three independent
coefficients ($A_3$, $B_1$ and $B_4$) appearing in the third order entropy current.
Therefore, eliminating the three entropy current coefficients mentioned above,
we finally get
five relations among these eight transport coefficients.
\begin{equation}\label{relationsf}
\begin{split}
\kappa_2 =&~ \kappa_1 + T\frac{d\kappa_1}{dT},~~~~~~~
\zeta_2 =~ \frac{1}{2}\left[s\frac{d\kappa_1}{ds} - \frac{\kappa_1}{3}\right]\\
\zeta_3 = &\left(s\frac{d\kappa_1}{ds} + \frac{\kappa_1}{3}\right)
+ \left(s\frac{d\kappa_2}{ds}
- \frac{2\kappa_2}{3}\right)+\frac{s}{T}\left(\frac{dT}{ds}\right)\lambda_4\\
\xi_3=&~\frac{3}{4}\left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\left(T\frac{d\kappa_2}{dT} + 2\kappa_2\right) -\frac{3\kappa_2}{4} +\left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\lambda_4 \\
&+\frac{1}{4}\left[s\frac{d\lambda_3}{ds} + \frac{\lambda_3}{3} -2 \left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\lambda_3\right]\\
\xi_4 =&~-\frac{\lambda_4}{6} - \frac{s}{T}\left(\frac{dT}{ds}\right)\left(\lambda_4 + \frac{T}{2}\frac{d\lambda_4}{dT}\right)
-T\left(\frac{d\kappa_2}{dT}\right)\left(\frac{3s}{2T}\frac{dT}{ds} - \frac{1}{2}\right) \\
&- \frac{Ts}{2} \left(\frac{dT}{ds}\right)\left(\frac{d^2\kappa_2}{dT^2}\right)
\end{split}
\end{equation}
\\
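The elimination leading to \eqref{relationsf} can also be checked symbolically. The following sketch (Python with sympy; a verification aid of ours, not part of the original derivation) keeps $A_3$, $B_1$, $B_4$ and the entropy density $s$ as unspecified functions of $T$ and confirms the relations for $\kappa_2$, $\zeta_3$ and $\xi_3$ directly from \eqref{fconst}:

```python
import sympy as sp

T = sp.symbols('T', positive=True)
# independent entropy-current coefficients and the entropy density,
# all kept as unspecified functions of T
A3, B1, B4, s = (sp.Function(n)(T) for n in ('A3', 'B1', 'B4', 's'))

dB5 = A3/T + sp.diff(A3, T)        # dB5/dT, last line of (fconst)
sT  = sp.diff(s, T)

def s_dds(f):                      # s d/ds = (s/s') d/dT on functions of T
    return s*sp.diff(f, T)/sT

beta = s/(T*sT)                    # (s/T)(dT/ds)

# transport coefficients as given by (fconst)
kappa1 = A3
kappa2 = T*dB5
lam3   = T*dB5 - 4*B1
lam4   = -(T**2*sp.diff(dB5, T) + T*dB5 + 2*B4*T**2*sT**2)
zeta3  = s_dds(A3) + A3/3 - sp.Rational(2,3)*T*dB5 - 2*B4*T*s*sT
xi3    = (-2*B4*T*s*sT + T*dB5*(beta - sp.Rational(2,3))
          - s_dds(B1) + B1*(2*beta - sp.Rational(1,3)))

# the corresponding relations of (relationsf), A3, B1, B4 eliminated
assert sp.simplify(kappa2 - (kappa1 + T*sp.diff(kappa1, T))) == 0
assert sp.simplify(zeta3 - ((s_dds(kappa1) + kappa1/3)
                   + (s_dds(kappa2) - sp.Rational(2,3)*kappa2)
                   + beta*lam4)) == 0
assert sp.simplify(xi3 - (sp.Rational(3,4)*beta*(T*sp.diff(kappa2, T) + 2*kappa2)
                   - sp.Rational(3,4)*kappa2 + beta*lam4
                   + sp.Rational(1,4)*(s_dds(lam3) + lam3/3 - 2*beta*lam3))) == 0
```

The relations for $\zeta_2$ and $\xi_4$ can be verified in the same way.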
\subsection{Comparison with \cite{Kanitscheider:2009as}}
In \cite{Kanitscheider:2009as} the authors have constructed some examples of
non-conformal
fluids
which can be obtained by dimensional reduction of a higher dimensional
conformal
theory.
The entropy of such a non-conformal fluid is proportional to $T^{2\sigma -1}$,
where
$2\sigma$
is the dimension of the space-time before the reduction. Since this particular
non-conformal fluid
satisfies the condition of `positivity' of the divergence of the entropy current
by construction,
the transport coefficients should also obey the relations listed in \eqref{relationsf}.
Below we are quoting the values of some transport coefficients for such non-conformal
fluids.
These are the transport coefficients which enter the 5 relations in \eqref{relationsf}.
\begin{equation}\label{skendmatch}
\begin{split}
\lambda_3 = \Lambda_3 T^{2\sigma -3},
~~~&\xi_3 =\frac{2\sigma -4}{3(2\sigma-1)}\lambda_3\\
\kappa_1 = \kappa T^{2\sigma -3},
~~~&\zeta_2 =\frac{2\sigma -4}{3(2\sigma-1)}\kappa_1\\
\kappa_2 = (2\sigma-2)\kappa_1,
~~~&\zeta_3 =\frac{2\sigma -4}{3(2\sigma-2)}\kappa_2\\
\lambda_4 =0,~~~&\xi_4 = 0
\end{split}
\end{equation}
where $\Lambda_3$ and $\kappa$ are two dimensionful constants
which depend on the
length of the compactified dimensions but are independent of the temperature.
Using the fact
that
for such dimensionally reduced nonconformal fluids the entropy can be
written as
$$s \propto T^{2\sigma -1}$$
one can check that these values satisfy the relations given in
\eqref{relationsf}.
\\
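For the data \eqref{skendmatch} this check can be made completely explicit. The sketch below (Python with sympy; the constants $c$, $\kappa$ and $\Lambda_3$ are placeholder normalisations) substitutes $s\propto T^{2\sigma-1}$ and the quoted power laws into all five relations of \eqref{relationsf}, with $\lambda_4=\xi_4=0$:

```python
import sympy as sp

T, sig, kap, L3, c = sp.symbols('T sigma kappa Lambda_3 c', positive=True)

# entropy density and the transport data of (skendmatch); the constants
# c, kappa, Lambda_3 stand in for the unspecified normalisations
s    = c*T**(2*sig - 1)
k1   = kap*T**(2*sig - 3)
k2   = (2*sig - 2)*k1
lam3 = L3*T**(2*sig - 3)
lam4 = 0
sT   = sp.diff(s, T)
beta = s/(T*sT)                        # (s/T)(dT/ds) = 1/(2 sigma - 1)

def s_dds(f):                          # s d/ds = (s/s') d/dT for f = f(T)
    return s*sp.diff(f, T)/sT

# the values quoted in (skendmatch)
zeta2 = (2*sig - 4)/(3*(2*sig - 1))*k1
zeta3 = (2*sig - 4)/(3*(2*sig - 2))*k2
xi3   = (2*sig - 4)/(3*(2*sig - 1))*lam3
xi4   = 0

# the five relations of (relationsf)
assert sp.simplify(k2 - (k1 + T*sp.diff(k1, T))) == 0
assert sp.simplify(zeta2 - sp.Rational(1,2)*(s_dds(k1) - k1/3)) == 0
assert sp.simplify(zeta3 - ((s_dds(k1) + k1/3)
                   + (s_dds(k2) - sp.Rational(2,3)*k2) + beta*lam4)) == 0
assert sp.simplify(xi3 - (sp.Rational(3,4)*beta*(T*sp.diff(k2, T) + 2*k2)
                   - sp.Rational(3,4)*k2 + beta*lam4
                   + sp.Rational(1,4)*(s_dds(lam3) + lam3/3 - 2*beta*lam3))) == 0
assert sp.simplify(xi4 - (-T*sp.diff(k2, T)*(sp.Rational(3,2)*beta - sp.Rational(1,2))
                   - sp.Rational(1,2)*T*s*(1/sT)*sp.diff(k2, T, 2))) == 0
```

All five assertions reduce to identities in $\sigma$, independently of the normalisation constants.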
\subsection{Comparison with \cite{Romatschke:2009kr}}
To compare, first we shall express the eight
relevant transport coefficients (the ones which appear in
equation \eqref{relationsf}) in terms of the
coefficients as given in \cite{Romatschke:2009kr}. The dictionary is the following.
\begin{equation}\label{comprom}
\begin{split}
T\zeta_2 &= \xi_5^{Rom},~~
T\zeta_3 = \xi_6^{Rom},~~
T\xi_3 = -\xi_3^{Rom}\\
T\lambda_3&=-\lambda_3^{Rom},~~
T\kappa_1 =\kappa^{Rom},~~
T\kappa_2 = 2(\kappa -\kappa^*)^{Rom}
\\
T\xi_4 &= \frac{T^2}{s^2}\left(\frac{ds}{dT}\right)^2\xi_4^{Rom},~~
T\lambda_4=\frac{T^2}{s^2}\left(\frac{ds}{dT}\right)^2\lambda^{Rom}_4
\end{split}
\end{equation}
The author of \cite{Romatschke:2009kr} has argued for the existence of two
relations among 5 of these 8 non-dissipative transport coefficients. These two relations are not
presented explicitly in \cite{Romatschke:2009kr} in full generality.
However, the author of \cite{Romatschke:2009kr}
appears
to claim, in the un-numbered equation in section 5 (below equation 32) of \cite{Romatschke:2009kr},
that in the special case when
\begin{equation}\label{unnu}
T = s^{c_s^2},~~~~\text{and}~~~\kappa^{Rom}\propto\frac{s}{T}
\end{equation}
the two relations among the five transport coefficients reduce to the following.
\begin{equation}\label{romrel}
\begin{split}
\xi_5^{Rom} &= \frac{\kappa^{Rom}}{3}\left[1 - 3 c_s^2\right]\\
\xi_6^{Rom} + \xi_3^{Rom} &= -\left(\frac{3 c_s^2 -1}{3 c_s^2}\right)\left[\kappa^{Rom}
+ c_s^2\lambda_3^{Rom}\right]
\end{split}
\end{equation}
The first equation in \eqref{romrel} indeed reduces to the second equation in \eqref{relationsf}
when $c_s^2$ is a
constant. In order to compare our results with the second
of \eqref{romrel}, we subtract the fourth equation from the third equation of \eqref{relationsf} and
then use the first equation of \eqref{relationsf}.
This gives a relationship between all the same transport coefficients
that appear in the second equation of \eqref{romrel}. However the relationship we find is the following.
\begin{equation}\label{unmatched}
\begin{split}
\zeta_3- \xi_3 =&~ \xi_6^{Rom}+ \xi_3^{Rom}\\
=&\left(s\frac{d\kappa_1}{ds} + \frac{\kappa_1}{3}\right)
+\frac{1}{4}\left(s\frac{d\kappa_2}{ds} + \frac{\kappa_2}{3}\right)
-\frac{3}{2}\frac{s}{T}\frac{dT}{ds}\kappa_2\\
&-\frac{1}{4}\left[s\frac{d\lambda_3}{ds} + \frac{\lambda_3}{3}
-2 \left(\frac{s}{T}\right)\left(\frac{dT}{ds}\right)\lambda_3\right]\\
\text{where}&\\
\kappa_2 =&~ \kappa_1 + T\frac{d\kappa_1}{dT}
\end{split}
\end{equation}
This relationship does not reduce to the second of \eqref{romrel} after substituting the special case
of \eqref{unnu} with constant $c_s$.
We do not understand the reason for this disagreement. Perhaps the second of \eqref{romrel} applies
under more restrictive assumptions than stated
explicitly in \cite{Romatschke:2009kr}. As noted in \cite{Romatschke:2009kr}, it certainly applies to the particular
case described in \cite{Kanitscheider:2009as}.
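The agreement in the first case is easy to check symbolically. In the following sketch (Python with sympy) $\kappa_0$ is a hypothetical proportionality constant implementing $\kappa^{Rom}\propto s/T$, and $c_s^2$ is held constant as in \eqref{unnu}:

```python
import sympy as sp

s, c2, k0 = sp.symbols('s c_s2 kappa_0', positive=True)  # c2 stands for c_s^2

T      = s**c2                  # the special case (unnu), constant c_s
kappa1 = k0*s/T**2              # kappa^{Rom} = T kappa_1 \propto s/T

# second relation of (relationsf): zeta_2 = (1/2)[s dkappa_1/ds - kappa_1/3]
zeta2 = sp.Rational(1, 2)*(s*sp.diff(kappa1, s) - kappa1/3)

# first relation of (romrel), translated via T zeta_2 = xi_5^{Rom} and
# T kappa_1 = kappa^{Rom}: zeta_2 = kappa_1 (1 - 3 c_s^2)/3
assert sp.simplify(zeta2 - kappa1*(1 - 3*c2)/3) == 0
```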
\section{ Conformal limit}
Up to second order in the derivative expansion
the final entropy current (consistent with the
constraint
of non-negative divergence) is given by the following
expression:
\begin{equation}\label{duientabar}
\begin{split}
&\tilde J^\mu|_{\text{second order}}\\
=& \nabla_\nu\left[A_1(u^\mu\nabla^\nu T - u^\nu \nabla^\mu T)\right] + \nabla_\nu \left( A_2 T \omega^{\mu\nu}\right)\\
& + A_3 \left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right) u_\nu
+\left(\frac{A_3}{T} + \frac{dA_3}{dT}\right)\left[\Theta \nabla^\mu T - P^{ab}(\nabla_b u^\mu)( \nabla_a T )\right]\\
&+(B_1\omega^2 + B_2\Theta^2 + B_3 \sigma^2)u^\mu + B_4\left[(\nabla s)^2
u^\mu + 2 s \Theta \nabla^\mu s\right]\\
\end{split}
\end{equation}
If the theory has conformal symmetry, then the entropy current should also
transform covariantly under a conformal transformation.
The conformally covariant entropy current is a special case of equation
\eqref{duientabar}. In this case the only available length scale is provided by
the temperature, and therefore the temperature dependence of all the
coefficients is fixed just by a dimensional argument; moreover,
some of the coefficients are related to the others in such a way that the terms that transform
inhomogeneously under
conformal transformation cancel.
At second order in derivative expansion, there are three scalars and two vectors
\cite{Loganayagam:2008is,Romatschke:2009kr,Bhattacharyya:2008xc} which
transform covariantly under conformal transformation. In our basis these
are given by the following combinations
\begin{equation}\label{congcomb}
\begin{split}
{\mathcal S}_1 &=\sigma_{ab}\sigma^{ba},~~~~~~~~~~~~~~~~~~~
{\mathcal S}_2 = \omega_{ab}\omega^{ba}\\
{\mathcal S}_3 &= \frac{P^{ab}\nabla_a \nabla_b T}{T} - \frac{P^{ab}(\nabla_a T)(\nabla_b T)}{2 T^2} - \frac{R_{00}}{2} - \frac{R}{4} + \frac{\Theta^2}{6}\\
{\mathcal V}_1^\nu &=P^\nu_a\nabla_\mu \sigma^{\mu a} - 3{\mathfrak a}_\mu \sigma^{\mu\nu}, ~~~~~ {\mathcal V}_2^\nu =P^\nu_a\nabla_\mu \omega^{\mu a} - {\mathfrak a}_\mu \omega^{\mu\nu}
\end{split}
\end{equation}
A conformally covariant entropy current should be expressible only in terms
of these three scalars and two vectors. So to begin with it can have
five independent coefficients. Then the constraint of positivity will reduce it to some
special case of \eqref{duientabar}.
Here we have used \eqref{duientabar} to deduce the conformally
covariant form of the entropy current. First we have fixed the temperature dependence of
the coefficients $A_i$ and $B_i$ by dimensional analysis. Then we have tried to figure
out the minimal set of relations these coefficients have to satisfy such that all
the terms
transforming inhomogeneously under conformal transformation cancel. This means
that one should be able to choose the coefficients $A_i$ and $B_i$ in such a way
that the entropy
current is expressible in terms of these 3 conformal scalars and
2 conformal vectors. To do this we first rearrange some of the terms appearing in
equation \eqref{duientabar}, assuming that the temperature dependence of
the coefficients is
fixed by dimensional analysis.
\begin{equation}\label{newarrange1}
\begin{split}
& \nabla_\nu\left[A_1(u^\mu\nabla^\nu T - u^\nu \nabla^\mu T)\right]\\
= ~&A_1 \bigg[u^\mu {\mathcal S}_3 - \frac{1}{2}\left({\mathcal V}_1^\mu + {\mathcal V}_2^\mu\right) + \frac{u^\mu}{2}\left({\mathfrak a}^2 - \Theta^2 + R_{00} + \frac{R}{2}\right)\\
&~~~~~~-{\mathfrak a}_b\left(\sigma^{b\mu} + \omega^{b\mu}\right) + \frac{\Theta}{3}{\mathfrak a}^\mu - \frac{1}{2} u^k R_{ka}P^{\mu a}\bigg]\\
\end{split}
\end{equation}
\begin{equation}\label{newarrange2}
\begin{split}
& \nabla_\mu\left[A_2 \omega^{\mu\nu}\right] = A_2\left[ {\mathcal V}_2^\nu - {\mathcal S}_2 u^\nu\right]\\
& A_3\left(R^{\mu\nu} - \frac{1}{2}g^{\mu\nu} R\right)u_\nu = A_3\left[-u^\mu \left(\frac{R}{2} + R_{00}\right) + P^{\mu a} R_{ab} u^b\right]\\
\end{split}
\end{equation}
\begin{equation}\label{newarrange3}
\begin{split}
& \left(\frac{A_3}{T} + \frac{dA_3}{dT}\right)\left[\Theta \nabla^\mu T - P^{ab}(\nabla_b u^\mu)( \nabla_a T )\right] \\
=~& 2 A_3 \left[\frac{\Theta^2}{3} u^\mu
- \frac{2\Theta}{3}{\mathfrak a}^\mu + {\mathfrak a}_b\left(\sigma^{b\mu}
+ \omega^{b\mu}\right)\right]\\
\end{split}
\end{equation}
\begin{equation}\label{newarrange4}
\begin{split}
&B_4\left[(\nabla s)^2
u^\mu + 2 s \Theta \nabla^\mu s\right]
= T^6 B_4\left[\Theta^2 u^\mu + 9\left({\mathfrak a}^2 u^\mu - \frac{2\Theta}{3}{\mathfrak a}^\mu\right)\right]
\end{split}
\end{equation}
From these expressions we can see how one should choose the coefficients
$A_i$ and $B_i$ such that
all the pieces that transform inhomogeneously under conformal transformation
cancel.
The coefficients for a conformally covariant entropy current are given by the
following expressions.
\begin{equation}\label{conformal1}
\begin{split}
A_1(T) = a_1 ,~~&~~A_2(T) = a_2 ,~~~~A_3(T) = \frac{a_1}{2} T\\
B_1(T) =b_1 T, ~~&~~B_2(T)=\frac{2a_1}{9} T,~~~~B_3(T) =b_3 T\\
&~~B_4(T)= -\left(\frac{a_1}{18}\right)T^{-5}
\end{split}
\end{equation}
where all $a_i$ and $b_i$ are constants.
\newline
Therefore the conformally covariant entropy current has four
independent coefficients ($a_1, ~a_2,~b_1$
and $b_3$) when expanded up to second order in derivatives.
When written in terms of these four coefficients, the expression for
the conformal entropy current is given
as
\begin{equation}\label{identification}
\begin{split}
&J^\mu_\text{conformal}\\
&= a_1 T {\cal S}_3 u^\mu +\frac{a_1T}{2}\left({\mathcal V}_1^\mu + {\mathcal V}_2^\mu\right)+a_2 T\left[ {\mathcal V}_2^\mu - {\mathcal S}_2 u^\mu\right] + b_1 T {\mathcal S}_2 u^\mu + b_3 T {\mathcal S}_1 u^\mu\\
&=T\left[a_1 {\cal S}_3+b_3 {\cal S}_1 + \left(b_1 - a_2\right){\cal S}_2\right]u^\mu
+T \left(a_2+\frac{a_1}{2}\right){\mathcal V}_2^\mu +\frac{a_1T}{2}{\mathcal V}_1^\mu
\end{split}
\end{equation}
This expression coincides with the expression presented in \cite{Bhattacharyya:2008xc}
and \cite{Romatschke:2009kr} with the following identification.
\begin{equation}\label{identify}
\begin{split}
&a_1T = 4A^{Rom}_3\\
&b_3 T = \frac{A^{Rom}_1}{4} - \frac{A^{Rom}_3}{2} + \frac{B^{Rom}_1}{4}\\
&T(b_1-a_2) = A^{Rom}_2 + 2 A^{Rom}_3 - B^{Rom}_2\\
&\frac{a_1T}{2}=B^{Rom}_1\\
&T \left(a_2+\frac{a_1}{2}\right)=B^{Rom}_2\\
\end{split}
\end{equation}
where $A_i^{Rom}$ and $B_i^{Rom}$ are the coefficients in the conformal entropy
current as defined in \cite{Romatschke:2009kr}.
Substituting the relations \eqref{conformal1} in \eqref{fconst} one can
see that in
conformal case $\zeta_2$, $\zeta_3$, $\xi_3$, $\xi_4$ and $\lambda_4$ vanish and
$\kappa_2$ is related to $\kappa_1$ as
$$\kappa_2 = 2\kappa_1$$
However, once the stress tensor is conformally covariant, the vanishing of all
these coefficients and the
relation between $\kappa_1$ and $\kappa_2$ are automatic (if these relations
were not true, the stress tensor
would have some terms which would transform inhomogeneously
under conformal transformation). Therefore we can say that the existence of an
entropy current with positive divergence does not constrain the uncharged conformal fluid.
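These statements can be verified by direct substitution of \eqref{conformal1} into \eqref{fconst}. The sketch below (Python with sympy) uses the normalisation $s = T^3$ that is implicit in \eqref{newarrange4}; with it, all five coefficients simplify to zero and $\kappa_2 = 2\kappa_1$:

```python
import sympy as sp

T, a1, b1 = sp.symbols('T a_1 b_1', positive=True)

# conformal coefficients (conformal1); s = T^3 is the normalisation
# implicit in (newarrange4)
s  = T**3
A3 = a1*T/2
B1 = b1*T
B4 = -a1/(18*T**5)

dB5  = A3/T + sp.diff(A3, T)
sT   = sp.diff(s, T)
beta = s/(T*sT)                           # (s/T)(dT/ds) = 1/3

def s_dds(f):                             # s d/ds = (s/s') d/dT
    return s*sp.diff(f, T)/sT

# transport coefficients from (fconst)
kappa1 = A3
kappa2 = T*dB5
zeta2  = sp.Rational(1,2)*(s_dds(A3) - A3/3)
zeta3  = s_dds(A3) + A3/3 - sp.Rational(2,3)*T*dB5 - 2*B4*T*s*sT
lam4   = -(T**2*sp.diff(dB5, T) + T*dB5 + 2*B4*T**2*sT**2)
xi3    = (-2*B4*T*s*sT + T*dB5*(beta - sp.Rational(2,3))
          - s_dds(B1) + B1*(2*beta - sp.Rational(1,3)))
xi4    = (T**2*s*sT*sp.diff(B4, T)
          + B4*(T**2*sT**2/3 + 4*T*s*sT + 2*T**2*s*sp.diff(s, T, 2))
          + sp.Rational(2,3)*(T*dB5 + T**2*sp.diff(dB5, T)))

assert sp.simplify(kappa2 - 2*kappa1) == 0
for coeff in (zeta2, zeta3, lam4, xi3, xi4):
    assert sp.simplify(coeff) == 0
```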
\section{Acknowledgement}
I would like to thank Shiraz Minwalla for suggesting this problem,
collaborating in the initial part of the calculation and providing
guidance at every stage. I would like to thank T. Sharma
and S. Jain for rechecking the calculation presented in section \ref{sec:class}.
I would also like to thank N. Banerjee, S. Jain,
T. Sharma and everyone in HRI for useful discussions.
Finally I would like to acknowledge
our debt
to the people of India for their generous and steady support for research in
the basic sciences.
\section{Appendices}
\section{Introduction and statement of the results}
Let $(M^n,g)$ be a complete smooth Riemannian manifold of dimension $n$. Let $\Omega \subset M$ be a domain, that is, an open connected subset with piecewise smooth boundary. More precisely, the boundary $\partial \Omega$ is the disjoint union of a (possibly infinite) number of $(n-1)$-dimensional smooth open pieces and its complement, the set of critical points $C$, which is assumed to have measure zero in $\partial \Omega$.
Let $H$ denote a uniformly elliptic second order operator acting on $L^2(\Omega,d\mu)$ with potential bounded from below subject to Dirichlet boundary conditions. So $H=A+V$ with
\[ \int\limits_\Omega fA(f) d\mu\geq \alpha \int\limits_\Omega f \Delta (f) d\mu \]
for some $\alpha>0$. Here $d\mu$ denotes the usual Riemannian measure induced by $g$. Let $Q$ denote the quadratic form of $H$, so in particular for all $f \in \mathrm{Dom}(H)$ we have:
\[Q(f)=\int\limits_\Omega f H(f) d\mu . \]
Let $\delta$ be a differentiable function on $\Omega$ satisfying $||\mathrm{grad}(\delta)||_g\leq 1$. $H$ satisfies a (weak) Hardy inequality with respect to $\delta$ if there exist $a\geq 0, c>0$ such that
\[ c\left( Q(f) + a\int\limits_\Omega |f|^2 d\mu\right) \geq \int\limits_\Omega \frac{|f|^2}{\delta^2}d\mu\]
for all $f \in H^1_0:=\overline{C_0^\infty(\Omega)}$, where the closure is taken with respect to the Sobolev norm $\|\cdot\|_1$ given by $\|f\|^2_1=\int\limits_\Omega g(\mathrm{grad}(f),\mathrm{grad}(f)) d\mu + \int\limits_\Omega |f|^2d\mu$.\\
If $a=0$, then such an inequality is called a strong Hardy inequality and $c$ is then called a strong Hardy constant. Many versions of such inequalities have been established in different contexts. A review can be found for example in \cite{Dav2}. \\
One inequality of this kind (see for example \cite{Dav}) is the following: Let $\Omega \subset \mathbb{R}^n$ be a domain and let $H=\Delta:=-\mathrm{div}~\mathrm{grad}$ be the Laplacian. Then for all $f \in H^1_0(\Omega)$ the following strong Hardy inequality holds:
\begin{equation} Q(f)=\int\limits_\Omega||\mathrm{grad}(f)||^2 d\mu \geq \frac{n}{4} \int\limits_\Omega \frac{|f|^2}{m^2}d\mu,
\end{equation}
where the weight $m(x)$ is defined by
\[\frac{1}{m(x)^2}=\int\limits_{S^{n-1}}\frac{1}{r_x(v)^2}dv , \quad \quad \mathrm{with} \quad r_x(v):=\inf \{ |t|~:~ x+tv \in \partial \Omega\}\]
and $dv$ denotes the normalized euclidean surface measure of the sphere. That is, $m(x)$ represents a mean distance to the boundary evaluated along all lines through $x$. By definition, $m(x)\geq d(x):=d(x,\partial \Omega)$ holds.\\
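As a concrete illustration (our own example, not taken from the references), consider the upper half-plane in $\mathbb{R}^2$: at a point of height $h$ one has $r_x(v)=h/|\sin\theta|$, so $1/m(x)^2$ is the average of $\sin^2\theta/h^2$ over the circle, giving $m(x)=\sqrt{2}\,h=\sqrt{2}\,d(x)$. The quadrature below reproduces this numerically:

```python
import math

# mean distance m(x) for the upper half-plane {y > 0} in R^2, evaluated at a
# point of height h by direct quadrature over the unit circle of directions
def m_half_plane(h, n=20000):
    acc = 0.0
    for k in range(n):
        theta = 2*math.pi*(k + 0.5)/n      # midpoint angular sample
        s = abs(math.sin(theta))
        if s > 0.0:
            r = h/s                        # |t| at which x + t v hits {y = 0}
            acc += 1.0/r**2
    inv_m2 = acc/n                         # normalized average of 1/r^2
    return 1.0/math.sqrt(inv_m2)

print(m_half_plane(1.0))                   # ~ sqrt(2), since <sin^2> = 1/2
```

With $n=2$ the weight in (1) becomes $\frac{n}{4m^2}=\frac{1}{4d^2}$, recovering the classical sharp Hardy constant $\frac14$ for the half-plane; in particular $m\geq d$, as noted above.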
The main result of this paper is that a generalized version of inequality (1) holds true on any Riemannian manifold by replacing lines with geodesics and by replacing the average over the sphere by an average over the unit tangent space $S_p\Omega$ at each point $p \in \Omega$.
\begin{theorem}
Let $\Omega \subset (M^n,g)$ be a domain with piecewise smooth boundary in a complete Riemannian manifold of dimension $n$. Then, for all $f \in H_0^1(\Omega)$:
\[\int\limits_\Omega||\mathrm{grad}(f)||_g^2 d\mu \geq \frac{n}{4} \int\limits_\Omega \frac{|f|^2}{m^2}d\mu\]
and
\[\frac{1}{m(p)^2}=\int\limits_{S_p\Omega}\frac{1}{r_p(v)^2}dv, \quad \quad \mathrm{with} \quad r_p(v):=\inf \{ |t|~:~ c_v(t) \in \partial \Omega\},\]
where $c_v$ is the geodesic with initial conditions $c_v(0)=p$ and $\dot{c}_v(0)=v$ and $dv$ denotes the normalized euclidean surface measure on $S_p\Omega\cong S^{n-1}$.
\end{theorem}
Note that the constant does not depend on the sectional curvature $K$ or other geometric data and is explicit as a function of the dimension only. However, the function $m$ can usually not be computed directly. Assuming that $H$ is uniformly elliptic with potential bounded from below, a direct consequence of Theorem 1 is the following:
\begin{theorem}
If $H$ is uniformly elliptic with potential bounded from below, then for all $f \in H^1_0(\Omega)$ the (weak) Hardy inequality:
\[Q(f)+a\int\limits_\Omega|f|^2d\mu\geq \frac{\alpha n}{4} \int\limits_\Omega \frac{|f|^2}{m^2}d\mu\]
holds with $a=\max\{0, -\inf(V(x)) \}$ and $\alpha$ the constant of uniform ellipticity of $H$.
\end{theorem}
As in the euclidean case, a large class of domains permits an estimate of the form $d(p)\leq m(p) \leq c d(p)$ with $c>1$ for all points $p\in \Omega$, where $d(p):=d_g(p,\partial \Omega)$ is now the Riemannian distance to the boundary. We call such domains \emph{boundary distance regular}.\\
Let $\sigma(H)$ denote the spectrum of the self-adjoint operator $H$. We have the standard decomposition $\sigma(H)=\sigma_{disc}(H)\cup~\sigma_{ess}(H)$. The discrete spectrum $\sigma_{disc}(H) $ consists of all isolated eigenvalues of finite multiplicity, and the essential spectrum is the closure of the complement of $\sigma_{disc}(H)$ in $\sigma(H)$.\\
If one considers the Laplace-Beltrami operator $\Delta=- \mathrm{div_g}~\mathrm{grad_g}$ with Dirichlet conditions, denoted by $\Delta^D$, then Theorem 1 implies a classification of all boundary distance regular domains such that on them the spectrum of $\Delta^D$ is purely discrete, that is, $\sigma_{ess}(\Delta^D)=\emptyset$. A domain $\Omega \subset (M^n,g)$ is said to be \emph{quasi-bounded} iff for every $\epsilon>0$ the set $\Omega_\epsilon:=\{p \in \Omega~|~ d(p)\geq\epsilon \}$ is compact. We prove the following theorem for $\Delta^D$:
\begin{theorem}
Let $\Omega \subset (M^n,g)$ be a boundary distance regular domain. Then the quasi-boundedness of $\Omega$ is a sufficient condition for purely discrete spectrum of $\Delta^D$, in other words $\sigma_{ess}(\Delta^D) =\emptyset$.
\end{theorem}
\begin{remark}
If $K>-a$ for some $a>0$, then quasi-boundedness of $\Omega$ is necessary for purely discrete spectrum of $\Delta^D$.
\end{remark}
This is well-known and follows for example from min-max arguments and Cheng's theorem \cite{Che} that allows one to compare eigenvalues on geodesic balls to eigenvalues of geodesic balls on spaces of constant curvature. \\
Boundary distance regularity has been investigated thoroughly in the past, see for example \cite{Dav}, p. 27. We state two conditions that imply it, though this is not an exhaustive list:
\begin{condition} (Uniform Interior Cone (UIC) Condition)\\
Assume that for $\Omega \subset (M,g)$ there exist an angle $\alpha>0$ and a constant $c_0>1$ such that for each $p \in \Omega$ there exists an $\alpha$-angled cone $C_{\alpha}(p)\subset T_p\Omega$ of directions with the property that $r_p(w)< c_0 d(p)$ for each $w \in C_{\alpha}(p)$. Then $\Omega$ is boundary distance regular.
\end{condition}
To see this, one estimates the integral in the definition of $m$ from below by restricting it to the cone of directions, on which $1/r_p(w)^2 > 1/(c_0 d(p))^2$.
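In detail, if $\mu(C_\alpha)>0$ denotes the normalized measure of the cone of directions $C_\alpha(p)\cap S_p\Omega$ (a constant depending only on $\alpha$ and $n$), then
\begin{equation*}
\frac{1}{m(p)^2}=\int\limits_{S_p\Omega}\frac{1}{r_p(v)^2}dv\;\geq\;\int\limits_{C_\alpha(p)\cap S_p\Omega}\frac{1}{r_p(v)^2}dv\;\geq\;\frac{\mu(C_\alpha)}{c_0^2\,d(p)^2},
\end{equation*}
and hence $d(p)\leq m(p)\leq \left(c_0/\sqrt{\mu(C_\alpha)}\right)d(p)$ for all $p\in\Omega$.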
\begin{condition} (Uniform Exterior Ball Condition (UEB) Condition)\\
Suppose that there exists a constant $k>0$ such that for each $p \in \partial \Omega$ there exists a ball $B(q,kd_g(p,q))$ around a point $q$, of radius $kd_g(p,q)$ and disjoint from $\Omega$; then $\Omega$ is boundary distance regular.
\end{condition}
See for example Theorem 1.5.4 in \cite{Dav}. \\
In the penultimate section, we will proceed to apply this classification theorem to an example in the form of polygons $P$ in negatively curved simply connected manifolds, which will have at least one vertex on the ideal boundary. We will show that they are both boundary distance regular and quasi-bounded and thus have purely discrete spectrum of $\Delta^D$. This was not known outside constant negative curvature.\\
In the final section of this paper we will establish a version of Theorem 1 that applies to operators with mixed boundary conditions and gives a similar inequality for their quadratic forms.\\
We conclude the introduction by remarking that our method of proof is inspired by the work of Croke and Derdzinski \cite{CD}. Croke proved that for every complete Riemannian manifold with boundary $(M^n,g)$ the following bound holds for the first eigenvalue $\lambda_1(M,g)$ of $\Delta^D$:
\begin{equation} \lambda_1(M,g) \geq \frac{n \pi}{\mathrm{vol}(S^{n-1})} \inf\limits_{p \in M} \int\limits_{SpM}\frac{1}{l(v)^2}dv
\end{equation}
where $l(v)$ is the length of the maximal geodesic $c_v$.
Furthermore, it should be noted that the inequality of Croke is already sharp. Specifically, he and Derdzinski proved in the same paper that equality is equivalent to $M$ being a Riemannian hemisphere bundle.\\
Unlike inequality (2), the Hardy inequality of Theorem 1 allows no direct evaluation of the integral even in special cases; it does, however, allow for better qualitative statements, since for $v \in S\Omega$ the length $l(v)$ may be very large, even infinite, while $r_p(v)$ is small.
\section{Preliminaries}
Let $(M^n,g)$ denote a complete smooth Riemannian manifold of dimension $n$. We define $T_pM$ to be the tangent space at a point $p \in M$, and $S_pM$ to be the space of tangent vectors at $p$ of length one.\\
Furthermore, with $I$ being an interval containing $0$, define by $c_v: I \to M$ the geodesic with initial conditions $c_v(0)=p$, $\dot{c}_v(0)=v$, with $v \in S_pM$. We also consider $\pi:TM \to M$ resp. $\pi:SM\to M$, the tangent bundle and unit tangent bundle of $M$. We will write an element $(p,v) \in SM$ simply as $v$, making the convention that for $v \in SM$, $p:=\pi(v)$. On $M$, the geodesic flow is defined as the smooth one-parameter group of diffeomorphisms given by:
\begin{eqnarray*}
\phi^t: SM &\to& SM \\
v &\mapsto& \phi^t(v):=\dot{c}_v(t)
\end{eqnarray*}
Consider a domain $\Omega \subset M$ and its closure as a metric space $\bar{\Omega}$. If the boundary $\partial \Omega$ is not empty, a geodesic in $\Omega$ might hit this boundary in finite time and in such a way that it does not exist beyond this point as a geodesic in $\Omega$. But we can always think of it as a geodesic segment in $M$, possessing a natural and unique extension to a geodesic in $M$ that exists for all times. We will not distinguish between the two in our notation, as the maximal geodesic in $\Omega$ is merely a restriction of the maximal geodesic in $M$.\\
Assume that the boundary is piecewise smooth. Denote the complement of the smooth pieces by $C$. Throughout this paper, we will assume that $C$ is a set of measure zero within $\partial \Omega$. Let $N$ denote the inward pointing unit normal vector field on the smooth boundary components. Define:
\[S^+\partial \Omega:=\{u \in SM ~|~ \pi(u)=p \in \partial \Omega \setminus C ~\mathrm{and}~ g_p(N,u)>0\}\] and
\[S^-\partial \Omega:=\{u \in SM ~|~ \pi(u)=p \in \partial \Omega \setminus C ~\mathrm{and}~ g_p(N,u)<0\}\]
Both sets are $(2n-2)$-dimensional submanifolds of $SM$. Consider $u \in S^+\partial \Omega$. Then the maximal geodesic $c_u$ in $\Omega$ is defined on an interval $I_u$ which is either of the form $(0, \infty)$ or $(0,l(u))$, with $0<l(u)<\infty$. Note that in the latter case, we must have $c_u(l(u)) \in \partial \Omega$ and $g_{c_u(l(u))}(N, \dot{c}_u(l(u)))\leq 0$ whenever $c_u(l(u)) \notin C$. Note also that if $I_u=(0,\infty)$, then $c_{-u}$ will be defined on $(-\infty, 0)$ as a geodesic in $\Omega$. The same holds, up to different signs, for $u \in S^-\partial \Omega$.
Let $A:=\bigcup\limits_{u \in S^+\partial \Omega}\{u\}\times I_u$, then the geodesic flow gives a map:
\begin{eqnarray*}
F:~~ A &\to& S\Omega \\
(u,t) &\mapsto& F(u,t):=\phi^t(u)=\dot{c}_u(t)
\end{eqnarray*}
If we consider the set $S^*\Omega:=\{ v \in S\Omega ~|~ r_p(v) < \infty\}$, then the image $F(A)$ will be an open subset of positive measure in $S^*\Omega$. Furthermore, the map $F$ is injective and smooth on $A$. Let $d\mu_L$ denote the volume form associated with the canonical Liouville measure on $SM$ as well as its restriction onto $S^*\Omega$ and the image $F(A)$. We have the following formula for the pullback of the volume form $F^*d\mu_L$:
\begin{theorem} (Santalo's formula)\\
Let $du$ be the volume form on $S^+\partial \Omega$ induced by the volume form on $SM$. Then we have:
\[ F^*d\mu_L=g(N,u)dt \wedge du, \]
where $dt$ is the canonical measure on $\mathbb{R}$.
\end{theorem}
\paragraph{Proof:}
See \cite{San}, p. 337 or \cite{Ber} p. 282-285. $\hfill \square$
Note that the formula also holds true if we define the map $F$ on the vectors of $S^-\partial \Omega$ and their intervals of existence. Next, let us define the set:
\[ S^{-}_\infty\partial \Omega:=\{ u \in SM ~|~ -u \in S^+\partial \Omega ~\mathrm{and}~ I_{-u}=(0, \infty) \}\]
and finally consider the union
\[S^*\partial \Omega:=S^+\partial \Omega \cup S^-_\infty \partial \Omega\]
The map $F$ extends to $S^*\partial \Omega$ and is injective. The image of $F$ on this larger set is dense in $S^*\Omega$ since we exclude only the critical points $C$ on the boundary and the directions tangential to the boundary. Since $S^-_\infty \partial \Omega$ is a measurable subset of $S^-\partial \Omega$, we can restrict the measure induced by the pull-back of the volume form $F^*d\mu_L$ to this subset and the Santalo formula still holds true as a formula for the induced measure on $S^*\Omega$.\\
In order to finish the preparation of the proofs, we state the original Hardy inequality in dimension 1:
\begin{lemma} (Hardy's inequality)\\
If $f:[a,b] \to \mathbb{C}$ is continuously differentiable with $f(a)=0=f(b)$, then:
\[\int\limits_a^b\frac{|f(x)|^2}{4d(x)^2}dx \leq \int\limits_a^b |f^\prime(x)|^2dx\]
with $d(x)= \mathrm{min} \{|x-a|,|x-b|\}$.
\end{lemma}
We give the proof, as stated in \cite{Dav}, p. 26:
\paragraph{Proof:}
It is sufficient to prove:
\[ \int\limits_a^c\frac{|f(x)|^2}{4(x-a)^2}dx \leq \int_a^c|f^\prime(x)|^2dx,\]
where $2c=a+b$, together with the analogous inequality on the other half-interval. It is moreover sufficient to deal with the case where $a=0$ and where $f$ is real. Then:
\begin{eqnarray*}
\int\limits_0^c(f^\prime)^2dx&=&\int\limits_0^c\left(x^\frac{1}{2}(x^{-\frac{1}{2}}f)^\prime+\frac{f}{2x}\right)^2 dx\geq \int\limits_0^c\left( x^{-\frac{1}{2}}f(x^{-\frac{1}{2}}f)^\prime+\frac{f^2}{4x^2}\right) dx \\
&=& \left[\frac{1}{2}(x^{-\frac{1}{2}} f)^2\right]_0^c+ \int\limits_0^c\frac{f^2}{4x^2}dx=\frac{f(c)^2}{2c}+\int\limits_0^c\frac{f^2}{4x^2}dx\geq \int\limits_0^c\frac{f^2}{4x^2}dx
\end{eqnarray*}
$\hfill \square$\\
Note that the proof immediately yields the following:
\begin{corollary} (Hardy's inequality for half-axes)\\
If $f:[a,\infty) \to \mathbb{C}$ is continuously differentiable and square integrable with $f(a)=0$, then:
\[\int\limits_a^\infty\frac{|f(x)|^2}{4|x-a|^2}dx \leq \int\limits_a^\infty |f^\prime(x)|^2dx.\]
\end{corollary}
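Purely as a numerical illustration (not part of the paper's argument), the one-dimensional Hardy inequality can be sanity-checked on $[0,1]$ with the test function $f(x)=\sin(\pi x)$, which vanishes at both endpoints; the choice of $f$ is ours.

```python
# Numerical sanity check of Hardy's inequality on [a, b] = [0, 1]
# with f(x) = sin(pi x). The integrand f^2 / (4 d^2) stays bounded
# near the endpoints since f vanishes there to first order.
import numpy as np
from scipy.integrate import quad

def d(x):
    # distance to the boundary of [0, 1]
    return min(x, 1.0 - x)

f = lambda x: np.sin(np.pi * x)
fprime = lambda x: np.pi * np.cos(np.pi * x)

# left-hand side: integral of f^2 / (4 d^2); split at the midpoint,
# where d switches between x and 1 - x
lhs, _ = quad(lambda x: f(x) ** 2 / (4.0 * d(x) ** 2), 0.0, 1.0,
              points=[0.5])
# right-hand side: integral of (f')^2, which equals pi^2 / 2 here
rhs, _ = quad(lambda x: fprime(x) ** 2, 0.0, 1.0)

print(lhs, rhs)  # lhs should not exceed rhs
assert lhs <= rhs
```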
\section{Proof of the main theorems:}
\paragraph{Proof of Theorem 1:}
To begin with, note that for all $f \in C^\infty_0(\Omega)$ we have the equality:
\[ \|\mathrm{grad} f(p)\|_g^2=\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S_p\Omega} (df(p)(v))^2 dv \]
where $dv$ is the canonical spherical surface measure. Using this, we find:
\begin{eqnarray*}
\int\limits_\Omega \|\mathrm{grad} f(p)\|_g^2d\mu(p)&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_\Omega\int\limits_{S_p\Omega}df(p)(v)^2dvd\mu(p)\\
&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S\Omega}df(\pi(v))(v)^2d\mu_L(v)\\
&\geq&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\Omega}df(\pi(v))(v)^2d\mu_L(v),\\
\end{eqnarray*}
where we remind that $S^*\Omega$ denotes all $v \in S\Omega$ for which $r_p(v)<\infty$. Using Santalo's formula and Hardy's inequality, we derive for $f \in C^\infty_0(\Omega)$:
\begin{eqnarray*}
&&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\Omega}df(\pi(v))(v)^2d\mu_L(v)\\
&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\partial \Omega}\left(\int\limits_{0}^{l(u)}df(\pi(\phi^t(u)))(\phi^t(u))^2 dt \right) g_{\pi(u)}(N(\pi(u)),u) du\\
&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\partial \Omega}\left(\int\limits_{0}^{l(u)}d(f \circ \pi(\phi^t(u)))^2 dt \right) g_{\pi(u)}(N(\pi(u)),u) du\\
&\geq&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\partial \Omega}\left( \int\limits_{0}^{l(u)}\frac{f(\pi(\phi^t(u)))^2}{4r_{\pi(\phi^t(u))}(\phi^t(u))^2}dt\right) g_{\pi(u)}(N(\pi(u)),u) du\\
&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S^*\Omega}\frac{f(\pi(v))^2}{4r_p(v)^2}d\mu_L(v)
\end{eqnarray*}
\begin{eqnarray*}
&=&\frac{n}{\mathrm{vol}(S^{n-1})}\int\limits_{S\Omega}\frac{f(\pi(v))^2}{4r_p(v)^2}d\mu_L(v)\\
&=&\frac{n}{4}\int\limits_\Omega\frac{f(p)^2}{m(p)^2} d\mu(p),
\end{eqnarray*}
where the second-last equality holds since $r_p(v)=\infty$ for all $v \in S\Omega\setminus S^*\Omega$. Since $C_0^\infty(\Omega)$ is dense in $H^1_0(\Omega)$ we are done by Fatou's lemma. $\hfill \square$
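The averaging identity used at the start of the proof can be spot-checked numerically in the Euclidean plane ($n=2$, where $\mathrm{vol}(S^1)=2\pi$); this is our illustration, with an arbitrarily chosen gradient vector.

```python
# Check that for a fixed vector w in R^2,
#   n / vol(S^{n-1}) * \int_{S^1} <w, v>^2 dv = ||w||^2,
# which is the identity ||grad f(p)||^2 = n/vol(S^{n-1}) * \int (df(p)(v))^2 dv
# specialised to Euclidean space.
import numpy as np
from scipy.integrate import quad

w = np.array([0.8, -1.3])   # arbitrary "gradient" vector (our choice)
n = 2
vol_S1 = 2.0 * np.pi

# integrate <w, (cos t, sin t)>^2 over the unit circle
integral, _ = quad(
    lambda t: (w[0] * np.cos(t) + w[1] * np.sin(t)) ** 2,
    0.0, 2.0 * np.pi)

average = n / vol_S1 * integral   # should equal ||w||^2
print(average, np.dot(w, w))
assert abs(average - np.dot(w, w)) < 1e-10
```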
\paragraph{Proof of Theorem 2:}
As a direct consequence of the definition of $H=A+V$, we know
\[H\geq\alpha \Delta^D+V\]
in the sense of quadratic forms. If the potential is non-negative, one omits it and as a result derives a strong Hardy inequality, while in the case where $V$ is negative but bounded, one just adds $a=- \inf (V(p))$ to the quadratic form and finds a weak Hardy inequality. $\hfill \square$
\paragraph{Proof of Theorem 3:}
Assuming boundary distance regularity of $\Omega$, the Hardy inequality implies that one has the following inequality in the sense of quadratic forms for any $b>0$:
\[ \Delta^D\geq \frac{1}{2} \left(\Delta^D+ \frac{nc}{4d^2}\right)=\frac{1}{2}\left(\Delta^D+V_b+W_b\right)\]
with
\begin{equation*}
V_b(p)=\begin{cases} b, &\text{if}~ \frac{nc}{4d(p) ^2}\leq b \\ \frac{nc}{4d(p)^2}, & \text{else} \end{cases}
\end{equation*}
and
\begin{equation*}
W_b(p)=\begin{cases} \frac{nc}{4d(p)^2}-b, &\text{if}~ \frac{nc}{4d(p) ^2}\leq b \\ 0, &\text{else}.
\end{cases}
\end{equation*}
This is an inequality in the sense of quadratic forms, and, for example by min-max arguments, we can use it to bound the essential spectrum of $\Delta^D$ from below by that of the right hand side. Since $\Omega$ is quasi-bounded, $W_b$ is continuous with compact support. Then, by standard arguments, the difference of the resolvents of $\Delta^D+V_b+W_b$ and $\Delta^D+V_b$ is a compact operator. A theorem of Weyl (cf. \cite{RS}, Theorem XIII.14) then yields:
\[ \sigma_{ess}(\Delta^D+V_b+W_b)=\sigma_{ess}(\Delta^D+V_b) \]
However, the spectrum of the latter operator is contained in $[b,\infty)$, since the Laplacian is positive and the potential is bounded from below by $b$. Because $b$ was arbitrary, the essential spectrum of $\Delta^D$ is empty as claimed. $\hfill \square$
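The potential splitting used in the proof admits a quick numerical sanity check (our illustration, with arbitrary sample values of $n$, $c$, $b$): $V_b+W_b=nc/(4d^2)$ everywhere, $V_b\geq b$, and $W_b$ vanishes wherever $d$ is small.

```python
# Sanity check of the decomposition nc/(4 d^2) = V_b + W_b from the
# proof of Theorem 3. Sample parameter values are our own choice.
n, c, b = 2, 1.0, 5.0

def hardy_potential(d):
    return n * c / (4.0 * d ** 2)

def V_b(d):
    h = hardy_potential(d)
    return b if h <= b else h          # truncation from below at b

def W_b(d):
    h = hardy_potential(d)
    return h - b if h <= b else 0.0    # compactly supported remainder

for d in [0.01, 0.1, 0.3, 1.0, 10.0, 100.0]:
    assert abs(V_b(d) + W_b(d) - hardy_potential(d)) < 1e-12
    assert V_b(d) >= b                 # spectrum bound [b, infinity)
    assert W_b(d) <= 0.0               # nonpositive perturbation
print("decomposition consistent")
```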
\section{Application: $\sigma(\Delta^D)$ on hyperbolic polygons with ideal vertices}
Let $\langle\cdot,\cdot\rangle$ denote the Euclidean metric, $\|\cdot\|$ its induced norm and $D^{n}$ the open Euclidean unit ball. Then $(M,g)=(D^{n}, \frac{4}{(1-\|p\|^2)^2}\langle\cdot,\cdot\rangle)=:\mathbb{H}^n$ is the Poincar\'e ball model of hyperbolic $n$-space.\\
We consider a $k$-sided geodesic polygon $P\subset \mathbb{H}^2$ for which at least one of the interior angles is zero, or equivalently, which has at least one vertex on the ideal boundary $\partial \mathbb{H}^2\cong S^1$. Though it is not a bounded set in the hyperbolic metric, $P$ is of finite volume. We show the following well-known statement with the help of Theorem 3:
\begin{proposition}
The spectrum of $\Delta^D$ on any polygon $P\subset \mathbb{H}^2$ with ideal vertices is discrete.
\end{proposition}
\paragraph{Proof:}
We have to verify that $P$ is (i) quasi-bounded and (ii) boundary distance regular.
\begin{itemize}
\item[i)] For quasi-boundedness, we observe that it suffices to show that for any $\epsilon>0$ no ideal vertex is in the closure of $P_\epsilon$. At each ideal vertex, the two bounding geodesics forming the vertex are asymptotic to each other in the hyperbolic metric. Therefore, for each $\epsilon>0$ the ideal vertex cannot be a limit point of the set of points with a distance to both geodesics of at least $\epsilon$, of which $P_\epsilon$ is a subset. Thus, all $P_\epsilon$ are compact as claimed.
\item[ii)]
To show that $P$ is boundary distance regular, we verify that it fulfills the uniform interior cone condition (UIC). Note that the distance of any point in the interior of the polygon to the boundary is uniformly bounded and that the maximum $d_0$ is achieved at at least one point $p_0$. For any $p$ in the interior of the polygon the boundary distance $d(p)$ is realized by a geodesic arc $c_v$ that hits the boundary at a right angle.
We will prove the existence of a cone $C_\alpha(p)$ of directions around each such $v$, such that for each $w \in C_\alpha(p)$ the geodesic $c_w$ will hit the boundary after a time of at most $2d(p)$, and then bound $\alpha$ away from $0$ uniformly for all $p \in P$.\\
Since the boundary $\partial \Omega$ consists of geodesic arcs in $\mathbb{H}^2$, the existence of $C_\alpha(p)$ follows from the existence of certain right-angled geodesic triangles. One side is given by a portion of one of the boundary geodesics, another side is given by the distance realizing geodesic arc $c_v$, and the hypotenuse is given by a geodesic arc such that the ratio of the length of the hypotenuse to the length of $c_v$ is a number $1<r\leq 2$. Of course, the choice of the factor $2$ was arbitrary, and the existence of such triangles for all $r$ is a well-known geometric fact. The initial conditions of the hypotenuses then form $C_\alpha(p)$.\\
It remains to argue why the angle of these cones can be bounded uniformly from below. This, however, follows from the geometric fact that for a fixed ratio of lengths $r$, the angle between the hypotenuse and the arc $c_v$ increases as the length decreases. Since the distance to the boundary is bounded from above by $d_0$, we set the uniform cone angle to be the angle of one of the cones around a distance realizing arc $c_{v_0}$ at $p_0$. Note that at $p_0$, there might be more than one direction belonging to such a distance realizing arc. However, for each of these the cone will be of equal size by the homogeneity of $\mathbb{H}^2$. Thus, (UIC) holds.
\end{itemize}
$\hfill \square$
In the same way, one can argue in higher dimensions, where a $k$-sided polygon $P \subset \mathbb{H}^n$ is now given as a set that is bounded by $k$ totally geodesic subspaces $\mathbb{H}^{n-1}$, of which at least two are asymptotic to each other at the ideal boundary. The argument remains unchanged, and we note:\\
\begin{corollary}
The spectrum of $\Delta^D$ on a polygon $P \subset \mathbb{H}^n$ with ideal vertices is discrete.
\end{corollary}
Furthermore, we can generalize the argument to variable negative curvature. That is, consider a simply connected manifold $M$ of dimension $n$ that satisfies the sectional curvature bounds $-b\leq K \leq -a$ with $b>a>0$, and define a $k$-sided polygon $P$ with ideal vertex to be a subset of $M$ bounded by $k$ totally geodesic submanifolds $N_i$, $1\leq i \leq k$, of dimension $n-1$, of which at least two are asymptotic at the ideal boundary.\\
\begin{proposition}
On any simply connected Riemannian manifold $(M,g)$ with curvature $K$ satisfying $-b\leq K \leq -a$, $b>a>0$, the spectrum of $\Delta^D$ on a polygon $P$ is discrete even if it has ideal vertices.
\end{proposition}
\paragraph{Proof:}
Using the triangle comparison theorem of Toponogov \cite{Pet}, compare right-angled geodesic triangles in $P$ with those of the constant curvature space form $M_{-b}$. Both quasi-boundedness and boundary distance regularity follow directly from the properties of the polygons in $M_{-b}$.
$\hfill \square$
\section{Extension to operators with mixed boundary conditions}
In this section we show how to extend Theorems 1 and 2 to operators with mixed boundary conditions. The weight $m$ is then replaced by a restriction to an average over those directions that ``see'' the Dirichlet component of the boundary. More precisely:
\begin{definition}
Let $\Omega \subset (M,g)$ be a domain in a Riemannian manifold. Let $\Gamma \subset \partial \Omega$ be a closed subset of the boundary. Define $m_\Gamma : \Omega \to (0,\infty) \cup \{\infty\}$ by:
\[\frac{1}{m_\Gamma^2(p)}:=\int\limits_{S_p\Omega}\frac{1}{r^\Gamma_p(v)^2}dv\]
with $r^\Gamma_p(v):= \inf \{ |t|~:~c_v(t) \in \Gamma \}.$
\end{definition}
If we furthermore define $H^1_{0,\Gamma}(\Omega):=\{ f\in H^1(\Omega) ~:~ f_{|\Gamma} \equiv 0\}$, we have the following generalization of Theorem 1:
\begin{theorem}Let $\Omega \subset (M^n,g)$ be a domain in a Riemannian manifold. Let $\Gamma \subset \partial \Omega$ be a piecewise smooth closed subset of the boundary. For all $f \in H^1_{0,\Gamma}(\Omega)$ the following inequality holds:
\[\int\limits_\Omega||\mathrm{grad}(f)||_g^2 d\mu \geq \frac{n}{4} \int\limits_\Omega \frac{|f|^2}{m_\Gamma^2}d\mu\]
\end{theorem}
\paragraph{Proof:} The proof of Theorem 1 holds line by line using $m_\Gamma$ instead of $m$, and working in $H^1_{0,\Gamma}(\Omega)$ instead of $H^1_0(\Omega)$. $\hfill \square$\\
Since $H^1_{0,\Gamma}$ is the form domain of $\Delta^{DN}$, the Laplacian with Dirichlet boundary conditions on $\Gamma$ and Neumann conditions everywhere else, we have completed our task. It is clear from the definition of $H$ and the inequality for the mixed Laplacian that Theorem 2 generalizes in just the same way to include uniformly elliptic operators with mixed boundary conditions.
\newpage
\section{Introduction}
Let $G$ be a finitely generated group. In order to state our main result, we quickly introduce the main players in it.
The group homology $H_\ast(G;\mathbb{C})$ can be computed as the homology of
the bar complex~$C_*(G;\mathbb{C})$. Chains~$c \in C_k(G;\mathbb{C})$ are of the form
$c = \sum_{g \in G^{k}} a_g \cdot [e,g_1,\dots, g_k]$,
where only finitely many of the coefficients $a_g$ are non-zero. We
choose a finite generating set $S$ for $G$ to get a word-metric on
$G$. For~$n \in \mathbb{N}$ and $p \in [1,\infty)$ we then define a weighted norm on
$C_k(G;\mathbb{C})$ by $\|c\|_{n,p}^S := \bigl(\sum_{g\in G^k} |a_g|^p \cdot
\operatorname{diam}_S(g)^n\bigr)^{1/p}$. We equip $C_k(G;\mathbb{C})$ with the
family~$(\|-\|_{n,p}^S + \|\partial - \|_{n,p}^S)_{n\in \mathbb{N}}$ of
norms and denote the corresponding completion to a Fr\'echet space
by~$C_k^p(G)$. The homology of the resulting chain
complex~$C_*^p(G)$ is denoted by~$H_*^p(G)$.
\begin{mainthm*}[Theorem~\ref{thm:pqcompgeneral}, Proposition~\ref{prop:comppq}]
Let $G$ be a finitely generated group of polynomial or exponential growth
and let $p,q \in (1,\infty)$ with~$p < q$.
Then the canonical homomorphism~$H_*^p(G) \longrightarrow H_*^q(G)$
is an isomorphism.
\end{mainthm*}
\paragraph{Relation to the strong Novikov conjecture}
Let us explain why we are interested in a theorem like the one above. We first have to recall the following two notions.
Firstly, a group $G$ is called of \emph{type~$F_\infty$}, if it admits a model for its classifying space~$BG$
of finite type (i.e., a CW-complex that in each dimension has only finitely many cells).
Secondly, a group of type~$F_\infty$ is called \emph{polynomially contractible}, if its Dehn function and its higher-dimensional analogues are polynomially bounded. Note that this assumption on $G$ is not very strong: most of the groups that one would call non-positively curved (like hyperbolic groups, CAT(0)-groups, systolic groups or mapping class groups) are polynomially contractible. This follows from the fact that if a group is polynomially combable (i.e., combable with a uniform polynomial bound on the lengths of the combing paths), then it is polynomially contractible~\cite[End of 2nd paragraph on p.~257]{ji_ramsey}\cite[Prop.~3.4]{engel_BSNC}.
For hyperbolic groups any choice of geodesics in the Cayley graph will be a suitable combing, for CAT(0)-groups any choice of quasi-geodesics in the group following uniformly closely CAT(0)-geodesics in the underlying space will do the job, for systolic groups one can use the bi-automatic structure found by Januszkiewicz--\'{S}wi{\c{a}}tkowski~\cite[Thm.~E]{janus_swia} or the combing by Osajda--Przytycki~\cite{osajda_przytycki}, and an automatic structure on mapping class groups was provided by Mosher~\cite{mosher}. For a thorough compilation of polynomially contractible groups see the introduction of~\cite{engel_BSNC}.
\begin{maincor*}
Let $G$ be a group of type $F_\infty$, and of polynomial or exponential growth,
and let $G$ be polynomially contractible.
If there exists some $p \in (1,\infty)$ such that the canonical homomorphism~$H^1_\ast(G) \to H^p_\ast(G)$ is injective, then the strong Novikov conjecture holds for $G$.
\end{maincor*}
\begin{proof}
The proof relies on the following diagram~\cite{engel_BSNC}:
\[\xymatrix{
RK_k(BG) \ar[r] \ar[d]_{\operatorname{ch}_k} & K_k(B_r^p G) \ar@{-->}[d]\\
H_k(G;\mathbb{C}) \ar[r] & H_k^1(G)
}\]
Here, $B_r^p G$ denotes the norm completion of $\mathbb{C} G \subset \mathfrak{B}(\ell^p G)$ and the top horizontal map is the analytic assembly map. In the case $p=2$ we have $B_r^2 G = C_r^\ast G$, i.e., the reduced group $C^\ast$-algebra, and the \emph{strong Novikov conjecture} asserts that the analytic assembly map in this case (i.e., for $p=2$) is rationally injective.
Let $x \in RK_\ast(BG) \otimes \mathbb{C}$ be any non-trivial element. Because the homological Chern character $\operatorname{ch}_\ast\colon RK_\ast(BG) \otimes \mathbb{C} \to H_\ast(G;\mathbb{C})$ is an isomorphism, there is some~$k \in \mathbb{N}$ such that $\operatorname{ch}_k(x) \in H_k(G;\mathbb{C})$ is non-zero. If $G$ is of type $F_\infty$ and polynomially contractible, then the canonical map $H_\ast(G;\mathbb{C}) \to H_\ast^1(G)$ is an isomorphism~\cite[Corollary 4.4]{engel_BSNC}. (Analogous statements in the dual situation, i.e., for the corresponding cohomology groups, also hold~\cite{ogle_pol,ji_ramsey,meyer}.)
Further, the right vertical map in the above diagram exists for all~$p \le (k+2) / (k+1)$. Hence, for such $p$ the element $x$ is not in the kernel of the analytic assembly map.
Since our goal is the strong Novikov conjecture, i.e., to show that the element $x$ is not in the kernel of the assembly map for $p=2$, we can try to go with the lower horizontal map to $H_k^q(G)$ for
some~$q >1$ instead of to~$H^1_k(G)$, i.e., we consider the new diagram
\[\xymatrix{
RK_\ast(BG) \ar[rr] \ar[d]_{\operatorname{ch}_k} & & K_\ast(B_r^p G) \ar@{-->}[d]\\
H_k(G;\mathbb{C}) \ar[r] & H_k^1(G) \ar[r] & H_k^q(G)
}\]
Also in this case, we can construct the right vertical map for all $p \le q \cdot (k+2) / (k+1)$
(this can be shown as in previous work of the first named author~\cite[Proposition~5.1]{engel_BSNC}).
In particular, if $q$ is big enough, we can do it for $p=2$. We have already noted above that polynomial contractibility gives us that the canonical map $H_\ast(G;\mathbb{C}) \to H_\ast^1(G)$ is an isomorphism. Using the main theorem, we see that if the canonical map~$H_\ast^1(G) \to H_\ast^p(G)$ is injective for some $p > 1$, then $H_\ast^1(G) \to H_\ast^q(G)$ will be injective for every $q \in (1,\infty)$. Hence our element $x$ is not in the kernel of the assembly map for the case $p=2$.
\end{proof}
Unfortunately, the hypotheses of this corollary are \emph{not} satisfied for all groups:
For example, for the free group~$F_2$ of rank~$2$, the canonical homomorphism~$H^1_1(F_2) \longrightarrow H^p_1(F_2)$ is trivial for all~$p \in (1,\infty)$ (Theorem~\ref{thm:f2vanishing}, Theorem~\ref{thm:p=1}).
In fact, we expect this vanishing result to hold in far greater generality, and thus this approach to the strong Novikov conjecture is not promising.
\paragraph{Questions}
Let us collect some open problems arising from the present paper. Since this seems to be the first time that such polynomially weighted $\ell^p$-completions of group homology are defined, there are many natural questions left open.
\begin{itemize}
\item
Does Theorem~\ref{thm:pqcompgeneral}, i.e., the comparison in the range $(1,\infty)$, also hold for groups
of intermediate growth?
\item
For which groups of superpolynomial growth does Theorem~\ref{thm:pqcompgeneral} also
hold in the cases~``$p=1$'' or~``$q= \infty$''\;?
\item
For which groups~$G$ and which~$p \in (1,\infty]$ is the canonical
map~$H^1_*(G) \longrightarrow H^p_*(G)$, resp.~the canonical map $H_*(G;\mathbb{C}) \to H^p_*(G)$, injective?
\item
For which groups~$G$ and which~$p \in [1,\infty]$, $k \in \mathbb{N}$ is~$H_k^p(G)$ non-trivial?
How can such classes be detected?
\end{itemize}
\paragraph{Related work}
Though this seems to be the first time that these polynomially weighted $\ell^p$-completions of group homology are defined, there are of course similar things already in the literature:
\begin{itemize}
\item Bader, Furman and Sauer~\cite{bfs} investigate the comparison maps from ordinary homology and Sobolev homology, respectively, to the $\ell^1$-homology of any word hyperbolic group.
\item Nowak and {\v{S}}pakula~\cite{nowak_spakula} study coarse homology theory with prescribed growth conditions.
\item Weighted simplicial homology was studied by Dawson~\cite{weighted_simpl_complexes} and by Ren, Wu and Wu~\cite{ren_wu_wu}.
\item The dual situation to the one from the present paper, but only in the case of~$\ell^1$, i.e., group cohomology of polynomial growth, was studied by Connes and Moscovici~\cite{connes_moscovici} in relation with the strong Novikov conjecture, and further investigated by many others like Ji~\cite{ji}, Meyer~\cite{meyer} and Ogle~\cite{ogle_pol}.
\end{itemize}
\paragraph{Overview of this article}
Section~\ref{sec:lpnorms} introduces the polynomially weighted
$\ell^p$-versions of group homology in full detail and
discusses the case of groups of polynomial growth. In
Section~\ref{sec:comparison}, we establish the comparison theorem for
groups of exponential growth. The vanishing result for the free group
is proved in Section~\ref{sec:vanishing}.
\paragraph{Acknowledgements}
The authors were supported by the SFB 1085 \emph{Higher Invariants} of the Deutsche Forschungsgemeinschaft DFG. The first named author was also supported by the Research Fellowship EN 1163/1-1 \emph{Mapping Analysis to Homology} of the DFG, and the DFG Priority Programme SPP 2026 \emph{Geometry at Infinity} (EN 1163/3-1 ``Duality and the coarse assembly map''). Moreover, we would like to thank the anonymous referee for providing helpful comments.
\section{Weighted \texorpdfstring{${\ell^p}$}{lp}-norms on group homology}\label{sec:lpnorms}
\subsection{Definition and basic properties}
\begin{defn}[weighted $\ell^p$-norms]
Let $G$ be a finitely generated group endowed with a finite generating set~$S$, let $k \in \mathbb{N}$, and let $p
\in [1,\infty)$. For~$n \in \mathbb{N}$ we define the \emph{$n$-weighted
$\ell^p$-norm (with respect to~$S$)} by
\begin{align*}
\|-\|_{n,p}^S \colon C_k(G;\mathbb{C}) & \longrightarrow \mathbb{R}_{\geq 0} \\
\sum_{g \in G^{k}} a_g \cdot [e,g_1,\dots, g_k]
& \longmapsto \biggl(\sum_{g\in G^k} |a_g|^p \cdot \operatorname{diam}_S(g)^n\biggr)^{1/p},
\end{align*}
where $\operatorname{diam}_S(g) := \operatorname{diam}_S\{e,g_1,\dots, g_k\}$ is the diameter with respect
to the word metric~$d_S$ on~$G$.
We then equip~$C_k(G;\mathbb{C})$ with the family~$(\|-\|_{n,p}^S +
\|\partial - \|_{n,p}^S)_{n\in \mathbb{N}}$ of norms and denote the
corresponding completion to a Fr\'echet space by~$C_k^p(G)$. By
construction, the boundary operator of~$C_k(G;\mathbb{C})$ extends continuously
to~$C_k^p(G)$ and the homology of~$C_*^p(G)$ is called
\emph{$\ell^p$-polynomially bounded homology of~$G$}, denoted
by~$H_k^p(G)$.
In the case of~$p = \infty$, we proceed in the same manner, using
the \emph{$n$-weighted $\ell^\infty$-norms (with respect to~$S$)},
defined by
\begin{align*}
\|-\|_{n,\infty}^S \colon C_k(G;\mathbb{C}) & \longrightarrow \mathbb{R}_{\geq 0} \\
\sum_{g \in G^{k}} a_g \cdot [e,g_1,\dots, g_k]
& \longmapsto \sup_{g \in G^k} |a_g| \cdot \operatorname{diam}_S(g)^n.
\qedhere
\end{align*}
\end{defn}
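As a toy illustration of the definition (ours, not from the paper), the weighted norm can be computed explicitly for $G=\mathbb{Z}$ with $S=\{+1,-1\}$ in degree $k=1$, where $\operatorname{diam}_S(e,g)=|g|$ is simply the word length of $g$.

```python
# Toy computation of the n-weighted l^p-norm for G = Z, S = {+1, -1},
# degree k = 1. A chain is a finitely supported function g -> a_g,
# and diam_S(e, g) = |g|.
def weighted_norm(chain, n, p):
    """n-weighted l^p-norm of a degree-1 chain over Z.

    chain: dict mapping group elements g in Z to coefficients a_g.
    """
    return sum(abs(a) ** p * abs(g) ** n
               for g, a in chain.items()) ** (1.0 / p)

c = {1: 1.0, 3: 2.0}          # c = [e, 1] + 2 * [e, 3]
norm = weighted_norm(c, n=2, p=2)
print(norm)                    # sqrt(1^2 * 1^2 + 2^2 * 3^2) = sqrt(37)
assert abs(norm - 37 ** 0.5) < 1e-12
```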
\begin{rem}
If $G$ is a finitely generated group, $k, n \in \mathbb{N}$, and $p \in
[1,\infty]$, then different finite generating sets~$S,T$ of~$G$ lead
to equivalent (semi-)norms~$\|-\|_{n,p}^S$ and $\|-\|_{n,p}^T$
on~$C_k(G;\mathbb{C})$. Therefore, the completions~$C_*^p(G)$ and the
homology~$H_*^p(G)$ are independent of the choice of finite
generating sets.
\end{rem}
\begin{rem}\label{rem:p<qmap}
Let $G$ be a finitely generated group and let $p,q \in [1,\infty)$ with~$p < q$. Then
the canonical inclusion~$C_*^p(G) \longrightarrow C_*^q(G)$ is
contractive in the following sense: For every finite generating set~$S$ of~$G$
and every~$n \in \mathbb{N}$ the identity map~$C_*(G;\mathbb{C}) \longrightarrow C_*(G;\mathbb{C})$ has norm
at most~$1$ with respect to the norms~$\|-\|_{n,p}^S$ and $\|-\|_{n,q}^S$,
respectively. In particular, we obtain a canonical induced map
\[ H_*^p(G) \longrightarrow H_*^q(G).
\]
If $p \in [1,\infty)$ and $n \in \mathbb{N}$, then
\[ \fa{c \in C_k(G;\mathbb{C})} \|c\|^S_{n, \infty} \leq \|c\|_{\lceil n \cdot p\rceil,p}^S,
\]
which yields a canonical map~$H_*^p(G) \longrightarrow H_*^\infty(G)$.
\end{rem}
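Both norm comparisons from the remark can be spot-checked numerically on a random finitely supported chain (our illustration; the diameters are taken $\geq 1$, the only case in which the terms contribute for $n\geq 1$): $\|c\|_{n,q}^S\leq\|c\|_{n,p}^S$ for $p<q$, and $\|c\|_{n,\infty}^S\leq\|c\|_{\lceil np\rceil,p}^S$.

```python
# Numerical spot check of the two norm comparisons in the remark,
# using (diam, coefficient) pairs standing in for a chain in C_k(G; C).
import math
import random

random.seed(0)
terms = [(random.randint(1, 20), random.uniform(-2, 2))
         for _ in range(50)]

def norm_p(terms, n, p):
    # n-weighted l^p-norm: (sum |a|^p diam^n)^(1/p)
    return sum(abs(a) ** p * d ** n for d, a in terms) ** (1.0 / p)

def norm_inf(terms, n):
    # n-weighted l^infinity-norm: sup |a| diam^n
    return max(abs(a) * d ** n for d, a in terms)

n, p, q = 3, 1.5, 4.0
assert norm_p(terms, n, q) <= norm_p(terms, n, p)            # contractivity
assert norm_inf(terms, n) <= norm_p(terms, math.ceil(n * p), p)
print("norm comparisons hold")
```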
\subsection{The case \texorpdfstring{$p=1$}{p=1}}
It is already known that the canonical map~$H_*(G;\mathbb{C}) \longrightarrow H_*^1(G)$ is an isomorphism for a large class of groups.
To state the corresponding theorem, we have to recall two notions.
Firstly, a group $G$ is called of \emph{type~$F_\infty$}, if it admits a model for its classifying space~$BG$
of finite type (i.e., a CW-complex that in each dimension has only finitely many cells).
Secondly, a group of type~$F_\infty$ is called \emph{polynomially contractible}, if its Dehn function and its higher-dimensional analogues are polynomially bounded.
Most of the groups that one calls non-positively curved (like hyperbolic groups, systolic groups, $\operatorname{CAT(0)}$-groups or mapping class groups) are polynomially contractible (see the introduction for references).
The following theorem has been proved (in variations) by different people in different ways~\cite{engel_BSNC,connes_moscovici,meyer,ogle_pol,ji_ramsey,JOR_B_bounded_coho}:
\begin{thm}\label{thm:p=1}
Let $G$ be a group of type~$F_\infty$ that is polynomially contractible. Then
the canonical map~$H_*(G;\mathbb{C}) \longrightarrow H_*^1(G)$ is an isomorphism.
\end{thm}
\begin{rem}
Without the assumption of polynomial contractibility, Theorem~\ref{thm:p=1} is likely false.
Ji, Ogle, and Ramsey provided groups whose comparison maps from bounded cohomology to ordinary cohomology fail to be injective or surjective, respectively~\cite[Sec.~6.4 \& 6.5]{JOR_B_bounded_coho}. Bounded cohomology, as they investigate it, is dual to our~$H_*^1(-)$ in the sense that it pairs with it (and this pairing is compatible with the usual pairing of homology with cohomology under the comparison maps). Note also that their bounded cohomology is \emph{not} the one used by Gromov~\cite{gromov_vol}.
It seems therefore plausible that the groups of Ji, Ogle and Ramsey are also examples of groups for which the canonical map~$H_*(G;\mathbb{C}) \longrightarrow H_*^1(G)$ is not injective or surjective, respectively.
\end{rem}
\subsection{Comparison on groups of polynomial growth}
\begin{prop}\label{prop:comppq}
Let $G$ be a finitely generated group of polynomial growth and let
$p,q \in [1,\infty]$ with~$p < q$.
\begin{enumerate}
\item
Then the inclusion~$C_*^p(G) \longrightarrow C_*^q(G)$ is bounded
from below in the following sense: For every finite generating set~$S$
of~$G$ and all~$k,n \in \mathbb{N}$ there exist~$C \in \mathbb{R}_{>0}$ and~$m \in \mathbb{N}$ with
\[ \fa{c \in C_k(G;\mathbb{C})} \|c\|_{m,q}^S \geq C \cdot \|c\|_{n,p}^S.
\]
\item
In particular, the canonical map~$H_*^p(G) \longrightarrow H_*^q(G)$
(Remark~\ref{rem:p<qmap}) is an isomorphism.
\end{enumerate}
\end{prop}
\begin{proof}
\emph{Ad~1.}
We first consider the case $q < \infty$.
Let $D \in \mathbb{N}$ be the polynomial growth rate of~$G$, let $S$ be a
finite generating set of~$G$, and let $k, n \in \mathbb{N}$. Then
\[ m := \Bigl\lceil q \cdot \Bigl((k \cdot D+2) \cdot
\Bigl(\frac1p - \frac1q\Bigr) + \frac np \Bigr)\Bigr\rceil
\]
has the desired property, as can be seen by the generalized H\"older
inequality: Because $D$ is the polynomial growth rate of~$G$, there
is a constant~$K \in \mathbb{R}_{>0}$ with
\[ \fa{r \in \mathbb{N}_{>0}} \beta(r) := \bigl|B_r^{G,S}(e)\bigr| \leq K \cdot r^D.
\]
Moreover, because of~$q > p$, there is~$q' \in [1,\infty)$ with
\[ \frac1q + \frac1{q'} = \frac1p.
\]
We now consider $c \in C_k(G;\mathbb{C})$ and the weight functions
\begin{align*}
w_1,w_2 \colon G^k & \longrightarrow \mathbb{R}_{\geq0} \\
w_1 : g & \longmapsto \operatorname{diam}_S(g)^{m/q}\\
w_2 : g & \longmapsto \operatorname{diam}_S(g)^{n/p-m/q}.
\end{align*}
By definition, $\|c\|_{n,p}^S = \| c \cdot w_1 \cdot w_2\|_p$
and $\|c\|_{m,q}^S = \|c \cdot w_1\|_q$
(where
``$\cdot$'' denotes pointwise multiplication). Applying the
generalized H\"older inequality, we hence obtain
\begin{align*}
\|c\|_{n,p}^{S}
& \leq \| c \cdot w_1 \|_q \cdot \|w_2\|_{q'}
= \| c \|_{m,q}^S \cdot \|w_2\|_{q'}
\end{align*}
and it remains to bound~$\|w_2\|_{q'}$ by a constant. The polynomial growth
condition yields
\begin{align*}
\bigl(\|w_2\|_{q'}\bigr)^{q'}
& = \sum_{g \in G^k} \operatorname{diam}_S(g)^{q' \cdot (\frac np - \frac mq)}
\leq \sum_{r =1}^\infty \beta(r)^k
\cdot r^{q' \cdot (\frac np - \frac mq)}
\leq K^k \cdot \sum_{r=1}^\infty r^{k \cdot D}
\cdot r^{q' \cdot (\frac np - \frac mq)}\\
& \leq K^k \cdot \sum_{r=1}^{\infty} r^{-2}
= K^k \cdot\zeta(2).
\end{align*}
In the case~$q = \infty$, one can proceed in a similar way
(with~$m = \lceil 1/p \cdot (k \cdot D + n +2)\rceil$).
\emph{Ad~2.}
By the first part, the identity map on the ordinary chain complex~$C_*(G;\mathbb{C})$ induces an
isomorphism~$C_*^p(G) \longrightarrow C_*^q(G)$.
Hence, the claim follows.
\end{proof}
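\begin{rem}
As a concrete illustration of the proof of Proposition~\ref{prop:comppq} (the following choice of numbers is only an example), consider $G = \mathbb{Z}$ with~$S = \{1\}$, so that we may take~$D = 1$, and let $k = 1$, $n = 1$, $p = 1$, $q = 2$. Then $q' = 2$ and $m = \lceil 2 \cdot (3 \cdot \frac12 + 1) \rceil = 5$, and the estimate reads
\[ \|c\|_{1,1}^S
\leq \|c\|_{5,2}^S \cdot \Bigl( \sum_{g \in \mathbb{Z} \setminus \{0\}} |g|^{-3} \Bigr)^{1/2}
= \sqrt{2 \cdot \zeta(3)} \cdot \|c\|_{5,2}^S
\]
for all~$c \in C_1(\mathbb{Z};\mathbb{C})$, because $\operatorname{diam}_S(g) = |g|$ holds for all~$g \in \mathbb{Z}$.
\end{rem}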
\subsection{Functoriality of weighted \texorpdfstring{$\ell^p$}{lp}-chain complexes}
\begin{defn}[polynomially controlled kernel]
Let $G$ be a finitely generated group.
\begin{itemize}
\item A subgroup~$H \subset G$ is \emph{polynomially controlled}, if
for one (whence every) finite generating set~$S \subset G$ there
exist $D \in \mathbb{N}$ and $K \in \mathbb{R}_{>0}$ such that
\[ \fa{r \in \mathbb{N}_{>0}}
\bigl| B_r^{G,S}(e) \cap H \bigr|
\leq K \cdot r^D.
\]
\item Let $G'$ be a group.
A group homomorphism~$\varphi \colon G \longrightarrow G'$ has
\emph{polynomially controlled kernel} if the subgroup~$\ker \varphi$ of $G$
is polynomially controlled in the sense above.
\qedhere
\end{itemize}
\end{defn}
Clearly, every group homomorphism with finite kernel has polynomially
controlled kernel; the same holds for every group homomorphism defined
on a finitely generated group of polynomial growth.
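\begin{rem}
A concrete example covered by neither of these two cases: the projection~$\varphi \colon F_2 \times \mathbb{Z} \longrightarrow F_2$ onto the first factor has polynomially controlled kernel, even though $F_2 \times \mathbb{Z}$ has exponential growth and $\ker \varphi$ is infinite. Indeed, with respect to the standard generating set~$S = \{a,b,t\}$ (where $t$ generates the $\mathbb{Z}$-factor), we have $\ker \varphi = \{ t^n \mid n \in \mathbb{Z} \}$ and thus, for all~$r \in \mathbb{N}_{>0}$,
\[ \bigl| B_r^{F_2 \times \mathbb{Z}, S}(e) \cap \ker \varphi \bigr|
= 2 \cdot r + 1
\leq 3 \cdot r.
\]
In contrast, the abelianisation~$F_2 \longrightarrow \mathbb{Z}^2$ does \emph{not} have polynomially controlled kernel: the ball of radius~$r$ in~$F_2$ has exponential size, but its image in~$\mathbb{Z}^2$ has polynomial size; by the pigeonhole principle, some fibre contains exponentially many elements, and quotients of pairs in such a fibre yield exponentially many elements of~$[F_2,F_2]$ of length at most~$2 \cdot r$.
\end{rem}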
\begin{rem}\label{rem_pol_controlled_fin_generated}
If $H$ is a polynomially controlled subgroup of a finitely generated group~$G$, then every finitely generated subgroup $K$ of $H$ has polynomial growth.
The reason for this is that the inclusion $K \hookrightarrow G$ does not increase word lengths if we compute them with respect to a finite generating set~$S$ of~$G$ that contains the chosen finite generating set~$T$ of~$K$. More concretely, for all $r \in \mathbb{N}_{>0}$ we have
\[
|B_r^{K,T}(e)| \le |B_r^{G,S}(e) \cap K| \le |B_r^{G,S}(e) \cap H|.
\]
In fact, the growth rates of the finitely generated subgroups of~$H$ are even uniformly bounded from above.
\end{rem}
\begin{lem}\label{lem:hyppolysub}
Let $G$ be a hyperbolic group and $H$ a subgroup of $G$.
Then $H$ is a polynomially controlled subgroup if and only if $H$ is virtually cyclic.
\end{lem}
\begin{proof}
Let $H$ be a polynomially controlled subgroup. By Remark~\ref{rem_pol_controlled_fin_generated}, finitely
generated subgroups of~$H$ are of polynomial growth. In particular,
$H$ does not contain a free group of rank~$2$. As the ambient
group~$G$ is hyperbolic, this implies that $H$ is virtually
cyclic~\cite[Corollaire on p.~224]{ghys}.
Conversely, let $H$ be virtually cyclic. If $H$ is finite, it clearly is polynomially controlled; hence, without loss of generality we may assume that $H$ is isomorphic to~$\mathbb{Z}$. Then $H$ is quasi-isometrically embedded in~$G$ \cite[Corollary III.$\Gamma$.3.10(1)]{bridson_haefliger} and hence a polynomially controlled subgroup of~$G$.
\end{proof}
\begin{prop}\label{prop:polkernel}
Let $p \in [1,\infty]$, let $G$ and $H$ be finitely generated
groups, and let $\varphi \colon G \longrightarrow H$ be a group
homomorphism with polynomially controlled kernel.
Then the induced chain map~$C_*(\varphi;\mathbb{C}) \colon C_*(G;\mathbb{C})
\longrightarrow C_*(H;\mathbb{C})$ is continuous with respect to the
weighted $\ell^p$-Fr\'echet topologies.
\end{prop}
\begin{proof}
Let us establish some notation:
Let $S \subset G$ and $T \subset H$ be finite generating sets, and
without loss of generality we may assume that $\varphi(S) \subset T$.
Let $D \in \mathbb{N}$ be the polynomial control rate of~$\ker \varphi$; hence,
there is a~$K \in \mathbb{R}_{>0}$ with
\[ \fa{r \in \mathbb{N}_{>0}}
\beta(r) := \bigl| B_r^{G,S}(e) \cap \ker \varphi \bigr|
\leq K \cdot r^D.
\]
Furthermore, let $p \in [1,\infty]$ and $k,n \in \mathbb{N}$. It then
suffices to prove that there exist $m \in \mathbb{N}$ and $C \in \mathbb{R}_{>0}$
such that
\[ \fa{c \in C_k(G;\mathbb{C})}
\bigl\| C_k(\varphi;\mathbb{C})(c) \bigr\|^T_{n,p}
\leq C \cdot \|c\|_{m+n,p}^S.
\]
The arguments are similar to the proof of Proposition~\ref{prop:comppq}:
Let $c \in C_k(G;\mathbb{C})$, say~$c = \sum_{g \in G^k} a_g \cdot [e,g_1,\dots,g_k]$.
By definition of~$C_k(\varphi;\mathbb{C})$, we have
\[ \varphi(c)
:= C_k(\varphi;\mathbb{C})(c)
= \sum_{h \in H^k} \biggl(\sum_{g \in \varphi^{-1}(h)} a_g\biggr) \cdot [e,h_1,\dots, h_k],
\]
where $\varphi^{-1}(h) := \bigl\{ g \in G^k \bigm|
\fa{j \in \{1,\dots,k\}} \varphi(g_j) = h_j\bigr\}$.
We will first consider the case~$p \in (1, \infty)$. Let
\[ m := \bigl\lceil (k \cdot D + 2) \cdot (p-1)\bigr\rceil
\]
and let $\overline p := p / (p-1)$. As a first step, we bound the
inner sum for a given~$h \in H^k$ (without loss of generality, we
may assume~$h \neq (e,\dots,e)$): By the H\"older inequality,
\begin{align*}
\biggl| \sum_{g \in \varphi^{-1}(h)} a_g \biggr|^p
& \leq \biggl( \sum_{g \in \varphi^{-1}(h)} |a_g|\biggr)^p
\\
&
\leq \sum_{g \in \varphi^{-1}(h)} |a_g|^p \cdot \operatorname{diam}_S(g)^m
\cdot \biggl( \sum_{g \in \varphi^{-1}(h)} \operatorname{diam}_S(g)^{- \frac{\overline p \cdot m}{p}}\biggr)^{\frac{p}{\overline p}}
\\
&
\leq \sum_{g \in \varphi^{-1}(h)} |a_g|^p \cdot \operatorname{diam}_S(g)^m
\cdot \biggl( \sum_{g \in \varphi^{-1}(h)} \operatorname{diam}_S(g)^{- \frac{m}{p-1}}\biggr)^{p-1}.
\end{align*}
The first factor is related to~$\|c\|_{m,p}^S$ and hence of the
right type. We will now take care of the second factor: To this
end, for~$j \in \{1,\dots,k\}$ let $g_j(h) \in G$ be a minimiser
of~$\min \bigl\{ d_S(e,g) \bigm| g \in G,\ \varphi(g) = h_j\bigr\}$.
Then, using the minimality of~$g_j(h)$ for the first inequality and the triangle inequality for the second, we have
\begin{align*}
\fa{\kappa_j \in \ker \varphi} d_S\bigl(e, g_j(h) \cdot \kappa_j\bigr)
& \geq \frac12 \cdot \bigl (d_S(e,g_j(h)) + d_S(e, g_j(h) \cdot \kappa_j) \bigr)
\\
&
\geq \frac12 \cdot d_S\bigl(g_j(h), g_j(h) \cdot \kappa_j\bigr)
= \frac12 \cdot d_S(e,\kappa_j)
\end{align*}
and hence the polynomial control on the kernel yields
\begin{align*}
\sum_{g \in \varphi^{-1}(h)} \operatorname{diam}_S(g)^{- \frac{m}{p-1}}
& = \sum_{\kappa \in (\ker \varphi)^k} \operatorname{diam}_S\bigl(g(h) \cdot \kappa\bigr)^{-\frac{m}{p-1}}
\\
&
\leq \sum_{\kappa \in (\ker \varphi)^k} \bigl(\max_{j \in \{1,\dots,k\}} d_S(e, g_j(h) \cdot \kappa_j)\bigr)^{-\frac{m}{p-1}}
\\
&
\leq 2^{\frac m{p-1}} \cdot
\sum_{\kappa \in (\ker \varphi)^k} \bigl( \max_{j \in \{1,\dots,k\}} d_S(e,\kappa_j)\bigr)^{-\frac{m}{p-1}}
\\
&
\leq 2^{\frac m{p-1}} \cdot
\sum_{r=1}^\infty \beta(r)^k \cdot r^{-\frac{m}{p-1}}
\leq 2^{\frac m{p-1}} \cdot K^k \cdot
\sum_{r=1}^\infty r^{k\cdot D} \cdot r^{-\frac{m}{p-1}}
\\
&
\leq 2^{\frac m{p-1}} \cdot K^k \cdot \zeta(2).
\end{align*}
We set~$C := \bigl(2^{\frac m{p-1}} \cdot K^k \cdot \zeta(2)\bigr)^{p-1}$.
Putting it all together, we obtain (because~$\varphi(S) \subset T$)
\begin{align*}
\bigl(\|\varphi(c)\|_{n,p}^T\bigr)^p
& = \sum_{h \in H^k} \biggl| \sum_{g \in \varphi^{-1}(h)} a_g \biggr|^p \cdot \operatorname{diam}_T(h)^n
\\
&
\leq C \cdot \sum_{h \in H^k} \sum_{g \in \varphi^{-1}(h)} |a_g|^p \cdot \operatorname{diam}_S(g)^m \cdot \operatorname{diam}_T(h)^n
\\
&
\leq C \cdot \sum_{h \in H^k} \sum_{g \in \varphi^{-1}(h)} |a_g|^p \cdot \operatorname{diam}_S(g)^m \cdot \operatorname{diam}_S(g)^n
\\
&
\leq C \cdot \sum_{g \in G^k} |a_g|^p \cdot \operatorname{diam}_S(g)^{m+n}
= C \cdot \bigl(\|c\|_{m+n,p}^S\bigr)^p,
\end{align*}
as desired.
In the case $p=1$, the estimates above simplify significantly:
the inner sum can be estimated directly via the triangle inequality,
and one obtains
\[ \fa{c \in C_k(G;\mathbb{C})} \bigl\| \varphi(c)\bigr\|_{n,1}^T \leq \|c\|_{n,1}^S.
\]
In the case $p=\infty$, we take~$m := k \cdot D + 2$. Then the inner sum
admits the following estimate for given~$h \in H^k$ (without loss of generality, we
may assume~$h \neq (e,\dots,e)$):
\begin{align*}
\biggl| \sum_{g \in \varphi^{-1}(h)} a_g \biggr|
&
\leq \sum_{g \in \varphi^{-1}(h)} |a_g| \cdot \operatorname{diam}_S(g)^m \cdot \operatorname{diam}_S(g)^{-m}
\\
&
\leq \sup_{g \in \varphi^{-1}(h)} |a_g| \cdot \operatorname{diam}_S(g)^m
\cdot \sum_{g \in \varphi^{-1}(h)} \operatorname{diam}_S(g)^{-m}
\\
&
\leq \sup_{g \in \varphi^{-1}(h)} |a_g| \cdot \operatorname{diam}_S(g)^m \cdot 2^m \cdot K^k \cdot \zeta(2).
\end{align*}
This implies~$\| \varphi(c)\|_{n,\infty}^T \leq 2^m \cdot K^k \cdot \zeta(2) \cdot \|c\|_{m+n,\infty}^S$.
\end{proof}
\begin{cor}[functoriality]
Let $p \in [1,\infty]$, let $G$ and $H$ be finitely generated
groups, and let $\varphi \colon G \longrightarrow H$ be a group
homomorphism with polynomially controlled kernel.
\begin{enumerate}
\item
Then $C_*(\varphi;\mathbb{C})$ admits a well-defined, continuous extension
\[ C_*^p(\varphi) \colon C_*^p(G) \longrightarrow C_*^p(H),
\]
which is a chain map.
\item
In particular, we obtain a corresponding
homomorphism~$H_*^p(\varphi) \colon H_*^p(G) \longrightarrow
H_*^p(H)$ that is compatible with~$H_*(\varphi;\mathbb{C}) \colon
H_*(G;\mathbb{C}) \longrightarrow H_*(H;\mathbb{C})$.
\end{enumerate}
\end{cor}
\begin{proof}
This is a direct consequence of Proposition~\ref{prop:polkernel}.
\end{proof}
\section{Comparison in the range \texorpdfstring{${(1, \infty)}$}{(1,infty)}}\label{sec:comparison}
\begin{thm}\label{thm:pqcompgeneral}
Let $G$ be a finitely generated group of exponential growth
and $p,q \in (1,\infty)$ with~$p < q$.
\begin{enumerate}
\item The inclusion~$C_*^p(G) \longrightarrow C_*^q(G)$ is a chain homotopy
equivalence.
\item In particular, the canonical map~$H_*^p(G) \longrightarrow H_*^q(G)$
(see Remark~\ref{rem:p<qmap}) is an isomorphism.
\end{enumerate}
\end{thm}
The proof of Theorem~\ref{thm:pqcompgeneral} is based on the following
basic chain-level result, which will be proved in Section~\ref{subsec_completing_proof_estimates}.
\begin{prop}\label{prop:EBconstruction}
Let $G$ be a finitely generated group of exponential growth with finite generating set~$S$
and let $p,q \in (1,\infty)$ with~$p < q$. Then there exists a
chain map~$E \colon C_*(G;\mathbb{C}) \longrightarrow C_*(G;\mathbb{C})$ and a chain
homotopy~$B$ between~$E$ and the identity with the following
properties: For all~$k, n \in \mathbb{N}$ there exist~$K \in \mathbb{R}_{>0}$ and $m \in \mathbb{N}$
such that for all~$c \in C_k(G;\mathbb{C})$ we have
\begin{align}
\bigl\| E(c) \bigr\|_{n,p}^S
& \leq K \cdot \bigl( \|c\|_{m,q}^S + \| \partial c \|_{m,q}^S \bigr)
\label{eq:Enorm}
\\
\bigl\| \partial E(c) \bigr\|_{n,p}^S
& \leq K \cdot \bigl( \|c\|_{m,q}^S + \| \partial c \|_{m,q}^S \bigr)
\label{eq:dEnorm}
\\
\bigl\| B(c) \bigr\|_{n,p}^S
& \leq K \cdot \bigl( \|c\|_{m,p}^S + \| \partial c \|_{m,p}^S \bigr)
\label{eq:Bnormpp}
\\
\bigl\| \partial B(c) \bigr\|_{n,p}^S
& \leq K \cdot \bigl( \|c\|_{m,p}^S + \| \partial c \|_{m,p}^S \bigr)
\label{eq:dBnorm}
\end{align}
\end{prop}
Taking Proposition~\ref{prop:EBconstruction} for granted, the proof of Theorem~\ref{thm:pqcompgeneral} is immediate:
\begin{proof}[Proof of Theorem~\ref{thm:pqcompgeneral}]
We write~$i \colon C_*^p(G) \longrightarrow C_*^q(G)$ for the canonical
inclusion map.
Let $E$ and $B$ be maps as provided by Proposition~\ref{prop:EBconstruction}.
Estimates~\eqref{eq:Enorm} and \eqref{eq:dEnorm} show that $E$ extends
to a continuous chain map
\[ \overline E \colon C_*^q(G) \longrightarrow C_*^p(G).
\]
Similarly, the Estimates~\eqref{eq:Bnormpp} and \eqref{eq:dBnorm} (for~$p$ and
for~$q$) show that
$B$ extends to continuous chain homotopies
\begin{align*}
\overline B(p) \colon C_*^p(G) & \longrightarrow C_*^p(G) \\
\overline B(q) \colon C_*^q(G) & \longrightarrow C_*^q(G)
\end{align*}
between $\overline E \circ i$ and the identity on~$C_*^p(G)$ and between $i
\circ \overline E$ and the identity on~$C_*^q(G)$, respectively. Therefore,
$i$ is a chain homotopy equivalence and thus induces an
isomorphism~$H_*^p(G) \longrightarrow H_*^q(G)$ on homology.
\end{proof}
\subsection{Diffusion}
It remains to construct the maps~$E$ and~$B$ in
Proposition~\ref{prop:EBconstruction}. The fundamental observation is
that $\ell^p$-norms on~$C_*(G;\mathbb{C})$ can be decreased by diffusing the
coefficients over a large number of simplices. Therefore, we diffuse
the simplices by coning them off with cone points in annuli of suitable
radii (Figure~\ref{fig:diffusion}).
\begin{figure}
\begin{center}
\begin{tikzpicture}[x=1cm,y=1cm,thick]
\draw[red] (0,0) circle (2);
\draw[red] (0,0) circle (3);
\foreach \j in {0,20,...,360} {
\filldraw[fill=black, fill opacity=0.15] (0,-0.5) -- (\j:2.5) -- (0,0.5) -- cycle;
\begin{scope}[red]
\egvertx{(\j:2.5)}
\end{scope}
}
\begin{scope}[blue, very thick]
\draw (0,-0.5) -- (0,0.5);
\gvertx{(0,-0.5)}
\gvertx{(0,0.5)}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Diffusing a simplex (solid vertices; centre) by coning off
with cone points (empty) in an annulus}
\label{fig:diffusion}
\end{figure}
\begin{defn}[diffusion cone operator]\label{def:diffusion}
Let $G$ be a finitely generated group, $k \in \mathbb{N}$ and let~$Z \colon G^*
\longrightarrow P_{\mathrm{fin}}(G)$ be a map (here $P_{\mathrm{fin}}(G)$ denotes the collection of finite subsets of $G$). The \emph{diffusion cone
operator associated with~$Z$} is defined by
\begin{align*}
C_k(G;\mathbb{C}) & \longrightarrow C_{k+1}(G;\mathbb{C})
\\
[e,g_1,\dots,g_k] & \longmapsto
\frac1{|Z_{(g_1,\dots, g_k)}|} \cdot \sum_{z \in Z_{(g_1,\dots,g_k)}} [z,e,g_1,\dots,g_k]\,.\qedhere
\end{align*}
\end{defn}
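\begin{rem}
As an illustration (this special case is not needed later), let us write $B$ for the diffusion cone operator associated with such a map~$Z$, and consider $k = 1$ and a $1$-simplex~$[e,g]$ with $g \neq e$ and $Z_g = \{z_1,\dots,z_l\}$. Then
\[ B[e,g] = \frac1l \cdot \sum_{i=1}^l [z_i,e,g]
\qquad\text{and}\qquad
[e,g] - \partial B[e,g]
= \frac1l \cdot \sum_{i=1}^l \bigl( [z_i,g] - [z_i,e] \bigr).
\]
Thus, up to a boundary, the single edge~$[e,g]$ is traded for $2 \cdot l$ edges, each carrying coefficient~$1/l$; the (unweighted) $\ell^p$-norm of the right-hand side equals $2^{1/p} \cdot l^{1/p-1}$, which tends to~$0$ as $l \to \infty$ whenever $p > 1$. This is the sense in which diffusion decreases $\ell^p$-norms.
\end{rem}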
The key parameter of the diffusion cone construction is the
function~$Z$ determining the supports of the diffused simplices. We
will use wide enough annuli of large enough radii. More precisely,
we let the radii grow polynomially (of high degree) in terms of
the diameter of the original simplices.
\begin{defn}[diffusion annuli]\label{def:annuli}
Let $G$ be a finitely generated group with a chosen finite generating set~$S$
and let $N \in \mathbb{N}_{>10}$. We define the \emph{diffusion annuli map of degree~$N$} for $k \in \mathbb{N}$ by
\begin{align*}
Z \colon
G^k & \longrightarrow P_{\mathrm{fin}}(G) \\
g & \longmapsto \overline Z_{\operatorname{diam}_S(g)},
\end{align*}
where
\begin{align*}
\overline Z \colon \mathbb{N} & \longrightarrow P_{\mathrm{fin}}(G) \\
r & \longmapsto \overline Z _r :=
\bigl\{ g \in G \bigm| r^N - r^{N/10} < d_S(e,g) \leq r^N
\bigr\}.
\end{align*}
Moreover, we write
\begin{align*}
\varphi \colon \mathbb{N} & \longrightarrow \mathbb{N} \\
r & \longmapsto 2 \cdot r^N.
\qedhere
\end{align*}
\end{defn}
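\begin{rem}
For instance, if $G$ is the free group of rank~$2$ with a free generating set~$S$ and $10 \mid N$, then there are exactly $4 \cdot 3^{l-1}$ elements of word length~$l$ for every~$l \in \mathbb{N}_{>0}$, and hence, for all~$r \in \mathbb{N}_{>0}$,
\[ |\overline Z_r|
= \sum_{l = r^N - r^{N/10} + 1}^{r^N} 4 \cdot 3^{l-1}
= 2 \cdot \bigl( 3^{r^N} - 3^{r^N - r^{N/10}} \bigr).
\]
In particular, $|\overline Z_r|$ grows super-exponentially in~$r$; it is this abundance of cone points that makes the diffusion construction work.
\end{rem}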
Before starting with the actual proof of Proposition~\ref{prop:EBconstruction},
we first collect some basic estimates concerning this diffusion construction:
\begin{lem}[accumulation control]\label{lem:accum}
In the situation of Definition~\ref{def:annuli}, we have for all~$k \in \mathbb{N}_{\geq 1}$,
all~$g \in G^k$, and all~$z \in Z_g$:
\begin{enumerate}
\item\label{item_diam_estimate} $\operatorname{diam}_S [z,e,g_1,\dots,g_k] \leq \varphi\bigl(\operatorname{diam}_S(g)\bigr)$.
\item If $j \in \{1,\dots,k\}$, $h \in G^k$, and $w \in Z_h$ satisfy the
relation
$[z,e,g_1,\dots,\widehat g_j,\dots,g_k]
= [w,e,h_1,\dots,\widehat h_j,\dots,h_k]$, then
\[ w = z
\quad\text{and}\quad
\operatorname{diam}_S(g) = \operatorname{diam}_S(h).
\]
\item If $h \in G^k$ and $w \in Z_h$ satisfy $[z,g_1,\dots,g_k] =[w,h_1,\dots,h_k]$, then
\[ w = h_1 \cdot g_1^{-1} \cdot z
\quad\text{and}\quad
\operatorname{diam}_S(g) = \operatorname{diam}_S(h).
\]
\end{enumerate}
\end{lem}
\begin{proof}
\emph{Ad~1.}
This is immediate from the construction.
\emph{Ad~2.}
Because both simplices have the same $1$-vertex (namely~$e$), all the
vertices must coincide. Thus, $w = z$. Because the annuli defined by~$\overline Z$
are disjoint for different radii and because $w = z \in Z_h \cap Z_g$, we obtain
$\operatorname{diam}_S(g) = \operatorname{diam}_S(h)$.
\emph{Ad~3.}
The assumption implies that
\[ \fa{j \in \{1,\dots,k\}} z^{-1} \cdot g_j = w^{-1} \cdot h_j.
\]
In particular, $w = h_1 \cdot g_1^{-1} \cdot z$.
Using the abbreviations $r_g := \operatorname{diam}_S(g)$ and $r_h := \operatorname{diam}_S(h)$,
we obtain by the triangle inequality that $d_S(e,z^{-1}\cdot g_1)
= d_S(e,w^{-1}\cdot h_1)$ is in the intersection
\begin{align*}
[r_g^N - r_g^{N/10} - r_g, r_g^N + r_g]
\cap
[r_h^N - r_h^{N/10} - r_h, r_h^N + r_h]
\end{align*}
(which is hence non-empty).
Therefore, $r_g = r_h$.
\end{proof}
\begin{lem}[norm control]\label{lem:normcontrol}
In the situation of Definition~\ref{def:annuli}, let $k \in \mathbb{N}$, and let
\begin{align*}
I_k := \bigl\{ (z,g_1,\dots,g_k) \bigm| (g_1,\dots, g_k) \in G^k,\ z \in Z_g \bigr\}.
\end{align*}
Furthermore, let $J_k$ be a set, let $\pi \colon I_k \longrightarrow J_k$ be a
map, and let $\beta \colon I_k \longrightarrow \mathbb{N}$ be a function controlling the
size of the fibres of~$\pi$, i.e.,
\[ \fa{i \in I_k} \bigl| \pi^{-1}(\pi(i))\bigr| \leq \beta(i).
\]
For functions~$f \colon I_k \longrightarrow \mathbb{C}$, we define the push-forward
\begin{align*}
\pi_* f \colon J_k & \longrightarrow \mathbb{C} \\
j & \longmapsto \sum_{i \in \pi^{-1}(j)} f(i).
\end{align*}
Finally, let $p \in (1,\infty)$ and let $f \colon I_k
\longrightarrow \mathbb{C}$ be a function with finite support.
\begin{enumerate}
\item Then
\[ \| \pi_* f\|_p \leq \biggl(
\sum_{i \in I_k} \beta(i)^p \cdot \bigl|f(i) \bigr|^p \biggr)^{1/p}.
\]
\item
Moreover, let $q \in (1,\infty)$ with~$p< q$, let $q' \in
(1,\infty)$ with~$1/q + 1/q' = 1/p$,
and let $w \colon I_k \longrightarrow \mathbb{R}_{\geq0}$
be a function that is constant on the fibres of~$\pi$ and such that $\pi_* w \colon J_k \longrightarrow \mathbb{R}_{\geq0}$ is $q'$-summable.
Then (with respect to pointwise multiplication)
\begin{align*}
\bigl\| \pi_*(f \cdot w) \bigr\|_p
& \leq \bigl\| \pi_* f \bigr\|_q \cdot \|\pi_*w\|_{q'}.
\end{align*}
\end{enumerate}
\end{lem}
\begin{proof}
The first part is a consequence of the following elementary estimate: For
all~$n \in \mathbb{N}$ and all~$a_1,\dots,a_n \in \mathbb{C}$, we have (by looking at a
coefficient of maximal modulus)
\[ |a_1 + \dots + a_n|^p \leq n^p \cdot \bigl( |a_1|^p + \dots + |a_n|^p\bigr).
\]
For the second part, note that $|\pi_*(f \cdot w)(j)| \leq |\pi_* f(j)| \cdot \pi_* w(j)$ holds for all~$j \in J_k$ because $w$ is non-negative and constant on the fibres of~$\pi$; hence, the claim follows from the generalized H\"older inequality.
\end{proof}
\subsection{Completing the proof of the comparison result}
\label{subsec_completing_proof_estimates}
\begin{proof}[Proof of Proposition~\ref{prop:EBconstruction}]
We choose the parameter~$N := 100$ for the construction
in Definition~\ref{def:annuli} (basically any choice will work
because of the exponential growth of~$G$).
Let $Z \colon G^* \longrightarrow P_{\mathrm{fin}}(G)$ be the associated
diffusion annuli map (Definition~\ref{def:annuli}) and let $B \colon
C_*(G;\mathbb{C}) \longrightarrow C_{*+1}(G;\mathbb{C})$ be the diffusion cone operator
associated with~$Z$ (Definition~\ref{def:diffusion}). We then
define
\begin{align*}
E := \operatorname{id} - \partial \circ B - B \circ \partial
\colon C_*(G;\mathbb{C}) \longrightarrow C_*(G;\mathbb{C}).
\end{align*}
It is clear
that $E$ is a chain map and $B$ a chain homotopy between~$E$
and the identity on~$C_*(G;\mathbb{C})$.
Therefore, it remains to prove the norm estimates. We first replace
this zoo of estimates by the following estimates: For all~$k,n \in \mathbb{N}$
there exist~$K \in \mathbb{R}_{>0}$ and $m \in \mathbb{N}$ such that for all~$c \in C_k(G;\mathbb{C})$
we have
\begin{align}
\bigl\| B(c) \bigr\|_{n,p}^S
& \leq K \cdot \|c\|_{m,p}^S
\label{eq:Bnormppp}
\\
\bigl\| B(c) \bigr\|_{n,p}^S
& \leq K \cdot \|c\|_{m,q}^S
\label{eq:Bnormpq}
\\
\bigl\| c - \partial B(c) \bigr\|_{n,p}^S
& \leq K \cdot \|c\|_{m,q}^S
\label{eq:iddBnorm}
\end{align}
These estimates imply the Estimates~\eqref{eq:Enorm}--\eqref{eq:dBnorm} (modulo
unification of the constants by taking the maximum) as follows:
\begin{itemize}
\item Estimate~\eqref{eq:Bnormpp} follows from Estimate~\eqref{eq:Bnormppp}.
\item Estimate~\eqref{eq:Enorm} follows from the fact that $E = \operatorname{id} -
\partial \circ B - B \circ \partial$ and the Estimates~\eqref{eq:iddBnorm} and
\eqref{eq:Bnormpq} (with modified constants).
\item Estimate~\eqref{eq:dEnorm} follows from~\eqref{eq:Enorm} and the
fact that $E$ is a chain map.
\item Estimate~\eqref{eq:dBnorm} then follows from~$\partial \circ B = \operatorname{id} - E - B \circ \partial$
and the Estimates~\eqref{eq:Enorm} (and Remark~\ref{rem:p<qmap}), and \eqref{eq:Bnormpp}
(with modified constants).
\end{itemize}
In the following, let~$k, n\in \mathbb{N}$, and let
\[ c= \sum_{g \in G^k} a_g \cdot [e,g_1, \dots, g_k] \in C_k(G;\mathbb{C}).
\]
We will first prove~\eqref{eq:Bnormppp};
of course, \eqref{eq:Bnormppp} follows from~\eqref{eq:Bnormpq} (with Remark~\ref{rem:p<qmap}),
but we will use this straightforward estimate as warm-up for the other estimates.
By construction of the
diffusion cone operator~$B$, we have (using Lemma~\ref{lem:accum}.\ref{item_diam_estimate} for the first inequality, and Definition~\ref{def:annuli} of $\varphi$ for the second inequality)
\begin{align*}
\bigl\| B_k(c) \bigr\|_{n,p}^S
& = \biggl\| \sum_{g \in G^k} a_g \cdot \frac1{|Z_g|} \cdot
\sum_{z \in Z_g} [z,e,g_1,\dots,g_k] \biggr\|_{n,p}^S
\\
& = \biggl(
\sum_{g \in G^k} \sum_{z\in Z_g} \frac1{|Z_g|^p} \cdot |a_g|^p \cdot
\bigl(\operatorname{diam}_S[z,e,g_1,\dots,g_k]\bigr)^n
\biggr)^{1/p}
\\
& \leq \biggl(
\sum_{g \in G^k} \frac1{|Z_g|^{p-1}} \cdot |a_g|^p
\cdot \varphi\bigl(\operatorname{diam}_S(g)\bigr)^n
\biggr)^{1/p}
\\
& \leq \biggl(
\sum_{g \in G^k} |a_g|^p
\cdot 2^n \cdot \bigl(\operatorname{diam}_S(g)\bigr)^{N \cdot n}
\biggr)^{1/p}
\\
& = 2^{n/p} \cdot \|c\|_{N\cdot n,p}^S.
\end{align*}
Before proving~\eqref{eq:Bnormpq} and \eqref{eq:iddBnorm},
let us first fix some notation:
Because of~$q > p$, there is some~$q'$ such that $1/q +
1/q' = 1/p$; let $x := 1/q$, let $y := 1 - x = 1-1/p+1/q'$, and let
$\varepsilon := y \cdot q' - 1 = q' \cdot (1 - 1/p) > 0$.
Let us establish~\eqref{eq:Bnormpq} (with~$m = 1$):
The generalized H\"older inequality shows that
\begin{align*}
\bigl\| B_k(c) \bigr\|_{n,p}^S
& = \biggl(
\sum_{g \in G^k} \sum_{z\in Z_g} \frac1{|Z_g|^p} \cdot |a_g|^p \cdot
\bigl(\operatorname{diam}_S[z,e,g_1,\dots,g_k]\bigr)^n
\biggr)^{1/p}
\\
& \leq \biggl(
\sum_{g \in G^k} \sum_{z\in Z_g}
\frac1{|Z_g|^{q\cdot x}} \cdot |a_g|^q \cdot \operatorname{diam}_S(g)
\biggr)^{1/q}
\\
& \qquad \cdot
\biggl(
\sum_{g \in G^k} \sum_{z\in Z_g}
\frac1{|Z_g|^{q'\cdot y}} \cdot
\frac{\bigl(\operatorname{diam}_S[z,e,g_1,\dots,g_k]\bigr)^{q'\cdot n / p}}
{\bigl(\operatorname{diam}_S(g)\bigr)^{q'/ q}}
\biggr)^{1/q'}
\end{align*}
We denote the first factor by~$A_1$ and the second factor by~$A_2$.
As $q \cdot x =1$, we obtain
\begin{align*}
A_1 & = \biggl(
\sum_{g \in G^k} |Z_g| \cdot
\frac1{|Z_g|^{q\cdot x}} \cdot |a_g|^q \cdot \operatorname{diam}_S(g)
\biggr)^{1/q}
= \|c\|_{1,q}^S.
\end{align*}
The term~$A_2$ can be estimated via
\begin{align*}
A_2^{q'} & \leq
\sum_{g \in G^k} \sum_{z\in Z_g}
\frac1{|Z_g|^{q'\cdot y}} \cdot
\frac{\varphi\bigl(\operatorname{diam}_S(g)\bigr)^{q'\cdot n / p}}
{\bigl(\operatorname{diam}_S(g)\bigr)^{q'/ q}}
\\
& \leq
\sum_{g \in G^k}
\frac1{|Z_g|^{q'\cdot y - 1}} \cdot
\frac{\varphi\bigl(\operatorname{diam}_S(g)\bigr)^{q'\cdot n / p}}
{\bigl(\operatorname{diam}_S(g)\bigr)^{q' / q}}
\\
& \leq \sum_{r \in \mathbb{N}_{>0}}
\frac{\bigl| \{ g\in G^k \mid \operatorname{diam}_S\{e,g_1,\dots,g_k\} = r\} \bigr|}{|\overline Z_r|^\varepsilon} \cdot
\frac{\varphi(r)^{q'\cdot n/p}}{r^{q'/q}}
\\
& \leq \sum_{r \in \mathbb{N}_{>0}}
\frac{\beta_{G,S}(r)^k}{|\overline Z_r|^\varepsilon} \cdot
\frac{\varphi(r)^{q'\cdot n/p}}{r^{q'/q}},
\end{align*}
where $\beta_{G,S} \colon \mathbb{N} \longrightarrow \mathbb{N}$ denotes the growth
function of~$G$ with respect to~$S$.
The second factor in the series above is dominated by a polynomial (in~$r$);
we will now show that the first factor decreases exponentially in~$r$:
By definition, we have
\[ \beta_{G,S}(r)^k \leq \bigl(4 \cdot |S| \bigr)^{r \cdot k}.
\]
Because $G$ has exponential growth, there is an~$\alpha \in \mathbb{R}_{>1}$
such that $\beta_{G,S}(r) \geq \alpha^r$ holds for all~$r \in \mathbb{N}$.
Therefore, for all~$r \in \mathbb{N}$,
\[ |\overline Z _r|
\geq \beta_{G,S}(r^{N/10}/2)
\geq \alpha^{r^{N/10}/2},
\]
and so
\begin{align*}
\frac{\beta_{G,S}(r)^k}{|\overline Z_r|^\varepsilon}
& \leq \frac{\bigl(4 \cdot |S|\bigr)^{r \cdot k}}{\alpha^{\varepsilon/2 \cdot r^{N/10}}},
\end{align*}
which (eventually) decreases exponentially in~$r$. Hence, $A_2^{q'}$ is dominated by a convergent
series (whose value is independent of~$c$). This shows Estimate~\eqref{eq:Bnormpq}.
Finally, we prove the most delicate Estimate~\eqref{eq:iddBnorm}. By construction,
\begin{align*}
c - \partial & B_k(c) \\
& = \sum_{g \in G^k} a_g \biggl( [e,g_1,\dots,g_k]
- \frac1{|Z_g|} \cdot \sum_{z \in Z_g} \partial [z,e,g_1,\dots,g_k]\biggr)
\\
& = \sum_{g \in G^k} a_g \cdot \frac1{|Z_g|} \cdot \sum_{z \in Z_g} \biggl( [z,g_1,\dots,g_k]
+ \sum_{j=1}^k (-1)^{j} \cdot [z,e,g_1,\dots,\widehat g_j, \dots, g_k] \biggr).
\end{align*}
Therefore,
\begin{align*}
\bigl\| c - \partial B_k(c) \bigr\|_{n,p}^S
& \leq \biggl\| \sum_{g\in G^k} \frac1{|Z_g|} \cdot a_g \cdot \sum_{z \in Z_g} [z,g_1,\dots,g_k]\biggr\|_{n,p}^S
\\
& \qquad + \sum_{j=1}^k \biggl\| \sum_{g\in G^k} \frac1{|Z_g|} \cdot a_g \cdot
\sum_{z \in Z_g} [z,e,g_1,\dots,\widehat g_j,\dots, g_k]\biggr\|_{n,p}^S.
\end{align*}
We will treat these $k+1$~sums separately.
In order to introduce~$\|-\|_{m,q}$, we again will use the generalized H\"older
inequality. However, in contrast with the previous estimates, we now have to
carefully control accumulations of coefficients
on $k$-simplices (using Lemma~\ref{lem:accum} and Lemma~\ref{lem:normcontrol}).
We will only treat the first sum in detail (the other sums can be handled in
the same way by modifying~$J_k$ accordingly).
We will apply Lemma~\ref{lem:normcontrol} to the following situation:
We consider the set
\[ J_k := \bigl\{ [z,g_1,\dots,g_k] \bigm| (z,g_1,\dots,g_k) \in I_k \bigr\}
\subset C_k(G;\mathbb{C}),
\]
together with the canonical projection~$\pi \colon I_k \longrightarrow J_k$.
In view of Lemma~\ref{lem:accum}, the projection~$\pi$ has $\beta$-controlled
fibres, where
\begin{align*}
\beta \colon I_k & \longrightarrow \mathbb{N} \\
(z,g_1,\dots,g_k) & \longmapsto \beta_{G,S} \bigl(\operatorname{diam}_S (g) \bigr)^k.
\end{align*}
Let $\delta \in \mathbb{R}_{>0}$ with~$\delta < y - 1/q' = \min(y, \varepsilon/q')$ and
\begin{align*}
f \colon I_k & \longrightarrow \mathbb{C} \\
(z,g_1,\dots,g_k) & \longmapsto
\frac1{|Z_g|^{x + \delta}} \cdot a_g \cdot \operatorname{diam}_S[z,g_1,\dots,g_k]^{1/q}\,,
\\
w \colon I_k & \longrightarrow \mathbb{C} \\
(z,g_1,\dots,g_k) & \longmapsto
\frac1{|Z_g|^{y-\delta}} \cdot \operatorname{diam}_S[z,g_1,\dots,g_k]^{n/p - 1/q}\,.
\end{align*}
Note that $w$ takes non-negative real values and, by Lemma~\ref{lem:accum}, is constant on the fibres of~$\pi$.
Then, by construction,
\[ \biggl\| \sum_{g \in G^k} \frac1{|Z_g|} \cdot a_g \cdot
\sum_{z \in Z_g} [z,g_1,\dots,g_k] \biggr\|_{n,p}^S
= \bigl\|\pi_* (f \cdot w) \bigr\|_p.
\]
We will now bound~$\|\pi_*(f \cdot w)\|_p$ from above with the help of
Lemma~\ref{lem:normcontrol}: Clearly, $f$ has finite support. Let us show that~$\pi_* w$ is a $q'$-summable function. By definition of~$w$, we have
(with~$\Phi(r) := (2 \cdot r^{N})^{(n/p-1/q) \cdot q'}$)
\begin{align*}
\sum_{i \in I_k} \beta(i)^{q'} \cdot \bigl|w(i)\bigr|^{q'}
& \leq \sum_{(z,g) \in I_k} \frac{\beta(z,g)^{q'}}{|Z_g|^{(y-\delta)\cdot q'}} \cdot \Phi(\operatorname{diam}_S(g))
\\
& \leq \sum_{r\in \mathbb{N}_{>0}} \beta_{G,S}(r)^k \cdot |\overline Z_r| \cdot \frac{\beta_{G,S}(r)^{q'\cdot k}}{|\overline Z_r|^{(y-\delta) \cdot q'}}
\cdot \Phi(r)
\\
&
\leq \sum_{r\in \mathbb{N}_{>0}} \frac{\beta_{G,S}(r)^{k+q'\cdot k}}
{|\overline Z_r|^{(y-\delta) \cdot q' -1}}
\cdot \Phi(r).
\end{align*}
Because $(y- \delta) \cdot q' -1 > 0$, the same argument as in the proof
of Estimate~\eqref{eq:Bnormpq} shows that the first factor (eventually) decreases at least
exponentially in~$r$, while $\Phi$ grows only polynomially in~$r$. Therefore,
this series is convergent; let $A_2$ be the value of this series.
The first part of Lemma~\ref{lem:normcontrol}
shows that $\pi_* w$ is $q'$-summable and that
\[ \| \pi_* w \|_{q'} \leq A_2^{1/q^\prime}.
\]
Therefore, the second part of Lemma~\ref{lem:normcontrol} shows that
\begin{align*}
\bigl\|\pi_*(f \cdot w)\bigr\|_p
& \leq \| \pi_* f \|_q \cdot \|\pi_* w\|_{q'}
\leq A_2^{1/q^\prime} \cdot \| \pi_* f\|_q.
\end{align*}
It hence remains to provide a suitable estimate for~$\|\pi_*f\|_q$.
Using Lemma~\ref{lem:normcontrol}, we obtain
\begin{align*}
\| \pi_*f\|_q^q
& \leq \sum_{i \in I_k} \beta(i)^q \cdot \bigl|f(i)\bigr|^q
\\
& \leq \sum_{(z,g) \in I_k}
\frac{\beta_{G,S}(\operatorname{diam}_S (g))^{q \cdot k}}
{|Z_g|^{q \cdot x + q \cdot \delta}} \cdot |a_g|^q \cdot \varphi(\operatorname{diam}_S g)
\\
& \leq 2 \cdot \sum_{g \in G^k}
\frac{\beta_{G,S}(\operatorname{diam}_S (g))^{q \cdot k}}{|Z_g|^{1 + q \cdot \delta}} \cdot |Z_g| \cdot |a_g|^q \cdot \operatorname{diam}_S(g)^N
\\
& = 2 \cdot \sum_{g \in G^k}
\frac{\beta_{G,S}(\operatorname{diam}_S (g))^{q \cdot k}}{|Z_g|^{q \cdot \delta}} \cdot |a_g|^q \cdot \operatorname{diam}_S(g)^N\,.
\end{align*}
Again, because $q \cdot \delta > 0$, we see as in the proof of the Estimate~\eqref{eq:Bnormpq}
that the first factor is bounded, say by~$A_1$. Then,
\[ \|\pi_* f\|_q^q
\leq 2 \cdot A_1 \cdot \sum_{g \in G^k} |a_g|^q \cdot \operatorname{diam}_S(g)^N
= 2 \cdot A_1 \cdot \bigl(\|c\|_{N,q}^S\bigr)^q\,.
\]
This completes the proof of Proposition~\ref{prop:EBconstruction}
and hence of Theorem~\ref{thm:pqcompgeneral}.
\end{proof}
\section{A vanishing result in degree~\texorpdfstring{$1$}{1}}\label{sec:vanishing}
We have the following vanishing result for the free group~$F_2$ of rank~$2$:
\begin{thm}\label{thm:f2vanishing}
Let $p \in (1,\infty)$. Then the canonical homomorphism~$H_1(F_2;\mathbb{C})
\longrightarrow H_1^p(F_2)$ is the zero map.
\end{thm}
\begin{proof}
Let $S := \{\alpha,\beta\}$ be a free generating set of the free group~$F_2$ of
rank~$2$. In this proof, all distances, diameters, norms, etc.\ will be taken with
respect to this generating set~$S$.
Before starting with the actual proof, we perform the following
reductions:
\begin{itemize}
\item
In view of Theorem~\ref{thm:pqcompgeneral}, it suffices to prove
Theorem~\ref{thm:f2vanishing} for~$p>2$.
\item
Because~$H_1(F_2;\mathbb{C})$ is generated by the homology classes corresponding
to the cycles~$[e,\alpha]$ and $[e,\beta]$, it suffices to show that
the classes in~$H^p_1(F_2)$ represented by~$[e,\alpha]$ and $[e,\beta]$ are trivial.
\item
Since the classes represented by~$[e,\alpha]$ and $[e,\beta]$ only differ by an
isometric automorphism of~$F_2$, it suffices to prove the vanishing for~$[e,\alpha]$.
\end{itemize}
To this end, we will construct an explicit chain~$b$ in~$C^p_2(F_2)$ whose
boundary is~$[e,\alpha]$.
The geometric idea for the construction of such a $2$-chain~$b$ is
to start with two $2$-simplices with coefficient~$1/2$ that
contain~$[e,\alpha]$ as an edge; inductively, we then choose two
$2$-simplices with halved coefficients that contain the new
edges~\dots\ (Figure~\ref{fig:f2vanishing}).
The resulting infinite chain will converge in the $\ell^p$-setting
because the coefficients are distributed over enough summands.
The main technical difficulty is to ensure that the
weights are genuinely spread out, so that they do not accumulate on
individual simplices through accidental coincidences. This will be
achieved by a careful selection of markers and suffixes that encode
the induction level and the two different choices made at each stage.
\begin{figure}
\begin{center}
\begin{tikzpicture}[thick]
\filldraw[fill=black,fill opacity=0.1] (0,0) -- (0,2) -- (4,0) -- cycle;
\filldraw[fill=black,fill opacity=0.1] (0,0) -- (0,2) -- (4,2) -- cycle;
%
\gvertx{(4,0)}
\gvertx{(4,2)}
\gvertx{(0,0)}
\gvertx{(0,2)}
%
\draw (-0.5,0) node {$e$};
\draw (-0.5,2) node {$x$};
\draw (4.3,0) node[anchor=west] {$xm(x)t_d$};
\draw (4.3,2) node[anchor=west] {$xm(x)s_d$};
\draw (2,-0.5) node {$t(x)$};
\draw (2,2.5) node {$s(x)$};
\end{tikzpicture}
\end{center}
\caption{For each edge~$[e,x]$, we choose two $2$-simplices that
contain this edge (and halve the coefficients).}
\label{fig:f2vanishing}
\end{figure}
We will describe the construction of~$b$ in a top-down manner, first giving
the final formula and then explaining all the ingredients: For~$D\in \mathbb{N}$, we
set
\[ b(D) := \sum_{d=0}^D \sum_{x \in W(d)} \frac1{2^{d+1}} \cdot
\varepsilon(x) \cdot \bigl(s(x) + t(x)\bigr) \in C_2(F_2;\mathbb{C}).
\]
We will then show that the sequence~$(b(D))_{D \in \mathbb{N}}$ converges to a chain
\[ b := \lim_{D \rightarrow \infty} b(D) \in C^p_2(F_2)
\]
that satisfies~$\partial b = [e,\alpha]$. But first we have to explain
the ingredients of~$b(D)$: To this end, we define (by mutual recursion)
the subsets~$W(d) \subset F_2$ (keeping track of the set of edges),
the suffixes~$s_d, t_d \in F_2$, the markers~$m(x) \in F_2$
and the $2$-simplices~$s(x)$ and~$t(x)$:
\begin{itemize}
\item For each~$d \in \mathbb{N}$, we set~$s_d := \alpha^d\beta^d$ and $t_d := \beta^d \alpha^d \in F_2$.
\item We set~$W(0) := \{\alpha\}$ (and $m(\alpha) := e$)
and for~$d \in \mathbb{N}_{\geq 1}$, we let
\[ W(d) := \bigcup_{x\in W(d-1)} W(x,d) \subset F_2,
\]
where (for each~$x \in W(d-1)$)
\[ W(x,d) := \{ x m(x) s_d, m(x)s_d, xm(x)t_d, m(x) t_d\} \subset F_2.
\]
\item
Inductively, we see that $|W(d)| \leq 4^d$ for all~$d \in \mathbb{N}$. We
can thus choose an injection~$m \colon W(d) \longrightarrow
\{\alpha, \beta\}^{2\cdot d}$ and view the words~$m(x)$ with~$x
\in W(d)$ as elements of~$F_2$.
\item
For~$d \in \mathbb{N}$ and $x \in W(d)$, we set
\[ s(x) := \bigl[e, x, xm(x)s_d \bigr], \quad
t(x) := \bigl[e, x, xm(x)t_d \bigr] \in C_2(F_2;\mathbb{C}).
\]
\item Finally, the signs~$\varepsilon(\dots)$ are defined as follows:
We set~$\varepsilon(\alpha) :=1$; for $d \in \mathbb{N}_{>0}$
and~$x \in W(d-1)$, we set
\begin{align*}
\varepsilon(m(x) s_d) & := - \varepsilon(x)
\\
\varepsilon(m(x) t_d) & := - \varepsilon(x)
\\
\varepsilon(xm(x) s_d) & := \varepsilon(x)
\\
\varepsilon(xm(x) t_d) & := \varepsilon(x).
\end{align*}
\end{itemize}
By construction, all elements of~$W(d)$ consist of \emph{non-negative}
powers of~$\alpha$ and $\beta$ and no cancellations occur in the
definitions above. Therefore, $s$, $t$, and $\varepsilon$ are
well-defined. Moreover, the construction of the edge sets~$W(d)$ is justified
by the following observation: For each~$d \in \mathbb{N}$ and each~$x \in W(d)$,
we have
\begin{align*}
\partial \bigl( s(x) + t(x) \bigr)
& = \partial \bigl( [e,x,xm(x)s_d] + [e,x,xm(x)t_d]\bigr)
\\
& = [e,m(x)s_d] - [e,xm(x)s_d] + [e,x]
+ [e,m(x)t_d] - [e,xm(x)t_d] + [e,x].
\end{align*}
In order to prove convergence of~$(b(D))_{D \in \mathbb{N}}$ and $(\partial
b(D))_{D \in \mathbb{N}}$, we need to estimate the diameters of the
simplices involved: For~$d \in \mathbb{N}$ and $x \in W(d)$, we have
\[ \operatorname{diam} s(x) \leq d(e,x) + 4 \cdot d,
\quad
\operatorname{diam} t(x) \leq d(e,x) + 4 \cdot d;
\]
inductively, we obtain for $x \in W(d)$
\[d(e,x) \in O(d^2)\]
and therefore
\[ \operatorname{diam} s(x),\quad \operatorname{diam} t(x) \in O(d^2).
\]
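The combinatorial claims just used (the bound $|W(d)|\leq 4^d$, the distinctness of the constructed words, and the quadratic growth of word lengths) can be checked mechanically for small~$d$. The following sketch is illustrative only: words over $\{a,b\}$ stand in for positive words in~$\alpha,\beta$, and the injection~$m$ is realized by one concrete choice, namely binary-encoding an enumeration index.

```python
# Illustrative sanity check (not part of the proof): build the sets W(d)
# from the recursion above, with the letters a, b standing in for the
# generators alpha, beta.

def build_levels(depth):
    levels = [["a"]]                       # W(0) = {alpha}
    markers = {"a": ""}                    # m(alpha) = e
    for d in range(1, depth + 1):
        s_d = "a" * d + "b" * d            # suffix s_d = alpha^d beta^d
        t_d = "b" * d + "a" * d            # suffix t_d = beta^d alpha^d
        new = []
        for x in levels[d - 1]:
            m = markers[x]
            new += [x + m + s_d, m + s_d, x + m + t_d, m + t_d]
        # one admissible injection m: W(d) -> {alpha, beta}^{2d}
        for i, y in enumerate(new):
            bits = format(i, "0" + str(2 * d) + "b")
            markers[y] = bits.replace("0", "a").replace("1", "b")
        levels.append(new)
    return levels

levels = build_levels(6)
for d, W in enumerate(levels):
    assert len(W) <= 4 ** d                          # |W(d)| <= 4^d
    assert len(set(W)) == len(W)                     # all words distinct
    assert all(len(x) <= 2 * d * d + 1 for x in W)   # d(e, x) in O(d^2)
```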
We now give the convergence arguments:
\begin{itemize}
\item The sequence~$(b(D))_{D \in \mathbb{N}}$ is Cauchy with respect to~$\|-\|_{n,p}$:
Let $D, D' \in \mathbb{N}$ with~$D' > D \geq 0$ and let $n \in \mathbb{N}$. By construction, we
have
\begin{align*}
b(D') - b(D)
& = \sum_{d=D+1}^{D'} \sum_{x \in W(d)} \frac1{2^{d+1}} \cdot \varepsilon(x) \cdot \bigl(s(x) + t(x)\bigr).
\end{align*}
The markers and suffixes ensure that all of these $2$-simplices are distinct (so
no accumulation of coefficients occurs). Therefore,
\begin{align*}
\bigl\| b(D') - b(D) \bigr\|_{n,p}
& \leq \sum_{d=D+1}^{D'} \frac{|W(d)|}{(2^{d+1})^p} \cdot O(d^{2n})
\\
& \leq \sum_{d=D+1}^{D'} \frac{4^d}{(2^{d+1})^p} \cdot O(d^{2n}).
\end{align*}
Because $p > 2$, the series on the right-hand side converges; hence its
tails tend to~$0$, and so $(b(D))_{D \in \mathbb{N}}$ is a Cauchy
sequence with respect to~$\|-\|_{n,p}$.
\item The sequence~$(\partial b(D))_{D \in \mathbb{N}}$ is Cauchy with respect to~$\|-\|_{n,p}$:
Let $D ,D' \in \mathbb{N}$ with~$D' > D \geq 0$ and let $n \in \mathbb{N}$. By construction, we have
\begin{align*}
\partial b(D') - \partial b(D)
& = \sum_{d = D+1}^{D'} \sum_{x\in W(d)} \frac1{2^{d+1}} \cdot \varepsilon(x)
\cdot \partial \bigl(s(x) + t(x)\bigr)
\\
& = \sum_{x \in W(D+1)} \frac1{2^{D+1}} \cdot \varepsilon(x) \cdot [e,x] - \sum_{y \in W(D'+1)} \frac1{2^{D'+1}} \cdot \varepsilon(y) \cdot [e,y].
\end{align*}
The markers and suffixes ensure that all of these $1$-simplices are distinct (so
no accumulation of coefficients occurs). Therefore,
\begin{align*}
\bigl\| \partial b(D') - \partial b(D) \bigr\|_{n,p}
& \leq \frac{|W(D' +1)|}{(2^{D'+1})^p} \cdot O(D'{}^{2n})
+ \frac{|W(D+1)|}{(2^{D+1})^p} \cdot O(D{}^{2n})
\\
& \leq \frac{4^{D'+1}}{(2^{D'+1})^p} \cdot O(D'{}^{2n})
+ \frac{4^{D+1}}{(2^{D+1})^p} \cdot O(D{}^{2n}).
\end{align*}
Because $p > 2$, these terms converge to~$0$ for~$D,D' \rightarrow \infty$.
\end{itemize}
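To see the role of the assumption $p>2$ concretely, one can compute partial sums of the dominating series $\sum_d 4^d/2^{(d+1)p}\cdot d^{2n}$ numerically. The following sketch is purely illustrative; the truncation points are arbitrary.

```python
# Illustrative numerics (not part of the proof): partial sums of the
# series  sum_d 4^d / 2^{(d+1) p} * d^{2n}  dominating both Cauchy
# estimates above.

def partial_sums(p, n, D):
    sums, total = [], 0.0
    for d in range(1, D + 1):
        total += 4.0 ** d / 2.0 ** ((d + 1) * p) * d ** (2 * n)
        sums.append(total)
    return sums

converging = partial_sums(2.5, 1, 60)   # p > 2: geometric decay wins
diverging = partial_sums(2.0, 1, 60)    # p = 2: terms grow like d^2 / 4

assert converging[-1] - converging[-10] < 1e-2   # tails are negligible
assert diverging[-1] - diverging[-10] > 1.0      # sums keep growing
```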
Thus, we have established that $b = \lim_{D \rightarrow \infty} b(D) \in
C^p_2(F_2)$ is a well-defined chain. By a computation similar to the previous
one, carried out for~$\partial b(D') - \partial b(0)$, we obtain
\[ \partial b = [e,\alpha],
\]
as claimed.
\end{proof}
\section{Introduction}
\input{section/1_intro}
\section{Related Work}\label{sec:related_work}
\input{section/related_work}
\section{Problem Setup}\label{sec:model}
\input{section/2_problem_setup}
\section{Learning from Explicit Policies}\label{sec:policy-level}
\input{section/3_alg_policy}
\section{Learning from Representative Sets}\label{sec:trajectory-level}
\input{section/4_alg_trajectory}
\input{section/app_disuccion}
\input{section/ack}
\section{Proof of Lemma~\ref{lmm:gap-hatw-w}}\label{app:gap-hatw-w}
\lmmhatww*
\begin{proof}
We first introduce some notation.
For any $x,y\in\mathbb{R}^k$, let $\theta(x,y)$ denote the angle between $x$ and $y$.
For any subspace $U\subset \mathbb{R}^k$, let $\theta(x,U) := \min_{y\in U}\theta(x,y)$.
For any two subspaces $U,U'\subset \mathbb{R}^k$, let $\theta(U,U') := \max_{x\in U} \min_{y\in U'}\theta(x,y)$.
For any matrix $M$, let $M_i$ denote the $i$-th row vector of $M$, let $M_{i:j}$ denote the submatrix of $M$ composed of rows $i,i+1,\ldots,j$, and let $M_{i:}$ denote the submatrix composed of all rows $j\geq i$.
Let $\mathrm{span}(M)$ denote the span of the row vectors of $M$.
Recall that $\hat A\in \mathbb{R}^{d\times k}$ is defined in Eq~\eqref{eq:Ahat} as
\begin{align*}
\hat A = \begin{pmatrix}
V^{\pi_1\top}\\
(\hat \alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\hat \alpha_{d-1} V^{\pi_1} -V^{\pi_d})^\top
\end{pmatrix}\,,
\end{align*}
which is the approximation of matrix $A\in \mathbb{R}^{d\times k}$ defined by true values of $\alpha_i$, i.e.,
\begin{align*}
A = \begin{pmatrix}
V^{\pi_1\top}\\
(\alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\alpha_{d-1} V^{\pi_1} -V^{\pi_d})^\top
\end{pmatrix}\,.
\end{align*}
Let $\hat A^{(\delta)} = \hat A_{1:d_\delta}, A^{(\delta)}=A_{1:d_\delta}\in \mathbb{R}^{d_\delta\times k}$ be the sub-matrix comprised of the first $d_\delta$ rows of $\hat A$ and $A$ respectively.
\lmmspanangle*
We will use the following lemma by~\cite{Balcan2015} to prove Lemma~\ref{lmm:spanangle}.
\begin{lemma}[Lemma 3 of~\cite{Balcan2015}]\label{lmm:Balcan}
Let $U_l = \mathrm{span}(\xi_1,\ldots,\xi_l)$ and $\hat U_l = \mathrm{span}(\hat \xi_1,\ldots,\hat \xi_l)$.
Let $\epsilon_{\text{acc}}, \gamma\geq 0$ and $\epsilon_{\text{acc}}\leq \gamma^2/(10l)$.
Assume for $i=2,\ldots, l$ that $\theta(\hat \xi_i, \hat U_{i-1})\geq \gamma$, and for $i=1,\ldots, l$, $\theta(\xi_{i}, \hat \xi_{i})\leq \epsilon_{\text{acc}}$. Then
$$\theta(U_l, \hat U_l)\leq 2l\frac{\epsilon_{\text{acc}}}{\gamma}\,.$$
\end{lemma}
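For intuition, the angle $\theta(U,\hat U)$ controlled by this lemma can also be computed numerically: for subspaces of equal dimension it coincides with the largest principal angle, which can be read off the singular values of the product of orthonormal bases. The following sketch is illustrative only; the function name and test data are ours, not from~\cite{Balcan2015}.

```python
import numpy as np

def subspace_angle(A, B):
    """Largest principal angle between the row spans of A and B
    (for equal-dimensional spans this is theta(span A, span B))."""
    Q_a = np.linalg.qr(A.T)[0]            # orthonormal basis of span(A)
    Q_b = np.linalg.qr(B.T)[0]
    sigma = np.linalg.svd(Q_a.T @ Q_b, compute_uv=False)
    # the smallest cosine corresponds to the largest principal angle
    return float(np.arccos(np.clip(sigma.min(), -1.0, 1.0)))

U = np.eye(2, 5)                                      # span{e_1, e_2}
V = np.array([[0, 1, 0, 0, 0], [0, 0, 1, 0, 0.]])     # span{e_2, e_3}
assert abs(subspace_angle(U, V) - np.pi / 2) < 1e-9

# perturbing the spanning rows slightly tilts the span only slightly
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
assert subspace_angle(A, A + 1e-6 * rng.standard_normal((2, 5))) < 1e-4
```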
\begin{proof}[Proof of Lemma~\ref{lmm:spanangle}]
For all $2\leq i \leq d_\delta$, we have that
\begin{align*}
&\theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{2:i-1})) \geq \theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{1:i-1})) \geq \sin( \theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{1:i-1})))\\ \stackrel{(a)}{\geq}&\frac{\abs{\hat A^{(\delta)}_i \cdot u_i}}{\norm{\hat A^{(\delta)}_i}_2}\stackrel{(b)}{\geq}
\frac{\delta}{\norm{\hat A^{(\delta)}_i}_2} = \frac{\delta}{\norm{\hat \alpha_{i-1} V^{\pi_1} -V^{\pi_i}}_2} \geq \frac{\delta}{(C_{\alpha}+1)C_V}
\,,
\end{align*}
where Ineq~(a) holds as $u_i$ is orthogonal to $\mathrm{span}(\hat A^{(\delta)}_{1:i-1})$ according to line~\ref{alg-line:orthonormal-basis} of Algorithm~\ref{alg:policy-threshold-free} and Ineq~(b) holds due to the definition of $d_\delta$.
The last inequality holds due to $\norm{\hat \alpha_{i-1} V^{\pi_1} -V^{\pi_i}}_2 \leq \abs{\hat \alpha_{i-1}} \norm{V^{\pi_1}}_2+\norm{V^{\pi_i}}_2 \leq (C_{\alpha}+1)C_V$.
Similarly, we also have
\[\theta(A^{(\delta)}_i, \mathrm{span}(A^{(\delta)}_{2:i-1}))\geq \frac{\delta}{(C_{\alpha}+1)C_V}\,.\]
We decompose $V^{\pi_i}$ into its component in the direction of $V^{\pi_1}$ and the component perpendicular to $V^{\pi_1}$. Denote $v_i^\parallel := V^{\pi_i}\cdot \frac{V^{\pi_1}}{\norm{V^{\pi_1}}_2}$, $V_i^\parallel := v_i^\parallel \frac{V^{\pi_1}}{\norm{V^{\pi_1}}_2}$, $V_i^\perp := V^{\pi_i} -V_i^\parallel$, and $v_i^\perp := \norm{V_i^\perp}_2$.
Then we have
\[\theta( A^{(\delta)}_i,\hat A^{(\delta)}_i) = \theta(\alpha_{i-1} V^{\pi_1} -V^{\pi_i},\hat\alpha_{i-1} V^{\pi_1} -V^{\pi_i})= \theta(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel- V_i^\perp,\hat\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel- V_i^\perp)\,.\]
If $(\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel)\cdot(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel) \geq 0$, i.e., $\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel $ and $\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel$ point in the same direction, then
\begin{align}
\theta(A^{(\delta)}_i,\hat A^{(\delta)}_i)
&= \abs{\arctan \frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}-\arctan \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} }\nonumber\\
& \leq \abs{\frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} - \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}}\label{eq:arctan}\\
&\leq\frac{\abs{\hat \alpha_{i-1} - \alpha_{i-1}}\norm{V^{\pi_1}}_2}{v_i^\perp}\nonumber\\
&\leq \frac{\epsilon_{\alpha} C_V}{\delta}\,,\label{eq:applytau}
\end{align}
where Ineq~\eqref{eq:arctan} follows from the fact that the derivative of $\arctan$ is at most 1, i.e., $\partial \arctan x/\partial x = \frac{1}{1+x^2}\leq 1$.
Ineq~\eqref{eq:applytau} holds since $v_i^\perp \geq \abs{\inner{V_i}{u_i}}\geq \delta$.
If $(\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel)\cdot(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel) < 0$, i.e., $\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel $ and $\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel$ point in opposite directions, then we have $\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2+\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2\leq \epsilon_{\alpha} \norm{V^{\pi_1}}_2$. Similarly, we have
\begin{align*}
\theta(\hat A^{(\delta)}_i, A^{(\delta)}_i)
&= \abs{\arctan \frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}+\arctan \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} } \\
& \leq \abs{\frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} + \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}}\\
&\leq\frac{\epsilon_{\alpha} \norm{V^{\pi_1}}_2}{\abs{v_i^\perp}}\nonumber\\
&\leq \frac{\epsilon_{\alpha} C_V}{\delta}\,.
\end{align*}
By Lemma~\ref{lmm:Balcan} (whose hypothesis $\epsilon_{\text{acc}}\leq \gamma^2/(10l)$ is satisfied here), we have that when
$\delta \geq 10^{\frac{1}{3}}(C_{\alpha}+1)^{\frac{2}{3}}C_V d_\delta^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$,
\begin{equation*}
\theta(\mathrm{span}(A^{(\delta)}_{2:}), \mathrm{span}(\hat A^{(\delta)}_{2:})) \leq \frac{2d_\delta(C_{\alpha}+1) C_V^2\epsilon_{\alpha} }{\delta^2}=\eta_{\epsilon_{\alpha},\delta}\,,
\end{equation*}
and
\begin{equation*}
\theta(\mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq
\eta_{\epsilon_{\alpha},\delta}\,.
\end{equation*}
This completes the proof of Lemma~\ref{lmm:spanangle}.
\end{proof}
Let $b_1,\ldots,b_{d_\delta-1}$ be any orthonormal basis of $\mathrm{span}(\hat A^{(\delta)}_{2:})$.
We construct a vector $\hat w^{(\delta)}$ satisfying $\hat A^{(\delta)}\hat w^{(\delta)} ={\bf e}_1$ by removing the component of $w'$ in $\mathrm{span}(\hat A^{(\delta)}_{2:})$ and rescaling.
Specifically, let
\begin{equation}\label{eq:createwdelta}
\hat w^{(\delta)} = \frac{ w'- \sum_{i=1}^{{d_\delta}-1} \inner{w'}{b_i} b_i}{1-\inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{b_i} b_i)}}\,.
\end{equation}
Since $A^{(\delta)}w' = {\bf e}_1$,
it is straightforward to check that $\inner{\hat A^{(\delta)}_1}{\hat w^{(\delta)}} = \inner{V^{\pi_1}}{\hat w^{(\delta)}} = 1$ and $\hat A^{(\delta)}_i \cdot \hat w^{(\delta)} = 0$ for $i=2,\ldots,d_\delta$, i.e., $\hat A^{(\delta)}\hat w^{(\delta)} ={\bf e}_1$.
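This verification can also be replayed numerically. The following sketch is purely illustrative: random vectors stand in for the value vectors, and the small perturbation of the lower rows mimics the error in the estimated coefficients.

```python
import numpy as np

# Illustrative check (random data standing in for the value vectors):
# starting from w' with A^{(delta)} w' = e_1, projecting out the span of
# rows 2..d_delta of the perturbed matrix and rescaling, as in
# Eq. (createwdelta), yields w_hat with A_hat^{(delta)} w_hat = e_1.

rng = np.random.default_rng(1)
k, d_delta = 6, 4
V1 = rng.standard_normal(k)                 # stands in for V^{pi_1}

# A and A_hat share the first row; the remaining rows differ slightly
A = np.vstack([V1, rng.standard_normal((d_delta - 1, k))])
A_hat = A.copy()
A_hat[1:] += 1e-3 * rng.standard_normal((d_delta - 1, k))

e1 = np.zeros(d_delta)
e1[0] = 1.0
w_prime = np.linalg.lstsq(A, e1, rcond=None)[0]   # A w' = e_1 (full row rank)

# orthonormal basis b_1, ..., b_{d_delta - 1} of span(A_hat rows 2:)
B = np.linalg.qr(A_hat[1:].T)[0]
proj = B @ (B.T @ w_prime)                  # component of w' in that span
w_hat = (w_prime - proj) / (1.0 - V1 @ proj)

assert np.allclose(A @ w_prime, e1, atol=1e-6)
assert np.allclose(A_hat @ w_hat, e1, atol=1e-6)
```

The rescaling denominator is exactly the quantity $1-\gamma$ analyzed next, which is why it must stay bounded away from $0$.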
Let $\gamma := \inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{b_i} b_i)}$; we now show that $\gamma$ is small.
Since $\theta( \mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}$, there exist unit vectors $\tilde b_1,\ldots,\tilde b_{d_\delta-1}\in \mathrm{span}(A^{(\delta)}_{2:})$ such that $\theta(b_i,\tilde b_i)\leq \eta_{\epsilon_{\alpha},\delta}$.
Since $w'$ has zero component in $\mathrm{span}(A^{(\delta)}_{2:})$, it has only a small component in $\mathrm{span}(\hat A^{(\delta)}_{2:})$.
In particular,
\begin{align*}
\abs{\inner{w'}{b_i}}= \abs{\inner{w'}{b_i-\tilde b_i}} \leq \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,,
\end{align*}
which implies that
\begin{align*}
\norm{\sum_{i=1}^{{d_\delta}-1} \inner{w'}{b_i} b_i}_2\leq \sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,.
\end{align*}
Hence we have that $\abs{\gamma} \leq \sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} C_V$.
Besides, for any policy $\pi$, we also have \begin{equation*}
\abs{\sum_{i=1}^{d_\delta-1} \inner{w'}{b_i} \inner{b_i}{V^\pi}} \leq \sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} C_V\,.
\end{equation*}
Hence
\begin{align}
\abs{\inner{\hat w^{(\delta)}}{V^{\pi}} - \inner{w'}{V^{\pi}}}& \leq \abs{\inner{\hat w^{(\delta)}}{V^{\pi}} - \frac{1}{1-\gamma} \inner{w'}{V^{\pi}}} + \abs{\frac{1}{1-\gamma} \inner{w'}{V^{\pi}}- \inner{w'}{V^{\pi}}}\nonumber\\
&\leq \frac{1}{1-\gamma}\abs{\sum_{i=1}^{d_\delta-1} \inner{w'}{b_i} \inner{b_i}{V^{\pi}}} +\frac{\gamma \norm{w'}_2 C_V}{1-\gamma}\nonumber\\
&\leq \frac{\sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} C_V}{1-\gamma} + \frac{\gamma \norm{w'}_2 C_V}{1-\gamma}\nonumber\\
& =\cO(\norm{w'}_2^2 \frac{d_\delta^{\frac{3}{2}}(C_{\alpha}+1)C_V^4\epsilon_{\alpha} }{\delta^2})\label{eq:lmm2}
\end{align}
provided that $\sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} C_V= \cO(1)$, which holds when $\epsilon_{\alpha}$ is small enough.
\end{proof}
\section{Proof of Theorem~\ref{thm:est-without-tau}}
\thmwithouttau*
\begin{proof}
Theorem~\ref{thm:est-without-tau} follows by setting $\epsilon_{\alpha} = \frac{4k^2\epsilon}{v^*}$ and $C_\alpha = 2k$, as shown in Lemma~\ref{lmm:V1}, and combining the results of Lemmas~\ref{lmm:gap-hatw-w} and~\ref{lmm:est-without-tau} via the triangle inequality.
Specifically, for any policy $\pi$, we have
\begin{align}\label{eq:Lem1Helper}
\begin{split}
\;\abs{\inner{\hat w}{V^{\pi}} - \inner{w'}{V^{\pi}}} \leq & \abs{\inner{\hat w}{V^{\pi}} - \inner{\hat w^{(\delta)}}{V^{\pi}}}+\abs{\inner{\hat w^{(\delta)}}{V^{\pi}} - \inner{w'}{V^{\pi}}} \\
\leq &\cO((\sqrt{k} +1)^{d-d_\delta} C_{\alpha} (\frac{C_{\alpha}C_V^4d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2}+\sqrt{k} \delta \norm{w'}_2))\\
\leq & \cO((\sqrt{k} +1)^{d-d_\delta +3}\norm{w'}_2 (\frac{C_V^4 k^4 \norm{w'}_2\epsilon}{v^*\delta^2}+ \delta))\,.
\end{split}
\end{align}
Since $\norm{w'} = \frac{\norm{w^*}}{\inner{w^*}{V^{\pi_1}}}$ and $\inner{w^*}{V^{\pi_1}} \geq \frac{v^*}{2k}$ from
Lemma~\ref{lmm:V1},
we derive
\begin{align*}
&v^*- \inner{w^*}{V^{\pi^{\hat w}}} = \inner{w^*}{V^{\pi_1}} \left(\inner{w'}{V^{\pi^*}} - \inner{w'}{V^{\pi^{\hat w}}}\right)
\\
\leq& \inner{w^*}{V^{\pi_1}}\left(\inner{\hat w}{V^{\pi^*}} - \inner{\hat w}{V^{\pi^{\hat w}}} + \cO((\sqrt{k} +1)^{d-d_\delta +3}\norm{w'}_2 (\frac{C_V^4 k^4 \norm{w'}_2\epsilon}{v^*\delta^2}+ \delta)) \right)
\\
\leq& \cO\left(\inner{w^*}{V^{\pi_1}}(\sqrt{k} +1)^{d-d_\delta +3}\norm{w'}_2 (\frac{C_V^4 k^4 \norm{w'}_2\epsilon}{v^*\delta^2}+ \delta)\right)\\
=& \cO\left((\sqrt{k} +1)^{d-d_\delta +3}\norm{w^*}_2 (\frac{C_V^4 k^5 \norm{w^*}_2\epsilon}{v^{*2}\delta^2}+ \delta)\right)\\
=& \cO\left((\frac{C_V^2 \norm{w^*}_2^2}{v^*})^\frac{2}{3}(\sqrt{k} +1)^{d+ \frac{16}{3}} \epsilon^\frac{1}{3}\right)\,.
\end{align*}
The first inequality follows from
\[
\inner{w'}{V^{\pi^*}} - \inner{w'}{V^{\pi^{\hat w}}}= \inner{w'}{V^{\pi^*}} +\left(\inner{\hat w}{V^{\pi^*}}-\inner{\hat w}{V^{\pi^*}}\right)+\left(\inner{\hat w}{V^{\pi^{\hat w}}}-\inner{\hat w}{V^{\pi^{\hat w}}}\right)- \inner{w'}{V^{\pi^{\hat w}}},
\]
and applying \eqref{eq:Lem1Helper} twice: once for $\pi^*$ and once for $\pi^{\hat w}$. The last inequality follows by setting $\delta = \left(\frac{C_V^4k^5 \norm{w^*}_2\epsilon}{v^{*2}}\right)^{\frac{1}{3}}$.
\end{proof}
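The choice of $\delta$ in the last step balances the two error contributions of the form $c/\delta^2$ and $\delta$. This balancing can be sanity-checked numerically; the sketch below is illustrative only, with a generic constant $c$ in place of the problem-specific one.

```python
# Illustrative check of the parameter balancing used above: for
# f(delta) = c / delta^2 + delta, the choice delta = c^{1/3} equalizes
# the two terms and is within a constant factor of the minimum.

def f(delta, c):
    return c / delta ** 2 + delta          # the two error contributions

for c in [1e-6, 1e-3, 1.0]:
    delta = c ** (1 / 3)                   # the balancing choice
    # both terms agree at the balancing point ...
    assert abs(c / delta ** 2 - delta) <= 1e-12 * max(1.0, delta)
    # ... and f(delta) is within a factor 2 of a brute-force grid minimum
    grid = [delta * 1.1 ** i for i in range(-40, 41)]
    assert f(delta, c) <= 2.0 * min(f(x, c) for x in grid) + 1e-12
```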
\section{Proof of Theorem~\ref{thm:traj-compression}}\label{app:traj-compression}
\trajcompress*
\begin{proof}
\textbf{Correctness:}
C4 guarantees that $\abs{\Qsup{H}}\leq k+1$.
We will prove $\sum_{\tau \in \Qsup{H}} \betasup{H}_\tau \Phi(\tau)=V^\pi$ by induction on $t=1,\ldots,H$.
Recall that for any trajectory $\tau$ of length $h$, $J(\tau)=\Phi(\tau) + V(s^\tau_h, H-h)$ was defined as the expected return of trajectories (of length $H$) with the prefix being $\tau$.
In addition, recall that $J_{\Qsup{t}} = \{J(\tau\circ s)|\tau\in \Qsup{t}, s\in \mathcal{S}\}$ and $p_{\Qsup{t},\betasup{t}}$ was defined by letting $p_{\Qsup{t},\betasup{t}}(\tau\circ s) = \betasup{t}(\tau) P(s \vert s^\tau_t, \pi(s^\tau_t))$.
For the base case, we have that at $t=1$
\begin{align*}
V^\pi &=R(s_0,\pi(s_0)) + \sum_{s\in \mathcal{S}} P(s|s_0,\pi(s_0))V^\pi(s, H-1)
= \sum_{s\in \mathcal{S}} P(s|s_0,\pi(s_0)) J((s_0)\circ s)\\
&= \sum_{s\in \mathcal{S}} p_{\Qsup{0},\betasup{0}}((s_0)\circ s) J((s_0)\circ s) =\sum_{\tau\in \Qsup{1}} \betasup{1}(\tau)J(\tau)\,.
\end{align*}
Suppose that $V^\pi = \sum_{\tau'\in \Qsup{t}} \betasup{t}(\tau')J(\tau')$ holds at time $t$,
then we prove the statement holds at time $t+1$.
\begin{align*}
V^\pi&=\sum_{\tau'\in \Qsup{t}} \betasup{t}(\tau')J(\tau')=\sum_{\tau'\in \Qsup{t}} \betasup{t}(\tau')(\Phi(\tau') + V^\pi(s^{\tau'}_t, H-t)) \\
&=\sum_{\tau'\in \Qsup{t}} \betasup{t}(\tau')\left(\Phi(\tau') + \sum_{s\in \mathcal{S}}P(s|s^{\tau'}_t,\pi(s^{\tau'}_t))\left( R(s^{\tau'}_t, \pi(s^{\tau'}_t)) + V^\pi(s, H-t)\right)\right)\\
&= \sum_{\tau'\in \Qsup{t}}\sum_{s\in \mathcal{S}} \betasup{t}(\tau')P(s|s^{\tau'}_t,\pi(s^{\tau'}_t))\left(\Phi(\tau') + R(s^{\tau'}_t, \pi(s^{\tau'}_t)) + V^\pi(s, H-t)\right)\\
&= \sum_{\tau'\in \Qsup{t}}\sum_{s\in \mathcal{S}} \betasup{t}(\tau')P(s|s^{\tau'}_t,\pi(s^{\tau'}_t))\left(\Phi(\tau'\circ s) + V^\pi(s, H-t)\right)\\
&= \sum_{\tau'\in \Qsup{t}}\sum_{s\in \mathcal{S}} p_{\Qsup{t},\betasup{t}}(\tau'\circ s)J(\tau'\circ s)\\
&=\sum_{\tau\in \Qsup{t+1}} \betasup{t+1}(\tau)J(\tau)\,.
\end{align*}
By induction, the statement holds at $t=H$, i.e., $V^\pi= \sum_{\tau \in \Qsup{H}} \betasup{H}_\tau J(\tau) = \sum_{\tau \in \Qsup{H}} \betasup{H}_\tau \Phi(\tau)$.
\noindent\textbf{Computational complexity:} Solving $V^\pi(s,h)$ for all $s\in \mathcal{S}, h\in [H]$ takes time $\cO(kH\abs{\mathcal{S}}^2)$. In each round, we need to call C4 for $\leq (k+1)\abs{S}$ vectors, which takes $\cO(k^4\abs{\mathcal{S}})$ time. Thus, we need $\cO(k^4H\abs{\mathcal{S}} + kH\abs{\mathcal{S}}^2)$ time in total.
\end{proof}
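We do not reproduce C4 here. For intuition, the classical Carathéodory reduction step on which such a compression rests (repeatedly zeroing out a weight along a null direction of the moment constraints) can be sketched as follows; this is an illustrative implementation, not the paper's C4, and the function name and tolerances are ours.

```python
import numpy as np

def caratheodory(points, weights, tol=1e-12):
    """Reduce a convex combination of points in R^k to a sub-combination
    supported on at most k + 1 points with the same weighted sum."""
    P = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    idx = np.arange(len(w))
    k = P.shape[1]
    while len(idx) > k + 1:
        # nonzero c with sum(c) = 0 and sum_i c_i P_i = 0: a null vector
        # of the (k+1) x m matrix [P^T; 1^T]; it exists because m > k + 1
        M = np.vstack([P[idx].T, np.ones(len(idx))])
        c = np.linalg.svd(M)[2][-1]
        if not np.any(c > tol):
            c = -c                 # sum(c) = 0 and c != 0: both signs occur
        pos = c > tol
        t = np.min(w[idx][pos] / c[pos])   # largest step keeping w >= 0
        w[idx] -= t * c                    # at least one weight hits zero
        idx = idx[w[idx] > tol]
    return idx, w[idx]

rng = np.random.default_rng(2)
pts = rng.standard_normal((10, 3))
wts = rng.random(10)
wts /= wts.sum()

support, new_wts = caratheodory(pts, wts)
assert len(support) <= 4                   # k + 1 = 4 support points
assert np.all(new_wts > 0) and abs(new_wts.sum() - 1.0) < 1e-8
assert np.allclose(new_wts @ pts[support], wts @ pts, atol=1e-8)
```

Each pass through the loop removes at least one support point, which is consistent with the per-round cost accounting in the complexity analysis above.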
\section{Proofs of Lemma~\ref{lmm:gap-w-w'} and Lemma~\ref{lmm:min_norm_hatw_delta}}\label{app:tool-lmms}
\lmmwwprime*
\begin{proof}[Proof of Lemma~\ref{lmm:gap-w-w'}]
Since $\mathrm{span}(A^\textrm{(full)}) = \mathrm{span}(\{V^\pi|\pi\in \Pi\})$, for every policy $\pi$, the value vector can be represented as a linear combination of row vectors of $A^\textrm{(full)}$, i.e., there exists $a = (a_1,\dots,a_d)\in \mathbb{R}^d$ s.t.
\begin{align}\label{eq:lem4Helper}
V^\pi = \sum_{i=1}^d{a_i A^\textrm{(full)}_i} = A^{\textrm{(full)}\top} a\,.
\end{align}
Now, for any unit vector $\xi \in \mathrm{span}(b_1,\ldots,b_{d-d_\delta})$, we have $\inner{V^\pi}{\xi} \leq \sqrt{k} \delta$.
The reason is that at round~${d_\delta+1}$, we pick an orthonormal basis $\rho_1,\ldots,\rho_{k-{d_\delta}}$ of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{d_\delta}})^\perp$ (line~\ref{alg-line:orthonormal-basis} in Algorithm~\ref{alg:policy-threshold-free}) and pick $u_{d_\delta+1}$ to be the basis direction in which some policy has the largest component, as described in line~\ref{alg-line:largest-component}.
Hence, $\abs{\inner{\rho_j}{V^\pi}}\leq \delta$ for all $j\in [k-{d_\delta}]$.
It then follows from the Cauchy-Schwarz inequality that $\inner{\xi}{V^\pi}= \sum_{j=1}^{k-{d_\delta}} \inner{\xi}{\rho_j}\inner{\rho_j}{V^\pi} \leq \sqrt{k}\delta$.
Combining with the observation that $b_1,\ldots,b_{d-d_\delta}$ are pairwise
orthogonal
and that each of them is
orthogonal to $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{d_\delta}})$ we have $$\sum_{i={d_\delta+1}}^d a_i^2 = \abs{\inner{V^\pi}{\sum_{i=d_\delta+1}^d a_{i} b_{i-d_\delta}}} \leq \sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \sqrt{k}\delta\,,$$
which implies that
\begin{equation}
\sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \leq \sqrt{k}\delta\,.\label{eq:anorm}
\end{equation}
Since $w'$ satisfies $Aw' ={\bf e}_1$, we have $$A^\textrm{(full)} w' = (1,0,\ldots,0, \inner{b_1}{w'},\ldots,\inner{b_{d-d_\delta}}{w'})\,.$$
For any $w \in \mathbb{R}^k$ satisfying $A^\textrm{(full)} w = {\bf e}_1$, consider $\tilde w = w +\sum_{i= 1}^{d-d_\delta}\inner{b_{i}}{w'} b_{i}$.
Then we have $A^\textrm{(full)}\tilde w = A^\textrm{(full)}w'$.
Thus, applying (\ref{eq:lem4Helper}) twice, we get
\begin{align*}
\tilde w\cdot V^\pi = \tilde w^\top A^{\textrm{(full)}\top}a = w'^\top A^{\textrm{(full)}\top}a = w'\cdot V^\pi\,.
\end{align*}
Hence,
\begin{align*}
&\abs{w\cdot V^\pi- w'\cdot V^\pi} = \abs{w\cdot V^\pi- \tilde w\cdot V^\pi} \stackrel{(a)}=\abs{\sum_{i=1}^d a_i (w - \tilde w)\cdot A^\textrm{(full)}_i }\\
=& \abs{\sum_{i=d_\delta +1}^{d} a_i \inner{b_{i-d_\delta}}{w'}}
\stackrel{(b)}{\leq} \sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \norm{w'}_2 \stackrel{(c)}{\leq} \sqrt{k} \delta \norm{w'}_2\,,
\end{align*}
where Eq~(a) follows from (\ref{eq:lem4Helper}), inequality~(b) from Cauchy-Schwarz, and inequality~(c) from applying \eqref{eq:anorm}.
\end{proof}
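The linear-algebra step behind Eq~\eqref{eq:lem4Helper} (any two solutions of the same linear system have equal inner products with every vector in the row span of that system) can also be checked numerically; the sketch below is illustrative, with random data.

```python
import numpy as np

# Illustrative check: if A w1 = A w2, then v . w1 = v . w2 for every
# v in the row span of A, since w1 - w2 lies in ker(A).

rng = np.random.default_rng(3)
d, k = 4, 7
A = rng.standard_normal((d, k))
e1 = np.zeros(d)
e1[0] = 1.0

w1 = np.linalg.lstsq(A, e1, rcond=None)[0]    # one solution of A w = e_1
null_basis = np.linalg.svd(A)[2][d:]          # orthonormal basis of ker(A)
w2 = w1 + null_basis.T @ rng.standard_normal(k - d)   # another solution

v = A.T @ rng.standard_normal(d)              # an arbitrary v in rowspan(A)
assert np.allclose(A @ w2, e1, atol=1e-8)
assert abs(float(v @ w1 - v @ w2)) < 1e-8
```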
\lmmminnorm*
Before proving Lemma~\ref{lmm:min_norm_hatw_delta}, we introduce some notation and a claim.
\begin{itemize}
\item For any $x,y\in\mathbb{R}^k$, let $\theta(x,y)$ denote the angle between $x$ and $y$.
\item For any subspace $U\subset \mathbb{R}^k$, let $\theta(x,U) := \min_{y\in U}\theta(x,y)$.
\item For any two subspaces $U,U'\subset \mathbb{R}^k$, we define $\theta(U,U')$ as $\theta(U,U') = \max_{x\in U} \min_{y\in U'}\theta(x,y)$.
\item For any matrix $M$, let $M_i$ denote the $i$-th row vector of $M$, $M_{i:j}$ denote the submatrix of $M$ composed of rows $i,i+1,\ldots,j$, and $M_{i:}$ denote the submatrix composed of all rows $j\geq i$.
\item Let $\mathrm{span}(M)$ denote the span of the rows of $M$.
\end{itemize}
Recall that $\hat A\in \mathbb{R}^{d\times k}$ is defined in Eq~\eqref{eq:Ahat} as
\begin{align*}
\hat A = \begin{pmatrix}
V^{\pi_1\top}\\
(\hat \alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\hat \alpha_{d-1} V^{\pi_1} -V^{\pi_d})^\top
\end{pmatrix}\,,
\end{align*}
which is the approximation of matrix $A\in \mathbb{R}^{d\times k}$ defined by true values of $\alpha_i$, i.e.,
\begin{align*}
A = \begin{pmatrix}
V^{\pi_1\top}\\
(\alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\alpha_{d-1} V^{\pi_1} -V^{\pi_d})^\top
\end{pmatrix}\,.
\end{align*}
We denote by $\hat A^{(\delta)} = \hat A_{1:d_\delta}, A^{(\delta)}=A_{1:d_\delta}\in \mathbb{R}^{d_\delta\times k}$ the sub-matrices comprised of the first $d_\delta$ rows of $\hat A$ and $A$ respectively.
\begin{restatable}{claim}{lmmspanangle}\label{clm:spanangle}
If $\abs{\hat \alpha_i-\alpha_i}\leq \epsilon_{\alpha}$ and $\alpha_i\leq C_{\alpha}$ for all $i\in [d-1]$,
for every $\delta \geq 4 C_{\alpha}^{\frac{2}{3}}C_V d^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$, we have
\begin{equation}
\theta(\mathrm{span}(A^{(\delta)}_{2:}), \mathrm{span}(\hat A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}\,,\label{eq:theta-A-hatA}
\end{equation}
and
\begin{equation}
\theta(\mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq
\eta_{\epsilon_{\alpha},\delta}\,,\label{eq:theta-hatA-A}
\end{equation}
where $\eta_{\epsilon_{\alpha},\delta} = \frac{4C_{\alpha}C_V^2 d_\delta\epsilon_{\alpha} }{\delta^2}$.
\end{restatable}
To prove the above claim, we use the following lemma by~\cite{Balcan2015}.
\begin{lemma}[Lemma 3 of~\cite{Balcan2015}]\label{lmm:Balcan}
Let $U_l = \mathrm{span}(\xi_1,\ldots,\xi_l)$ and $\hat U_l = \mathrm{span}(\hat \xi_1,\ldots,\hat \xi_l)$.
Let $\epsilon_{\text{acc}}, \gamma_\text{new}\geq 0$ and $\epsilon_{\text{acc}}\leq \gamma_\text{new}^2/(10l)$, and assume that $\theta(\hat \xi_i, \hat U_{i-1})\geq \gamma_\text{new}$ for $i=2,\ldots, l$, and that $\theta(\xi_{i}, \hat \xi_{i})\leq \epsilon_{\text{acc}}$ for $i=1,\ldots, l$.
Then,
$$\theta(U_l, \hat U_l)\leq 2l\frac{\epsilon_{\text{acc}}}{\gamma_\text{new}}\,.$$
\end{lemma}
\begin{proof}[Proof of Claim~\ref{clm:spanangle}]
For all $2\leq i \leq d_\delta$, we have that
\begin{align*}
\theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{2:i-1})) \geq& \theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{1:i-1})) \geq \sin( \theta(\hat A^{(\delta)}_i, \mathrm{span}(\hat A^{(\delta)}_{1:i-1})))\\ \stackrel{(a)}{\geq}&\frac{\abs{\hat A^{(\delta)}_i \cdot u_i}}{\norm{\hat A^{(\delta)}_i}_2}\stackrel{(b)}{\geq}
\frac{\delta}{\norm{\hat A^{(\delta)}_i}_2} = \frac{\delta}{\norm{\hat \alpha_{i-1} V^{\pi_1} -V^{\pi_i}}_2} \geq \frac{\delta}{(C_{\alpha}+1)C_V}
\,,
\end{align*}
where Ineq~(a) holds as $u_i$ is orthogonal to $\mathrm{span}(\hat A^{(\delta)}_{1:i-1})$ according to line~\ref{alg-line:orthonormal-basis} of Algorithm~\ref{alg:policy-threshold-free} and Ineq~(b) holds due to $\abs{\hat A^{(\delta)}_i \cdot u_i} = \abs{V^{\pi_i}\cdot u_i}\geq \delta$.
The last inequality holds due to $\norm{\hat \alpha_{i-1} V^{\pi_1} -V^{\pi_i}}_2 \leq \abs{\hat \alpha_{i-1}} \norm{V^{\pi_1}}_2+\norm{V^{\pi_i}}_2 \leq (C_{\alpha}+1)C_V$.
Similarly, we also have
\[\theta(A^{(\delta)}_i, \mathrm{span}(A^{(\delta)}_{2:i-1}))\geq \frac{\delta}{(C_{\alpha}+1)C_V}\,.\]
We continue by decomposing $V^{\pi_i}$ into its component in the direction of $V^{\pi_1}$ and the component perpendicular to $V^{\pi_1}$.
For convenience, we denote $v_i^\parallel := V^{\pi_i}\cdot \frac{V^{\pi_1}}{\norm{V^{\pi_1}}_2}$, $V_i^\parallel :=v_i^\parallel \frac{V^{\pi_1}}{\norm{V^{\pi_1}}_2}$, $V_i^\perp :=V^{\pi_i} -V_i^\parallel$, and $v_i^\perp := \norm{V_i^\perp}_2$.
Then we have
\[\theta( A^{(\delta)}_i,\hat A^{(\delta)}_i) = \theta(\alpha_{i-1} V^{\pi_1} -V^{\pi_i},\hat\alpha_{i-1} V^{\pi_1} -V^{\pi_i})= \theta(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel- V_i^\perp,\hat\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel- V_i^\perp)\,.\]
If $(\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel)\cdot(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel) \geq 0$, i.e., $\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel $ and $\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel$ are in the same direction, then
\begin{align}
\theta(A^{(\delta)}_i,\hat A^{(\delta)}_i)
&= \abs{\arctan \frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}-\arctan \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} }\nonumber\\
& \leq \abs{\frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} - \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}}\label{eq:arctan}\\
&=\frac{\abs{\hat \alpha_{i-1} - \alpha_{i-1}}\norm{V^{\pi_1}}_2}{v_i^\perp}\nonumber\\
&\leq \frac{\epsilon_{\alpha} C_V}{\delta}\,,\label{eq:applytau}
\end{align}
where Ineq~\eqref{eq:arctan} follows from the fact that the derivative of $\arctan$ is at most $1$, i.e., $\frac{\mathrm{d}}{\mathrm{d}x}\arctan x = \frac{1}{1+x^2}\leq 1$.
Ineq~\eqref{eq:applytau} holds since $v_i^\perp \geq \abs{\inner{V^{\pi_i}}{u_i}}\geq \delta$.
If $(\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel)\cdot(\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel) < 0$, i.e., $\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel $ and $\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel$ are in the opposite directions, then we have $\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2+\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2 = \norm{(\hat \alpha_{i-1} - \alpha_{i-1}) V^{\pi_1}}_2\leq \epsilon_{\alpha} \norm{V^{\pi_1}}_2$.
Similarly, we have
\begin{align*}
\theta(\hat A^{(\delta)}_i, A^{(\delta)}_i)
&= \abs{\arctan \frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}+\arctan \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} } \\
& \leq \abs{\frac{\norm{\hat \alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp} + \frac{\norm{\alpha_{i-1} V^{\pi_1} -V_{i}^\parallel}_2}{v_i^\perp}}\\
&\leq\frac{\epsilon_{\alpha} \norm{V^{\pi_1}}_2}{\abs{v_i^\perp}}\nonumber\\
&\leq \frac{\epsilon_{\alpha} C_V}{\delta}\,.
\end{align*}
By applying Lemma~\ref{lmm:Balcan} with $\epsilon_{\text{acc}} = \frac{\epsilon_{\alpha} C_V}{\delta}$, $ \gamma_\text{new}=\frac{\delta}{(C_{\alpha}+1)C_V}$, $(\xi_i,\hat \xi_i) = (A_{i+1},\hat A_{i+1})$ (and $(\xi_i,\hat \xi_i) = (\hat A_{i+1}, A_{i+1})$), we have that when
$\delta \geq 10^{\frac{1}{3}}(C_{\alpha}+1)^{\frac{2}{3}}C_V d_\delta^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$,
\begin{equation*}
\theta(\mathrm{span}(A^{(\delta)}_{2:}), \mathrm{span}(\hat A^{(\delta)}_{2:})) \leq \frac{2d_\delta(C_{\alpha}+1) C_V^2\epsilon_{\alpha} }{\delta^2}=\eta_{\epsilon_{\alpha},\delta}\,,
\end{equation*}
and
\begin{equation*}
\theta(\mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq
\eta_{\epsilon_{\alpha},\delta}\,.
\end{equation*}
This completes the proof of Claim~\ref{clm:spanangle} since $C_\alpha\geq 1$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lmm:min_norm_hatw_delta}]
Recall that $ \hat w ^{(\delta)}=\argmin_{\hat A^{(\delta)} x ={\bf e}_1}\norm{x}_2$ is the minimum norm solution to $\hat A^{(\delta)} x ={\bf e}_1$.
Thus, $\hat w ^{(\delta)}$ lies in the row space of $\hat A^{(\delta)}$, which implies $\inner{\hat w ^{(\delta)}}{b_i} = 0$ for all $i\in [d-d_\delta]$.
Let $\lambda_1,\ldots,\lambda_{d_\delta-1}$ be any orthonormal basis of $\mathrm{span}(A^{(\delta)}_{2:})$.
We construct a vector $w$ satisfying $A^\textrm{(full)} w ={\bf e}_1$ by removing $\hat w ^{(\delta)}$'s component in $\mathrm{span}(A^{(\delta)}_{2:})$ and rescaling.
Formally,
\begin{equation}\label{eq:creatw}
w := \frac{\hat w ^{(\delta)} - \sum_{i=1}^{{d_\delta}-1} \inner{\hat w ^{(\delta)}}{\lambda_i} \lambda_i}{1 -V^{\pi_1}\cdot( \sum_{i=1}^{{d_\delta}-1} \inner{\hat w ^{(\delta)}}{\lambda_i} \lambda_i)}\,.
\end{equation}
It is direct to verify that $A_1\cdot w = V^{\pi_1}\cdot w = 1$ and $A_i \cdot w = 0$ for $i=2,\ldots,d_\delta$. As a result, $A^{(\delta)}w ={\bf e}_1$.
Combining with the fact that $\hat w ^{(\delta)}$ has zero component in $b_i$ for all $i\in [d-d_\delta]$, we have $A^\textrm{(full)} w ={\bf e}_1$.
According to Claim~\ref{clm:spanangle}, we have $$\theta(\mathrm{span}(A^{(\delta)}_{2:}), \mathrm{span}(\hat A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}.$$
Thus, there exist unit vectors $\tilde \lambda_1,\ldots,\tilde \lambda_{d_\delta-1}\in \mathrm{span}(\hat A^{(\delta)}_{2:})$ such that $\theta(\lambda_i, \tilde \lambda_i)\leq \eta_{\epsilon_{\alpha},\delta}$.
Since $\hat A^{(\delta)} \hat w ^{(\delta)} = {\bf e}_1$, we have $\hat w ^{(\delta)} \cdot \tilde \lambda_i = 0$ for all
$i=1,\ldots,d_\delta-1$, and therefore,
\begin{align*}
\abs{\hat w ^{(\delta)} \cdot \lambda_i} = \abs{\hat w ^{(\delta)} \cdot (\lambda_i-\tilde \lambda_i)} \leq \norm{\hat w ^{(\delta)}}_2 \eta_{\epsilon_{\alpha},\delta}\,.
\end{align*}
This implies that for any policy $\pi$,
\[\abs{V^\pi\cdot\sum_{i=1}^{d_\delta-1} (\hat w ^{(\delta)}\cdot \lambda_i) \lambda_i}\leq \norm{V^\pi}_2 \sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta} \leq C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta} \,.\]
Let $\gamma =V^{\pi_1}\cdot( \sum_{i=1}^{{d_\delta}-1} \inner{\hat w ^{(\delta)}}{\lambda_i} \lambda_i)$; by the inequality above, $\abs{\gamma}$ is no greater than $C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta}$.
We have that
\begin{align}
\abs{\hat w ^{(\delta)} \cdot V^\pi - w\cdot V^\pi}& \leq \abs{\hat w ^{(\delta)} \cdot V^\pi - \frac{1}{1-\gamma} \hat w ^{(\delta)}\cdot V^\pi} + \abs{\frac{1}{1-\gamma} \hat w ^{(\delta)}\cdot V^\pi - w\cdot V^\pi}\nonumber\\
&\leq \frac{\gamma \norm{\hat w ^{(\delta)}}_2 C_V}{1-\gamma} +\frac{1}{1-\gamma}\abs{\sum_{i=1}^{d_\delta-1} (\hat w ^{(\delta)}\cdot \lambda_i) \lambda_i\cdot V^\pi}\nonumber\\
&\leq \frac{\gamma \norm{\hat w ^{(\delta)}}_2 C_V}{1-\gamma} +\frac{C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta}}{1-\gamma}\nonumber\\
&\leq 2(C_V \norm{\hat w ^{(\delta)}}_2 +1)C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta},\label{eq:hatwwintermediate}
\end{align}
where the last inequality holds when $C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta}\leq \frac{1}{2}$, so that $\frac{1}{1-\gamma}\leq 2$.
Now we show that $\norm {\hat w ^{(\delta)}}_2 \leq C\norm{w'}_2$ for some constant $C$.
Since $\hat w ^{(\delta)}$ is the minimum norm solution to $\hat A^{(\delta)}x= {\bf e}_1$, it suffices to construct another solution to $\hat A^{(\delta)}x= {\bf e}_1$, denoted by $\hat w_0$, in a manner similar to the construction in Eq~\eqref{eq:creatw}, and to show that $\norm{\hat w_0}_2\leq C\norm{w'}_2$.
Let $\xi_1,\ldots,\xi_{d_\delta-1}$ be any orthonormal basis of $\mathrm{span}(\hat A^{(\delta)}_{2:})$.
We construct $\hat w_0$ such that $\hat A^{(\delta)}\hat w_0 = {\bf e}_1$ by removing the component of $w'$ in $\mathrm{span}(\hat A^{(\delta)}_{2:})$ and rescaling.
Specifically, let
\begin{equation}
\hat w_0 = \frac{ w'- \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i}{1-\inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i)}}\,.
\end{equation}
Since $A^{(\delta)}w' = {\bf e}_1$,
it directly follows that $\inner{\hat A_1}{\hat w_0} = \inner{V^{\pi_1}}{\hat w_0} = 1$ and that $\inner{\hat A_i}{\hat w_0} = 0$ for $i=2,\ldots,d_\delta$, i.e., $\hat A^{(\delta)}\hat w_0 ={\bf e}_1$.
Since Claim~\ref{clm:spanangle} implies that $\theta( \mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}$, there exist unit vectors $\tilde \xi_1,\ldots,\tilde \xi_{d_\delta-1}\in \mathrm{span}(A^{(\delta)}_{2:})$ such that $\theta(\xi_i,\tilde \xi_i)\leq \eta_{\epsilon_{\alpha},\delta}$.
As $w'$ has zero component in $\mathrm{span}(A^{(\delta)}_{2:})$, $w'$ should have a small component in $\mathrm{span}(\hat A^{(\delta)}_{2:})$.
In particular,
\begin{align*}
\abs{\inner{w'}{\xi_i}}= \abs{\inner{w'}{\xi_i-\tilde \xi_i}} \leq \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,,
\end{align*}
which implies that
\begin{align*}
\norm{\sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i}_2\leq \sqrt{d_\delta} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,.
\end{align*}
Hence
\begin{equation*}
\abs{\inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i)}} \leq C_V\sqrt{d_\delta} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} \,.
\end{equation*}
As a result, $\norm{\hat w_0}_2 \leq \frac{3}{2}\norm{w'}_2$ when $ C_V\sqrt{d_\delta} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} \leq \frac{1}{3}$, which is true when $\epsilon_{\alpha}$ is small enough.
According to Lemma~\ref{lmm:V1}, $\epsilon_{\alpha}\leq \frac{4k^2 \epsilon}{v^*}$, so $\epsilon_{\alpha}\rightarrow 0$ as $\epsilon\rightarrow 0$.
Thus, we have $\norm{\hat w ^{(\delta)}}_2\leq \norm{\hat w_0}_2\leq \frac{3}{2} \norm{w'}_2$ and $C_V\sqrt{d_\delta}\norm{\hat w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta}\leq \frac{1}{2}$.
Combined with Eq~\eqref{eq:hatwwintermediate}, we get
\begin{align*}
\abs{\hat w ^{(\delta)} \cdot V^\pi - w\cdot V^\pi} = \cO\left((C_V \norm{w'}_2 +1)C_V\sqrt{d_\delta}\norm{w'}_2\eta_{\epsilon_{\alpha},\delta}\right)\,.
\end{align*}
Since $C_V \norm{w'}_2 \geq \abs{\inner{V^{\pi_1}}{w'}} = 1$, by taking $\eta_{\epsilon_{\alpha},\delta} = \frac{4C_{\alpha}C_V^2 d_\delta\epsilon_{\alpha} }{\delta^2}$ into the above equation, we have
\begin{align*}
\abs{\hat w ^{(\delta)} \cdot V^\pi - w\cdot V^\pi} =\cO\left( \frac{C_{\alpha}C_V^4 d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2}\right)\,,
\end{align*}
which completes the proof.
\end{proof}
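As a numerical sanity check of the projection-and-rescale construction in Eq~\eqref{eq:creatw}, the following NumPy sketch (with synthetic random matrices standing in for $A^{(\delta)}$ and $\hat A^{(\delta)}$; all dimensions are illustrative) verifies that the constructed $w$ indeed satisfies $A^{(\delta)}w = {\bf e}_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 4, 3                                # d here plays the role of d_delta
A = rng.standard_normal((d, k))            # rows: V^{pi_1}, A_2, ..., A_{d_delta}
A_hat = A.copy()
A_hat[1:] += 1e-3 * rng.standard_normal((d - 1, k))  # perturbed rows 2:
e1 = np.eye(d)[0]
w_hat = np.linalg.pinv(A_hat) @ e1         # minimum-norm solution of A_hat x = e1
# orthonormal basis of span(A_{2:}) via SVD (rows of Vt span the row space)
lam = np.linalg.svd(A[1:], full_matrices=False)[2]
proj = lam.T @ (lam @ w_hat)               # component of w_hat in span(A_{2:})
w = (w_hat - proj) / (1.0 - A[0] @ proj)   # the construction of Eq. (creatw)
assert np.allclose(A @ w, e1)              # A^{(delta)} w = e_1, as claimed
```

Removing the $\mathrm{span}(A^{(\delta)}_{2:})$ component zeroes out rows $2,\ldots,d_\delta$ exactly, and the rescaling restores $A_1\cdot w=1$; only the first coordinate needs the denominator correction.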
\subsection{Finding a Basis of Policies}\label{subsec:independentvalue}
The process of efficiently finding $d$ policies whose value vectors are linearly independent and span the space of value vectors is not trivial. The naive approach of selecting the $k$ policies that each optimize one of the $k$ objectives might fail: in Appendix~\ref{app:eg-linear-dependent}, we show an instance in which these $k$ value vectors are linearly dependent even though there exist $k$ policies whose values span a space of rank $k$.
Besides linear independence of values, another challenge is to find a basis of policies that contains a benchmark policy $\pi_1$ (the index $1$ is w.l.o.g.) with a relatively large personalized value $\inner{w^*}{V^{\pi_1}}$, so that the error of $\hat \alpha_i$ is small (e.g., in the extreme case where $\inner{w^*}{V^{\pi_1}} = 0$, we cannot estimate $\alpha_i$ at all).
For any $w\in \mathbb{R}^k$, we use $\pi^w$ to denote a policy that maximizes the scalar reward $\inner{w}{R}$, i.e.,
\begin{equation}
\pi^w = \argmax_{\pi\in \Pi} \inner{w}{V^{\pi}}\,,\label{eq:piw}
\end{equation}
and denote by $v^w = \inner{w}{V^{\pi^w}}$ the corresponding personalized value. Personalized optimal policies and values can be computed efficiently, as suggested in Observation~\ref{obs:wSuffices}.
Let ${\bf e}_1,\ldots,{\bf e}_k$ denote the standard basis.
To find $\pi_1$ with large personalized value $\inner{w^*}{V^{\pi_1}}$, we find policies $\pi^{{\bf e}_j}$ that maximize the $j$-th objective for every $j=1,\ldots,k$ and then query the user to compare them until we find a $\pi^{{\bf e}_j}$ with (approximately) a maximal personalized value among them. This policy will be our benchmark policy, $\pi_1$.
The details are described in lines~\ref{alg-line:initV1-1}--\ref{alg-line:initV1-5} of Algorithm~\ref{alg:policy-threshold-free}.
\begin{algorithm}[H]\caption{Identification of Basis Policies}\label{alg:policy-threshold-free}
\begin{algorithmic}[1]
\STATE initialize $\pi^{{\bf e}^*} \leftarrow \pi^{{\bf e}_1}$\label{alg-line:initV1-1}
\FOR{$j=2,\ldots,k$}
\STATE compare $\pi^{{\bf e}^*}$ and $\pi^{{\bf e}_j}$
\STATE \textbf{if} $\pi^{{\bf e}_j}\succ \pi^{{\bf e}^*}$ \textbf{then} $\pi^{{\bf e}^*} \leftarrow \pi^{{\bf e}_j}$
\ENDFOR
\STATE
$\pi_1 \leftarrow \pi^{{\bf e}^*}$ and $u_1 \leftarrow \frac{V^{\pi^{{\bf e}^*}}}{\norm{V^{\pi^{{\bf e}^*}}}_2}$\label{alg-line:initV1-5}
\FOR{$i=2,\ldots, k$}
\STATE arbitrarily pick an orthonormal basis $\rho_1,\ldots, \rho_{k+1-i}$ of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}})^\perp$ \label{alg-line:orthonormal-basis}
\STATE $j_{\max} \leftarrow \argmax_{j\in [k+1-i]} \max(\abs{v^{\rho_j}},\abs{v^{-\rho_j}})$\label{alg-line:largest-component}
\IF{$\max(\abs{v^{\rho_{j_{\max}}}},\abs{v^{-\rho_{j_{\max}}}})>0$}
\STATE $\pi_i \leftarrow\pi^{\rho_{j_{\max}}}$ \textbf{if} $\abs{v^{\rho_{j_{\max}}}}>\abs{v^{-\rho_{j_{\max}}}}$; \textbf{otherwise} $\pi_i \leftarrow\pi^{-\rho_{j_{\max}}}$. $u_i \leftarrow \rho_{j_{\max}}$\label{alg-line: returnedu}
\ELSE
\STATE output $(\pi_1,\pi_2,\ldots)$, $(u_1,u_2,\ldots)$ and \textbf{stop}\label{alg-line:end}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
After finding $\pi_1$, we next search for the remaining $d-1$ policies $\pi_2,\ldots,\pi_d$ sequentially (lines \ref{alg-line:orthonormal-basis}--\ref{alg-line:end} of Algorithm~\ref{alg:policy-threshold-free}).
For $i=2,\ldots, d$, we find a direction $u_i$ such that (i) the vector $u_i$ is orthogonal to the space of current value vectors $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}})$, and (ii) there exists a policy $\pi_i$ such that $V^{\pi_i}$ has a significant component in the direction of $u_i$.
Condition (i) is used to guarantee that the policy $\pi_i$ has a value vector linearly independent of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}})$.
Condition (ii) is used to cope with the error caused by inaccurate approximation of the ratios $\hat \alpha_i$.
Intuitively, when $\norm{\alpha_{i} V^{\pi_1}- V^{\pi_{i+1}}}_2\ll \epsilon$, the angle between $\hat\alpha_{i} V^{\pi_1}- V^{\pi_{i+1}}$ and $\alpha_{i} V^{\pi_1}- V^{\pi_{i+1}}$ could be very large, which results in an inaccurate estimate of $w^*$ in the direction of $\alpha_i V^{\pi_1}- V^{\pi_{i+1}}$.
For example, if $V^{\pi_1} = {\bf e}_1$ and $V^{\pi_i} = {\bf e}_1 + \frac{1}{w^*_i}\epsilon {\bf e}_i$ for $i=2,\ldots, k$, then $\pi_1$ and $\pi_i$ are ``indistinguishable'' and the estimated ratio $\hat\alpha_{i-1}$ can be $1$. In that case, the estimate of $w^*$ obtained by solving the linear equations in Eq~\eqref{eq:wexact} is $(1,0,\ldots,0)$, which could be far from the true $w^*$.
Finding $u_i$'s in which policy values have a large component can help with this problem.
Algorithm~\ref{alg:policy-threshold-free} provides a more detailed description of this procedure.
Note that if there are multiple users with different preference vectors, we only need to run Algorithm~\ref{alg:policy-threshold-free} once for all users, but need to re-estimate the $\alpha_i$'s for different users.
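To make the procedure concrete, the following is a minimal NumPy sketch of Algorithm~\ref{alg:policy-threshold-free} over a finite set of candidate value vectors. The scalar-reward MDP solver of Eq~\eqref{eq:piw} is replaced by a hypothetical \texttt{scalarized\_opt} oracle over that finite set, and the user's comparison queries are simulated with a hidden preference vector; all names are illustrative.

```python
import numpy as np

def scalarized_opt(V, w):
    """Hypothetical oracle: index and value of the policy maximizing <w, V^pi>
    over a finite set of candidate value vectors (a stand-in for solving the
    scalar-reward MDP of Eq. (piw))."""
    i = int(np.argmax(V @ w))
    return i, float(V[i] @ w)

def find_basis_policies(V, w_star):
    """Sketch of 'Identification of Basis Policies'.
    V: (num_policies x k) candidate value vectors; w_star: hidden preference,
    used only to simulate the user's pairwise comparisons."""
    n, k = V.shape
    # lines 1-5: pick the benchmark pi_1 among the k objective-optimal policies
    best = scalarized_opt(V, np.eye(k)[0])[0]
    for j in range(1, k):
        cand = scalarized_opt(V, np.eye(k)[j])[0]
        if V[cand] @ w_star > V[best] @ w_star:  # simulated comparison query
            best = cand
    chosen, dirs = [best], [V[best] / np.linalg.norm(V[best])]
    # lines 6-14: repeatedly search orthogonal directions with large components
    for _ in range(2, k + 1):
        _, _, Vt = np.linalg.svd(V[chosen], full_matrices=True)
        perp = Vt[len(chosen):]          # orthonormal basis of span(...)^perp
        if len(perp) == 0:
            break
        vals = [max(abs(scalarized_opt(V, r)[1]), abs(scalarized_opt(V, -r)[1]))
                for r in perp]
        j = int(np.argmax(vals))
        if vals[j] <= 1e-12:             # no policy has any component left
            break
        rho = perp[j]
        if abs(scalarized_opt(V, rho)[1]) > abs(scalarized_opt(V, -rho)[1]):
            chosen.append(scalarized_opt(V, rho)[0])
        else:
            chosen.append(scalarized_opt(V, -rho)[0])
        dirs.append(rho)
    return chosen, dirs
```

Each accepted policy has a nonzero component along a direction orthogonal to the previously chosen values, so the returned value vectors are linearly independent by construction.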
\subsection{Computation of Basis Ratios}\label{subsec:ratios}
As we have mentioned before, comparing basis policies alone does not allow for the exact computation of the $\alpha_i$ ratios, as comparing $\pi_1,\pi_i$ can only reveal which is better but not by how much.
To this end, we will use the ``do nothing'' policy to approximate every ratio $\alpha_i$ up to some additive error $\abs{\hat \alpha_i - \alpha_i}$ using binary search over the parameter $\hat{\alpha}_i\in [0,C_\alpha]$ for some $C_\alpha\geq 1$ (to be determined later) and comparison queries of policy $\pi_{i+1}$ with policy $\hat{\alpha}_i\pi_1 + (1-\hat{\alpha}_i)\pi_0$ if $\hat \alpha_i\leq 1$ (or comparing $\pi_1$ and $\frac{1}{\hat \alpha_i}\pi_{i+1} + (1-\frac{1}{\hat \alpha_i})\pi_0$ instead if $\hat \alpha_i>1$).\footnote{We write $\hat{\alpha}_i\pi_1+ (1-\hat{\alpha}_i)\pi_0$ to indicate that $\pi_1$ is used with probability $\hat{\alpha}_i$, and that $\pi_0$ is used with probability $1-\hat{\alpha}_i$.}
Notice that the personalized value of $\hat{\alpha}_i\pi_1 + (1-\hat{\alpha}_i)\pi_0$ is identical to the personalized value of $\pi_1$ multiplied by $\hat{\alpha}_i$. We stop once $\hat{\alpha}_i$ is such that the user returns ``indistinguishable''.
Once we stop, the two policies have roughly the same personalized value, i.e.,
\begin{align}
\abs{\hat\alpha_i\inner{w^*}{V^{\pi_1}}-\inner{w^*}{V^{\pi_{i+1}}}}&\leq \epsilon\,,\text{ if }\hat \alpha_i \leq 1\,,\nonumber\\
\abs{\inner{w^*}{V^{\pi_1}}-\frac{1}{\hat\alpha_i}\inner{w^*}{V^{\pi_{i+1}}}}&\leq \epsilon\,,\text{ if }\hat \alpha_i > 1\,.\label{eq:stopcond}
\end{align}
Eq~\eqref{eq:wexact} combined with the above inequality implies that
$\abs{\hat \alpha_i - \alpha_i}\inner{w^*}{V^{\pi_1}} \leq C_{\alpha}\epsilon$. Thus, the approximation error of each ratio is bounded by $\abs{\hat \alpha_i -\alpha_i} \leq \frac{C_{\alpha} \epsilon}{\inner{w^*}{V^{\pi_1}}}$.
To make sure the procedure will terminate, we need to set $C_{\alpha} \geq \frac{v^*}{\inner{w^*}{V^{\pi_1}}}$ since $\alpha_i$'s must lie in the interval $[0,\frac{v^*}{\inner{w^*}{V^{\pi_1}}}]$.
Since we stop the binary search once Eq~\eqref{eq:stopcond} holds, it takes at most $\cO(d\log(C_{\alpha} \inner{w^*}{V^{\pi_1}}/\epsilon))$ comparison queries to estimate all the $\alpha_i$'s.
Due to the carefully picked $\pi_1$ in Algorithm~\ref{alg:policy-threshold-free}, we can derive the following upper bounds for
$\frac{v^*}{\inner{w^*}{V^{\pi_1}}}$ and $\abs{\hat \alpha_i - \alpha_i}$.
\begin{restatable}{lemma}{lmmVinit}\label{lmm:V1}
When $\epsilon \leq \frac{v^*}{2k^2}$, we have $\frac{v^*}{\inner{w^*}{V^{\pi_1}}} \leq 2k$. By setting $C_{\alpha} = 2k$, the returned $\hat\alpha_i$'s satisfy that $\abs{\hat \alpha_i - \alpha_i}\leq \frac{4k^2\epsilon}{v^*}$.
\end{restatable}
We set $C_{\alpha} = 2k$ from now on. The pseudo code of the above process of estimating $\alpha_i$'s is deferred to Algorithm~\ref{alg:alpha} in Appendix~\ref{app:bin-search}.
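The binary search for a single ratio can be sketched as follows (a minimal sketch: \texttt{estimate\_ratio} and its arguments are hypothetical names, the $\epsilon$-indistinguishability comparison oracle is simulated with the hidden $w^*$, and for brevity only the mixture comparison with personalized value $\hat\alpha_i\inner{w^*}{V^{\pi_1}}$ is handled):

```python
import numpy as np

def estimate_ratio(v1, vi, w_star, eps, C_alpha=2.0, tol=1e-9):
    """Binary-search one ratio alpha_i with comparison queries.
    v1, vi: value vectors of pi_1 and pi_{i+1}; the 'do nothing' policy pi_0
    has personalized value 0, so the mixture a*pi_1 + (1-a)*pi_0 has
    personalized value a*<w*, v1>.  The user oracle is simulated with the
    hidden w_star."""
    def indistinguishable(a):
        return abs(a * (w_star @ v1) - w_star @ vi) <= eps
    lo, hi = 0.0, C_alpha
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if indistinguishable(mid):              # user: "indistinguishable"
            return mid
        if mid * (w_star @ v1) < w_star @ vi:   # user prefers pi_{i+1}
            lo = mid
        else:                                   # user prefers the mixture
            hi = mid
    return 0.5 * (lo + hi)
```

The search halves the interval $[0, C_\alpha]$ until the user reports the two policies indistinguishable, matching the stopping condition in Eq~\eqref{eq:stopcond}.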
\subsection{Preference Approximation and Personalized Policy}\label{subsec:estimatew}
We move on to present an algorithm that estimates $w^*$ and calculates a nearly optimal personalized policy.
Given the $\pi_i$'s returned by Algorithm~\ref{alg:policy-threshold-free} and the $\hat \alpha_i$'s returned by Algorithm~\ref{alg:alpha}, we define the matrix $\hat A \in \mathbb{R}^{d\times k}$ as
\begin{align}
\hat A := \begin{pmatrix}
V^{\pi_1\top}\\
(\hat \alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\hat \alpha_{d-1} V^{\pi_1} -V^{\pi_d})^\top
\end{pmatrix}\,.\label{eq:Ahat}
\end{align}
Let $\hat w$ be a solution to $\hat A x = {\bf e}_1$.
We will show that $\hat w$ is a good estimate of $w':=\frac{w^*}{\inner{w^*}{V^{\pi_1}}}$ and that $\pi^{\hat w}$ is a nearly optimal personalized policy.
In particular, when $\epsilon$ is small, we have $\abs{\inner{\hat w}{V^\pi} - \inner{w'}{ V^\pi}} = \cO(\epsilon^\frac{1}{3})$ for every policy $\pi$.
Putting this together, we derive the following theorem.
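Computationally, forming $\hat A$ and estimating the preference is a single linear solve. Below is a minimal NumPy sketch (the function name is illustrative; the pseudo-inverse returns the minimum-norm solution, though any solution of $\hat A x = {\bf e}_1$ can be used here):

```python
import numpy as np

def estimate_preference(values, alpha_hat):
    """Form A_hat as in Eq. (Ahat) and return a solution of A_hat x = e_1.
    values: (d x k) matrix whose rows are V^{pi_1}, ..., V^{pi_d};
    alpha_hat: the d-1 estimated ratios."""
    d, k = values.shape
    A_hat = np.vstack([values[0]] +
                      [alpha_hat[i - 1] * values[0] - values[i]
                       for i in range(1, d)])
    e1 = np.zeros(d)
    e1[0] = 1.0
    # pseudo-inverse: minimum-norm solution of A_hat x = e1
    return np.linalg.pinv(A_hat) @ e1
```

With exact ratios, the recovered vector equals $w'=\frac{w^*}{\inner{w^*}{V^{\pi_1}}}$ whenever the value vectors span the relevant space.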
\begin{restatable}{theorem}{thmwithouttau}\label{thm:est-without-tau}
Consider the algorithm of computing $\hat A$ defined in Eq~\eqref{eq:Ahat} and any solution $\hat w$ to $\hat A x = {\bf e}_1$ and outputting the policy $\pi^{\hat w} = \argmax_{\pi\in \Pi} \inner{\hat w}{V^{\pi}}$, which is the optimal personalized policy for preference vector $\hat w$.
Then the output policy $\pi^{\hat w}$ satisfies
$$v^*-\inner{w^*}{V^{\pi^{\hat w}}}\leq \cO\left(\left(\sqrt{k} +1\right)^{d+ \frac{14}{3}} \epsilon^\frac{1}{3}\right)$$
by using $\cO(k\log(k/\epsilon))$ comparison queries.
\end{restatable}
\textbf{Computational Complexity} We remark that Algorithm~\ref{alg:policy-threshold-free} solves Eq~\eqref{eq:piw} for the optimal policy of a scalar-reward MDP at most $\cO(k^2)$ times. Using, e.g., finite-horizon value iteration to solve for the optimal policy takes $\cO(H|\mathcal{S}|^2|\mathcal{A}|)$ steps. Hence, the time complexity of returning the optimal policy for a single user is
$\cO(k^2H|\mathcal{S}|^2 |\mathcal{A}|+ k\log(\frac{k }{\epsilon}))$,
while considering $n$ different users rather than one results in an overall time complexity of
$\cO((k^2+n)H|\mathcal{S}|^2 |\mathcal{A}|+nk\log(\frac{k}{\epsilon}))$.
\textbf{Proof Technique}
The analysis of Theorem~\ref{thm:est-without-tau} has two parts.
First, as mentioned in Sec~\ref{subsec:independentvalue}, when $\norm{\alpha_{i} V^{\pi_1}- V^{\pi_{i+1}}}_2\ll \epsilon$, the error of $\hat\alpha_{i}$ can lead to an inaccurate estimate of $w^*$ in the direction of $\alpha_i V^{\pi_1}- V^{\pi_{i+1}}$.
Thus, we consider another estimate of $w^*$ based only on some $\pi_{i+1}$'s with a relatively large $\norm{\alpha_{i} V^{\pi_1}- V^{\pi_{i+1}}}_2$.
In particular, for any $\delta>0$, let $d_\delta := \min_{i\geq 2: \max(\abs{v^{u_i}},\abs{v^{-u_i}}) \leq \delta} i-1$.
That is to say, for $i=2,\ldots,d_\delta$, the policy $\pi_i$ satisfies $\abs{\inner{u_i}{V^{\pi_i}}}>\delta$, while for every policy $\pi$ we have $\abs{\inner{u_{d_\delta+1}}{V^\pi}} \leq \delta$.
Then, for any policy $\pi$ and any unit vector $\xi \in \mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{d_\delta}})^\perp$, we have $\abs{\inner{\xi}{V^\pi}} \leq \sqrt{k} \delta$.
This is because at round ${d_\delta+1}$, we pick an orthonormal basis $\rho_1,\ldots,\rho_{k-{d_\delta}}$ of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{d_\delta}})^\perp$ (line~\ref{alg-line:orthonormal-basis} in Algorithm~\ref{alg:policy-threshold-free}) and pick $u_{d_\delta+1}$ to be the direction in which some policy's value has the largest component, as described in line~\ref{alg-line:largest-component}.
Hence, $\abs{\inner{\rho_j}{V^\pi}}\leq \delta$ for all $j\in [k-{d_\delta}]$.
Then, we have $\abs{\inner{\xi}{V^\pi}}= \abs{\sum_{j=1}^{k-{d_\delta}} \inner{\xi}{\rho_j}\inner{\rho_j}{V^\pi}} \leq \sqrt{k}\delta$ by the Cauchy--Schwarz inequality.
Let $\hat A^{(\delta)} \in \mathbb{R}^{d_\delta \times k}$ be the
sub-matrix comprised of the first $d_\delta$ rows of $\hat A$.
Then we consider an alternative estimate $\hat w ^{(\delta)} =\argmin_{x:\hat A^{(\delta)} x ={\bf e}_1}\norm{x}_2$, the minimum norm solution of $\hat A^{(\delta)} x={\bf e}_1$. We upper bound $\sup_\pi \abs{\inner{\hat w^{(\delta)}}{V^\pi}-\inner{w' }{V^\pi}}$ in Lemma~\ref{lmm:gap-hatw-w} and $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{\hat w^{(\delta)}}{V^\pi}}$ in Lemma~\ref{lmm:est-without-tau}; combining the two bounds completes the proof of Theorem~\ref{thm:est-without-tau}.
\begin{restatable}{lemma}{lmmhatww}\label{lmm:gap-hatw-w}
If $\abs{\hat \alpha_i-\alpha_i}\leq \epsilon_{\alpha}$ and $\alpha_i\leq C_{\alpha}$ for all $i\in [d-1]$, for every $\delta \geq 4 C_{\alpha}^{\frac{2}{3}}C_V d^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$,
we have
$\abs{\inner{\hat w ^{(\delta)}}{V^\pi} - \inner{w'}{ V^\pi}}\leq \cO( \frac{C_{\alpha}C_V^4d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2}+\sqrt{k} \delta \norm{w'}_2)$ for all $\pi$, where $w'=\frac{w^*}{\inner{w^*}{V^{\pi_1}}}$.
\end{restatable}
Since we only remove the rows in $\hat A$ corresponding to $u_i$'s in the subspace where no policy's value has a large component, $\hat w$ and $\hat w^{(\delta)}$ are close in terms of $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{\hat w^{(\delta)}}{V^\pi}}$.
\begin{restatable}{lemma}{lmmwithouttau}\label{lmm:est-without-tau}
If $\abs{\hat \alpha_i-\alpha_i}\leq \epsilon_{\alpha}$ and $\alpha_i\leq C_{\alpha}$ for all $i\in [d-1]$, for every policy $\pi$ and every $\delta \geq 4 C_{\alpha}^{\frac{2}{3}}C_V d^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$, we have
\[\abs{\hat w \cdot V^\pi - \hat w^{(\delta)} \cdot V^\pi}\leq \cO((\sqrt{k} +1)^{d-d_\delta} C_{\alpha}\epsilon^{(\delta)})\,,\]
where $\epsilon^{(\delta)} = \frac{C_{\alpha}C_V^4d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2}+\sqrt{k} \delta \norm{w'}_2$ is the upper bound in Lemma~\ref{lmm:gap-hatw-w}.
\end{restatable}
Note that the result in Theorem~\ref{thm:est-without-tau} has a factor of $k^{\frac{d}{2}}$, which is exponential in $d$.
Usually, we consider the case where $k=\cO(1)$ is small and thus $k^d = \cO(1)$ is small.
We get rid of the exponential dependence on $d$ by applying $\hat w ^{(\delta)}$ to estimate $w^*$ directly,
which requires us to set the value of $\delta$ beforehand.
The following theorem follows directly by assigning the optimal value for $\delta$ in Lemma~\ref{lmm:gap-hatw-w}.
\begin{restatable}{theorem}{thmestwithtau}\label{thm:est-with-threshold}
Consider the algorithm of computing $\hat A$ defined in Eq~\eqref{eq:Ahat} and any solution $\hat w ^{(\delta)}$ to $\hat A^{(\delta)} x = {\bf e}_1$ for $\delta = k^\frac{5}{3}\epsilon^\frac{1}{3}$ and outputting the policy $\pi^{\hat w ^{(\delta)}} = \argmax_{\pi\in \Pi} \inner{\hat w ^{(\delta)}}{V^{\pi}}$.
Then the policy $\pi^{\hat w ^{(\delta)}}$ satisfies that
$$v^*-\inner{w^*}{V^{\pi^{\hat w ^{(\delta)}}}}\leq \cO\left(k^\frac{13}{6} \epsilon^\frac{1}{3}\right)\,.$$
\end{restatable}
Notice that the algorithm in Theorem~\ref{thm:est-with-threshold} needs the hyperparameter $\delta$ to be set beforehand, whereas no hyperparameter is needed in Theorem~\ref{thm:est-without-tau}; an improper value of $\delta$ could degrade the algorithm's performance.
Though we think of $k$ as a small number, it is unclear whether the dependency on $\epsilon$ in Theorems~\ref{thm:est-without-tau} and \ref{thm:est-with-threshold} is optimal.
The tight dependency on $\epsilon$ is left as an open problem. We briefly discuss a potential direction to improve this bound in Appendix~\ref{app:eps-dependence}.
\section{Proof of Lemma~\ref{lmm:min_norm_hatw_delta}}\label{app:min_norm_hatw_delta}
\begin{proof}
Recall that $\bar w ^{(\delta)} =\argmin_{\hat A^{(\delta)} x ={\bf e}_1}\norm{x}_2$ is the minimum norm solution; hence $\bar w ^{(\delta)}$ lies in the row space of $\hat A^{(\delta)}$ and $\inner{\bar w ^{(\delta)}}{b_i} = 0$ for all $i\in [d-d_\delta]$.
Let $\lambda_1,\ldots,\lambda_{d_\delta-1}$ be any orthonormal basis of $\mathrm{span}(A^{(\delta)}_{2:})$.
Then we construct a $w$ satisfying $A^\textrm{(full)} w ={\bf e}_1$ by removing $\bar w ^{(\delta)}$'s component in $\mathrm{span}(A^{(\delta)}_{2:})$ and rescaling.
Let
\begin{equation}\label{eq:creatw}
w = \frac{\bar w ^{(\delta)} - \sum_{i=1}^{{d_\delta}-1} \inner{\bar w ^{(\delta)}}{\lambda_i} \lambda_i}{1 -V^{\pi_1}\cdot( \sum_{i=1}^{{d_\delta}-1} \inner{\bar w ^{(\delta)}}{\lambda_i} \lambda_i)}\,.
\end{equation}
It is direct to check that $A_1\cdot w = 1$ and $A_i \cdot w = 0$ for $i=2,\ldots,d_\delta$, i.e., $A^{(\delta)}w ={\bf e}_1$.
Combining this with the fact that $\bar w ^{(\delta)}$ has zero component in $b_i$ for all $i\in [d-d_\delta]$, we have $A^\textrm{(full)} w ={\bf e}_1$.
According to Lemma~\ref{lmm:spanangle}, we have $\theta(\mathrm{span}(A^{(\delta)}_{2:}), \mathrm{span}(\hat A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}$; hence there exist unit vectors $\tilde \lambda_1,\ldots,\tilde \lambda_{d_\delta-1}\in \mathrm{span}(\hat A^{(\delta)}_{2:})$ such that $\theta(\lambda_i, \tilde \lambda_i)\leq \eta_{\epsilon_{\alpha},\delta}$.
Since $\hat A^{(\delta)} \bar w ^{(\delta)} = {\bf e}_1$, we have $\bar w ^{(\delta)} \cdot \tilde \lambda_i = 0$ for all
$i=1,\ldots,d_\delta-1$.
Thus,
\begin{align*}
\abs{\bar w ^{(\delta)} \cdot \lambda_i} = \abs{\bar w ^{(\delta)} \cdot (\lambda_i-\tilde \lambda_i)} \leq \norm{\bar w ^{(\delta)}}_2 \eta_{\epsilon_{\alpha},\delta}\,,
\end{align*}
which implies that for any policy $\pi$ with value vector $V^\pi$,
\[\abs{V^\pi\cdot\sum_{i=1}^{d_\delta-1} (\bar w ^{(\delta)}\cdot \lambda_i) \lambda_i}\leq \norm{V^\pi}_2 \sqrt{d_\delta}\norm{\bar w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta} \leq C_V\sqrt{d_\delta}\norm{\bar w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta} \,.\]
Let $\gamma =V^{\pi_1}\cdot( \sum_{i=1}^{{d_\delta}-1} \inner{\bar w ^{(\delta)}}{\lambda_i} \lambda_i)$. Then we have
\begin{align*}
\abs{\bar w ^{(\delta)} \cdot V^\pi - w\cdot V^\pi}& \leq \abs{\bar w ^{(\delta)} \cdot V^\pi - \frac{1}{1-\gamma} \bar w ^{(\delta)}\cdot V^\pi} + \abs{\frac{1}{1-\gamma} \bar w ^{(\delta)}\cdot V^\pi - w\cdot V^\pi}\\
&\leq \frac{\gamma \norm{\bar w ^{(\delta)}}_2 C_V}{1-\gamma} +\frac{1}{1-\gamma}\abs{\sum_{i=1}^{d_\delta-1} (\bar w ^{(\delta)}\cdot \lambda_i) \lambda_i\cdot V^\pi}\\
&\leq \frac{\gamma \norm{\bar w ^{(\delta)}}_2 C_V}{1-\gamma} +\frac{C_V\sqrt{d_\delta}\norm{\bar w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta}}{1-\gamma} =\cO\left( \frac{(C_{\alpha}+1)C_V^4 d_\delta^{\frac{3}{2}}\norm{\bar w ^{(\delta)}}_2^2\epsilon_{\alpha} }{\delta^2}\right)
\end{align*}
when $C_V\sqrt{d_\delta}\norm{\bar w ^{(\delta)}}_2\eta_{\epsilon_{\alpha},\delta} = \cO(1)$.
Now we only need to show that $\norm {\bar w ^{(\delta)}}_2 \leq C\norm{w'}_2$ for some constant $C$.
This can be proved by a construction similar to Eq~\eqref{eq:creatw}.
Let $\xi_1,\ldots,\xi_{d_\delta-1}$ be any orthonormal basis of $\mathrm{span}(\hat A^{(\delta)}_{2:})$ (we write $\xi_i$ to avoid a clash with the vectors $b_i$ above).
We construct $\hat w_0$ s.t. $\hat A^{(\delta)}\hat w_0 = {\bf e}_1$ by removing the component of $w'$ in $\mathrm{span}(\hat A^{(\delta)}_{2:})$ and rescaling.
Specifically, let
\begin{equation}
\hat w_0 = \frac{ w'- \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i}{1-\inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i)}}\,.
\end{equation}
Since $A^{(\delta)}w' = {\bf e}_1$,
it is direct to check that $\inner{\hat A_1}{\hat w_0} = \inner{V^{\pi_1}}{\hat w_0} = 1$ and $\hat A_i \cdot \hat w_0 = 0$ for $i=2,\ldots,d_\delta$, i.e., $\hat A^{(\delta)}\hat w_0 ={\bf e}_1$.
Since $\theta( \mathrm{span}(\hat A^{(\delta)}_{2:}), \mathrm{span}(A^{(\delta)}_{2:})) \leq \eta_{\epsilon_{\alpha},\delta}$ according to Lemma~\ref{lmm:spanangle}, there exist unit vectors $\tilde \xi_1,\ldots,\tilde \xi_{d_\delta-1}\in \mathrm{span}(A^{(\delta)}_{2:})$ such that $\theta(\xi_i,\tilde \xi_i)\leq \eta_{\epsilon_{\alpha},\delta}$.
Since $w'$ has zero component in $\mathrm{span}(A^{(\delta)}_{2:})$, $w'$ should have a small component in $\mathrm{span}(\hat A^{(\delta)}_{2:})$.
In particular,
\begin{align*}
\abs{\inner{w'}{\xi_i}}= \abs{\inner{w'}{\xi_i-\tilde \xi_i}} \leq \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,,
\end{align*}
which implies that
\begin{align*}
\norm{\sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i}_2\leq \sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta}\,.
\end{align*}
Hence
\begin{equation*}
\abs{\inner{V^{\pi_1}}{( \sum_{i=1}^{{d_\delta}-1} \inner{w'}{\xi_i} \xi_i)}} \leq C_V\sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} \,,
\end{equation*}
and then there exists a constant $C>0$ such that
$\norm{\hat w_0}_2 \leq C \norm{w'}_2$ when $\sqrt{d_\delta -1} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} C_V = \cO(1)$.
Since $\bar w ^{(\delta)}$ is the minimum norm solution, we have $\norm{\bar w ^{(\delta)}}_2\leq \norm{\hat w_0}_2 \leq C \norm{w'}_2$.
\end{proof}
\section{Proof of Lemma~\ref{lmm:est-without-tau}}\label{app:est-without-tau}
\lmmwithouttau*
\begin{proof}
Given the output $(V^{\pi_1},\ldots, V^{\pi_{d}})$ of Algorithm~\ref{alg:policy-threshold-free}, we have $\dim(\mathrm{span}(\{V^{\pi_1},\ldots, V^{\pi_{d_\delta}}\})) = d_\delta$.
For $i=d_\delta+1,\ldots,d$, let $\psi_{i}$ denote the unit vector obtained by normalizing the projection of $V^{\pi_i}$ onto $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}})^\perp$.
Then we have that $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}},\psi_i) = \mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i}})$ and that $\{\psi_i|i=d_\delta+1,\ldots,d\}$ are orthonormal.
For every policy $\pi$, the value vector can be represented as a linear combination of $\hat A_1,\ldots,\hat A_{d_\delta}, \psi_{d_\delta+1},\ldots, \psi_d$, i.e., there exists a unique $a = (a_1,\dots,a_d)\in \mathbb{R}^d$ s.t.
$V^\pi = \sum_{i=1}^{d_\delta} a_i \hat A_i +\sum_{i=d_\delta+1}^{d} a_i \psi_i$.
Since $\psi_i$ is orthogonal to $\psi_j$ for all $j\neq i$ and $\psi_i$ is orthogonal to $\mathrm{span}(\hat A_1,\ldots,\hat A_{d_\delta})$, we have $a_i = \inner{V^\pi}{\psi_i}$ for $i\geq d_\delta+1$.
This implies that
\begin{align*}
\abs{\inner{\hat w}{V^{\pi}} - \inner{\hat w ^{(\delta)}}{V^{\pi}}} \leq \underbrace{\abs{\sum_{i=1}^{d_\delta}a_i\left(\inner{\hat w}{\hat A_i} - \inner{\hat w ^{(\delta)}}{\hat A_i}\right)}}_{(a)} + \underbrace{\abs{\sum_{i=d_\delta+1}^{d} a_i \inner{\hat w}{\psi_i}}}_{(b)} + \underbrace{\abs{\sum_{i=d_\delta+1}^{d} a_i \inner{\hat w ^{(\delta)}}{ \psi_i}}}_{(c)}\,.
\end{align*}
Since $\hat A^{(\delta)} \hat w = \hat A^{(\delta)} \hat w ^{(\delta)} ={\bf e}_1$, we have term $(a) = 0$.
We move on to bound term (c).
Note that the vectors $\{\psi_i|i=d_\delta+1,\ldots,d\}$ are orthogonal to $\mathrm{span}(V^{\pi_1},\ldots, V^{\pi_{d_\delta}})$ and that together with $V^{\pi_1},\ldots, V^{\pi_{d_\delta}}$ they form a basis for $\mathrm{span}(\{V^\pi|\pi\in \Pi\})$.
Thus, we can let $b_i$ in the proof of Lemma~\ref{lmm:gap-hatw-w} be $ \psi_{i+d_\delta}$.
In addition, all the properties of $\{b_i|i\in [d-d_\delta]\}$ apply to $\{\psi_i|i=d_\delta+1,\ldots,d\}$ as well.
Hence, similarly to Eq~\eqref{eq:anorm},
\begin{equation*}
\sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \leq \sqrt{k}\delta\,.
\end{equation*}
Consequently, we can bound term $(c)$ by
$$(c) \leq \sqrt{k}\delta \norm{\hat w ^{(\delta)}}_2 \leq \frac{3}{2} \sqrt{k}\delta\norm{w'}_2$$
since $\norm{\hat w ^{(\delta)}}_2\leq \frac{3}{2}\norm{w'}_2$ when $ C_V\sqrt{d_\delta} \norm{w'}_2 \eta_{\epsilon_{\alpha},\delta} \leq \frac{1}{3}$ as discussed in the proof of Lemma~\ref{lmm:min_norm_hatw_delta}.
Now all is left is to bound term (b).
We cannot bound term (b) in the same way as that of term (c) because $\norm{\hat w}_2$ is not guaranteed to be bounded by $\norm{w'}_2$.
For $i=d_\delta+1,\ldots, d$, we define
\[\epsilon_i:= \abs{\inner{\psi_i}{\hat A_i}} \,.\]
For any $i,j=d_\delta+1,\ldots, d$, $\psi_i$ is perpendicular to $V^{\pi_1}$, thus
$\abs{\inner{\psi_i}{\hat A_j}} = \abs{\inner{\psi_i}{\hat \alpha_{j-1}V^{\pi_1} - V^{\pi_j}}}= \abs{\inner{\psi_i}{V^{\pi_j}}}$.
In particular, we have
\[
\epsilon_i = \abs{\inner{\psi_i}{\hat A_i}} = \abs{\inner{\psi_i}{V^{\pi_i}}}\,.
\]
Let $\hat A_i^\parallel := \hat A_i- \sum_{j=d_\delta+1}^{d}\inner{\hat A_i}{\psi_j}\psi_j$ denote $\hat A_i$'s projection into $\mathrm{span}(\hat A_1,\ldots,\hat A_{d_\delta})$.
Since $\hat A_i$ has zero component in direction $\psi_j$ for $j>i$, we have $\hat A_i^\parallel = \hat A_i- \sum_{j=d_\delta+1}^{i}\inner{\hat A_i}{\psi_j}\psi_j$.
Then, we have
\begin{align*}
0= \inner{\hat w}{\hat A_i} = \hat w\cdot \hat A_i^\parallel + \hat w\cdot \sum_{j=d_\delta+1}^i\inner{\hat A_i}{\psi_j}\psi_j= \hat w\cdot \hat A_i^\parallel - \sum_{j=d_\delta+1}^i\inner{V^{\pi_i}}{\psi_j}\inner{\hat w}{\psi_j}\,,
\end{align*}
where the first equation holds due to $\hat A \hat w = {\bf e}_1$.
By rearranging terms, we have
\begin{align}
\inner{V^{\pi_i}}{\psi_i} \inner{ \hat w}{\psi_i} = \hat w\cdot \hat A_i^\parallel - \sum_{j=d_\delta+1}^{i-1}\inner{V^{\pi_i}}{\psi_j}\inner{\hat w}{\psi_j}\,.\label{eq:wui}
\end{align}
Recall that at iteration $j$ of Algorithm~\ref{alg:policy-threshold-free}, in line~\ref{alg-line:orthonormal-basis} we pick an orthonormal basis $\rho_1,\ldots, \rho_{k+1-j}$ of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{j-1}})^\perp$. Since $\psi_j$ is in $ \mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{j-1}})^\perp$ according to the definition of $\psi_j$,
$\abs{\inner{V^{\pi_i}}{\psi_j}}$ is no greater than the norm of $V^{\pi_i}$'s projection into $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{j-1}})^\perp$.
Therefore, we have
\begin{align}
&\abs{\inner{V^{\pi_i}}{\psi_j}} \leq \sqrt{k} \max_{l\in [k+1-j]} \abs{\inner{V^{\pi_i}}{\rho_l}}\stackrel{(d)}{\leq}
\sqrt{k} \max_{l\in [k+1-j]} \max(\abs{\inner{V^{\pi^{\rho_l}}}{\rho_l}},\abs{\inner{V^{\pi^{-\rho_l}}}{-\rho_l}})\nonumber\\
\stackrel{(e)}{=}&
\sqrt{k} \abs{\inner{V^{\pi_j}}{u_j}}\stackrel{(f)}{\leq} \sqrt{k} \abs{\inner{V^{\pi_j}}{\psi_j}}=\sqrt{k}\epsilon_j\,,\label{eq:psi-u}
\end{align}
where inequality (d) holds because $\pi^{\rho_l}$ is the optimal personalized policy with respect to the preference vector $\rho_l$, and equality (e) holds due to the definition of $u_j$ (line~\ref{alg-line:largest-component} of Algorithm~\ref{alg:policy-threshold-free}).
Inequality (f) holds since $\inner{V^{\pi_j}}{\psi_j}$ is the norm of $V^{\pi_j}$'s projection onto $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{j-1}})^\perp$ and $u_j$ belongs to $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{j-1}})^\perp$.
By taking absolute value on both sides of Eq~\eqref{eq:wui}, we have
\begin{align}
\epsilon_i \abs{\inner{ \hat w}{\psi_i}}
= \abs{\hat w\cdot \hat A_i^\parallel - \sum_{j=d_\delta+1}^{i-1}\inner{V^{\pi_i}}{\psi_j}\inner{\hat w}{\psi_j}}
\leq \abs{\hat w\cdot \hat A_i^\parallel} + \sqrt{k}\sum_{j=d_\delta+1}^{i-1}\epsilon_j\abs{ \inner{\hat w}{\psi_j}}\,.\label{eq:inducteps}
\end{align}
We can now bound $\abs{\hat w\cdot \hat A_i^\parallel}$ as follows.
\begin{align}
\abs{\hat w\cdot \hat A_i^\parallel} &= \abs{\hat w ^{(\delta)}\cdot \hat A_i^\parallel}\label{eq:hatwwdelta}\\
&= \abs{\hat w ^{(\delta)}\cdot (\hat A_i- \sum_{j=d_\delta+1}^{d}\inner{\hat A_i}{\psi_j}\psi_j)} = \abs{\hat w ^{(\delta)}\cdot \hat A_i}\label{eq:applyminnorm}\\
&\leq \abs{\hat w ^{(\delta)}\cdot A_i} + \abs{\hat w ^{(\delta)}\cdot (\hat A_i-A_i)}\nonumber\\
&\leq \abs{w'\cdot A_i} + \abs{(\hat w ^{(\delta)}-w')\cdot A_i} + \abs{\hat w ^{(\delta)}\cdot (\hat A_i-A_i)}\nonumber\\
&\leq 0+ (C_{\alpha}+1)\sup_{\pi} \abs{(\hat w ^{(\delta)}-w')\cdot V^\pi} + C_V \norm{\hat w ^{(\delta)}}_2 \epsilon_{\alpha}\nonumber\\
&\leq C'C_{\alpha}\epsilon^{(\delta)}\nonumber\,,
\end{align}
for some constant $C'>0$.
Eq~\eqref{eq:hatwwdelta} holds because $\hat A^{(\delta)} \hat w = \hat A^{(\delta)} \hat w ^{(\delta)} = {\bf e}_1$ and $\hat A_i^\parallel$ belongs to $\mathrm{span}(\hat A^{(\delta)})$.
Eq~\eqref{eq:applyminnorm} holds because $\hat w ^{(\delta)}$ is the minimum norm solution to $\hat A^{(\delta)} x = {\bf e}_1$, which implies that $\hat w ^{(\delta)} \cdot \psi_i = 0$.
The last inequality follows by applying Lemma~\ref{lmm:gap-hatw-w}.
We will bound $\epsilon_i \abs{\inner{ \hat w}{\psi_i}}$ by induction on $i=d_\delta+1,\ldots,d$.
In the base case of $i=d_\delta+1$,
\begin{align*}
\epsilon_{d_\delta+1} \abs{\inner{ \hat w}{\psi_{d_\delta+1}}}\leq \abs{\hat w\cdot \hat A_{d_\delta+1}^\parallel} \leq C'C_{\alpha}\epsilon^{(\delta)}\,.
\end{align*}
Then, by induction through Eq~\eqref{eq:inducteps}, we have for $i = d_\delta+2,\ldots,d$,
\[\epsilon_i \abs{\inner{ \hat w}{\psi_i}} \leq (\sqrt{k} +1)^{i-d_\delta-1} C'C_{\alpha}\epsilon^{(\delta)}\,.\]
Similarly to the derivation of Eq~\eqref{eq:psi-u}, since we pick an orthonormal basis $\rho_1,\ldots,\rho_{k+1-i}$ of $\mathrm{span}(V^{\pi_1},\ldots,V^{\pi_{i-1}})^\perp$ in line~\ref{alg-line:orthonormal-basis} of Algorithm~\ref{alg:policy-threshold-free}, we have that, for any policy $\pi$,
\begin{align*}
\abs{\inner{V^\pi}{\psi_i}} \leq \sqrt{k} \max_{l\in [k+1-i]} \abs{\inner{V^{\pi}}{\rho_l}}\leq \sqrt{k} \abs{\inner{V^{\pi_{i}}}{u_i}} \leq \sqrt{k}\abs{\inner{V^{\pi_{i}}}{\psi_i}} = \sqrt{k}\epsilon_i\,.
\end{align*}
Then we have that term (b) is bounded by
\begin{align*}
(b) =& \abs{\sum_{i=d_\delta+1}^{d} \inner{V^\pi}{\psi_i} \inner{\hat w}{\psi_i}}
\leq \sum_{i=d_\delta+1}^{d} \abs{\inner{V^\pi}{\psi_i}}\cdot \abs{{\inner{\hat w}{\psi_i}}}\\
\leq& \sqrt{k}\sum_{i=d_\delta+1}^{d} \epsilon_i \abs{{\inner{\hat w}{\psi_i}}}
\leq (\sqrt{k} +1)^{d-d_\delta} C'C_{\alpha}\epsilon^{(\delta)}\,.
\end{align*}
Hence we have that for any policy $\pi$,
\begin{equation*}
\abs{\inner{\hat w}{V^{\pi}} - \inner{\hat w ^{(\delta)}}{V^{\pi}}} \leq (\sqrt{k} +1)^{d-d_\delta} C'C_{\alpha}\epsilon^{(\delta)} +\frac{3}{2} \sqrt{k}\delta\norm{w'}_2\,.
\end{equation*}
\end{proof}
\section{Dependency on $\epsilon$}\label{app:eps-dependence}
In this section, we would like to discuss a potential way of improving the dependency on $\epsilon$ in Theorems~\ref{thm:est-without-tau} and \ref{thm:est-with-threshold}.
Consider a toy example where the three returned basis policies are $\pi_1$ with $V^{\pi_1} =(1,0,0)$, $\pi_2$ with $V^{\pi_2} =(1,1,1)$, and $\pi_3$ with $V^{\pi_3} = (1, \eta,-\eta)$ for some $\eta>0$, and where $w^* = (1,w_2,w_3)$ for some $w_2,w_3$.
The estimated ratio $\hat \alpha_1$ lies in $ [1+w_2+w_3-\epsilon,1+w_2+w_3+\epsilon]$, and $\hat \alpha_2$ lies in $ [1+\eta w_2-\eta w_3 - \epsilon,1+\eta w_2-\eta w_3 + \epsilon]$.
Suppose that $\hat \alpha_1 = 1+w_2+w_3+\epsilon$ and $\hat \alpha_2 = 1+\eta w_2-\eta w_3 + \epsilon$.
By solving
\begin{equation*}
\begin{pmatrix}
1 & 0 & 0\\
w_2+w_3+\epsilon & -1 & -1\\
\eta w_2-\eta w_3 + \epsilon & -\eta & \eta
\end{pmatrix}
\hat w = \begin{pmatrix}
1\\ 0 \\ 0
\end{pmatrix}\,
\end{equation*}
we can derive $\hat w_2 = w_2 +\frac{\epsilon}{2}(1+\frac{1}{\eta})$ and $\hat w_3 = w_3 +\frac{\epsilon}{2}(1-\frac{1}{\eta})$.
The measure of sub-optimality we care about is $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{w^*}{V^\pi}}$, which is upper bounded by $C_V \norm{\hat w - w^*}_2$.
But the $\ell_2$ distance between $\hat w$ and $w^*$ depends on the condition number of $\hat A$, which is large when $\eta$ is small.
To obtain a non-vacuous upper bound in Section~\ref{sec:policy-level}, we introduce another estimate $\hat w^{(\delta)}$ based on the truncated version of $\hat A$ and then upper bound $\norm{\hat w^{(\delta)} - w^*}_2$ and $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{\hat w^{(\delta)}}{V^\pi}}$ separately.
However, it is unclear if $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{w^*}{V^\pi}}$ depends on the condition number of $\hat A$.
Due to the construction of Algorithm~\ref{alg:policy-threshold-free}, we can obtain some extra information about the set of all policy values.
First, since we find $\pi_2$ before $\pi_3$, $\eta$ must be no greater than $1$.
According to the algorithm, $\pi_2$ is the optimal policy when the preference vector is $u_2$ (see line~\ref{alg-line: returnedu} of Algorithm~\ref{alg:policy-threshold-free} for the definition of $u_2$) and $\pi_3$ is the optimal policy when the preference vector is $u_3 = (0,1,-1)$.
Note that the angle between $u_2$ and $V^{\pi_2}$ is no greater than 45 degrees according to the definition of $u_2$.
Then the values of all policies can only lie in the small box $B = \{x\in \mathbb{R}^3| \abs{u_2^\top x}\leq \abs{\inner{u_2}{V^{\pi_2}}}, \abs{u_3^\top x}\leq \abs{\inner{u_3}{V^{\pi_3}}}\}$.
It is straightforward to check that for any $x\in B$, $\abs{\inner{\hat w}{x} - \inner{w^*}{x}} < (1+\sqrt{2})\epsilon$.
This example illustrates that even when the condition number of $\hat A$ is large, $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{w^*}{V^\pi}}$ can be small.
It is unclear if this holds in general.
Applying this additional information to upper bound $\sup_\pi \abs{\inner{\hat w }{V^\pi} - \inner{w^*}{V^\pi}}$ directly instead of through bounding $C_V \norm{\hat w - w^*}_2$ is a possible way of improving the term $\epsilon^\frac{1}{3}$.
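The closed-form solution for $\hat w$ derived above is easy to verify numerically. The sketch below (with illustrative values for $w_2, w_3, \eta, \epsilon$ that we chose arbitrarily) solves the perturbed linear system and checks the stated expressions for $\hat w_2$ and $\hat w_3$:

```python
import numpy as np

# Illustrative values (not from the paper): any w2, w3, eta > 0, eps > 0 work.
w2, w3, eta, eps = 0.4, 0.3, 0.1, 1e-3

# The perturbed system \hat A \hat w = e_1 from the toy example.
A_hat = np.array([
    [1.0, 0.0, 0.0],
    [w2 + w3 + eps, -1.0, -1.0],
    [eta * w2 - eta * w3 + eps, -eta, eta],
])
e1 = np.array([1.0, 0.0, 0.0])
w_hat = np.linalg.solve(A_hat, e1)

# Closed-form solution derived in the text.
w2_pred = w2 + (eps / 2) * (1 + 1 / eta)
w3_pred = w3 + (eps / 2) * (1 - 1 / eta)
print(w_hat, w2_pred, w3_pred)
```

Note that the blow-up factor $1/\eta$ in $\hat w_2$ and $\hat w_3$ is exactly the ill-conditioning discussed above.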
\section{Proof of Lemma~\ref{lmm:gap-hatw-w}}
\lmmhatww*
To prove Lemma~\ref{lmm:gap-hatw-w}, we will first define a matrix, $ A^\textrm{(full)}$.
Given the output $(V^{\pi_1},\ldots, V^{\pi_{d}})$ of Algorithm~\ref{alg:policy-threshold-free}, we have $\mathrm{dim}(\mathrm{span}(\{V^{\pi_1},\ldots, V^{\pi_{d_\delta}}\})) = d_\delta$.
Let $b_1,\ldots,b_{d-d_\delta}$ be a set of orthonormal vectors that are orthogonal to $\mathrm{span}(V^{\pi_1},\ldots, V^{\pi_{d_\delta}})$ and together with $V^{\pi_1},\ldots, V^{\pi_{d_\delta}}$ form a basis for $\mathrm{span}(\{V^\pi|\pi\in \Pi\})$.
We define $A^\textrm{(full)}\in \mathbb{R}^{d\times k}$ as the matrix of replacing the last $d-d_\delta$ rows of $A$ with $b_1,\ldots,b_{d-d_\delta}$, i.e.,
\begin{equation*}
A^\textrm{(full)} = \begin{pmatrix}
V^{\pi_1\top}\\
(\alpha_1 V^{\pi_1} -V^{\pi_2})^\top\\
\vdots\\
(\alpha_{d_\delta-1} V^{\pi_1} -V^{\pi_{d_\delta}})^\top\\
b_1^\top\\
\vdots\\
b_{d-d_\delta}^\top
\end{pmatrix}\,.
\end{equation*}
\begin{observation}
We have that $\mathrm{span}(A^\textrm{(full)}) = \mathrm{span}(\{V^\pi|\pi\in \Pi\})$ and $\mathrm{rank}(A^\textrm{(full)}) =d$.
\end{observation}
\begin{restatable}{lemma}{lmmwwprime}\label{lmm:gap-w-w'}
For all $w \in \mathbb{R}^k$ satisfying $A^\textrm{(full)} w = {\bf e}_1$, we have $\abs{w\cdot V^\pi - w'\cdot V^\pi}\leq \sqrt{k} \delta \norm{w'}_2$ for all $\pi$.
\end{restatable}
We then show that there exists a $w \in \mathbb{R}^k$ satisfying $A^\textrm{(full)} w = {\bf e}_1$ such that $\abs{\hat w ^{(\delta)}\cdot V^\pi - w\cdot V^\pi}$ is small for all $\pi\in \Pi$.
\begin{restatable}{lemma}{lmmminnorm}\label{lmm:min_norm_hatw_delta}
If $\abs{\hat \alpha_i-\alpha_i}\leq \epsilon_{\alpha}$ and $\alpha_i\leq C_{\alpha}$ for all $i\in [d-1]$,
for every $\delta \geq 4 C_{\alpha}^{\frac{2}{3}}C_V d^{\frac{1}{3}}\epsilon_{\alpha}^\frac{1}{3}$
there exists a $w \in \mathbb{R}^k$ satisfying $A^\textrm{(full)} w = {\bf e}_1$ s.t. $\abs{\hat w ^{(\delta)}\cdot V^\pi - w\cdot V^\pi}\leq \cO( \frac{C_{\alpha}C_V^4d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2})$ for all $\pi$.
\end{restatable}
We now derive Lemma~\ref{lmm:gap-hatw-w} using the above two lemmas.
\begin{proof}[Proof of Lemma~\ref{lmm:gap-hatw-w}]
Let $w$ be defined in Lemma~\ref{lmm:min_norm_hatw_delta}. Then for any policy $\pi$, we have
\begin{align*}
\abs{\hat w ^{(\delta)} \cdot V^\pi - w'\cdot V^\pi} \leq \abs{\hat w ^{(\delta)} \cdot V^\pi - w\cdot V^\pi}+\abs{w\cdot V^\pi - w'\cdot V^\pi}
\leq \cO( \frac{C_{\alpha}C_V^4d_\delta^{\frac{3}{2}}\norm{w'}_2^2\epsilon_{\alpha} }{\delta^2}+\sqrt{k} \delta \norm{w'}_2) \,,
\end{align*}
by applying Lemma~\ref{lmm:gap-w-w'} and \ref{lmm:min_norm_hatw_delta}.
\end{proof}
\section{Discussion}\label{app:discussion}
In this paper, we designed efficient algorithms for learning users' preferences over multiple objectives from comparative feedback. The efficiency is expressed in both the running time and number of queries (both polynomial in $H, \abs{\mathcal{S}}, \abs{\mathcal{A}}, k$ and logarithmic in $1/\epsilon$).
The learned preferences of a user can then be used to reduce the problem of finding a personalized optimal policy for this user to a (finite horizon) single scalar reward MDP, a problem with a known efficient solution.
As we have focused on minimizing the policy comparison queries, our algorithms are based on polynomial time pre-processing calculations that save valuable comparison time for users.
The results in Section~\ref{sec:policy-level} are of independent interest and can be applied to a more general learning setting, where for some unknown linear parameter $w^*$, given a set of points $X$ and access to comparison queries of any two points, the goal is to learn $\argmax_{x\in X} \inner{w^*}{x}$.
For example, consider personalized recommendations for coffee beans in terms of the coffee profile described by the coffee suppliers (body, aroma, crema, roast level, ...). While users may fail to describe their optimal
coffee bean profile, adopting the methodology in Section~\ref{sec:policy-level} can retrieve the ideal coffee beans for a user using comparisons (where the mixing with ``do nothing'' is done by diluting the coffee with water, and the optimal coffee for a given profile is the one closest to it).
When moving from the explicit representation of policies as mappings from states to actions to a more natural policy representation as a weighted trajectory set, we then obtained the same optimality guarantees in terms of the number of queries.
While there could be other forms of policy representations (e.g., a small subset of common states), one advantage of our weighted trajectory set representation is that it captures the essence of the policy's multi-objective value in a clear manner via $\cO(k)$ trajectories and weights. The algorithms provided in Section~\ref{sec:trajectory-level} are standalone and could also be of independent interest for explainable RL~\citep{Alharin20SurveyInt}. For example, to exemplify the multi-objective performance of generic robotic vacuum cleaners (which is beneficial if we only have, e.g., $3$ of them), we can apply the algorithms in Section~\ref{sec:trajectory-level} to generate weighted trajectory set representations and compare them directly, without going through the algorithm in Section~\ref{sec:policy-level}.
An interesting direction for future work is to relax the assumption that the MDP is known in advance. One direct way is to first learn the model (in model-based RL), then apply our algorithms in the learned MDP. The sub-optimality of the returned policy will then depend on both the estimation error of the model and the error introduced by our algorithms (which depends on the parameters in the learned model).
\section{Pseudo Code of Computation of the Basis Ratios}\label{app:bin-search}
The pseudocode for computing the $\hat\alpha_i$'s by binary search is described in Algorithm~\ref{alg:alpha}.
\begin{algorithm}[H]\caption{Computation of Basis Ratios}\label{alg:alpha}
\begin{algorithmic}[1]
\STATE \textbf{input:} $(V^{\pi_1},\ldots, V^{\pi_d})$ and $C_{\alpha}$
\FOR{$i=1,\ldots, d-1$}
\STATE let $l=0$, $h=2C_\alpha$ and $\hat \alpha_i = C_\alpha$
\WHILE{True}
\IF{$\hat \alpha_i>1$}
\STATE compare $\pi_1$ and $\frac{1}{\hat \alpha_i}\pi_{i+1} + (1-\frac{1}{\hat \alpha_i})\pi_0$;
\textbf{if} $\pi_1 \succ \frac{1}{\hat \alpha_i}\pi_{i+1} + (1-\frac{1}{\hat \alpha_i})\pi_0$ \textbf{then} $h\leftarrow \hat \alpha_i$, $\hat \alpha_i\leftarrow \frac{l+h}{2}$; \textbf{if} $\pi_1 \prec \frac{1}{\hat \alpha_i}\pi_{i+1} + (1-\frac{1}{\hat \alpha_i})\pi_0$ \textbf{then} $l\leftarrow \hat \alpha_i$, $\hat \alpha_i\leftarrow \frac{l+h}{2}$
\ELSE
\STATE compare $\pi_{i+1}$ and $\hat{\alpha}_i\pi_1 + (1-\hat{\alpha}_i)\pi_0$; \textbf{if} $\hat{\alpha}_i\pi_1 + (1-\hat{\alpha}_i)\pi_0\succ \pi_{i+1}$ \textbf{then} $h\leftarrow \hat \alpha_i$, $\hat \alpha_i\leftarrow \frac{l+h}{2}$; \textbf{if} $\hat{\alpha}_i\pi_1 + (1-\hat{\alpha}_i)\pi_0\prec \pi_{i+1}$ \textbf{then} $l\leftarrow \hat \alpha_i$, $\hat \alpha_i\leftarrow \frac{l+h}{2}$
\ENDIF
\IF{``indistinguishable'' is returned}
\STATE break
\ENDIF
\ENDWHILE
\ENDFOR
\STATE \textbf{output: } $(\hat \alpha_1,\ldots,\hat \alpha_{d-1})$
\end{algorithmic}
\end{algorithm}
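To make the binary search concrete, the following sketch simulates Algorithm~\ref{alg:alpha} for a single ratio. It is our illustration, not the paper's formal model: policies are represented only by their (hidden) scalar personalized values, $\pi_0$ (``do nothing'') has value $0$, and the comparison oracle answers ``indistinguishable'' whenever the two compared values differ by at most $\epsilon$.

```python
def estimate_ratio(v1, v_next, c_alpha, eps):
    """Binary search for alpha_i = <w*, V^{pi_{i+1}}> / <w*, V^{pi_1}>.

    v1 and v_next play the roles of the user's hidden values for pi_1 and
    pi_{i+1}; pi_0 has value 0. The oracle is "indistinguishable" when the
    two compared mixed values differ by at most eps (our assumption).
    """
    lo, hi, alpha = 0.0, 2.0 * c_alpha, c_alpha
    while True:
        if alpha > 1:
            # compare pi_1 with (1/alpha) pi_{i+1} + (1 - 1/alpha) pi_0
            a, b = v1, v_next / alpha
        else:
            # compare alpha pi_1 + (1 - alpha) pi_0 with pi_{i+1}
            a, b = alpha * v1, v_next
        if abs(a - b) <= eps:   # "indistinguishable": stop
            return alpha
        if a > b:               # the pi_1 side wins -> alpha is too large
            hi = alpha
        else:                   # the pi_{i+1} side wins -> alpha is too small
            lo = alpha
        alpha = (lo + hi) / 2.0
```

In both branches the $\pi_1$ side wins exactly when $\alpha v_1 > v_{\text{next}}$, i.e., when the current guess exceeds the true ratio, so the bracketing interval always contains the true ratio and halves each round.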
\section{Proof of Theorem~\ref{thm:flow-decomposition}}\label{app:flow-decomposition}
\thmflow*
\begin{proof}
\textbf{Correctness: }
The function $f$ defined by Eq~\eqref{eq:fdef} is a well-defined flow
since for all $t=1,\ldots, H$, for all $s\in \Lsup{t}$, we have that
\begin{align*}
&\sum_{s'\in \Lsup{t+1}:(s,s')\in \Esup{t}} f(s,s')
=
\sum_{s'\in \Lsup{t+1}:(s,s')\in \Esup{t}}\sum_{\tau: (s^\tau_t,s^\tau_{t+1}) = (s,s')} q^\pi(\tau)
= \sum_{\tau: s^\tau_t = s}q^\pi(\tau)\\
= &\sum_{s''\in \Lsup{t-1}:(s'',s)\in \Esup{t-1}} f(s'',s)\,.
\end{align*}
In the following,
we first show that Algorithm~\ref{alg:flow-decomposition} will terminate with $f(e)=0$ for all $e\in E$.
First, after each iteration, $f$ is still a feasible $(\xsup{0},\xsup{H+1})$-flow, with the total flow out of $\xsup{0}$ reduced by $f_\tau$.
Besides, for every edge $e$ with $f(e)>0$ at the beginning, we have $f(e)\geq 0$ throughout the algorithm because we never reduce $f(e)$ by an amount greater than $f(e)$.
Then, since $f$ is a $(\xsup{0},\xsup{H+1})$-flow and $f(e)\geq 0$ for all $e\in E$, we can always find a path $\tau$ in line~\ref{alg-line:path} of Algorithm~\ref{alg:flow-decomposition}.
Otherwise, the set of vertices reachable from $\xsup{0}$ through edges with positive flow does not contain $\xsup{H+1}$, yet the flow out of this set equals the total flow out of $\xsup{0}$. But since the other vertices are not reachable, there is no flow out of this set, which is a contradiction.
In line~\ref{alg-line:flowupdate}, there exists at least one edge $e$ such that $f(e)>0$ is reduced to $0$.
Hence, the algorithm will run for at most $\abs{E}$ iterations and terminate with $f(e)=0$ for all $e\in E$.
Thus we have that for any $(s,s')\in \Esup{t}$, $f(s,s')=\sum_{(\tau,f_\tau)\in Q: (s^\tau_t,s^\tau_{t+1}) = (s,s')}f_\tau$.
Then we have
\begin{align*}
V^\pi &= \sum_{\tau}q^\pi(\tau) \Phi(\tau)
= \sum_{\tau}q^\pi(\tau) \left( \sum_{t=0}^{H-1} R(s^\tau_t,\pi(s^\tau_t))\right) \\
& = \sum_{t=0}^{H-1} \sum_{\tau}q^\pi(\tau) R(s^\tau_t,\pi(s^\tau_t))\\
& = \sum_{t=0}^{H-1} \sum_{(s,s')\in \Esup{t}} R(s,\pi(s)) \sum_{\tau: (s^\tau_t,s^\tau_{t+1}) =(s,s')}q^\pi(\tau) \\
& = \sum_{t=0}^{H-1} \sum_{(s,s')\in \Esup{t}}R(s,\pi(s)) f(s,s')\\
& = \sum_{t=0}^{H-1} \sum_{(s,s')\in \Esup{t}}R(s,\pi(s)) \sum_{(\tau,f_\tau)\in Q: (s^\tau_t,s^\tau_{t+1}) = (s,s')}f_\tau \\
& = \sum_{(\tau,f_\tau)\in Q}f_\tau ( \sum_{t=0}^{H-1} R(s^\tau_t,\pi(s^\tau_t)))\\
& =\sum_{(\tau,f_\tau)\in Q}f_\tau \Phi(\tau)\,.
\end{align*}
\textbf{Computational complexity: }
Computing $f$ by dynamic programming takes $\cO(\abs{E})$ time.
The algorithm will run for $\cO(\abs{E})$ iterations and each iteration takes $\cO(H)$ time.
Since $\abs{E} = \cO(\abs{\mathcal{S}}^2 H)$, the total running time of Algorithm~\ref{alg:flow-decomposition} is $\cO(\abs{\mathcal{S}}^2 H^2)$.
Running C4 on the output takes an additional $\cO(k^3\abs{E})$ time.
\end{proof}
\section{Proof of Theorem~\ref{thm:est-with-threshold}}\label{app:alg_with_threshold}
\thmestwithtau*
\begin{proof}[Proof of Theorem~\ref{thm:est-with-threshold}]
As shown in Lemma~\ref{lmm:V1}, we set $C_\alpha = 2k$ and have
$\epsilon_{\alpha} = \frac{4k^2\epsilon}{v^*}$.
We have $\norm{w'}_2 = \frac{\norm{w^*}_2}{\inner{w^*}{V^{\pi_1}}}$ and showed that $\inner{w^*}{V^{\pi_1}} \geq \frac{v^*}{2k}$ in the proof of Lemma~\ref{lmm:V1}.
By applying Lemma~\ref{lmm:gap-hatw-w} and setting $\delta = \left(\frac{C_V^4k^5 \norm{w^*}_2\epsilon}{v^{*2}}\right)^{\frac{1}{3}}$,
we have
\begin{align*}
&v^*- \inner{w^*}{V^{\pi^{\hat w ^{(\delta)}}}} = \inner{w^*}{V^{\pi_1}} \left(\inner{w'}{V^{\pi^*}} - \inner{w'}{V^{\pi^{\hat w ^{(\delta)}}}}\right)
\\
\leq& \inner{w^*}{V^{\pi_1}}\left(\inner{\hat w ^{(\delta)}}{V^{\pi^*}} - \inner{\hat w ^{(\delta)}}{V^{\pi^{\hat w ^{(\delta)}}}} +\cO(\sqrt{k}\norm{w'}_2 (\frac{C_V^4 k^5 \norm{w^*}_2\epsilon}{v^{*2}})^\frac{1}{3})\right)\\
=&\cO(\sqrt{k}\norm{w^*}_2 (\frac{C_V^4 k^5 \norm{w^*}_2\epsilon}{v^{*2}})^\frac{1}{3})\,.
\end{align*}
\end{proof}
\section{Flow Decomposition Based Approach}\label{app:approach-flow-based}
We first introduce an algorithm based on the idea of flow decomposition.
For that, we construct a layer graph $G=((\Lsup{0}\cup\ldots \cup\Lsup{H+1}), E)$ with $H+2$ pairwise disjoint layers $\Lsup{0},\ldots,\Lsup{H+1}$, where every layer $t\leq H$ contains a set of vertices labeled by the (possibly duplicated) states reachable at the corresponding time step $t$, i.e., $\{s\in \mathcal{S} \vert \Pr(S_t =s \vert S_0 = s_0) >0\}$.
Let us denote by $\xsup{t}_s$ the vertex in $\Lsup{t}$ labeled by state $s$.
Layer $\Lsup{H+1} =\{\xsup{H+1}_*\}$ contains only an artificial vertex, $\xsup{H+1}_*$, labeled by an artificial state $*$.
For $t=0,\ldots,H-1$, for every $\xsup{t}_s\in \Lsup{t}$, $\xsup{t+1}_{s'}\in \Lsup{t+1}$, we connect $\xsup{t}_s$ and $\xsup{t+1}_{s'}$ by an edge labeled by $(s,s')$ if $P(s' \vert s, \pi(s))>0$.
Every vertex $\xsup{H}_s$ in layer $H$ is connected to $\xsup{H+1}_*$ by one edge, which is labeled by $(s, *)$.
We denote by $\Esup{t}$ the edges between $\Lsup{t}$ and $\Lsup{t+1}$.
Note that every trajectory $\tau = (s_0,s_1,\ldots,s_H)$ corresponds to a single path $(\xsup{0}_{s_0},\xsup{1}_{s_1},\ldots, \xsup{H}_{s_H},\xsup{H+1}_*)$ of length $H+2$ from $\xsup{0}_{s_0}$ to $\xsup{H+1}_*$.
This is a one-to-one mapping and in the following, we use path and trajectory interchangeably.
The policy $\pi$ corresponds to a $(\xsup{0}_{s_0},\xsup{H+1}_*)$-flow with flow value $1$ in the graph $G$.
In particular, the flow is defined as follows.
When the layer $t$ is clear from the context, we refer to the vertex $\xsup{t}_s$ simply as vertex $s$.
For $t=0,\ldots,H-1$, for any edge $(s,s') \in \Esup{t}$, let $f:E\rightarrow \mathbb R^+$ be defined as
\begin{equation}
f(s,s') = \sum_{\tau: (s^\tau_t,s^\tau_{t+1}) = (s,s')} q^\pi(\tau)\,,\label{eq:fdef}
\end{equation}
where $q^\pi(\tau)$ is the probability of $\tau$ being sampled.
For any edge $(s, *)\in \Esup{H}$, let
$f(s, *) = \sum_{(s',s)\in \Esup{H-1}} f(s',s)$.
It is straightforward to check that the function $f$ is a well-defined flow.
We can therefore compute $f$ by dynamic programming.
For all $(s_0,s)\in \Esup{0}$, we have $f(s_0,s) = P(s|s_0,\pi(s_0))$ and for $(s,s')\in \Esup{t}$,
\begin{equation}
f(s,s')= P(s' \vert s, \pi(s)) \sum_{s'': (s'',s)\in \Esup{t-1}} f(s'',s)\,.\label{eq:flowinduction}
\end{equation}
Now we are ready to present our algorithm by decomposing $f$ in Algorithm~\ref{alg:flow-decomposition}.
Each iteration in Algorithm~\ref{alg:flow-decomposition} will zero out at least one edge and thus, the algorithm will stop within $\abs{E}$ rounds.
\begin{algorithm}[H]\caption{Flow decomposition based approach}\label{alg:flow-decomposition}
\begin{algorithmic}[1]
\STATE initialize $Q\leftarrow \emptyset$.
\STATE calculate $f(e)$ for all edge $e\in E$ by dynamic programming according to Eq~\eqref{eq:flowinduction}
\WHILE{$\exists e\in E$ s.t. $f(e)>0$}
\STATE pick a path $\tau=(s_0,s_1,\ldots,s_H,*)\in \Lsup{0}\times\Lsup{1}\times\ldots\times\Lsup{H+1}$
s.t. $f(s_i,s_{i+1})>0\; \forall i\geq 0$
\label{alg-line:path}
\STATE $f_\tau \leftarrow \min_{e\text{ in }\tau} f(e)$
\STATE $Q\leftarrow Q\cup\{(\tau,f_\tau)\}$, $f(e) \leftarrow f(e)-f_\tau$ for $e$ in $\tau$\label{alg-line:flowupdate}
\ENDWHILE
\STATE output $Q$
\end{algorithmic}
\end{algorithm}
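The main loop of Algorithm~\ref{alg:flow-decomposition} can be sketched in a few lines. The sketch below is our illustration: the flow is given directly as a dictionary over edges of a tiny layered graph, rather than computed from an MDP via Eq~\eqref{eq:flowinduction}.

```python
def decompose(flow, source, sink, tol=1e-12):
    """Split an acyclic (source, sink)-flow into weighted source-to-sink paths."""
    flow = dict(flow)  # edge (u, v) -> nonnegative flow value
    paths = []
    while any(val > tol for val in flow.values()):
        # "pick a path": follow positive-flow edges from source to sink
        path, node = [source], source
        while node != sink:
            node = next(v for (u, v), val in flow.items()
                        if u == node and val > tol)
            path.append(node)
        # subtract the bottleneck; at least one edge on the path zeroes out
        edges = list(zip(path[:-1], path[1:]))
        f_tau = min(flow[e] for e in edges)
        for e in edges:
            flow[e] -= f_tau
        paths.append((tuple(path), f_tau))
    return paths


# toy layered flow with total value 1 out of "s0"
demo = {("s0", "a"): 0.6, ("s0", "b"): 0.4, ("a", "t"): 0.6, ("b", "t"): 0.4}
print(decompose(demo, "s0", "t"))
```

Each iteration zeroes out at least one edge, mirroring the $\abs{E}$-iteration bound used in the proof above.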
\begin{restatable}{theorem}{thmflow}\label{thm:flow-decomposition}
Algorithm~\ref{alg:flow-decomposition} outputs $Q$ satisfying that $\sum_{(\tau,f_\tau) \in Q} f_\tau \Phi(\tau) = V^{\pi}$ in time $\cO( H^2\abs{\mathcal{S}}^2)$.
%
\end{restatable}
The core idea of the proof is that for any edge $(s,s')\in \Esup{t}$, the flow on $(s,s')$ captures the probability of $S_t=s\wedge S_{t+1}=s'$ and thus, the value of the policy $V^\pi$ is linear in $\{f(e)|e\in E\}$.
The output $Q$ contains at most $\abs{E}$ weighted paths (trajectories).
We can further compress the representation through C4, which takes $\cO(\abs{Q}k^3)$ time.
\begin{corollary}
Executing Algorithm~\ref{alg:flow-decomposition} with the output $Q$ first and then running C4 over $\{(\Phi(\tau),f_\tau)|(\tau,f_\tau)\in Q\}$ returns a $(k+1)$-sized weighted trajectory representation in time $\cO(H^2\abs{\mathcal{S}}^2 +k^3H\abs{\mathcal{S}}^2 )$.
\end{corollary}
We remark that the running time of this flow decomposition approach underperforms that of the expanding and compressing approach (see Theorem~\ref{thm:traj-compression}) whenever $|\mathcal{S}|H + |\mathcal{S}|k^3 = \omega(k^4 + k|\mathcal{S}|)$.
\section{Example of maximizing individual objective}\label{app:eg-linear-dependent}
\begin{observation}
Assume there exist $k>2$ policies that together assemble $k$ linearly independent value vectors. Consider the $k$ different policies $\pi^*_1,\dots,\pi^*_k$ such that each $\pi^*_i$ maximizes objective $i\in[k]$. Then, their respective value vectors $V^*_1,\dots,V^*_k$ are not necessarily linearly independent. Moreover, if $V^*_1,\dots,V^*_k$ are linearly dependent, it does not mean that $k$ linearly independent value vectors do not exist.
\end{observation}
\begin{proof}
For simplicity, we show an example with a horizon of $H=1$, but the results can be extended to any $H\geq 1$. We will show an example with $4$ different value vectors, where $3$ of them are obtained by the $k=3$ policies that maximize the $3$ objectives and are linearly dependent.
Consider an MDP with a single state (also known as a multi-armed bandit) with $4$ actions with deterministic reward vectors (which are also the expected values of the $4$ possible policies in this case):
\[
r(1)=
\begin{pmatrix}
8\\4\\2
\end{pmatrix},
\quad
r(2)=
\begin{pmatrix}
1\\2\\3
\end{pmatrix},
\quad
r(3)=
\begin{pmatrix}
85/12\\25/6\\35/12
\end{pmatrix}\approx
\begin{pmatrix}
7.083\\4.167\\2.9167
\end{pmatrix},
\quad
r(4)=
\begin{pmatrix}
1\\3\\2
\end{pmatrix}.
\]
Denote $\pi^a$ as the fixed policy that always selects action $a$.
Clearly, policy $\pi^1$ maximizes the first objective, policy $\pi^2$ the third, and policy $\pi^3$ the second ($\pi^4$ does not maximize any objective).
However,
\begin{itemize}
\item $r(3)$ linearly depends on $r(1)$ and $r(2)$ as
\[
\frac{5}{6}r(1)+\frac{5}{12}r(2)=r(3).
\]
\item In addition, $r(4)$ is linearly independent of $r(1),r(2)$: Assume not. Then, there exist $\beta_1,\beta_2\in \mathbb{R}$ s.t.:
\[
\beta_1\cdot r(1)+\beta_2\cdot r(2) =
\begin{pmatrix}
8\beta_1 +\beta_2 \\4\beta_1 +2\beta_2 \\2\beta_1 +3\beta_2
\end{pmatrix}
=\begin{pmatrix}
1\\3\\2
\end{pmatrix}= r(4).
\]
Hence, the first two equations imply $\beta_2=1-8\beta_1$ and $4\beta_1+2-16\beta_1=3$, hence $\beta_1=-\frac{1}{12}$ and $\beta_2=\frac{5}{3}$. Substituting into the third equation yields $-\frac{1}{6}+5=2$, which is a contradiction.
\end{itemize}
\end{proof}
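Both claims in the proof can be double-checked numerically. The sketch below verifies the dependence relation for $r(3)$ and confirms that $r(4)$ leaves a nonzero least-squares residual against $\mathrm{span}(r(1), r(2))$:

```python
import numpy as np

# Reward vectors from the example.
r1 = np.array([8.0, 4.0, 2.0])
r2 = np.array([1.0, 2.0, 3.0])
r3 = np.array([85 / 12, 25 / 6, 35 / 12])
r4 = np.array([1.0, 3.0, 2.0])

# r(3) = (5/6) r(1) + (5/12) r(2): linear dependence.
assert np.allclose(5 / 6 * r1 + 5 / 12 * r2, r3)

# Best fit of r(4) in span(r(1), r(2)) leaves a nonzero residual,
# so r(4) is linearly independent of r(1), r(2).
B = np.stack([r1, r2], axis=1)  # 3 x 2 matrix with columns r1, r2
beta = np.linalg.lstsq(B, r4, rcond=None)[0]
print(np.linalg.norm(B @ beta - r4))
```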
\section{Proof of Theorem~\ref{thm:cara}}\label{app:Caratheodory}
\cara*
\begin{proof}
The proof is similar to the proof of Carathéodory's theorem.
Given the vectors $\mu_1,\ldots,\mu_{k+2}$ picked in line~\ref{alg-line:anyvecs} of Algorithm~\ref{alg:Caratheodory} and their probability masses $p(\mu_i)$, we solve for $x\in \mathbb{R}^{k+2}$ s.t. $\sum_{i=1}^{k+2} x_i (\mu_i\circ 1) = {\bm 0}$ in the algorithm.
Note that a non-zero solution $x$ exists because $\{\mu_i\circ 1\,|\,i\in[k+2]\}$ are linearly dependent.
Besides, $x$ satisfies $\sum_{i=1}^{k+2} x_i = 0$.
Therefore, $$\sum_{i=1}^{k+2} (p(\mu_i) - \gamma x_i) = \sum_{i=1}^{k+2} p(\mu_i).$$
For all $i$, if $x_i<0$, then $p(\mu_i) - \gamma x_i \geq 0$ as $\gamma>0$; if $x_i>0$, then $\frac{x_i}{p(\mu_i)}\leq \frac{x_{i_0}}{p(\mu_{i_0})}=\frac{1}{\gamma}$ and thus $p(\mu_i) - \gamma x_i\geq 0$.
Hence, after one iteration, the updated $p$ is still a probability distribution over $M$ (i.e., $p(\mu)\geq 0$ for all $\mu\in M$ and $\sum_{\mu\in M}p(\mu)=1$).
Besides, $\sum_{i=1}^{k+2} (p(\mu_i) - \gamma x_i) \mu_i = \sum_{i=1}^{k+2} p(\mu_i)\mu_i - \gamma \sum_{i=1}^{k+2} x_i \mu_i = \sum_{i=1}^{k+2} p(\mu_i)\mu_i$.
Therefore, after one iteration, the expected value $\EEs{\mu\sim p}{\mu}$ is unchanged.
When we finally output $(M',q)$, we have that $q$ is a distribution over $M$ and that $\EEs{\mu\sim q}{\mu} =\EEs{\mu\sim p}{\mu}$.
Due to line~\ref{alg-line:flipsign} of the algorithm, we know that $x_{i_0}>0$.
Hence $p(\mu_{i_0}) - \gamma x_{i_0} = p(\mu_{i_0}) - \frac{p(\mu_{i_0})}{x_{i_0}} x_{i_0} =0$.
We remove at least one vector $\mu_{i_0}$ from $M$, so the algorithm runs for at most $\abs{M}$ iterations.
Finally, solving for $x$ takes $\cO(k^3)$ time and thus, Algorithm~\ref{alg:Caratheodory} takes $\cO(\abs{M}k^3)$ time in total.
\end{proof}
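For concreteness, the reduction step can be sketched as follows. This is our illustrative implementation, not the paper's exact algorithm: it solves for $x$ via an SVD null-space computation and adds the tolerance handling that the pseudocode leaves implicit.

```python
import numpy as np

def compress(points, p, tol=1e-12):
    """Reduce the support of a distribution over points in R^k to at most
    k + 1 points while preserving the mean (Caratheodory-style)."""
    points, p = list(points), list(p)
    k = len(points[0])
    while len(points) > k + 1:
        # lift the first k+2 points to (mu, 1); the lifts are linearly dependent
        M = np.stack([np.append(points[i], 1.0) for i in range(k + 2)], axis=1)
        x = np.linalg.svd(M)[2][-1]   # nonzero x with M @ x = 0
        if x.max() <= tol:            # flip sign so that some x_i > 0
            x = -x
        # i_0 attains the largest ratio x_i / p_i; gamma = p_{i_0} / x_{i_0}
        i0 = max((i for i in range(k + 2) if x[i] > tol),
                 key=lambda i: x[i] / p[i])
        gamma = p[i0] / x[i0]
        for i in range(k + 2):        # p stays a distribution, mean unchanged
            p[i] -= gamma * x[i]
        p[i0] = 0.0                   # mu_{i_0} leaves the support
        kept = [(m, w) for m, w in zip(points, p) if w > tol]
        points, p = [m for m, _ in kept], [w for _, w in kept]
    return points, p
```

Since $M x = 0$ encodes both $\sum_i x_i \mu_i = 0$ and $\sum_i x_i = 0$ (via the appended all-ones row), each iteration preserves both the mean and the total mass, exactly as argued in the proof.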
\subsection{Expanding and Compressing Approach}
The basic idea is to find $k+1$ trajectories of length $1$ to represent $V^\pi$ first and then increase the length of the trajectories without increasing the number of trajectories.
For policy $\pi$,
let $V^{\pi}(s,h)= \EEs{S_0 = s}{\sum_{t=0}^{h-1} R(S_t, \pi(S_t))}$ be the value of $\pi$ with initial state $S_0 = s$ and time horizon $h$.
Since we study the representation for a fixed policy $\pi$ in this section, we slightly abuse the notation and represent a trajectory by $\tau^\pi = (s,s_1,\ldots,s_H)$.
We denote the state of trajectory $\tau$ at time $t$ as $s^\tau_t =s_t$.
For a trajectory prefix $\tau = (s, s_1,\ldots,s_h)$ of $\pi$ with initial state $s$ and $h\leq H$ subsequent states, the return of $\tau$ is $\Phi(\tau) = R(s, \pi(s))+\sum_{t=1}^{h-1} R(s_t,\pi(s_t))$.
Let $J(\tau)$ be the expected return of trajectories (of length $H$) with the prefix being $\tau$, i.e.,
$$J(\tau):=\Phi(\tau) + V^\pi(s^\tau_h, H-h)\,.$$
For any $s\in \mathcal{S}$, let $\tau \circ s$ denote the trajectory of appending $s$ to $\tau$.
We can compute $V^\pi(s,h)$ for all $s\in \mathcal{S}, h\in [H]$ by dynamic programming in time $\cO(kH\abs{\mathcal{S}}^2)$.
Specifically, by definition, we have $V^\pi(s,1) = R(s, \pi(s))$ and
\begin{equation}
V^\pi(s,h+1) = R(s,\pi(s)) + \sum_{s'\in \mathcal{S}} P(s' \vert s, \pi(s)) V^\pi(s',h)\,.\label{eq:stepinduction}
\end{equation}
Thus, we can represent $V^\pi$ by
\begin{align*}
V^\pi =& R(s_0,\pi(s_0)) +\sum_{s\in \mathcal{S}} P(s|s_0,\pi(s_0)) V^\pi(s,H-1)\\ =& \sum_{s\in \mathcal{S}} P(s|s_0,\pi(s_0)) J(s_0,s)\,.
\end{align*}
By applying C4, we can find a set of representative trajectories of length $1$, $\Qsup{1}\subset \{(s_0,s)|s\in \mathcal{S}\}$, with $\abs{\Qsup{1}}\leq k+1$ and weights $\betasup{1}\in \mathrm{Simplex}^{\Qsup{1}}$ such that
\begin{align}
V^\pi = \sum_{\tau\in \Qsup{1}}\betasup{1}(\tau) J(\tau)\,.\label{eq:Q1}
\end{align}
Supposing that we are given a set of trajectories $\Qsup{t}$ of length $t$ with weights $\betasup{t}$ such that $V^\pi =\sum_{\tau\in \Qsup{t}} \betasup{t}(\tau)J(\tau)$, we can first increase the length of trajectories by $1$ through Eq~\eqref{eq:stepinduction} and obtain a subset of $\{\tau \circ s|\tau\in \Qsup{t},s\in \mathcal{S}\}$, in which the trajectories are of length $t+1$.
Specifically, we have
\begin{equation}
V^\pi =\sum_{\tau\in \Qsup{t}, s\in \mathcal{S}} \betasup{t}(\tau) P(s \vert s^\tau_t, \pi(s^\tau_t))J(\tau\circ s)\,.\label{eq:expand}
\end{equation}
Then we compress the above convex combination through C4, since we want to keep track of at most $k+1$ trajectories of length $t+1$ to control the computation time.
More formally, let $J_{\Qsup{t}} := \{J(\tau\circ s)|\tau\in \Qsup{t}, s\in \mathcal{S}\}$ be the set of expected returns and $p_{\Qsup{t},\betasup{t}} \in \mathrm{Simplex}^{\Qsup{t}\times \mathcal{S}}$ with $p_{\Qsup{t},\betasup{t}}(\tau\circ s) = \betasup{t}(\tau) P(s \vert s^\tau_t, \pi(s^\tau_t))$ be the weights appearing in Eq~\eqref{eq:expand}.
Here $p_{\Qsup{t},\betasup{t}}$ defines a distribution over $J_{\Qsup{t}}$ with the probability of drawing $J(\tau\circ s)$ being $p_{\Qsup{t},\betasup{t}}(\tau\circ s)$.
Then we can apply C4 over $(J_{\Qsup{t}},p_{\Qsup{t},\betasup{t}})$ and compress the representative trajectories $\{\tau\circ s|\tau\in \Qsup{t},s\in \mathcal{S}\}$.
We start with trajectories of length $1$ and repeat the process of expanding and compressing until we get trajectories of length $H$.
The details are described in Algorithm~\ref{alg:trajcompression}.
\begin{algorithm}[H]\caption{Expanding and compressing trajectories}\label{alg:trajcompression}
\begin{algorithmic}[1]
\STATE compute $V^\pi(s,h)$ for all $s\in \mathcal{S}, h\in [H]$ by dynamic programming according to Eq~\eqref{eq:stepinduction}
\STATE $\Qsup{0} = \{(s_0)\}$ and $\betasup{0}(s_0) = 1$
\FOR{$t=0,\ldots,H-1$}
\STATE $J_{\Qsup{t}} \leftarrow \{J(\tau\circ s)|\tau\in \Qsup{t}, s\in \mathcal{S}\}$ and $p_{\Qsup{t},\betasup{t}}(\tau\circ s) \leftarrow \betasup{t}(\tau) P(s \vert s^\tau_t, \pi(s^\tau_t))$ for $\tau\in \Qsup{t}, s\in \mathcal{S}$ \COMMENT{expanding step}
\STATE $(\Jsup{t+1},\betasup{t+1}) \leftarrow \text{C4}(J_{\Qsup{t}},p_{\Qsup{t},\betasup{t}})$\label{alg-line:Qtplus1}
and $\Qsup{t+1} \leftarrow \{\tau| J(\tau)\in \Jsup{t+1}\}$\COMMENT{compressing step}
\ENDFOR
\STATE output $\Qsup{H}$ and $\betasup{H}$
\end{algorithmic}
\end{algorithm}
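To illustrate the expanding step in isolation, the sketch below runs it on a toy two-state MDP of our own making, with the C4 compression call deliberately omitted (so the prefix set grows, which is affordable at this size). We use the convention that a trajectory is the tuple of its $H$ reward-carrying states, dropping the reward-free final state. The check is that the weighted returns of the expanded prefixes always reproduce $V^\pi$.

```python
import numpy as np

k, H = 2, 3
S = [0, 1]
P = np.array([[0.7, 0.3],   # P[s, s']: transition kernel under the fixed pi
              [0.4, 0.6]])
R = np.array([[1.0, 0.0],   # R[s]: k-dimensional reward under pi
              [0.0, 1.0]])
s0 = 0

# V^pi(s, h) by backward dynamic programming (Eq. stepinduction)
V = np.zeros((len(S), H + 1, k))
for h in range(1, H + 1):
    for s in S:
        V[s, h] = R[s] + sum(P[s, s2] * V[s2, h - 1] for s2 in S)

# expanding step: grow every prefix by one state, reweighting by P
prefixes = {(s0,): 1.0}
for _ in range(H - 1):
    expanded = {}
    for tau, w in prefixes.items():
        for s2 in S:
            if P[tau[-1], s2] > 0:
                key = tau + (s2,)
                expanded[key] = expanded.get(key, 0.0) + w * P[tau[-1], s2]
    prefixes = expanded     # a C4 call here would cap the size at k + 1

phi = lambda tau: sum(R[s] for s in tau)   # Phi(tau) for a full trajectory
V_rep = sum(w * phi(tau) for tau, w in prefixes.items())
print(V[s0, H], V_rep)
```

The two printed vectors coincide, which is exactly the invariant $V^\pi = \sum_\tau \betasup{t}(\tau) J(\tau)$ that the compressing step must preserve.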
\begin{restatable}{theorem}{trajcompress}\label{thm:traj-compression}
Algorithm~\ref{alg:trajcompression} outputs $\Qsup{H}$ and $\betasup{H}$ satisfying that $\abs{\Qsup{H}}\leq k+1$ and $\sum_{\tau\in \Qsup{H}} \betasup{H}(\tau) \Phi(\tau)=V^\pi$ in time $\cO(k^4H\abs{\mathcal{S}} + kH\abs{\mathcal{S}}^2)$.
\end{restatable}
The proof of Theorem~\ref{thm:traj-compression} follows immediately from the construction of the algorithm.
According to Eq~\eqref{eq:Q1}, we have
$V^\pi = \sum_{\tau\in \Qsup{1}}\betasup{1}(\tau) J(\tau)$.
Then we can show that the output of Algorithm~\ref{alg:trajcompression} is a valid weighted trajectory set by induction on the length of representative trajectories.
C4 guarantees that $\abs{\Qsup{t}} \leq k+1$ for all $t=1,\ldots,H$; thus, we only keep track of at most $k+1$ trajectories at each step, which yields the computational guarantee in the theorem.
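The compression subroutine C4 is used here only as a black box with the guarantee $\abs{\Qsup{t+1}} \leq k+1$. If one thinks of it as a Carath\'eodory-type compression of a weighted point set in $\mathbb{R}^k$ -- an assumption on our part, since C4 itself is not spelled out in this excerpt -- the compressing step can be sketched as follows (function name and interface are ours):

```python
import numpy as np

def caratheodory_compress(points, weights, tol=1e-12):
    # Reduce a convex combination of m points in R^k to one supported
    # on at most k+1 points, preserving the weighted mean exactly.
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float).copy()
    k = points.shape[1]
    support = np.flatnonzero(weights > tol)
    while support.size > k + 1:
        # More than k+1 points are always affinely dependent:
        # find c != 0 with sum_i c_i x_i = 0 and sum_i c_i = 0.
        A = np.vstack([points[support].T, np.ones(support.size)])
        c = np.linalg.svd(A)[2][-1]   # null-space vector of A
        if not (c > tol).any():
            c = -c
        pos = c > tol
        # Largest step that keeps all weights nonnegative;
        # at least one weight is driven to zero.
        t = np.min(weights[support][pos] / c[pos])
        weights[support] -= t * c
        weights[weights < tol] = 0.0
        support = np.flatnonzero(weights > tol)
    return support, weights
```

Each iteration shifts weight along an affine dependence until some weight vanishes, so $\sum_i p_i x_i$ is preserved exactly while the support shrinks to at most $k+1$ points.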
\begin{corollary}
Running the algorithm in Theorem~\ref{thm:est-without-tau} with the weighted trajectory set representation returned by Algorithm~\ref{alg:trajcompression} gives the same guarantee as Theorem~\ref{thm:est-without-tau} in time $\cO(k^2 H|\mathcal{S}|^2 |\mathcal{A}|+ (k^5H\abs{\mathcal{S}} + k^2H\abs{\mathcal{S}}^2)\log(\frac{k }{\epsilon}))$.
\end{corollary}
\section*{Acknowledgements}
This work was supported in part by the National Science Foundation under grants CCF-2212968 and ECCS-2216899 and by the Defense Advanced Research Projects Agency under cooperative agreement HR00112020003. The views expressed in this work do not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred. Approved for public release; distribution is unlimited.
This project was supported in part by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement number 882396), by the Israel Science Foundation (grant number 993/17), Tel Aviv University Center for AI and Data Science (TAD), the Eric and Wendy Schmidt Fund, and the Yandex Initiative for Machine Learning at Tel Aviv University.
\section{Proof of Lemma~\ref{lmm:V1}}\label{app:V1}
\lmmVinit*
\begin{proof}
Since $\pi_1$ is chosen as the policy with the highest personalized value among all the policies which are optimal for single objectives (lines~\ref{alg-line:initV1-1}-\ref{alg-line:initV1-5} of Algorithm~\ref{alg:policy-threshold-free}), it directly follows that $$\inner{w^*}{V^{\pi_1}}\geq \max_{i\in [k]}\inner{w^*}{V^{\pi^{{\bf e}_i}}}-k\epsilon\,.$$
As $\pi^{{\bf e}_i}$ is the optimal personalized policy when the user's preference vector is ${\bf e}_i$, we have that $$v^* =\inner{w^*}{V^{\pi^*}} = \sum_{i=1}^k w^*_i \inner{V^{\pi^*}}{{\bf e}_i}\leq \sum_{i=1}^k w^*_i \inner{V^{\pi^{{\bf e}_i}}}{{\bf e}_i} \leq \inner{w^*}{\sum_{i=1}^k V^{\pi^{{\bf e}_i}}}\,,$$
where the last inequality holds because the entries of $V^{\pi^{{\bf e}_i}}$ and $w^*$ are non-negative.
Therefore, there exists $i\in [k]$ such that $\inner{w^*}{V^{\pi^{{\bf e}_i}}} \geq \frac{1}{k} \inner{w^*}{V^{\pi^*}} =\frac{1}{k}v^*$.
Then we have
$$\inner{w^*}{V^{\pi_1}}\geq \max_{i\in[k]}\inner{w^*}{V^{\pi^{{\bf e}_i}}} -k\epsilon \geq \frac{1}{k} v^*-k\epsilon \geq \frac{1}{2k} v^*\,,$$ when $\epsilon \leq \frac{v^*}{2k^2}$.
By rearranging terms, we have $\frac{v^*}{\inner{w^*}{V^{\pi_1}}} \leq 2k$.
By setting $C_{\alpha} = 2k$, we have $\abs{\hat \alpha_i - \alpha_i}\inner{w^*}{V^{\pi_1}} \leq C_{\alpha}\epsilon = 2k\epsilon$ and thus, $\abs{\hat \alpha_i - \alpha_i}\leq \frac{4k^2\epsilon}{v^*}$.
\end{proof}
\section{Proof of Lemma~\ref{lmm:gap-w-w'}}\label{app:gap-w-w'}
\begin{proof}
For every policy $\pi$, the value vector can be represented as a linear combination of the row vectors of $A^\textrm{(full)}$, i.e., there exists $a = (a_1,\dots,a_d)\in \mathbb{R}^d$ s.t.
$$V^\pi = \sum_{i=1}^d{a_i A^\textrm{(full)}_i} = A^{\textrm{(full)}\top} a\,.$$
For any unit vector $\xi \in \mathrm{span}(b_1,\ldots,b_{d-d_\delta})$, we have $\inner{V^\pi}{\xi} \leq \sqrt{k-d_\delta}\, \delta$.
This is because at round ${d_\delta+1}$, we pick an orthonormal basis $\rho_1,\ldots,\rho_{k-{d_\delta}}$ in line~\ref{alg-line:orthonormal-basis} of Algorithm~\ref{alg:policy-threshold-free} and pick $u_{d_\delta+1}$ to be the basis direction in which some policy's value has the largest component, as described in line~\ref{alg-line:largest-component} of Algorithm~\ref{alg:policy-threshold-free}.
Hence, $\abs{\inner{V^\pi}{\rho_j}}\leq \delta$ for all $j\in [k-{d_\delta}]$.
Then, we have
$\inner{V^\pi}{\xi}= \sum_{j=1}^{k-{d_\delta}} \inner{\xi}{\rho_j}\inner{V^\pi}{\rho_j} \leq \sqrt{k-d_\delta}\delta$ by Cauchy-Schwarz inequality.
Thus, we have $$\sum_{i={d_\delta+1}}^d a_i^2 = \abs{\inner{V^\pi}{\sum_{i=d_\delta+1}^d a_{i} b_{i-d_\delta}}} \leq \sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \sqrt{k-d_\delta}\delta\,,$$
which implies that
\begin{equation}
\sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \leq \sqrt{k-d_\delta}\delta\,.\label{eq:anorm}
\end{equation}
Since $w'$ satisfies $Aw' ={\bf e}_1$, we have $$A^\textrm{(full)} w' = (1,0,\ldots,0, \inner{b_1}{w'},\ldots,\inner{b_{d-d_\delta}}{w'})\,.$$
For any $w \in \mathbb{R}^k$ satisfying $A^\textrm{(full)} w = {\bf e}_1$, consider $\tilde w = w +\sum_{i= 1}^{d-d_\delta}\inner{b_{i}}{w'} b_{i}$.
Then we have $A^\textrm{(full)}\tilde w = A^\textrm{(full)}w'$.
Thus, we have
\begin{align*}
\tilde w\cdot V^\pi = \tilde w^\top A^{\textrm{(full)}\top}a = w'^\top A^{\textrm{(full)}\top}a = w'\cdot V^\pi\,.
\end{align*}
Hence,
\begin{align*}
&\abs{w\cdot V^\pi- w'\cdot V^\pi} = \abs{w\cdot V^\pi- \tilde w\cdot V^\pi} = \abs{\sum_{i=1}^d a_i (w - \tilde w)\cdot A^\textrm{(full)}_i }\\
=& \abs{\sum_{i=d_\delta +1}^{d} a_i \inner{b_{i-d_\delta}}{w'}}
\stackrel{(a)}{\leq} \sqrt{\sum_{i={d_\delta+1}}^d a_i^2} \norm{w'}_2 \stackrel{(b)}{\leq} \sqrt{k-d_\delta} \delta \norm{w'}_2\,,
\end{align*}
where inequality (a) follows from the Cauchy-Schwarz inequality and inequality (b) follows from Eq~\eqref{eq:anorm}.
\end{proof}
\section{Introduction}
\label{Sec:Intro}
\citet*{Yuekcue/Yuekcue/2018} derived explicit expressions for the
Fourier transform of a bound-state hydrogen eigenfunction. Their article
creates the impression that their results \citep*[Eqs.\ (15) and
(16)]{Yuekcue/Yuekcue/2018} are new. This is wrong. Moreover, their
explicit expressions are less compact and therefore also less useful than
those already described in the literature.
In \citeyear{Podolsky/Pauling/1929}, \citet[Eq.\
(28)]{Podolsky/Pauling/1929} were the first ones to derive an explicit
expression via a direct Fourier transformation of a generating function
of the generalized Laguerre polynomials. In \citeyear{Hylleraas/1932},
\citet[Eqs.\ (11c) and (12)]{Hylleraas/1932} derived this Fourier
transformation algebraically by solving a differential equation for the
momentum space eigenfunction. In \citeyear{Fock/1935}, \citet{Fock/1935}
re-formulated the momentum space Schr\"{o}dinger equation for the
hydrogen atom as a 4-dimensional integral equation, whose solutions --
the 4-dimensional hyperspherical harmonics -- are nothing but the Fourier
transforms of bound-state hydrogen eigenfunctions in disguise (see for
example \citep[Section VI]{Weniger/1985} or the books by
\citet{Avery/1989,Avery/2000}, \citet{Avery/Avery/2006}, and
\citet*{Avery/Rettrup/Avery/2012} and references therein).
The Fourier transform of a bound state hydrogen eigenfunction has been
treated in numerous books and articles. Examples are the books by
\citet[Eq.\ (8.8)]{Bethe/Salpeter/1977}, \citet[Eqs.\ (5.5) and
(5.6)]{Englefield/1972}, or \citet*[Eq.\
(7.4.69)]{Biedenharn/Louck/1981a} or the relatively recent review by
\citet[Eq.\ (9.55)]{Hill/2006}. This Fourier transform was even discussed
in a Wikipedia article \citep{WikipediaHydAt/2018}, which cites the book
by \citet[Eq.\ (A5.34)]{Bransden/Joachain/1983} as its source. There are
also articles by \citet{Klein/1966} and by \citet{Hey/1993a,Hey/1993b},
which discuss properties of the momentum space hydrogen functions. In
\cite[Section IV]{Weniger/1985}, I presented a different and remarkably
simple derivation of the Fourier transform of a bound state hydrogen
eigenfunction and of related functions which will play a major role in
\cref{Sec:ExpRBF}.
In \cref{Sec:GLag_BoundStateHydrEigFun}, basic properties of the
generalized Laguerre polynomials and of bound state hydrogen
eigenfunctions are reviewed. As discussed in
\cref{Sec:Incom_BoundStateHydrEigFun}, bound state hydrogen
eigenfunctions are in contrast to several other similar function sets not
complete in the Hilbert space $L^{2} (\mathbb{R}^3)$ of square integrable
functions. This makes bound state hydrogen eigenfunctions useless in
expansions. Apparently, \citeauthor{Yuekcue/Yuekcue/2018} are unaware of
this well known and very consequential fact.
The explicit expressions derived by \citet*[Eqs.\ (15) and
(16)]{Yuekcue/Yuekcue/2018} are less useful than those mentioned above
(compare the discussion in \cref{Sec:WorkYuekcueYuekcue}). This is a
direct consequence of their derivation: \citeauthor{Yuekcue/Yuekcue/2018}
expressed generalized Laguerre polynomials in terms of
powers. Superficially, this looks quite natural, but actually it is a bad
idea. \Cref{Sec:ExpRBF} shows how their approach can be improved
substantially by expanding generalized Laguerre polynomials in terms of
better suited alternative function sets, the so-called reduced Bessel
functions.
\typeout{==> Section: Generalized Laguerre Polynomials and Bound-State
Hydrogen Eigenfunctions}
\section{Generalized Laguerre Polynomials and Bound-State Hydrogen
Eigenfunctions}
\label{Sec:GLag_BoundStateHydrEigFun}
The generalized Laguerre polynomials $L_{n}^{(\alpha)} (z)$ with
$\Re (\alpha) > - 1$ and $n \in \mathbb{N}_{0}$ are the classical
orthogonal polynomials associated with the integration interval
$[0, \infty)$ and the weight function $w (z) = z^{\alpha} \exp (-z)$.
They are of considerable importance in mathematics and also in
theoretical physics. There is a detailed literature which is far too
extensive to be cited here. Those interested in the historical
development with a special emphasis on quantum physics should consult an
article by \citet*{Mawhin/Ronveaux/2010}. Generalized Laguerre
polynomials also played a major role in my own research
\citep*{Weniger/Steinborn/1983a,Weniger/Steinborn/1984,Weniger/1985,%
Weniger/2008,Weniger/2012,Borghi/Weniger/2015}.
It is advisable to use the modern mathematical definition of the
generalized Laguerre polynomials $L_{n}^{(\alpha)} (z)$ with
$n \in \mathbb{N}_{0}$ and $\alpha, z \in \mathbb{C}$, which are defined
either via their Rodrigues' relationship \citep*[Eq.\ (18.5.5) and Table
18.5.1]{Olver/Lozier/Boisvert/Clark/2010} or as a terminating confluent
hypergeometric series ${}_{1} F_{1}$ \citep*[Eq.\
(18.5.12)]{Olver/Lozier/Boisvert/Clark/2010}:
\begin{align}
\label{GLag_Rodrigues}
L_{n}^{(\alpha)} (z) & \; = \; z^{-\alpha} \,
\frac{\mathrm{e}^{z}}{n!} \, \frac{\mathrm{d}^n}{\mathrm{d} z^n} \,
\bigl[ \mathrm{e}^{-z} z^{n+\alpha} \bigr]
\\
\label{GLag_1F1}
& \; = \; \frac{(\alpha+1)_n}{n!} \, {}_1 F_1 (-n; \alpha+1; z) \, .
\end{align}
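As a quick symbolic sanity check (our own script, not part of the original text), \cref{GLag_Rodrigues,GLag_1F1} define the same polynomials; SymPy's \texttt{assoc\_laguerre} uses the same modern convention:

```python
import sympy as sp

z = sp.symbols('z')

def laguerre_rodrigues(n, alpha):
    # Rodrigues' formula, Eq. (GLag_Rodrigues)
    return sp.expand(z**(-alpha) * sp.exp(z) / sp.factorial(n)
                     * sp.diff(sp.exp(-z) * z**(n + alpha), z, n))

def laguerre_1f1(n, alpha):
    # Terminating 1F1 series, Eq. (GLag_1F1); sp.rf is the Pochhammer symbol
    s = sum(sp.rf(-n, k) / sp.rf(alpha + 1, k) * z**k / sp.factorial(k)
            for k in range(n + 1))
    return sp.expand(sp.rf(alpha + 1, n) / sp.factorial(n) * s)
```

The check also goes through for non-integral $\alpha$ such as $\alpha = 1/2$, which the associated Laguerre functions of \cref{AssLagFun_BS} cannot represent.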
Further details can be found in books on special functions.
Dating back to the early days of quantum mechanics, an antiquated
notation is still frequently used, mainly in atomic theory. For example,
\citet*[Eq.\ (3.5)]{Bethe/Salpeter/1977} introduced so-called
\emph{associated Laguerre functions}
$\bigl[ L_{n}^{m} (z) \bigr]_{\text{BS}}$ with $n, m \in \mathbb{N}_0$
via the Rodrigues-type relationships
\begin{subequations}
\label{AssLagFun_BS}
\begin{align}
\label{AssLagFun_BS_1}
\bigl[ L_{n}^{m} (z) \bigr]_{\text{BS}} & \; = \;
\frac{\mathrm{d}^{m}}{\mathrm{d} z^{m}} \,
\bigl[L_{n} (z) \bigr]_{\text{BS}} \, ,
\\
\label{AssLagFun_BS_2}
\bigl[ L_{n} (z) \bigr]_{\text{BS}} & \; = \; \mathrm{e}^{z} \,
\frac{\mathrm{d}^{n}}{\mathrm{d} z^{n}} \, \bigl[ \mathrm{e}^{-z}
z^{n} \bigr] \, .
\end{align}
\end{subequations}
This convention is also used in the books by \citet*[Eqs.\ (6) and (9) on
p.\ 115]{Condon/Shortley/1970} and \citet*[Eq.\ (2) on p.\
189]{Condon/Odabasi/1980}.
Generalized Laguerre polynomials with integral superscript
$\alpha=m \in \mathbb{N}_{0}$ and the associated Laguerre functions
\eqref{AssLagFun_BS} are connected via
\begin{equation}
\label{GLagPol_2_GLagPol_BS}
L_{n}^{(m)} (z) \; = \; \frac{(-1)^m}{(n+m)!} \,
\bigl[ L_{n+m}^{m} (z) \bigr]_{\text{BS}} \, ,
\quad m, n \in \mathbb{N}_{0} \, .
\end{equation}
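The conversion \eqref{GLagPol_2_GLagPol_BS} can likewise be checked symbolically (a small script of our own; helper names are ours):

```python
import sympy as sp

z = sp.symbols('z')

def laguerre_bs(n, m):
    # Bethe/Salpeter associated Laguerre function, Eqs. (AssLagFun_BS):
    # [L_n]_BS = e^z d^n/dz^n (e^{-z} z^n), then m-fold differentiation.
    L_n = sp.expand(sp.exp(z) * sp.diff(sp.exp(-z) * z**n, z, n))
    return sp.expand(sp.diff(L_n, z, m))

def modern_from_bs(n, m):
    # Right-hand side of Eq. (GLagPol_2_GLagPol_BS)
    return sp.expand((-1)**m / sp.factorial(n + m) * laguerre_bs(n + m, m))
```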
The notation for associated Laguerre functions is less intuitive than the
notation for the generalized Laguerre polynomials, whose subscript $n$
corresponds to the polynomial degree and whose superscript $\alpha$
characterizes the weight function $w (z) = z^{\alpha} \exp (-z)$. The
worst drawback of the functions \eqref{AssLagFun_BS} is that they cannot
express generalized Laguerre polynomials $L_{n}^{(\alpha)}$ with
\emph{non-integral} superscripts $\alpha$ which also occur in quantum
physics. The eigenfunctions $\Omega_{n, \ell}^{m} (\beta, \bm{r})$ of the
Hamiltonian $\beta^{-2} \nabla^2 - \beta^2 r^2$ of the three-dimensional
isotropic harmonic oscillator contain generalized Laguerre polynomials in
$r^{2}$ with half-integral superscripts (see for example \citep[Eq.\
(5.4)]{Weniger/1985} and references therein). Similarly, the
eigenfunctions of the Dirac equation for the hydrogen atom contain
generalized Laguerre polynomials with in general non-integral
superscripts \citep[Eqs.\ (9.84) and (9.85)]{Hill/2006}.
If the modern mathematical notation is used, the bound-state
eigenfunctions of a hydrogenlike ion with nuclear charge $Z$ in spherical
polar coordinates are essentially the product of an exponential and a
generalized Laguerre polynomial, both depending on $r$, and a regular
solid harmonic
$\mathcal{Y}_{\ell}^{m} (\bm{r}) = r^{\ell} Y_{\ell}^{m} (\theta, \phi)$
(see for example \citep*[Eqs.\ (7.4.41) -
(7.4.43)]{Biedenharn/Louck/1981a} or \citep[Eqs.\ (9.2) and
(9.10)]{Hill/2006}):
\begin{align}
\label{Def_HydEigFun}
& W_{n, \ell}^{m} (Z, \bm{r}) \; = \; \left( \frac{2Z}{n} \right)^{3/2}
\, \left[ \frac{(n-\ell-1)!}{2n(n+\ell)!} \right]^{1/2}
\notag
\\
& \qquad \times \, \mathrm{e}^{-Zr/n} \, L_{n-\ell-1}^{(2\ell+1)}
(2Zr/n) \, \mathcal{Y}_{\ell}^{m} (2Z \bm{r}/n) \, ,
\notag
\\
& \qquad \quad \; n \in \mathbb{N} \, , \;
\ell \in \mathbb{N}_0 \, , \; \ell \le n-1 \, , \; -\ell \le m \le \ell \, .
\end{align}
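Since the angular integral of $\vert Y_{\ell}^{m} \vert^{2}$ is unity, the normalization in \cref{Def_HydEigFun} reduces to a radial integral that should equal $1$. A numerical spot check with SciPy (our own script):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, factorial

def radial_norm(n, l, Z=1.0):
    # Integral of |R_{nl}(r)|^2 r^2 over [0, inf) for the radial factor
    # of W_{nl}^m in Eq. (Def_HydEigFun); the result should be exactly 1.
    c = (2 * Z / n)**3 * factorial(n - l - 1) / (2 * n * factorial(n + l))
    def integrand(r):
        R = (np.exp(-Z * r / n)
             * eval_genlaguerre(n - l - 1, 2 * l + 1, 2 * Z * r / n)
             * (2 * Z * r / n)**l)
        return c * R**2 * r**2
    val, _ = quad(integrand, 0, np.inf)
    return val
```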
\citet{Yuekcue/Yuekcue/2018} define the radial part of the bound-state
eigenfunctions \eqref{Def_HydEigFun} via their Eq.\ (3), which is
inconsistent with their definition of the generalized Laguerre
polynomials via their Eq.\ (11). It can be shown that their Eq.\ (11) is
equivalent to \cref{GLag_1F1} which implies that
\citeauthor{Yuekcue/Yuekcue/2018} also use the modern mathematical
notation. In addition, their Ref.\ [26] for their Eq.\ (11) is
incorrect. The so-called \emph{Bateman Manuscript Project}
\citep{Erdelyi/Magnus/Oberhettinger/Tricomi/HTF1/1953a,%
Erdelyi/Magnus/Oberhettinger/Tricomi/HTF2/1953b,%
Erdelyi/Magnus/Oberhettinger/Tricomi/HTF3/1953c,%
Erdelyi/Magnus/Oberhettinger/Tricomi/TIT1/1954a,%
Erdelyi/Magnus/Oberhettinger/Tricomi/TIT2/1954b} was named to honor
Harry Bateman who had died in 1946, i.e., long before these books had
been completed. Thus, the correct reference for Eq.\ (11) of
\citet{Yuekcue/Yuekcue/2018} would be \citep[Eq.\ (7) on p.\
188]{Erdelyi/Magnus/Oberhettinger/Tricomi/HTF2/1953b}.
\typeout{==> Section: Incompleteness of the Bound-State Hydrogen
Eigenfunctions}
\section{Incompleteness of the Bound-State Hydrogen Eigenfunctions}
\label{Sec:Incom_BoundStateHydrEigFun}
Expansions of a given function in terms of suitable function sets are
among the most useful techniques of mathematical physics. This approach
requires that the function set being used is complete and preferably also
orthogonal in the corresponding Hilbert space. As for example discussed
in \citep{Weniger/2012} or in \citep{Klahn/1981}, non-orthogonal
expansions can easily have pathological properties.
Bound-state hydrogenic eigenfunctions \eqref{Def_HydEigFun} are
orthonormal with respect to an integration over the whole $\mathbb{R}^3$,
\begin{equation}
\int \, \bigl[ W_{n, \ell}^{m} (Z, \bm{r}) \bigr]^{*} \,
W_{n', \ell'}^{m'} (Z, \bm{r}) \, \mathrm{d}^3 \bm{r}
\; = \; \delta_{n n'} \, \delta_{\ell \ell'} \, \delta_{m m'} \, ,
\end{equation}
but they are not complete in the Hilbert space
\begin{equation}
\label{HilbertL^2}
L^{2} (\mathbb{R}^3) \; = \; \Bigl\{ f \colon \mathbb{R}^3 \to
\mathbb{C} \Bigm\vert \, \int \, \vert f (\bm{r}) \vert^2 \,
\mathrm{d}^3 \bm{r} < \infty \Bigr\}
\end{equation}
of square integrable functions \emph{without} the inclusion of the
technically very difficult continuum eigenfunctions, described for
instance in \citep[pp.\ 21 - 25]{Bethe/Salpeter/1977}, in \citep*[Chapter
33 Coulomb Functions]{Olver/Lozier/Boisvert/Clark/2010} or in the recent
article \citep{Gaspard/2018}. \citeauthor{Yuekcue/Yuekcue/2018} are
apparently not aware of this incompleteness.
In the literature, this incompleteness, which was first described in
\citeyear{Hylleraas/1928} by \citet[p.\ 469]{Hylleraas/1928}, is
sometimes overlooked -- often with catastrophic consequences. For
example, \citeauthor{Yuekcue/Yuekcue/2018} cited as their Ref.\ [4] an
article by \citet{Yamaguchi/1983} in order to demonstrate the usefulness
of bound-state hydrogen eigenfunctions in expansions. However,
\citeauthor{Yamaguchi/1983}'s article had been severely criticized in
\citep{Weniger/Steinborn/1984} for simply neglecting the troublesome
continuum eigenfunctions. Already in 1955, \citet{Shull/Loewdin/1955} had
emphasized the importance of the continuum eigenfunctions and tried to
estimate the magnitude of the error due to their omission.
At first sight, this incompleteness may seem surprising since the
completeness of the generalized Laguerre polynomials
$L_{n}^{(\alpha)} (z)$ in the weighted Hilbert space
\begin{align}
\label{HilbertL^2_Lag}
& L^{2}_{\mathrm{e}^{-z} z^{\alpha}} \bigl([0, \infty) \bigr)
\notag
\\
& \quad \; = \;
\Bigl\{ f \colon \mathbb{C} \to \mathbb{C} \Bigm\vert \,
\int_{0}^{\infty} \,\mathrm{e}^{-z} \, z^{\alpha} \, \vert f (z)
\vert^2 \, \mathrm{d} z < \infty \Bigr\}
\end{align}
is a classic result of mathematical analysis (see for example the books
by \citet[p.\ 33]{Higgins/1977}, \citet[pp.\ 349 - 351]{Sansone/1977},
\citet[pp.\ 108 - 110]{Szegoe/1975}, or \citet[pp.\ 235 -
238]{Tricomi/1970}). Thus, every function
$f \in L^{2}_{\mathrm{e}^{-z} z^{\alpha}} \bigl([0, \infty) \bigr)$ can
be expressed by a Laguerre series
\begin{subequations}
\label{f_Exp_GLag}
\begin{align}
\label{f_Exp_GLag_a}
f (z) \; = \; & \sum_{n=0}^{\infty} \,
\lambda_{n}^{(\alpha)} \, L_{n}^{(\alpha)} (z) \, ,
\\
\label{f_Exp_GLag_b}
\lambda_{n}^{(\alpha)} \; = \; & \frac{n!}{\Gamma (\alpha+n+1)} \,
\int_{0}^{\infty} \, z^{\alpha} \, \mathrm{e}^{-z} \,
L_{n}^{(\alpha)} (z) \, f (z) \, \mathrm{d} z \, ,
\end{align}
\end{subequations}
which converges in the mean with respect to the norm of the Hilbert space
$L^{2}_{\mathrm{e}^{-z} z^{\alpha}} \bigl([0, \infty) \bigr)$. For a
condensed discussion of Laguerre expansions, see \citep[Section
2]{Weniger/2008}.
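As a toy illustration of \cref{f_Exp_GLag} (our own example, not part of the original text): for $f(z) = \mathrm{e}^{-z}$ and $\alpha = 0$, the coefficients \eqref{f_Exp_GLag_b} can be computed by quadrature and the partial sums of \eqref{f_Exp_GLag_a} converge rapidly:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def laguerre_coeff(n, f):
    # Eq. (f_Exp_GLag_b) for alpha = 0 (the prefactor n!/Gamma(n+1) is 1).
    val, _ = quad(lambda z: np.exp(-z) * eval_laguerre(n, z) * f(z),
                  0, np.inf)
    return val

def laguerre_partial_sum(z, coeffs):
    # Truncation of the Laguerre series, Eq. (f_Exp_GLag_a).
    return sum(c * eval_laguerre(n, z) for n, c in enumerate(coeffs))
```

For this particular $f$, the coefficients come out as $\lambda_{n} = 2^{-n-1}$, so the truncation error decays geometrically.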
How can the incompleteness of the bound-state hydrogen eigenfunctions
\eqref{Def_HydEigFun} be explained? The culprit is their $n$-dependent
scaling parameter $2Z/n$. \citet[Eq.\ (6.17) on p.\ 200]{Fock/1978} showed
that the confluent hypergeometric function
\begin{align}
\label{1F1_HydEigFun}
& {}_{1} F_{1} \bigl( -n+\ell+1; 2\ell + 2; 2 Z r/n \bigr)
\notag
\\
& \quad \; = \;
\sum_{\nu=0}^{n-\ell-1} \,
\frac{(-n+\ell+1)_{\nu}}{(2\ell+2)_{\nu}} \,
\frac{[2Z r/n]^{\nu}}{\nu!}
\end{align}
occurring in \cref{Def_HydEigFun} can in the limit $n \to \infty$ be
represented by a Bessel function
$J_{2\ell+1} \left( \sqrt{8 Z r} \right)$ of the first kind, which is an
oscillatory function that decays too slowly to be square integrable
(compare also \citep*[Eq.\
(18.11.6)]{Olver/Lozier/Boisvert/Clark/2010}). In the limit
$n \to \infty$, the exponential $\exp (-Z r/n)$ in \cref{Def_HydEigFun}
loses its exponential decay as $r \to \infty$. Consequently, the bound
state hydrogen eigenfunctions \eqref{Def_HydEigFun} become oscillatory as
$n \to \infty$, which means that they are no longer square
integrable. Instead, they belong to the continuous spectrum. Thus, the
so-called bound-state eigenfunctions are no longer bound-state functions
if the principal quantum number $n$ becomes very large. This implies that
the bound-state eigenfunctions cannot form a basis for the Hilbert space
$L^{2} (\mathbb{R}^3)$ of square integrable functions (compare
\citep[text following Eq.\ (6.19) on p.\ 201]{Fock/1978}).
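The limit just described is \citep*[Eq.\ (18.11.6)]{Olver/Lozier/Boisvert/Clark/2010}, $\lim_{n \to \infty} n^{-\alpha} L_{n}^{(\alpha)} (z/n) = z^{-\alpha/2} J_{\alpha} (2 \sqrt{z})$; with $\alpha = 2\ell+1$ and $z = 2Zr$ it produces exactly the Bessel function $J_{2\ell+1} \left( \sqrt{8 Z r} \right)$ mentioned above. The convergence is easily observed numerically (our own script; mpmath is used so that the high-degree Laguerre polynomial is evaluated without cancellation):

```python
import mpmath as mp

def laguerre_scaled(n, alpha, z):
    # n^{-alpha} L_n^{(alpha)}(z/n), evaluated in arbitrary precision.
    return mp.laguerre(n, alpha, mp.mpf(z) / n) / mp.mpf(n)**alpha

def bessel_limit(alpha, z):
    # z^{-alpha/2} J_alpha(2 sqrt(z)); for alpha = 2l+1, z = 2Zr this is
    # J_{2l+1}(sqrt(8Zr)) / (2Zr)^{l+1/2}.
    z = mp.mpf(z)
    return z**(-mp.mpf(alpha) / 2) * mp.besselj(alpha, 2 * mp.sqrt(z))
```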
Because of the incompleteness of the bound-state hydrogen eigenfunctions,
it is now common to use in expansions alternative function sets also
based on the generalized Laguerre polynomials that possess more
convenient completeness properties. Closely related to the bound-state
hydrogenic eigenfunctions are the so-called Coulomb Sturmians or
Sturmians which were already used in \citeyear{Hylleraas/1928} by
\citet[Eq.\ (25) on p.\ 478]{Hylleraas/1928}:
\begin{align}
\label{Def_SturmFun}
& \Psi_{n, \ell}^{m} (\beta, \bm{r}) \; = \; (2 \beta)^{3/2} \,
\left[ \frac{(n-\ell-1)!}{2n(n+\ell)!} \right]^{1/2}
\notag
\\
& \qquad \, \times \,
\mathrm{e}^{-\beta r} \, L_{n-\ell-1}^{(2\ell+1)} (2\beta r) \,
\mathcal{Y}_{\ell}^{m} (2 \beta \bm{r}) \, .
\end{align}
Here, the notation of \citep[Eq.\ (4.6)]{Weniger/1985} is used. We obtain
bound-state hydrogen eigenfunctions \eqref{Def_HydEigFun} with a correct
normalization factor if we make in \cref{Def_SturmFun} the substitution
$\beta \mapsto Z/n$ (compare the discussion following \citep[Eq.\
(4.12)]{Weniger/1985}):
\begin{equation}
\label{SturmFun<->BSHEF}
\Psi_{n, \ell}^{m} (Z/n, \bm{r}) \; = \;
W_{n, \ell}^{m} (Z, \bm{r}) \, .
\end{equation}
This is a non-trivial result. Sturmians are complete and orthonormal in
the Sobolev space $W_{2}^{(1)} (\mathbb{R}^3)$ (for the definition
of Sobolev spaces plus further references, see \citep[Section
II]{Weniger/1985}), whereas bound state hydrogen functions are
orthonormal but incomplete in the Hilbert space $L^{2} (\mathbb{R}^3)$.
Sturmians occur in the context of \citeauthor{Fock/1935}'s treatment of
the hydrogen atom \citep{Fock/1935}, albeit in a somewhat disguised form
(compare \citep[Section VI]{Weniger/1985}). There is a classic review by
\citet{Rotenberg/1970}. A fairly detailed discussion of their properties
was given by \citet{Novosadov/1983}. Sturmians also play a major role in
books by \citet{Avery/1989,Avery/2000}, \citet{Avery/Avery/2006}, and
\citet*{Avery/Rettrup/Avery/2012}. We used Sturmians for the construction
of an addition theorem of the Yukawa potential
\citep*{Homeier/Weniger/Steinborn/1992a} with the help of weakly
convergent orthogonal and biorthogonal expansions for the plane wave
introduced in \citep[Section III]{Weniger/1985}.
Lambda functions were introduced as early as \citeyear{Hylleraas/1929} by
\citet[Footnote~${}^{*}$ on p.\ 349]{Hylleraas/1929}, and later by
\citet{Shull/Loewdin/1955} and by \citet[Eq.\ (46)]{Loewdin/Shull/1956}:
\begin{align}
\label{Def_LambdaFun}
& \Lambda_{n, \ell}^{m} (\beta, \bm{r}) \; = \; (2 \beta)^{3/2}
\left[ \frac{(n-\ell-1)!} {(n+\ell+1)!} \right]^{1/2}
\notag
\\
& \qquad \times \,
\mathrm{e}^{-\beta r} \, L_{n-\ell-1}^{(2\ell+2)} (2 \beta r) \,
\mathcal{Y}_{\ell}^{m} (2 \beta \bm{r}) \, .
\end{align}
Here, the notation of \citep[Eq.\ (4.4)]{Weniger/1985} is used.
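The decisive difference to the bound-state eigenfunctions is the fixed scaling parameter: for fixed $\beta$, Lambda functions with common $(\ell, m)$ are orthonormal in $L^{2} (\mathbb{R}^3)$ by the orthogonality of the $L_{n}^{(2\ell+2)}$. A numerical check of the radial overlaps (our own script):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, factorial

def lambda_radial_overlap(n1, n2, l, beta=0.8):
    # Radial overlap of two Lambda functions (Eq. Def_LambdaFun) sharing
    # (l, m); the angular integral is 1 by orthonormality of Y_l^m.
    def R(n, r):
        c = (2 * beta)**1.5 * np.sqrt(factorial(n - l - 1)
                                      / factorial(n + l + 1))
        return (c * np.exp(-beta * r)
                * eval_genlaguerre(n - l - 1, 2 * l + 2, 2 * beta * r)
                * (2 * beta * r)**l)
    val, _ = quad(lambda r: R(n1, r) * R(n2, r) * r**2, 0, np.inf)
    return val
```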
The use of Lambda functions in electronic structure theory was suggested
by \citet{Kutzelnigg/1963} and \citet{Smeyers/1966} in
\citeyear{Kutzelnigg/1963} and \citeyear{Smeyers/1966},
respectively. \citet{Filter/Steinborn/1980} used them for the derivation
of one-range addition theorems of exponentially decaying functions, and I
used both Sturmians and Lambda functions for the construction of weakly
convergent expansions of a plane wave \citep{Weniger/1985}.
Both Sturmians and Lambda functions defined by
\cref{Def_SturmFun,Def_LambdaFun} have a fixed scaling parameter
$\beta > 0$ that does not depend on the principal quantum number
$n$. Consequently, these functions are orthogonal and complete in
suitable Hilbert and Sobolev spaces. A detailed discussion of the
mathematical properties of the functions
$\Psi_{n, \ell}^{m} (\beta, \bm{r})$ and
$\Lambda_{n, \ell}^{m} (\beta, \bm{r})$ was given in \citep[Section
IV]{Weniger/1985} or in \citep[Section 2]{Weniger/2012}.
\typeout{==> Section: The Work of Podolsky and Pauling}
\section{The Work of Podolsky and Pauling}
\label{Sec:PodolskyPauling}
The Fourier transform of an irreducible spherical tensor of integral rank
yields a Hankel-type radial integral multiplied by a spherical harmonic
if the so-called Rayleigh expansion of a plane wave (compare for instance
\citep*[p.\ 442]{Biedenharn/Louck/1981a}) is used:
\begin{align}
\label{Rayleigh_Expan}
& \mathrm{e}^{\pm \mathrm{i} \bm{x} \cdot \bm{y}} \; = \;
4\pi \, \sum_{\ell=0}^{\infty} \, (\pm \mathrm{i})^{\ell} \,
j_{\ell} (xy)
\notag
\\
& \quad \times \, \sum_{m=-\ell}^{\ell} \,
\bigl[ Y_{\ell}^{m} (\bm{x}/x) \bigr]^{*} \, Y_{\ell}^{m} (\bm{y}/y)
\, , \quad \bm{x}, \bm{y} \in \mathbb{R}^{3} \, .
\end{align}
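A numerical check of \cref{Rayleigh_Expan} (our own script; the sum over $m$ is collapsed into a Legendre polynomial via the spherical-harmonic addition theorem, which sidesteps phase conventions):

```python
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_series(x_vec, y_vec, lmax=60):
    # Rayleigh expansion with the m-sum done by the addition theorem:
    # exp(i x.y) = sum_l (2l+1) i^l j_l(|x||y|) P_l(cos gamma).
    x, y = np.linalg.norm(x_vec), np.linalg.norm(y_vec)
    cosg = np.dot(x_vec, y_vec) / (x * y)
    ls = np.arange(lmax + 1)
    return np.sum((2 * ls + 1) * 1j**ls * spherical_jn(ls, x * y)
                  * eval_legendre(ls, cosg))
```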
With the help of the orthonormality of the spherical harmonics and the
definition of the spherical Bessel functions $j_{\ell} (xy)$ (see for
example \citep*[Eq.\ (10.47.3)]{Olver/Lozier/Boisvert/Clark/2010}), we
obtain the following expression for the Fourier transformation of a
Sturmian function \eqref{Def_SturmFun} without normalization factor and
with fixed $\beta > 0$:
\begin{align}
\label{FT_Sturm_1}
& (2 \pi)^{-3/2} \,
\int \, \mathrm{e}^{\mathrm{i} \bm{p} \cdot \bm{r}} \,
\mathrm{e}^{-\beta r} \, r^{\ell} \, L_{n}^{(2\ell+1)} (2 \beta r)
\, Y_{\ell}^{m} (\bm{r}/r) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \; \; = \; (-\mathrm{i})^{\ell} \, p^{-1/2} \,
Y_{\ell}^{m} (\bm{p}/p)
\notag
\\
& \quad \times \, \int_{0}^{\infty} \, r^{\ell+3/2} \,
\mathrm{e}^{-\beta r} \, J_{\ell+1/2} (p r) \,
L_{n}^{(2\ell+1)} (2 \beta r) \, \mathrm{d} r \, .
\end{align}
For a closed form expression of the Hankel-type radial integral in
\cref{FT_Sturm_1}, we need an explicit expression for the integral
\begin{equation}
\label{HankelInt_GLag}
I_{n}^{(\alpha, \mu, \nu)} (a, b) \; = \; \int_{0}^{\infty} \, y^{\mu}
\, \mathrm{e}^{-ay} \, J_{\nu} (by) \, L_{n}^{(\alpha)} (2ay) \,
\mathrm{d} y \, .
\end{equation}
In \citeyear{Podolsky/Pauling/1929}, when \citet*{Podolsky/Pauling/1929}
tried to derive an expression for the Fourier transform of a bound-state
hydrogen eigenfunction, no explicit expression for this integral was
known. Even today, I could not find the required expression in the usual
books on special function theory.
\citet[Eq.\ (6)]{Podolsky/Pauling/1929} found a very elegant solution to
this problem. Their starting point was the generating function
\citep*[p.\ 242]{Magnus/Oberhettinger/Soni/1966}
\begin{equation}
\label{GLag_GenFun}
\frac {\exp \left( \frac{xt}{t-1} \right)}{(1-t)^{\alpha+1}} \; = \;
\sum_{n=0}^{\infty} \, L_{n}^{(\alpha)} (x) \, t^{n} \, ,
\quad \vert t \vert < 1 \, .
\end{equation}
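A numerical sanity check of this generating function, including a non-integral superscript (our own script):

```python
import numpy as np
from scipy.special import eval_genlaguerre

def genfun_closed(alpha, x, t):
    # Left-hand side of Eq. (GLag_GenFun).
    return np.exp(x * t / (t - 1)) / (1 - t)**(alpha + 1)

def genfun_series(alpha, x, t, nmax=80):
    # Right-hand side, truncated; converges for |t| < 1.
    n = np.arange(nmax + 1)
    return np.sum(eval_genlaguerre(n, alpha, x) * t**n)
```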
Inserting this generating function of the generalized Laguerre
polynomials into the radial integral in \cref{FT_Sturm_1} yields:
\begin{align}
\label{FT_Sturm_2}
& \sum_{n=0}^{\infty} \, t^{n} \,
\int_{0}^{\infty} \, r^{\ell+3/2} \,
\mathrm{e}^{-\beta r} \, J_{\ell+1/2} (p r) \,
L_{n}^{(2\ell+1)} (2 \beta r) \, \mathrm{d} r
\notag
\\
& \quad \; = \; (1-t)^{-2\ell-2}
\notag
\\
& \qquad \times \int_{0}^{\infty} \,
\mathrm{e}^{-\beta r \frac{1+t}{1-t}} \,
\, r^{\ell+3/2} J_{\ell+1/2} (p r) \, \mathrm{d} r \, .
\end{align}
The radial integral on the right-hand side can be expressed in closed
form. We use \citep[Eq.\ (2) on p.\ 385]{Watson/1922}
\begin{align}
\label{Int_ExpBesJ}
& \int_{0}^{\infty} \, \mathrm{e}^{-ay} \, J_{\nu} (by) \, y^{\mu-1}
\, \mathrm{d} y \; = \; \frac{(b/2)^{\nu} \Gamma (\mu+\nu)}
{a^{\mu+\nu} \Gamma (\nu+1)}
\notag
\\
& \qquad \times \, {}_{2} F_{1} \left( \frac{\mu+\nu}{2},
\frac{\mu+\nu+1}{2}; \nu+1; - \frac{b^2}{a^2} \right) \, ,
\notag
\\
& \qquad \qquad \Re (a \pm \mathrm{i} b) > 0 \, ,
\end{align}
to obtain
\begin{align}
\label{FT_Sturm_4}
& \int_{0}^{\infty} \, \mathrm{e}^{-\beta r \frac{1+t}{1-t}} \,
\, r^{\ell+3/2} J_{\ell+1/2} (p r) \, \mathrm{d} r
\notag
\\
& \; \; = \; \frac {(2\ell+2)!} {\Gamma (\ell+3/2)} \, \frac
{(p/2)^{\ell+1/2}}
{\bigl[ \beta (1+t)/(1-t) \bigr]^{2\ell+3}}
\notag
\\[1\jot]
& \quad \
\times {}_{2} F_{1} \left( \ell+3/2, \ell+2; \ell+3/2;
- \frac{p^{2}}{\beta^{2}} \frac{(1-t)^{2}}{(1+t)^{2}} \right) \, .
\end{align}
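Eq.~\eqref{Int_ExpBesJ} is easily spot-checked numerically (our own script; the parameter choice $\mu = \ell + 5/2$, $\nu = \ell + 1/2$ with $\ell = 1$ matches the use above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma, hyp2f1

def watson_lhs(a, b, mu, nu):
    # Left-hand side of Eq. (Int_ExpBesJ), by quadrature.
    val, _ = quad(lambda y: np.exp(-a * y) * jv(nu, b * y) * y**(mu - 1),
                  0, np.inf, limit=200)
    return val

def watson_rhs(a, b, mu, nu):
    # Right-hand side of Eq. (Int_ExpBesJ).
    return ((b / 2)**nu * gamma(mu + nu) / (a**(mu + nu) * gamma(nu + 1))
            * hyp2f1((mu + nu) / 2, (mu + nu + 1) / 2, nu + 1,
                     -b**2 / a**2))
```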
This Gaussian hypergeometric series ${}_{2} F_{1}$ is actually a binomial
series
${}_{1} F_0 (\ell+2; z) = \sum_{m=0}^{\infty} (\ell+2)_{m} z^{m} / m! =
(1-z)^{-\ell -2}$
with $z = - p^{2} (1-t)^{2}/[\beta^{2} (1+t)^{2}]$ \citep*[Eq.\
(15.4.6)]{Olver/Lozier/Boisvert/Clark/2010}. Thus, we obtain for the
right-hand side of \cref{FT_Sturm_2}:
\begin{align}
\label{FT_Sturm_6}
& (1-t)^{-2\ell-2} \,
\int_{0}^{\infty} \, \mathrm{e}^{-\beta r \frac{1+t}{1-t}} \,
\, r^{\ell+3/2} J_{\ell+1/2} (p r) \, \mathrm{d} r
\notag
\\
& \quad \; = \; \frac
{(2\ell+2)!} {\Gamma (\ell+3/2)} \,
\frac {(p/2)^{\ell+1/2} \beta (1-t^{2})}
{\bigl[ \beta^{2} (1+t)^{2} + p^{2} (1-t)^{2} \bigr]^{\ell+2}}
\end{align}
The denominator can be simplified further, using
$\beta^{2} (1+t)^{2} + p^{2} (1-t)^{2} = \bigl( \beta^{2} + p^{2} \bigr)
\bigl\{ 1 + \bigl[ 2 \bigl( \beta^{2} - p^{2} \bigr) / \bigl(
\beta^{2} + p^{2} \bigr) \bigr] t + t^{2} \bigr\}$, yielding
\begin{align}
\label{FT_Sturm_9}
& (1-t)^{-2\ell-2} \,
\int_{0}^{\infty} \, \mathrm{e}^{-\beta r \frac{1+t}{1-t}} \,
\, r^{\ell+3/2} J_{\ell+1/2} (p r) \, \mathrm{d} r
\notag
\\
& \quad \; = \; \frac
{(2\ell+2)!} {\Gamma (\ell+3/2)} \,
\notag
\\
& \qquad \times \,
\frac {(p/2)^{\ell+1/2} \beta (1-t^{2})}
{\displaystyle \left[ \bigl( \beta^{2} + p^{2} \bigr)
\left\{ 1 + \frac {2 \bigl( \beta^{2} - p^{2} \bigr)}
{\beta^{2} + p^{2}} t + t^{2} \right\} \right]^{\ell+2}} \, .
\end{align}
The rational function on the right-hand side closely resembles the
generating function \citep*[p.\ 222]{Magnus/Oberhettinger/Soni/1966}
\begin{equation}
\label{GegPol_GenFun}
\bigl( 1 - 2 x t + t^{2} \bigr)^{-\lambda} \; = \;
\sum_{n=0}^{\infty} \, C_{n}^{\lambda} (x) \, t^{n} \, ,
\quad \vert t \vert < 1 \, ,
\end{equation}
of the Gegenbauer polynomials. \citeauthor{Podolsky/Pauling/1929} only
had to apply the differential operator
$t^{1-\lambda} [\partial/\partial t] t^{\lambda}$ to
\cref{GegPol_GenFun}. This yields the following modified generating
function of the Gegenbauer polynomials (compare \citep*[Eq.\
(25)]{Podolsky/Pauling/1929}),
\begin{equation}
\label{Mod_GegPol_GenFun}
\frac {1-t^{2}} {\bigl( 1 - 2 x t + t^{2} \bigr)^{\lambda+1}}
\; = \; \sum_{n=0}^{\infty} \, \frac{\lambda+n}{\lambda} \,
C_{n}^{\lambda} (x) \, t^{n} \, ,
\end{equation}
which I could not find in the usual books on special function theory. The
rational function on the right-hand side of \cref{FT_Sturm_9} is of the
same type as the left-hand side of this modified generating function. If
we make in \cref{Mod_GegPol_GenFun} the substitutions
$x \mapsto (p^{2}-\beta^{2})/(p^{2}+\beta^{2})$ and
$\lambda \mapsto \ell+1$, we obtain the following expansion in terms of
Gegenbauer polynomials:
\begin{align}
\label{FT_Sturm_10}
& \bigl(1-t^{2}\bigr) \bigg/ {\displaystyle
\left\{ 1 - 2 \frac {p^{2}-\beta^{2}}
{p^{2}+\beta^{2}}t + t^{2} \right\}^{\ell+2}}
\notag
\\
& \qquad \; = \;
\sum_{n=0}^{\infty} \, \frac{n+\ell+1}{\ell+1} \,
C_{n}^{\ell+1} \left( \frac {p^{2}-\beta^{2}}
{p^{2}+\beta^{2}} \right) \, t^{n} \, .
\end{align}
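A numerical sanity check of the modified generating function \eqref{Mod_GegPol_GenFun} (our own script):

```python
import numpy as np
from scipy.special import eval_gegenbauer

def mod_genfun_closed(lam, x, t):
    # Left-hand side of Eq. (Mod_GegPol_GenFun).
    return (1 - t**2) / (1 - 2 * x * t + t**2)**(lam + 1)

def mod_genfun_series(lam, x, t, nmax=120):
    # Right-hand side, truncated; converges for |t| < 1.
    n = np.arange(nmax + 1)
    return np.sum((lam + n) / lam * eval_gegenbauer(n, lam, x) * t**n)
```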
Inserting this into \cref{FT_Sturm_9} yields:
\begin{align}
\label{FT_Sturm_11}
& (1-t)^{-2\ell-2} \,
\int_{0}^{\infty} \, \mathrm{e}^{-\beta r \frac{1+t}{1-t}} \,
\, r^{\ell+3/2} J_{\ell+1/2} (p r) \, \mathrm{d} r
\notag
\\
& \quad \; = \; \frac
{(p/2)^{\ell+1/2} (2\ell+2)!} {(\ell+1) \Gamma (\ell+3/2)} \,
\beta
\notag
\\
& \qquad \times \, \sum_{n=0}^{\infty} \, \frac {n+\ell+1}
{\bigl[ p^{2}+\beta^{2} \bigr]^{\ell+2}} \,
C_{n}^{\ell+1} \left( \frac {p^{2}-\beta^{2}}
{p^{2}+\beta^{2}} \right) \, t^{n} \, .
\end{align}
Thus, we finally obtain the following explicit expression for the Fourier
transform of an unnormalized Sturmian:
\begin{align}
\label{FT_Sturm_13}
& (2 \pi)^{-3/2} \,
\int \, \mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}} \,
\mathrm{e}^{-\beta r} \, L_{n-\ell-1}^{(2\ell+1)} (2 \beta r)
\, \mathcal{Y}_{\ell}^{m} (2 \beta \bm{r}) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \quad \; = \; \frac
{(2/\pi)^{1/2} \, 2^{2\ell+1} \, \ell! \, \beta^{\ell+1} \, n}
{\bigl[ p^{2}+\beta^{2} \bigr]^{\ell+2}}
\notag
\\
& \qquad \times \, C_{n-\ell-1}^{\ell+1}
\left( \frac {p^{2}-\beta^{2}} {p^{2}+\beta^{2}} \right) \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \, .
\end{align}
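Stripping the common angular factor, \cref{FT_Sturm_13} is equivalent to the radial integral identity $\int_{0}^{\infty} j_{\ell} (p r) \, \mathrm{e}^{-\beta r} \, L_{n-\ell-1}^{(2\ell+1)} (2 \beta r) \, (2 \beta r)^{\ell} \, r^{2} \, \mathrm{d} r = 2^{2\ell+1} \, \ell! \, \beta^{\ell+1} \, n \, p^{\ell} \, \bigl[ p^{2}+\beta^{2} \bigr]^{-\ell-2} \, C_{n-\ell-1}^{\ell+1} \bigl( [p^{2}-\beta^{2}]/[p^{2}+\beta^{2}] \bigr)$, which the following mpmath sketch (names mine) checks by direct quadrature:

```python
from mpmath import (mp, mpf, quad, inf, exp, sqrt, pi, besselj,
                    laguerre, gegenbauer, factorial)

mp.dps = 25

def sph_j(l, x):
    # spherical Bessel function j_l(x) for x > 0
    return sqrt(pi / (2*x)) * besselj(l + mpf('0.5'), x)

def hankel_side(n, l, beta, p):
    # int_0^inf j_l(p r) e^{-beta r} L_{n-l-1}^{(2l+1)}(2 beta r) (2 beta r)^l r^2 dr
    f = lambda r: (sph_j(l, p*r) * exp(-beta*r)
                   * laguerre(n - l - 1, 2*l + 1, 2*beta*r)
                   * (2*beta*r)**l * r**2)
    return quad(f, [0, inf])

def gegenbauer_side(n, l, beta, p):
    x = (p**2 - beta**2) / (p**2 + beta**2)
    return (2**(2*l + 1) * factorial(l) * beta**(l + 1) * n * p**l
            / (p**2 + beta**2)**(l + 2) * gegenbauer(n - l - 1, l + 1, x))

n, l, beta, p = 3, 1, mpf(1), mpf('1.7')
deviation = abs(hankel_side(n, l, beta, p) - gegenbauer_side(n, l, beta, p))
```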
To obtain the Fourier transform of a normalized Sturmian defined by
\cref{Def_SturmFun}, we multiply \cref{FT_Sturm_13} by the normalization
factor
$(2 \beta)^{3/2} \, \left[ (n-\ell-1)! / [2n(n+\ell)!] \right]^{1/2}$,
yielding \cite[Eq.\ (4.24)]{Weniger/1985}:
\begin{align}
\label{FT_SturmFun}
& \overline{\Psi_{n, \ell}^{m}} (\beta, \bm{p}) \; = \; (2 \pi)^{-3/2}
\, \int \, \mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}} \,
\Psi_{n, \ell}^{m} (\beta, \bm{r}) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \quad \; = \; 2^{\ell} \ell! \, \, \left[ \frac
{2\beta n (n-\ell-1)!}{\pi (n+\ell)!} \right]^{1/2} \left[ \frac
{2\beta} {p^{2}+\beta^{2}} \right]^{\ell+2}
\notag
\\
& \qquad \times
C_{n-\ell-1}^{\ell+1} \left( \frac {p^{2}-\beta^{2}}
{p^{2}+\beta^{2}} \right) \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \, .
\end{align}
To obtain the Fourier transform of a bound-state hydrogen eigenfunction,
we only have to use \cref{SturmFun<->BSHEF} and make the substitution
$\beta \mapsto Z/n$. Thus, we obtain \citep*[Eq.\
(28)]{Podolsky/Pauling/1929}:
\begin{align}
\label{FT_HydEigFun}
& \overline{W_{n, \ell}^{m}} (Z, \bm{p}) \; = \; (2 \pi)^{-3/2} \,
\int \, \mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}} \,
W_{n, \ell}^{m} (Z, \bm{r}) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \quad \; = \; 2^{\ell} \ell! \, \, \left[ \frac
{2Z (n-\ell-1)!}{\pi (n+\ell)!} \right]^{1/2} \left[ \frac
{2 Z n} {n^{2} p^{2}+Z^{2}} \right]^{\ell+2}
\notag
\\
& \qquad \times
C_{n-\ell-1}^{\ell+1}
\left( \frac {n^{2} p^{2}-Z^{2}} {n^{2} p^{2}+Z^{2}} \right) \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \, .
\end{align}
This Fourier transformation was in principle also derived by \citet[Eq.\
(26) on p.\ 241]{Rotenberg/1970} in disguised form. However,
\citeauthor{Rotenberg/1970}'s results are misleading because of an
unfortunate definition of the Sturmians (compare \citep[p.\
283]{Weniger/1985}).
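Because the bound-state eigenfunctions are normalized and the Fourier transformation is unitary, the radial part of \cref{FT_HydEigFun} must satisfy $\int_{0}^{\infty} \vert g_{n \ell} (p) \vert^{2} \, p^{2} \, \mathrm{d} p = 1$, where $g_{n \ell} (p)$ denotes the radial factor multiplying $(-\mathrm{i})^{\ell} Y_{\ell}^{m}$. This provides a convenient independent test (mpmath sketch, names mine):

```python
from mpmath import mp, mpf, quad, inf, sqrt, pi, gegenbauer, factorial

mp.dps = 25

def g_radial(n, l, Z, p):
    # radial factor of the momentum-space hydrogen eigenfunction; the full
    # transform is g_radial * (-i)^l * Y_l^m(p/p), and |(-i)^l| = 1 drops
    # out of the norm
    x = (n**2 * p**2 - Z**2) / (n**2 * p**2 + Z**2)
    return (2**l * factorial(l)
            * sqrt(2*Z*factorial(n - l - 1) / (pi*factorial(n + l)))
            * (2*Z*n / (n**2 * p**2 + Z**2))**(l + 2)
            * gegenbauer(n - l - 1, l + 1, x) * p**l)

def momentum_norm(n, l, Z):
    return quad(lambda p: g_radial(n, l, Z, p)**2 * p**2, [0, 10, inf])

norm_2p = momentum_norm(2, 1, mpf(1))
norm_3s = momentum_norm(3, 0, mpf(1))
```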
If we compare \cref{FT_HydEigFun} with formulas published by other
authors, we find some discrepancies. In the formula given by \citet[Eq.\
(28)]{Podolsky/Pauling/1929}, a phase factor $(- \mathrm{i})^{\ell}$ is
missing. The same error was reproduced by \citet[Eq.\
(8.8)]{Bethe/Salpeter/1977}. The formula given by \citet[Eqs.\ (5.5) and
(5.6)]{Englefield/1972} differs from \cref{FT_HydEigFun} by a phase
factor $(-1)^{m}$. Finally, in the expression given by \citet*[Eq.\
(7.4.69)]{Biedenharn/Louck/1981a} a factor $\pi^{-1/2}$ is missing.
\citet*[pp.\ 50 - 52]{Kaijser/Smith/1977} showed that the generating
function approach of \citeauthor{Podolsky/Pauling/1929} can be extended
to the Fourier transform of a Lambda function defined by
\cref{Def_LambdaFun}. However, the approach of
\citet*{Podolsky/Pauling/1929} and \citet*{Kaijser/Smith/1977} requires
considerable manipulative skills. In \cref{Sec:ExpRBF}, I will show how
the Fourier transforms of bound-state hydrogen eigenfunctions, Sturmians,
and Lambda functions and of other Laguerre-type functions can be
constructed in an almost trivially simple way by expanding generalized
Laguerre polynomials in terms of so-called reduced Bessel functions
(compare \cite[Section IV]{Weniger/1985}).
\typeout{==> Section: The Work of Y\"{u}k\c{c}\"{u} and
Y\"{u}k\c{c}\"{u}}
\section{The Work of Y\"{u}k\c{c}\"{u} and Y\"{u}k\c{c}\"{u}}
\label{Sec:WorkYuekcueYuekcue}
\citet*{Podolsky/Pauling/1929} faced the problem that no simple
closed-form expression for the Hankel-type integral in \cref{FT_Sturm_1} was
known. They solved this problem by computing instead the Fourier
transform of the generating function \eqref{GLag_GenFun}, which leads to
the comparatively simple and explicitly known Hankel-type integral in
\cref{FT_Sturm_2}. In this way, \citeauthor{Podolsky/Pauling/1929} only
had to perform a series expansion of the radial integral in
\cref{FT_Sturm_2} to derive the explicit expression \eqref{FT_HydEigFun}
for the Fourier transform of a bound-state hydrogen eigenfunction.
\citet*{Yuekcue/Yuekcue/2018}, who were apparently unaware of the work by
\citet*{Podolsky/Pauling/1929} or of the whole extensive literature on
this topic, proceeded differently. They utilized the fact that a
generalized Laguerre polynomial $L_{n}^{(\alpha)} (z)$ is according to
\cref{GLag_1F1} a polynomial of degree $n$ in $z$. Thus, the generalized
Laguerre polynomial $L_{n-\ell-1}^{(2\ell+1)} (2 \beta r)$ occurring in
\cref{Def_SturmFun} can be expressed as a sum of powers:
\begin{align}
\label{GLag->PowZ}
& L_{n-\ell-1}^{(2\ell+1)} (2 \beta r)
\notag
\\
& \quad \; = \;
\frac {(n+\ell)!} {(n-\ell-1)!} \, \sum_{\nu=0}^{n-\ell-1} \,
\frac {(-n+\ell+1)_{\nu}}{(2\ell+\nu+1)!} \,
\frac {(2 \beta r)^{\nu}}{\nu!} \, .
\end{align}
To achieve what they believed to be a further simplification,
\citet*[Eqs.\ (10) - (12)]{Yuekcue/Yuekcue/2018} combined
\cref{GLag->PowZ} with the Laguerre multiplication theorem \citep*[p.\
249]{Magnus/Oberhettinger/Soni/1966}
\begin{equation}
\label{GLag_MultThm}
L_{n}^{(\alpha)} (z x) \; = \; \sum_{m=0}^{n} \,
\binom{n+\alpha}{m} \, \frac{(1-z)^{m}}{z^{m-n}} \,
L_{n-m}^{(\alpha)} (x) \, .
\end{equation}
However, the combination of \cref{GLag->PowZ,GLag_MultThm} leads to the
same Hankel-type integrals as the direct use of \cref{GLag->PowZ}. Thus,
this combination accomplishes nothing and only introduces a completely
useless additional inner sum. Therefore, I will only consider the direct
use of \cref{GLag->PowZ}.
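Both the power expansion \eqref{GLag->PowZ} and the multiplication theorem \eqref{GLag_MultThm} are finite sums and can be verified directly (mpmath sketch, names mine):

```python
from mpmath import mp, mpf, laguerre, rf, factorial, binomial

mp.dps = 30

def laguerre_as_powers(n, l, z):
    # right-hand side of the power expansion of L_{n-l-1}^{(2l+1)}(z)
    pref = factorial(n + l) / factorial(n - l - 1)
    return pref * sum(rf(-(n - l - 1), v) / factorial(2*l + v + 1)
                      * z**v / factorial(v) for v in range(n - l))

def laguerre_mult_thm(n, alpha, z, x):
    # L_n^{(alpha)}(z x) expanded by the Laguerre multiplication theorem
    return sum(binomial(n + alpha, m) * (1 - z)**m * z**(n - m)
               * laguerre(n - m, alpha, x) for m in range(n + 1))

e_pow = abs(laguerre(3, 3, mpf('1.3')) - laguerre_as_powers(5, 1, mpf('1.3')))
e_mult = abs(laguerre(4, 2, mpf('0.6') * mpf('2.1'))
             - laguerre_mult_thm(4, 2, mpf('0.6'), mpf('2.1')))
```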
In \citeyear{Slater/1930}, \citet{Slater/1930} introduced the so-called
Slater-type functions, which had an enormous impact on atomic electronic
structure theory and which in unnormalized form are expressed as follows:
\begin{align}
\label{Def_STF}
\chi_{n, \ell}^{m} (\alpha, \bm{r}) \; = \; & (\alpha r)^{n-1} \,
\mathrm{e}^{-\alpha r} \, Y_{\ell}^{m} (\theta, \phi)
\notag
\\
\; = \; &
(\alpha r)^{n-\ell-1} \, \mathrm{e}^{-\alpha r} \,
\mathcal{Y}_{\ell}^{m} (\alpha \bm{r}) \, ,
\quad \alpha > 0 \, .
\end{align}
I always tacitly assume that the principal quantum number $n$ is a
positive integer $n \in \mathbb{N}$ satisfying $n - \ell \ge 1$.
With the help of Slater-type functions, an unnormalized Sturmian function
\eqref{Def_SturmFun} with fixed $\beta > 0$ can be expressed as follows:
\begin{align}
\label{SturmFunUN->STF}
& \mathrm{e}^{-\beta r} \, L_{n-\ell-1}^{(2\ell+1)} (2\beta r) \,
\mathcal{Y}_{\ell}^{m} (2 \beta \bm{r})
\; = \; \frac {(n+\ell)!} {(n-\ell-1)!}
\notag
\\
& \quad \, \times \, \sum_{\nu=0}^{n-\ell-1} \,
\frac {(-n+\ell+1)_{\nu}}{(2\ell+\nu+1)!} \,
\frac {2^{\nu+\ell}}{\nu!} \,
\chi_{\nu+\ell+1, \ell}^{m} (\beta, \bm{r}) \, .
\end{align}
The idea of expressing functions based on generalized Laguerre
polynomials as finite sums of Slater-type functions is not new. To the
best of my knowledge, it was introduced by \citet{Smeyers/1966} in
\citeyear{Smeyers/1966}, who expressed Lambda functions defined by
\cref{Def_LambdaFun} as linear combinations of Slater-type
functions. \citeauthor{Smeyers/1966} constructed in this way one-range
addition theorems of Slater-type functions, which were expansions in
terms of Lambda functions. Thus, their expansion coefficients are overlap
integrals \citep[Section 3]{Smeyers/1966}. In \citeyear{Guseinov/1978},
\citet[Eqs.\ (6) - (8)]{Guseinov/1978} adopted \citeauthor{Smeyers/1966}'
approach and consistently used it in his countless later publications,
without ever giving credit to \citet{Smeyers/1966}.
Smeyers' approach is undoubtedly very simple. Nevertheless, it is not a
good one numerically. In \cref{GLag->PowZ}, the signs alternate
strictly. Therefore, in sums of the type of \cref{SturmFunUN->STF}, which
inherit the alternating signs from \cref{GLag->PowZ}, numerical
instabilities are to be expected in the case of larger summation
indices. This had already been emphasized in
\citeyear{Trivedi/Steinborn/1982} by \citet*[pp.\ 116 -
117]{Trivedi/Steinborn/1982}. For a more detailed discussion plus
additional references, see \citep[pp.\ 32 - 34]{Weniger/2012}.
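The severity of the cancellations can be quantified by the condition number of the sum in \cref{GLag->PowZ}, i.e., the ratio of the sum of the absolute values of the terms to the absolute value of their sum; its decimal logarithm roughly counts the number of digits lost in fixed-precision arithmetic. The following mpmath sketch (names mine) evaluates this ratio in high precision:

```python
from mpmath import mp, mpf, rf, factorial, fsum

mp.dps = 50  # high precision, so the ratio itself is reliable

def expansion_terms(n, l, z):
    # terms of the power expansion of L_{n-l-1}^{(2l+1)}(z)
    pref = factorial(n + l) / factorial(n - l - 1)
    return [pref * rf(-(n - l - 1), v) / factorial(2*l + v + 1)
            * z**v / factorial(v) for v in range(n - l)]

def condition_number(n, l, z):
    terms = expansion_terms(n, l, z)
    return fsum(abs(t) for t in terms) / abs(fsum(terms))

kappa_small = condition_number(3, 1, mpf(20))   # short sum: harmless
kappa_large = condition_number(30, 1, mpf(20))  # large n: massive cancellation
```

For $n = 30$, $\ell = 1$, and $z = 20$, many decimal digits are lost, whereas the short sum for $n = 3$ is harmless.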
Fourier transformation is a linear operation. Consequently,
\cref{SturmFunUN->STF} implies that the Fourier transformation of a
Sturmian -- or of any of the various other functions based on generalized
Laguerre polynomials -- can be expressed as a finite linear combination
of Fourier transforms of Slater-type functions with integral principal
quantum numbers (compare \citep*[Eq.\ (12)]{Yuekcue/Yuekcue/2018}).
There is an extensive literature on Fourier transforms of Slater-type
functions. I am aware of articles by \citet{Geller/1962,Geller/1963a},
\citet{Silverstone/1966,Silverstone/1967a},
\citet*{Edwards/Gottlieb/Doddrell/1979}, \citet*{Henneker/Cade/1968},
\citet*{Kaijser/Smith/1977}, \citet*{Weniger/Steinborn/1983a},
\citet{Niukkanen/1984c}, \citet*{Belkic/Taylor/1989}, and by
\citet{Akdemir/2018}. In addition, there is a Wikipedia article
\citep{WikipediaSTF/2018}, whose principal reference is the article by
\citet*{Belkic/Taylor/1989}. \citet*{Yuekcue/Yuekcue/2018} only mentioned
\citet{Niukkanen/1984c} as their Ref.\ [8].
The Rayleigh expansion \eqref{Rayleigh_Expan} leads to an expression of
the Fourier transform of a Slater-type function as a Hankel-type radial
integral:
\begin{align}
\label{Def_STF_FT}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \; (2\pi)^{-3/2}
\, \int \, \mathrm{e}^{- \mathrm{i} \bm{p} \cdot \bm{r}} \,
\chi_{n,\ell}^{m} (\alpha, \bm{r}) \, \mathrm{d}^3 \bm{r}
\notag
\\
& \qquad \; = \;
\alpha^{n-1} \, (-\mathrm{i})^{\ell} \, Y_{\ell}^{m} (\bm{p}/p)
\notag
\\
& \qquad \quad \; \; \times \,
p^{-1/2} \int_{0}^{\infty} \, \mathrm{e}^{- \alpha r} \,
r^{n+1/2} \, J_{\ell+1/2} (p r) \, \mathrm{d} r \, .
\end{align}
This Hankel-type integral is a special case of the one in
\cref{Int_ExpBesJ}. Thus, we immediately obtain \citep*[Eq.\
(3.15)]{Weniger/Steinborn/1983a}
\begin{align}
\label{FT_STF_1}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \; \frac
{(n+\ell+1)!} {\alpha^{\ell+3} \, (2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \, {}_{2} F_{1} \left( \frac{n+\ell+2}{2},
\frac{n+\ell+3}{2}; \ell+\frac{3}{2}; - \frac{p^2}{\alpha^2} \right)
\, ,
\end{align}
which corresponds to \citep*[Eqs.\ (14) and (16)]{Yuekcue/Yuekcue/2018}.
For the derivation of Watson's hypergeometric representation
\eqref{Int_ExpBesJ}, one only has to insert the power series
$J_{\nu} (z) = \bigl[ (z/2)^{\nu}/\Gamma(\nu+1) \bigr] {}_{0} F_{1}
(\nu+1; -z^{2}/4)$ \citep*[Eq.\
(10.16.9)]{Olver/Lozier/Boisvert/Clark/2010} into the integral, followed
by an interchange of summation and term-wise integration. In the final
step, the Pochhammer duplication formula \citep*[Eq.\
(5.2.8)]{Olver/Lozier/Boisvert/Clark/2010} is to be used.
\citeauthor{Watson/1922}'s derivation can be modified easily. If we
instead use
$J_{\nu} (z) = \bigl[ (z/2)^{\nu} \mathrm{e}^{\mp \mathrm{i} z}/\Gamma
(\nu+1) \bigr] {}_{1} F_{1} (\nu+1/2; 2\nu+1; \pm 2 \mathrm{i} z)$
\citep*[Eq.\ (10.16.5)]{Olver/Lozier/Boisvert/Clark/2010}, we obtain an
alternative representation which involves a ${}_{2} F_{1}$ with complex
argument:
\begin{align}
\label{Int_ExpBesJ_CompArg}
& \int_{0}^{\infty} \, \mathrm{e}^{-at} \, J_{\nu} (bt) \, t^{\mu-1}
\, \mathrm{d} t \; = \; \frac
{(b/2)^{\nu}}{(a \pm \mathrm{i} b)^{\mu+\nu}} \,
\frac{\Gamma (\mu+\nu)}{\Gamma (\nu+1)}
\notag
\\
& \quad \times \, {}_{2} F_{1}
\left( \nu+\frac{1}{2}, \mu+\nu; 2\nu+1;
\frac{\pm 2 \mathrm{i} b}{a \pm \mathrm{i} b} \right) \, .
\end{align}
This yields the following Fourier transformation:
\begin{align}
\label{FT_STF_1_C}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \;
\frac {(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\frac{\alpha^{n-1}}{(\alpha \pm \mathrm{i} p)^{n+\ell+2}} \, \mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \,
{}_{2} F_{1} \left( n+\ell+2, \ell+1; 2 \ell +2;
\frac{\pm 2 \mathrm{i} p}{\alpha \pm \mathrm{i} p} \right) \, .
\end{align}
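With the upper signs, \cref{Int_ExpBesJ_CompArg} can be validated against a direct numerical quadrature; mpmath evaluates the ${}_{2} F_{1}$ with complex argument without difficulty (sketch, names mine):

```python
from mpmath import mp, mpf, quad, inf, exp, besselj, gamma, hyp2f1

mp.dps = 25
a, b, nu, mu = mpf(3), mpf(1), mpf('1.5'), mpf('2.5')

# left-hand side: int_0^inf e^{-a t} J_nu(b t) t^{mu-1} dt
integral = quad(lambda t: exp(-a*t) * besselj(nu, b*t) * t**(mu - 1),
                [0, inf])

# right-hand side with the upper signs
z = 2j*b / (a + 1j*b)
closed = ((b/2)**nu / (a + 1j*b)**(mu + nu) * gamma(mu + nu) / gamma(nu + 1)
          * hyp2f1(nu + mpf('0.5'), mu + nu, 2*nu + 1, z))

deviation = abs(integral - closed)
```

The vanishing imaginary part of the closed form also illustrates that the integral is real for real parameters.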
A slightly less general result had been obtained by \citet[Eq.\
(15)]{Belkic/Taylor/1989}. They used only the upper signs in the
expression involving the ${}_{1} F_{1}$ given above. In this way,
\citet[Eq.\ (15)]{Belkic/Taylor/1989} obtained only the upper signs on
the right-hand side of \cref{FT_STF_1_C}.
The Hankel-type integral in \cref{Int_ExpBesJ_CompArg} is real if
$\mu, \nu, a, b \in \mathbb{R}$. Thus, the right-hand side of
\cref{Int_ExpBesJ_CompArg} also has to be real, or equivalently, it has
to be equal to its complex conjugate. This implies:
\begin{align}
\label{HTI_mod_4_real}
& {}_{2} F_{1}
\left( \nu+\frac{1}{2}, \mu+\nu; 2\nu+1;
\frac{\pm 2 \mathrm{i} b}{a \pm \mathrm{i} b} \right) \; = \;
\left( \frac{a \pm \mathrm{i} b}
{a \mp \mathrm{i} b} \right)^{\mu+\nu}
\notag
\\
& \quad \times \,
{}_{2} F_{1} \left( \nu+\frac{1}{2}, \mu+\nu; 2\nu+1;
\frac{\mp 2 \mathrm{i} b}{a \mp \mathrm{i} b} \right) \, .
\end{align}
Analogous symmetries also exist in \cref{FT_STF_1_C} and in all other
expressions of that kind derived later.
The hypergeometric series ${}_{2} F_{1}$ in \cref{FT_STF_1,FT_STF_1_C} do
not converge for all $p \ge 0$. We either have
$\vert-p^{2}/\alpha^{2} \vert \to \infty$ as $p \to \infty$, or
$\vert \pm 2 \mathrm{i} p/(\alpha \pm \mathrm{i} p) \vert \to 2$ as
$p \to \infty$. Thus, \cref{FT_STF_1,FT_STF_1_C} are not sufficient for
computational purposes. They are, however, convenient starting points for
the construction of alternative expressions with better numerical
properties. In the case of the ${}_{2} F_{1}$ in \cref{FT_STF_1}, this
had already been emphasized in the first edition of
\citeauthor{Watson/1922}'s classic book \citep[Eq.\ (3) on p.\
385]{Watson/1922} which appeared in \citeyear{Watson/1922}.
For the construction of analytic continuation formulas, it makes sense to
use the highly developed transformation theory of the Gaussian
hypergeometric function ${}_{2} F_{1}$ as an ordering principle (see for
example \citep*[pp.\ 47 - 51]{Magnus/Oberhettinger/Soni/1966} or
\citep*[\S 15.8 Transformations of
Variable]{Olver/Lozier/Boisvert/Clark/2010}). This leads to a vast number
of alternative expressions (far too many to be presented
here). Therefore, I will only concentrate on a few illustrative examples.
The first author -- N.\ \citeauthor{Yuekcue/2017} -- should be aware of
the relevance of analytic continuation formulas of hypergeometric
functions because of his recent article \emph{Hypergeometric Functions in
Mathematics and Theoretical Physics} \citep{Yuekcue/2017}.
The simplest transformations of a ${}_{2} F_{1}$ are the Euler and Pfaff
transformations (see for example \citep*[p.\
47]{Magnus/Oberhettinger/Soni/1966}):
\begin{align}
\label{LTr_0}
& {}_{2} F_{1} (a, b; c; z)
\notag
\\
& \qquad \; = \;
(1-z)^{c-a-b} \, {}_{2} F_{1} (c-a, c-b; c; z)
\\
\label{LTr_1}
& \qquad \; = \;
(1-z)^{-a} \, {}_{2} F_{1} \bigl( a, c-b; c; z/(z-1) \bigr)
\\
\label{LTr_2}
& \qquad \; = \;
(1-z)^{-b} \, {}_{2} F_{1} \bigl( c-a, b; c; z/(z-1) \bigr) \, .
\end{align}
The application of the Euler transformation \eqref{LTr_0} to the
${}_{2} F_{1}$s in \cref{FT_STF_1,FT_STF_1_C} yields \citep*[Eq.\
(3.11)]{Weniger/Steinborn/1983a}
\begin{align}
\label{FT_STF_2}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \; \frac
{(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \quad \times \, \frac{\alpha^{2n-\ell-1}}{[\alpha^2+p^2]^{n+1}}
\notag
\\
& \qquad \times \, \,
{}_{2} F_{1} \left( \frac{\ell-n}{2}, \frac{\ell-n+1}{2};
\ell+\frac{3}{2}; - \frac{p^2}{\alpha^2} \right)
\end{align}
and
\begin{align}
\label{FT_STF_1_C_Euler}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \; \frac
{(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \, \,
\frac{\alpha^{n-1}} {(\alpha \pm \mathrm{i} p)^{\ell+1} \,
(\alpha \mp \mathrm{i} p)^{n+1}}
\notag
\\
& \qquad \quad \, {}_{2} F_{1} \left( \ell-n, \ell+1; 2 \ell +2;
\frac{\pm 2 \mathrm{i} p}{\alpha \pm \mathrm{i} p} \right) \, .
\end{align}
Since we assume $n \in \mathbb{N}$, $\ell \in \mathbb{N}_{0}$, and
$n-\ell-1 \ge 0$, either $(\ell-n)/2$ or $(\ell-n+1)/2$ is a non-positive
integer, as is $\ell-n$. Accordingly, the ${}_{2} F_{1}$s in
\cref{FT_STF_2,FT_STF_1_C_Euler} terminate, which represents a
substantial improvement compared to the non-terminating ${}_{2} F_{1}$s
in \cref{FT_STF_1,FT_STF_1_C}. Both \cref{FT_STF_2,FT_STF_1_C_Euler}
allow a convenient evaluation of
$\overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p})$ for all
$\bm{p} \in \mathbb{R}^{3}$. For recurrence formulas of the Gaussian
hypergeometric function ${}_{2} F_{1} (a, b; c;z)$, where two or three of
the parameters $a$, $b$, and $c$ change \emph{simultaneously}, see
\citep[Appendix C]{Weniger/2001}.
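The practical superiority of the terminating representation can be demonstrated by comparing \cref{FT_STF_2} with the defining radial integral \eqref{Def_STF_FT} at a point $p > \alpha$, i.e., outside the circle of convergence of the hypergeometric series in \cref{FT_STF_1} (mpmath sketch, names mine; the common factor $(-\mathrm{i})^{\ell} Y_{\ell}^{m}$ is stripped off):

```python
from mpmath import (mp, mpf, quad, inf, exp, sqrt, pi, besselj,
                    hyp2f1, factorial, rf)

mp.dps = 25
n, l, alpha, p = 3, 1, mpf(1), mpf(2)

# radial part of the defining Hankel-type integral
direct = (alpha**(n - 1) * p**mpf('-0.5')
          * quad(lambda r: exp(-alpha*r) * r**(n + mpf('0.5'))
                 * besselj(l + mpf('0.5'), p*r), [0, inf]))

# radial part of the terminating representation; the factor (p/2)^l comes
# from the stripped solid harmonic of argument -i p/2
euler = (factorial(n + l + 1) / (sqrt(2*pi) * rf(mpf('0.5'), l + 1))
         * (p/2)**l * alpha**(2*n - l - 1) / (alpha**2 + p**2)**(n + 1)
         * hyp2f1(mpf(l - n)/2, mpf(l - n + 1)/2, l + mpf('1.5'),
                  -p**2/alpha**2))

deviation = abs(direct - euler)
```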
We can also employ the Pfaff transformations \eqref{LTr_1} and
\eqref{LTr_2}. In the case of \cref{FT_STF_1}, this yields hypergeometric
series with argument $p^{2}/(\alpha^{2}+p^{2})$ that either terminate or
converge for all $p \ge 0$ \citep*[Eqs.\ (3.16) and
(3.17)]{Weniger/Steinborn/1983a}. In the case of \cref{FT_STF_1_C_Euler},
we only obtain complex conjugates of known radial parts.
But this is not yet the end of the story. By systematically exploiting
the \emph{known} transformation properties of the Gaussian hypergeometric
function ${}_{2} F_{1}$, many other terminating or non-terminating
expressions can be derived. For example, we could also use one of the
linear transformations that accomplish the variable transformations
$z \mapsto 1-z$, $z \mapsto 1/z$, $z \mapsto 1/(1-z)$, and
$z \mapsto 1 - 1/z$, respectively, by expressing a given ${}_{2} F_{1}$
in terms of two other ${}_{2} F_{1}$s (see for example \citep*[pp.\ 47 -
49]{Magnus/Oberhettinger/Soni/1966}). Normally, these transformations
lead to comparatively complicated expressions which can safely be
ignored. An exception is the following expression obtained by a
transformation $z \mapsto 1/(1-z)$ \citep*[Eqs.\ (3.19) and
(3.20)]{Weniger/Steinborn/1983a}:
{\allowdisplaybreaks
\begin{align}
\label{FT_STF_5}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \; (\pi/2)^{1/2}
\, (n+\ell+1)! \, \mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \quad \times \, \Biggl[
\frac{\alpha^{n-1} [\alpha^2+p^2]^{-(n+\ell+2)/2}}{\Gamma
\bigl([n+\ell+3]/2 \bigr) \Gamma \bigl([\ell-n+1]/2 \bigr)}
\notag
\\[1\jot]
& \qquad \times \, {}_{2} F_{1} \left( \frac{n+\ell+2}{2},
\frac{\ell-n}{2}; \frac{1}{2}; \frac{\alpha^2}{\alpha^2+p^2} \right)
\notag \\[1\jot]
& \quad - \, \frac{2\alpha^{n} [\alpha^2+p^2]^{-(n+\ell+3)/2}}
{\Gamma \bigl([n+\ell+2]/2 \bigr) \Gamma \bigl([\ell-n]/2 \bigr)}
\notag \\[1\jot]
& \qquad \times \, {}_{2} F_{1} \left( \frac{n+\ell+3}{2},
\frac{\ell-n+1}{2}; \frac{3}{2}; \frac{\alpha^2}{\alpha^2+p^2}
\right) \Biggr] \, .
\end{align} }
This expression is simpler than it looks. If $n-\ell$ is even, the second
part of the right-hand side vanishes because of the gamma function
$\Gamma \bigl([\ell-n]/2 \bigr)$, and if $n-\ell$ is odd, the first part
vanishes because of the gamma function
$\Gamma \bigl([\ell-n+1]/2 \bigr)$.
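In a numerical implementation of \cref{FT_STF_5}, the vanishing of one of the two parts is handled most conveniently via the reciprocal gamma function, which is entire and simply returns zero at the poles of the gamma function. The following mpmath sketch (names mine; angular factor stripped off as before) compares \cref{FT_STF_5} with the terminating representation \eqref{FT_STF_2} for even and odd $n - \ell$:

```python
from mpmath import mp, mpf, sqrt, pi, hyp2f1, rgamma, factorial, rf

mp.dps = 25

def radial_ft_stf5(n, l, alpha, p):
    # rgamma(x) = 1/Gamma(x) vanishes at the poles of the gamma function,
    # which switches off one of the two terms automatically
    s2 = alpha**2 + p**2
    u = alpha**2 / s2
    term1 = (alpha**(n - 1) * s2**(-mpf(n + l + 2)/2)
             * rgamma(mpf(n + l + 3)/2) * rgamma(mpf(l - n + 1)/2)
             * hyp2f1(mpf(n + l + 2)/2, mpf(l - n)/2, mpf('0.5'), u))
    term2 = (2 * alpha**n * s2**(-mpf(n + l + 3)/2)
             * rgamma(mpf(n + l + 2)/2) * rgamma(mpf(l - n)/2)
             * hyp2f1(mpf(n + l + 3)/2, mpf(l - n + 1)/2, mpf('1.5'), u))
    return sqrt(pi/2) * factorial(n + l + 1) * (p/2)**l * (term1 - term2)

def radial_ft_stf2(n, l, alpha, p):
    # radial part of the terminating representation, for comparison
    return (factorial(n + l + 1) / (sqrt(2*pi) * rf(mpf('0.5'), l + 1))
            * (p/2)**l * alpha**(2*n - l - 1) / (alpha**2 + p**2)**(n + 1)
            * hyp2f1(mpf(l - n)/2, mpf(l - n + 1)/2, l + mpf('1.5'),
                     -p**2/alpha**2))

dev_even = abs(radial_ft_stf5(3, 1, mpf(1), mpf(2))
               - radial_ft_stf2(3, 1, mpf(1), mpf(2)))
dev_odd = abs(radial_ft_stf5(4, 1, mpf(1), mpf(2))
              - radial_ft_stf2(4, 1, mpf(1), mpf(2)))
```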
In addition to linear transformations, a Gaussian hypergeometric function
${}_{2} F_{1}$ may also satisfy so-called quadratic transformations (see
for example \citep*[pp.\ 49 - 51]{Magnus/Oberhettinger/Soni/1966} or
\citep*[\S 15.8(iii) Quadratic
Transformations]{Olver/Lozier/Boisvert/Clark/2010}). Unlike the linear
transformations considered so far, quadratic transformations do not exist
for a ${}_{2} F_{1} (a, b; c; z)$ with completely arbitrary parameters
$a$, $b$, and $c$. They only exist for special values of the parameters
$a$, $b$, and $c$ \citep*[Table
15.8.1]{Olver/Lozier/Boisvert/Clark/2010}.
The hypergeometric series in \cref{FT_STF_1} is of the general type
${}_{2} F_{1} \left( a, a+1/2; c; z \right)$. This suggests the
application of the following quadratic transformations \citep*[p.\
50]{Magnus/Oberhettinger/Soni/1966} to this ${}_{2} F_{1}$:
\begin{align}
\label{QT_2F1_17}
& {}_{2} F_{1} \left( a, a+\frac{1}{2}; c; z \right)
\notag
\\
& \quad \; = \; (1-z)^{-a} \, {}_{2} F_{1}
\left( 2a, 2c-2a-1; c; \frac{\sqrt{1-z}-1}{2\sqrt{1-z}} \right)
\\
\label{QT_2F1_18}
& \quad \; = \; \left(1\pm\sqrt{z}\right)^{-2a} \,
{}_{2} F_{1} \left( 2a, c-\frac{1}{2}; 2c-1;
\pm \frac{2\sqrt{z}}{1 \pm \sqrt{z}} \right)
\\
\label{QT_2F1_19}
& \quad \; = \; \left( \frac{1+\sqrt{1-z}}{2} \right)^{-2a}
\notag
\\
& \qquad \quad \times \, {}_{2} F_{1} \left( 2a, 2a-c+1; c;
\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}} \right) \, .
\end{align}
Application of \cref{QT_2F1_17,QT_2F1_18,QT_2F1_19} to the ${}_{2} F_{1}$
in \cref{FT_STF_1} yields the following alternative expressions:
\begin{align}
\label{FT_STF_1_QT_17}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p})
\notag
\\
& \quad \; = \; \frac
{(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\frac{\alpha^{n-1}}{[\alpha^{2} + p^{2}]^{(n+\ell+2)/2}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \, {}_{2} F_{1} \left( n+\ell+2, \ell-n;
\ell+\frac{3}{2}; \frac{\sqrt{\alpha^{2}+p^{2}}-\alpha} {2\sqrt{\alpha^{2}+p^{2}}} \right)
\\
\label{FT_STF_1_QT_18}
& \quad \; = \; \frac
{(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\frac{\alpha^{n-1}}{(\alpha \pm \mathrm{i} p)^{n+\ell+2}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \,
{}_{2} F_{1} \left( n+\ell+2, \ell+1; 2\ell+2;
\frac{ \pm 2\mathrm{i}p}{\alpha \pm \mathrm{i} p} \right)
\\
\label{FT_STF_1_QT_19}
& \quad \; = \; \frac {(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\frac {2^{n+\ell+2} \, \alpha^{n-1}}
{\bigl[ \alpha + \sqrt{\alpha^{2}+p^{2}} \bigr]^{n+\ell+2}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \,
{}_{2} F_{1} \left( n+\ell+2, n+\frac{3}{2}; \ell+\frac{3}{2};
\frac{\alpha-\sqrt{\alpha^{2}+p^{2}}}
{\alpha+\sqrt{\alpha^{2}+p^{2}}} \right) \, .
\end{align}
\Cref{FT_STF_1_C,FT_STF_1_QT_18} are identical. The derivation of
\cref{FT_STF_1_QT_18} shows that the quadratic transformation
\eqref{QT_2F1_18} can create a representation containing a ${}_{2} F_{1}$
with complex argument from a ${}_{2} F_{1}$ with real
argument. Consequently, the derivation of the complex expression
\eqref{Int_ExpBesJ_CompArg} for the Hankel-type integral in
\cref{Int_ExpBesJ} by directly evaluating the integral is -- strictly
speaking -- superfluous. Applying the quadratic transformation
\eqref{QT_2F1_18} to the ${}_{2} F_{1}$ in \cref{Int_ExpBesJ} would have
done the job.
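All three quadratic transformations can be validated numerically; for $z < 0$, which is the relevant case in \cref{FT_STF_1}, the argument of the ${}_{2} F_{1}$ in \cref{QT_2F1_18} indeed becomes complex (mpmath sketch, names mine, upper signs):

```python
from mpmath import mp, mpf, sqrt, hyp2f1

mp.dps = 25

def check(a, c, z):
    # deviations of the three quadratic transformations (upper signs)
    # from the left-hand side 2F1(a, a+1/2; c; z)
    lhs = hyp2f1(a, a + mpf('0.5'), c, z)
    u, s = sqrt(1 - z), sqrt(z)          # s is complex for z < 0
    qt17 = (1 - z)**(-a) * hyp2f1(2*a, 2*c - 2*a - 1, c, (u - 1)/(2*u))
    qt18 = ((1 + s)**(-2*a)
            * hyp2f1(2*a, c - mpf('0.5'), 2*c - 1, 2*s/(1 + s)))
    qt19 = (((1 + u)/2)**(-2*a)
            * hyp2f1(2*a, 2*a - c + 1, c, (1 - u)/(1 + u)))
    return [abs(t - lhs) for t in (qt17, qt18, qt19)]

dev_pos = check(mpf('0.7'), mpf('1.3'), mpf('0.36'))   # real arguments
dev_neg = check(mpf('0.7'), mpf('1.3'), mpf('-0.1'))   # QT_18 argument complex
```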
\begin{widetext}
Only the ${}_{2} F_{1}$ in \cref{FT_STF_1_QT_17} terminates. As a
remedy, we can apply the Euler transformation \eqref{LTr_0} to the
non-terminating ${}_{2} F_{1}$s in
\cref{FT_STF_1_QT_18,FT_STF_1_QT_19}, yielding
\begin{align}
\label{FT_STF_1_QT_18_Euler}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p})
\notag
\\
& \qquad \; = \;
\frac {(n+\ell+1)!} {(2\pi)^{1/2} \, (1/2)_{\ell+1}} \,
\frac{\alpha^{n-1}} {(\alpha \pm \mathrm{i} p)^{\ell+1} \,
(\alpha \mp \mathrm{i} p)^{n+1}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2) \,
{}_{2} F_{1} \left( \ell-n, \ell+1; 2 \ell +2;
\frac{\pm 2 \mathrm{i} p}{\alpha \pm \mathrm{i} p} \right)
\\
\label{FT_STF_1_QT_19_Euler}
& \qquad \; = \; \frac {(n+\ell+1)!}{(2\pi)^{1/2} \, (1/2)_{\ell+1}}
\, \frac{\alpha^{n-1}}{2^{n-\ell}} \,
\frac{\left[\alpha+\sqrt{\alpha^{2}+p^{2}} \right]^{n-\ell}}
{\left[\alpha^{2}+p^{2}\right]^{n+1}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2) \,
{}_{2} F_{1} \left(\ell-n, -n-\frac{1}{2}; \ell+\frac{3}{2};
\frac{\alpha-\sqrt{\alpha^{2}+p^{2}}}
{\alpha+\sqrt{\alpha^{2}+p^{2}}} \right) \, .
\end{align}
\end{widetext}
The terminating ${}_{2} F_{1}$ in \cref{FT_STF_1_QT_17} can be expressed
as a Gegenbauer polynomial via \citep*[p.\
220]{Magnus/Oberhettinger/Soni/1966}
\begin{equation}
\label{GegPol_2F1_a}
C_{n}^{\lambda} (x) \; = \; \frac{(2\lambda)_n}{n!} \,
{}_2 F_1 \left( -n, n+2\lambda; \lambda+\frac{1}{2};
\frac{1-x}{2} \right) \, ,
\end{equation}
yielding
\begin{align}
\label{FT_STF_8_GegPol}
& \overline{\chi_{n,\ell}^{m}} (\alpha, \bm{p}) \; = \;
\frac {(n+\ell+1)! \, (n-\ell)!}
{(2\pi)^{1/2} \, (1/2)_{\ell+1} \, (2\ell+2)_{n-\ell}} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}/2)
\notag
\\
& \qquad \times \,
\frac{\alpha^{n-1}}{[\alpha^2+p^2]^{\frac{n+\ell}{2}+1}} \,
C_{n-\ell}^{\ell+1}
\left( \frac{\alpha}{\sqrt{\alpha^{2}+p^{2}}} \right) \, .
\end{align}
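Like the representations discussed before, \cref{FT_STF_8_GegPol} can be tested against the defining radial integral \eqref{Def_STF_FT} (mpmath sketch, names mine, angular factor stripped off):

```python
from mpmath import (mp, mpf, quad, inf, exp, sqrt, pi, besselj,
                    gegenbauer, factorial, rf)

mp.dps = 25
n, l, alpha, p = 2, 1, mpf(1), mpf('1.5')

# radial part of the defining Hankel-type integral
direct = (alpha**(n - 1) * p**mpf('-0.5')
          * quad(lambda r: exp(-alpha*r) * r**(n + mpf('0.5'))
                 * besselj(l + mpf('0.5'), p*r), [0, inf]))

# radial part of the Gegenbauer polynomial representation
s = sqrt(alpha**2 + p**2)
geg = (factorial(n + l + 1) * factorial(n - l)
       / (sqrt(2*pi) * rf(mpf('0.5'), l + 1) * rf(mpf(2*l + 2), n - l))
       * (p/2)**l * alpha**(n - 1) / s**(n + l + 2)
       * gegenbauer(n - l, l + 1, alpha/s))

deviation = abs(direct - geg)
```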
The Gegenbauer polynomial representation \eqref{FT_STF_8_GegPol}
corresponds to the second representation given by
\citet*{Yuekcue/Yuekcue/2018} in their Eqs.\ (13) and (15). As their
source, \citet*{Yuekcue/Yuekcue/2018} give the book by
\citet*{Gradshteyn/Rhyzhik/2000} as their Ref.\ [25], without specifying
a page or equation number. Unfortunately, I was not able to find the
corresponding expression in the book by \citet*{Gradshteyn/Rhyzhik/2000}.
In earlier articles by \citet*[Eq.\
(12)]{Yavuz/Yuekcue/Oeztekin/Yilmaz/Doenduer/2005} and \citet[Eq.\
(32)]{Yuekcue/2015}, the Gegenbauer polynomial representation
\eqref{FT_STF_8_GegPol} had been attributed to
\citet{Guseinov/1987}. Google Scholar gave me the title of
\citeauthor{Guseinov/1987}'s article, but I was not able to obtain a
copy. Personal contacts at the Wuhan Institute of Physics and Mathematics
of the Chinese Academy of Sciences could not help, either.
But \citeauthor{Guseinov/1987} was not the first one to derive the
Gegenbauer polynomial representation \eqref{FT_STF_8_GegPol}. To the best
of my knowledge, this had been achieved by \citet{Niukkanen/1984c} in
\citeyear{Niukkanen/1984c}, who introduced a fairly large class of
exponentially decaying functions \citep[Eqs.\ (2) and
(3)]{Niukkanen/1984c}, which contain all function sets considered in this
article as special cases. The radial part of the Fourier transform of
\citeauthor{Niukkanen/1984c}'s function can be expressed in terms of an
Appell function $F_{2}$ \citep[Eq.\ (21)]{Niukkanen/1984c}, which is a
hypergeometric function in two variables \citep*[Eq.\
(16.13.2)]{Olver/Lozier/Boisvert/Clark/2010}. By means of a reduction
formula in combination with a suitable quadratic transformation of a
${}_{2} F_{1}$, \citet[Eq.\ (55)]{Niukkanen/1984c} obtained the
Gegenbauer polynomial representation \eqref{FT_STF_8_GegPol}. This
Gegenbauer representation had also been derived by \citet*[Eq.\
(21)]{Belkic/Taylor/1989} in \citeyear{Belkic/Taylor/1989} in connection
with their restricted version of \cref{Int_ExpBesJ_CompArg} \citep[Eq.\
(15)]{Belkic/Taylor/1989}.
\citet*{Yuekcue/Yuekcue/2018} used either a representation given by their
Eqs.\ (14) and (16) involving a non-terminating ${}_{2} F_{1}$, which
correspond to \cref{FT_STF_1}, or alternatively a Gegenbauer polynomial
representation given by their Eqs.\ (13) and (15), which correspond to
\cref{FT_STF_8_GegPol}. The non-terminating ${}_{2} F_{1}$ in
\cref{FT_STF_1} converges only for $\vert p^{2}/\alpha^{2} \vert < 1$,
whereas the Gegenbauer polynomial in \cref{FT_STF_8_GegPol} is meaningful
for all $\bm{p} \in \mathbb{R}^{3}$. Thus,
\citeauthor{Yuekcue/Yuekcue/2018} had to prove that their Gegenbauer
polynomial representation provides an analytic continuation of their
representation involving a non-terminating ${}_{2} F_{1}$ with a finite
radius of convergence to all $\bm{p} \in
\mathbb{R}^{3}$. They did this by showing in \citep[Table
1]{Yuekcue/Yuekcue/2018} that the radial parts of these representations
give for a variety of quantum numbers $n$ and $\ell$ and for certain
values of $p$ identical \emph{numerical} results. This highly pedestrian
approach is no substitute for a rigorous mathematical proof.
So far, I have only shown that representations involving a ${}_{2} F_{1}$
with real argument can be obtained from representations involving a
${}_{2} F_{1}$ with complex argument (compare
\cref{FT_STF_1_C,FT_STF_1_C_Euler}). However, the inverse operations are
also possible. For example, the application of the quadratic
transformation \citep*[p.\ 51]{Magnus/Oberhettinger/Soni/1966}
\begin{align}
\label{QT_2F1_32}
& {}_{2} F_{1} \left( a, b; 2b; z \right) \; = \;
\left( 1-z/2 \right)^{-a}
\notag
\\
& \qquad \times \,
{}_{2} F_{1} \left( \frac{a}{2}, \frac{a+1}{2}; b+\frac{1}{2};
\frac{z^{2}}{[2-z]^{2}} \right)
\end{align}
to the ${}_{2} F_{1}$s in \cref{FT_STF_1_C,FT_STF_1_C_Euler} yields
\cref{FT_STF_1,FT_STF_2}.
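This quadratic transformation, too, is easily confirmed numerically (mpmath sketch, names mine):

```python
from mpmath import mp, mpf, hyp2f1

mp.dps = 25
a, b, z = mpf('0.8'), mpf('1.1'), mpf('0.45')

# 2F1(a, b; 2b; z) versus its quadratic transformation
lhs = hyp2f1(a, b, 2*b, z)
rhs = ((1 - z/2)**(-a)
       * hyp2f1(a/2, (a + 1)/2, b + mpf('0.5'), z**2 / (2 - z)**2))
deviation = abs(lhs - rhs)
```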
By suitably combining linear and quadratic transformations, many explicit
expressions for the Fourier transform of a Slater-type function can be
derived. However, this is not yet the end of the story. Those Gaussian
hypergeometric functions ${}_{2} F_{1}$, for which a quadratic
transformation exists, can also be expressed in terms of Legendre
functions \citep*[pp.\ 51 - 54]{Magnus/Oberhettinger/Soni/1966}. Since,
however, Legendre functions can be viewed as nothing but special
hypergeometric series ${}_{2} F_{1}$ \citep*[\S 14.3 Definitions and
Hypergeometric Representations]{Olver/Lozier/Boisvert/Clark/2010}, I will
refrain from considering Legendre function representations
explicitly. This would only lead to a repetition of known hypergeometric
expressions in disguise. Let me just mention that \citet*[Eq.\
(6.621.1)]{Gradshteyn/Rhyzhik/2000} expressed the Hankel-type integral in
\cref{Int_ExpBesJ} also in terms of Legendre functions.
My incomplete list of representations of the Fourier transform of a
Slater-type function should suffice to convince even a skeptical reader
that the highly developed transformation theory of the Gaussian
hypergeometric function ${}_{2} F_{1}$ is extremely useful in this
context. It allows the derivation of a large variety of different
representations, which are all analytic continuations of the basic
expressions \eqref{FT_STF_1} and \eqref{FT_STF_1_C}.
The derivation and classification of the various expressions for the
Fourier transforms of Slater-type functions is certainly an achievement
in its own right. Nevertheless, one should not forget that in the context
of the Fourier transform of a bound-state hydrogen eigenfunction or of
other functions based on the generalized Laguerre polynomials, these
Slater results are essentially irrelevant. The formulas presented in this
Section confirm once more what I had already emphasized in \citep*[p.\
29]{Weniger/2012}: although extremely simple in the coordinate
representation, Slater-type functions are comparatively complicated
objects in momentum space. Their Fourier transforms have the same level of
complexity as the Fourier transforms of bound-state hydrogen
eigenfunctions (see \cite[Section IV]{Weniger/1985}).
Therefore, it cannot be a good idea to express the Fourier transform of a
bound-state hydrogen eigenfunction as a linear combination of Fourier
transforms of Slater-type functions. Because of strictly alternating
signs, \cref{SturmFunUN->STF} as well as all formulas derived from it
become numerically unstable for large quantum numbers $n$. In addition,
these linear combinations of the Fourier transforms of Slater-type
functions are for large $n$ hopelessly inefficient compared to the
classic result \eqref{FT_HydEigFun} derived by \citet*[Eq.\
(28)]{Podolsky/Pauling/1929}. To the best of my knowledge, nobody has
ever been able to construct \cref{FT_HydEigFun} from a linear combination
of Fourier transforms of Slater-type functions.
If we evaluate the Fourier transform of a bound-state hydrogen
eigenfunction or of related functions via linear combinations of the
Fourier transforms of Slater-type functions, we have to deal with
extensive intrinsic cancellations. I learned the hard way from my work on
convergence acceleration and the summation of divergent series (see for
example \citep*{Weniger/1989,Brezinski/RedivoZaglia/Weniger/2010a} or
\citep*[\S 3.9(v) Levin's and Weniger's
Transformations]{Olver/Lozier/Boisvert/Clark/2010}) that expansions,
which are plagued by substantial intrinsic cancellations, can easily
become numerically problematic. It is always desirable to use for
computational purposes only those expressions whose cancellations have
been carried out analytically.
\typeout{==> Section: Expansion in Terms of Reduced Bessel Functions}
\section{Expansion in Terms of Reduced Bessel Functions}
\label{Sec:ExpRBF}
A single power $z^{n}$ is obviously simpler than a generalized Laguerre
polynomial $L_{n}^{(\alpha)} (z)$. Therefore, it is tempting to believe
that powers produce simpler Hankel-type integrals than corresponding
generalized Laguerre polynomials. However, simplicity is a very elusive
concept, and the results in \cref{Sec:WorkYuekcueYuekcue} show that this
seemingly obvious assumption is not true.
If we want to evaluate the Fourier transforms of bound-state hydrogen
eigenfunctions or of related functions by expanding the generalized
Laguerre polynomials, we must find alternative expansion functions that
have more convenient properties than powers. The so-called reduced Bessel
functions and their anisotropic generalization, the so-called $B$
functions, provide the desired expansions. Based on previous work by
\citet[Eq.\ (55) on p.\ 15]{Shavitt/1963}, $B$ functions were defined in
\citeyear{Filter/Steinborn/1978b} by \citet*[Eq.\
(2.14)]{Filter/Steinborn/1978b} as follows:
\begin{equation}
\label{Def:B_Fun}
B_{n,\ell}^{m} (\beta, \bm{r}) \; = \;
\frac {\hat{k}_{n-1/2} (\beta r)} {2^{n+\ell} (n+\ell)!} \,
\mathcal{Y}_{\ell}^{m} (\beta \bm{r}) \, .
\end{equation}
Here, $\beta > 0$, $n \in \mathbb{Z}$, and $\hat{k}_{n-1/2}$ is a reduced
Bessel function. If $K_{\nu} (z)$ is a modified Bessel function of the
second kind \citep*[Eq.\ (10.27.4)]{Olver/Lozier/Boisvert/Clark/2010},
the reduced Bessel function is defined as follows \cite[Eqs.\ (3.1) and
(3.2)]{Steinborn/Filter/1975c}:
\begin{equation}
\label{Def:RBF}
\hat{k}_{\nu} (z) \; = \; (2/\pi)^{1/2} \, z^{\nu} \, K_{\nu} (z) \, ,
\qquad \nu, z \in \mathbb{C} \, .
\end{equation}
If the order $\nu$ is half-integral, $\nu = n + 1/2$ with
$n \in \mathbb{N}_0$, the reduced Bessel function can be expressed as an
exponential multiplied by a terminating confluent hypergeometric series
${}_1 F_1$ (see for example \cite[Eq.\ (3.7)]{Weniger/Steinborn/1983b}):
\begin{equation}
\label{RBF_HalfInt}
\hat{k}_{n+1/2} (z) \; = \; 2^n \, (1/2)_n \,
\mathrm{e}^{-z} \, {}_1 F_1 (-n; -2n; 2z) \, .
\end{equation}
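As a quick numerical sanity check (my own sketch, not part of the original text, assuming the mpmath library), the closed form \eqref{RBF_HalfInt} can be compared with the defining relation \eqref{Def:RBF}; the terminating ${}_1 F_1$ is summed explicitly:

```python
# Numerical check (editor's sketch, not from the paper): compare the closed
# form k_{n+1/2}(z) = 2^n (1/2)_n e^{-z} 1F1(-n; -2n; 2z), summed as a
# terminating series, with the definition k_nu(z) = (2/pi)^(1/2) z^nu K_nu(z).
from mpmath import mp, besselk, rf, exp, sqrt, pi, mpf, fsum

mp.dps = 30

def rbf_def(nu, z):
    """Reduced Bessel function via the modified Bessel function K_nu."""
    return sqrt(2/pi) * z**nu * besselk(nu, z)

def rbf_halfint(n, z):
    """Closed form for half-integral order n + 1/2 (terminating 1F1 sum)."""
    s = fsum(rf(-n, j) / rf(-2*n, j) * (2*z)**j / mp.factorial(j)
             for j in range(n + 1))
    return 2**n * rf(mpf(1)/2, n) * exp(-z) * s

for n in range(6):
    for z in (mpf('0.3'), mpf('1.7'), mpf('4.0')):
        assert abs(rbf_def(n + mpf(1)/2, z) - rbf_halfint(n, z)) < mpf('1e-25')
```

For $n = 0$ the closed form reduces to $\hat{k}_{1/2} (z) = \mathrm{e}^{-z}$.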
A condensed review of the history of $B$ functions including numerous
references can be found in \cite{Weniger/2009a}. Reduced Bessel and $B$
functions had been the topic of my Diploma thesis \citep{Weniger/1977} and my
PhD thesis \citep{Weniger/1982}.
\Crefrange{Def:B_Fun}{RBF_HalfInt} indicate that $B$ functions are fairly
complicated mathematical objects. Therefore, it is not at all obvious why
$B$ functions should offer any advantages. However, the Hankel-type
integral \citep[Eq.\ (2) on p.\ 410]{Watson/1922}
\begin{align}
\label{B_Fun}
& \int_{0}^{\infty} \, K_{\mu} (\alpha t) \, J_{\nu} (\beta t) \,
t^{\mu+\nu+1} \, \mathrm{d} t
\notag
\\
& \qquad \; = \; \Gamma (\mu+\nu+1) \,
\frac{2^{\mu+\nu} \, \alpha^{\mu} \, \beta^{\nu}}
{[\alpha^{2} + \beta^{2}]^{\mu+\nu+1}} \, ,
\notag
\\
& \qquad \qquad \Re (\mu+\nu) > \vert \Re (\mu) \vert \, , \quad
\Re (\alpha) > \vert \Re (\beta) \vert \, ,
\end{align}
implies that a $B$ function possesses a Fourier transform of exceptional
simplicity:
\begin{align}
\label{FT_B_Fun}
& \overline{B_{n,\ell}^{m}} (\beta, \bm{p}) \; = \; (2\pi)^{-3/2} \,
\int \, \mathrm{e}^{- \mathrm{i} \bm{p} \cdot \bm{r}} \,
B_{n,\ell}^{m} (\beta, \bm{r}) \, \mathrm{d}^3 \bm{r}
\notag
\\
& \qquad \; = \;
(2/\pi)^{1/2} \, \frac{\beta^{2n+\ell-1}}{[\beta^{2} +
p^{2}]^{n+\ell+1}} \, \mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \, .
\end{align}
This is the most consequential and also the most often cited result of my
PhD thesis \cite[Eq.\ (7.1-6) on p.\ 160]{Weniger/1982}. Later, the
Fourier transform \eqref{FT_B_Fun} was published in \citep*[Eq.\
(3.7)]{Weniger/Steinborn/1983a}. Independently and almost simultaneously,
\cref{FT_B_Fun} was also derived by \citet[Eqs.\ (57) -
(58)]{Niukkanen/1984c}.
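The Hankel-type integral \eqref{B_Fun} underlying this transform can be spot-checked by direct quadrature; the following sketch is my own illustration (assuming mpmath), not part of the original derivation:

```python
# Editor's spot check (not from the paper) of Watson's Hankel-type integral
# for K_mu and J_nu against its closed form.
from mpmath import mp, besselk, besselj, gamma, quad, mpf

mp.dps = 20

def lhs(mu, nu, a, b):
    """Direct quadrature of the Hankel-type integral."""
    return quad(lambda t: besselk(mu, a*t) * besselj(nu, b*t)
                * t**(mu + nu + 1), [0, mp.inf])

def rhs(mu, nu, a, b):
    """Closed form Gamma(mu+nu+1) 2^(mu+nu) a^mu b^nu / (a^2+b^2)^(mu+nu+1)."""
    return (gamma(mu + nu + 1) * 2**(mu + nu) * a**mu * b**nu
            / (a**2 + b**2)**(mu + nu + 1))

a, b = mpf('1.3'), mpf('0.7')          # convergence needs Re(a) > |Re(b)|
for mu, nu in [(mpf(1)/2, 1), (mpf(3)/2, 1), (mpf(5)/2, 2)]:
    assert abs(lhs(mu, nu, a, b) - rhs(mu, nu, a, b)) < mpf('1e-12')
```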
It follows from \cref{RBF_HalfInt} that a $B$ function can be expressed
as a finite sum of Slater-type functions, or equivalently, that the
Fourier transform \eqref{FT_B_Fun} of a $B$ function can be expressed as
a linear combination of the Fourier transforms of Slater-type functions,
just as \citet*{Yuekcue/Yuekcue/2018} had done in the case of
bound-state hydrogen eigenfunctions (compare
\cref{Sec:WorkYuekcueYuekcue}).
\citet{Yuekcue/2015} used this seemingly simple approach of expressing a
$B$ function as a linear combination of Slater-type functions \citep[Eq.\
(21)]{Yuekcue/2015}. For the Fourier transform of a Slater-type function
-- his Eqs.\ (32), (39), and (40) -- he used the same expressions as the
ones used by \citet*[Eqs. (13) - (16)]{Yuekcue/Yuekcue/2018}. This leads
to explicit expressions \citep[Eqs.\ (41) and (42)]{Yuekcue/2015} that
are, however, much more complicated and therefore much less useful than
the remarkably compact Fourier transform \eqref{FT_B_Fun}.
We do not know for sure whether \citet*{Yuekcue/Yuekcue/2018} were aware
of the Fourier transform \eqref{FT_HydEigFun} derived by \citet*[Eq.\
(28)]{Podolsky/Pauling/1929} or of the other earlier references mentioned
in \cref{Sec:Intro}. Maybe, \citeauthor{Yuekcue/Yuekcue/2018} genuinely
believed that their results for the Fourier transform of a bound-state
hydrogen eigenfunction are actually the best possible. However,
\citet{Yuekcue/2015} not only presented his fairly complicated Eqs.\
(41) and (42) for the Fourier transform of a $B$ function, but also, as
his Eq.\ (28), the very compact expression \eqref{FT_B_Fun}. It is hard
to imagine that anyone would want to use \citeauthor{Yuekcue/2015}'s
complicated Eqs.\ (41) and (42) instead of the much simpler
\cref{FT_B_Fun}. Not all expressions that are mathematically correct
are useful and deserve to be published.
The exceptionally simple Fourier transform \eqref{FT_B_Fun} gives $B$
functions a special position among exponentially decaying functions. It
explains why other exponentially decaying functions such as
Slater-type functions with integral principal quantum numbers,
bound-state hydrogen eigenfunctions, and other functions based on generalized
Laguerre polynomials can be expressed in terms of finite linear
combinations of $B$ functions (for details, see \cite[Section
IV]{Weniger/1985} or \cite[Section 4]{Weniger/2002}).
The Fourier transform \eqref{FT_B_Fun} was extensively used by Safouhi
and co-workers for the evaluation of molecular multicenter integrals with
the help of numerical quadrature combined with extrapolation
techniques. Many references of the Safouhi group can be found in the PhD
thesis of \citet{Slevinsky/2014}.
Apart from the Fourier transform \eqref{FT_B_Fun}, the most important
expression of this Section is the expansion of a generalized Laguerre
polynomial in terms of reduced Bessel functions with half-integral
indices \citep[Eq.\ (3.3-35) on p.\ 45]{Weniger/1982}:
\begin{align}
\label{GLag_FinSum_RBF}
& \mathrm{e}^{-z} \, L_{n}^{(\alpha)} (2z) \; = \; (2n+\alpha+1)
\notag
\\
& \quad \times \,
\sum_{\nu=0}^{n} \, \frac{(-2)^{\nu} \Gamma (n+\alpha+\nu+1)}
{\nu! (n-\nu)! \Gamma (\alpha+2\nu+2)} \, \hat{k}_{\nu+1/2} (z) \, .
\end{align}
This relationship was used by \citet[Eq.\ (3.17)]{Filter/Steinborn/1980}
for the construction of addition theorems and other expansions in terms
of Lambda functions.
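To illustrate \cref{GLag_FinSum_RBF}, here is a small numerical verification of the expansion; the script is my own sketch (assuming mpmath), not part of the original text:

```python
# Editor's check (not from the paper) of the expansion of
# e^{-z} L_n^(alpha)(2z) in reduced Bessel functions k_{nu+1/2}(z).
from mpmath import mp, laguerre, gamma, exp, sqrt, pi, besselk, fsum, mpf

mp.dps = 30

def rbf(nu, z):
    """Reduced Bessel function (2/pi)^(1/2) z^nu K_nu(z)."""
    return sqrt(2/pi) * z**nu * besselk(nu, z)

def lhs(n, alpha, z):
    return exp(-z) * laguerre(n, alpha, 2*z)

def rhs(n, alpha, z):
    return (2*n + alpha + 1) * fsum(
        (-2)**v * gamma(n + alpha + v + 1)
        / (mp.factorial(v) * mp.factorial(n - v) * gamma(alpha + 2*v + 2))
        * rbf(v + mpf(1)/2, z) for v in range(n + 1))

for n in range(6):
    for alpha in (0, 1, mpf('2.5')):
        for z in (mpf('0.4'), mpf('1.9')):
            assert abs(lhs(n, alpha, z) - rhs(n, alpha, z)) < mpf('1e-25')
```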
With the help of \cref{GLag_FinSum_RBF}, it is trivially simple to
express Sturmians and Lambda functions as finite linear combinations of
$B$ functions \cite[Eqs.\ (4.19) and (4.20)]{Weniger/1985}:
\begin{widetext}
\begin{align}
\label{Sturm_Bfun}
\Psi_{n, \ell}^{m} (\beta, \bm{r}) & \; = \;
\frac{(2 \beta)^{3/2} \, 2^{\ell}}{(2\ell+1)!!} \,
\left[ \frac{2n(n+\ell)!}{(n-\ell-1)!} \right]^{1/2} \,
\sum_{\nu=0}^{n-\ell-1} \, \frac{(-n+\ell+1)_{\nu} \,
(n+\ell+1)_{\nu}}{\nu! \, (\ell+3/2)_{\nu}} \,
B_{\nu+1,\ell}^{m} (\beta, \bm{r}) \, ,
\\
\label{Lambda_Bfun}
\Lambda_{n, \ell}^{m} (\beta, \bm{r}) & \; = \;
(2 \beta)^{3/2} \, 2^{\ell} \, \frac{(2n+1)}{(2\ell+3)!!} \,
\left[ \frac{(n+\ell+1)!}{(n-\ell-1)!} \right]^{1/2} \,
\sum_{\nu=0}^{n-\ell-1} \, \frac{(-n+\ell+1)_{\nu} \,
(n+\ell+2)_{\nu}}{\nu! \, (\ell+5/2)_{\nu}} \,
B_{\nu+1,\ell}^{m} (\beta, \bm{r}) \, .
\end{align}
Now, we only need the Fourier transform \eqref{FT_B_Fun} of a $B$
function to obtain explicit expressions for the Fourier transforms of a
Sturmian or of a Lambda function. By combining
\cref{FT_B_Fun,Sturm_Bfun,Lambda_Bfun}, we obtain the following
hypergeometric representations:
\begin{align}
\label{FT_Sturm_2F1}
& \overline{\Psi_{n, \ell}^{m}} (\beta, \bm{p}) \; = \;
(2\pi)^{-3/2} \, \int \, \mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}}
\, \Psi_{n, \ell}^{m} (\beta, \bm{r}) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \quad \; = \; \frac{1}{(2\ell+1)!!} \, \left[ \frac{\beta}{\pi} \,
\frac {2n \, (n+\ell)!} {(n-\ell-1)!} \right]^{1/2} \, \left[
\frac{2\beta}{\beta^2+p^2} \right]^{\ell+2} \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \,
{}_2 F_1 \left( -n+\ell+1, n+\ell+1;
\ell+\frac{3}{2}; \frac{\beta^2}{\beta^2+p^2} \right) \, ,
\\
\label{FT_Lambda_2F1}
& \overline{\Lambda_{n, \ell}^{m}} (\beta, \bm{p}) \; = \; (2\pi)^{-3/2} \,
\int \, \mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}} \, \Lambda_{n,
\ell}^{m} (\beta, \bm{r}) \, \mathrm{d}^3 \bm{r}
\notag
\\
& \quad \; = \; \frac{(2n+1)}{(2\ell+3)!!} \, \left[ \frac{\beta \,
(n+\ell+1)!}{\pi \, (n-\ell-1)!} \right]^{1/2} \, \left[
\frac{2\beta}{\beta^2+p^2} \right]^{\ell+2} \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p}) \,
{}_2 F_1 \left( -n+\ell+1, n+\ell+2;
\ell+\frac{5}{2}; \frac{\beta^2}{\beta^2+p^2} \right) \, .
\end{align}
The terminating ${}_{2} F_{1}$ in \cref{FT_Sturm_2F1} can according to
\cref{GegPol_2F1_a} be expressed as a Gegenbauer polynomial, yielding
\cref{FT_SturmFun} \cite[Eq.\ (4.24)]{Weniger/1985}, and the terminating
${}_{2} F_{1}$ in \cref{FT_Lambda_2F1} can be expressed as a Jacobi
polynomial \citep*[p.\ 212]{Magnus/Oberhettinger/Soni/1966} via
\begin{equation}
\label{JacPol_2F1_a}
P_{n}^{(\alpha, \beta)} (x) \; = \; \binom{n+\alpha}{n} \,
{}_2 F_1 \left( -n, \alpha+\beta+n+1; \alpha+1;
\frac{1-x}{2} \right) \, ,
\end{equation}
yielding the following explicit expressions for the Fourier transforms of
a Lambda function \citep[Eq.\ (4.25)]{Weniger/1985}:
\begin{align}
\label{FT_Lambda}
\overline{\Lambda_{n, \ell}^{m}} (\beta, \bm{p}) & \; = \;
\frac{2}{(1/2)_n} \, \left\{ \frac{\beta \,
(n+\ell+1)! \, (n-\ell-1)!}{\pi} \right\}^{1/2} \, \left[
\frac{\beta}{\beta^2+p^2} \right]^{\ell+2} \,
\mathcal{Y}_{\ell}^{m} (-\mathrm{i} \bm{p}) \,
P_{n-\ell-1}^{(\ell+3/2, \ell+1/2)}
\left( \frac{p^2-\beta^2}{p^2+\beta^2} \right) \, .
\end{align}
\end{widetext}
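The Jacobi-polynomial representation \eqref{JacPol_2F1_a} is easily verified numerically; the following sketch is my own sanity check (assuming mpmath), not part of the original text:

```python
# Editor's check (not from the paper) of the relation between Jacobi
# polynomials and the terminating Gaussian hypergeometric series 2F1.
from mpmath import mp, jacobi, hyp2f1, binomial, mpf

mp.dps = 25

def jacobi_via_2f1(n, a, b, x):
    """P_n^(a,b)(x) = binom(n+a, n) 2F1(-n, a+b+n+1; a+1; (1-x)/2)."""
    return binomial(n + a, n) * hyp2f1(-n, a + b + n + 1, a + 1, (1 - x)/2)

for n in range(6):
    for a, b in [(mpf('1.5'), mpf('0.5')), (2, 3)]:
        for x in (mpf('-0.4'), mpf('0.3'), mpf('0.9')):
            assert abs(jacobi(n, a, b, x)
                       - jacobi_via_2f1(n, a, b, x)) < mpf('1e-20')
```

The parameter pair $(\alpha, \beta) = (\ell+3/2, \ell+1/2)$ used in \cref{FT_Lambda} is covered by the first test case.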
The orthogonality relationships satisfied by the Fourier transforms of
Sturmians and Lambda functions with respect to an integration over the
whole three-dimensional momentum space can be deduced directly from the
known orthogonality properties of the Gegenbauer and Jacobi polynomials
\citep[Eqs.\ (4.31) - (4.37)]{Weniger/1985}.
My approach, which is based on \cref{GLag_FinSum_RBF,FT_B_Fun}, can
also be employed in the case of other, more complicated exponentially
decaying functions. In \citep[Abstract or Eqs.\ (1) and
(2)]{Guseinov/2002c}, \citeauthor{Guseinov/2002c} introduced a large
class of complete and orthonormal functions. In terms of the polynomials
$\bigl[ L_{q}^{{p}} (x) \bigr]_{\text{BS}}$ defined in
\cref{AssLagFun_BS}, \citeauthor{Guseinov/2002c}'s functions can be
expressed as follows:
\begin{align}
\label{OrigDef_Psi_Guseinov}
& \Psi_{n \ell m}^{\alpha} (\zeta, \bm{r}) \; = \;
(-1)^{\alpha} \, \left[ \frac{(2 \zeta)^{3} (n-\ell-1)!}
{(2n)^{\alpha} (n+\ell+1-\alpha)!} \right]^{1/2} \,
\notag
\\
& \quad \times \, (2 \zeta r)^{\ell} \, \mathrm{e}^{-\zeta r} \,
\bigl[ L_{n+\ell+1-\alpha}^{2\ell+2-\alpha} \bigr]_{\text{BS}}
(2\zeta r) \, S_{\ell m} (\theta, \varphi) \, .
\end{align}
Here, $\zeta > 0$ is a scaling parameter, and
$S_{\ell m} (\theta, \varphi)$ is either a real or a complex spherical
harmonic (Guseinov did not provide an exact definition of
$S_{\ell m} (\theta, \varphi)$).
The additional parameter $\alpha$, which Guseinov calls \emph{frictional}
or \emph{self-frictional quantum number}, was originally chosen to be an
integer satisfying $\alpha = 1, 0, -1, -2, \cdots$
\citep[Abstract]{Guseinov/2002c}. In the text following \citep[Eq.\
(3)]{Guseinov/2002c}, \citeauthor{Guseinov/2002c} remarked that for
\emph{fixed} $\alpha = 1, 0, -1, -2, \cdots$ the functions
\eqref{OrigDef_Psi_Guseinov} form a \emph{complete orthonormal set}.
This statement is meaningless. Completeness is not a generally valid
property of a given function set. It only guarantees that functions
belonging to a suitable Hilbert space, which has to be specified, can be
expanded by this function set, and that the resulting expansions converge
with respect to the norm of this Hilbert space (for further details, I
recommend a book by \citet{Higgins/1977} or a review by
\citet{Klahn/1981}).
\citeauthor{Guseinov/2002c}'s original definition
\eqref{OrigDef_Psi_Guseinov} implies that his functions are orthogonal
with respect to the weight function $w (r) = [n'/(\zeta r)]^{\alpha}$
\citep[Eq.\ (4)]{Guseinov/2002c}:
\begin{align}
\label{Orig_Psi_Guseinov_Orthogon}
& \int \, \left[ \Psi_{n \ell m}^{\alpha} (\zeta, \bm{r}) \right]^{*}
\, \left( \frac{n'}{\zeta r} \right)^{\alpha} \,
\Psi_{n' \ell' m'}^{\alpha} (\zeta, \bm{r}) \, \mathrm{d}^{3} \bm{r}
\notag
\\
& \qquad \; = \;
\delta_{n n'} \, \delta_{\ell \ell'} \, \delta_{m m'} \, .
\end{align}
In the theory of classical orthogonal polynomials, which is intimately
linked to Hilbert space theory, it is common practice to introduce on the
basis of their orthogonality relationships suitable inner products
$( f \vert g )_{w} = \int_{a}^{b} w (x) [f (x)]^{*} g (x) \mathrm{d} x$
with a positive weight function $w \colon [a, b] \to
\mathbb{R}_{+}$. These weighted inner products then lead to the
corresponding weighted Hilbert spaces $\mathcal{H}_{w}$ in which the
orthogonal polynomials under consideration are complete and orthogonal.
In the case of Guseinov's orthogonality relationship
\eqref{Orig_Psi_Guseinov_Orthogon}, this approach does not work. The
weight function $w (r) = [n'/(\zeta r)]^{\alpha}$ cannot be used to
define a Hilbert space because both $\zeta$ and $n'$ are in general
undefined. Thus, instead of incorporating $\zeta$ and $n'$ into the
weight function, they should be incorporated in the normalization factor.
A further disadvantage of Guseinov's original definition
\eqref{OrigDef_Psi_Guseinov} is its use of the polynomials
$\bigl[ L_{q}^{{p}} (x) \bigr]_{\text{BS}}$ defined by
\cref{AssLagFun_BS}, which can only have integral superscripts. As an
alternative, I suggested the following definition, which uses the modern
mathematical notation for the generalized Laguerre polynomials (see for
example \citep[Eq.\ (4.16)]{Weniger/2007b} or \citep[Eq.\
(2.13)]{Weniger/2012}):
\begin{align}
\label{Def_Psi_Guseinov}
& \prescript{}{k}{\Psi}_{n, \ell}^{m} (\beta, \bm{r}) \; = \; \left[
\frac{(2\beta)^{k+3} (n-\ell-1)!}{\Gamma (n+\ell+k+2)} \right]^{1/2}
\notag
\\
& \qquad \times \, \mathrm{e}^{-\beta r} \,
L_{n-\ell-1}^{(2\ell+k+2)} (2 \beta r) \,
\mathcal{Y}_{\ell}^{m} (2 \beta \bm{r}) \, .
\end{align}
The indices satisfy $n \in \mathbb{N}$, $\ell \in \mathbb{N}_{0}$ with
$\ell \le n - 1$, and $-\ell \le m \le \ell$; the scaling parameter
satisfies $\beta > 0$.
In my original definition in \citep[Eq.\ (4.16)]{Weniger/2007b} or
\citep[Eq.\ (2.13)]{Weniger/2012}, I had assumed that $k$ is an
integer satisfying $k = -1, 0, 1, 2, \dots$, which
corresponds to the straightforward translation $-\alpha \mapsto k$ of
Guseinov's original condition $\alpha = 1, 0, -1, -2, \dots$ \citep[Eq.\
(4)]{Guseinov/2002c}. Therefore, my original definition in \citep[Eq.\
(4.16)]{Weniger/2007b} or \citep[Eq.\ (2.13)]{Weniger/2012} assumed $k$
being integral and contained $(n+\ell+k+1)!$ instead of
$\Gamma (n+\ell+k+2)$.
However, in the text following \citep[Eq.\ (4.16) on p.\
11]{Weniger/2007b} or in the text following \citep[Eq.\ (2.13) on p.\
27]{Weniger/2012}, I had emphasized that the condition
$k = -1, 0, 1, 2, \dots$ is unnecessarily restrictive and that it can be
generalized to $k \in [-1, \infty)$. My criticism of Guseinov's original
definition \eqref{OrigDef_Psi_Guseinov} was implicitly confirmed by
Guseinov himself. In his later articles
\citep{Guseinov/2012a,Guseinov/2013a}, Guseinov generalized his so-called
frictional quantum number from originally $\alpha = 1, 0, -1, -2, \cdots$
to $\alpha \in (-\infty, 3)$, which corresponds to $k \in (-3, \infty)$
in my notation. This change could not be done with Guseinov's original
definition \eqref{OrigDef_Psi_Guseinov}.
Therefore, Guseinov finally had to use the modern mathematical notation
for his functions (compare \citep[Eqs.\ (1) - (5)]{Guseinov/2012a} or
\citep[Abstract]{Guseinov/2013a}). To disguise the obvious, Guseinov used
in these formulas a terminating confluent hypergeometric series
${}_{1} F_{1}$ instead of a generalized Laguerre polynomial. Because of
\cref{GLag_1F1}, Guseinov's formulas are equivalent to my definition
\eqref{Def_Psi_Guseinov}. Characteristically, Guseinov did not
acknowledge my contributions \citep[Eq.\ (4.16)]{Weniger/2007b} or
\citep[Eq.\ (2.13)]{Weniger/2012} to his functions. Because of
\citep{Guseinov/2007a}, Guseinov cannot claim to be unaware of
\citep{Weniger/2007b}.
For fixed $k \in (-3, \infty)$, Guseinov's functions defined by
\cref{Def_Psi_Guseinov} satisfy the orthonormality relationship
\begin{align}
\label{Psi_Guseinov_OrthoNor}
& \int \, \bigl[ \prescript{}{k}{\Psi}_{n, \ell}^{m}
(\beta, \bm{r}) \bigr]^{*} \, r^{k} \,
\prescript{}{k}{\Psi}_{n', \ell'}^{m'}
(\beta, \bm{r}) \, \mathrm{d}^3 \bm{r}
\notag
\\
& \qquad \; = \;
\delta_{n n'} \, \delta_{\ell \ell'} \, \delta_{m m'} \, ,
\end{align}
which implies that they are complete and orthonormal in the weighted
Hilbert space
\begin{align}
\label{HilbertL_r^k^2}
& L_{r^k}^{2} (\mathbb{R}^3)
\notag
\\
& \qquad \; = \; \Bigl\{ f \colon \mathbb{R}^3 \to
\mathbb{C} \Bigm\vert \, \int \, r^k \, \vert f (\bm{r}) \vert^2 \,
\mathrm{d}^3 \bm{r} < \infty \Bigr\} \, .
\end{align}
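Since the angular integration is taken care of by the orthonormality of the spherical harmonics, the relationship \eqref{Psi_Guseinov_OrthoNor} can be spot-checked on the radial integral alone. The following sketch is my own illustration (assuming mpmath), not part of the original text:

```python
# Editor's radial check (not from the paper) of the orthonormality of
# Guseinov's functions with respect to the weight r^k; the angular part
# is handled by the spherical harmonics.
from mpmath import mp, quad, laguerre, gamma, exp, sqrt, mpf

mp.dps = 25

def radial(n, l, k, beta, r):
    """Radial part of Guseinov's function (solid harmonic split off)."""
    norm = sqrt((2*beta)**(k + 3) * mp.factorial(n - l - 1)
                / gamma(n + l + k + 2))
    return (norm * (2*beta*r)**l * exp(-beta*r)
            * laguerre(n - l - 1, 2*l + k + 2, 2*beta*r))

def overlap(n1, n2, l, k, beta):
    """Integral of radial(n1) radial(n2) r^k r^2 dr over [0, inf)."""
    return quad(lambda r: radial(n1, l, k, beta, r)
                * radial(n2, l, k, beta, r) * r**(k + 2), [0, mp.inf])

beta, k = mpf('1.1'), mpf('0.5')       # non-integral k is now allowed
assert abs(overlap(3, 3, 1, k, beta) - 1) < mpf('1e-15')
assert abs(overlap(2, 2, 0, k, beta) - 1) < mpf('1e-15')
assert abs(overlap(3, 4, 1, k, beta)) < mpf('1e-15')
```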
For $k=0$, the functions
$\prescript{}{k}{\Psi}_{n, \ell}^{m} (\beta, \bm{r})$ are identical to
the Lambda functions defined by \cref{Def_LambdaFun}. Thus, they are
complete and orthonormal in the Hilbert space $L^{2} (\mathbb{R}^3)$ of
square integrable functions defined by \cref{HilbertL^2}.
For $k = - 1$, Guseinov's functions yield, apart from a different
normalization, the Sturmians \eqref{Def_SturmFun}, which are complete and
orthonormal in the Sobolev space $W_{2}^{(1)} (\mathbb{R}^3)$, or
complete and orthogonal in the weighted Hilbert space
$L_{1/r}^{2} (\mathbb{R}^3)$.
Personally, I prefer Sturmians satisfying \cref{Def_SturmFun}:
$W_{2}^{(1)} (\mathbb{R}^3)$ is a proper subspace of
$L^{2} (\mathbb{R}^3)$ with some additional advantageous features
\cite{Klahn/1981}, whereas $L_{1/r}^{2} (\mathbb{R}^3)$ is not.
For $k \ne 0$, the weighted Hilbert spaces $L_{r^k}^{2} (\mathbb{R}^3)$
are genuinely different from the Hilbert space $L^{2} (\mathbb{R}^3)$ of
square integrable functions. We have neither
$L^{2} (\mathbb{R}^3) \subset L_{r^k}^{2} (\mathbb{R}^3)$ nor
$L_{r^k}^{2} (\mathbb{R}^3) \subset L^{2} (\mathbb{R}^3)$. In quantum
physics, it is tacitly assumed that bound-state wave functions are square
integrable \citep{Born/1955}. However, approximation processes converging
with respect to the norm of the weighted Hilbert space
$L_{r^k}^{2} (\mathbb{R}^3)$ with $k \ne -1, 0$ could produce functions
that are not square integrable. Obviously, this would lead to some
embarrassing conceptual and technical problems.
\begin{widetext}
With the help of \cref{GLag_FinSum_RBF}, Guseinov's functions can be
expressed as a finite sum of $B$ functions \citep[Eq.\
(2.22)]{Weniger/2012}:
\begin{align}
\label{GusFun_Bfun}
& \prescript{}{k}{\Psi}_{n, \ell}^{m} (\beta, \bm{r}) \; = \; \left\{
\frac{\beta^{k+3} \, \Gamma (n+\ell+k+2)}{2^{k+1} \, (n-\ell-1)!}
\right\}^{1/2}
\frac{(2n+k+1) \, \Gamma (1/2) \, (\ell+1)!} {\Gamma
\bigl(\ell+2+k/2\bigr) \, \Gamma \bigl(\ell+[k+5]/2\bigr)}
\notag \\
& \qquad \times \sum_{\nu=0}^{n-\ell-1} \, \frac{(-n+\ell+1)_{\nu} \,
(n+\ell+k+2)_{\nu} \, (\ell+2)_{\nu}}{\nu! \,
\bigl(\ell+2+k/2\bigr)_{\nu} \, \bigl(\ell+[k+5]/2\bigr)_{\nu}} \,
B_{\nu+1,\ell}^{m} (\beta, \bm{r}) \, .
\end{align}
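Because both sides of \cref{GusFun_Bfun} carry the same solid harmonic, the expansion can be spot-checked on the radial parts alone. The following sketch is my own numerical illustration (assuming mpmath), not part of the original text:

```python
# Editor's radial check (not from the paper) of the expansion of Guseinov's
# functions in B functions; the solid harmonic is identical on both sides.
from mpmath import mp, laguerre, gamma, exp, sqrt, pi, besselk, rf, fsum, mpf

mp.dps = 25

def rbf(nu, z):
    """Reduced Bessel function (2/pi)^(1/2) z^nu K_nu(z)."""
    return sqrt(2/pi) * z**nu * besselk(nu, z)

def B_radial(nv, l, b, r):
    """Radial part of B_{nv,l}^m(beta, r): k_{nv-1/2}(br) (br)^l / norm."""
    return (rbf(nv - mpf(1)/2, b*r) / (2**(nv + l) * mp.factorial(nv + l))
            * (b*r)**l)

def psi_radial(n, l, k, b, r):
    """Radial part of Guseinov's function, taken directly from its definition."""
    N = sqrt((2*b)**(k + 3) * mp.factorial(n - l - 1) / gamma(n + l + k + 2))
    return N * (2*b*r)**l * exp(-b*r) * laguerre(n - l - 1, 2*l + k + 2, 2*b*r)

def psi_via_B(n, l, k, b, r):
    """Radial part reconstructed from the finite B-function expansion."""
    pre = (sqrt(b**(k + 3) * gamma(n + l + k + 2)
                / (2**(k + 1) * mp.factorial(n - l - 1)))
           * (2*n + k + 1) * gamma(mpf(1)/2) * mp.factorial(l + 1)
           / (gamma(l + 2 + k/2) * gamma(l + (k + 5)/2)))
    return pre * fsum(rf(-n + l + 1, v) * rf(n + l + k + 2, v) * rf(l + 2, v)
                      / (mp.factorial(v) * rf(l + 2 + k/2, v)
                         * rf(l + (k + 5)/2, v))
                      * B_radial(v + 1, l, b, r) for v in range(n - l))

b, k, r = mpf('1.2'), mpf('0.5'), mpf('0.7')
for n, l in [(2, 0), (3, 1), (5, 2)]:
    assert abs(psi_radial(n, l, k, b, r) - psi_via_B(n, l, k, b, r)) < mpf('1e-20')
```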
Now, we only need the Fourier transform \eqref{FT_B_Fun} to obtain an
explicit expression for the Fourier transform of Guseinov's function:
\begin{align}
\label{FT_GusFun}
& \overline{\prescript{}{k}{\Psi}_{n, \ell}^{m}} (\beta, \bm{p})
\; = \; (2\pi)^{-3/2} \, \int \,
\mathrm{e}^{-\mathrm{i} \bm{p} \cdot \bm{r}} \,
\prescript{}{k}{\Psi}_{n, \ell}^{m} (\beta, \bm{r}) \,
\mathrm{d}^{3} \bm{r}
\notag
\\
& \qquad \; = \; \left\{ \frac{\beta^{k+1} \, \Gamma (n+\ell+k+2)}
{\pi \, 2^k \, (n-\ell-1)!} \right\}^{1/2} \, \frac
{(2n+k+1) \, \Gamma (1/2) \, (\ell+1)!}
{2^{\ell+2} \, \Gamma \bigl(\ell+2+k/2\bigr) \,
\Gamma \bigl(\ell+[k+5]/2\bigr)} \,
\mathcal{Y}_{\ell}^{m} (- \mathrm{i} \bm{p})
\notag
\\
& \qquad \qquad \times \, \left( \frac{2\beta}{\beta^2+p^2}
\right)^{\ell+2} \, {}_3 F_2 \biggl( -n+\ell+1, n+\ell+k+2, \ell+2;
\ell+2+\frac{k}{2}, \ell+\frac{k+5}{2};
\frac{\beta^2}{\beta^2+p^2} \biggr) \, .
\end{align}
In this Fourier transform, the radial part is essentially a terminating
generalized hypergeometric series ${}_3 F_2$, which simplifies for either
$k = -1$ or $k = 0$ to yield the terminating Gaussian hypergeometric
series ${}_{2} F_{1}$ in the hypergeometric representations
\eqref{FT_Sturm_2F1} or \eqref{FT_Lambda_2F1} for the Fourier transforms
of Sturmians and Lambda functions, respectively. Thus, the Fourier
transform of Guseinov's function
$\prescript{}{k}{\Psi}_{n, \ell}^{m} (\beta, \bm{r})$ with $k \ne -1, 0$
is more complicated than the Fourier transforms of either Sturmians or
Lambda functions.
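As a final consistency check, the radial part of \cref{FT_GusFun} can be compared with a direct numerical Hankel transform of Guseinov's radial function: for $f (\bm{r}) = R (r) \, Y_{\ell}^{m} (\theta, \varphi)$, the transform's radial part is $(2/\pi)^{1/2} \int_{0}^{\infty} j_{\ell} (p r) \, R (r) \, r^{2} \, \mathrm{d} r$ up to the common phase $(-\mathrm{i})^{\ell}$, which is dropped on both sides. Note that consistency with \cref{GusFun_Bfun} and with the $k = -1, 0$ reductions requires the second numerator parameter of the ${}_{3} F_{2}$ to read $n+\ell+k+2$, which is what this sketch (my own, assuming mpmath, not from the original text) uses:

```python
# Editor's check (not from the paper) of the radial part of the Fourier
# transform of Guseinov's function against direct numerical quadrature.
from mpmath import mp, quad, laguerre, gamma, exp, sqrt, pi, besselj, hyp3f2, mpf

mp.dps = 25

def R(n, l, k, b, r):
    """Radial part of Guseinov's function (solid harmonic split off)."""
    N = sqrt((2*b)**(k + 3) * mp.factorial(n - l - 1) / gamma(n + l + k + 2))
    return N * (2*b*r)**l * exp(-b*r) * laguerre(n - l - 1, 2*l + k + 2, 2*b*r)

def ft_radial_quad(n, l, k, b, p):
    """(2/pi)^(1/2) * integral of j_l(pr) R(r) r^2 dr (spherical Bessel j_l)."""
    jl = lambda x: besselj(l + mpf(1)/2, x) * sqrt(pi/(2*x))
    return sqrt(2/pi) * quad(lambda r: jl(p*r) * R(n, l, k, b, r) * r**2,
                             [0, mp.inf])

def ft_radial_closed(n, l, k, b, p):
    """Closed form: prefactor * (2b/(b^2+p^2))^(l+2) * p^l * terminating 3F2."""
    pre = (sqrt(b**(k + 1) * gamma(n + l + k + 2)
                / (pi * 2**k * mp.factorial(n - l - 1)))
           * (2*n + k + 1) * gamma(mpf(1)/2) * mp.factorial(l + 1)
           / (2**(l + 2) * gamma(l + 2 + k/2) * gamma(l + (k + 5)/2)))
    x = b**2 / (b**2 + p**2)
    return (pre * (2*b/(b**2 + p**2))**(l + 2) * p**l
            * hyp3f2(-n + l + 1, n + l + k + 2, l + 2,
                     l + 2 + k/2, l + (k + 5)/2, x))

b, p, k = mpf('1.0'), mpf('0.9'), mpf('0.5')
for n, l in [(1, 0), (3, 1), (4, 2)]:
    assert abs(ft_radial_quad(n, l, k, b, p)
               - ft_radial_closed(n, l, k, b, p)) < mpf('1e-12')
```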
\end{widetext}
\typeout{==> Section: Summary and Conclusions}
\section{Summary and Conclusions}
\label{Sec:SummaryAndConclusions}
The Fourier transform \eqref{FT_HydEigFun} for a bound-state hydrogen
eigenfunction \eqref{Def_HydEigFun} is a classic result of quantum
physics already derived in \citeyear{Podolsky/Pauling/1929} by
\citet*[Eq.\ (28)]{Podolsky/Pauling/1929} with the help of the generating
function \eqref{GLag_GenFun}. I am not aware of a more compact and more
useful expression for this Fourier transform.
\citet*{Yuekcue/Yuekcue/2018}, who were apparently unaware not only of
\citeauthor{Podolsky/Pauling/1929}, but also of the other references
listed in \cref{Sec:Intro}, proceeded differently. As discussed in
\cref{Sec:WorkYuekcueYuekcue}, \citet*{Yuekcue/Yuekcue/2018} expressed a
generalized Laguerre polynomial as a finite sum of powers according to
\cref{GLag->PowZ}, or equivalently, they expressed a bound-state hydrogen
eigenfunction as a finite sum of Slater-type functions. Since Fourier
transformation is a linear operation, this leads to an expression of the
Fourier transform of a bound-state hydrogen eigenfunction as a
finite sum of Fourier transforms of Slater-type functions, for which many
explicit expressions are known in the literature (compare
\cref{Sec:WorkYuekcueYuekcue}).
At first sight, this approach, which requires no mathematical skills,
looks like a good idea. Unfortunately, the simplicity of Slater-type
functions in the coordinate representation is deceptive. As already
emphasized in \citep*[p.\ 29]{Weniger/2012}, the Fourier transforms of
bound-state hydrogen eigenfunctions and Slater-type functions have the
same level of complexity. Consequently, it cannot be a good idea to
express the Fourier transform of a bound-state hydrogen eigenfunction as
a linear combination of Fourier transforms of Slater-type
functions. Moreover, in the case of large principal quantum numbers $n$,
these finite sums tend to become numerically unstable. This is a direct
consequence of the alternating signs in \cref{GLag->PowZ}.
In principle, it should be possible to derive the
\citeauthor{Podolsky/Pauling/1929} formula \eqref{FT_HydEigFun} from the
comparatively complicated linear combinations presented by
\citeauthor{Yuekcue/Yuekcue/2018}. However, the Fourier transforms of
Slater-type functions discussed in \cref{Sec:WorkYuekcueYuekcue} are all
fairly complicated objects. Therefore, it is very difficult or even
practically impossible to obtain the remarkably compact
\citeauthor{Podolsky/Pauling/1929} formula \eqref{FT_HydEigFun} in this
way. I am not aware of anybody who achieved this.
It is nevertheless possible to derive the
\citeauthor{Podolsky/Pauling/1929} formula \eqref{FT_HydEigFun} by
expanding generalized Laguerre polynomials, albeit in terms of some
other, less well known polynomials. This was shown in
\cref{Sec:ExpRBF}. The key relationships in \cref{Sec:ExpRBF} are the
exceptionally simple Fourier transform \eqref{FT_B_Fun} of a $B$ function
and the expansion \eqref{GLag_FinSum_RBF} of a generalized Laguerre
polynomial in terms of reduced Bessel functions \eqref{RBF_HalfInt} with
half-integral indices. With the help of \cref{FT_B_Fun,GLag_FinSum_RBF}
it is trivially simple to derive the \citeauthor{Podolsky/Pauling/1929}
formula \eqref{FT_HydEigFun}. This derivation is much simpler than the
original derivation by \citet*{Podolsky/Pauling/1929}, which is discussed
in \cref{Sec:PodolskyPauling} and which required the skillful use of the
generating function \eqref{GLag_GenFun} of the generalized Laguerre
polynomials.
\citet{Hylleraas/1928} observed already in \citeyear{Hylleraas/1928} that
bound-state hydrogen eigenfunctions \emph{without} the inclusion of the
mathematically very difficult continuum eigenfunctions are incomplete in
the Hilbert space of square integrable functions (compare
\cref{Sec:Incom_BoundStateHydrEigFun}). This is a highly consequential
fact, which \citeauthor{Yuekcue/Yuekcue/2018} were apparently not aware
of. In combination with the difficult nature of the continuum
eigenfunctions, this incompleteness greatly limits the practical
usefulness of bound-state eigenfunctions as mathematical tools. It is
certainly not a good idea to do expansions in terms of an incomplete
function set.
Therefore, attention has shifted away from hydrogen eigenfunctions to
other, related function sets also based on the generalized Laguerre
polynomials, which, however, have more convenient completeness
properties. The best known examples are the so-called Sturmians
\eqref{Def_SturmFun}, which had been introduced by \citet{Hylleraas/1928}
already in \citeyear{Hylleraas/1928} and which can be obtained from the
bound-state hydrogen eigenfunctions by the substitution
$Z/n \mapsto \beta$ according to \cref{SturmFun<->BSHEF}, and the
so-called Lambda functions \eqref{Def_LambdaFun}, which were also
introduced by \citet{Hylleraas/1929} in \citeyear{Hylleraas/1929}.
With the help of the expansion \eqref{GLag_FinSum_RBF}, it is a trivial
matter to express both Sturmians and Lambda functions as linear
combinations of $B$ functions, yielding
\cref{Sturm_Bfun,Lambda_Bfun}. Then, one only needs the Fourier transform
\eqref{FT_B_Fun} to convert these linear combinations to compact explicit
expressions for the Fourier transforms of Sturmians and Lambda functions,
respectively.
In \citeyear{Guseinov/2002c}, \citet{Guseinov/2002c} introduced a large
class of complete and orthonormal functions defined by
\cref{OrigDef_Psi_Guseinov}, which used an antiquated notation for the
Laguerre polynomials. Guseinov's functions contain an additional
parameter $\alpha$ called \emph{frictional} or \emph{self-frictional
quantum number}, which was originally assumed to be integral. Depending
on this $\alpha$, Guseinov's functions contain Sturmians and Lambda
functions as special cases.
Guseinov's original notation \eqref{OrigDef_Psi_Guseinov} does not allow
non-integral values of $\alpha$. In order to rectify this obvious
deficiency, I introduced in \citep{Weniger/2007b,Weniger/2012} the
alternative definition \eqref{Def_Psi_Guseinov} which uses the modern
mathematical notation for the generalized Laguerre polynomials. Later,
\citet{Guseinov/2012a,Guseinov/2013a} was forced to change to my notation
because he wanted to consider non-integral self-frictional quantum
numbers $\alpha$.
With the help of \cref{FT_B_Fun,GLag_FinSum_RBF}, it is again a trivial
matter to construct the Fourier transform \eqref{FT_GusFun} of a Guseinov
function in the notation of \cref{Def_Psi_Guseinov}. \Cref{FT_GusFun}
contains the Fourier transforms of Sturmians and Lambda functions as
special cases, but is more complicated.